Course Outline

1. Module 1: Case studies of how telecom regulators have used Big Data analytics to enforce compliance:

  • TRAI (Telecom Regulatory Authority of India)
  • Telekomünikasyon Kurumu (the Turkish telecom regulator)
  • FCC (Federal Communications Commission, United States)
  • BTRC (Bangladesh Telecommunication Regulatory Authority)

2. Module 2: Reviewing millions of contracts between CSPs and their users using unstructured Big Data analytics

  • Elements of NLP (Natural Language Processing)
  • Extracting SLAs (service level agreements) from millions of contracts
  • Some of the known open-source and licensed tools for contract analysis (eBrevia, IBM Watson, Kira)
  • Automatic discovery of contracts and conflicts from unstructured data analysis
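As a rough illustration of the SLA-extraction task covered in this module, the sketch below pulls numeric SLA terms out of contract text with regular expressions. The contract snippet and patterns are invented for illustration; production tools such as those listed above rely on full NLP pipelines rather than hand-written regexes.

```python
import re

# Hypothetical contract snippet; real contracts are far longer and messier.
CONTRACT = """
The Provider guarantees a monthly uptime of 99.5%.
Refunds are issued if average download speed falls below 10 Mbps.
"""

# Illustrative patterns that capture candidate SLA figures.
SLA_PATTERNS = {
    "uptime_percent": re.compile(r"uptime of (\d+(?:\.\d+)?)%"),
    "min_speed_mbps": re.compile(r"below (\d+(?:\.\d+)?) Mbps"),
}

def extract_sla(text):
    """Return a dict of SLA metrics found in a contract's text."""
    found = {}
    for name, pattern in SLA_PATTERNS.items():
        match = pattern.search(text)
        if match:
            found[name] = float(match.group(1))
    return found

print(extract_sla(CONTRACT))  # {'uptime_percent': 99.5, 'min_speed_mbps': 10.0}
```

Run at scale, the same idea (structured fields extracted per contract) feeds the compliance mapping described in Module 3.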

3. Module 3: Extracting structured information from unstructured customer contracts and mapping it to the Quality of Service obtained from IPDR data and crowdsourced app data; metrics for compliance; automatic detection of compliance violations.

4. Module 4: Using an app-based approach to collect compliance and QoS data. The regulatory authority releases a free mobile app and distributes it among users; the app collects data on QoS, spam, etc. and reports it back in analytics-dashboard form:

  • Intelligent spam-detection engine (for SMS only) to assist subscribers in reporting
  • Crowdsourcing of data about offending messages and calls to speed up detection of unregistered telemarketers
  • Updates within the app about action taken on complaints
  • Automatic reporting of voice call quality (call drops, one-way connections) for users who have the regulatory app installed
  • Automatic reporting of data speed

5. Module 5: Processing regulatory app data for automatic alarm generation (alarms are generated and emailed/SMSed to stakeholders automatically); implementation of the dashboard and alarm service:

  • Microsoft Azure-based dashboard and SNS alarm service
  • AWS Lambda-based dashboard and alarming
  • AWS/Microsoft analytics suites to crunch the data for alarm generation
  • Alarm-generation rules
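The alarm-generation rules in this module can be thought of as thresholds applied to aggregated app metrics. Below is a minimal sketch; the rule names, metric names, and threshold values are invented for illustration and are not regulatory figures.

```python
# Illustrative threshold rules over crowd-reported QoS metrics.
# "max" fires when the metric exceeds the value; "min" when it falls below.
ALARM_RULES = [
    {"metric": "call_drop_rate", "max": 0.02, "severity": "critical"},
    {"metric": "avg_download_mbps", "min": 2.0, "severity": "warning"},
]

def evaluate_alarms(measurements, rules=ALARM_RULES):
    """Return (severity, metric, value) for every rule whose threshold is violated."""
    alarms = []
    for rule in rules:
        value = measurements.get(rule["metric"])
        if value is None:
            continue  # metric not reported in this batch
        if "max" in rule and value > rule["max"]:
            alarms.append((rule["severity"], rule["metric"], value))
        if "min" in rule and value < rule["min"]:
            alarms.append((rule["severity"], rule["metric"], value))
    return alarms

sample = {"call_drop_rate": 0.035, "avg_download_mbps": 5.1}
print(evaluate_alarms(sample))  # [('critical', 'call_drop_rate', 0.035)]
```

In a deployed system the returned alarms would be handed to a notification service (e.g. AWS SNS) for email/SMS fan-out to stakeholders.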

6. Module 6: Using IPDR data for QoS and compliance (IPDR Big Data analytics):

  • Metered billing by service and subscriber usage
  • Network capacity analysis and planning
  • Edge resource management
  • Network inventory and asset management
  • Service-level objective (SLO) monitoring for business services
  • Quality of experience (QoE) monitoring
  • Call drops
  • Service optimization and product development analytics
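To make the call-drop and QoE metrics above concrete, here is a minimal sketch that computes a per-subscriber call-drop rate from simplified IPDR-like call records. The record fields are invented for illustration; real IPDRs carry many more fields defined by the IPDR specifications.

```python
from collections import defaultdict

# Hypothetical, heavily simplified IPDR-like voice-call records.
records = [
    {"subscriber": "A", "duration_s": 120, "dropped": False},
    {"subscriber": "A", "duration_s": 30,  "dropped": True},
    {"subscriber": "B", "duration_s": 300, "dropped": False},
]

def drop_rate_by_subscriber(records):
    """Call-drop rate per subscriber: dropped calls / total calls."""
    totals = defaultdict(int)
    drops = defaultdict(int)
    for r in records:
        totals[r["subscriber"]] += 1
        drops[r["subscriber"]] += int(r["dropped"])
    return {s: drops[s] / totals[s] for s in totals}

print(drop_rate_by_subscriber(records))  # {'A': 0.5, 'B': 0.0}
```

At regulator scale the same aggregation would run as a distributed job over billions of records, but the metric definition is identical.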

7. Module 7: Customer service experience and a Big Data approach to CSP CRM:

  • Compliance with refund policies
  • Subscription fees
  • Meeting SLAs and subscription discounts
  • Automatic detection of unmet SLAs

8. Module 8: Big Data ETL for integrating different QoS data sources into a single dashboard- and alarm-based analytics view:

  • Using a PaaS cloud such as AWS Lambda or Microsoft Azure
  • Using a hybrid-cloud approach
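The core of the ETL integration step is merging per-operator metrics from heterogeneous sources into one dashboard-ready view. The sketch below shows the merge logic only; the source names, operators, and metric values are invented for illustration, and a real pipeline would read from the app-data and IPDR stores rather than literals.

```python
# Hypothetical per-operator QoS readings from two sources:
# crowdsourced app data and IPDR analytics.
app_data  = {"OperatorX": {"avg_download_mbps": 4.2}}
ipdr_data = {"OperatorX": {"call_drop_rate": 0.015},
             "OperatorY": {"call_drop_rate": 0.030}}

def merge_sources(*sources):
    """Combine metric dicts from several sources, keyed by operator."""
    merged = {}
    for source in sources:
        for operator, metrics in source.items():
            merged.setdefault(operator, {}).update(metrics)
    return merged

print(merge_sources(app_data, ipdr_data))
# {'OperatorX': {'avg_download_mbps': 4.2, 'call_drop_rate': 0.015},
#  'OperatorY': {'call_drop_rate': 0.03}}
```

On a PaaS cloud this merge would typically run as a serverless function (e.g. AWS Lambda) triggered whenever a source lands new data.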

Requirements

There are no specific requirements needed to attend this course.

Duration: 14 hours