Cloud Insights

Prevent your data lake from turning into a data swamp

Build a lightweight, efficient data lake in the cloud

The future of service providers will be driven by agile, data-driven decision-making. Service providers in the connectedness industry generate data from various sources every day. Hence, integrating and storing their massive, heterogeneous, and siloed volumes of data in centralized storage is a key imperative.

Every service provider demands a high-quality data storage and analytics solution that offers more flexibility and agility than traditional systems. A serverless data lake is a popular way of storing and analyzing data in a single repository. It features huge storage, autonomous maintenance, and architectural flexibility for diverse kinds of data.

Storing data of all types and varieties in central storage may be convenient, but it can create additional issues. According to Gartner, “80% of data lakes do not include effective metadata management capabilities, which makes them inefficient.” Service providers’ data lakes are not living up to expectations for reasons such as the data lake turning into a data swamp, lack of business impact, and complexities in data pipeline replication.
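To make the metadata-management point concrete, here is a minimal, hypothetical sketch of the kind of enforcement that keeps a lake from becoming a swamp: every dataset landing in storage must be registered with an owner and a schema, and anything unregistered is flagged. The class and field names are invented for illustration; a production lake would use a catalog service such as AWS Glue or Apache Atlas.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetMetadata:
    name: str
    owner: str          # accountable team, mandatory
    schema: dict        # column name -> type, mandatory
    tags: list = field(default_factory=list)

class MetadataCatalog:
    """Toy catalog: every dataset landing in the lake must be registered."""
    def __init__(self):
        self._entries = {}

    def register(self, meta: DatasetMetadata):
        # Refuse unowned or schema-less data at the door.
        if not meta.owner or not meta.schema:
            raise ValueError(f"{meta.name}: owner and schema are mandatory")
        self._entries[meta.name] = meta

    def swamp_risks(self, datasets_in_lake):
        """Datasets present in storage but missing from the catalog."""
        return [d for d in datasets_in_lake if d not in self._entries]
```

The design choice is simply that registration is a gate, not an afterthought: data without an owner and schema never becomes discoverable, which is the failure mode the Gartner figure describes.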

  • Manoj Kumar
  • Sriram V
  • Sathya Narayanan

Don’t let infrastructure management cloud your mind

Implement Infrastructure as Code (IaC) to reduce provisioning time by 65%

IT infrastructures are generally imagined as big rooms with huge servers and systems connected by a web of wires. Provisioning this infrastructure has always been a manual process for service providers in the connectedness industry, which leads to numerous accuracy and consistency issues. The advent of cloud computing helped address most of these issues; however, configuration consistency, manual scalability, and cost issues persisted. Moreover, deploying complex infrastructure solutions requires considerable effort from cloud architects, and these efforts are neither easy to repeat nor easy to modify in one go.

To overcome these challenges, service providers can implement a DevOps Infrastructure as Code (IaC) methodology, which helps automate the manual, error-prone provisioning tasks. It allows service providers to define the final-state infrastructure, application configurations, and scaling policies in a codified way. This, in turn, significantly reduces the dependency on cloud architects and the provisioning time.

Infrastructure as Code (IaC) helps the service providers to define the cloud infrastructure, application configurations, and scaling policies in a codified way.
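The codified, declarative style described above can be sketched in a tool-agnostic way: the desired final state is declared as data, and an idempotent reconcile step computes only the actions needed to converge. This is a toy stand-in for tools such as Terraform or CloudFormation; the resource names and specs are invented for illustration.

```python
# Desired final state declared as data -- the essence of IaC.
desired = {
    "web-server": {"type": "vm", "size": "medium", "count": 3},
    "app-db":     {"type": "managed-db", "size": "large", "count": 1},
}

def reconcile(current: dict, desired: dict) -> list:
    """Return the actions needed to converge current state to desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name, spec))   # missing resource
        elif current[name] != spec:
            actions.append(("update", name, spec))   # drifted resource
    for name in current:
        if name not in desired:
            actions.append(("delete", name, current[name]))  # orphaned resource
    return actions
```

Because the same declaration always reconciles to the same end state, runs are repeatable and modifications are a code change rather than a fresh manual deployment, which is precisely what manual provisioning lacked.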

  • Deepak Jayagopal
  • Kanapathi Raja
  • Sathya Narayanan

Breaking the barrier between Machine Learning (ML) prototype and production

Leverage MLOps to scale and realize the ML use cases faster

Most businesses in the connectedness industry have started embracing Machine Learning (ML) technology to provide effective customer service. However, managing these ML projects and putting them into action is challenging. For service providers who strive to move beyond ideation and embed ML into their business processes, Machine Learning Operations (MLOps) will be a game-changer. According to Gartner, “Launching ML pilots is deceptively easy but deploying them into production is notoriously challenging.” Listed below are a few challenges that make it hard to scale ML initiatives.

  • Lack of an automated mechanism to address change requests in the ML pipeline
  • Inefficient ways of retraining and deploying ML models to accommodate data changes
  • Lack of in-depth visibility into the model’s performance

To overcome these challenges, service providers need to implement the MLOps approach, which automates and monitors the entire machine learning life cycle. It enables consistent improvement over the baseline accuracy and accelerates the time to production for ML models.

Launching ML pilots is deceptively easy but deploying them into production is notoriously challenging.

The successful implementation of the MLOps approach requires the right set of enablers such as de-coupled architecture, standard change management process, automated retraining and deployment of ML models, and continuous monitoring.
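As an illustration of two of these enablers, the sketch below shows a hypothetical monitoring check that flags retraining when rolling accuracy drifts below the recorded baseline, and a toy registry that promotes a retrained model only if it beats the current champion. The function names and thresholds are assumptions, not a prescribed implementation; production setups would use a platform such as MLflow or Kubeflow.

```python
def should_retrain(recent_accuracy: list, baseline: float, tolerance: float = 0.05) -> bool:
    """Continuous monitoring: flag retraining when the rolling accuracy
    drops more than `tolerance` below the recorded baseline."""
    if not recent_accuracy:
        return False
    rolling = sum(recent_accuracy) / len(recent_accuracy)
    return rolling < baseline - tolerance

class ModelRegistry:
    """Minimal registry: deploy a retrained model only if it beats the champion."""
    def __init__(self):
        self.champion = None  # (version, accuracy)

    def promote(self, version: str, accuracy: float) -> bool:
        if self.champion is None or accuracy > self.champion[1]:
            self.champion = (version, accuracy)
            return True   # new champion deployed
        return False      # challenger rejected, champion stays live
```

Tying the retraining trigger to a champion/challenger gate is what turns retraining from an ad-hoc task into the automated, monitored loop the MLOps approach calls for.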


To treat, or not to treat: Increase marketing ROI with targeted campaigns through uplift modelling

While running direct marketing campaigns, businesses must map the right customers to a given promotional offer to maximize the campaign’s effect. For example, which customers should receive a subscription discount to minimize the business’s overall churn rate?

Different methods can be used to identify the right set of target customers for campaigns, such as manual spreadsheet-based statistical modelling and outcome modelling. These methods, however, have limitations such as:

  • Randomized and inaccurate list of target customers
  • Lack of granular details such as which customers are most likely to respond to marketing campaigns
  • Low marketing ROI due to poor response rate from customers

Machine Learning (ML)-based uplift modelling is a promising approach to overcome the above limitations. It allows businesses to distinguish customers who are likely to respond positively to a campaign from those who would remain neutral or even react negatively.


An uplift model increases marketing ROI by determining the right target customers.

A well-executed uplift model improves a business’s marketing efficiency and helps drive higher incremental revenue. The successful implementation of the model requires the right set of enablers, such as raw data acquisition, feature engineering, and AI/ML model development.
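As a simplified illustration of the core idea (not a production uplift model), the sketch below estimates uplift per customer segment as the response rate of treated customers minus that of control customers; positive uplift marks the "persuadables" worth targeting, negative uplift the customers a campaign would annoy. Real implementations would apply ML meta-learners over rich customer features; the record layout here is invented.

```python
from collections import defaultdict

def uplift_by_segment(records):
    """records: iterable of (segment, treated: bool, responded: bool).
    Uplift = P(response | treated) - P(response | control), per segment."""
    counts = defaultdict(lambda: {"t": [0, 0], "c": [0, 0]})  # [responses, total]
    for segment, treated, responded in records:
        arm = "t" if treated else "c"
        counts[segment][arm][0] += int(responded)
        counts[segment][arm][1] += 1
    uplift = {}
    for segment, arms in counts.items():
        rt = arms["t"][0] / arms["t"][1] if arms["t"][1] else 0.0
        rc = arms["c"][0] / arms["c"][1] if arms["c"][1] else 0.0
        uplift[segment] = rt - rc   # > 0: persuadable; < 0: do not treat
    return uplift
```

Ranking segments (or, with a full model, individual customers) by this score is what replaces the randomized target lists that depress campaign ROI.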


Explainable Machine Learning (ML) models demystified

Enable 5X transparency in AIOps, achieving a more reliable and accurate business outcome

Service providers in the connectedness vertical embrace Artificial Intelligence for IT Operations (AIOps) to transform their businesses, but users are hesitant to entrust their operations to a complex, opaque platform that offers no clarity or visibility into its functionality. Due to this lack of transparency, service providers are concerned about making bad decisions based on AI recommendations and about the liability of such decisions and actions.

In their quest for autonomous operations, service providers seek to be more proactive with predictive analytics, where the machines make most of the decisions and help engineers take preemptive actions. However, the engineers need to have complete visibility into the underlying logic used by the AIOps and the ability to validate if the outcome is reliable.

Figure 1: Assisted Artificial Intelligence and Machine Learning Framework

To accelerate AI/ML model development with enhanced transparency, enterprises must switch from existing auto-machine learning to assisted AI/ML framework-based solutions.

Explainable Machine Learning (ML) models aim to solve this problem by explaining the logic of AIOps solutions so that users can easily understand the outcome. The model explains the workings of the AI solution and its result in a way that users can clearly understand, rely on, and trust. Explanation in an ML model can be viewed as a means of transforming a black-box AIOps solution into a glass-box one, by precisely lifting the veil on its computation and logic.
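As a toy illustration of glass-box behaviour, the sketch below explains a linear scoring model by decomposing its prediction into exact per-feature contributions, the same intuition that explainability techniques such as SHAP generalize to complex models. The feature names and weights are invented for illustration.

```python
def explain_linear(weights: dict, bias: float, features: dict):
    """Glass-box explanation for a linear scorer: each feature's
    contribution is weight * value, so the prediction decomposes exactly."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    # Rank features by magnitude of influence on this prediction.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked
```

An engineer reviewing an AIOps alert can then see not just the score but which signals drove it, which is exactly the visibility needed to validate an outcome before taking a preemptive action.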