Categories
Operational Excellence

Bridging the gap between demand and capacity

Leverage AI-powered capacity planning to modernize field services

Most service providers struggle to plan and allocate field technicians because demand rarely matches capacity. According to Gartner, "Balancing available resources against the demand for those resources is essential to successful initiative completion." Inefficient capacity planning often leads to over-staffing or under-staffing of field technicians, which in turn results in order fallouts and dissatisfied customers. The most common sources of dysfunction are:

  • Unavailability of tools to estimate capacity in real time
  • Lack of a strategy to identify the key factors that influence the capacity planning process
  • Lack of mechanisms to assign the right technician to the right service
  • No end-to-end visibility into field service capacity


According to Gartner, "Balancing available resources against the demand for those resources is essential to successful initiative completion."

To overcome these challenges and handle diverse field data, service providers in the connectedness industry should move towards intelligent capacity planning, which enables real-time mapping of dispatches and optimal usage of resources. Leveraging an AI-powered capacity planning framework helps service providers reduce resource wastage by 20% and improve the effectiveness of service response and customer satisfaction. Enterprise AI can, over time, improve the prediction of field technician work hours by considering key factors such as weather, season, and maintenance data.
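
To make the idea concrete, here is a minimal, hypothetical Python sketch: a gradient-boosted regressor trained on synthetic data to predict technician work hours from weather, season, and maintenance backlog. The feature names, data, and model choice are illustrative assumptions, not the actual framework.

```python
# Illustrative sketch: predicting field-technician work hours from
# contextual factors. All features and data here are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 1000
weather_severity = rng.uniform(0, 1, n)   # 0 = clear, 1 = severe
season = rng.integers(0, 4, n)            # quarters encoded 0-3
open_maintenance = rng.poisson(5, n)      # pending maintenance jobs

# Synthetic target: hours rise with bad weather and backlog
hours = (4 + 3 * weather_severity + 0.5 * open_maintenance
         + 0.3 * season + rng.normal(0, 0.5, n))

X = np.column_stack([weather_severity, season, open_maintenance])
X_train, X_test, y_train, y_test = train_test_split(X, hours, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
pred = model.predict(X_test)
print(f"MAE: {mean_absolute_error(y_test, pred):.2f} hours")
```

In practice the same shape of model would be retrained as new dispatch history accumulates, which is what lets the predictions improve over time.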

Fig: Leverage AI-powered capacity planning framework for real-time field tech resource management

Categories
Operational Excellence

Recipe for managing the digital workforce effectively

Build a comprehensive RPA bot governance model to reduce operational hassles, improve bot performance, and scale automation programs

Service providers are now riding the automation wave. Painful manual tasks that burdened staff for ages can now be handled easily by software bots. However, in the process of onboarding this digital workforce, most service providers have failed to establish robust, unified governance. In a survey by Forrester Consulting, 69% of respondents said they face difficulty in managing the rules that guide bot behavior, and 61% said their control and operations of RPA bots are immature.

The lack of unified governance of the digital workforce significantly impacts users such as the RPA Center of Excellence (COE), business unit owners, production support, and operations teams. These users face challenges such as managing bot licenses and application credentials, orchestrating bots across platforms, and analyzing bot performance and utilization in real time. They also lack real-time alerts on process failures and forecasts, which often leads to missed SLAs for critical deliveries.

Service providers must establish an effective RPA bot governance model by focusing on key areas. A few of them are listed below:

  • Integrated Visual Control Room – Provides a high level of collaboration and transparency while managing bots across processes and platforms, and helps find the root cause of non-functioning bots
  • Delivery Forecast & Inflow Alert Mechanism – Helps visualize key metrics in real time to meet SLAs
  • Automated Application Credential Management & Bot License Tracker – Prevents production outages by avoiding account locks and license expiry issues (a minimal tracker sketch follows this list)
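
As a minimal illustration of the license tracker idea, the following hypothetical Python sketch flags bots whose licenses or credentials need attention within a rolling horizon. The asset fields, bot names, and thresholds are assumptions for the sketch, not a specific RPA platform's API.

```python
# Hypothetical sketch of a bot license / credential expiry tracker.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class BotAsset:
    bot_id: str
    license_expiry: date
    credential_rotation_due: date

def expiring_assets(assets, horizon_days=14, today=None):
    """Return bots whose license or credential needs attention soon."""
    today = today or date.today()
    horizon = today + timedelta(days=horizon_days)
    return [a for a in assets
            if a.license_expiry <= horizon
            or a.credential_rotation_due <= horizon]

fleet = [
    BotAsset("invoice-bot", date(2022, 7, 1), date(2022, 6, 20)),
    BotAsset("order-bot", date(2023, 1, 15), date(2022, 12, 1)),
]
for bot in expiring_assets(fleet, today=date(2022, 6, 15)):
    print(f"ALERT: {bot.bot_id} needs license/credential action")
```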

Governance of the digital workforce is becoming a persistent challenge in adopting robotics and cognitive automation. A Forrester Consulting report shows that 70% of service providers struggle with bot performance and scalability issues.

Categories
Digital Customer Experience

A blueprint for digital customer experience implementation

Become a digital-first organization and deliver superior experiences to customers, faster

Service providers in the connectedness industry have invested heavily in tools and technologies to understand their customers more deeply and to gain the advantages of superior customer experience (CX). In the past, when executed well, CX investments have yielded good results: customer retention and acquisition, increased sales, and loyalty. But the world has changed. How we interact with brands has evolved, and so too has customer experience. Yet as service providers strive to form a more complete picture of customer preferences and behaviors, they continue to struggle with inherent challenges: organizational and cultural barriers, low digital maturity, lack of hyper-personalization, and increasing business complexity.

To overcome these challenges, service providers must digitalize their entire CX landscape by evolving from a multi-stack, multi-channel operator to a unified digital operator. This can be achieved by empowering people and processes with a domain-driven approach, shifting to a mono-repo, hyper-automating the development process with CI/CD, and implementing a digital experience platform framework. Eventually, digital CX implementation will enable service providers to become digital-first organizations and bring superior customer experiences to market faster.


Service providers must digitalize their entire CX landscape by evolving from a multi-stack and multi-channel operator to a unified digital operator

Key transformation focus areas to deliver a superior digital CX across multiple channels

Right now, we're on the brink of an experience renaissance. CX is not going away, but its value proposition is stalling because many of the fundamentals of CX are now commonplace and no longer enough for differentiation and growth. This renaissance is galvanizing service providers to push beyond the CX philosophy and reimagine their entire business through the lens of experience. It is therefore critical that every service provider build the digital capabilities to deliver exceptional experiences for their customers.

Categories
Operational Excellence

Creating a smart field workforce with an AI-powered video guide

Leverage video AI to improve field engineers’ efficiency, reduce site visits, and accelerate install to commission cycle time by 3X

Inefficiencies in field services are among the largest contributors to service providers' capital expenditure. One of the major causes of field service inefficiency is repeat site visits, or rework, leading to a 5X increase in repair cost and delays in order delivery.

In the case of field surveys, data shows that 40-60% of installation orders require a site survey, of which 18% require repeat surveys. Site surveys are done manually, with manual data capture and physical audits that lead to errors and incomplete data, making the process extremely time-consuming.

To overcome these challenges, service providers must leverage the power of video intelligence. Prodapt's AI-driven video intelligence framework, powered by Vyntelligence, can create a smart field workforce. The surveyor captures a video with voice-over, following a guided storyboard. The framework auto-captures the details and sends alerts for any that are missing. The survey is submitted with 100% of the required details and can serve as a point of reference for specifics or future changes. This leads to a 3X acceleration in installation time and improved customer experience.


Enable field engineers with AI-powered devices to improve 'right-first-time' field work and enhance customer experience through reduced 'time-to-resolve'

The three main components of this framework are:

  • AI-assisted video guide – Provides a structured, guided storyboard for field engineers to capture the data effortlessly (a minimal completeness-check sketch follows this list)
  • Recommendation engine – Enables guided actions for business stakeholders, giving AI-powered recommendations and real-time visibility into jobs for supervisors, auditors, and field engineers
  • Smart dashboards – Provide end-to-end visibility into jobs, driving smarter actions for management and the business as a whole
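
The "alerts for missing details" behavior reduces, at its core, to checking captured storyboard segments against a required checklist. The hypothetical Python sketch below illustrates that step; the step names are invented for illustration, not Vyntelligence's actual schema.

```python
# Hypothetical sketch: validating a guided-storyboard survey for completeness.
REQUIRED_STEPS = {"site_entrance", "equipment_rack", "cable_route", "power_supply"}

def missing_details(captured_steps):
    """Return storyboard steps the surveyor still has to capture."""
    return REQUIRED_STEPS - set(captured_steps)

survey = ["site_entrance", "equipment_rack"]
gaps = missing_details(survey)
if gaps:
    print(f"Alert surveyor: missing segments -> {sorted(gaps)}")
else:
    print("Survey complete: 100% of required details captured")
```
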
Categories
Cloud

What should an enterprise consider when adopting or rapidly expanding its multi-cloud strategy?

Cloud has been in existence since 2006, when Amazon Web Services (AWS) first announced its cloud services for enterprise customers. Two years later, Google launched App Engine, followed by Alibaba and Microsoft's Azure services. The most recent addition to the list of public cloud service providers is Oracle Cloud Infrastructure (OCI).

As per the Gartner 2021 Magic Quadrant, AWS is the market leader, followed by Microsoft Azure and Google Cloud Platform in second and third positions, respectively. As cloud technology evolves, so do customer requirements. Today, cloud adoption is one of the top priorities among C-suite executives. The Covid-19 pandemic further accelerated the need for cloud adoption, as digitalization is no longer optional for organizations but a mandate. As the pandemic nears its end, there is a surge in demand for cloud services. In this rush, enterprises often don't spend enough time on the "right" workload assessment; those that make this sudden move to the cloud may be impacted and eventually have to exit or switch to another Hyperscaler at a later stage.

As per Gartner's report, 81% of respondents said they currently work with two or more public cloud providers, which suggests that multi-cloud is the future of cloud computing. Below are the key factors an enterprise should consider when adopting or expanding a multi-cloud strategy:

  1. Regional Presence – This is one of the most common requirements when selecting a Hyperscaler. Most well-known Hyperscalers have extended their global reach to tap into new markets, meet existing customer demands, and adhere to regulatory/compliance requirements. Regional presence has a strong impact, as enterprises prefer being closer to their customers, abiding by the compliance requirements defined by their country, and offering high-performance services with low latency. When planning to onboard another Hyperscaler, enterprises must ensure that it fulfills all regulatory and compliance requirements and has a presence in the local region. Additionally, enterprises should perform a small proof of concept if switching for latency-related reasons, and evaluate the connectivity options available through the Hyperscaler or its channel partners.
  2. Best-of-Breed Services – All major Hyperscalers offer a huge portfolio of services across infrastructure, platform, data services, and AI/ML. Yet some cloud service providers enjoy market leadership for specific services. Enterprises can go with any Hyperscaler for general infrastructure; however, large enterprises that depend heavily on Microsoft technologies and tools prefer Azure, as they get to leverage the Microsoft licensing model and ease of integration, while GCP is often the vendor of choice for AI/ML and data services. When evaluating another Hyperscaler, enterprises must validate the new and different services it offers. Evaluate these services for proper functionality, limitations, resource limits, and availability in the chosen region; for a given Hyperscaler, not all services are available in all regions. Review the Hyperscaler's roadmap and ensure that the required services will be available before the switch-over.
  3. Vendor Independence – Vendor/cloud provider lock-in can be extremely detrimental, leaving you captive to non-competitive pricing. It can also impact your agility, productivity, and growth if a cloud provider fails to live up to committed SLA terms and you are prevented from switching to another provider. Opting for a multi-cloud strategy early in the cloud journey helps enterprises avoid such vendor dependence. Different models exist today, such as using generic services from one Hyperscaler and specialized services from another, or using one Hyperscaler for production workloads and another for disaster recovery. Enterprises should ensure that their applications can work across different clouds before finalizing the strategy, especially for stateful applications.
  4. Infrastructure Performance – Every Hyperscaler has built its environment on its own virtualization technology, or hypervisor. AWS uses the Xen hypervisor for its older generation and the Nitro hypervisor for the newer generation, Oracle Cloud Infrastructure uses Xen technology, and Google Cloud Platform uses KVM. In addition, their services are hosted on differing hardware stacks. Some workloads may therefore perform slightly better in one environment than another due to abstraction overhead or newer underlying hardware. Also, some Hyperscalers offer different hardware in different regions, so enterprises need to assess this based on the application they plan to deploy in a region. As a recommendation, enterprises can perform a proof of concept (PoC) by running the same application across different Hyperscalers: run the same workload in the new setup for a specific duration, monitor it closely, simulate the same use case, set up alerts, gradually increase the use-case traffic, and observe the application behavior (a minimal latency-measurement sketch follows this list). Based on the PoC results, host your applications across multiple clouds.
  5. Niche Hyperscaler Credibility – There are options beyond the major Hyperscalers that might fit an enterprise's niche needs. It is critical to validate these niche vendors' credibility during the evaluation phase. Enterprises can make use of third-party services to verify vendor credibility: industry analysts like Gartner, IDC, and Forrester regularly publish vendor-oriented reports, so look for their evaluation of the Hyperscaler in the Magic Quadrant, Forrester Wave, etc. The Hyperscaler must have a long-term strategy, plan, and roadmap.
  6. Migration Tools/Services – For an enterprise planning to onboard another Hyperscaler, it is equally important to select the right tools to migrate workloads from on-premises to the cloud or from one Hyperscaler to another. For this reason, evaluate whether the new Hyperscaler provides tools or services for workload, database, and data migration to its environment.

    For example, every Hyperscaler has a set of tools for workload migration, database migration, data migration, data transformation, etc. AWS provides Application Migration Service for workload migration, AWS Database Migration Service for databases, and AWS DataSync for data migration from on-premises to AWS. Similarly, Google Cloud Platform has tools to make data and workload migration seamless: Migrate for Compute Engine for workload migration from on-premises to GCP or from AWS/Azure to GCP, Migrate for Anthos for workload transformation from GCE to GKE or from AWS EC2/Azure VMs to GKE, and Storage Transfer Service for data. Likewise, Azure has Azure Migrate for workload migration, Azure Database Migration Service for databases, etc.

  7. Pricing, FinOps, and Cost Optimization – Service consumption charges are always a top priority for a CFO, and enterprises constantly explore options to reduce their operating expenses. They expect Hyperscalers to recommend cost-reduction options, display granular usage, and report service-wise breakdowns. Tools/platforms like CloudCheckr, CoreStack (FinOps), Flexera CMP, etc., offer recommendations and insights for cost optimization; these products apply ML-based approaches to historical usage data to recommend the next course of action. Cost optimization plays a vital role in deciding the multi-cloud strategy.
  8. Support Model, KPIs, SLAs – Some enterprises may want to add another Hyperscaler because the incumbent cannot meet the required SLAs or does not offer well-defined KPIs. These are key measurable parameters for an enterprise to discuss with its Hyperscaler before deciding; they help in evaluating cloud partners and measuring project progress and its impact on the business. Evaluate the benefits of each support model available through the Hyperscaler and go for the one that best suits the enterprise's requirements. Check the different SLAs, KPIs, monthly/quarterly reports, etc.
  9. SME & Skills Availability – To go multi-cloud, an enterprise will require guidance at every stage: identifying the right workloads, the right Hyperscaler(s), the right monitoring and management tools, the right skills, etc. For these reasons, an enterprise must have or engage an expert or a system integrator (SI) who can advise and guide the team through the multi-cloud journey. In addition, define a path for internal teams to learn new skills and get certified.
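
For the latency PoC suggested in item 4, the measurement itself can be very simple. The hypothetical Python sketch below times the same health-check request against deployments in two clouds and compares medians; the endpoint URLs are placeholders to be replaced with real deployments.

```python
# Hypothetical latency PoC: time the same request against application
# deployments in two clouds. Endpoints below are placeholders.
import statistics
import time
import urllib.request

ENDPOINTS = {
    "cloud_a": "https://app.cloud-a.example.com/health",
    "cloud_b": "https://app.cloud-b.example.com/health",
}

def measure(url, samples=20):
    """Return (median, worst) response time in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=5).read()
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings), max(timings)

for cloud, url in ENDPOINTS.items():
    median_ms, worst_ms = measure(url)
    print(f"{cloud}: median {median_ms:.1f} ms, worst {worst_ms:.1f} ms")
```

A real PoC would run this from the regions where customers actually sit and over a longer window, but the comparison logic is the same.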

As public cloud offerings and services expand, enterprises have multiple options at their disposal and can pick the most suitable Hyperscaler for their workloads. Workload mobility across clouds will become a general pattern, driven by service cost, application latency, and/or the need for additional resources. Though this may not be ideal for critical production-grade workloads/applications with regulatory and compliance requirements, it is well suited to other workloads, such as product testing, scalability testing, and code development, which account for around 30%-40% of workloads. Such workloads can use this mobility to achieve cost optimization.

Earlier, with a limited number of cloud service providers, enterprises had to worry about service outages, vendor lock-in, delays in problem resolution, vendor insolvency, etc. With the blooming Hyperscaler ecosystem, enterprises are now flooded with choices, which creates challenges in effectively managing, monitoring, securing, and optimizing costs in a multi-cloud environment. However, enterprises can use multi-cloud management solutions from vendors like IBM (Cloud Pak), Micro Focus (Hybrid Cloud Management X), Flexera (Cloud Management Platform), Scalr, and ServiceNow (ITOM Cloud Management) to ensure seamless operations.

A multi-cloud strategy also demands well-defined governance; otherwise, it may increase operating costs through unaware users or weak control mechanisms. Inefficient governance can lead to underutilized and zombie resources silently consuming money in the cloud. It is recommended to set up a central body responsible for managing cloud resources and ensuring proper governance. Creating a self-service portal with a proper workflow is a good approach to managing cost and preventing mismanagement.
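
One concrete governance control is a periodic scan for zombie resources. The following hypothetical Python sketch flags resources with negligible utilization or no owner from exported inventory data; the thresholds, fields, and sample records are assumptions for illustration.

```python
# Hypothetical governance sketch: flag underutilized ("zombie") resources
# from exported utilization data. Thresholds are illustrative.
CPU_THRESHOLD = 5.0   # flag if avg CPU % falls below this
WINDOW_DAYS = 30      # utilization averaging window

inventory = [
    {"id": "vm-001", "avg_cpu_pct": 1.2, "owner": "team-a"},
    {"id": "vm-002", "avg_cpu_pct": 47.5, "owner": "team-b"},
    {"id": "vm-003", "avg_cpu_pct": 0.3, "owner": None},  # unowned resource
]

def zombie_candidates(resources):
    """Yield resources that look idle or have no accountable owner."""
    for r in resources:
        if r["avg_cpu_pct"] < CPU_THRESHOLD or r["owner"] is None:
            yield r

for r in zombie_candidates(inventory):
    print(f"Review {r['id']}: avg CPU {r['avg_cpu_pct']}% over {WINDOW_DAYS} days")
```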

Today, we already consume "serverless" services from cloud service providers, but in the future we may see a new business model where enterprises pay for services without worrying about where exactly they are hosted. In the current product market, acquisition is a common strategy for companies to expand their customer base, add unique services to their portfolio, and/or enhance their capabilities; tomorrow, the trend may continue among the Hyperscalers too. Who knows what's next on the technology roadmap?

Categories
Operational Excellence

Combining the power of RPA and AI to keep customer experience unharmed during network outages

Leverage RPA and AI to build and implement a proactive two-way Conversational Framework to reduce OpEx, boost agent productivity and improve NPS

According to recent statistics, 30% of service providers' contact center calls are network-outage related. The inability to predict these outages on time and inform customers in advance results in contact center call spikes, customer dissatisfaction, and a low NPS. It also increases contact center OpEx and may cause reputational damage for service providers.

To overcome these challenges and improve NPS, service providers must create a central intelligent platform capable of orchestrating seamless conversation between contact centers and customers. This is established by implementing a two-way conversational framework. The steps involved are:

  • Step 1: Auto-identification of outage information
    Build a standardized process to identify relevant outages in the network monitoring systems. Integrate them with an outage monitoring dashboard so a bot can auto-extract outages and store them in a central database.
  • Step 2: Schedule notifications
    Perform automated validation and intelligent scheduling to send proactive notifications to impacted customers in a well-organized structure (a minimal sketch of steps 1 and 2 follows this list).
  • Step 3: Notify and engage with customers using a conversational AI bot
    Send proactive notifications; if the customer has additional queries, the bot can engage in a conversation using conversational AI.
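
The hypothetical Python sketch below illustrates the core of steps 1 and 2: pairing extracted outage records with impacted customers and scheduling a notification for each. The record fields, regions, and lead time are assumptions, standing in for whatever the bot extracts from the outage dashboard.

```python
# Hypothetical sketch of steps 1-2: match extracted outages to impacted
# customers and schedule proactive notifications.
from datetime import datetime, timedelta

outages = [  # stand-in for records a bot extracts from the outage dashboard
    {"id": "OUT-101", "region": "west", "eta_fix": datetime(2022, 5, 1, 18)},
]
customers = [
    {"id": "C-1", "region": "west", "channel": "sms"},
    {"id": "C-2", "region": "east", "channel": "email"},
]

def schedule_notifications(outages, customers, lead_time=timedelta(minutes=15)):
    """Pair each outage with impacted customers and a send time."""
    jobs = []
    for o in outages:
        for c in (c for c in customers if c["region"] == o["region"]):
            jobs.append({"customer": c["id"], "outage": o["id"],
                         "channel": c["channel"],
                         "send_at": datetime.now() + lead_time})
    return jobs

for job in schedule_notifications(outages, customers):
    print(job)
```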


The conversational AI bot orchestrates bi-directional communication and provides a seamless customer experience during common network outages.

Categories
Operational Excellence

Improving the efficiency of your Field Service Workforce

Leverage machine learning to eliminate blind dispatches and improve the first-time fix rate (FTFR)

Field technicians are the face of your service organization, and it is imperative to equip them with the right tools and knowledge to handle any field challenge. With efficient management and empowerment of technicians, your organization can deliver fast and effective services to customers.

A business should strike a balance between speed and accuracy in handling on-site customer requests to increase technician productivity and improve customer satisfaction. In reality, however, technicians frequently cannot resolve customer problems on time and are forced to make multiple trips to the client location due to process inefficiencies. Thus, instead of servicing new customers or deepening current customer relationships, technicians invest valuable time and resources in non-revenue-generating activities.

Today, 70% of field technicians visit sites without prior information about the nature of the problem, the issue location, or a recommended solution. This leads to repeated dispatches, longer resolution times, and high customer churn.

Going digital is the cornerstone of success for a modern services organization. Adopt the 'AI-Powered Field Service Framework' to optimize field services and increase technician productivity. The framework encompasses three vital components to achieve a higher First Time Fix Rate (FTFR) and reduce Mean Time to Resolve (MTTR):

  • Fault Location Classifier – Predicts the fault location and notifies technicians via email/SMS through the mobile app (a minimal classifier sketch follows this list)
  • Recommendation Engine – Suggests guided actions and the next-best resolution steps to improve technicians' efficiency
  • Technician Dashboard – Provides a one-stop view of all dispatches and actionable insights for technicians
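
To illustrate the fault location classifier, here is a hypothetical Python sketch that trains a random forest on synthetic line telemetry to predict where a fault sits. The features, labels, and label rule are invented for the sketch, not the framework's actual model.

```python
# Illustrative fault-location classifier on synthetic telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
n = 600
signal_loss_db = rng.uniform(0, 30, n)   # attenuation on the line
error_rate = rng.uniform(0, 1, n)        # packet/bit error rate
last_mile_age_yr = rng.uniform(0, 25, n) # age of last-mile plant

# Synthetic ground truth: 0 = customer premises, 1 = last mile, 2 = exchange
labels = np.where(signal_loss_db > 20, 2,
                  np.where(last_mile_age_yr > 15, 1, 0))

X = np.column_stack([signal_loss_db, error_rate, last_mile_age_yr])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"Fault-location accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

In production, the predicted class would feed the notification step, so the technician knows where to go before the dispatch rather than discovering it on site.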


70% of field technicians visit the sites without prior information about the problem leading to repeated dispatches, longer resolution time and high customer churn.

Categories
Digital Customer Experience Insights

Using AI to understand how your customers feel

Predict Net Promoter Scores and identify whether a customer is potentially a promoter, neutral, or detractor, then take timely corrective actions to improve customer service.

Customers today expect seamless, hassle-free interactions with their service providers; a dissatisfied, frustrated customer will quickly opt to switch. It is therefore crucial for the service provider to understand the customer experience and promptly take corrective measures where it lags. One key metric for this is the Net Promoter Score (NPS), which measures customer loyalty and satisfaction by asking customers how likely they are, on a scale of 0-10, to recommend your product or service to others.

To capture NPS, service providers share survey forms with their customers. But do customers respond to such surveys? Research shows that only 15-20% of customers respond to the NPS survey after their interactions with customer support. Does that mean the service provider should take no action for the remaining 80-85%, assuming they had a good experience? There is a high possibility that a customer dissatisfied with the service has already decided to leave without bothering to respond to the survey.

Most innovative service providers are trying to address this problem with a machine learning (ML) approach.
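
Before any prediction, it helps to be precise about the target. The Python sketch below shows the standard NPS bucketing on the 0-10 scale and the score arithmetic; an ML model like the one in the figure would learn to predict these buckets for customers who never answer the survey. The sample responses are made up.

```python
# Worked example of NPS bucketing and the score formula described above.
def bucket(score: int) -> str:
    """Standard NPS buckets on the 0-10 scale."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"   # 'neutral' in the text above
    return "detractor"

def net_promoter_score(scores):
    """NPS = % promoters - % detractors, on a -100..100 scale."""
    buckets = [bucket(s) for s in scores]
    promoters = buckets.count("promoter") / len(buckets)
    detractors = buckets.count("detractor") / len(buckets)
    return round(100 * (promoters - detractors))

survey_responses = [10, 9, 8, 6, 9, 3, 7, 10, 5, 9]
print(f"NPS: {net_promoter_score(survey_responses)}")  # 50% - 30% = 20
```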

Fig: Key steps towards building ML Model for CSAT Prediction and Improvement


NPS provides customer loyalty and satisfaction measurement by asking customers how likely they are to recommend your product or service to others

Categories
Operational Excellence

Turn your network issues into customer delight

Leverage automation strategies to streamline the Trouble to Resolve (T2R) process, providing customers with quick resolution and greater satisfaction

TM Forum, a global industry association for service providers and their suppliers in the telecommunications industry, defines the Trouble to Resolve (T2R) process in its business process framework, eTOM (Enhanced Telecom Operations Map). T2R describes how to handle a trouble (problem) reported by the customer: analyze it to identify the root cause, initiate resolution to meet customer satisfaction, monitor progress, and close the trouble ticket.

Most service providers follow the eTOM T2R process; however, they encounter key challenges that affect overall T2R operational efficiency and increase OPEX:

  • Multiple siloed systems to complete a network event's lifecycle, leading to high manual effort and increased OPEX
  • Difficulty in identifying the true impact of a network event:
    • No proper tools for auto-identification and prioritization of critical events that would cause major business impact
    • Resource wastage: the Network Operation Centre (NOC) tends to spend a significant amount of time handling huge volumes of alerts
  • Difficulty in meeting business KPIs due to the unavailability of fully integrated systems and automated processes

Service providers in the connectedness industry must develop an effective strategy for integrating these systems and bringing end-to-end automation to the T2R process flow. The majority of service providers have a basic level of automation; however, there is huge scope for complete lifecycle automation. This insight showcases an effective approach for implementing end-to-end automation of the network event lifecycle, from event creation to resolution, based on implementation experience with leading service providers across multiple geographies.
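
As one illustration of what lifecycle automation can look like at the triage step, the hypothetical Python sketch below scores network events by business impact and decides which ones should auto-create trouble tickets versus being suppressed for batch review. The severity weights, threshold, and event fields are assumptions, not any specific provider's rules.

```python
# Hypothetical T2R sketch: auto-triage network events by business impact
# and open trouble tickets only for events above a priority threshold.
SEVERITY_WEIGHT = {"critical": 3, "major": 2, "minor": 1}

def priority(event):
    """Simple impact score: severity weighted by affected customers."""
    return SEVERITY_WEIGHT[event["severity"]] * event["customers_affected"]

def triage(events, threshold=1000):
    decisions = []
    for e in sorted(events, key=priority, reverse=True):
        action = ("auto-create ticket" if priority(e) >= threshold
                  else "suppress / batch review")
        decisions.append({"event_id": e["id"], "action": action})
    return decisions

events = [
    {"id": "EV-1", "severity": "critical", "customers_affected": 800},
    {"id": "EV-2", "severity": "minor", "customers_affected": 50},
]
for decision in triage(events):
    print(decision)
```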

“According to a report by McKinsey, many service providers have complex fundamental processes with multiple system integrations and are labor-intensive and costly. Leveraging digital technologies to simplify and automate operations makes them more productive and results in a significant cost reduction of up to 33%.”

Categories
Operational Excellence

Steering data migration, powered by RPA

Leverage RPA based Automation Framework to accelerate data migration and improve accuracy

Data migration involves moving data between locations, formats, and applications, and the need is on the rise due to ongoing trends such as mergers and acquisitions (M&A), migration of applications to the cloud, and modernization of legacy applications. However, data migration execution using traditional methods has not kept pace with this increasing frequency.

According to Gartner, 50% of data migration initiatives will exceed their budget and timeline by 2022 because of flawed strategy and execution. Most service providers in the connectedness industry adopt a traditional approach to data migration that involves three broad steps: migration planning and preparation, establishing governance, and execution.

Service providers follow the fundamental extract, transform, load (ETL) methodology for data migration execution, which is full of challenges: it entails high cost and time due to mock runs and testing for each module; it involves manual effort, which leads to rework caused by errors and fallouts due to data integrity issues; and ramping teams up and down is difficult.

To overcome these challenges, an RPA-based automation framework for data migration execution can be an effective approach. The framework encompasses components such as:

  • Smart processor: Identifies data quality and integrity issues in the source data at a very early stage (a minimal validation sketch follows this list)
  • Automation bot: Performs the migration/upgrade by extracting and updating data at various layers of the application
  • Fallout management mechanism: Automates fallout handling, i.e., fixes data quality and integrity issues in CRM, inventory systems, etc.
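
To illustrate the smart processor, here is a hypothetical Python sketch that validates source records before migration and routes failing rows to a fallout queue. The field names, format rule, and sample rows are assumptions for the sketch, not a real source schema.

```python
# Hypothetical "smart processor" sketch: flag data-quality issues in source
# records before migration so fallouts are caught early.
import re

def validate(record):
    """Return a list of data-quality issues found in one source record."""
    issues = []
    if not record.get("customer_id"):
        issues.append("missing customer_id")
    if not re.fullmatch(r"[A-Z]{2}\d{6}", record.get("account_no", "")):
        issues.append("malformed account_no")
    if record.get("status") not in {"active", "suspended", "closed"}:
        issues.append("unknown status")
    return issues

source_rows = [
    {"customer_id": "C1", "account_no": "AB123456", "status": "active"},
    {"customer_id": "", "account_no": "12345", "status": "pending"},
]
clean, fallout = [], []
for row in source_rows:
    (fallout if validate(row) else clean).append(row)

print(f"{len(clean)} rows ready to migrate, {len(fallout)} routed to fallout queue")
```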

"According to Gartner, 50% of data migration initiatives will exceed their budget and timeline by 2022 because of flawed strategy and execution."