Categories
Digital Customer Experience

Plotting the future of customer care through an effective Virtual Agent (VA) rollout strategy

Improve the VA’s ability to engage with customers more confidently and accurately

The Virtual Agent (VA) market is at an all-time high and is garnering more interest with each passing day. VAs are establishing themselves as the must-have solution for businesses in the connectedness industry seeking to improve customer experience, reduce call center costs, and optimize time to serve.

But are these virtual agents living up to the hype?

Gartner has placed them in the “trough of disillusionment” of its hype cycle, meaning the technology is struggling to meet expectations. When faced with complex or unknown scenarios, VAs tend to react in unexpected ways. One often comes across instances on social media where VAs are mocked for their out-of-context interactions.

The primary reason for this shortcoming is that many VAs are launched without the right implementation strategy. As a result, they don’t reach the required confidence levels and cannot capture the right customer intent.


Virtual Agents (VAs) use semantic and deep learning techniques (such as Deep Neural Networks (DNNs), natural language processing, prediction models, recommendations, and personalization) to assist people or automate tasks.

To protect your VA from such embarrassment, adopt a robust implementation strategy encompassing the top 10 considerations that help service providers keep customers engaged in VA interactions, increasing overall customer satisfaction. This strategy provides key recommendations on the focus areas that are imperative for a successful rollout. Some of these include:

  • Choosing the right use case: Group inbound calls into categories such as customer service enquiries, technical troubleshooting, and sales. Based on these categories, different use cases can be invoked. For instance, kick off the least complex rollout with self-service flows.
  • Analyzing the complexity of intents: Analyze the length of the conversation and the time taken by the agent to complete it. Further, build a hierarchy of intents and sub-intents to identify high-volume and complex intents.
  • Considering variations in intent: Analyze the scope, lifecycle, and precursors of intents to improve engagement by increasing precision or recall.
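A minimal sketch of the intent analysis above: rank sub-intents by parent-intent volume and average handle time, and surface the simplest high-volume flows as the first self-service candidates. The call log, intent names, and ranking rule are illustrative assumptions, not a prescribed method.

```python
from collections import Counter, defaultdict

# Hypothetical call log: (intent, sub-intent, agent handle time in minutes).
calls = [
    ("billing", "dispute_charge", 12),
    ("billing", "explain_bill", 4),
    ("billing", "explain_bill", 5),
    ("tech_support", "router_reset", 3),
    ("tech_support", "slow_speed", 15),
    ("sales", "new_plan", 6),
]

# Volume per top-level intent: high-volume intents are rollout candidates.
volume = Counter(intent for intent, _, _ in calls)

# Average handle time per (intent, sub-intent) approximates complexity:
# long conversations suggest intents better left to human agents.
minutes = defaultdict(list)
for intent, sub, m in calls:
    minutes[(intent, sub)].append(m)
avg_handle_time = {k: sum(v) / len(v) for k, v in minutes.items()}

# Prioritise high parent-intent volume AND low handle time: start the
# rollout with the simplest self-service flow.
candidates = sorted(avg_handle_time,
                    key=lambda k: (-volume[k[0]], avg_handle_time[k]))
print(candidates[0])  # → ('billing', 'explain_bill')
```

On this toy log, billing has the highest volume and bill explanations the shortest handle time, so that flow would be the first self-service rollout.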
Categories
Operational Excellence

Shift gears to an automated RPA code review for faster development of bots

Most service providers in the connectedness industry have started leveraging Robotic Process Automation (RPA) to streamline their business processes. Standardizing the bot development process and scaling bot velocity are the most important goals of any RPA Center of Excellence (CoE). One of the major roadblocks in this mission is the manual review of RPA code, a highly tedious task. It is not only cumbersome but also time-consuming and error-prone. Although the RPA code review process is of utmost importance to reduce post-deployment defects and costs, the manual approach is riddled with challenges and highly inefficient.

To overcome these challenges, service providers should automate the code review process. To achieve this, they can leverage a platform-agnostic RPA code reviewer bot that can review:

  • Hundreds of variables, arguments, activities and message boxes
  • The logic for exception handling, custom logging, queues and credential management
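A minimal sketch of such a reviewer bot, checking variable and argument conventions plus the presence of exception handling and logging. The naming rules and activity names below are assumptions for illustration, not tied to any specific RPA product.

```python
import re

# Illustrative, platform-agnostic review rules.
VARIABLE_RE = re.compile(r"^[a-z][A-Za-z0-9]*$")      # camelCase variables
ARGUMENT_RE = re.compile(r"^(in|out|io)_[A-Z]\w*$")   # direction-prefixed args

def review_workflow(variables, arguments, activities):
    """Return a list of review findings for one bot workflow."""
    findings = []
    for v in variables:
        if not VARIABLE_RE.match(v):
            findings.append(f"variable '{v}' violates camelCase convention")
    for a in arguments:
        if not ARGUMENT_RE.match(a):
            findings.append(f"argument '{a}' lacks an in_/out_/io_ prefix")
    if "TryCatch" not in activities:
        findings.append("no exception-handling (Try/Catch) activity found")
    if "LogMessage" not in activities:
        findings.append("no custom logging activity found")
    return findings

print(review_workflow(["customerId", "Temp1"],
                      ["in_OrderId", "orderId"],
                      ["Assign", "LogMessage"]))  # three findings
```

In practice, such a bot would parse exported workflow files and apply a much larger, configurable rule set per platform; the structure stays the same.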

Fig. Leveraging code reviewer bot to automate RPA code review process


Although the RPA code review process is of utmost importance to reduce post-deployment defects and costs, the manual approach is riddled with challenges and highly inefficient.

Categories
Product Engineering

Use AI to Bolster your Network Capacity Planning decisions

The Content Delivery Network (CDN) market is poised to explode as content consumption gains more momentum. This calls for an efficiency-focused approach towards CDN capacity planning.

As per a Cisco report, annual global IP traffic has already crossed the zettabyte (ZB) threshold. To cope with users' increased content consumption, more supply chains must be established, along with a reliable and scalable infrastructure. This puts far more pressure on Content Delivery Networks (CDNs), which form a well-established global backbone for content delivery.

For service providers, it becomes vital to take an efficiency-focused approach towards CDN capacity planning. This means satisfying the future capacity requirements without increasing the total cost of ownership.

The legacy, manual way of capacity planning uses basic statistical tools to collect data and set a static threshold on capacity requirements. Such planning typically does not analyze the network holistically and produces a final proposal with a “one rule fits all” approach. This is inefficient today, when consumer behavior changes very dynamically. Manual planning is also prone to human error, so outcomes can deviate over time, wasting substantial resources. Service providers often run out of capacity because increased data consumption and changing consumption patterns are not identified correctly during capacity planning.


To satisfy the customer demands in a timely fashion, it is necessary to have a modern capacity planning strategy.

Network planners need to confront these challenges before they impact the customer experience. Leveraging Artificial Intelligence (AI) can significantly improve network capacity planning, thereby improving the end-user experience and reducing the total cost of ownership.
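To contrast with a static threshold, a minimal sketch of trend-based planning: fit a per-site demand trend and estimate how many months of headroom remain before capacity is exhausted. The traffic figures and the simple linear model are illustrative assumptions; a production model would account for seasonality and per-site features.

```python
# Ordinary least-squares trend over evenly spaced monthly samples.
def fit_trend(series):
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return slope, y_mean - slope * x_mean

def months_until_exhaustion(series, capacity):
    """Months of headroom left at the fitted growth rate, or None if flat."""
    slope, intercept = fit_trend(series)
    if slope <= 0:
        return None  # demand flat or shrinking: no augmentation needed
    horizon = (capacity - intercept) / slope  # month where forecast = capacity
    return max(0.0, horizon - (len(series) - 1))

# Monthly peak traffic (Gbps) at one CDN edge site with 100 Gbps installed.
traffic = [40, 46, 52, 58, 64, 70]
print(months_until_exhaustion(traffic, 100))  # → 5.0 months of headroom
```

Ranking sites by remaining headroom lets planners target augmentation where it is actually needed, instead of applying one static rule everywhere.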

Categories
Product Engineering

Speed-up entertainment services rollout

Implementing an effective CI/CD setup to deliver high-value media services with agility

Online video consumption has been increasing tremendously, with a rapid change in consumer expectations toward a seamless viewing experience across digital devices. To capture this growing demand, it is critical for digital service providers (DSPs) to deliver fast-track rollouts of NextGen media services.

However, DSPs are facing major challenges in orchestrating and managing rollouts of innovative features, converged live TV and curated media services within a short span of time. This complexity further increases when DSPs need to cater to multiple geographies.

Unlike OTT players, DSPs have been constrained by extremely long development and rollout timelines for new services and offerings, primarily because of the enormous amount of vendor-specific hardware and software that does not support rapid change.

An effective continuous integration and continuous deployment (CI/CD) approach enables DSPs to achieve the same service innovation and delivery agility that OTT providers offer, helping them stay competitive. This insight talks about different enablers that will help DSPs adopt an effective CI/CD setup for faster rollout of NextGen media services. Implementing these enablers further ensures high-quality, right-first-time, consistent delivery of media services across multiple geographies and drastically cuts the product's time to market.
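One way to picture a CI/CD promotion gate for multi-geography rollouts: a build is promoted only when the shared pipeline stages pass, and only to the geographies whose conformance checks pass. The stage and region names below are illustrative assumptions, not a reference pipeline.

```python
# Shared quality stages every build must clear before any rollout.
SHARED_STAGES = ("build", "unit_tests", "integration_tests")

def promotable_regions(stage_results, regional_conformance):
    """Return the geographies this build may be rolled out to."""
    if not all(stage_results.get(s, False) for s in SHARED_STAGES):
        return []  # right-first-time: never promote a failing build
    # Per-geography conformance (e.g. regional content and playback checks).
    return [region for region, ok in regional_conformance.items() if ok]

stages = {"build": True, "unit_tests": True, "integration_tests": True}
regions = {"eu-west": True, "us-east": True, "apac": False}
print(promotable_regions(stages, regions))  # → ['eu-west', 'us-east']
```

Encoding the gate this way keeps rollouts consistent across geographies: a regression in any shared stage blocks every region, while a regional conformance failure blocks only that geography.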


Implementing CI/CD architecture accelerates media service rollouts by 60%, providing enhanced content and features to customers.

Categories
Software Intensive Networks

Predicting and preventing network problems leveraging AI

Implement a network event prediction model to improve service assurance

Today, service providers’ customers expect access to products and services, along with an enhanced customer experience, anytime and anywhere. Hence, service providers should focus on service assurance and look for ways to address common problems such as accumulated faults, traffic congestion, and reactive network event handling. Reacting to a network event after it has occurred is no longer acceptable.

With the overwhelming volume and complexity of data in the service assurance domain, AI/ML techniques bring much value. By leveraging AI/ML in service assurance, service providers can analyze tons of data from various sources, derive insights, and take real-time preventive actions. Hence, service providers should implement a network event prediction model to predict network failures and outages before they occur.
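A minimal sketch of such a prediction model: a logistic regression trained on an invented two-feature KPI dataset. The features (packet loss, CPU utilisation) and data are assumptions for illustration; a production model would draw on alarm history, link utilisation, and many more signals.

```python
import math

def train(X, y, lr=0.5, epochs=2000):
    """Stochastic gradient descent on the logistic loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))      # predicted event probability
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Features: [packet-loss %, CPU utilisation]; label 1 = an outage followed.
X = [[0.1, 0.3], [0.2, 0.4], [2.5, 0.9], [3.0, 0.95], [0.3, 0.5], [2.8, 0.85]]
y = [0, 0, 1, 1, 0, 1]
w, b = train(X, y)
print(predict(w, b, [2.7, 0.9]) > 0.5)  # high loss + high CPU → likely event
```

Scoring live KPI windows this way turns raw telemetry into an early-warning signal that assurance teams can act on before an outage materialises.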


By leveraging AI/ML in service assurance, service providers can analyze tons of data from various sources, derive insights, and take real-time preventive actions.

Categories
Media & Entertainment

Deliver uninterrupted, high-quality entertainment services

Build an effective monitoring framework to ensure high performance of microservices-based streaming services

The multi-fold increase in video content consumption, and the variety of devices like mobiles, laptops, and smart TVs used to consume this content, have made service providers move towards microservices. Most forward-thinking service providers have started adopting microservices-based architecture for Video-on-Demand (VoD) services to handle a huge number of requests with minimum response time. Further, it enables scalability and continuous deployment of complex applications, thus providing uninterrupted entertainment services. Adopting microservices-based applications helps them reap benefits such as:

  • Deliver video at scale to meet billions of customer requests each day
  • Handle the load spikes efficiently during special events (e.g. Premier League football games)
  • Ensure reliable delivery and availability of video content services
  • Implement auto-scaling algorithms to save costs by running at optimum capacity during silent hours
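The auto-scaling idea in the last bullet can be sketched as a simple sizing rule: scale the replica count to the request rate, with a small floor during silent hours. The capacities and limits below are illustrative assumptions.

```python
def desired_replicas(requests_per_sec, per_replica_capacity,
                     min_replicas=2, max_replicas=50):
    """Replicas needed to serve the load, clamped to a floor and ceiling."""
    needed = -(-requests_per_sec // per_replica_capacity)  # ceiling division
    return max(min_replicas, min(max_replicas, needed))

print(desired_replicas(9000, 500))  # → 18 (load spike during a live event)
print(desired_replicas(400, 500))   # → 2  (silent hours: scale to the floor)
```

The floor keeps the service responsive to sudden traffic, while the ceiling caps spend; real autoscalers add smoothing so replica counts do not oscillate.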


The adoption of microservices-based applications helps service providers to deliver entertainment services at scale, meeting billions of customer requests each day.

However, monitoring microservices-based applications is a highly complex task, as a single application runs on multiple hosts in a very dynamic environment and must interact with several other systems that are equally dynamic. Implementing the right toolchain is critical for effective performance monitoring of microservices-based applications.
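One building block such a toolchain provides is percentile-based alerting; a minimal sketch flags services whose 95th-percentile latency breaches a service-level objective. The service names, samples, and 200 ms SLO are illustrative assumptions.

```python
import math

def p95(samples):
    """Nearest-rank 95th percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[min(rank, len(ordered)) - 1]

def slo_breaches(latency_ms_by_service, slo_ms=200):
    return [svc for svc, samples in latency_ms_by_service.items()
            if p95(samples) > slo_ms]

metrics = {
    "playback-api": [90, 110, 120, 100, 500, 95, 105, 115, 98, 102],
    "catalog-api":  [40, 45, 50, 42, 48, 44, 46, 41, 43, 47],
}
print(slo_breaches(metrics))  # → ['playback-api']
```

Percentiles catch the tail-latency outliers that averages hide, which is exactly where streaming quality degrades first.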

Categories
Operational Excellence

Optimizing RPA implementation with increased automation potential

Many players in the connectedness industry have started embracing Robotic Process Automation (RPA) to automate tasks across various systems and streamline their business processes. However, service providers are still finding it difficult to optimize one of the most important success factors of RPA implementation – the automation potential. Incidentally, the answer to this challenge lies in the initial steps of the implementation roadmap itself.

An ideal RPA implementation roadmap consists of seven steps, from Proof of Technology (PoT) to the actual go-live of RPA. The key to increasing the automation potential of any process lies in effectively performing the first three steps – Proof of Technology, Process Assessment, and Input Standardization. This insight elaborates on specific tools and techniques to excel in these steps and increase the automation potential by 25%.
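The Process Assessment step often reduces to scoring candidate processes against weighted criteria; a minimal sketch follows. The criteria, weights, and scores are assumptions for demonstration, not a standard rubric.

```python
# Illustrative assessment criteria with weights summing to 100.
WEIGHTS = {
    "rule_based": 35,      # fully deterministic decision logic?
    "digital_input": 25,   # structured, standardized inputs?
    "high_volume": 25,     # enough transactions to justify a bot?
    "stable_process": 15,  # unlikely to change mid-development?
}

def automation_potential(scores):
    """scores maps criterion -> 0..100; returns the weighted potential in %."""
    return sum(WEIGHTS[c] * scores.get(c, 0) for c in WEIGHTS) // 100

print(automation_potential({"rule_based": 100, "digital_input": 80,
                            "high_volume": 90, "stable_process": 100}))  # → 92
```

Scoring every candidate on the same rubric makes the pipeline comparable across teams and exposes where Input Standardization (e.g. raising the "digital_input" score) would lift a process's potential.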


One of the most important success factors of RPA implementation is increasing its automation potential.

Categories
Operational Excellence

Staying ahead of security threats

Leverage Robotic Process Automation (RPA) to automate security support processes for better management of security threats

Digital security threats have become prevalent and continue to disrupt every aspect of the digital world. Increasingly sophisticated cyber criminals exploit advanced technology, leaving organizations feeling helpless as their confidential information and critical assets fall victim to malicious attacks. To combat these threats, service providers need to focus more on threat detection and mitigation capabilities. Furthermore, they need to track the related metrics, such as Mean Time to Detect (MTTD) and Mean Time to Mitigate (MTTM).
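The two metrics just named fall straight out of an incident log; a minimal sketch with illustrative timestamps:

```python
from datetime import datetime

def mean_minutes(pairs):
    """Mean gap, in minutes, over (start, end) timestamp pairs."""
    deltas = [(end - start).total_seconds() / 60 for start, end in pairs]
    return sum(deltas) / len(deltas)

incidents = [
    # (occurred, detected, mitigated) — illustrative timestamps
    (datetime(2023, 5, 1, 10, 0), datetime(2023, 5, 1, 10, 20),
     datetime(2023, 5, 1, 11, 0)),
    (datetime(2023, 5, 2, 14, 0), datetime(2023, 5, 2, 14, 10),
     datetime(2023, 5, 2, 14, 40)),
]

mttd = mean_minutes([(o, d) for o, d, _ in incidents])  # occurrence → detection
mttm = mean_minutes([(d, m) for _, d, m in incidents])  # detection → mitigation
print(mttd, mttm)  # → 15.0 35.0
```

Tracking both separately shows whether automation is shortening detection, mitigation, or both.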

Typically, a small team of technical support executives is deployed to handle customer-facing security incidents (e.g., in the broadband service offered to residential or enterprise customers). While this team of experts is capable of handling regular day-to-day activities, it is severely challenged when a sudden security threat creates a huge volume of tickets to resolve in a short span of time.

Robotic Process Automation (RPA) is a great technology for handling numerous, repetitive, and mundane processes. Service providers leverage it to automate various processes like order-to-activate, assurance, fulfillment, and billing. However, using RPA to handle high-volume, low-frequency security tickets is unconventional in the communications industry. This insight discusses effective strategies for implementing RPA in the service providers’ ecosystem.


Security teams, which typically tend to be small, are not equipped to handle the high volume of incidents when a major breakdown occurs. Use RPA to handle high-volume, low-frequency security tickets in the communications industry.

Categories
IT Agility

Minimize the backup failures in data centers

According to various analysts, 5% to 25% of backup jobs fail across various tiers of data centers. This impacts data centers heavily through revenue loss and SLA-based penalties. Further, the loss of essential data deteriorates customer experience. Hence, data centers must identify the root causes and reduce backup failures. The top reasons behind backup failures are lack of storage space, database permission issues, and linear processing of high-volume backup jobs. Data centers should leverage a targeted solution strategy to eliminate these problems and create successful backups.
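The first two failure causes can be caught before a job even starts; a minimal pre-flight sketch follows (the path and size are illustrative; the third cause — linear processing — is typically addressed by running independent jobs in parallel rather than sequentially).

```python
import shutil

def preflight(backup_target, estimated_size_bytes, has_db_permission):
    """Return the blockers for a backup job; an empty list means 'start'."""
    blockers = []
    # Cause 1: lack of storage space on the backup target.
    if shutil.disk_usage(backup_target).free < estimated_size_bytes:
        blockers.append("insufficient storage space on backup target")
    # Cause 2: the backup account lacks the required database permissions.
    if not has_db_permission:
        blockers.append("backup account lacks database permissions")
    return blockers

print(preflight(".", 10**6, has_db_permission=True))
```

Failing fast on these checks converts silent overnight job failures into actionable alerts raised before the backup window opens.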

Fig: Proven approaches to minimize the backup failure rate


Around 5–25% of backup jobs fail across various tiers of data centers

Categories
IT Agility

Simplify IT service management with consolidation

Leverage a unified ticketing system to enhance the ITSM consolidation program

Every service provider’s application landscape consists of multiple systems performing diverse functions. Managing standalone systems that are not interconnected involves enormous effort and cost. Moreover, service providers today are making more acquisitions every year. In this post-merger environment, end users struggle with a distributed IT Service Management (ITSM) environment and unfamiliar technologies, which leads to a spike in demand for support. Some of the critical challenges in a distributed ITSM environment are:

  • Multiple processes for the same service management function
  • Complex third-party integrations
  • Lack of a unified, integrated dashboard view across different systems

To overcome these challenges, process consolidation has become imperative. Businesses looking to cut costs and enhance competitiveness are embracing various strategies to consolidate ITSM systems. This insight details the unified ticketing system and critical considerations for businesses to advance their ITSM consolidation program.
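At the heart of a unified ticketing system is schema normalisation: tickets from each source system are mapped into one common shape before they reach the unified queue and dashboard. The source-system field names below are invented for illustration.

```python
# Adapters mapping two hypothetical source ITSM schemas to a unified one.
def from_system_a(ticket):
    return {"id": f"A-{ticket['ticket_no']}", "summary": ticket["short_desc"],
            "priority": ticket["prio"].upper(), "source": "system_a"}

def from_system_b(ticket):
    return {"id": f"B-{ticket['case_id']}", "summary": ticket["title"],
            "priority": ticket["severity"].upper(), "source": "system_b"}

unified = [
    from_system_a({"ticket_no": 101, "short_desc": "VPN down",
                   "prio": "high"}),
    from_system_b({"case_id": 7, "title": "Email delay",
                   "severity": "low"}),
]
print([t["id"] for t in unified])  # → ['A-101', 'B-7']
```

With every ticket in one schema, a single process, dashboard, and set of third-party integrations can serve all the consolidated systems.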

Fig. Key considerations to advance the ITSM consolidation strategy


With the growing trend of mergers and acquisitions across the Connectedness industry, the need for ITSM systems and process consolidation has become imperative.