By Richard Piasentin

What’s holding back telecom adoption of artificial intelligence?

For some time now, the telecom industry has speculated about using artificial intelligence (AI) to optimize how networks are built and operated. According to a recent TM Forum survey of communications service providers (CSPs), nearly 70% of respondents are either deploying or testing AI. This is good news for 5G, which brings huge increases in data loads as well as a whole new set of service delivery complexities. Humans are ill-equipped to assure 5G networks and guarantee consistent, high-quality services on their own; artificial intelligence is a necessity in the 5G era.

In the interim, before fully automated network operations become a reality, the industry faces several challenges around adopting artificial intelligence. "Where do we start with AI?" is the most common question among service providers. To apply AI and machine learning successfully across their networks, service providers must address data quality, staff training, and strategy development. Cutting corners on any of these could limit wide-scale deployment of artificial intelligence and machine learning, and raise serious concerns about AI bias and trust.

Artificial intelligence thrives on clean, standardized data

Many of the complexities that surround AI implementation stem from the type of data organizations use. Telecom networks generate significant amounts of data every day, and with the arrival of new services such as 5G and the internet of things (IoT), this volume will only increase.

To make matters more challenging, not all data is created equal. Service providers own and process many types of network data, including flow records, network logs, and resource metrics such as memory usage. This makes it difficult to categorize and combine these disparate sources into a training set for an AI algorithm. Providers must therefore work out how to clean and categorize this 'dirty,' inconsistent data before they unleash an AI algorithm on it.
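As a rough illustration, the sketch below uses Python and pandas to combine two hypothetical telemetry exports, flow records and per-device memory metrics, onto a common time grid after dropping obviously bad rows. The file names, column names, and 5-minute interval are illustrative assumptions, not a reference implementation.

```python
# A minimal sketch of cleaning and combining disparate network data sources
# before training a model. File names and column names are hypothetical;
# real telemetry exports differ per vendor and collector.
import pandas as pd

# Hypothetical exports: flow records and per-device resource metrics.
flows = pd.read_csv("flow_records.csv", parse_dates=["timestamp"])
metrics = pd.read_csv("device_metrics.csv", parse_dates=["timestamp"])

# Drop obviously bad rows: missing device IDs or negative byte counts.
flows = flows.dropna(subset=["device_id"])
flows = flows[flows["bytes"] >= 0]

# Align both sources onto a common 5-minute grid per device so they can
# be joined into one training table.
flows_5m = (
    flows.set_index("timestamp")
         .groupby("device_id")
         .resample("5min")["bytes"]
         .sum()
         .rename("bytes_5m")
)
metrics_5m = (
    metrics.set_index("timestamp")
           .groupby("device_id")
           .resample("5min")["mem_used_pct"]
           .mean()
)

training_table = pd.concat([flows_5m, metrics_5m], axis=1).reset_index()
print(training_table.head())
```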

Standardized data sets have been a crucial factor in the success of machine learning and AI, as they enable direct comparison of learning and inference algorithms. Unfortunately, no such standardization exists when it comes to networking. This means that service providers are left to wade through data that may be noisy, unnormalized, incomplete, proprietary and, most likely, never intended for training AI algorithms.
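To make the normalization point concrete, here is a minimal sketch, using scikit-learn's imputer and scaler, of how incomplete, unnormalized features might be filled in and rescaled before training. The feature values and column meanings are hypothetical.

```python
# A minimal sketch of normalizing noisy, incomplete telemetry before it
# reaches a learning algorithm. Features and values are hypothetical.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Toy feature matrix: [bytes_5m, mem_used_pct, latency_ms], with NaN gaps
# standing in for incomplete collection intervals.
X = np.array([
    [1.2e6, 55.0, 12.3],
    [np.nan, 61.0, 14.1],
    [9.8e5, np.nan, 11.7],
    [1.5e6, 58.0, np.nan],
])

preprocess = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # fill gaps conservatively
    ("scale", StandardScaler()),                   # put features on one scale
])

X_clean = preprocess.fit_transform(X)
print(X_clean)
```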

Clean data is a critical factor in the success of AI applications. Failure to address this challenge head-on could see operators fall victim to AI bias, whereby bad or incomplete data sets fail to give an algorithm the "full picture." This reduces the accuracy of its results and undermines the algorithm's overall success.
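One simple guard against this kind of bias is to check how well each outcome is represented before training begins. The sketch below, with hypothetical labels and an arbitrary 5% threshold, flags classes that barely appear in the training data.

```python
# A minimal sketch of a sanity check for incomplete training data: if one
# class (e.g. "congested") barely appears, the trained model will not have
# the "full picture." Labels and threshold are hypothetical.
from collections import Counter

labels = ["normal"] * 970 + ["congested"] * 30  # hypothetical label set

counts = Counter(labels)
total = sum(counts.values())
for label, n in counts.items():
    share = n / total
    print(f"{label}: {n} samples ({share:.1%})")
    if share < 0.05:
        print(f"  warning: '{label}' is under-represented; consider "
              f"rebalancing or collecting more examples")
```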

In the next post, find out how to avoid artificial intelligence bias by training your staff and asking the right questions.