What’s influencing the way service providers develop performance assurance strategies?
In response to a variety of interrelated influences, network operators and service providers are making some significant changes to their performance assurance strategies. Here are seven of these influences (or buzzwords, if you prefer), one for each day of the week!
1. 4.5G

Unofficially—not endorsed by the 3GPP—4.5G (aka Gigabit LTE, 4GX, 4G+) is the consumer-oriented marketing term for LTE-Advanced Pro, as defined by 3GPP Release 13 (current) and the upcoming Release 14. Essentially, it’s a radio upgrade: a way to get more juice out of existing mobile spectrum, and therefore for operators to differentiate their services, without the full complexity of 5G’s multi-access edge computing and virtualized RAN and core.
But 4.5G also brings specific requirements that render traditional, reactive monitoring insufficient: measurement precision of better than 20 microseconds (μs) to verify 2 ms one-way service level agreements (SLAs), and the ability to test precisely between towers rather than hub-and-spoke at the evolved packet core (EPC).
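To make those numbers concrete, here is a minimal sketch of checking one-way delay samples against a 2 ms SLA while allowing for measurement precision. It assumes sender and reflector clocks are synchronized (e.g. via PTP); the timestamps and helper functions are hypothetical, for illustration only.

```python
# Sketch: validating one-way delay samples against a 2 ms SLA,
# with a tolerance equal to the 20 microsecond measurement precision.
# Assumes synchronized clocks at both endpoints (e.g. via PTP).

SLA_ONE_WAY_MS = 2.0   # 2 ms one-way SLA from the text
PRECISION_US = 20.0    # required measurement precision (20 microseconds)

def one_way_delay_ms(tx_ts: float, rx_ts: float) -> float:
    """One-way delay from send timestamp to receive timestamp (seconds in, ms out)."""
    return (rx_ts - tx_ts) * 1000.0

def check_sla(samples_ms):
    """Return the samples that violate the SLA, allowing for measurement precision."""
    margin_ms = PRECISION_US / 1000.0
    return [d for d in samples_ms if d > SLA_ONE_WAY_MS + margin_ms]

samples = [one_way_delay_ms(0.000000, 0.001900),   # 1.9 ms  -> within SLA
           one_way_delay_ms(0.000000, 0.002150)]   # 2.15 ms -> violation
print(check_sla(samples))
```

The point of the 20 μs figure is visible in `margin_ms`: without measurement precision an order of magnitude finer than the SLA itself, you cannot tell a genuine violation from measurement noise.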
The big show-stopper here is the need to test precisely between sites; you simply can’t do that with centralized monitoring designed for hub-and-spoke configurations. Operators on all continents are using Accedian’s SkyLIGHT virtualized performance assurance platform as a lightweight and affordable, yet very precise and scalable, instrumentation overlay to fill this visibility gap.
2. AIOps (algorithmic IT operations)
Gartner defines AIOps as multiple-layer technologies “that address data collection, storage, analytical engines and visualization” using APIs to “seamlessly interact with IT operations management (ITOM) toolsets,” and predicts that by 2019 a quarter of global enterprises will have implemented such a platform to support at least two major IT operations functions.
This is part of a larger trend: communications networking rapidly moving toward fully automated service orchestration through machine learning. Operators and enterprises are investing in monitoring and assurance systems that enable self-optimizing networks. Some, like SK Telecom and Reliance Jio, have already deployed this technology at network-wide scale.
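The core of the AIOps idea above can be sketched very simply: flag anomalous metric samples against a rolling baseline, the kind of detection a self-optimizing network would feed into its orchestration loop. The thresholds and latency values below are illustrative, not from any real deployment.

```python
# Sketch: rolling-baseline anomaly detection on a latency metric,
# the simplest building block of an AIOps "analytical engine".

from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples_ms, window=5, k=3.0):
    """Flag samples more than k standard deviations above a rolling baseline."""
    baseline = deque(maxlen=window)
    anomalies = []
    for i, s in enumerate(samples_ms):
        if len(baseline) == window:
            mu, sigma = mean(baseline), stdev(baseline)
            if s > mu + k * max(sigma, 0.1):  # floor sigma to avoid zero-variance noise
                anomalies.append((i, s))
        baseline.append(s)
    return anomalies

latency = [10.1, 10.3, 9.9, 10.2, 10.0, 10.1, 35.0, 10.2]
print(detect_anomalies(latency))  # the 35.0 ms spike is flagged
```

A production AIOps platform replaces this rolling mean with learned models, but the loop is the same: collect, baseline, detect, and hand the result to an orchestration layer.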
Accedian’s recent acquisition of Performance Vision extends our industry-leading virtualized network performance monitoring into the realm of AIOps, combining active synthetic performance monitoring for OSI layers 2-3 (SkyLIGHT VCX™) and remote packet broker (FlowBROKER™) with passive network and application performance management for layers 2-7 (SkyLIGHT PVX), all orchestrated by SkyLIGHT Director™.
3. Cloud-native

A cloud-native environment is one that uses virtualized infrastructure to provide access to shared pools of configurable resources (e.g. servers, storage, applications) provisioned with minimal management effort. The key characteristics, according to the National Institute of Standards and Technology (NIST), are on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service (pay-per-use). This infrastructure is becoming more and more common in data centers, with cloud-native apps being foundational to software-as-a-service (SaaS).
It does get complicated to deliver and assure cloud-native applications (aka virtual network functions/VNFs), because they are not all yet centralized in the data center but instead are spread across the network’s distributed ‘web’ with many endpoints. Resource pooling is therefore compromised, especially at remote points of the network edge or at aggregation points. To be successful with cloud-native, operators must integrate open APIs with each VNF for management and control. A vendor-agnostic, unified instrumentation overlay like Accedian’s SkyLIGHT is key to making this happen.
“Communication service providers (CSPs) are under mounting pressure to transform their systems and infrastructures to become more agile, able to deliver services at a push of a button,” said Accedian VP of business development and strategic partnerships, Michael Rezek, in a blog article published last November. “The answer lies in the cloud, and specifically in cloud-native environments—which, if implemented correctly, can boost network efficiency, reduce expenditure, and enhance quality of experience (QoE) and quality of service (QoS) for subscribers.”
4. Capillary networks
A capillary network is a local area network (LAN) acting as an extension of a wide-area link, in order to connect a large number of devices. Such an infrastructure uses both short-range networks (for offloading cellular data pipes and making machine-to-machine/M2M devices easier to manage) and long-range links (to improve energy efficiency and reduce interference).
Capillary networks are shaping up to be a key component of internet of things (IoT) development, as a means of efficiently connecting local wireless sensor networks to cellular networks. To put it another way, they allow devices that use short-range radios to gain much wider connectivity, backed by cellular security, management, and virtualization services. As such, the concept ties in directly with fog computing (extending cloud computing to the edge of an enterprise’s network) and the low latency demanded by 5G.
Unsurprisingly, here is another place where a precise, scalable, virtualized instrumentation overlay—capable of accurately testing between towers/local links—is vital.
5. Edge computing
Basically, edge computing just means performing data processing near the source of the data, rather than pooling it centrally. More narrowly, MEC stands for either multi-access edge computing or mobile edge computing—both of which can reasonably be defined as a cloud-based IT service environment at the (cellular) network edge. The “multi” part refers to the fact that it enables a variety of access types at the edge, including wireline.
The appeal of edge computing is its potential to run applications and perform processing tasks close to the end user, thereby reducing congestion and making applications perform better. It’s also a key component of 5G’s promise to drastically reduce latency; the limits of physics mean some kinds of applications must originate very physically close to the user.
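The "limits of physics" point lends itself to a back-of-envelope calculation: light in optical fiber travels at roughly 200,000 km/s (about two-thirds of c), so a round-trip latency budget caps how far away an application can physically be hosted. The figures below are illustrative approximations.

```python
# Sketch: how a round-trip latency budget bounds server distance,
# assuming ~200 km of fiber per millisecond of one-way propagation.

FIBER_KM_PER_MS = 200.0  # approx. speed of light in fiber, per ms, one way

def max_distance_km(rtt_budget_ms: float) -> float:
    """Maximum one-way fiber distance for a given round-trip latency budget,
    ignoring switching, queuing, and processing delays (which only shrink it)."""
    one_way_ms = rtt_budget_ms / 2.0
    return one_way_ms * FIBER_KM_PER_MS

print(max_distance_km(1.0))   # a 1 ms round-trip budget -> ~100 km
print(max_distance_km(10.0))  # a 10 ms budget -> ~1000 km
```

In other words, a single-digit-millisecond budget, before any processing time is even counted, confines the application to within about a hundred kilometers of the user, which is exactly why the compute must move to the edge.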
6. Intent-based networking
Championed by Cisco, intent-based networking refers to “lifecycle management software for networking infrastructure” as Gartner defines it. That lifecycle includes translation and validation of business policy, automated network configuration, network state awareness gleaned from real-time data, and continuous business intent validation.
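That lifecycle can be sketched as two small functions: one translating a declarative business intent into device configuration, one continuously validating observed network state against the intent. All names, policies, and thresholds here are hypothetical, purely to illustrate the translation/validation split.

```python
# Sketch of the intent-based lifecycle: translate a declarative intent
# into (hypothetical) QoS config, then validate live telemetry against it.

intent = {"service": "branch-voip", "max_latency_ms": 20, "min_bandwidth_mbps": 5}

def translate(intent):
    """Translation step: business intent -> per-device configuration."""
    return {"qos_class": "expedited" if intent["max_latency_ms"] <= 20 else "best-effort",
            "reserved_mbps": intent["min_bandwidth_mbps"]}

def validate(intent, observed):
    """Continuous validation step: does real-time telemetry still satisfy the intent?"""
    return (observed["latency_ms"] <= intent["max_latency_ms"]
            and observed["bandwidth_mbps"] >= intent["min_bandwidth_mbps"])

config = translate(intent)
print(config)
print(validate(intent, {"latency_ms": 12, "bandwidth_mbps": 8}))  # True  -> intent holds
print(validate(intent, {"latency_ms": 35, "bandwidth_mbps": 8}))  # False -> remediate
```

When validation fails, an intent-based system re-runs translation and reconfigures automatically; the operator only ever restates the intent, never the device config.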
Eventually, this scenario—where managers dictate policies and network orchestration software automatically configures and maintains the state of the network—will become mainstream. Again, the promise of automation is driving proof-of-concept and early-stage deployments of the technology.
For operators and enterprises alike, thriving in a competitive market requires networks that are highly responsive and agile. Having them be “powered by intent and informed by context,” as Cisco puts it, is the future.
7. Non-human speed SDN
We’ll wrap up by emphasizing the increasing speed demanded of communications networks, into a realm where humans can no longer effectively optimize their performance. A human-speed network is the traditional, static network of old. A non-human-speed, software-defined network (SDN) is one that involves network slicing in a closed-loop, artificial intelligence (AI)-assisted orchestration environment. This is only possible with end-to-end, top-to-bottom, precision instrumentation to see what’s happening network-wide and down to individual application transactions.
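The closed loop described above can be reduced to a toy sketch: measure, decide, act, repeat, with no human in the loop. The measure and act functions below are stubs standing in for real telemetry and orchestration APIs; the 2 ms target and "shift-traffic" action are illustrative.

```python
# Toy sketch of a closed-loop, non-human-speed control cycle:
# observe latency each step and trigger an orchestration action if it
# exceeds the target. measure() and act() stand in for real APIs.

def control_loop(measure, act, target_ms=2.0, steps=3):
    """Run one observe/decide/act cycle per step; return the action log."""
    actions = []
    for _ in range(steps):
        latency = measure()
        if latency > target_ms:
            act("shift-traffic")       # e.g. resize or move a network slice
            actions.append("shift-traffic")
        else:
            actions.append("no-op")
    return actions

readings = iter([1.5, 3.2, 1.8])       # simulated latency samples (ms)
print(control_loop(lambda: next(readings), lambda a: None))
```

The whole point of precision instrumentation is the `measure()` call: if its samples are noisy or coarse, every downstream decision in the loop is wrong at machine speed.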