Traffic prioritization poses a further challenge. As before, voice and other latency-critical applications will need to be prioritized, but how do we grade the countless IoT demands? Vehicle monitoring for oil levels is important but can take its time, whereas a sudden loss of pressure in a tyre could cause a crash. eHealth – and the litigation that could arise from failure in a medical monitoring application – presents another minefield, let alone the microsecond demands of M2M financial systems.
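One common way to grade such competing demands is a weighted priority scheme: each traffic class is assigned a rank reflecting both its latency sensitivity and the consequence of failure, and messages are dispatched in rank order. The sketch below is purely illustrative – the class names, priority values and queue discipline are assumptions for this example, not any carrier's actual QoS policy.

```python
import heapq

# Illustrative priority values (lower = dispatched first). These numbers
# are assumptions for the sketch, not a real carrier QoS policy.
PRIORITY = {
    "financial_m2m": 0,   # microsecond-sensitive machine-to-machine trading
    "ehealth_alarm": 1,   # medical monitoring failure carries litigation risk
    "tyre_pressure": 2,   # sudden loss of pressure could cause a crash
    "voice": 3,           # latency-critical, but tolerant of rare loss
    "oil_level": 9,       # important, but can take its time
}

def enqueue(queue, kind, payload, seq):
    """Push a message onto the priority queue; seq breaks ties FIFO."""
    heapq.heappush(queue, (PRIORITY[kind], seq, kind, payload))

def drain(queue):
    """Pop messages in strict priority order."""
    while queue:
        _, _, kind, payload = heapq.heappop(queue)
        yield kind, payload

q = []
for i, (kind, msg) in enumerate([
    ("oil_level", "oil at 40%"),
    ("tyre_pressure", "rapid pressure loss, front left"),
    ("voice", "call setup"),
]):
    enqueue(q, kind, msg, i)

# The tyre-pressure alert is served before voice; oil level waits its turn.
print([kind for kind, _ in drain(q)])
```

The design choice here is that arrival order is only a tie-breaker: a routine oil-level report queued first still yields to a safety-critical alert that arrives later.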
So the rise of IoT not only threatens a leap in capacity requirements, but also intense competition for resources and a plethora of different QoS and QoE standards to be met. And yet the test solutions to make sure that all these management challenges are properly addressed are already available and already being deployed by leading carriers.
Conclusion – a testing future
According to a recent Heavy Reading report, Mobile Network Outages & Service Degradations:
In what for the most part still tend to be flat revenue environments for mobile operators, maintaining network availability and excellent service and application performance is exceptionally challenging. That isn't just a function of the huge growth in traffic volumes and generally flat capex budgets. It's also a function of the growing diversity and complexity of application types and their underlying service requirements, and the increasing interdependence of different application, service and infrastructure layers within the network.
The report suggests that, although the number of incidents affecting mobile networks is about the same as two years ago, more outages now stem from network failures and more take longer to fix. The annual cost of outages has meanwhile risen 18% to around $20 billion. The good news is that operators are already confident they have the necessary testing capability:
Mobile operators seem to have a lot of confidence in the ability of testing and performance monitoring tools to accurately assess how successful a network upgrade has been. Only 9 percent of respondents believe that user loading alone can provide meaningful validation. Performance monitoring systems (rated most important by 59 percent) and the ability to test in the production network (32 percent) are both highly valued.