By Boris Rogier

What drives network latency?

Understanding network latency is key to:

  • Properly managing the delivery of applications over the network
  • Understanding how network performance can be improved

Read on to learn the basics of network performance.

Network latency (the time required to send a packet across a network) is driven by four components, each detailed below; a short sketch summing them per hop follows the list:

  1. Propagation
  2. Processing
  3. Serialization
  4. Queueing
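As a minimal sketch of how these pieces combine, the one-way latency of a path is simply the sum of the four components over every hop. All per-hop figures below are invented placeholders, purely for illustration.

```python
# Minimal sketch: one-way latency as the sum of the four per-hop components
# listed above. All figures are invented placeholders, expressed in ms.
hops = [
    # (propagation, processing, serialization, queueing) per hop, in ms
    (5.0,  0.05, 0.012, 0.2),
    (20.0, 0.10, 0.012, 1.5),
    (15.0, 0.05, 0.012, 0.0),
]
one_way_latency_ms = sum(sum(hop) for hop in hops)
print(f"One-way latency: {one_way_latency_ms:.2f} ms")
```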

Propagation

Propagation is the time a signal takes to travel from the interface of one network device to another over the physical medium. It is essentially constant (unless the path changes) because it is governed by pure physics: time = distance / (2/3 × speed of light), since light travels through fiber at roughly two thirds of its speed in a vacuum.

Propagation time can only be reduced by shortening the distance. If you want to lower the propagation time from Europe to China, the only option is a different physical path: for instance, an infrastructure that routes traffic through Siberia instead of through India, cutting the distance travelled.
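As a rough illustration, here is a minimal Python sketch of the formula above, assuming light travels through fiber at roughly two thirds of its vacuum speed; the route lengths are hypothetical round figures, not measured distances.

```python
# Minimal sketch: one-way propagation delay over fiber, assuming light travels
# at roughly 2/3 of its vacuum speed in glass (the figure used above).
SPEED_OF_LIGHT_KM_S = 300_000                      # ~3 x 10^8 m/s, in km/s
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3     # ~200,000 km/s in fiber

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds for a given fiber distance."""
    return distance_km / FIBER_SPEED_KM_S * 1_000

# Hypothetical route lengths, not measured figures:
print(f"~12,000 km path (e.g. via India):  {propagation_delay_ms(12_000):.0f} ms one-way")
print(f"~9,000 km path (e.g. via Siberia): {propagation_delay_ms(9_000):.0f} ms one-way")
```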

Processing and serialization delay

Serialization delay (the time needed to clock a packet's bits onto the link) depends only on packet size and link speed, so for a given network device it is fairly constant and usually negligible. Processing delay, on the other hand, can vary a lot depending on the services each hop provides (bridging, routing, encryption, compression, tunnelling, …) and on the capacity of the device relative to its actual load. This part may therefore suffer from insufficient resources on the network device or from traffic overload.

When considering an end-to-end path, the number of hops has a direct impact on the total processing and serialization delay.
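To make this concrete, here is a minimal sketch of per-hop serialization delay; the packet size and link speeds are illustrative assumptions, not values taken from a real network.

```python
# Minimal sketch: serialization delay per hop, i.e. the time needed to clock a
# packet's bits onto the wire. Packet size and link speeds are illustrative.
def serialization_delay_ms(packet_bytes: int, link_bps: float) -> float:
    """Time in milliseconds to serialize one packet onto a link."""
    return packet_bytes * 8 / link_bps * 1_000

PACKET_BYTES = 1_500   # a typical Ethernet MTU-sized packet
for link_bps, label in [(10e6, "10 Mbps"), (1e9, "1 Gbps"), (10e9, "10 Gbps")]:
    print(f"{label:>7}: {serialization_delay_ms(PACKET_BYTES, link_bps):.4f} ms per hop")
```

Multiplied by the number of hops, these small per-hop delays contribute to the overall path latency.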


Queueing

Queueing delay is the time a packet spends waiting in a router's queue. It depends mainly on the size of the queue, which is itself driven by the overall traffic the router handles and by its burstiness (traffic peaks that overload the queues for very short periods). This burstiness mostly shows up as jitter, since latency varies along with the state of the queues and the traffic.
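Here is a minimal sketch of that effect, assuming a single FIFO queue with a fixed per-packet transmission time; the traffic pattern and service time are invented purely for illustration.

```python
# Minimal sketch: how bursty arrivals into a FIFO queue translate into jitter.
# The traffic pattern and service time below are invented for illustration.
def queueing_delays_ms(arrival_times_ms, service_time_ms):
    """Per-packet queueing delay for a single FIFO queue with fixed service time."""
    delays, free_at = [], 0.0
    for t in arrival_times_ms:
        start = max(t, free_at)        # wait if the link is still busy
        delays.append(start - t)       # time spent sitting in the queue
        free_at = start + service_time_ms
    return delays

# Steady arrivals vs. the same number of packets arriving in one burst:
steady = [i * 1.0 for i in range(10)]   # one packet every 1 ms
burst  = [0.0] * 10                     # ten packets at once
SERVICE_MS = 0.5                        # 0.5 ms to transmit each packet

print("steady:", [f"{d:.1f}" for d in queueing_delays_ms(steady, SERVICE_MS)])
print("burst: ", [f"{d:.1f}" for d in queueing_delays_ms(burst, SERVICE_MS)])
```

The steady stream sees no queueing delay, while the burst makes each successive packet wait longer: this is exactly the latency variation (jitter) described above.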

This is why, when you set out to measure network latency, it also makes a lot of sense to monitor metrics such as CPU and RAM usage (for processing) and the size of the queues.

When you design a network, pay attention to the distance covered by each route as well as the number of hops required.