

GUEST OPINION: New and emerging digital experiences highlight the importance of understanding and optimizing how traffic gets to and from an end user.

A lot has happened in the past couple of years that has collectively put latency – and specifically low- and ultra-low latency – on the map.

Latency – the delay between when a packet leaves the streaming source and when it arrives at a device – makes itself felt in many ways, the most common being lag, dropped frames, buffering and, with that, reduced content quality.
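To make that definition concrete, here is a minimal sketch of how a client could measure the lag. It rests on assumptions the piece does not specify: a hypothetical capture_ts field stamped by the source on each segment, and source and device clocks kept in sync (for example via NTP).

```python
import time

def end_to_end_latency(segment_metadata: dict) -> float:
    """Seconds between capture at the source and arrival at this device.

    Assumes a hypothetical 'capture_ts' field stamped by the source and
    NTP-synchronized clocks; real players infer latency less directly.
    """
    arrival_ts = time.time()  # wall-clock time when the segment arrived
    return arrival_ts - segment_metadata["capture_ts"]

# Example: a segment captured 6.2 seconds ago shows ~6.2 s of latency.
segment = {"capture_ts": time.time() - 6.2}
print(f"observed latency: {end_to_end_latency(segment):.1f} s")
```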

To be clear, latency was already a topic of interest to quite a few cohorts of users.


On the consumer side, that includes gamers, viewers of ultra high-definition streaming video such as live sports, and regional and remote users on satellite connectivity.

On the enterprise side, latency could be an issue for real-time use cases, such as high-frequency or algorithmic trading, or for data collection at remote ‘edge’ locations like mine sites and gas platforms.

However, latency was really driven into the mainstream by the work-from-home revolution, which brought new metrics for understanding the complicated, interconnected networks that carry user traffic – and, in particular, how different times of day affect latency.

Latency and its impact on web applications are even regularly measured by the Australian Competition and Consumer Commission.

The thing to understand about latency is that it’s not only important to the performance of applications today, but also to those of tomorrow.

In particular, a goal of network engineers and end users alike is to have low or – eventually – ultra-low latency connections to the internet. As PwC Australia notes, this is already happening to an extent through the growing footprint of 5G networks.

The truth is that ultra-low latency connections will need to become even more pervasive for the metaverse era. High-performing connections are likely to be crucial to broad participation in environments that are highly dependent on real-time interaction.

Any performance lag between users would not only be highly noticeable but would also undermine the promise of the experience.

If latency intrudes on those experiences, customers will turn to other providers, watch different events, or engage with other people and businesses. The real benefit of ultra-low latency, then, is that content, user experiences, and data are delivered in near real-time, forming the basis of better user experiences and enabling new business models.

What latency looks like

For the purposes of this piece, I’ll explain latency, its challenges and opportunities, largely in consumer terms.

The pipeline that takes content from creation through transmission to eventual reception on a consumer device requires processing, bandwidth and time.

It can often take up to 10 seconds for a live event to display on a consumer device.

While the average HD cable broadcaster experiences 4 to 5 seconds of latency, about a quarter of content networks are challenged by anywhere from 10 to 45 seconds of latency.

There is no formal standard today, but low-latency delivery typically means the video reaches a consumer’s screen less than 4 to 5 seconds after the live action, while ultra-low latency means faster still – often framed as around a second or less.

So-called “glass-to-glass” latency – from the camera lens to the viewer’s screen – often runs to around 20 seconds, while high-definition cable TV content, at about 5 seconds, is the benchmark for low latency.
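Pulling those figures together, a small sketch can bucket a measured glass-to-glass delay into the tiers discussed above. The sub-second cut-off for ultra-low latency is an assumption on my part, since, as noted, there is no formal standard; the other thresholds follow the figures in the text.

```python
def classify_latency(glass_to_glass_s: float) -> str:
    """Bucket a glass-to-glass delay using the rough tiers discussed above.

    The 1-second cut-off for 'ultra-low' is an assumption; no formal
    standard exists. Other thresholds follow the figures in the text.
    """
    if glass_to_glass_s < 1:
        return "ultra-low latency"
    if glass_to_glass_s <= 5:      # HD cable TV benchmark: ~4-5 s
        return "low latency"
    if glass_to_glass_s <= 45:     # many content networks: 10-45 s
        return "typical broadcast latency"
    return "very high latency"

for delay_s in (0.8, 4.0, 20.0, 50.0):
    print(f"{delay_s:>5.1f} s -> {classify_latency(delay_s)}")
```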

Reducing latency

There are many causes of latency in broadcast and delivery networks. The mere act of encoding a live video stream into packets to be sent over a network introduces delay. Add delivery through a variety of third-party networks to the end user’s device, and the latency grows. Different protocols also have different strengths and weaknesses, and reducing latency may not always be the primary consideration.

Latency can be reduced by tuning the encoding workflow for faster processing, but doing so causes inefficiencies – and higher costs – elsewhere. Smaller network packets and video segments mean more overhead and less usable bandwidth but reduce latency, while larger segments improve overall bandwidth efficiency at the cost of a real-time experience.
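To see why segment size matters so much, consider that HTTP streaming players commonly buffer roughly three segments before playback begins – a widely cited rule of thumb for HLS-style delivery, not a guarantee. Under that assumption, and with an invented per-segment request overhead, the trade-off looks like this:

```python
# Sketch of the segment-size trade-off. The three-segment buffer is a
# common rule of thumb for HLS-style players, and the per-segment
# overhead figure is invented for illustration.

PLAYER_BUFFER_SEGMENTS = 3
PER_SEGMENT_OVERHEAD_S = 0.2  # hypothetical request/manifest cost

for segment_duration_s in (1, 2, 6, 10):
    latency_floor_s = PLAYER_BUFFER_SEGMENTS * segment_duration_s
    segments_per_minute = 60 / segment_duration_s
    overhead_s_per_min = segments_per_minute * PER_SEGMENT_OVERHEAD_S
    print(f"{segment_duration_s:>2} s segments: ~{latency_floor_s:>2} s "
          f"latency floor, ~{overhead_s_per_min:>4.1f} s/min overhead")
```

Shorter segments pull the latency floor down but multiply the fixed costs of requesting and managing them; longer segments do the reverse.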

The workflow of capturing and encoding media is a good place to look for opportunities to reduce latency. A well-tuned workflow can deliver encoded video segments quickly, but minimizing processing time is not the only goal: spending more time on processing can often produce a more compact data stream, reducing the overall network latency. There is, in effect, a dial between processing efficiency and network-transport efficiency, and content publishers must find the right balance.
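A toy model makes that dial visible. The encode times and output sizes below are invented for illustration, not benchmarks – the point is only that a slower, more thorough encode can produce a stream compact enough to win back its extra processing time in transit.

```python
# Toy model of the processing-vs-transport dial. All numbers are
# invented for illustration, not benchmarks.

LINK_MBPS = 10  # assumed delivery bandwidth to the viewer

presets = {
    # name: (encode seconds per segment, output megabits per segment)
    "fast encode, larger output": (0.3, 25.0),
    "slow encode, smaller output": (1.2, 12.0),
}

for name, (encode_s, megabits) in presets.items():
    transport_s = megabits / LINK_MBPS
    print(f"{name}: {encode_s:.1f} s encode + {transport_s:.1f} s "
          f"transport = {encode_s + transport_s:.1f} s total")
```

On a slow enough link, the compact stream arrives sooner overall – exactly the balance the paragraph above says publishers must strike.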

While building an efficient method of recording, encoding, and initially transmitting content can help remove inefficiencies and latency early in the process, much of the actual latency occurs during delivery. Minimizing delivery latency requires planning and optimization, as well as an acceptance of tradeoffs between latency and cost.

Content companies need to find solutions for both the front end of the system and the network delivery components to achieve the lowest latencies possible.
