
Low Latency: what makes 5G different

5G networks promise high-speed and, above all, low-latency mobile connections, opening the way to a true digital revolution.

5G Low Latency explained

A common perception is that the most important benefit brought by the new 5G technology is higher data speed. However, many overlook that 5G can also be exploited to address a far more critical challenge: the reduction of network latency. Latency is the end-to-end communication delay, measured as the time between sending a given piece of information and receiving the corresponding response.
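
As a rough, illustrative sketch (not taken from the article), end-to-end latency can be measured by timestamping a request and its response; the Python snippet below times a TCP handshake to a hypothetical host as a stand-in for "send a piece of information, wait for the response".

```python
import socket
import time

def round_trip_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time one request/response exchange (here: a TCP handshake) in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # the completed handshake plays the role of the "response"
    return (time.perf_counter() - start) * 1000.0

# Hypothetical host used purely for illustration
print(f"measured end-to-end latency: {round_trip_ms('example.com'):.1f} ms")
```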


To give an example, latency is the time gap between the moment you click "stop" and the instant a remotely driven vehicle actually starts braking. Reducing the latency experienced by end users from hundredths of a second to a few milliseconds can have an unexpected impact, leading to a real digital revolution.

The role of latency: what it is and what it is good for

The low delays achieved by 5G-based mobile networks open the way to radically new experiences and opportunities, including multiplayer mobile gaming, virtual reality, factory robots, self-driving cars and other applications for which a quick response is not optional but a strict prerequisite.


Focusing on self-driving vehicles, current cellular networks already provide a wide variety of tools that address some of the technology and business requirements. For example, LTE Cat-M and Narrowband IoT (NB-IoT) are excellent low-power sensor communication technologies. However, to enable complex vehicle maneuvering, in which individual actions such as acceleration, deceleration, lane changes or route modifications are determined and recommended, the vehicles must be able to share and receive information about their driving intentions in almost real time. This low-latency demand requires an overall 5G system architecture that provides optimized end-to-end vehicle-to-everything (V2X) connectivity.

How to reduce latency: the key enablers

Several technological enablers can be identified that speed up the communication process. First of all, the 5G standard allows excellent latency performance on the radio access link, providing a flexible framework to support different services and QoS requirements: scalable transmission slot duration, mini-slots and slot aggregation, a self-contained slot structure (i.e. transmission slots containing both downlink and uplink data), traffic preemption and so on. In summary, different transmission patterns can be shaped for different services.
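
As a back-of-the-envelope illustration of the "scalable transmission slot duration" mentioned above (not part of the original article), in 5G NR the slot shrinks as the subcarrier spacing grows; the short Python sketch below computes the nominal slot duration for each numerology.

```python
# Illustrative only: 5G NR defines numerologies mu = 0..4 with subcarrier
# spacing 15 * 2^mu kHz; a 1 ms subframe then holds 2^mu slots, so the slot
# duration shrinks accordingly (mini-slots of 2, 4 or 7 symbols are shorter still).
def nr_slot_duration_ms(mu: int) -> float:
    return 1.0 / (2 ** mu)

for mu in range(5):
    scs_khz = 15 * 2 ** mu
    print(f"subcarrier spacing {scs_khz:>3} kHz -> slot duration {nr_slot_duration_ms(mu):.4f} ms")
```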

Another important enabler is the deployment of fiber-based backhaul systems. Traditionally, 2G and 3G mobile networks often used copper-based circuits to connect cell sites to the Mobile Backhaul (MBH) network. This legacy MBH architecture quickly showed its age with the advent of 4G. MBH upgrades are taking place all over the world, converting legacy copper-based circuits serving cell sites to packet-based transport over fiber, which enables far higher capacities and future-proofs MBH networks. Future 5G networks will leverage these upgrades, given the almost unlimited bandwidth that fiber-based networks offer.

However, while the type of connection is a key consideration (for example, fiber optic cables allow much faster data transmission), distance remains one of the key factors in determining latency: the greater the distance that data must physically cover, the longer the communication delay, independently of the connection speed. This is why the real playmaker in this technological revolution is edge computing, i.e. the idea of moving as many resources as possible to the edge of the network, next to the end user.
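
A simple back-of-the-envelope calculation (assumed figures, not from the article) makes the point: light in optical fiber covers roughly 200 km per millisecond, so distance alone sets a hard floor on achievable latency, regardless of bandwidth.

```python
# Illustrative propagation-delay estimate: light in fiber covers ~200 km/ms
# (about two thirds of the speed of light in vacuum).
FIBER_KM_PER_MS = 200.0

def round_trip_floor_ms(distance_km: float) -> float:
    """Minimum round-trip delay imposed by distance alone, ignoring processing."""
    return 2 * distance_km / FIBER_KM_PER_MS

for distance_km in (1, 50, 1000, 5000):  # nearby edge site vs. distant data center
    print(f"{distance_km:>5} km away -> at least {round_trip_floor_ms(distance_km):.2f} ms round trip")
```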

A focus on Edge Computing

We’re in the Cloud Computing era. Many of us still own personal computers, but we mostly use them to access centralized services such as Dropbox, Office 365 and Gmail. Devices like the Apple TV, Amazon Echo and Google Chromecast are powered by content and intelligence hosted in the Cloud. Almost everything that could be centralized has been centralized. Edge computing moves most computing resources to the edge of the network, reversing this centralization and working with a more distributed network: computing is done at or near the source of the data, instead of relying on the Cloud in one of a dozen remote data centers. This doesn’t mean that the Cloud will disappear, but that the Cloud is migrating towards the end user.


By reducing the distance between the user and the computing resources, edge computing cuts the experienced latency where it is needed. Going back to the autonomous vehicle use case, a few milliseconds of delay can make the difference in avoiding a crash. Self-driving cars need to react immediately to changing road conditions and cannot afford to wait for instructions or recommendations from a distant Cloud server. Edge computing offers the solution: by locating servers and computing resources in edge facilities, both in high-traffic areas and in more remote areas with limited bandwidth, companies can ensure that their autonomous vehicles access the data they need with minimal latency and make decisions in almost real time.

Reply is leveraging 5G opportunities

Personal messaging requires near real-time data transfer, and even more so real-time visual and verbal communication. Reply has already created solutions that enable holographic telepresence (Holobeam) and multiparty communication in virtual reality, such as VR training or virtual visits, which will benefit from low-latency communication as soon as human avatars become photorealistic.

Reply is also actively working on CloudVR and Cloud gaming, where the local computation that previously required an expensive VR-ready PC or gaming console is now offloaded to edge computing. Real-time rendering of a VR world or game streamed to the customer requires high throughput, but the customer's movements and commands must also reach the edge with almost no latency.
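
To make the constraint concrete, here is a purely hypothetical motion-to-photon budget for cloud-rendered VR (the figures are illustrative assumptions, not measurements from Reply's projects); staying near the commonly cited ~20 ms comfort target is only plausible when rendering happens at a nearby edge site.

```python
# Hypothetical latency budget for cloud-rendered VR (illustrative numbers only)
budget_ms = {
    "capture movement/command on the headset": 2,
    "uplink to the edge over 5G": 4,
    "render the frame at the edge": 8,
    "encode and downlink the video": 5,
}
total_ms = sum(budget_ms.values())
for step, ms in budget_ms.items():
    print(f"{step:<42} {ms:>2} ms")
print(f"{'motion-to-photon estimate':<42} {total_ms:>2} ms (comfort target often cited: ~20 ms)")
```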
