5G & IoT? We need to talk about latency

Much of the discussion around the rationale for 5G – and especially the so-called “ultra-reliable” high QoS versions – centres on minimising network latency. Edge-computing architectures like MEC also focus on this. The worthy goal of 1 millisecond roundtrip time is often mentioned, usually in the context of applications like autonomous vehicles with snappy responses, AR/VR headsets without nausea, the “tactile Internet” and remote drone/robot control.

[This article is reposted from my blog from 4th December - link]

Usually, that is accompanied by some mention of 20 or 50 billion connected devices by [date X], and perhaps trillions of dollars of IoT-enabled value.

In many ways, this is irrelevant at best, and duplicitous and misleading at worst.

IoT devices and applications will likely span 10 or more orders of magnitude for latency, not just the two orders of magnitude covered by the 1-10ms and 10-100ms bands. Often, the main value of IoT comes from changes over long periods, not real-time control or telemetry.

Think about timescales a bit more deeply (a rough sketch of these cohorts, in code, follows the list):

  • Sensors on an elevator's doors may send sporadic data, to predict slowly-worsening mechanical problems – so an engineer might be sent a month before the normal maintenance visit.
  • A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over WiFi, in the owner’s garage, as it's not time-critical).
  • A large oil storage tank, or a water well, might have a depth-gauge giving readings once an hour.
  • A temperature sensor and thermostat in an elderly person’s home, to manage health and welfare, might track readings and respond with control messages every 10 minutes. Room temperatures change only slowly.
  • A shared bicycle might report its position every minute – and unlock in under 10 seconds when the user buys access with their smartphone app.
  • A payment or security-access tag should check identity and open a door, or confirm a transaction, in a second or two.
  • A networked video-surveillance system may need to send a facial image, and get a response in a tenth of a second, before the subject moves out of camera-shot.
  • A doctor’s endoscope or microsurgery tool might need to respond to controls (and send haptic feedback) 100 times a second – i.e. every 10ms.
  • A rapidly-moving drone may need to react in a millisecond to a control signal, or a locally-recognised risk.
  • A sensitive industrial process-control system may need to be able to respond in tens or hundreds of microseconds to avoid damage to finely-calibrated machinery.
  • Image sensors and various network sync mechanisms may require response times measured in nanoseconds.
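
To make that spread concrete, here is a rough sketch (the budgets and thresholds below are my own illustrative assumptions, not standards or measurements) of how one might bucket these use-cases into time-sensitivity cohorts, and map each to the slowest connectivity that would suffice:

```python
# Illustrative sketch only: all budgets and thresholds are assumptions
# for discussion, not standards or measurements.

USE_CASES = {  # name -> rough latency budget, in seconds
    "elevator wear prediction": 30 * 24 * 3600,   # ~a month
    "car software update": 7 * 24 * 3600,         # ~a week
    "tank depth gauge": 3600,                     # ~an hour
    "home thermostat": 600,                       # ~10 minutes
    "shared-bike unlock": 10,
    "payment / door access": 1,
    "facial-recognition response": 0.1,
    "surgical haptics (100Hz)": 0.01,
    "drone control": 0.001,
    "industrial process control": 0.0001,         # tens of microseconds
    "image-sensor / network sync": 1e-9,          # nanoseconds
}

def cohort(budget_s: float) -> str:
    """Crudely map a latency budget to the slowest adequate transport."""
    if budget_s >= 60:
        return "2G / store-and-forward (or the post!)"
    if budget_s >= 1:
        return "any cellular, incl. NB-IoT"
    if budget_s >= 0.02:
        return "4G-class"
    if budget_s >= 0.001:
        return "5G URLLC / MEC territory"
    return "fibre, or on-device / on-chip compute"

# Print from slowest to fastest, mirroring the list above
for name, budget in sorted(USE_CASES.items(), key=lambda kv: -kv[1]):
    print(f"{name:30} {budget:10.0e} s  ->  {cohort(budget)}")
```

On these made-up thresholds, only the bottom few rows land anywhere near the territory where the difference between 1ms and 100ms matters – which is exactly the point.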

I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into these very-different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I’d suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they’ll need fibre – or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it’s an “edge” node.

I suspect (this is a wild guess, I'll admit) that the proportion of IoT devices for which there’s a real difference between 1ms, 10ms and 100ms will be less than 10%, and possibly less than 1%, of the total.

(Separately, network-access performance might be swamped by extra latency added by security functions, or by edge-computing nodes being bypassed by VPN tunnels.)
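
As a back-of-envelope illustration (every figure below is invented for the sake of argument, not a measurement), a hypothetical traffic path shows how easily overlay functions can dwarf the radio leg:

```python
# Hypothetical end-to-end latency budget: all figures are invented
# assumptions to illustrate the point. A 1ms radio leg is easily
# swamped if traffic tunnels past the local edge node to a distant
# VPN gateway before being processed.

hops_ms = {
    "5G radio leg (URLLC target)": 1.0,
    "edge-node processing": 2.0,
    "security functions (e.g. TLS, inspection)": 3.0,
    "VPN detour to central gateway": 25.0,  # bypasses the edge node
}

total = sum(hops_ms.values())
for hop, ms in hops_ms.items():
    print(f"{hop:42} {ms:5.1f} ms  ({100 * ms / total:4.1f}%)")
print(f"{'end-to-end':42} {total:5.1f} ms")
```

If anything like those numbers holds, the radio interface contributes only a few percent of the total budget.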

The proportion of accrued value may be similarly low. A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm’s moisture sensors and irrigation pumps don’t need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.

Are we focusing 5G too much on the occasional "Goldilocks" situation of not-too-fast and not-too-slow? Even given 5G's other emphasis on density of end-points, is it really that essential for future IoT, or is it being overhyped to influence operators and policymakers?


If you are organising a conference, workshop or internal strategy session on 5G networks or IoT, and need a thought-provoking speaker, panellist or moderator, please get in touch. Other possibilities include business plan "stress-tests" & marketing communications like white papers.

Kjeld Lindsted

Group Manager Connected Vehicles | Smart Mobility, AI, IoT | 10x TAM Growth | Scaled B2B & GovTech Platforms | $50M Series B

We're exploring access to cellular location data from a major carrier for use in traffic planning, traffic detection, etc. This very conversation came up. There is currently about a 5 minute lag in the data (processing time mostly - it's a complex algorithm apparently) which is a problem for some use cases and totally irrelevant for others. As such, it's important to consider time when thinking about IoT and the data being generated.

Justin Roberts

Agricultural journalist specialising in farm machinery

"A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over WiFi, in the owner’s garage, as it's not time-critical)." Mass surveillance in other words.

LATENCY ... the word that killed you.

Gianluigi Cuccureddu

Senior Marketing Professional Specialized in Ecommerce & Data-Driven Decision Making

We need to talk about the health risks of 5G

Oleg Dubinin

Start-ups and Venture capital

2ms should be fine
