The Latency Lag | Why Your Internet Feels Slower Than a Friday Afternoon at Work


Published: February 2025 | Updated March 2026

Latency is the gap between clicking something and it happening — and on a congested business network, that gap grows non-linearly. A link running at 90% capacity doesn’t just slow down; it can spike to hundreds of milliseconds of delay, making VoIP calls stutter and cloud applications feel broken. This post explains exactly what causes business internet latency, why load makes it dramatically worse, and how SD-WAN’s dynamic path selection and fair queuing solve problems that simply adding more bandwidth cannot.

Why Does Latency Exist?

At its core, latency is the time it takes for data to travel from point A (your device) to point B (the server) and back again. Every time you load a webpage, send an email, or stream a video, your data is on a journey through fibre cables, wireless links, and network routers – all of which introduce delays.

But this delay isn’t constant. Some days, your internet is snappy. Other days, it’s slower than a queue at Home Affairs. Why? Because latency is affected by multiple factors, including:

Distance

Data has to physically travel across networks, and the further it goes, the longer it takes.
If you’re connecting to a local server, it’s fast. If it’s halfway across the world, expect a noticeable delay.

Congestion (Load on the Network)

Think of a highway at peak traffic – the more cars (data packets), the slower everyone moves.
As more users stream, download, and video call, each user's share of bandwidth shrinks and queues build up at the bottleneck, making latency worse.

Packet Loss & Jitter

If packets get lost due to congestion or poor network quality, they have to be re-sent, adding more delay.
Jitter (variations in packet arrival times) makes things worse, especially for voice and video calls.

Network Equipment & Routing

Every router, switch, or firewall between you and the destination introduces processing delays.
Some networks route traffic inefficiently (think of taking a scenic detour instead of a straight road).

How Load Makes Latency Worse (And Why You Should Care)

Now, let’s talk about the biggest killer of network performance – network congestion.

Imagine a four-lane highway. When traffic is light, cars move freely. Now picture a public holiday rush – bumper-to-bumper traffic, hooters blaring, and frustrated drivers crawling forward. The internet works the same way. The more data packets trying to move through a congested link, the longer they take to reach their destination.

And here’s the kicker – as load increases, latency doesn’t just rise linearly; it skyrockets! A link that was running at 50% capacity might have an extra 10ms of delay, but at 90% capacity, it can spike to hundreds of milliseconds, making video calls stutter, gaming impossible, and cloud applications unresponsive.
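This blow-up can be sketched with the classic M/M/1 queuing model, where average delay grows as 1/(1 − utilisation). It's a simplification of a real link, and the 10 ms base service delay below is an illustrative figure, not a measurement:

```python
# Sketch: why queuing delay explodes as link utilisation approaches 100%.
# Approximated with the M/M/1 queuing model; 10 ms is an illustrative
# base delay, not a measured value.

def queuing_delay_ms(base_delay_ms: float, utilisation: float) -> float:
    """Average delay on a link at the given utilisation (0.0 to just under 1.0)."""
    if not 0 <= utilisation < 1:
        raise ValueError("utilisation must be in [0, 1)")
    # M/M/1: total delay = base service delay / (1 - utilisation)
    return base_delay_ms / (1 - utilisation)

for load in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"{load:.0%} load -> {queuing_delay_ms(10, load):6.0f} ms")
```

Note the shape of the curve: going from 50% to 90% load doesn't double the delay, it multiplies it five-fold, and the last few percent of headroom are where almost all the pain lives.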

The worst part? Many service providers oversubscribe their networks, selling more capacity than they can actually deliver at peak, causing chronic congestion. You pay for “high-speed” internet but get peak-hour gridlock instead.

How to Combat Latency (Without Crying in a Corner)

Thankfully, you’re not powerless against the dreaded lag monster. Here are some strategies to keep your network snappy:

Prioritise Traffic (QoS – Quality of Service)

Critical applications (like VoIP and video calls) get priority over downloads and social media scrolling.
Proper QoS ensures latency-sensitive traffic isn’t stuck behind a Netflix binge session.
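The idea can be sketched as a strict-priority scheduler: latency-sensitive classes always drain before bulk traffic. The class names and priority values below are hypothetical, not from any specific vendor's QoS implementation:

```python
import heapq

# Sketch of strict-priority QoS scheduling: voice packets are always
# dequeued before bulk-download packets, regardless of arrival order.
# Class names and priority values are illustrative.

PRIORITY = {"voip": 0, "video": 1, "web": 2, "bulk": 3}  # lower = served first

class QosQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserving FIFO order within a class

    def enqueue(self, traffic_class: str, packet: str) -> None:
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]

q = QosQueue()
q.enqueue("bulk", "netflix-chunk")
q.enqueue("voip", "rtp-frame")
q.enqueue("web", "http-get")
print(q.dequeue())  # the voice frame leaves first despite arriving later
```

Real QoS engines add safeguards (rate limits on the priority class, for instance) so voice traffic can't starve everything else, but the ordering principle is the same.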

Use Multiple Links (Load Balancing & Bonding)

Spreading traffic across multiple connections reduces congestion on any single link.
SD-WAN solutions can bond multiple links to create a more stable, low-latency experience.
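One common way to spread traffic is per-flow hashing: every packet of a given flow is pinned to the same link (so packets don't arrive out of order), while different flows spread across all available links. The link names below are illustrative:

```python
import hashlib

# Sketch: per-flow load balancing across multiple uplinks. Hashing the
# flow identifiers keeps a whole flow on one link, avoiding reordering,
# while different flows spread out. Link names are illustrative.

links = ["fibre", "lte", "microwave"]

def pick_link(src_ip: str, dst_ip: str, dst_port: int) -> str:
    """Map a flow deterministically to one of the available links."""
    key = f"{src_ip}-{dst_ip}-{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return links[digest % len(links)]

# Every packet of this flow lands on the same link:
print(pick_link("10.0.0.5", "142.250.1.100", 443))
```

True bonding goes further than this sketch, splitting a single flow across links and reassembling it at the far end, but per-flow balancing is the simpler and more common starting point.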

Reduce Hop Count

Fewer routers and switches mean less processing delay.
Direct peering with content providers (Google, Microsoft, Netflix, etc.) speeds things up.

Optimise Routing

Some networks route traffic inefficiently. A smart SD-WAN solution picks the best route in real time, avoiding congested paths.
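The selection logic can be sketched in a few lines: filter out links with unacceptable loss, then take the lowest-latency survivor. In a real deployment the numbers would come from continuous probes; the hard-coded samples and the 2% loss threshold below are illustrative assumptions:

```python
# Sketch: SD-WAN-style path selection. Latency and loss figures would
# come from continuous link probes; these samples and the 2% loss
# threshold are illustrative assumptions.

def best_path(latency_ms: dict, loss_pct: dict, max_loss: float = 2.0) -> str:
    """Pick the lowest-latency link whose packet loss is acceptable."""
    usable = {link: lat for link, lat in latency_ms.items()
              if loss_pct.get(link, 100.0) <= max_loss}
    if not usable:
        raise RuntimeError("no usable links")
    return min(usable, key=usable.get)

probes_latency = {"fibre": 8.0, "lte": 45.0, "adsl": 22.0}
probes_loss = {"fibre": 5.0, "lte": 0.1, "adsl": 0.5}  # fibre is lossy today

print(best_path(probes_latency, probes_loss))  # -> adsl
```

Note that the nominally “best” fibre link is skipped because it's dropping packets: raw latency alone isn't enough, and this is exactly the kind of decision static routing can't make.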

Switch to SD-WAN | The Latency Slayer

Standard networking treats all traffic the same, but SD-WAN is next-level when it comes to latency control. Here’s how it changes the game:

  • Dynamic Path Selection – SD-WAN constantly measures latency across all available links and sends traffic down the fastest route.
  • Fairness – ensures that all traffic flows get a fair share of bandwidth, preventing a single flow from hogging the link. One method to achieve this is Stochastic Fair Queuing (SFQ), which distributes packets into multiple queues using a hash function, ensuring that no single flow dominates. Unlike traditional FIFO queuing, SFQ prevents bandwidth-heavy applications (like large downloads) from starving latency-sensitive traffic (like VoIP and gaming). While SFQ doesn’t guarantee strict bandwidth limits per flow, it significantly improves network performance in shared environments by balancing traffic more equitably.
  • Adaptive Bandwidth Management – Instead of just slamming traffic onto whatever’s available, SD-WAN intelligently adjusts loads to keep performance optimal.
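The SFQ behaviour described above can be sketched in miniature: hash each flow into one of a few queues, then serve the queues round-robin so a flood from one flow can't bury everyone else. The queue count and the toy byte-sum hash are illustrative; real SFQ hashes the flow 5-tuple with a periodically perturbed seed:

```python
from collections import deque

# Minimal sketch of Stochastic Fair Queuing: flows are hashed into a
# small set of queues served round-robin, so one heavy flow cannot
# monopolise the link. Queue count and hash are illustrative.

NUM_QUEUES = 4

def flow_hash(flow_id: str) -> int:
    # Toy hash (byte sum); real SFQ hashes the flow 5-tuple with a
    # periodically perturbed seed to avoid persistent collisions.
    return sum(flow_id.encode()) % NUM_QUEUES

class SFQ:
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_QUEUES)]
        self._next = 0

    def enqueue(self, flow_id: str, packet: str) -> None:
        self.queues[flow_hash(flow_id)].append(packet)

    def dequeue(self):
        """Round-robin: each non-empty queue gets a turn per cycle."""
        for _ in range(NUM_QUEUES):
            q = self.queues[self._next]
            self._next = (self._next + 1) % NUM_QUEUES
            if q:
                return q.popleft()
        return None

sfq = SFQ()
for i in range(5):                 # a greedy download floods in first
    sfq.enqueue("download", f"dl-{i}")
sfq.enqueue("voip-call", "rtp-0")  # then a single voice packet arrives
first_two = [sfq.dequeue(), sfq.dequeue()]
print(first_two)                   # -> ['dl-0', 'rtp-0']
```

With a plain FIFO queue, the voice packet would sit behind all five download packets; here it is served second, which is the whole point of fair queuing.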

Key Takeaways:

  • Latency has four main causes: physical distance, network congestion, packet loss forcing retransmission, and inefficient routing through unnecessary hops
  • Congestion causes non-linear latency spikes — a link at 50% capacity may add 10ms of delay, but at 90% capacity the same link can spike to hundreds of milliseconds
  • Many ISPs oversubscribe their networks, meaning advertised speeds are available only at off-peak times — chronic peak-hour congestion is common on cheaper business broadband
  • SD-WAN’s dynamic path selection continuously measures latency across all available links and routes traffic over the fastest path in real time
  • Stochastic Fair Queuing (SFQ) distributes packets equitably across multiple queues, preventing single bandwidth-heavy flows from starving latency-sensitive traffic like VoIP
  • Adaptive bandwidth management intelligently adjusts load distribution rather than statically splitting traffic, maintaining optimal performance as link conditions change

Written by

Ronald Bartels

Director: South Africa · Nepean Networks · Johannesburg, South Africa

Ronald has over 30 years of hands-on networking experience spanning financial services, ISPs, and enterprise technology. He led infrastructure at Investec for nearly eight years, managed core IP networks at iBurst, and served as a solutions architect designing data centre migrations for governments and financial institutions. Since joining Nepean Networks in 2019, he has been the driving force behind SD-WAN adoption in South Africa — engineering resilient connectivity solutions purpose-built for the realities of the local market, including load shedding, mixed-quality last mile, and infrastructure variability. Ronald holds a BSc in Computer Science from Stellenbosch University and is a Certified Data Centre Professional (CDCP).
