What is Latency & How to Improve it?
Have you ever faced a delay in receiving output from your system? Say an important analysis report that needs to be generated. Or, if you are part of the crypto world, you might have noticed slight delays in the processing of transactions. Given how volatile crypto prices are, such delays can potentially lead to huge losses.
Ever wondered what these delays are all about and what causes them? And even if you know, how do you avoid or overcome them? Walk through this article with us to understand more about the concept of latency.
Latency, which basically translates to “delay”, refers to the gap in time between when data is sent and when it is received. Suppose you trade in crypto and your transaction gets delayed: that delay is latency, because the network communication is taking longer than it ideally should.
Latency determines how quickly data travels between two systems. When data is transmitted quickly, without any noticeable delay, we call it low latency.
High latency is when data is transmitted very slowly, causing further delays. From a user’s perspective, it is highly undesirable and spoils the experience.
Now let’s explain how latency works.
Say you are a client with a device that communicates with a server over a computer network. You send data requests to the server and receive data responses from it. Along the path there are other devices such as routers, switches and firewalls, plus the wireless links or cables that connect everything.
The requests you send and the responses you receive hop from one device to another via these cables and links until they reach their destination. These network transfers are complex, and every hop on the data’s path adds to the time it takes to arrive; that accumulated delay is latency.
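As a rough illustration, here is a minimal Python sketch that estimates round-trip latency by timing a TCP handshake. The host example.com is just a placeholder, and dedicated tools like ping measure this more precisely:

```python
import socket
import time

def measure_rtt_ms(host: str, port: int = 443) -> float:
    """Time how long it takes to open a TCP connection, in milliseconds."""
    start = time.perf_counter()
    # The completed handshake plays the role of the server's "response".
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

print(f"Round trip to example.com: {measure_rtt_ms('example.com'):.1f} ms")
```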
Let us now have a look at the different types of latency:
If we give a system an instruction but there is a time lapse between the moment we send the signal and the moment the system responds, that lapse is known as interrupt latency. It is the time a system takes to act on the occurrence of an interrupt.
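A loose analogy in Python (Unix-only, since it relies on SIGUSR1): raise a signal against your own process and time how long it takes for the handler to run.

```python
import os
import signal
import time

t_raised = 0.0  # set just before the signal is sent

def handler(signum, frame):
    # The gap between raising the signal and this line running is a
    # rough stand-in for interrupt latency.
    delay_us = (time.perf_counter() - t_raised) * 1_000_000
    print(f"Handler ran {delay_us:.1f} microseconds after the signal")

signal.signal(signal.SIGUSR1, handler)
t_raised = time.perf_counter()
os.kill(os.getpid(), signal.SIGUSR1)  # "interrupt" our own process
```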
If we transmit a light signal through a fiber optic cable, the time the light takes to cover the length of the cable is what we call fiber optic latency. Light covers a kilometre in about 3.33 microseconds in a vacuum, but the refractive index of the glass slows it down, so in practice each kilometre of fiber adds roughly 5 microseconds, and imperfections in the cable can add further delay.
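A back-of-the-envelope calculation makes the effect of distance concrete; the link length below is an assumption for illustration:

```python
# Light needs ~3.33 microseconds per km in a vacuum; fiber's refractive
# index (~1.5) stretches that to roughly 5 microseconds per km.
FIBER_US_PER_KM = 5.0   # approximate; varies by cable
distance_km = 5_000     # assumed link length, roughly transatlantic scale

one_way_ms = distance_km * FIBER_US_PER_KM / 1000
print(f"{distance_km} km of fiber: ~{one_way_ms:.0f} ms one way, "
      f"~{2 * one_way_ms:.0f} ms round trip")
# -> 5000 km of fiber: ~25 ms one way, ~50 ms round trip
```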
This is basically network latency: if we transmit data across a network and that transmission is delayed, the delay is called internet latency. The lower the latency, the faster the network’s response time.
If we transmit data across a wide area network (WAN) that is busy directing other traffic, the transmission is delayed; this delay is WAN latency. It can occur even when the resource is requested from a server on the local area network.
When a sound is generated, the sound waves are transmitted through a medium (solid, liquid or gas). The time lapse between the creation of the sound and the point when we actually hear it is what we call audio latency. This delay depends on the speed of sound in the medium, be it a solid object, water or air.
When you perform a set of operations in a linear workflow, the operational latency is the combined duration of all those operations. If you perform the same operations in a parallel workflow instead, the latency is determined by the slowest single operation.
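In code, the difference is just sum versus max; the step times below are hypothetical:

```python
# Operational latency: a linear workflow pays the sum of its steps,
# while a parallel workflow pays only the slowest step.
step_ms = [120, 45, 300, 80]   # hypothetical per-operation times

linear_ms = sum(step_ms)       # operations run one after another
parallel_ms = max(step_ms)     # operations run side by side

print(f"linear: {linear_ms} ms, parallel: {parallel_ms} ms")
# -> linear: 545 ms, parallel: 300 ms
```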
The time lapse between the point when you give a mechanical system some input and the point when you obtain the output is what we call mechanical latency. The delay can be explained by basic physics, such as Newton’s laws of motion.
When we give a system some input but there are not enough data buffers, or there is a mismatch between the speeds of the input and output devices, the system is delayed in producing the output. This is what we call computer and OS latency.
Today we live in a world of continual digital transformation across almost every industry. As companies migrate from traditional systems to cloud-based services, they are becoming more and more dependent on the data provided to them by smart devices.
For example, you may have seen IoT devices installed in stores to give the store manager updates about stock levels.
Now suppose the latency of these systems creates a lag in delivering that information to the manager, preventing a timely response. Here we can clearly see how high latency can cause losses for the firm if stock is not replenished on time. It will also ultimately hamper your experience as a consumer.
Although network latency can be extremely problematic, don’t worry if you are facing this issue. It can be resolved in the following ways:
Hosting your servers and databases close to your users is one of the most important factors in improving latency. Coming back to the store example, if the store is based in the USA, the manager should host the servers and databases in America rather than in Europe or somewhere geographically distant, because the shorter network distance gives users a better experience. CDNs reduce latency by caching static and dynamic content and serving it from servers located closer to users. Alternatives to centralized cloud servers include edge computing and distributed systems like blockchains, where transactions are processed much closer to the end user.
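A crude way to compare candidate regions is to probe each one and keep the lowest connect time. The hostnames below are hypothetical; replace them with your provider’s real endpoints:

```python
import socket
import time

def connect_ms(host: str, port: int = 443) -> float:
    """Rough latency estimate: time to open a TCP connection, in ms."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

candidates = ["us-east.example.com", "eu-west.example.com"]  # placeholders
best = min(candidates, key=connect_ms)
print(f"Lowest-latency region: {best}")
```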
In the example previously mentioned, if the store manager upgrades the installed IoT devices by applying the latest software updates or the latest hardware and network configurations, the latency problem may soon be resolved. Regular network maintenance also helps reduce processing time.
By regularly monitoring network performance with management tools such as mock API testing and end-user experience analysis, the manager can keep a real-time check on network latency and troubleshoot issues as they arise.
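A toy monitor might look like the sketch below. The host, threshold and interval are all assumptions; real setups would use dedicated monitoring tools:

```python
import socket
import time

HOST, THRESHOLD_MS, INTERVAL_S = "example.com", 100, 5  # assumed values

def connect_ms(host: str, port: int = 443) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

for _ in range(10):  # ten samples; a real monitor would run continuously
    sample = connect_ms(HOST)
    print(f"{sample:6.1f} ms {'SLOW' if sample > THRESHOLD_MS else 'ok'}")
    time.sleep(INTERVAL_S)
```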
If you group network endpoints that communicate with each other frequently, you form a subnet, which acts like a network within a network. This minimizes unnecessary router hops and helps resolve latency issues.
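Carving one network into smaller subnets can be sketched with Python’s standard-library ipaddress module; the address ranges here are examples:

```python
import ipaddress

# Split a /16 office network into /24 subnets, one per group of hosts
# that talk to each other often.
office = ipaddress.ip_network("10.0.0.0/16")
for subnet in list(office.subnets(new_prefix=24))[:3]:
    print(subnet)
# 10.0.0.0/24
# 10.0.1.0/24
# 10.0.2.0/24
```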
If you make your network route VoIP calls and other high-priority traffic first, latency for that traffic improves greatly, because less important traffic types are delayed while the more important data packets are prioritized.
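A toy model of this prioritization, using a priority queue: the traffic classes and packets are invented for illustration, and real QoS happens in network hardware rather than application code.

```python
import heapq

PRIORITY = {"voip": 0, "video": 1, "bulk": 2}  # lower number = sent first

queue: list = []
for kind, payload in [("bulk", "backup-chunk"), ("voip", "call-frame-1"),
                      ("video", "stream-frame"), ("voip", "call-frame-2")]:
    heapq.heappush(queue, (PRIORITY[kind], kind, payload))

while queue:
    _, kind, payload = heapq.heappop(queue)
    print(f"sending {kind}: {payload}")
# VoIP frames go out first; bulk traffic waits its turn
```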
Every hop a data packet makes from one router to another delays its transmission. With an appropriate cloud solution, applications can run closer to end users, reducing the number of hops the traffic has to take.
So, we’ve seen what the concept of latency entails, its different types, and the various ways the problem can be addressed. If your firm is currently facing latency problems, remember that there are a number of solutions that can improve your business performance and give your end users a better experience.
High latency means long delays in transmitting data from one point to another, which spoils the user experience. In high-latency networks, the unnecessary lag creates bottlenecks in effective communication.
Generally, a latency of 40-60 milliseconds (ms) or lower is considered good, and 20-30 ms is great. Once latency crosses roughly 100-120 ms, you will start noticing the lag.
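Expressed as a quick helper using those rough cut-offs:

```python
def rate_latency(ms: float) -> str:
    """Classify a latency measurement using the thresholds above."""
    if ms <= 30:
        return "great"
    if ms <= 60:
        return "good"
    if ms <= 120:
        return "usable, lag may be noticeable"
    return "high latency"

print(rate_latency(25))   # great
print(rate_latency(150))  # high latency
```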
Disclaimers: Opinions expressed in this publication are those of the author(s). They do not necessarily purport to reflect the opinions or views of Shardeum Foundation.
About the Author: Anuska is an independent freelance writer freshly exploring web3 and blockchain space. Her articles blend personal exploration with established editorial methods, and she’d love to hear your thoughts in the comments!