Ping (ICMP) is usually measured on a host as the time each request packet takes to reach the destination plus the time the destination's reply takes to travel back (a round trip). The ping application on the host therefore calculates the time a packet took to go and come back.
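For illustration, here is a minimal sketch of measuring a round trip without ICMP raw-socket privileges; it times a TCP handshake instead (the handshake costs roughly one round trip). The host and port are placeholder assumptions, not anything from the page being discussed:

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Approximate the round-trip time by timing a TCP handshake.

    connect() completes after roughly one round trip (SYN out,
    SYN-ACK back), so its elapsed time is a reasonable RTT estimate.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    rtt = tcp_rtt_ms("example.com")  # placeholder target host
    print(f"round trip: {rtt:.1f} ms (naive one-way estimate: {rtt / 2:.1f} ms)")
```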
Latency, in the one-way sense, is usually defined as the time it takes for a packet coming from a server to be received by a host; latency is therefore usually approximately half the time a ping takes.
Latency matters most for real-time traffic such as video and online gaming, where it is important that packets are received quickly; how much delay the packets sent in the opposite direction suffer is far less relevant.
NASA says: "Data latency is the total time elapsed between when data are acquired by a sensor and when these data are made available to the public." This may include the presentation layer (showing the data to the public), so in its holistic form it can be the sum of three times: acquisition + receive + present. Note that no "send request" appears in that sum. Some might argue that the send request belongs in the equation, because the public sends a request for the data before receiving it; but if this is a video stream, you request once and then get a steady stream of receives, so we can ignore those times (even though they may occur). If we are strictly interested in the latency of this equation, i.e. how long the public waits for the sensor data, we do not include any send-request times.
Thus, generally speaking, under its standard definition we cannot accurately measure latency with pings: ping gives us the round-trip time in ms (request + reply), whereas latency is only the reply time. Dividing the ping time by 2 is an attempt to recover that one-way time, but it is not accurate, because the forward and return paths need not take the same amount of time.
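A tiny worked example of why RTT/2 can mislead, using made-up asymmetric delays:

```python
# Hypothetical asymmetric path: the request route takes 30 ms,
# the reply route only 10 ms (different routing in each direction).
forward_ms, return_ms = 30.0, 10.0

rtt_ms = forward_ms + return_ms  # what ping reports: 40 ms
naive_one_way = rtt_ms / 2       # RTT/2 estimate: 20 ms

# The estimate is off by 10 ms for either direction.
print(f"RTT={rtt_ms} ms, RTT/2={naive_one_way} ms, "
      f"actual one-way times: {forward_ms} ms and {return_ms} ms")
```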
It is common in many portals and graphical representations to intermix these definitions as if they were the same, but they are not; the difference is subtle.
It is very important to understand that applications that really measure latency (VoIP, audio, video, database, streaming, P2P gaming) will in fact show approximately half of the ping round-trip times.
Therefore the header on the page should change to "Round-Trip" rather than "Latency", just to set the record straight and not confuse teams that rely on these values and need to understand them technically; whether a figure is half or double the real time may matter to them.
: E = mc² - a note for the curious minds :
These times don't stretch: they are hard lower bounds set by current physics, and they relate to the time light needs to get from A to B. Since the fiber path is approximately a straight line, we can use the formula D = T * V, where D is the distance the light travels in the fiber, T is the time it takes, and V is the velocity of light in the fiber, which is close to 2*10^8 m/s (about two-thirds of the speed of light in vacuum). As a rule of thumb, the round trip is around 1 millisecond per 100 km of distance.
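A quick sketch of that rule of thumb as a calculation; the 2*10^8 m/s figure and the straight-line assumption come from above, and the city-pair distance is only a rough illustrative guess:

```python
V_FIBER_M_PER_S = 2e8  # approximate speed of light in optical fiber

def min_rtt_ms(distance_km: float) -> float:
    """Physical floor on round-trip time over a straight fiber run."""
    one_way_s = (distance_km * 1000.0) / V_FIBER_M_PER_S
    return 2.0 * one_way_s * 1000.0  # there and back, in milliseconds

print(min_rtt_ms(100))   # 1.0 ms -> matches the "1 ms per 100 km" rule
print(min_rtt_ms(5500))  # ~55 ms floor, e.g. Lisbon to New York (~5500 km, rough)
```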
The floor is therefore tied to how many kilometers light has to travel. It cannot be reduced in light of current human knowledge, and fiber optics is already the best way we have to move data around the world.
So the only way to reduce round-trip times from A to B is simply to get closer to the destination, or do what many have done: use CDNs to replicate data across regions so it is available closer and closer to the end users.
Hope I helped to clarify this topic, since it came to my attention.
Keep up the good work, Mat.
Cheers.