Glossary Topics: Speed - VoIP - Video - Route - Access Series TCP Quality metrics


Speed

Application Speed is a measure of the actual throughput achieved by a TCP application (which includes HTTP), including the impact of the route latency between the client and the server. When the latency of a connection exceeds the data consumption time of the slowest link on the route (usually the client's connection), the throughput speed will drop below the capacity (see Capacity Speed).
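
As a rough sketch of why latency matters, throughput over a single TCP window cannot exceed the window size divided by the round trip time. The window size, round trip time and line capacity below are invented figures, and the single-window model is a simplification of real TCP behaviour.

    # Hypothetical illustration: route latency can cap TCP application speed
    # below the raw capacity of the connection. All figures are assumptions.

    def tcp_throughput_cap(window_bytes: float, rtt_seconds: float) -> float:
        """Upper bound on TCP throughput (bits per second): window / RTT."""
        return (window_bytes * 8) / rtt_seconds

    capacity_bps = 50_000_000        # assumed 50 Mbps line capacity
    window = 64 * 1024               # assumed 64 KB TCP receive window
    rtt = 0.080                      # assumed 80 ms round trip time

    cap = tcp_throughput_cap(window, rtt)
    print(f"Latency-limited throughput: {cap / 1e6:.1f} Mbps")   # ~6.6 Mbps
    print(f"Application speed is the lower of the two: {min(cap, capacity_bps) / 1e6:.1f} Mbps")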

Capacity Speed is a measure of the maximum throughput, expressed in bits per second, that the connection can sustain. The maximum is limited by the smallest capacity of any single part of the connection's route between the client and the server. The smallest capacity on the route is usually, but not always, the client's connection. It should be noted that a connection's capacity measure does not reflect how an application will perform on the connection; see Application Speed.

Data Flow Speed is a measure of the rate at which packets are moving when they reach the client. This measure excludes any latency delays created by the length of the route between the client and the server and should indicate the maximum Capacity Speed of the connection. If this is not the case, it indicates problems with the connection's Data Flow QoS that are causing excessive delays. This is likely to be caused by quality issues such as packet loss or packet regulation.
Also see Application Speed

TCP Max delay is a measure of the maximum time the client spent waiting for data to arrive. TCP max delay should normally not exceed the TCP forced idle time created by the natural latency of the connection. If this is not the case, it indicates problems with the connection's Data Flow QoS that are causing excessive delays. This is likely to be caused by quality issues such as packet loss or packet regulation.

Data Flow QoS is a measure of how smoothly data packets are moving. If a connection is uncongested and unregulated then every packet should flow at a rate that matches the maximum capacity of the slowest part of the connection's route. If regulation or congestion causes this pattern to change then the Data Flow QoS will drop. Note that if there are problems affecting packets but the impact is evenly spread (e.g. all or nearly all packets are affected), the QoS may still read as high. See TCP Max Delay.

Capacity QoS is a similar measure to Data Flow QoS but represents the data flow during the capacity test. Because the capacity test is designed to fill the connection to the maximum capacity of the slowest part of the route, it is natural that data flow problems will occur at the point capacity is reached. Packets may drop or regulation events may occur to slow the data flow down (i.e. govern the capacity limit).

Route Speed is the maximum TCP application throughput speed attainable between the client and the server given the route used. The efficiency and length of the route ultimately dictate the round trip time for data to travel end-to-end and back. Because TCP needs to know that the packets it has sent have arrived safely, the trip time is material to the data throughput speed (see Round Trip Time).

Round Trip Time is the time it takes for a packet to travel end-to-end between the client and the server and back. The length and consistency of the trip time ultimately define the TCP throughput speed (see Route Speed). A long trip time will dramatically slow connection throughput speed, and an erratic trip time is an early indication of regulation or congestion problems.

VoIP

Jitter is a measure of the variation in the time each packet takes to reach the destination. In an ideal world each packet sent would take exactly the same time to travel between the client and the server (0% jitter), but in reality this is seldom the case: packets vary in the length of time (latency) they take to reach the destination, and on a bad connection that variation can be very large. Jitter is an expression of this variance.
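
A minimal sketch of one way jitter can be expressed, using invented transit times and the average deviation from the mean; real VoIP tools may use a different formula, such as the RFC 3550 interarrival estimator.

    # Sketch: jitter as the average deviation of per-packet transit times.
    # The transit times (milliseconds) are invented for the example.

    transit_ms = [21.0, 23.5, 20.8, 35.2, 22.1, 21.7]   # time each packet took to arrive

    mean = sum(transit_ms) / len(transit_ms)
    jitter_ms = sum(abs(t - mean) for t in transit_ms) / len(transit_ms)

    print(f"Mean latency: {mean:.1f} ms, jitter: {jitter_ms:.1f} ms")
    # Zero jitter would mean every packet took exactly the same time.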

Packet Loss is a measure of how many packets did not reach the destination for one reason or another, expressed as a percentage of the total number of packets. Any packet loss is bad and affects the quality of applications.

Packet Loss Distribution is a measure of how the packet loss is distributed across the test timeline. If a test of 1000 packets lost 1% (10 packets) and the loss occurred as one packet in every 100, the distribution would be 1%. However, if all 10 lost packets fell within a single window of 100 packets, the overall loss would still be 1% (10 packets) but the distribution would be 10%. A high distribution percentage means that the lost packets are concentrated in a small window of time, causing a bigger quality problem for the application.
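
The worked example above can be sketched as follows; the loss pattern is invented and the fixed 100-packet windowing is an assumption, not necessarily the exact rule used by the reported metric.

    # Sketch of overall loss vs. loss distribution using fixed 100-packet windows.

    def loss_stats(lost_flags, window=100):
        total = len(lost_flags)
        lost = sum(lost_flags)
        overall_pct = 100.0 * lost / total
        # Worst-case loss seen inside any single window of `window` packets.
        worst = max(sum(lost_flags[i:i + window]) for i in range(0, total, window))
        return overall_pct, 100.0 * worst / window

    # Case 1: one packet lost in every block of 100 (evenly spread).
    spread = ([1] + [0] * 99) * 10
    # Case 2: all 10 lost packets fall inside a single block of 100.
    burst = [1] * 10 + [0] * 990

    print(loss_stats(spread))   # (1.0, 1.0)  -> 1% loss, 1% distribution
    print(loss_stats(burst))    # (1.0, 10.0) -> 1% loss, 10% distribution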

Packet Order is a measure, expressed as a percentage, of how many packets arrived in order. Packets do not necessarily take the same route or the same time to reach the destination. This can result in packets arriving out of order, which causes other packets to be delayed or, in very bad cases, discarded. Delayed or discarded packets cause a quality problem for the application.

Packet Discards is a measure of packets that arrive too late to be used by the application. Packets are very time dependent when it comes to media-based applications: there is a time window within which a packet can be used, after which it is too late and the packet has to be intentionally discarded when it arrives. It is a bit like missing a connecting flight because the first flight was delayed and arrived after the second flight had taken off.

MOS score is a measure from 1 (the worst) to 5 (the best). MOS is quite subjective, as it originated with the phone companies and used human input from related quality tests. Software applications have adopted the MOS score and scale, namely: 5 – clear, as if in a real face-to-face conversation; 4 – fair, small interference but sound still clear (cell phones are a good everyday example); 3 – not fair, enough interference to start to annoy; 2 – poor, very annoying and almost unusable; 1 – not fit for purpose.

Video

Setup Time is a measure of the time taken to simulate the Real Time Streaming Protocol (RTSP) setup, which is responsible for initiating the video data stream to the client. This is synonymous with the concept of a user logging on to an application before it can be used.

Describe Time is a measure of the time taken to select and define the requirements for the desired video stream. This includes the chosen video content, the codec definition to be used to transmit the video, ownership details and so on.

Play Time is a measure of the time taken to request that the stream start playing the video. This is synonymous with pressing the 'play' button on a DVD player once the movie DVD is loaded and ready to go.
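
A rough sketch of timing part of an RTSP exchange follows. Only the DESCRIBE request is shown (a full client would continue with SETUP and PLAY), and the host and stream URL are placeholders rather than a real test server.

    import socket, time

    # Rough sketch: time the RTSP DESCRIBE phase. Host and URL are placeholders.
    HOST, PORT = "media.example.com", 554
    URL = f"rtsp://{HOST}/sample-stream"

    with socket.create_connection((HOST, PORT), timeout=5) as s:
        request = (
            f"DESCRIBE {URL} RTSP/1.0\r\n"
            "CSeq: 1\r\n"
            "Accept: application/sdp\r\n"
            "\r\n"
        )
        start = time.time()
        s.sendall(request.encode("ascii"))
        reply = s.recv(4096)
        describe_ms = (time.time() - start) * 1000.0

    print(f"Describe time: {describe_ms:.0f} ms")
    print(reply.decode("ascii", errors="replace").splitlines()[0])   # e.g. "RTSP/1.0 200 OK"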

Codec is a term derived from the words 'compress' and 'decompress', or in computer jargon "code/decode". Because media data such as video traditionally involves a lot of data, sending it across the network uncompressed would most likely consume more bandwidth than is available. The data is therefore formatted to be smaller in size, much in the same way we zip and compress files on our PCs before attaching them to emails. Many different codecs exist, and each end of the connection must understand the codec format being used if the video and/or audio streams are going to function correctly.
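
As a small illustration of the compression idea, the sketch below uses zlib, a general-purpose lossless compressor rather than a media codec, so it only mirrors the zip-file analogy above.

    import zlib

    # Illustration only: compress data before it crosses the network and
    # restore it on arrival. Media codecs use far more specialised (and
    # usually lossy) techniques, but the principle is the same.

    original = b"frame data " * 1000             # stand-in for raw media data
    compressed = zlib.compress(original, level=6)
    restored = zlib.decompress(compressed)

    assert restored == original                  # decoding recovers the data exactly
    print(f"{len(original)} bytes -> {len(compressed)} bytes "
          f"({100 * len(compressed) / len(original):.1f}% of original size)")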

Route

IP address stands for Internet Protocol address and identifies a particular device that is connected to the Internet. An IP address is analogous to a telephone number: unless you have the telephone number (IP address) of the destination you want to reach on the Internet, you cannot reach it. An IP address is made up of 4 separate numbers (called octets) that can each range from 0 to 255. For example, 192.168.0.100 is an IP address. To make the Internet more user friendly, and because most people have a hard time remembering numbers, IP addresses can be replaced with unique names that are more easily remembered and presented. See also DNS Name.

DNS Name. DNS stands for Domain Name Service. As people cannot usually remember IP address numbers, DNS allows an IP address to be attached to a name. For example, www.microsoft.com is a domain name. When a user uses a domain name the browser must do the equivalent of a directory look-up to find the IP address of the destination. A big advantage of using DNS names rather than IP addresses is that companies such as Microsoft can change IP addresses without impacting customers; the DNS directory simply needs to be updated with the new IP address when the change is made.

DNS time is a measure of how long it takes to look up a DNS name to retrieve the allocated IP address number. If DNS has problems this can cause significant performance problems including making a business application totally unusable.
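
A simple sketch of measuring DNS look-up time follows; the domain is only an example, and the operating system's resolver cache can make repeat look-ups appear much faster.

    import socket, time

    # Sketch: how long does it take to turn a DNS name into an IP address?
    name = "www.microsoft.com"

    start = time.time()
    addresses = socket.getaddrinfo(name, None)
    dns_ms = (time.time() - start) * 1000.0

    print(f"DNS time for {name}: {dns_ms:.1f} ms")
    print("Resolved to:", sorted({a[4][0] for a in addresses}))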

Hop. An Internet route is similar in concept to a road route. To navigate a car from the beginning of a journey to the destination, the journey is made up of a series of points along the route where the car has to change direction to complete the journey end-to-end. At each point on the Internet where there is a choice of direction there is a router device (termed a 'hop') that is responsible for sending the data in the right direction for the next hop. Just like road traffic, if a router hop is situated on a popular route it can become congested, causing heavy delays in throughput or even lost packets. Routers are owned by the companies that operate them; when two different companies send traffic to each other, the hops where their routes join are called peering points. Because of the natural issues that arise when more than one organization is involved, peering points are the hops where problems are most likely to occur.

Latency is a measure of the time taken for a route-testing packet to reach a particular hop on the route and return. By measuring each hop along the route to the destination, including the destination itself, it is possible to see where high latency may be causing degradation in throughput speed or dropped packets. Latency should only be higher where there is distance involved. For example, a route from London to New York has to cross the Atlantic Ocean, and 3,000 miles of ocean will obviously cause higher latency for that hop. If you see high latency on a route, validating the geography is an important step in making sense of any potential routing issues.
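
A traceroute-style sketch of probing the latency of a single hop follows. Sending a packet with a limited TTL makes the router at that hop reply with an ICMP "time exceeded" message; opening the raw ICMP socket normally requires administrator privileges, the destination address is a documentation placeholder, and real tools send several probes per hop.

    import socket, time

    def probe_hop(dest_ip: str, ttl: int, timeout: float = 2.0):
        # Raw ICMP socket to catch the "time exceeded" reply (needs root).
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            recv.settimeout(timeout)
            send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
            start = time.time()
            send.sendto(b"", (dest_ip, 33434))      # UDP probe to an unlikely port
            _, addr = recv.recvfrom(512)            # reply comes from the hop's router
            return addr[0], (time.time() - start) * 1000.0
        except socket.timeout:
            return None, None                       # hop did not answer in time
        finally:
            send.close()
            recv.close()

    # for ttl in range(1, 31): print(ttl, probe_hop("203.0.113.10", ttl))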

Access Series TCP Quality metrics

TCP Receive Statistics

Packets out of order
Reports the number of packets that have arrived out of order. This occurs when packets are delayed or, more importantly, lost. Out of order packets slow throughput because TCP cannot deliver data to the application out of order. You should review packets/bytes lost as well as packets/bytes retransmitted. If these values are non-zero then the connection has quality problems that can cause significant performance problems.

Bytes out of order
Reports the number of bytes that have arrived out of order. This occurs when packets are delayed or, more importantly, lost. Out of order packets slow throughput because TCP cannot deliver data to the application out of order. You should review packets/bytes lost as well as packets/bytes retransmitted. If these values are non-zero then the connection has quality problems that can cause significant performance problems.

Packets after receive window
Reports the number of packets that arrive outside the current TCP valid data window and therefore are thrown away as they cannot be used. TCP maintains a sliding window to manage the flow of data between the client and the server. If packets arrive outside the current window it indicates a serious problem with duplicate packets. Packets should never arrive outside the TCP Window.

Bytes after receive window
Reports the number of bytes that arrive outside the current TCP valid data window and therefore are thrown away as they cannot be used. TCP maintains a sliding window to manage the flow of data between the client and the server. If bytes arrive outside the current window it indicates a serious problem with duplicate packets. Packets should never arrive outside the TCP Window.

Bytes Lost
Reports the number of bytes that were sent but failed to arrive at the destination. Lost bytes are a sign of a very serious congestion problem or a regulatory policy problem. A stable and sound connection should not drop bytes. The penalty in throughput can be severe even if just one packet drops. In the case of acknowledgement packets being dropped the lost bytes can cause retransmit timeouts to occur.

Duplicate Packets
Reports the number of packets received at the destination more than once. Duplicates are usually caused when a connection displays very erratic latency end-to-end. As a result of erratic packet timing TCP can trigger a timeout and resend a packet that has not been lost but has only been delayed. This causes duplicates to occur.

Bytes received in duplicate packets
Reports the number of bytes received at the destination more than once. Duplicates are usually caused when a connection displays very erratic latency end-to-end. As a result of erratic packet timing TCP can trigger a timeout and resend a packet that has not been lost but has only been delayed. This causes duplicates to occur.

Partially duplicate Packets
Reports the number of packets received at the destination more than once that contain one or more duplicate bytes. Duplicates are usually caused when a connection displays very erratic latency end-to-end. As a result of erratic packet timing TCP can trigger a timeout and resend a packet that has not been lost but has only been delayed. This causes duplicates to occur.

Bytes received in partially duplicate packets
Reports the number of bytes received at the destination more than once within partially duplicate packets. Duplicates are usually caused when a connection displays very erratic latency end-to-end. As a result of erratic packet timing TCP can trigger a timeout and resend a packet that has not been lost but has only been delayed. This causes duplicates to occur.

Packets received with bad offset to data segment
The data offset index points to the start of the packet's data block. If this index is invalid (i.e. corrupt) the packet is dropped.

Packets received with bad checksum
The checksum is a validation calculation performed by the receiving TCP and compared with the same calculation done by the sending TCP, ensuring that the data received is the same as the data sent. If the checksum fails the packet is dropped and not used.
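
A minimal sketch of the ones' complement checksum defined in RFC 1071, which TCP uses, appears below; a real TCP checksum also covers a pseudo-header (source and destination IP addresses, protocol and length), omitted here for brevity.

    def internet_checksum(data: bytes) -> int:
        # RFC 1071 ones'-complement sum over 16-bit words.
        if len(data) % 2:
            data += b"\x00"                            # pad odd-length data
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]      # add the next 16-bit word
            total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
        return (~total) & 0xFFFF

    segment = b"example TCP payload"
    print(hex(internet_checksum(segment)))
    # The receiver repeats the calculation; a mismatch means the packet is dropped.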

Packets received which are too short
If an arriving packet is smaller than the minimum packet size allowed it is assumed to be corrupt and dropped.

Window probes received
Window probes are initiated by the TCP stack to check whether the receiver has memory available for storing the data waiting to be sent. This is an indicator of memory problems in the receiving computer, usually caused by a busy processor slowing the receiving application from reading its data.

Zero window updates sent (receive buffer full)
Zero window updates are a very bad event as they indicate that one end of the connection has stopped the data flow for lack of memory space.

 
TCP Send Statistics

Maximum transmission unit, negotiated between client and server
This metric reports the maximum packet size in bytes to be used by the client and the server. If this is too small then throughput will suffer. If it is too large then fragmentation will occur, also causing severe throughput problems. The MTU should be around 1460-1500 bytes.

Packets Retransmitted
Reports the number of packets that have been retransmitted. This occurs when packets are delayed or more importantly lost. You should look to see if you are getting fast-retransmit events or retransmit timeout events. Both types are bad quality events but timeouts can cause extreme problems.

Bytes Retransmitted
Reports the number of bytes that have been retransmitted. This occurs when packets are delayed or more importantly lost. You should look to see if you are getting fast-retransmit events or retransmit timeout events. Both types are bad quality events but timeouts can cause extreme problems.

Retransmit timer timeouts (due to lack of acks)
A timeout only occurs when the maximum amount of time allocated to a sent packet expires without the packet being acknowledged. Timeouts are very costly in performance terms and should never occur on a well-run network.

Fast retransmits (in response to duplicate acks)
A fast-retransmit occurs when the sending end receives 3 duplicate acknowledgements for a packet. When this happens the sender of the data tries to avoid a packet timeout event by resending the offending packet. This is done because a timeout event for a packet is very costly in performance and should be avoided at all costs. Fast-retransmits can cause duplicates if the packet was just delayed not lost.
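
A simplified sketch of the sender-side rule follows (three duplicate acknowledgements for the same sequence number trigger an immediate resend); real TCP stacks layer congestion control and SACK handling on top of this.

    # Simplified fast-retransmit trigger: resend as soon as the third duplicate
    # acknowledgement arrives instead of waiting for the retransmit timer.

    def process_acks(acks):
        last_ack, dup_count = None, 0
        for ack in acks:
            if ack == last_ack:
                dup_count += 1
                if dup_count == 3:
                    print(f"fast retransmit: resend segment starting at {ack}")
            else:
                last_ack, dup_count = ack, 0

    # The receiver keeps acknowledging 2000 because the segment at 2000 never arrived.
    process_acks([1000, 2000, 2000, 2000, 2000, 5000])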

Duplicate acks received
TCP operates a positive acknowledgement system, so when data is missing or out of order TCP will acknowledge the packet it expects rather than the packets it has received. This causes duplicate acknowledgements to be sent when data is lost or arrives out of order. See fast-retransmits and retransmit timeouts.

Persist timer timeouts (due to inactivity)
The persist timer runs while the remote receive window remains closed (zero window) and no update has arrived; each expiry triggers a window probe to check whether the window has reopened (see Window probes sent).

Acks received for unsent data
Acks received for unsent data imply that data from another data stream (an old stream) is arriving on the connection. This should never happen and indicates a severe problem.

Pure window update packets (no data)
When the available buffer memory allocated for incoming data changes, the receiver needs to notify the sender of the change. Usually this is done within a normal acknowledgement packet, but there are circumstances (memory removed by the operating system, for example) when no acknowledgement is due, in which case a window update packet carrying no data is sent instead.

Window probes sent
If the receive end of the connection causes the window to close, suspending all data flow, the sending end will send probes to discover whether the window has been opened again. This is done in case the window-open messages are lost, which would otherwise leave the connection suspended indefinitely.

Send window closed events (remote receive buffer full)
Zero window updates are a very bad event as they indicate that one end of the connection has stopped the data flow for lack of memory space.

 
Ethernet (local network interface) statistics

Receive overruns

Frames received with bad checksum

Frames received which were too large

Frames received which were not octet-aligned

Frames received which were too short

Frames received which were truncated

 
Network Statistics (these accumulate over a 24 hour period)

Network down time (s)
The Access Series continually tests the connectivity to the MCS server and reports the network unavailable time as downtime in seconds every hour as well as with every test result. The totals are cleared every 24 hours.

Network up time (s)
The Access Series continually tests the connectivity to the MCS server and reports the network available time as uptime in seconds every hour as well as with every test result. The totals are cleared every 24 hours.

Percentage network downtime in current 24 hour period
This metric expresses the downtime as a percentage of the total available time for the current 24 hour time window. This is provided to allow an alert trigger to be set for when downtime changes.

Seconds connection unavailable due to HTTP 3XX error
Reports the time, in seconds, that the connection was unavailable because HTTP requests to the MyConnection Server returned 3XX errors.

Seconds connection unavailable due to HTTP 4XX error
Reports the time, in seconds, that the connection was unavailable because HTTP requests to the MyConnection Server returned 4XX errors.

Seconds connection unavailable due to HTTP 5XX error
Reports the time, in seconds, that the connection was unavailable because HTTP requests to the MyConnection Server returned 5XX errors.

Seconds connection unavailable due to unknown HTTP error

Current firmware version
Reports the current firmware version of the Access Series internal software. MyConnection Server can be set to automatically upgrade the firmware when a new version is released.

Milliseconds since last restart
Internal Timer

Size of free packet queue (diagnostic)
Internal memory health monitor; this should never reach zero.

 
