Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads only to a small increase in network throughput, or may even cause throughput to decrease.
Network protocols that use aggressive retransmissions to compensate for packet loss due to congestion can increase congestion, even after the initial load has been reduced to a level that would not normally have induced network congestion. Such networks exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.
Networks use congestion control and congestion avoidance techniques to try to avoid collapse. These include: exponential backoff in protocols such as 802.11 CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers. Another method is to implement priority schemes, transmitting some packets with higher priority than others. A third avoidance method is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn standard, which provides high-speed (up to 1 Gbit/s) local area networking over legacy home wiring (power lines, phone lines and coaxial cables).
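To illustrate the first of these techniques, the following is a minimal sketch of truncated binary exponential backoff as used in the original shared-medium Ethernet: after each successive collision, a sender waits a random number of slot times drawn from a range that doubles with every collision, up to a cap. The constant values reflect classic 10 Mbit/s Ethernet; the function and variable names are illustrative rather than taken from any particular implementation.

```python
import random

SLOT_TIME_US = 51.2          # slot time for 10 Mbit/s Ethernet, in microseconds
MAX_BACKOFF_EXPONENT = 10    # the contention window stops doubling after 10 collisions
MAX_ATTEMPTS = 16            # the frame is discarded after 16 failed attempts

def backoff_delay(collision_count: int) -> float:
    """Return the random wait (in microseconds) before the next
    transmission attempt, per truncated binary exponential backoff."""
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame discarded")
    # The range of possible wait times doubles with each collision,
    # spreading contending senders out over time as congestion worsens.
    exponent = min(collision_count, MAX_BACKOFF_EXPONENT)
    slots = random.randint(0, 2**exponent - 1)
    return slots * SLOT_TIME_US
```

The randomized, exponentially growing wait is what reduces offered load under contention: the more collisions a sender has experienced, the longer it is likely to defer, which lowers the chance that the same senders collide again.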