
Two users, one using Telnet and one sending files with FTP, both send their traffic out via router \(R\). The outbound link from \(R\) is slow enough that both users keep packets in R's queue at all times. Discuss the relative performance seen by the Telnet user if \(\mathrm{R}\)'s queuing policy for these two flows is (a) round-robin service, (b) fair queuing, or (c) modified fair queuing, where we count the cost only of data bytes, and not IP or TCP headers. Consider outbound traffic only. Assume Telnet packets have 1 byte of data, FTP packets have 512 bytes of data, and all packets have 40 bytes of headers.

Short Answer

The Telnet user fares worst under round-robin, noticeably better under fair queuing, and best under modified fair queuing, where Telnet traffic effectively dominates the link.

Step by step solution

01 - Understand the Packet Sizes

Identify the sizes of the packets for both Telnet and FTP. Telnet packets have 1 byte of data and 40 bytes of headers, totaling 41 bytes. FTP packets have 512 bytes of data and 40 bytes of headers, totaling 552 bytes.
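This arithmetic can be captured in a couple of constants. The snippet below is just a scratch calculation in Python, not part of the textbook solution; the variable names are our own.

```python
# Sizes from the problem statement (bytes).
HEADER_BYTES = 40        # IP + TCP headers on every packet
TELNET_DATA = 1          # data bytes per Telnet packet
FTP_DATA = 512           # data bytes per FTP packet

telnet_pkt = TELNET_DATA + HEADER_BYTES   # 41 bytes on the wire
ftp_pkt = FTP_DATA + HEADER_BYTES         # 552 bytes on the wire
print(telnet_pkt, ftp_pkt)                # -> 41 552
```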
02 - Round-Robin Service

In the round-robin queuing policy, each flow sends one packet per round. For every 41-byte Telnet packet transmitted, one 552-byte FTP packet is also transmitted, so Telnet receives only 41/593, or about 7%, of the link bandwidth, and each Telnet packet waits behind a full FTP packet. The Telnet user therefore sees the worst delay of the three policies.
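A quick back-of-the-envelope check (again a Python sketch, not part of the original solution) makes the imbalance concrete:

```python
telnet_pkt, ftp_pkt = 41, 552          # wire bytes per packet, from Step 1

round_bytes = telnet_pkt + ftp_pkt     # 593 bytes sent per round-robin cycle
telnet_share = telnet_pkt / round_bytes
print(f"Telnet share of the link: {telnet_share:.1%}")   # about 6.9%
# Each Telnet packet also waits behind one full 552-byte FTP packet,
# so Telnet latency is dominated by the FTP transmission time.
```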
03 - Fair Queuing

In fair queuing, each backlogged flow receives an equal share of the link bandwidth measured in bytes. Because Telnet packets (41 bytes) are much smaller than FTP packets (552 bytes), Telnet can send roughly 552/41, about 13, packets in the time FTP sends one, instead of the single packet per round it gets under round-robin. Telnet delay drops accordingly, so the Telnet user sees better performance than with round-robin.
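Under the same assumptions, the fair-queuing arithmetic looks like this (a rough sketch, not the textbook's scheduling algorithm):

```python
telnet_pkt, ftp_pkt = 41, 552          # wire bytes per packet

# Equal byte shares: for every 552 bytes FTP sends, Telnet may also send 552 bytes.
telnet_pkts_per_ftp_pkt = ftp_pkt / telnet_pkt
print(f"Telnet packets per FTP packet: {telnet_pkts_per_ftp_pkt:.1f}")  # about 13.5
# compared with exactly 1 Telnet packet per FTP packet under round-robin.
```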
04 - Modified Fair Queuing

If modified fair queuing is applied and only data bytes are counted, a Telnet packet costs 1 unit against an FTP packet's 512, so Telnet may send about 512 packets for every FTP packet. Measured on the wire, that is 512 × 41 = 20,992 bytes of Telnet traffic per 552-byte FTP packet, roughly 97% of the raw link. This gives the Telnet user by far the best performance of the three policies, at the FTP user's expense.
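The same kind of estimate for modified fair queuing (a sketch under the stated assumptions):

```python
telnet_data, ftp_data = 1, 512         # data bytes charged per packet
telnet_pkt, ftp_pkt = 41, 552          # actual wire bytes per packet

# Equal *data* shares: for every FTP packet (512 data bytes),
# Telnet may send 512 one-byte packets.
telnet_wire = (ftp_data // telnet_data) * telnet_pkt   # 512 * 41 = 20,992 wire bytes
ftp_wire = ftp_pkt                                     # 552 wire bytes
print(f"Telnet share of the raw link: {telnet_wire / (telnet_wire + ftp_wire):.1%}")
# about 97% -- counting only data bytes lets the small-packet flow dominate the wire.
```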

Key Concepts

These are the key concepts you need to understand to accurately answer the question.

round-robin queuing
Round-robin queuing is a scheduling technique where each flow of data gets its turn to send a packet in a circular order. Imagine a round table with the Telnet and FTP users as participants. The router serves one packet from Telnet, then one from FTP, and so on. This method is designed to ensure a fair distribution of service time among users.
However, because Telnet packets are much smaller (41 bytes) compared to FTP packets (552 bytes), the Telnet user can experience higher delays. Every FTP packet takes longer to send, causing the Telnet user to wait more. This can be quite frustrating for the Telnet user, as their smaller packets are often stuck behind the larger FTP packets.
fair queuing
Fair queuing is a more sophisticated method compared to round-robin queuing. It aims to provide a fair share of the bandwidth to each user. Here, the system considers the size of the packets and provides service accordingly. For instance, since Telnet packets are smaller, more Telnet packets will be delivered in the same time it takes to send a few large FTP packets.
This means that the Telnet user will see better performance because their smaller packets can be transmitted more frequently. Overall, fair queuing helps to balance the service each user gets, ensuring a more equitable experience.
modified fair queuing
Modified fair queuing takes fairness a step further by focusing only on the data bytes, ignoring the headers. In TCP/IP, headers add extra bytes to each packet: 40 bytes in our example. In this method, for Telnet, only the 1 byte of data is counted and for FTP, the 512 bytes of data are considered.
This adjustment gives an edge to the Telnet user as they have very little data per packet. As a result, Telnet packets face much less competition and are transmitted more often. This greatly enhances the Telnet user's experience, providing them with quicker and more reliable service.
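To see all three policies side by side, here is a small emulation, our own Python sketch rather than code from the textbook. It assumes both queues are always backlogged, charges each flow a per-packet "cost" appropriate to the policy (1 for round-robin, wire bytes for fair queuing, data bytes only for modified fair queuing), and always serves the flow whose next packet would finish with the smallest accumulated cost. The function name and the policy labels "rr", "fq", and "mfq" are invented for illustration.

```python
def simulate(policy, total_wire_bytes=1_000_000):
    # Per-packet "cost" charged to a flow under each policy:
    #   rr  -> 1 per packet (pure alternation)
    #   fq  -> wire bytes (data + headers)
    #   mfq -> data bytes only
    flows = {
        "telnet": {"wire": 41,  "cost": {"rr": 1, "fq": 41,  "mfq": 1}[policy]},
        "ftp":    {"wire": 552, "cost": {"rr": 1, "fq": 552, "mfq": 512}[policy]},
    }
    charged = {name: 0 for name in flows}   # accumulated cost per flow
    sent = {name: 0 for name in flows}      # packets transmitted per flow
    wire_used = 0
    while wire_used < total_wire_bytes:
        # Serve the flow whose next packet would finish with the least total cost.
        name = min(flows, key=lambda n: charged[n] + flows[n]["cost"])
        charged[name] += flows[name]["cost"]
        wire_used += flows[name]["wire"]
        sent[name] += 1
    return sent

for policy in ("rr", "fq", "mfq"):
    print(policy, simulate(policy))
```

Running it shows Telnet's packet count relative to FTP's climbing from roughly 1:1 under round-robin to about 13:1 under fair queuing and about 512:1 under modified fair queuing, matching the arithmetic in the steps above.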
packet sizes
Packet size plays a crucial role in network performance. A packet contains both data and protocol headers. The sizes for our example are:
  • Telnet: 1 byte data + 40 bytes headers = 41 bytes
  • FTP: 512 bytes data + 40 bytes headers = 552 bytes
The larger the packet, the longer it takes for the router to send it. This difference in packet size between Telnet and FTP impacts their performance under different queuing policies. Smaller packets, such as those from Telnet, can be handled more quickly, reducing latency. This is why understanding packet sizes is so important when considering network performance and queuing policies.
network performance
Network performance is a measure of how well data is transmitted across the network. Key factors influencing it include:
  • Bandwidth: The maximum rate of data transfer across a network path.
  • Latency: The time a packet spends getting from sender to receiver, including time spent waiting in router queues.
  • Throughput: The actual amount of data successfully transferred.
Queuing policies directly impact these factors. Round-robin lets large packets inflate the latency of small-packet flows, while fair and modified fair queuing reduce latency for small-packet flows such as Telnet, at some cost to the bulk-transfer flow. In scenarios where packet sizes vary significantly, like between Telnet and FTP, the choice of queuing policy can drastically change the performance each user sees.


Most popular questions from this chapter

Suppose a TCP connection has a window size of eight segments, an RTT of \(800 \mathrm{~ms}\), the sender sends segments at a regular rate of one every \(100 \mathrm{~ms}\), and the receiver sends ACKs back at the same rate without delay. A segment is lost, and the loss is detected by the fast retransmit algorithm on the receipt of the third duplicate \(\mathrm{ACK}\). At the point when the ACK of the retransmitted segment finally arrives, how much total time has the sender lost (compared to lossless transmission) if (a) the sender waits for the ACK from the retransmitted lost packet before sliding the window forward again? (b) the sender uses the continued arrival of each duplicate ACK as an indication it may slide the window forward one segment?

Suppose two TCP connections share a path through a router R. The router's queue size is six segments; each connection has a stable congestion window of three segments. No congestion control is used by these connections. A third TCP connection now is attempted, also through R. The third connection does not use congestion control either. Describe a scenario in which, for at least a while, the third connection gets none of the available bandwidth, and the first two connections proceed with \(50 \%\) each. Does it matter if the third connection uses slow start? How does full congestion avoidance on the part of the first two connections help solve this?

Suppose TCP is used over a lossy link that loses on average one segment in four. Assume the bandwidth \(\times\) delay window size is considerably larger than four segments. (a) What happens when we start a connection? Do we ever get to the linear-increase phase of congestion avoidance? (b) Without using an explicit feedback mechanism from the routers, would TCP have any way to distinguish such link losses from congestion losses, at least over the short term? (c) Suppose TCP senders did reliably get explicit congestion indications from routers. Assuming links as above were common, would it be feasible to support window sizes much larger than four segments? What would TCP have to do?

Suppose a router's drop policy is to drop the highest-cost packet whenever queues are full, where it defines the "cost" of a packet to be the product of its size by the time remaining that it will spend in the queue. (Note that in calculating cost it is equivalent to use the sum of the sizes of the earlier packets in lieu of remaining time.) (a) What advantages and disadvantages might such a policy offer, compared to tail drop? (b) Give an example of a sequence of queued packets for which dropping the highest-cost packet differs from dropping the largest packet. (c) Give an example where two packets exchange their relative cost ranks as time progresses.

Consider a router that is managing three flows, on which packets of constant size arrive at the following wall clock times: flow A: \(1,2,4,6,7,9,10\) flow B: \(2,6,8,11,12,15\) flow C: \(1,2,3,5,6,7,8\) All three flows share the same outbound link, on which the router can transmit one packet per time unit. Assume that there is an infinite amount of buffer space. (a) Suppose the router implements fair queuing. For each packet, give the wall clock time when it is transmitted by the router. Arrival time ties are to be resolved in order \(\mathrm{A}, \mathrm{B}, \mathrm{C}\). Note that wall clock time \(T=2\) is FQ-clock time \(A_{i}=1.5\). (b) Suppose the router implements weighted fair queuing, where flows \(\mathrm{A}\) and \(\mathrm{B}\) are given an equal share of the capacity, and flow \(\mathrm{C}\) is given twice the capacity of flow A. For each packet, give the wall clock time when it is transmitted.
