UNIVERSITY OF WISCONSIN-MADISON
Computer Sciences Department

Networking Depth Exam
Fall 2005
Instructions:
There are six questions on this exam; answer ALL six of the
following questions.
1. Internet Measurement
Measurement and analysis form the foundation of our understanding of
Internet structure and behavior "in real life". In fact, empirical
evaluation has led to several seminal results which have changed the way
people think about the Internet.
A. Describe the fundamental challenges in measuring and analyzing the Internet
and how some of these have been addressed over the past decade.
B. Give two examples of important results from empirical studies of Internet
structure and/or behavior, and their implications for future Internet system
and/or protocol design.
2. Internet Security
The Internet has become a central component in national and international
communication and commerce. As such, it is critical to protect it from
malicious attacks. However, there are many obstacles to providing
sufficient security, and it remains an ongoing concern in the research community.
A. How do attackers typically cover their tracks, and what capabilities might
be required to discover the origin of attacks?
B. Worms have been and continue to be a particularly scary problem facing the
Internet. Describe their propagation behavior, giving two examples (you can
describe either existing worms or projected worms). Is there hope for
containing their spread in the future?
3. Congestion Control
In the late 1980s the Internet experienced a series of so-called
congestion collapse events that reduced end-to-end performance to
nearly zero. This problem was addressed by Van Jacobson through a series
of simple enhancements to the transport layer, resulting in the Tahoe
version of TCP.
A. Describe the enhancements made by Jacobson in the Tahoe version of TCP and
how these addressed the problem of congestion collapse.
B. How does the Vegas version of TCP attempt to improve on the basic
Tahoe/Reno mechanisms? Why has Vegas not been widely deployed?
C. While TCP is an end-host-based congestion control mechanism,
random early detection (RED) is a network-based mechanism.
RED operates by probabilistically dropping or marking packets.
How does RED attempt to penalize TCP flows that impose a higher
load on the network path?
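For reference, a minimal Python sketch of the threshold-based drop decision usually used to describe RED; the class name and parameter values here are illustrative only, not prescribed by the question:

import random

class RedQueue:
    """Toy RED queue: drop probability grows with the average queue size."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.weight = weight   # EWMA weight for the average queue size
        self.avg = 0.0         # exponentially weighted average queue size
        self.queue = []

    def enqueue(self, pkt):
        # Update the average queue size on each arrival.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            drop_p = 0.0       # below the minimum threshold: never drop
        elif self.avg >= self.max_th:
            drop_p = 1.0       # above the maximum threshold: always drop
        else:                  # in between: drop (or ECN-mark) probabilistically
            drop_p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        if random.random() < drop_p:
            return False       # packet dropped or marked
        self.queue.append(pkt)
        return True

In this formulation every arriving packet faces the same drop probability, so flows that push more packets through the congested queue see proportionally more drops.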
4. BGP Policies
BGP is a path vector protocol that implements policy-based inter-AS
routing. For this problem you should assume that, other than the
policies discussed below, BGP propagates all advertisements and
always prefers routes with the shortest AS path, breaking ties
arbitrarily. Consider the two policies below separately and ignore all
details of the internal routing protocols used by the various ASes. Also
ignore all transient states that occur during BGP convergence.
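For reference, the assumed route-selection rule can be written in a few lines of Python; the data representation (each candidate route as a list of AS numbers) is an assumption made for the example:

import random

def select_route(candidate_paths):
    """Shortest AS path wins; ties are broken arbitrarily."""
    shortest = min(len(path) for path in candidate_paths)
    return random.choice([p for p in candidate_paths if len(p) == shortest])

# Example: between AS paths [7018, 701, 2] and [3356, 2], the two-hop path is chosen.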
A. Assume ISPs belong to one of 4 tiers and that ISPs connect to other ISPs
in the same tier, in the tier above, or in the tier below. If two ISPs
in the same tier connect, they are peers; if they are in different tiers,
the higher-tier ISP is a provider to the lower-tier ISP. For example,
a tier-2 ISP is a provider to the tier-3 ISPs connecting to it and is paid
for its services, but it is a client of the tier-1 ISPs it connects to and
pays them for their services. To implement these relationships, ISPs use the
following policies: they advertise routes learned from their clients to all
connected networks, and they advertise routes learned from their peers and
providers only to their clients (these export rules are sketched in code
after part B). Give an example of a topology in which there is a physical
path between two ASes, but they cannot communicate due to the restrictions
imposed by these policies. Can these policy restrictions lead to routing
loops? Explain.
B. Some autonomous systems have a policy of simply ignoring BGP
advertisements for prefixes below a certain size (say, they accept only
/19s or larger). Give an example of a topology in which there is a
physical path between two ASes, but they cannot communicate due to the
restrictions imposed by this type of policy. Can this type of policy
restriction lead to routing loops? Explain.
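For concreteness, a minimal Python sketch of the export rules described in part A, assuming a toy model in which every neighbor of an ISP is labeled as a client, peer, or provider; the function name is made up for the example:

def should_export(learned_from, advertise_to):
    """Return True if a route learned from a neighbor of type learned_from
    may be advertised to a neighbor of type advertise_to.
    Types are 'client', 'peer', or 'provider'."""
    if learned_from == "client":
        return True                      # client routes go to every neighbor
    return advertise_to == "client"      # peer/provider routes go only to clients

# A few checks of the rule:
assert should_export("client", "peer")        # advertised
assert not should_export("peer", "peer")      # withheld
assert not should_export("provider", "peer")  # withheld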
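Similarly, a small Python sketch of the prefix-length filter in part B, reading "accept only /19s or larger" as accepting a route only when its prefix length is at most 19 bits (i.e., the advertised block is at least as large as a /19); the constant and function names are made up for the example:

import ipaddress

MAX_ACCEPTED_PREFIX_LEN = 19   # "/19s or larger" = blocks no smaller than a /19

def accept_advertisement(prefix):
    # Accept the route only if the advertised block is a /19 or larger.
    return ipaddress.ip_network(prefix).prefixlen <= MAX_ACCEPTED_PREFIX_LEN

assert accept_advertisement("10.0.0.0/16")       # larger block: accepted
assert not accept_advertisement("10.1.32.0/24")  # smaller block: ignored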
5. Wireless MAC
In wired local-area networks, Ethernet is a well-understood
random-access mechanism that has been fairly successful. Ethernet
is based on the Carrier Sense Multiple Access with Collision
Detection (CSMA/CD) approach for contention resolution.
A. However, CSMA/CD, a mechanism that is known to be fairly robust in wired
networks, is inadequate in typical wireless scenarios. Why?
B. Define and explain the hidden terminal and exposed terminal problems. How
does the 802.11 MAC layer address these problems?
C. The 802.11 MAC protocol uses link-layer acknowledgments to provide
reliability. Is this in violation of the "end-to-end argument"? Explain
your answer.
6. Overlays
The success of IP is partially attributed to its minimalist approach --- it
implements a best-effort unicast routing service. Over the last decade
there have been a few different proposals to add further functions inside
the network. These include multicast and quality-of-service mechanisms.
A. Discuss two advantages and two disadvantages of such approaches to
implementing multicast and quality-of-service mechanisms in the network
layer.
B. Overlay multicast has been proposed as an alternative to network-layer
multicast. Discuss how overlay multicast addresses two of the disadvantages
of network-layer multicast.
C. Can quality-of-service mechanisms also be implemented using overlays?
Explain your answer.