Computer Networks Review 2

1. Hierarchy of the public telephone network architecture

Telephones (phones): the users (homes and businesses)
End offices (switching offices): connect to the telephones via local loops; the offices nearest to the phones
Toll offices (higher-level switching offices): connect many end offices via very-high-bandwidth trunks. These entities form a national hierarchy with little redundancy, so it is vulnerable: if a few key toll offices fail, whole regions become isolated.

Intermediate switching offices: additional switching levels inserted between end offices and toll offices as the network grew.

Local loops (the "last mile"): analog twisted-pair wires running to houses and businesses. Low capacity and the weakest links, but they give everyone access to the system.
Trunks: digital fiber-optic links. High capacity, able to carry many simultaneous calls.

 

2. Telephone network topologies

fully interconnected network  /  centralized switch  /  two-level hierarchy

 

3. LATA, LEC, IXC, POP: definitions and descriptions

LATA: Local Access and Transport Area, a region within which a single local carrier handles calls

LEC: Local Exchange Carrier, handles calls that originate and terminate within its LATA

IXC: IntereXchange Carrier, carries long-distance traffic between LATAs

POP: Point of Presence, the point within a LATA where an IXC's network connects to the LEC

Tandem office: an LEC switching office that interconnects the end offices within a LATA

 

Path out of a LATA: the LEC's tandem office connects to the IXC's POP, which in turn connects to the IXC's toll office.

 

4. Multiplexing

Stochastic (statistical) multiplexing:
  The communication link is divided into an arbitrary number of variable-bit-rate digital channels or data streams
  Adapts to the instantaneous traffic demand of the data streams transferred over each channel
  Improves the link utilization rate
  Packet-mode: packet oriented, better suited to packet-switched networks
  On-demand service
  Randomized order of available slots
  Variable delays
  Allows an arbitrary number of divisions
  No wasted slots
  Link transmission capacity is shared only by the processes that have packets to send
  Carried out at the data-link layer and above
Deterministic multiplexing (e.g., FDM, TDM):
  Fixed division/sharing of the link
  Fixed order of available slots -> fixed delays
  Allocated resources can be wasted (silent periods) -> less efficient -> more costly
  Shared by everyone on the link, regardless of activity
  Resources reserved regardless of demand (better suited to circuit-switched networks)
  Reservation requires a more complex, more costly implementation
  Carried out at the physical layer

The key here is:
Deterministic multiplexing has a pre-determined fabric for resource sharing (TDM, FDM, space... it makes no difference), so one can predict when a packet from a given data stream will be served (which is connected with the performance guarantees of circuit-switched networks).
In stochastic multiplexing, the amount and timing of the allocated resource are random and a function of the network conditions. This latter case naturally suits packet switching, where packets are buffered: the allocated transmission time is a function of the position in the buffer, which is, in the end, a random variable.
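The contrast can be illustrated with a small simulation (the traffic model is a hypothetical sketch, not from these notes): four bursty streams share a link with four slots per frame. TDM pins one slot to each stream, while statistical multiplexing grants the same four slots to whichever streams are backlogged.

```python
import random

random.seed(42)  # deterministic traffic for reproducibility

def tdm_utilization(frames, n_streams):
    """TDM: one fixed slot per stream per frame; a slot is wasted
    whenever its stream has nothing to send."""
    used = sum(1 for frame in frames for d in frame if d > 0)
    return used / (len(frames) * n_streams)

def statistical_utilization(frames, n_streams):
    """Statistical multiplexing: the same n_streams slots per frame go
    to whichever streams have packets (per-frame sketch, no carryover)."""
    used = sum(min(sum(frame), n_streams) for frame in frames)
    return used / (len(frames) * n_streams)

# Hypothetical bursty sources: each of 4 streams is silent 70% of the
# time and offers a burst of 3 packets 30% of the time.
frames = [[random.choice([0, 0, 0, 0, 0, 0, 0, 3, 3, 3]) for _ in range(4)]
          for _ in range(10_000)]

print(f"TDM utilization:         {tdm_utilization(frames, 4):.2f}")
print(f"statistical utilization: {statistical_utilization(frames, 4):.2f}")
```

With these numbers TDM leaves roughly 70% of its slots idle, while statistical multiplexing roughly doubles the utilization, which is the "no wasted slots" point above.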

 

5. Why is circuit switching used instead of packet switching in the public telephone network? Why is packet switching more efficient in the Internet infrastructure?

Difference: In circuit switching, a dedicated end-to-end circuit is established for the call, and all of the call's traffic flows over that circuit. In packet switching, data are grouped into small packets; each packet is switched and routed separately and reassembled in the proper sequence at the destination.

Circuit switching is used for the telephone network because (1) phone calls are relatively sparse but generate data at a fairly constant rate during a session, and (2) circuit switching provides stable quality of service, given that predictable rate.

Packet switching is more efficient in the Internet infrastructure because (1) packets are routed individually, so no resources are reserved and links are never held idle by silent connections, and (2) the link is shared in the time domain: bursty traffic from many sources is statistically multiplexed onto the same wire.
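The efficiency gain can be made concrete with a classic back-of-the-envelope calculation (the numbers are a hypothetical textbook-style example, not from these notes): a 1 Mb/s link, users needing 100 kb/s when active, each active 10% of the time.

```python
from math import comb

def prob_more_than(k, n, p):
    """P(X > k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1, n + 1))

# Circuit switching must reserve 100 kb/s per user: at most 10 users fit
# on the 1 Mb/s link. Packet switching can admit 35 users and only
# overloads when more than 10 are active at the same time:
overload = prob_more_than(10, 35, 0.10)
print(f"P(> 10 of 35 users active) = {overload:.5f}")
```

The overload probability is well under 1%, so packet switching serves 3.5x as many users at essentially the same quality.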

 

6. Packet loss

Since a queue preceding a link has finite capacity, a packet can arrive to find the queue full. With no place to store it, the router drops the packet; that is, the packet is lost.
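The behaviour above is a drop-tail queue, which can be sketched in a few lines (the class name and capacity are illustrative):

```python
from collections import deque

class DropTailQueue:
    """Sketch of a router's finite output buffer with a drop-tail policy."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()
        self.dropped = 0

    def arrive(self, packet):
        if len(self.buffer) >= self.capacity:
            self.dropped += 1        # queue full: the packet is lost
            return False
        self.buffer.append(packet)
        return True

    def transmit(self):
        return self.buffer.popleft() if self.buffer else None

q = DropTailQueue(capacity=3)
for pkt in range(5):                 # burst of 5 arrivals, no departures yet
    q.arrive(pkt)
print(q.dropped)                     # -> 2: packets 3 and 4 found a full queue
```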

 

7. Delay

Processing delay: The time to examine the packet's header and determine where to direct the packet. There can be other types of processing, e.g., checking for bit-level errors.

Queuing delay: The time for the packet to wait for earlier-arrived packets to be transmitted. It depends on the packets in the queue when the packet arrives.

Transmission delay: The time to transmit the packet onto the link. It depends on the length of the packet and the transmission rate of the link.

Propagation delay: The time for the packet to propagate through the link physically. It depends on the distance and the propagation speed on the link.

 

8. Layers of the Internet protocol stack and their main functionality

Application

Message. Applications plus the protocols that support them (types of messages, syntax).

Service rate: depends on number of parallel connections.

Transport

Segment. Between application endpoints. TCP congestion control: connection-oriented service.

Service rate: effect of flow and congestion control

Network

Datagram. Between hosts. The celebrated IP protocol defines the fields in the datagram and how end systems and routers act on these fields; routing protocols determine the routes that datagrams take between source and destination.

Service rate: effect of packet scheduling & dropping

Link

Frame. Between nodes. The network layer depends on the link layer; the service depends on the specific link-layer protocol.

Service rate: effect of multiple access

Physical

Bit. Move the individual bits within the frame from one node to the next.
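The units above nest: each layer wraps the unit from the layer above with its own header (message -> segment -> datagram -> frame). A toy sketch with nested dictionaries; every field name and address below is an illustrative placeholder, not a real header layout:

```python
def encapsulate(message):
    """Wrap an application message in toy transport, network and link headers."""
    segment  = {"tcp_header": {"src_port": 5000, "dst_port": 80},       "payload": message}
    datagram = {"ip_header":  {"src": "10.0.0.1", "dst": "10.0.0.2"},   "payload": segment}
    frame    = {"eth_header": {"src_mac": "aa:aa", "dst_mac": "bb:bb"}, "payload": datagram}
    return frame

frame = encapsulate("GET / HTTP/1.1")
# Each receiving layer strips its own header; peeling all three
# recovers the original application message.
print(frame["payload"]["payload"]["payload"])   # -> GET / HTTP/1.1
```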


 

9. Compare TCP and UDP. What services do they offer to the application layer? Why is UDP more suited to multimedia streaming?
UDP:
  Connectionless: datagrams are handed to IP without setting up channels or data paths (no handshake)
  Lightweight: minimal protocol mechanism
  Unreliable: messages can be lost without any notification
  Not ordered: messages may arrive in a different order than sent
  No congestion control: the application layer must handle congestion itself
  Datagrams: individual packets are checked at the receiver only
  Unidirectional
  Small packet header overhead: 8 bytes per datagram
TCP:
  Connection-oriented: handshaking is required before data flows
  Heavyweight: a 3-packet handshake is needed to set up the connection
  Reliable: delivery is guaranteed (lost segments are retransmitted)
  Ordered: message order is preserved; out-of-order data is buffered
  Congestion control: TCP handles congestion control itself
  Full-duplex connections: two-way communication
  Large packet header overhead: 20 bytes in every segment

UDP is more suited to multimedia streaming because:
  UDP is stateless, so a server can support a large number of clients, and its transmission delay is lower. By design, a UDP stream is unidirectional.

  Streaming can tolerate a small amount of packet loss, so reliable data transfer is not absolutely critical.
  Real-time applications react very poorly to TCP's congestion control, whose sudden rate reductions cause playback stalls.
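UDP's connectionless, no-handshake nature shows directly in the socket API: a sender can transmit immediately with sendto, no setup needed. A minimal loopback sketch (the payload and the OS-chosen port are illustrative):

```python
import socket

# Receiver: bind a UDP socket; the OS picks a free port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(2.0)
addr = recv_sock.getsockname()

# Sender: no connect(), no handshake; just fire the datagram and forget.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"frame-0001", addr)

data, _ = recv_sock.recvfrom(2048)
print(data)                  # -> b'frame-0001'; the 8-byte UDP header is all the overhead
send_sock.close()
recv_sock.close()
```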

 

10. Concisely describe the client-server and the peer-to-peer application architectures and highlight their differences.
Client-server:
There is an always-on host (the server)
Clients don't communicate directly with each other
The server has a fixed, well-known address (its IP address)
Clients can always reach the server at that address
Costly: the service provider pays for interconnection and bandwidth
Star topology
Applications: Web, FTP, email, Telnet
Peer-to-peer:
Minimal or no reliance on dedicated servers
Communication is between pairs of connected hosts (peers)
Peers are usually regular users' machines, not service-provider infrastructure
Self-scaling and cost-effective: no server infrastructure or server bandwidth to pay for
Generally less secure due to its highly distributed and open nature
Network performance depends on the number of participating peers
Applications: file sharing, peer-assisted download, Internet telephony

 

11. Describe the leaky bucket mechanism. Why is this type of mechanism needed in priority queueing?

Leaky (token) bucket mechanism: a traffic-policing mechanism implemented at the edge of the network to control the characteristics of the injected traffic. It forces a stream to stay within rate limits, controlling the injection rate (average rate, peak rate and burst size) into the network.
A bucket can hold up to B tokens, and tokens are generated at rate R. If the bucket is already full, newly generated tokens are discarded and the bucket stays at B tokens.
Each packet entering the network must consume 1 token; otherwise it has to wait for a token to become available. The consumed token is removed from the bucket.
Because at most B tokens can accumulate, the maximum burst size is B, and the long-term average rate is bounded by the token rate R. The maximum number of packets that can enter the network in any interval T is RT + B.
Priority queueing: packets are classified according to explicit marking, and each priority class has its own queue. The scheduler always transmits from the highest-priority non-empty class; all other traffic is handled only when the higher-priority queues are empty.
The leaky bucket mechanism is needed in priority queueing to prevent abuse of the priority classes: without policing, prioritized traffic could consume all the bandwidth and leave nothing for the others.
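A minimal token-bucket policer matching the description above (the rate, depth and tick granularity are illustrative; one tick stands for one time unit):

```python
class TokenBucket:
    """Token-bucket policer sketch: tokens accumulate at rate R up to
    depth B; a packet may enter the network only by consuming a token."""

    def __init__(self, rate_r, depth_b):
        self.rate = rate_r
        self.depth = depth_b
        self.tokens = depth_b            # the bucket starts full

    def tick(self):
        # One time unit passes: R new tokens; extras beyond B are discarded.
        self.tokens = min(self.depth, self.tokens + self.rate)

    def admit(self):
        if self.tokens >= 1:
            self.tokens -= 1             # packet enters, token removed
            return True
        return False                     # packet must wait for a token

bucket = TokenBucket(rate_r=2, depth_b=5)
burst = sum(bucket.admit() for _ in range(10))
bucket.tick()
after_tick = sum(bucket.admit() for _ in range(10))
print(burst, after_tick)   # -> 5 2: burst bounded by B, rate bounded by R
```

Over any T ticks the admitted count never exceeds RT + B, which is exactly the bound stated above.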

 

12. Briefly describe the performance requirements of real-time multimedia applications. Why is Forward Error Correction preferable to packet retransmission?

Performance requirements: high bit rate (bandwidth) and small jitter, but tolerance of a moderate fixed delay. Content can be compressed to trade quality for bandwidth.

FEC increases the effective system throughput, even with the extra check bits added to the data bits, by eliminating the need to retransmit data corrupted by random noise. Retransmission round-trips are unpredictable and introduce exactly the jitter that real-time playback cannot tolerate.
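The simplest FEC scheme is a single XOR parity packet per group of equal-length packets: any one lost packet can be rebuilt at the receiver with no retransmission round-trip. A sketch (the packet contents are illustrative):

```python
from functools import reduce

def xor_parity(packets):
    """XOR all equal-length packets together, byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

group = [b"AAAA", b"BBBB", b"CCCC"]      # a group of equal-length packets
parity = xor_parity(group)               # transmitted alongside the group

# Suppose the second packet is lost in transit: XOR-ing the survivors
# with the parity packet reconstructs it at the receiver.
received = [group[0], group[2], parity]
recovered = xor_parity(received)
print(recovered)                         # -> b'BBBB'
```

The cost is one extra packet per group of bandwidth; the gain is that recovery happens immediately at the receiver, with no retransmission delay.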

 

posted @ 2018-10-31 05:36  森淼clover