A Measurement Study of Peer-to-Peer File Sharing Systems[2]


2. METHODOLOGY

The methodology behind our measurements is quite simple. For each of the Napster and Gnutella systems, we proceeded in two steps. First, we periodically crawled each system in order to gather instantaneous snapshots of large subsets of the systems' user population. The information gathered in these snapshots include the IP address and port number of the users' client software, as well as some information about the users as reported by their software. Second, immediately after gathering a snapshot, we actively probed the users in the snapshot over a period of several days to directly measure various properties about them, such as their bottleneck bandwidth.

In this section of the paper, we first give a brief overview of the architectures of Napster and Gnutella. Following this, we then describe the software infrastructure that we built to gather our measurements, including the Napster crawler, the Gnutella crawler, and the active measurement tools used to probe the users discovered.

Methodology:
Quite simple — each system is handled in two steps. First, periodically crawl each system to capture snapshots of large subsets of its user population, recording each client's IP address and port number along with other information the client software reports. Second, actively probe the users found in step one over a period of several days to measure properties such as their bottleneck bandwidth.

2.1 The Napster and Gnutella Architectures

Both Napster and Gnutella have similar goals: to facilitate the location and exchange of files (typically images, audio, or video) amongst a large group of independent users connected through the Internet. In these systems, files are stored on the computers of the individual users or peers, and exchanged through a direct connection between the downloading and uploading peers, over an HTTP-style protocol. All peers in this system are symmetric: they all have the ability to function both as a client and a server. This symmetry distinguishes peer-to-peer systems from many conventional distributed system architectures. Though the process of exchanging files is similar in both systems, Napster and Gnutella differ substantially in how peers locate files (Figure 1).

Both systems share the same goal: letting a large group of independent users exchange files freely over the Internet, using an HTTP-style protocol.
Peers in a peer-to-peer system are symmetric (equal): each can both download and upload, acting as client and server alike. This symmetry is the key difference between peer-to-peer systems and many conventional distributed architectures.
The two systems exchange files in a similar way, but differ in how they locate files (Figure 1):


Figure 1: File location in Napster and Gnutella

In Napster, a large cluster of dedicated central servers maintain an index of the files that are currently being shared by active peers. Each peer maintains a connection to one of the central servers, through which the file location queries are sent. The servers then cooperate to process the query and return a list of matching files and locations. On receiving the results, the peer may choose to initiate a file exchange directly from another peer. In addition to maintaining an index of shared files, the centralized servers also monitor the state of each peer in the system, keeping track of metadata such as the peers' reported connection bandwidth and the duration that the peer has remained connected to the system. This metadata is returned with the results of a query, so that the initiating peer has some information to distinguish possible download sites.

Napster uses a cluster of central servers to maintain an index of the files shared by currently online users. Each peer keeps a connection to one central server, over which it sends its file-location queries; the servers cooperate to process each query and return a list of matching files and their locations. On receiving the results, the peer may start a file transfer directly with another peer. Besides the shared-file index, the central servers also monitor the state of every peer in the system, tracking self-reported metadata such as connection bandwidth and how long the peer has stayed connected. This metadata is returned along with query results, so the querying peer has some basis for choosing among candidate download sites.

There are no centralized servers in Gnutella, however. Instead, Gnutella peers form an overlay network by forging point-to-point connections with a set of neighbors. To locate a file, a peer initiates a controlled flood of the network by sending a query packet to all of its neighbors. Upon receiving a query packet, a peer checks if any locally stored files match the query. If so, the peer sends a query response packet back towards the query originator. Whether or not a file match is found, the peer continues to flood the query through the overlay.

Gnutella has no central servers. All peers form an overlay network and interact point-to-point. Files are located by a controlled flood of the network: on receiving a query, a peer checks whether it stores a matching file and, if so, sends a query response back; whether or not there is a match, it continues to forward the query through the overlay.
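The controlled flood can be sketched as a TTL-bounded breadth-first traversal. This is an illustrative sketch over a simple adjacency-list overlay; real Gnutella suppresses duplicate messages by message GUID rather than the global visited-set used here:

```python
from collections import deque

def flood_query(overlay, origin, query, ttl, files):
    """TTL-bounded flood over an adjacency-list overlay. Returns the set of
    peers that would answer the query. Illustrative only: real Gnutella
    suppresses duplicate messages by GUID, not with a global visited-set."""
    responders = set()
    seen = {origin}
    frontier = deque([(origin, ttl)])
    while frontier:
        peer, remaining = frontier.popleft()
        if query in files.get(peer, set()):
            responders.add(peer)          # would send a query response back
        if remaining == 0:
            continue                      # TTL exhausted: stop forwarding
        for neighbor in overlay.get(peer, []):
            if neighbor not in seen:      # flood continues whether or not a match was found
                seen.add(neighbor)
                frontier.append((neighbor, remaining - 1))
    return responders
```

With a chain overlay A-B-C-D and the file stored at C and D, a flood from A with TTL 2 reaches only C; TTL 3 reaches both — showing how the TTL bounds the flood's scope.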

To help maintain the overlay as the users enter and leave the system, the Gnutella protocol includes ping and pong messages that help peers to discover other nodes. Pings and pongs behave similarly to query/query-response packets: any peer that sees a ping message sends a pong back towards the originator, and forwards the ping onwards to its own set of neighbors. Ping and query packets thus flood through the network; the scope of flooding is controlled with a time-to-live (TTL) field that is decremented on each hop. Peers occasionally forge new neighbor connections with other peers discovered through the ping/pong mechanism. Note that it is possible to have several disjoint Gnutella overlays simultaneously coexisting in the Internet; this contrasts with Napster, in which peers are always connected to the same cluster of central servers.

The Gnutella protocol includes ping and pong messages that help peers discover other nodes and maintain the overlay as users come and go.

2.2 Crawling the Peer-to-Peer Systems

We now describe the design and implementation of our Napster and Gnutella crawlers.

2.2.1 The Napster Crawler

Because we did not have direct access to indexes maintained by the central Napster servers, the only way we could discover the set of peers participating in the system at any time was by issuing queries for files, and keeping a list of peers referenced in the queries' responses. To discover the largest possible set of peers, we issued queries with the names of popular song artists drawn from a long list downloaded from the web.

We could not access the Napster servers' indexes directly, so we discovered the system's users by issuing queries and collecting the peers referenced in the responses. To reach as many users as possible, we downloaded a long list of popular song artists from the web and issued queries for their files.
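This discovery-by-querying loop reduces to a small sketch. The `search` callable below is a hypothetical stand-in for the Napster client protocol (not a real API); it returns `(peer_address, filename)` result tuples for one query string:

```python
def crawl_napster(search, artists):
    """Collect peers by querying for popular artists and recording every
    peer referenced in the responses. `search(query)` is a hypothetical
    stand-in for the Napster client call."""
    peers = set()
    for artist in artists:
        for peer_addr, _filename in search(artist):
            peers.add(peer_addr)     # the same peer may answer many queries
    return peers
```

A quick check with a faked-up index: peers sharing nothing on the artist list are never seen — exactly the sampling bias the paper discusses below.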

The Napster server cluster consists of approximately 160 servers; each peer establishes a connection with only one server. When a peer issues a query, the server the peer is connected to first reports files shared by ``local users'' on the same server, and later reports matching files shared by ``remote users'' on other servers in the cluster. For each crawl, we established a large number of connections to a single server, and issued many queries in parallel; this reduced the amount of time taken to gather data to 3-4 minutes per crawl, giving us a nearly instantaneous snapshot of peers connected to that server. For each peer that we discovered during the crawl, we then queried the Napster server to gather the following metadata: (1) the bandwidth of the peer's connection as reported by the peer herself, (2) the number of files currently being shared by the peer, (3) the current number of uploads and the number of downloads in progress by the peer, (4) the names and sizes of all the files being shared by the peer, and (5) the IP address of the peer.

The Napster server cluster contains roughly 160 servers, and each peer connects to exactly one of them.
When a peer issues a query, the server it is connected to first returns files shared by "local users" on that same server, then matching files shared by "remote users" on the other servers.
We opened a large number of connections to a single server and issued queries in parallel; this cut each crawl down to 3-4 minutes, giving a near-instantaneous snapshot of the peers on that server.
For each peer discovered, we then queried the Napster server for the following metadata:
(1) the connection bandwidth the peer reports about itself;
(2) the number of files the peer is currently sharing;
(3) the peer's current number of uploads and downloads in progress;
(4) the names and sizes of all files the peer shares;
(5) the peer's IP address.
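The five metadata items map naturally onto a small record type. A minimal sketch — the field names are ours, not part of the Napster protocol:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class NapsterPeerInfo:
    """Per-peer metadata gathered from the server (items 1-5 above).
    Field names are illustrative, not from the Napster protocol."""
    ip: str                                 # (5) the peer's IP address
    reported_bandwidth: Optional[str]       # (1) self-reported, e.g. "Cable"; may be absent
    num_shared_files: int                   # (2)
    uploads_in_progress: int                # (3)
    downloads_in_progress: int              # (3)
    shared_files: List[Tuple[str, int]] = field(default_factory=list)  # (4) (name, size in bytes)
```

Note that the bandwidth field is optional: as Section 2.3.3 observes, a substantial fraction of peers never report it.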

To get an estimate of the fraction of the total user population we captured, we separated the local and remote peers returned in our queries' responses, and compared them to statistics periodically broadcast by the particular Napster server that we queried. From these statistics, we verified that each crawl typically captured between 40% and 60% of the local peers on the crawled server. Furthermore, this 40-60% of the peers that we captured contributed between 80-95% of the total (local) files reported to the server. Thus, we feel that our crawler captured a representative and significant fraction of the set of peers.

To estimate what fraction of the total population we captured, we separated the "local" and "remote" users in the responses and compared them with the statistics the crawled server periodically broadcasts. These statistics show that each crawl typically captured 40% to 60% of that server's local peers, and that those peers contributed roughly 80% to 95% of the files on the server. We therefore consider the captured set a significant and representative sample.

Our crawler did not capture any peers that do not share any of the popular content in our queries. This introduces a bias in our results, particularly in our measurements that report the number of files being shared by users. However, the statistics reported by the Napster server revealed that the distributions of number of uploads, number of downloads, number of files shared, and bandwidths reported for all remote users were quite similar to those that we observed from our captured local users.

Users sharing none of the files on our query list can never be captured, which biases our results — particularly the counts of files shared per user.

2.2.2 The Gnutella Crawler

The goal of our Gnutella crawler is the same as our Napster crawler: to gather nearly instantaneous snapshots of a significant subset of the Gnutella population, as well as metadata about peers in the captured subset as reported by the Gnutella system itself. Our crawler exploits the ping/pong messages in the protocol to discover hosts. First, the crawler connects to several well-known, popular peers (such as gnutellahosts.com or router.limewire.com). Then, it begins an iterative process of sending ping messages with large TTLs to known peers, adding newly discovered peers to its list of known peers based on the contents of received pong messages. In addition to the IP address of a peer, each pong message contains metadata about the peer, including the number and total size of files being shared.

The Gnutella crawler first connects to several well-known, popular peers (such as gnutellahosts.com). It then iterates: it sends ping messages with large TTLs to known peers, and adds every peer found in the returned pong messages to its list of known peers. Besides a peer's IP address, each pong carries metadata such as the number and total size of the files that peer shares.
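The iterative discovery process can be sketched as a fixed-point loop over the known-peer set. `send_ping(peer)` below is a hypothetical stand-in for the network exchange, returning the set of peer addresses carried by the pongs that come back:

```python
def crawl_gnutella(send_ping, seeds, rounds):
    """Iterative Gnutella host discovery: ping every known peer with a
    large TTL and add each address seen in the returned pongs.
    `send_ping(peer)` is a hypothetical stand-in for the network call."""
    known = set(seeds)
    for _ in range(rounds):        # the paper cut the crawl off after ~2 minutes
        discovered = set()
        for peer in known:
            discovered |= send_ping(peer)
        known |= discovered
    return known
```

Each round pushes discovery one step further out from the seeds, mirroring how the crawler's known-peer list grows over the two-minute run shown in Figure 2.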



Figure 2: Number of Gnutella hosts captured by our crawler over time

We allowed our crawler to continue iterating for approximately two minutes, after which it would typically gather between 8,000 and 10,000 unique peers (Figure 2). According to measurements reported by Clip2 [8], this corresponds to at least 25% to 50% of the total population of peers in the system at any time. After two minutes, we would terminate the crawler, save the crawling results to a file and begin another crawl iteration to gather our next snapshot of the Gnutella population.

Each crawl ran for about two minutes.

Unlike our Napster measurements, in which we were more likely to capture hosts sharing popular songs, we have no reason to suspect any bias in our measurements of the Gnutella user population. Furthermore, to ensure that the crawling process does not alter the behavior of the system in any way, our crawler neither forwarded any Gnutella protocol messages nor answered any queries.

The Gnutella crawler introduces no such bias.

2.2.3 Crawler Statistics

Both the Napster and Gnutella crawlers were written in Java, and ran using the IBM Java 1.18 JRE on Linux 2.2.16. The crawlers ran in parallel on a small number of dual-processor Pentium III 700 MHz computers with 2GB RAM, and four 40GB SCSI disks. Our Napster trace captured four days of activity, from Sunday May 6th, 2001 through Wednesday May 9th, 2001. We recorded a total of 509,538 Napster peers on 546,401 unique IP addresses. Our Gnutella trace spanned eight days (Sunday May 6th, 2001 through Monday May 14th, 2001) and captured 1,239,487 Gnutella peers on 1,180,205 unique IP-addresses.

In Napster, we recorded 509,538 peers on 546,401 unique IP addresses;
in Gnutella, 1,239,487 peers on 1,180,205 unique IP addresses.

2.3 Directly Measured Peer Characteristics

For each gathered peer population snapshot, we directly measured additional properties of the peers. Our goal was to capture data that would enable us to reason about the fundamental characteristics of the users (both as individuals and as a population) participating in any peer-to-peer file sharing system. The data collected includes the distributions of bottleneck bandwidths and latencies between peers and our measurement infrastructure, the number of shared files per peer, the distribution of peers across DNS domains, and the ``lifetime'' of the peers in the system, i.e., how frequently peers connect to the systems, and how long they remain connected.

Our goal is to characterize the users of each system: their bottleneck bandwidths, the latency between peers and our measurement infrastructure, the number of files shared per peer, the distribution of peers across DNS domains, and peer "lifetimes" — how often peers connect to the systems and how long they stay connected.


2.3.1 Latency Measurements

Given the list of peers' IP-addresses obtained by the crawlers, we measured the round-trip latency between the peers and our measurement machines. For this, we used a simple tool that measures the RTT of a 40-byte TCP packet exchanged between a peer and our measurement host. Our interest in latencies of the peers is due to the well known feature of TCP congestion control which discriminates against flows with large round-trip times. This, coupled with the fact that the average size of files exchanged is on the order of 2-4 MB, makes latency a very important consideration when selecting amongst multiple peers sharing the same file. Although we realize that the latency to any particular peer is dependent on the location of the host from which it is measured, we feel the distribution of latencies over the entire population of peers from a given host might be similar (but not identical) from different hosts, and hence, could be of interest.

Latency measurements:
We used a simple tool that times a 40-byte TCP packet exchanged between a peer and our measurement host. Latency matters because TCP congestion control discriminates against flows with large round-trip times; combined with typical exchanged-file sizes of 2-4 MB, this makes latency an important criterion when choosing among many peers sharing the same file. The latency to any given peer depends on where it is measured from, but the distribution of latencies over the whole peer population should look broadly similar (though not identical) from different measurement hosts.
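A portable approximation of the paper's 40-byte-packet RTT tool is to time a TCP handshake, since the SYN/SYN-ACK exchange costs one round trip. This is a sketch, not the paper's actual tool:

```python
import socket
import time

def tcp_rtt(host, port, timeout=2.0):
    """Approximate the round-trip time to a peer by timing a TCP handshake
    (one round trip for SYN/SYN-ACK). The paper's tool instead timed a
    40-byte TCP packet exchange. Returns seconds, or None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass                      # handshake done; no payload needed
    except OSError:
        return None                   # peer offline, refused, or filtered
    return time.monotonic() - start
```

Timing connect() slightly overstates the RTT (it includes local stack overhead), which is acceptable for a population-level distribution.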


2.3.2 Lifetime Measurements

To gather measurements of the lifetime characteristics of peers, we needed a tool that would periodically probe a large set of peers from both systems to detect when they were participating in the system. Every peer in both Napster and Gnutella connects to the system using a unique IP-address/port-number pair; to download a file, peers connect to each other using these pairs. There are therefore three possible states for any participating peer in either Napster or Gnutella:

  • offline: the peer is either not connected to the Internet or is not responding to TCP SYN packets because it is behind a firewall or NAT proxy.
  • inactive: the peer is connected to the Internet and is responding to TCP SYN packets, but it is disconnected from the peer-to-peer system and hence responds with TCP RST's.
  • active: the peer is actively participating in the peer-to-peer system, and is accepting incoming TCP connections.

We developed a simple tool (which we call LF) using Savage's ``Sting'' platform [9]. To detect the state of a host, LF sends a TCP SYN-packet to the peer and then waits for up to twenty seconds to receive any packets from it. If no packet arrives, we mark the peer as offline. If we receive a TCP RST packet, we mark the peer as inactive. If we receive a TCP SYN/ACK, we label the host as active, and send back a RST packet to terminate the connection. We chose to manipulate TCP packets directly rather than use OS socket calls to achieve greater scalability; this enabled us to monitor the lifetimes of tens of thousands of hosts per workstation. Because we identify a host by its IP address, one limitation in the lifetime characterization of peers is our inability to distinguish hosts sharing dynamic IP addresses (e.g. DHCP).

Lifetime measurements:
To collect lifetime data, we needed a tool that periodically probes a large set of peers in both systems. Every peer in Napster and Gnutella connects to the system using a unique IP-address/port pair, and peers connect to one another using those pairs to download files.
A participating peer can be in one of three states:

offline: the peer is not connected to the Internet, or does not answer TCP SYN packets because it sits behind a firewall or NAT proxy;
inactive: the peer is on the Internet and answers TCP SYN packets, but has left the peer-to-peer system and therefore responds with a TCP RST;
active: the peer is participating in the peer-to-peer system and accepts incoming TCP connections.

The LF tool sends a TCP SYN to the peer and waits up to twenty seconds for any packet back. No reply marks the peer offline; a TCP RST marks it inactive; a TCP SYN/ACK marks it active, after which we send back an RST to tear the connection down. Crafting TCP packets directly instead of using OS socket calls gave far better scalability — one workstation could track the lifetimes of tens of thousands of peers. Because we identify hosts by IP address, we cannot distinguish hosts using dynamically assigned addresses (e.g. DHCP).
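The same three-state classification can be sketched with an ordinary `connect()` call, which observes the same TCP responses LF does — just less scalably than LF's raw-packet approach:

```python
import socket

def peer_state(ip, port, timeout=20.0):
    """Classify a peer into the three states above. LF crafted raw TCP SYN
    packets for scalability; a plain connect() sees the same responses:
      no answer within the timeout  -> offline
      TCP RST (connection refused)  -> inactive
      TCP SYN/ACK (accepted)        -> active
    """
    try:
        sock = socket.create_connection((ip, port), timeout=timeout)
    except ConnectionRefusedError:       # host up, port closed -> RST
        return "inactive"
    except OSError:                      # timeout, unreachable, filtered
        return "offline"
    sock.close()                         # LF sends an RST here to tear down
    return "active"
```

Note the exception order matters: `ConnectionRefusedError` is a subclass of `OSError`, so it must be caught first to separate "inactive" from "offline".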


2.3.3 Bottleneck Bandwidth Measurements

Another characteristic of peers that we wanted to gather was the speed of their connections to the Internet. This is not a precisely defined concept: the rate at which content can be downloaded from a peer depends on the bottleneck bandwidth between the downloader and the peer, the available bandwidth along the path, and the latency between the peers.

Bottleneck bandwidth measurements:
"Connection speed" is not a precisely defined concept: the rate at which content can be downloaded from a peer depends on the bottleneck bandwidth between the downloader and the peer, the available bandwidth along the path, and the latency between the two peers.

The central Napster servers can provide the connection bandwidth of any peer as reported by the peer itself. However, as we will show later, a substantial percentage of the Napster peers (as high as 25%) choose not to report their bandwidths. Furthermore, there is a clear incentive for a peer to discourage other peers from downloading files by falsely reporting a low bandwidth. The same incentive to lie exists in Gnutella; in addition to this, in Gnutella, bandwidth is reported only as part of a successful response to a query, so peers that share no data or whose content does not match any queries never report their bandwidths.

In both Napster and Gnutella, some peers never have their bandwidth reported — and peers have an incentive to report it falsely low.

Because of this, we decided to actively probe the bandwidths of peers. There are two difficult problems with measuring the available bandwidth to and from a large number of hosts: first, available bandwidth can significantly fluctuate over short periods of time, and second, available bandwidth is determined by measuring the loss rate of an open TCP connection. Instead, we decided to use the bottleneck link bandwidth as a first-order approximation to the available bandwidth; because our workstations are connected by a gigabit link to the Abilene network, it is likely that the bottleneck link between our workstations and any peer in these systems is the last-hop link to the peer itself. This is particularly likely since, as we will show later, most peers are connected to the system using low-speed modems or broadband connections such as cable modems or DSL. Thus, if we could characterize the bottleneck bandwidth between our measurement infrastructure and the peers, we would have a fairly accurate upper bound on the rate at which information could be downloaded from these peers.

We therefore probed peers' bandwidths actively. Measuring available bandwidth to many hosts poses two problems:
(1) available bandwidth fluctuates over short time scales;
(2) measuring it requires observing the loss rate of an open TCP connection.
So we used bottleneck link bandwidth as a first-order approximation of available bandwidth. Our workstations reach the Abilene network over a gigabit link, so the bottleneck between them and any peer is almost certainly the peer's own last-hop link.

Bottleneck link bandwidth between two different hosts equals the capacity of the slowest hop along the path between the two hosts. Thus, by definition, bottleneck link bandwidth is a physical property of the network that remains constant over time for an individual path.

The bottleneck link bandwidth between two hosts equals the capacity of the slowest hop on the path between them; by definition, it is therefore a physical property of the network that stays constant over time for a given path.

Although various bottleneck link bandwidth measurement tools are available [10,11,12,13], for a number of reasons that are beyond the scope of this paper, all of these tools were unsatisfactory for our purposes. Hence, we developed our own tool (called SProbe) [14] based on the same underlying packet-pair dispersion technique as some of the above-mentioned tools. Unlike other tools, however, SProbe uses tricks inspired by Sting [9] to actively measure both upstream and downstream bottleneck bandwidths using only a few TCP packets. Our tool also proactively detects cross-traffic that interferes with the accuracy of the packet-pair technique, improving the overall accuracy of our measurements (for more information about SProbe, refer to http://sprobe.cs.washington.edu). By comparing the reported bandwidths of the peers with our measured bandwidths, we were able to verify the consistency and accuracy of SProbe, as we will demonstrate in Section 3.5.

The SProbe tool.
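The packet-pair dispersion technique SProbe builds on reduces to one formula: two packets sent back-to-back leave the bottleneck link spaced apart by the time that link needs to transmit one packet, so the inter-arrival gap reveals the link's capacity. A minimal sketch of the arithmetic (the real tool must also detect and discard pairs disturbed by cross-traffic):

```python
def packet_pair_estimate(packet_size_bytes, dispersion_seconds):
    """Packet-pair dispersion: bottleneck capacity = packet size / gap.
    Returns an estimate in bits per second."""
    return packet_size_bytes * 8 / dispersion_seconds

def path_bottleneck(hop_capacities_bps):
    """By definition, a path's bottleneck is the capacity of its slowest hop."""
    return min(hop_capacities_bps)
```

For example, 1500-byte packets arriving 12 ms apart imply a bottleneck of 1500 × 8 / 0.012 = 1 Mbit/s — a typical DSL-class last hop.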

2.3.4 A Summary of the Active Measurements

For the lifetime measurements, we monitored 17,125 Gnutella peers over a period of 60 hours and 7,000 Napster peers over a period of 25 hours. For each Gnutella peer, we determined its status (offline, inactive or active) once every seven minutes, and for each Napster peer, once every two minutes.

For Gnutella, we attempted to measure bottleneck bandwidths and latencies to a random set of 595,974 unique peers (i.e., unique IP-address/port-number pairs). We were successful in gathering downstream bottleneck bandwidth measurements to 223,552 of these peers, the remainder of which were either offline or had significant cross-traffic. We measured upstream bottleneck bandwidths from 16,252 of the peers (for various reasons, upstream bottleneck bandwidth measurements from hosts are much harder to obtain than downstream measurements to hosts [14]). Finally, we were able to measure latency to 339,502 peers. For Napster, we attempted to measure downstream bottleneck bandwidths to 4,079 unique peers. We successfully measured 2,049 peers.

In several cases, our active measurements were regarded as intrusive by the monitored systems. Unfortunately, e-mail complaints received by the computing staff at the University of Washington forced us to terminate our crawls prematurely, hence the lower number of monitored Napster hosts. Nevertheless, we captured enough data points to believe that our results and conclusions are representative of the entire Napster population.

posted on 2006-04-11 09:45 by hunter_gio
