A Lossless Compression Method for Internet Packet Headers


Raimir Holanda and Jorge García
Computer Architecture Dept., Technical University of Catalonia, Barcelona, Spain
Email: {rholanda,jorge}@ac.upc.edu

Abstract— A critical requirement for the performance evaluation and design of network elements is the availability of realistic traffic traces. There are, however, several reasons that make it difficult to have access to them. Firstly, Internet providers are usually reluctant to make real traces public; secondly, hardware for collecting traces at high speed is usually expensive; and finally, with the increase of link rates, the required storage for packet traces of meaningful duration becomes too large. In this paper we address the problem of compressing these potentially huge packet traces. We propose a novel packet header compression method, focused not on the problem of reducing transmission bandwidth or latency, but on the problem of saving storage space. As far as we know, ours is the first method specifically oriented to this goal. With our proposed method, storage requirements for .tsh packet headers are reduced to 16% of their original size. The compression proposed here is more efficient than any other existing method and is simple to implement; other known methods have their compression ratios bounded at 50% and 32%.

I. INTRODUCTION

A critical requirement for the performance evaluation and design of network elements is the availability of realistic traffic traces. Network traffic traces can be obtained by several methods. A popular scheme is to collect real traces from routers for extended periods of time [1]. These traces represent the mix of traffic flowing through a router and are collected on one of the input or output links. There are, however, several reasons that make it difficult to have access to them. Firstly, Internet providers are usually reluctant to make public real traces captured in their networks. Moreover, when these traffic traces are made public [2], they are delivered after some transformations, such as sanitization [3], which modify some basic semantic properties (such as IP address structure). Secondly, other problems arise due to the increasing speed of Internet routers. Hardware for collecting traces at high speed (e.g. at link rates of 2.5 Gbps, 10 Gbps or even 40 Gbps) is usually expensive. Moreover, with the increase of link rates, the required storage for packet traces of meaningful duration becomes too large. As an example, let us consider the problem of storing an 11-day (one hour per day) trace taken from a link at 10 Gbps with 20% link utilization. Storing the full content of the traffic would require 900 Gbytes of storage per day. If we only store the 40-byte TCP/IP headers, together with timing information, we would require a storage capacity of around 45 Gbytes per day and 495 Gbytes over the 11 days (assuming a mean packet length of 920 bytes). Similar storage requirements are found, for instance, in [2]; in that case the stored traces were collected with a NLANR PMA OC192MON located on the SDSC TeraGrid Cluster.

In this paper we address the problem of compressing these potentially huge packet traces. A first approach to cope with the huge storage needs is to use a standard compression method. Content compression can be as simple as removing all extra space characters, inserting a single repeat character to indicate a string of repeated characters, and substituting smaller bit strings for frequently occurring characters. The compression is performed by algorithms which determine how to compress and decompress. Some of the most popular compression algorithms are Huffman coding [4], LZ77 [5], and deflate [6]; these specifications define lossless compressed data formats. From our measurements, using these methods on files containing packet headers, we can expect a compression ratio of around 50%.

The previous methods do not take into account the specific properties of the data to be compressed. There are compression techniques developed for the specific case of packet headers. As far as we know, all of them have been developed for saving transmission bandwidth on channels such as wireless and slow point-to-point links. The original scheme proposed for TCP/IP header compression, in the context of transmitting Internet traffic through low-speed serial links, is Van Jacobson's header compression algorithm [7]. The method is based on the fact that in TCP connections, the content of many TCP/IP header fields of consecutive packets of a flow can usually be predicted. As we will show, the achievable compression rate using this method is around 32%. Since then, specifications for the compression of a number of other protocols have been written. Degermark proposed additional compression algorithms for UDP/IP and TCP/IPv6 [8]. Detailed specifications for compressing these protocols, as well as others such as RTP, were described in subsequent RFCs [9] and [10]. Each of these descriptions specifies a solution for a given protocol. For multimedia services in wireless environments, ROHC (Robust Header Compression) was introduced; ROHC was standardized in [11] and is an integral part of the 3GPP UMTS specification [12]. Also for wireless environments, another scheme that makes use of the similarity in consecutive


flows from or to a given mobile terminal is described in [13].

In this paper we propose a novel packet header compression method, focused not on the problem of reducing transmission bandwidth or latency, but on the problem of saving storage space. As far as we know, ours is the first method specifically oriented to this goal. Note that we do not have some of the limitations of the previously mentioned methods; for instance, we can know all the packets in a flow before compressing them, and the compression rate that we achieve is around 16%. The method presented here is limited to Web traffic, assuming the most common case of storing TSH packet header files [14]. We think, however, that it can be extended to other traffic types which are becoming increasingly important in Internet links, such as P2P traffic. The method is lossless in the sense that for some fields the decompression algorithm regenerates exactly the original value, while for others, those whose initial values are random (for instance the initial TCP sequence number), the values are shifted, as if the trace had been captured at another execution time. Evidently, these changes do not affect, in most cases, the analysis taken from the decompressed file.
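As a quick sanity check of the storage figures above, the following sketch redoes the arithmetic. The link rate, utilization, capture time, mean packet length, and record size are the values quoted in the introduction; the script itself is ours, not part of the paper.

```python
# Back-of-the-envelope check of the storage example in the introduction.
LINK_RATE_BPS = 10e9     # 10 Gbps link
UTILIZATION = 0.20       # 20% average link utilization
CAPTURE_SECONDS = 3600   # one hour of capture per day
MEAN_PKT_BYTES = 920     # mean packet length assumed in the text
RECORD_BYTES = 44        # .tsh record: 40-byte TCP/IP header + timing info
DAYS = 11

payload_per_day = LINK_RATE_BPS * UTILIZATION / 8 * CAPTURE_SECONDS
packets_per_day = payload_per_day / MEAN_PKT_BYTES
headers_per_day = packets_per_day * RECORD_BYTES

print(f"full traffic per day : {payload_per_day / 1e9:.0f} GB")  # ~900 GB
print(f"headers only per day : {headers_per_day / 1e9:.0f} GB")  # ~43 GB;
#   the paper rounds this to ~45 GB/day and ~495 GB over 11 days
print(f"headers over {DAYS} days : {DAYS * headers_per_day / 1e9:.0f} GB")
```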




Fig. 1: TSH header data format (timestamps in seconds and microseconds, interface identifier, the IP header fields, and the TCP header fields up to the window field)

II. HEADER FIELD CLASSIFICATION

Throughout this paper we assume a TSH (Time Sequence Header) packet header file (see Figure 1). In .tsh files the header size is 44 bytes: 8 bytes of timestamp and interface identifier, 20 bytes of IP header, and 16 bytes of TCP header. No IP or TCP options are included, and the packet payload is not stored. In the following we classify the header fields depending on how these fields change for packets belonging to the same flow. Let F(i) be a header field of the i-th packet of a flow. We also define ∆F(i) = F(i) − F(1), where the minus operator refers to an arithmetic operation between fields from different packets. For the first packet of a flow, F(1) can be classified as F(1)-random, F(1)-predictable, or F(1)-not predictable:

• F(1)-random fields: fields whose initial values could or should be chosen at random: identification, source port, sequence number, and acknowledgment number. The identification field is primarily used for uniquely identifying fragments of an original IP datagram and can have 65,536 different values; many operating systems increase the IP identification field value by 1 from one packet to the next. For sequence numbers we must guarantee that they are not reused before packets die out in the network [15], and a similar approach is adopted for acknowledgment numbers. The last field in this group is the source port of Web clients, to which we assign a random value between 1024 and 65536.

• F(1)-predictable fields: fields whose value is usually known or at least predictable: interface, version, IHL, type of service, flags, fragment offset, protocol, destination port on packets flowing to Web servers, data offset, reserved, and control bits.

• F(1)-not predictable fields: fields whose value cannot be predicted and has a specific meaning: timestamp, TTL, header checksum, total length, source address, destination address, and window. For instance, it is impossible to guess, for each flow, the value of the timestamp field of the first packet. The TTL field is modified during Internet header processing depending on the number of hops previously visited, and its value can vary broadly across flows (although for packets belonging to the same flow, the TTL value is usually the same). The total length carried by each packet, as well as the window field, also shows a large variation. Finally, the source and destination addresses represent a set of addresses that is impossible to know in advance.



Moreover, according to the ∆F(i) behavior, the fields of the i-th packet of a flow are classified as ∆F(i) = 0, ∆F(i)-predictable, or ∆F(i)-not predictable:

• ∆F(i) = 0 fields: header fields whose values are likely to stay constant over the life of a connection: interface, version, type of service, protocol, source address, destination address, source port, and destination port.

• ∆F(i)-predictable fields: fields whose ∆F(i) values are predictable. They can be obtained from a set of precomputed templates, can be calculated from information stored in another field, or follow sequential increments: IHL, identification, flags, fragment offset, time to live, sequence number, data offset, reserved, and control bits. For instance, the sequence number can be deduced from the total length field, and the identification is a sequential number. The mentioned templates are modeled using clustering techniques; in the next section we describe the flow cluster generation process in more detail.

• ∆F(i)-not predictable fields: fields that are likely to change over the life of the conversation and, furthermore, are impossible to calculate: timestamp, total length, header checksum, acknowledgment number, and window.
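To summarize the two classifications, the sketch below records them as plain lookup tables and computes ∆F(i) for a single field. It is an illustration of the taxonomy above, not code from the paper, and the short field names are our own.

```python
# The F(1) and Delta-F(i) classifications from Section II as lookup tables.
# Note: for Web traffic the client-side port is random while the server-side
# port is predictable; "source_port"/"dest_port" below simplify that detail.
F1_CLASS = {
    "random":          {"identification", "source_port",
                        "sequence_number", "acknowledgment_number"},
    "predictable":     {"interface", "version", "ihl", "type_of_service",
                        "flags", "fragment_offset", "protocol", "dest_port",
                        "data_offset", "reserved", "control_bits"},
    "not_predictable": {"timestamp", "ttl", "header_checksum", "total_length",
                        "source_address", "dest_address", "window"},
}

DELTA_CLASS = {
    "zero":            {"interface", "version", "type_of_service", "protocol",
                        "source_address", "dest_address",
                        "source_port", "dest_port"},
    "predictable":     {"ihl", "identification", "flags", "fragment_offset",
                        "ttl", "sequence_number", "data_offset", "reserved",
                        "control_bits"},
    "not_predictable": {"timestamp", "total_length", "header_checksum",
                        "acknowledgment_number", "window"},
}

def delta_f(values):
    """Delta-F(i) = F(i) - F(1), computed for one field over a whole flow."""
    return [v - values[0] for v in values]

# Example: identification typically increases by 1 per packet, so its deltas
# are predictable: [0, 1, 2, 3].
print(delta_f([4711, 4712, 4713, 4714]))
```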

Taking into account the joint behavior of F(1) and ∆F(i), we have created four categories of fields. In the first category are placed the fields whose F(1) values are predictable and whose ∆F(i) values are constant or predictable throughout a flow:

((F(1)-Not Random) AND (F(1)-predictable)) AND ((∆F(i) == 0) OR (∆F(i)-predictable))

The fields that satisfy these constraints are: interface, version, IHL, type of service, flags, fragment offset, protocol, destination port for Web servers, data offset, reserved, and control bits. This set of fields shows a high similarity within consecutive packets belonging to the same flow, and in particular between m-packet flows (flows with m packets).

In the second category are included the fields whose F(1) values are not predictable and whose ∆F(i) values are constant or predictable:

((F(1)-Not Random) AND (F(1)-Not predictable)) AND ((∆F(i) == 0) OR (∆F(i)-predictable))

According to these constraints, we have the following fields: TTL, source address, and destination address. For these fields, storage needs are restricted to the first packet of each flow.

The third category incorporates the fields that are hard to predict or calculate and to which we cannot assign random values:

(∆F(i)-Not predictable)

In this case, storage needs extend over all packets. These fields are: timestamp, total length, header checksum, acknowledgment number, and window. Finally, the last category groups the fields whose initial value F(1) is random and whose increments ∆F(i) can be calculated: identification, source port for Web clients, and sequence number.

III. FLOW CLUSTERING

In [16] a novel flow characterization was proposed that incorporates a specific set of packet characteristics, such as TCP structures, inter-packet time, and payload size. This flow characterization was used in the context of a lossy compression method for packet header traces [17]. To provide lossless compression, an extended approach must be adopted. In this section we summarize the main ideas behind the flow characterization and clustering that we apply in the lossless compression method proposed in this paper.

We start our flow characterization by defining a packet flow as a sequence of packets in which each packet has the same value for the 5-tuple of source and destination IP address, protocol number, and source and destination port number. Let P_i^m be the packet header of the i-th packet of a flow consisting of m packets, and let P_i^m(j) be a selected header field of P_i^m. For each field P_i^m(j), a function χ_j performs a mapping

into an integer value F_i^m(j):

F_i^m(j) = χ_j(P_i^m(j))    (1)

For each packet, let

F_i^m = (F_i^m(1), F_i^m(2), . . .)    (2)

denote a vector of integers, where we include the selected fields. For the complete flow we can define:

P^m = (P_1^m, P_2^m, . . . , P_m^m)    (3)

and

F^m = (F_1^m, F_2^m, . . . , F_m^m).    (4)
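The following is a minimal sketch of equations (1)–(4), assuming a packet header is available as a dict of field values. The identity encoding used for χ_j and the four-field subset are illustrative placeholders, since the paper does not spell out the concrete mappings.

```python
# Sketch of equations (1)-(4): build the numerical flow vector F^m.
SELECTED_FIELDS = ["total_length", "flags", "ttl", "window"]  # illustrative
                                                              # subset of the
                                                              # 12 fields

def chi(field, value):
    """Chi_j of eq. (1); the paper's concrete encodings are not given,
    so an identity mapping to int is assumed here."""
    return int(value)

def packet_vector(pkt):
    """F_i^m of eq. (2): one integer per selected field of packet i."""
    return tuple(chi(f, pkt[f]) for f in SELECTED_FIELDS)

def flow_vector(packets):
    """F^m of eq. (4): the sequence of F_i^m over all m packets of a flow."""
    return [packet_vector(p) for p in packets]

flow = [  # a toy two-packet flow (P^m of eq. (3))
    {"total_length": 40,   "flags": 2,  "ttl": 64, "window": 5840},
    {"total_length": 1500, "flags": 16, "ttl": 64, "window": 5840},
]
print(flow_vector(flow))  # [(40, 2, 64, 5840), (1500, 16, 64, 5840)]
```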

Note that the vector F^m can be viewed as a numerical representation of the m packet headers, since we substitute some selected packet header fields by integers. From the flow classification described in Section II, we have selected 12 fields in order to study their diversity among Web flows in Internet links; the shaded boxes in Figure 2 depict those selected fields. Using the flow characterization described above, in a high-speed link we can potentially find a large variety of Web flows. However, looking into the flows, we can see that they are not very different from each other. To study the variety among them, we have used an approach based on clustering, a classical technique used for workload characterization [18]. The basic idea of clustering is to partition the components into groups so that the members of a group are as similar as possible and different groups are as dissimilar as possible. From each cluster we generate a flow template.

The clustering methodology starts from a real trace, converting each flow into an F^m vector. Each new flow is compared against the previously generated templates; to do so, we calculate the Euclidean distance between them. In the case of lossless compression methods the maximum admissible distance is zero, but for lossy methods a small distance can be admitted. Whenever a match is not possible, a new template is generated as the center of a new cluster. We have applied our methodology to different available packet traces [2], [19]. We concluded that behind the great number of Web flows in a high-speed link, many of them have identical or very similar F^m vectors, and they can be grouped into few clusters.

IV. PACKET TRACE COMPRESSION

The main reason why header compression can be done at all is the fact that there is significant redundancy between header fields, both within consecutive packets belonging to the same flow and, in particular, between flows. The big gain of our proposed method comes from the observation that, for a set of selected header fields, the flows traveling on an Internet link are very similar. By utilizing a set of precomputed flow templates, together with the predictability of other fields, the header size can be significantly reduced. Hence, we have embarked upon the development of a new header compression scheme for packet header files that drastically reduces storage requirements. This section provides the details of how the

Fig. 2: Fields used for flow clustering (the shaded fields of the TSH header)

Fig. 4: Temporary data structure (a linked list of flows, Flow 1 . . . Flow n, each node holding the packet headers Pkt1, Pkt2, . . . of its flow)

method works, focusing on the fact that the decompressed header is functionally identical to the original header.

Our compression method uses the precomputed Flow Clustering dataset described in Section III as the main input to compress a TSH packet header file (Figure 3). It works by finding F^m vectors that match one of the templates, and it starts by looking into the 5-tuple of fields (source and destination IP address, source and destination port number, and protocol number) to identify each new connection. Whenever a packet belonging to a new flow is found, a new node is inserted at the end of a temporary data structure (Figure 4). This data structure is implemented as a linked list and stores the packet headers of n connections. When further packets belonging to the same flow are found, we store only a subset of their fields: timestamp, IHL, total length, flags, fragment offset, time to live, header checksum, acknowledgment number, data offset, control bits, and window. When a FIN or RST TCP flag appears in a packet, the flow status field is updated to indicate that this flow has been completed.
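A minimal sketch of this flow bookkeeping is shown below, assuming incoming packets are dicts of header fields. The linked list is approximated by a Python dict plus an insertion-order list, and all names are ours rather than the authors'.

```python
# Sketch of the compressor's temporary data structure (Section IV).
TCP_FIN, TCP_RST = 0x01, 0x04  # control-bit masks

STORED_FIELDS = ("timestamp", "ihl", "total_length", "flags",
                 "fragment_offset", "ttl", "header_checksum",
                 "acknowledgment_number", "data_offset",
                 "control_bits", "window")

flows = {}   # 5-tuple -> list of stored per-packet field subsets
status = {}  # 5-tuple -> True once FIN/RST has been seen
order = []   # insertion order: completed flows are drained from the head

def on_packet(pkt):
    key = (pkt["source_address"], pkt["dest_address"], pkt["protocol"],
           pkt["source_port"], pkt["dest_port"])
    if key not in flows:
        flows[key] = [dict(pkt)]    # first packet of a new flow: full header
        status[key] = False
        order.append(key)
    else:
        # Later packets keep only the fields a template cannot regenerate.
        flows[key].append({f: pkt[f] for f in STORED_FIELDS})
    if pkt["control_bits"] & (TCP_FIN | TCP_RST):
        status[key] = True          # flow completed: ready for template match
```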

Fig. 3: Compression model (the .tsh packet header file, the Flow Clustering dataset, and the Web Server Addresses dataset feed the Compressor, which maintains a temporary data structure and writes the compressed header file)

When the head of the linked list reaches a completed-flow status, the compressor algorithm examines the number of nodes inserted for this flow and searches the Flow Clustering dataset for an identical sequence of packet characteristics. Considering that we are implementing a lossless compressor, the maximum admissible inter-flow distance is zero. In case a match is not possible, a new record is inserted in this dataset; this new F^m vector constitutes a new template and the center of a new cluster.

After the template search, the compressor algorithm starts to write into the compressed header file. For many fields (see Figure 2), the storage is reduced to a template identifier, which is the most important realization of our proposed method. However, for the other fields, for which prediction is not possible, the carried information must be stored. Here it is important to consider that, for some of these fields, whose value is likely to stay constant over the life of a flow, storage is required only once per flow; for the remaining fields, storage is required for all packets.

Figure 5 shows the compressed data format for the set of fields whose values are stored once per flow. The first field is a flag identifying the direction of the flow: from or to a Web server (1 bit). The inter-flow time (second field) stores the elapsed time between two consecutive flows (15 bits). The initial window field stores the initial value assigned to the window field (16 bits). The fourth field is the Web client address (32 bits); in the case of packets flowing to Web servers, this field stores the source address, otherwise it stores the IP destination address. The next field stores an index into a Web server address dataset (16 bits). According to [20], the best-known characteristic of Web reference streams is their highly skewed popularity distribution; the practical implication of these distributions is that most references are concentrated on a small fraction of all the objects referenced. Based on this property, we have seen that the strategy of storing the Web server addresses separately increases the compression ratio. The sixth field stores an index to a specific template position in the template dataset (16 bits). Finally, the last field stores the TTL value of the first packet of each flow (8 bits). In total, we need 13 bytes per flow to store this set of fields.

Fig. 5: Flow compressed data format (C/S flag, inter-flow time, initial window, Web client address, index to Web server address, index to template dataset, TTL)

Fig. 6: Packet compressed data format (total length, checksum flag, acknowledgment number variation, window factor, inter-packet time)

For the set of fields whose values change within a flow, storage is required for all packets. This set contains the following fields: timestamp, total length, header checksum, acknowledgment number, and window. Figure 6 shows the compressed data format used to store these fields. The first field stores the total length (16 bits). The second is a checksum flag (1 bit) identifying whether the packet checksum is valid or not. The third field stores the acknowledgment number increment (15 bits); we need to store the acknowledgment number because, in the case of traces whose packets flow in only one direction of the link, we cannot calculate it. The next field stores the window increment factor (4 bits), and the last field stores the inter-packet time within a flow (20 bits). In total, we need 7 bytes for each packet.
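As an illustration of the two record layouts in Figures 5 and 6, the sketch below packs them with plain bit operations. The field widths are the ones given in the text, while the bit ordering within each record is our assumption, not the authors' encoder.

```python
# Bit-packing sketch for the flow (13-byte) and packet (7-byte) records.
def pack_flow_record(cs_flag, inter_flow_time, init_window,
                     client_addr, server_idx, template_idx, ttl):
    """Fig. 5 layout: 1 + 15 + 16 + 32 + 16 + 16 + 8 = 104 bits = 13 bytes."""
    v = cs_flag & 0x1                          # flow direction (1 bit)
    v = (v << 15) | (inter_flow_time & 0x7FFF) # inter-flow time (15 bits)
    v = (v << 16) | (init_window & 0xFFFF)     # initial window (16 bits)
    v = (v << 32) | (client_addr & 0xFFFFFFFF) # Web client address (32 bits)
    v = (v << 16) | (server_idx & 0xFFFF)      # Web server index (16 bits)
    v = (v << 16) | (template_idx & 0xFFFF)    # template index (16 bits)
    v = (v << 8) | (ttl & 0xFF)                # first-packet TTL (8 bits)
    return v.to_bytes(13, "big")

def pack_packet_record(total_length, cksum_valid, ack_delta,
                       window_factor, inter_pkt_time):
    """Fig. 6 layout: 16 + 1 + 15 + 4 + 20 = 56 bits = 7 bytes."""
    v = total_length & 0xFFFF                  # total length (16 bits)
    v = (v << 1) | (cksum_valid & 0x1)         # checksum flag (1 bit)
    v = (v << 15) | (ack_delta & 0x7FFF)       # ack increment (15 bits)
    v = (v << 4) | (window_factor & 0xF)       # window factor (4 bits)
    v = (v << 20) | (inter_pkt_time & 0xFFFFF) # inter-packet time (20 bits)
    return v.to_bytes(7, "big")

assert len(pack_flow_record(1, 100, 5840, 0x0A000001, 3, 42, 64)) == 13
assert len(pack_packet_record(1500, 1, 1460, 2, 1000)) == 7
```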

V. DECOMPRESSION ALGORITHM

Processing at the decompressor is much simpler than at the compressor, because all decisions have already been made and the decompressor simply does what the compressor has told it to do. To perform its functions, the decompression algorithm sets up a temporary linked list to store the decompressed packet headers of n connections. It works by reading the compressed header, flow clustering, and Web server address datasets (Figure 7); these three datasets store the necessary information to reproduce the header fields of all original packets.

Fig. 7: Decompression model (the compressed header file, the Flow Clustering dataset, and the Web Server Addresses dataset feed the Decompressor, which maintains a temporary data structure and writes the .tsh packet header file)

The decompression algorithm starts by reading the first record from the compressed header dataset and assigning a random timestamp to the first flow. For the following flows, the inter-flow time field (see Figure 5) indicates where each flow starts. Moreover, for each flow, the algorithm reads the following information: flow direction (to or from Web servers), initial window value, Web client and server addresses, index into the template dataset, and TTL of the first packet. Once the template is identified, the algorithm decodes the sequence of F_i^m vectors of integers. For each F_i^m vector, the following fields are decoded: interface, version, IHL, type of service, flags, fragment offset, TTL, protocol, destination port for Web clients, data offset, reserved, and control bits. Moreover, for each m-packet flow, the timestamp field is calculated using the inter-packet time information (see Figure 6). Using the flow direction identification, we decode the source and destination addresses. The total length, acknowledgment number, and window fields are restored from the packet compressed data format (see Figure 6). To the Web client port we assign a random value between 1,024 and 65,536. Initial random values are assigned to the sequence number and identification fields of each flow; the following sequence number values are reconstructed based on the stored total length field, and for the identification field we assign sequential values incremented by 1 for each packet within the same flow. At this point, all the header information of the packet has been consumed, so its checksum is recalculated and stored in the IP checksum field; depending on the header checksum flag (Figure 6), the value is calculated correctly or not. For each decompressed packet, the algorithm inserts a new node into the temporary linked list, sorted by timestamp. After decoding the last F_i^m vector of the template, the algorithm continues the process by reading the next record from the compressed dataset. Meanwhile, all nodes in the linked list are checked: for the nodes whose timestamp field is less than the new flow start point, the packet headers are written to the decompressed file.

VI. COMPRESSION RATIO

To study the efficiency of the proposed compression method, we compared the compression ratio of different methods on different packet traces. The measurements were taken from the RedIRIS trace [19] and from traces downloaded from NLANR [2]. The compression methods evaluated were GZIP [21], the Van Jacobson method, and our proposed method. GZIP, as well as the ZIP and ZLIB [22] applications, uses the deflate algorithm. For different TSH file sizes, the compressed file size obtained using the GZIP application is around 50% of the original TSH file size (see Figure 8).

For the Van Jacobson method, the header size of a compressed datagram ranges from 3 to 16 bytes. However, we must modify the original method slightly, because the number of active flows is much larger on a high-speed Internet link than on a low-speed serial link (the scenario for which the Van Jacobson method was originally proposed). Hence, we must increase the number of bytes needed to store the flow identifier

(we have increased it from 1 byte to 3 bytes). Moreover, we assume that a timestamp (3 bytes) is added to each header. As a result, minimal encoded headers become 8 bytes in the best case and 21 bytes in the worst case. To estimate the compression ratio for the Van Jacobson method we must use the flow-length distribution measured in the available packet traces. We will call P_m the probability that a Web flow has m packets. With the changes explained before, and for a TSH packet header (44 bytes), the compression ratio for m-packet flows using the Van Jacobson method is bounded by:

f^VJ(m) = (44 + 8(m − 1)) / 44m    (5)

obtaining thus a compression ratio given by:

CRatio^VJ = Σ_m P_m f^VJ(m)    (6)

The P_m distribution obtained from several traces shows that most flows today are short lived, with a small number of packets [23], [24], [25]. Using this distribution, we conclude that the compression rate of the Van Jacobson method reaches 32% in the best case.

In the proposed compression method, 13 bytes for each new flow and 7 bytes per packet are sufficient to represent each flow of m packets. Some data structures with information related to the clusters of flows are also needed; however, these additional data structures are almost constant with the packet trace length. Then, for large packet traces, the compression ratio for m-packet flows is given by:

f(m) = (13 + 7m) / 44m    (7)

obtaining thus a compression ratio of:

CRatio = Σ_m P_m f(m)    (8)
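To see how equations (5)–(8) behave numerically, the sketch below evaluates both ratios under a hypothetical flow-length distribution P_m. The distribution is ours for illustration; the paper's 32% and 16% figures come from the distributions measured in [23]–[25].

```python
# Evaluate equations (5)-(8) for an assumed flow-length distribution P_m.
def f_vj(m):
    """Eq. (5): per-flow ratio of the (modified) Van Jacobson method."""
    return (44 + 8 * (m - 1)) / (44 * m)

def f_prop(m):
    """Eq. (7): per-flow ratio of the proposed method."""
    return (13 + 7 * m) / (44 * m)

# Hypothetical P_m, skewed toward short flows (illustrative values only).
P = {1: 0.30, 2: 0.20, 3: 0.15, 5: 0.15, 10: 0.10, 50: 0.07, 500: 0.03}
assert abs(sum(P.values()) - 1.0) < 1e-9

cr_vj = sum(p * f_vj(m) for m, p in P.items())      # eq. (6)
cr_prop = sum(p * f_prop(m) for m, p in P.items())  # eq. (8)

# For long flows f_vj -> 8/44 (~18%) and f_prop -> 7/44 (~16%); the exact
# sums depend on P_m (the paper reports ~32% and ~16% for measured traces).
print(f"Van Jacobson ratio: {cr_vj:.1%}")
print(f"Proposed ratio    : {cr_prop:.1%}")
```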

Evaluating this expression with the measured P_m distribution results in a compression ratio of around 16%. Figure 8 shows the file size of the original trace and the corresponding file sizes for the three compression methods under analysis.

Fig. 8: File size comparison (compressed vs. uncompressed file size, in MBytes, for the GZIP, Van Jacobson, and proposed methods)

VII. CONCLUSION

In this paper we have introduced a novel lossless packet header compression method based on TCP flow clustering. Using the semantic similarities among Web flows and the TCP/IP functionalities, we have grouped many packet streams into few templates. With our proposed method, storage requirements for .tsh packet header traces are reduced to 16% of their original size. The compression proposed here is more efficient than any other existing method and is simple to implement; other known methods have their compression ratios bounded at 50% (GZIP) and 32% (Van Jacobson method), pointing out the effectiveness of our method.

In future work, we intend to extend the applicability of the method to other packet header formats and to other emerging applications such as P2P. Moreover, by applying the same methodology while preserving the most important statistical properties present in Internet traffic, lossy methods can be developed to reach an even more effective compression rate.

ACKNOWLEDGMENT

This work was supported by CAPES-Brazil, by the Ministry of Science and Technology of Spain under contract TEC-2004-06437-C05-05, and by grants from the VI FP project EuroNGI.

REFERENCES

[1] CAIDA: The Cooperative Association for Internet Data Analysis. http://www.caida.org.
[2] NLANR: National Laboratory for Applied Network Research. http://www.nlanr.net.
[3] R. Pang and V. Paxson. A High-Level Programming Environment for Packet Trace Anonymization and Transformation. In Proceedings of ACM SIGCOMM Conference, August 2003.
[4] D. E. Knuth. Dynamic Huffman coding. Journal of Algorithms, 6:163-180, June 1985.
[5] J. Ziv and A. Lempel. A universal algorithm for sequential data compression. IEEE Transactions on Information Theory, vol. 23, no. 3, pp. 337-343.
[6] DEFLATE compressed data format specification. ftp://ds.internic.net/rfc/rfc1951.txt.
[7] V. Jacobson. Compressing TCP/IP Headers for Low-Speed Serial Links. RFC 1144, February 1990.
[8] M. Degermark, M. Engan, B. Nordgren, and S. Pink. Low-loss TCP/IP Header Compression for Wireless Networks. In Proc. MOBICOM, Rye, NY, November 1996.
[9] M. Degermark, B. Nordgren, and S. Pink. IP Header Compression. Internet Engineering Task Force, RFC 2507, February 1999.
[10] S. Casner and V. Jacobson. Compressing IP/UDP/RTP Headers for Low-Speed Serial Links. Internet Engineering Task Force, RFC 2508, February 1999.
[11] C. Bormann et al. RObust Header Compression (ROHC): Framework and four profiles: RTP, UDP, ESP, and uncompressed. RFC 3095, July 2001.
[12] 3rd Generation Partnership Project. Radio Access Bearer Support Enhancements. 3GPP, Tech. Rep., 2002.
[13] C. Westphal. Improvements on IP Header Compression. In GLOBECOM 2003 - IEEE Global Telecommunications Conference, vol. 22, no. 1, pp. 676-681, December 2003.
[14] TSH format. http://pma.nlanr.net/Traces/tsh.format.html.

[15] R. S. Tomlinson. Selecting Sequence Numbers. In Proc. ACM SIGCOMM/SIGOPS Interprocess Communications Workshop, pp. 11-23, 1975.
[16] R. Holanda, J. Garcia, and V. Almeida. Flow Clustering: a New Approach to Semantic Traffic Characterization. In 12th Conference on Measuring, Modelling, and Evaluation of Computer and Communication Systems, Dresden, Germany, September 2004.
[17] R. Holanda, J. Verdu, J. Garcia, and M. Valero. Performance Analysis of a New Packet Trace Compressor based on TCP Flow Clustering. In IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS 2005), Austin, Texas, March 2005.
[18] R. Jain. The Art of Computer Systems Performance Analysis. John Wiley & Sons, Inc., New York, 1991.
[19] RedIRIS: Spanish National Research Network. http://www.rediris.es.
[20] V. Almeida, A. Bestavros, M. Crovella, and A. Oliveira. Characterizing reference locality in the WWW. In Proceedings of the Fourth International Conference on Parallel and Distributed Information Systems (PDIS '96), December 1996.
[21] J.-L. Gailly and M. Adler. GZIP documentation and sources. ftp://prep.ai.mit.edu/pub/gnu/.
[22] J.-L. Gailly and M. Adler. ZLIB documentation and sources. ftp://ftp.uu.net/pub/archiving/zip/doc/.
[23] L. Guo and I. Matta. The War Between Mice and Elephants. Technical Report BU-CS-2001-005, Boston University, Computer Science Department, May 2001.
[24] N. Brownlee and K. Claffy. Understanding Internet traffic streams: Dragonflies and tortoises. IEEE Communications Magazine, 40(10):110-117, October 2002.
[25] S. McCreary and K. Claffy. Trends in Wide Area IP Traffic Patterns: A View from Ames Internet Exchange. In ITC Specialist Seminar on IP Traffic Measurement, Modeling, and Management, Monterey, California, September 2000.
