Admission Control in Peer-to-Peer Design and Performance Evaluation Project

The goal of the Admission Control in Peer-to-Peer Design and Performance Evaluation Project is to evaluate different admission control approaches: centralized and distributed. In centralized request processing, the proxy server simply forwards the media request to the media server and, at the same time, forwards the cached blocks to the client if the prefix is available. The central media server, upon receiving a media request, performs admission control together with batching and patching algorithms. To study the effect of the distributed integrated approach, we compare the distributed scheme with a deterministic centralized admission control algorithm, the centralized instantaneous maximum (CIM) algorithm. A deterministic admission control algorithm should ensure that the total required bandwidth at any time is less than the disk retrieval capacity MinRead, which is defined as the maximum number of blocks the media server can guarantee to retrieve from the disk during each service round. When a client issues a request for a stream, the server creates a block schedule that specifies the number of blocks to be read in each time slot for continuous stream playback. The CIM algorithm sums all the currently admitted block schedules and keeps the total number of required media blocks for each time slot in a table. To make an admission decision for a new request, the server adds the block schedule of the requested stream to the table. If the number of required media blocks during any time slot exceeds MinRead, the new request is rejected; otherwise, it is accepted. In the experimental study, we consider a centralized, integrated request processing (CIRP) scheme, in which the media server performs the CIM admission control algorithm together with batching and patching techniques.
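
The following is a minimal sketch of the CIM admission test described above, assuming block schedules are given as per-slot block counts aligned to a shared slot index. The class and parameter names are illustrative, not the project's actual code, and MinRead is an assumed constant.

```python
class CIMAdmissionController:
    """Sketch of the centralized instantaneous maximum (CIM) admission test."""

    def __init__(self, min_read):
        self.min_read = min_read          # max blocks retrievable per service round
        self.slot_totals = {}             # time slot -> blocks already scheduled

    def admit(self, start_slot, block_schedule):
        """Try to admit a stream whose block schedule begins at start_slot.

        block_schedule[i] is the number of blocks the stream needs in slot
        start_slot + i.  The request is accepted only if adding it keeps
        every slot total at or below min_read.
        """
        # Check every affected slot before committing the schedule.
        for i, blocks in enumerate(block_schedule):
            slot = start_slot + i
            if self.slot_totals.get(slot, 0) + blocks > self.min_read:
                return False              # reject: some slot would exceed MinRead
        # All slots fit: add the schedule to the table.
        for i, blocks in enumerate(block_schedule):
            slot = start_slot + i
            self.slot_totals[slot] = self.slot_totals.get(slot, 0) + blocks
        return True
```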

We also compare the distributed approach with two other previously proposed distributed admission control algorithms to study the effect of integrated request processing. First, we consider a simple distributed admission control (SDAC) scheme, in which the proxy servers simply perform distributed admission control and prefix caching. Each proxy agent reserves some amount of server disk bandwidth according to its demand and admits client requests based on the reserved bandwidth. However, since the agents make admission decisions independently, some agents may use up all of their allocated bandwidth while others under-utilize theirs. To improve the overall bandwidth utilization, we also consider an aggressive distributed admission control (ADAC) policy, in which each agent may admit more requests than its allocated bandwidth allows. When an agent reaches its bandwidth limit, it may still admit a new request with a certain probability.
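
A hedged sketch of the two per-agent decisions follows, to highlight the difference between SDAC and ADAC. The bandwidth variables and the over-admission probability are illustrative assumptions; the text does not specify how that probability is chosen.

```python
import random

def sdac_admit(used_bw, request_bw, reserved_bw):
    """SDAC: admit only within the agent's reserved disk bandwidth."""
    return used_bw + request_bw <= reserved_bw

def adac_admit(used_bw, request_bw, reserved_bw, over_admit_prob=0.2):
    """ADAC: beyond the reserved bandwidth, still admit with some probability."""
    if used_bw + request_bw <= reserved_bw:
        return True
    # Aggressive step: gamble that other agents under-utilize their reservation.
    return random.random() < over_admit_prob
```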

A real media server may contain thousands of different media files, but for simplicity, each media file in our experiment is generated by randomly selecting a segment of an MPEG-compressed video file, “Star Wars”.

We have discussed the disk bandwidth issue in Section 3.6 for a media system with multiple servers geographically distributed over the Internet and concluded that the optimal solution can reduce the average transport delay by almost half. In a system with multiple media servers, we can compute the total bandwidth of all servers and then allocate this total bandwidth to the agents. After disk bandwidth allocation, the distributed scheme works in a similar way to the single-server system, except that requests may go to different servers. Hence, in the experimental studies, we only consider systems with a single media server.
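
As an illustration of the multi-server case, the sketch below splits the aggregate server bandwidth among the proxy agents in proportion to their demand. The proportional rule is an assumption for illustration only; the text does not prescribe a specific allocation policy.

```python
def allocate_bandwidth(server_bandwidths, agent_demands):
    """Return each agent's share of the servers' total disk bandwidth."""
    total_bw = sum(server_bandwidths)
    total_demand = sum(agent_demands)
    return [total_bw * d / total_demand for d in agent_demands]

# Example: three servers (total 1000 units) and four agents with unequal demand.
shares = allocate_bandwidth([300, 300, 400], [10, 20, 30, 40])
print(shares)  # [100.0, 200.0, 300.0, 400.0]
```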

In our experimental studies, we take into account Internet dynamics by using the NIST Net emulation package, which provides a controlled, reproducible environment for running live code while emulating a variety of critical end-to-end performance characteristics such as packet delay, bandwidth limitation, and network congestion. First, we set up a single Linux box with kernel version 2.4.18 as a router. Then, the NIST Net emulation package is installed on the Linux router. All traffic between the media server and the proxy servers goes through the NIST Net emulator. The packet delay from a proxy server to the media server is set by averaging the round-trip time of ping packets over three hours. Since current streaming media applications on the Internet primarily use UDP transport, we use this protocol in our experiments.
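
The sketch below shows one way the emulated delay could be derived: ping the media server repeatedly and average the reported RTTs. The host name and sample count are hypothetical placeholders; the actual measurement in the study averaged pings over three hours.

```python
import re
import subprocess

def average_rtt_ms(host, samples=100):
    """Ping `host` and return the mean RTT in milliseconds over `samples` probes."""
    out = subprocess.run(
        ["ping", "-c", str(samples), host],
        capture_output=True, text=True, check=True,
    ).stdout
    # Parse lines such as "64 bytes from ...: icmp_seq=1 ttl=53 time=23.4 ms".
    rtts = [float(m) for m in re.findall(r"time=([\d.]+) ms", out)]
    return sum(rtts) / len(rtts)

if __name__ == "__main__":
    delay = average_rtt_ms("media-server.example.net")  # hypothetical host
    print(f"configure the emulator with an average delay of {delay:.1f} ms")
```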

The requests are generated at the 10 proxy servers according to a Poisson process. The arrival rate λ at a proxy server varies from 0.05 to 0.4. The user access pattern, including file duration, file encoding bit rate, and file popularity, is generated by a publicly available streaming media workload generator, MediSyn. We generate 10 groups of access log files for the 10 proxy servers. In total, 7,000 requests are generated, and each run of the experiment lasts from 2 to 3 hours depending on the arrival rate.
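
A minimal sketch of the Poisson request generation follows, assuming the arrival rate λ is in requests per second (the unit is not stated in the text) and drawing exponential inter-arrival times for each proxy.

```python
import random

def poisson_arrivals(rate, duration_s):
    """Return request arrival times (seconds) for one proxy over duration_s."""
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(rate)   # exponential gap between requests
        if t > duration_s:
            return arrivals
        arrivals.append(t)

# Ten proxies, each with its own arrival rate between 0.05 and 0.4.
rates = [0.05 + i * (0.4 - 0.05) / 9 for i in range(10)]
logs = {proxy: poisson_arrivals(rate, duration_s=2 * 3600)
        for proxy, rate in enumerate(rates)}
```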
