The performance gap between magnetic disks (hereafter referred to simply as disks) and the CPU-memory subsystem is steadily widening. To bridge this gap, obtaining timely and accurate information about the low-level characteristics of disks, and using it to optimize operating system performance, has become an active research topic in the storage systems field. By treating the low-level characteristics of a disk (such as the number of zones, tracks per zone, sectors per track, head-switch time, and cylinder-switch time) as context parameters, disk characteristics can be incorporated into disk scheduling algorithms, data placement and block allocation mechanisms, buffer management, prefetching algorithms, and dedicated storage/file systems built around the access patterns of specific applications, thereby improving the performance of the whole system. In this research project, we will investigate the low-level characteristics of various SCSI/IDE disks and develop a series of tools that can accurately extract this low-level information. These characteristics will then be used as context parameters to improve the related mechanisms in the operating system, so as to optimize the performance of the whole system. We also observe that some dedicated systems (such as media storage systems) exhibit special disk access patterns, which makes it possible to optimize the disk-related mechanisms (such as block allocation and disk scheduling) around those patterns. Hence, we believe it is reasonable to couple the disk access pattern and the low-level characteristics of the disk with the disk-related framework of the operating system to obtain a substantial performance improvement.
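As an illustration of the kind of timing-based inference such extraction tools rely on, the sketch below classifies latency jumps in a sequential-read trace as head or cylinder switches. The threshold values, function name, and synthetic trace are assumptions for illustration, not measurements from any real drive or part of the proposed tools.

```python
# Illustrative sketch: inferring head-switch and cylinder-switch events from
# a sequential-read latency trace. Thresholds and the synthetic trace are
# assumed values for illustration, not measurements from a real drive.

def find_boundaries(latencies_us, head_switch_us=800, cyl_switch_us=1500):
    """Classify per-request latency jumps as head or cylinder switches."""
    head_idx, cyl_idx = [], []
    for i, lat in enumerate(latencies_us):
        if lat >= cyl_switch_us:
            cyl_idx.append(i)    # large jump: likely a cylinder switch (seek)
        elif lat >= head_switch_us:
            head_idx.append(i)   # medium jump: likely a head switch
    return head_idx, cyl_idx

# Synthetic trace in microseconds: steady ~100 us gaps with two injected jumps.
trace = [100] * 5 + [900] + [100] * 5 + [1600] + [100] * 5
heads, cyls = find_boundaries(trace)
print(heads, cyls)  # -> [5] [11]
```

In a real extraction tool, the trace would come from carefully timed raw-device reads, and the thresholds would themselves be calibrated per drive rather than fixed constants.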
Peer-to-peer (P2P) networks are
self-organizing distributed systems, with no centralized authority or
infrastructure. By pooling together the resources of many autonomous computing
terminals, P2P systems are able to provide an inexpensive platform for
distributed computing, storage, or file-sharing that is highly scalable,
available, and fault-tolerant. From file-sharing to distributed computing, from
application layer overlays to mobile ad hoc networking, the ultimate success of
a P2P system rests on the twin pillars of scalable and robust system design and
alignment of economic interests among the participating peers. The popularity of the Internet and of file-sharing tools has made the distribution of copyrighted digital media files simple. Digital media publishers typically have a business model that relies on their ability to collect a fee for each copy of a digital work, and sometimes for each performance of that work. Digital Rights Management (DRM) technologies were developed to enforce such business models by controlling the copying and use of digital content.
This project aims to develop new mechanisms for integrating P2P and DRM, using P2P networks to distribute DRM-enabled digital content. We will address several system-level issues that play significant roles in designing robust and efficient P2P network protocols for digital rights management and protection in the presence of selfish users. Specifically, we will address the following challenging problems:
1) a proper incentive scheme that encourages P2P users to participate in the distribution of DRM-enabled media, which clearly reduces the distribution cost for the content producer;
2) reputation and trust inference in P2P systems, and free-riding detection and prevention;
3) a proper sharing and resale scheme that provides the expected fair use of DRM-enabled content.
The impact of our proposed schemes and protocols will be evaluated through simulation and testbed experiments. We will explore the tradeoffs involved, such as communication versus computation overhead and implementation cost versus performance gain.
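One possible building block for the reputation and trust-inference problem in item 2) is an EigenTrust-style global reputation computation, in which each peer's local ratings are normalized and propagated by power iteration. The sketch below is a minimal, illustrative version; the peer names, rating values, and iteration count are assumptions, not part of the proposal.

```python
# Illustrative sketch: an EigenTrust-style reputation aggregation. Peer IDs,
# ratings, and the fixed iteration count are assumed values for illustration.

def normalize(local_trust):
    """Normalize each peer's outgoing ratings so they sum to 1."""
    norm = {}
    for i, row in local_trust.items():
        total = sum(max(v, 0) for v in row.values())
        norm[i] = {j: (max(v, 0) / total if total else 0.0)
                   for j, v in row.items()}
    return norm

def aggregate(local_trust, rounds=20):
    """Propagate trust by repeated application of t <- C^T t."""
    c = normalize(local_trust)
    peers = list(local_trust)
    t = {p: 1.0 / len(peers) for p in peers}  # uniform prior
    for _ in range(rounds):
        t = {j: sum(t[i] * c[i].get(j, 0.0) for i in peers) for j in peers}
    return t

# Three peers; peer "c" free-rides, so nobody reports a positive experience.
ratings = {"a": {"b": 4, "c": 0}, "b": {"a": 5, "c": 0}, "c": {"a": 1, "b": 1}}
scores = aggregate(ratings)
print(scores)  # "c" ends up with the lowest global reputation
```

A scheme like this gives the free-riding detector in item 2) a global score to threshold on; a deployed protocol would additionally need pre-trusted peers and protection against collusive rating.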
IEEE 802.11-based wireless LANs have become ubiquitous in coffee shops, office buildings, universities, and hundreds of millions of residential homes. Access points (APs) play an important role in infrastructure wireless LANs. An AP provides many services, including cell identification, synchronization, authentication, and the distribution service. Another important function of an AP is to relay messages among wireless stations within the same wireless LAN. In current implementations, an AP must compete for the wireless channel with all other wireless stations using the Distributed Coordination Function (DCF) protocol.
The objective of this research project is to design systematic and reproducible experiments to show that, with uncontrolled UDP traffic in the network, the AP becomes the system bottleneck and the system goodput can drop to an unacceptable level, mainly due to buffer overflow at the AP. To solve this problem, we shall propose UDP rate control schemes for wireless stations. This research is important because UDP traffic volume is growing rapidly with widely deployed real-time applications such as Voice over Wi-Fi, wireless surveillance systems, digital games, and streaming multimedia. In our experimental study, we will first design conformance tests for 802.11 chipsets, as some products use proprietary protocols that do not fully comply with the 802.11 DCF standard. We will also design a method to estimate the buffer size of an AP, which greatly affects delay performance. Next, we will conduct experiments to measure the UDP saturation goodput and compare the results with those in ad hoc mode, and we will design experiments to evaluate the impact of UDP traffic on TCP performance. Finally, we shall design UDP rate control schemes for the wireless stations to avoid buffer overflow at the AP.
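One candidate station-side mechanism for such UDP rate control is a token bucket, which caps a sender's long-run rate while permitting short bursts. The sketch below is illustrative only; the rate, bucket depth, and packet size are assumed values, not parameters or results from the proposed experiments.

```python
# Illustrative sketch: a token-bucket limiter a wireless station could apply
# to its UDP sender to avoid overflowing the AP's buffer. The 1 Mbps rate,
# 3000-byte bucket, and 1500-byte packets are assumptions for illustration.

class TokenBucket:
    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes/second
        self.capacity = bucket_bytes
        self.tokens = bucket_bytes
        self.last = 0.0

    def allow(self, pkt_bytes, now):
        """Return True if a packet of pkt_bytes may be sent at time now."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True
        return False

bucket = TokenBucket(rate_bps=1_000_000, bucket_bytes=3000)
# Offer one 1500-byte packet every millisecond for 100 ms (12 Mbps offered).
sent = sum(bucket.allow(1500, t * 0.001) for t in range(100))
print(sent)  # only roughly a tenth of the offered packets pass the 1 Mbps cap
```

In practice, the permitted rate would be set adaptively from feedback about the AP's queue occupancy rather than fixed in advance.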
With the widespread use of web applications, the number of accesses to many popular websites keeps increasing. Web servers often experience extreme demand variations, ranging from little demand to an enormous surge of requests caused by the Slashdot effect or flash crowds. It is therefore uneconomical to provision web servers for every possible peak load, and even well-equipped web servers may be easily overloaded. During overload, not all requests can be served in a timely manner. Hence, performance-enhancing mechanisms that can provide better service to premium customers during server overload are of major importance.
This research project aims to address the delay performance problem in two different web server architectures: the single-tier architecture and the cluster architecture. We shall investigate ways to manage web server resources, using fuzzy control techniques, so as to provide performance guarantees for premium-class users. We shall also implement the proposed web server architectures on top of the open-source Apache web server; the proposed web QoS schemes will then be tested and evaluated on a real system.
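To make the fuzzy-control idea concrete, the sketch below shows a single-input fuzzy controller that adjusts the share of server capacity allotted to premium requests based on their measured delay error (measured minus target delay). The triangular membership functions, rule outputs, and numeric ranges are illustrative assumptions, not the parameters of the proposed controller.

```python
# Illustrative sketch: a one-input fuzzy controller for web QoS. Membership
# ranges and rule outputs are assumed values for illustration only.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_adjust(delay_error_ms):
    """Return a capacity-share adjustment in [-0.2, +0.2] for premium traffic."""
    # Rule base: negative error -> shrink share, zero -> hold, positive -> grow.
    rules = [
        (tri(delay_error_ms, -200, -100, 0), -0.2),  # premium well under target
        (tri(delay_error_ms, -100, 0, 100),   0.0),  # on target
        (tri(delay_error_ms, 0, 100, 200),   +0.2),  # premium too slow
    ]
    # Defuzzify with a weighted average of rule outputs.
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(fuzzy_adjust(50))   # positive: give the premium class more capacity
print(fuzzy_adjust(-50))  # negative: release capacity to best-effort traffic
```

In an Apache-based implementation, this adjustment would typically drive the number of worker processes (or the request-dispatch probability) reserved for the premium class at each sampling interval.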
Wavelength-division multiplexing (WDM) technology has been widely deployed in optical backbone networks to accommodate the ever-increasing demand for bandwidth. A lightpath carrying 40 Gbps or more of data can be set up between two network nodes for communication. However, because network resources are limited, some lightpath connection requests may not be satisfied. One of the main design goals of WDM networks is to minimize the lightpath connection blocking probability. Traditionally, there are two main ways to decrease the blocking probability: 1) designing an effective routing and wavelength assignment algorithm, and 2) making use of wavelength conversion.
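A standard first-order estimate of per-link blocking is the Erlang-B formula, computed here with its numerically stable recurrence. The example load and wavelength counts are assumed values for illustration, not figures from this project.

```python
# Illustrative sketch: Erlang-B blocking probability for a link with
# `channels` wavelengths under `load` Erlangs of offered traffic.
# The example numbers are assumptions for illustration.

def erlang_b(load, channels):
    """Blocking probability via the stable recurrence B(A, m) = A*B(A, m-1) / (m + A*B(A, m-1))."""
    b = 1.0  # B(A, 0) = 1
    for m in range(1, channels + 1):
        b = load * b / (m + load * b)
    return b

# More wavelengths at the same offered load means less blocking.
print(erlang_b(8.0, 8), erlang_b(8.0, 16))
```

Network-level blocking additionally depends on routing and the wavelength-continuity constraint, which is exactly where the rerouting techniques studied in this project come in.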
In this project, we aim to investigate a third way to decrease the blocking probability: lightpath rerouting. Rerouting means switching an existing circuit from its current route to another route; it was originally proposed for circuit-switched telephone networks. A passive lightpath rerouting scheme has previously been studied in WDM networks for the purpose of overcoming the wavelength-continuity constraint. In this project, we propose the concept of intentional lightpath rerouting. Our preliminary study has shown that even a simple intentional rerouting scheme can decrease the blocking probability significantly. The first step of this project is to thoroughly investigate the design of intentional lightpath rerouting algorithms so as to strike the best tradeoff between the improvement in blocking performance and the traffic overhead caused by rerouting operations. The second step is to investigate the effect of integrating passive and intentional lightpath rerouting.
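The sketch below illustrates, on a deliberately tiny example, why rerouting helps under the wavelength-continuity constraint: retuning one existing lightpath can free a single wavelength end to end for a request that would otherwise be blocked. The two-link topology, occupancy, and function name are assumptions for illustration.

```python
# Illustrative sketch: wavelength-continuity blocking relieved by rerouting.
# Links map to sets of busy wavelength indices; a new lightpath needs the
# SAME free wavelength on every link of its path. The topology and
# occupancy below are assumed values for illustration.

def first_fit(links, path, num_wl):
    """Return the first wavelength free on every link of `path`, or None."""
    for w in range(num_wl):
        if all(w not in links[l] for l in path):
            return w
    return None

# Two wavelengths, two links. Lightpath X uses wl 0 on link A; lightpath Y
# uses wl 1 on link B. No single wavelength is free on both links, so a
# request over A+B is blocked:
links = {"A": {0}, "B": {1}}
print(first_fit(links, ["A", "B"], 2))  # -> None (blocked)

# Retuning lightpath Y on link B from wl 1 to wl 0 (a rerouting move) frees
# wavelength 1 end to end, and the same request now succeeds:
links["B"] = {0}
print(first_fit(links, ["A", "B"], 2))  # -> 1 (admitted)
```

An intentional rerouting algorithm generalizes this idea, proactively choosing which established lightpaths to move so that future requests see less fragmentation, at the cost of the rerouting operations themselves.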
Optical networks based on wavelength-division multiplexing (WDM) are growing at unprecedented rates to accommodate the ever-increasing demand for bandwidth, and wavelength-routed WDM networks are strong candidates to serve as the backbones of future wide-area networks. Because these networks carry a huge volume of traffic yet are prone to component failures, maintaining a high level of service availability is a crucial issue. This opens up a new avenue for research and calls for a re-examination of some fundamental issues in wavelength-routed WDM networks. One of the key challenges is to design an effective resource management strategy that provides a reliable lightpath service to customers while utilizing network resources (e.g., wavelength channels and wavelength converters) efficiently.
The objective of this project is to develop an efficient resource management scheme for reliable wavelength-routed WDM networks. This includes an advanced routing and wavelength assignment (RWA) algorithm with restoration capability, as well as an effective wavelength converter allocation scheme. For the former, we plan to employ lightpath restoration techniques based on dynamic routing algorithms. For the latter, we plan to first quantify the benefits of wavelength conversion in reliable WDM networks, and then develop a wavelength converter allocation scheme that achieves near-optimal performance with the smallest number of wavelength converters.
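As a minimal illustration of the benefit being quantified, the sketch below compares admissibility of a lightpath over a path with and without wavelength conversion: without converters, one wavelength must be free on every hop; with a converter at each hop, any free wavelength per hop suffices. The occupancy values and function name are assumptions for illustration.

```python
# Illustrative sketch: the admission benefit of wavelength conversion on a
# single path. Each element of `links` is the set of busy wavelengths on
# one hop. The example occupancy is an assumed value for illustration.

def admissible(links, num_wl, conversion):
    """Can a lightpath be set up over `links` given `num_wl` wavelengths?"""
    if conversion:
        # With full conversion, each hop just needs any free wavelength.
        return all(len(busy) < num_wl for busy in links)
    # Without conversion, one wavelength must be free on every hop
    # (the wavelength-continuity constraint).
    return any(all(w not in busy for busy in links) for w in range(num_wl))

links = [{0}, {1}]  # 2 wavelengths, 2 hops: no common free wavelength
print(admissible(links, 2, conversion=False))  # -> False (blocked)
print(admissible(links, 2, conversion=True))   # -> True (converter saves it)
```

The converter allocation problem studied here asks where such requests arise often enough that placing a (costly) converter at a node pays off, aiming for near-optimal blocking with as few converters as possible.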