International Journal of Computer Information Systems, Vol. 3, No. 3, 2011
Recycling of Bandwidth in IEEE 802.16 Networks
Venkata Subbareddy Pallamreddy #1 and K. Vidya Sagar #2
#1 Associate Professor, CSE Dept, Q.I.S College of Engg & Tech, Ongole, A.P, India.
#2 M.Tech Student, Q.I.S College of Engg & Tech, Ongole, A.P, India.
Abstract: The IEEE 802.16 standard was designed to support bandwidth-demanding applications with quality of service (QoS). Bandwidth is reserved for each application to ensure the QoS. For variable bit rate (VBR) applications, however, it is difficult for the subscriber station (SS) to predict the amount of incoming data. In this paper, we propose a scheme, named Bandwidth Recycling, to recycle the unused bandwidth without changing the existing bandwidth reservation. The idea of the proposed scheme is to allow other SSs to utilize the unused bandwidth when it is available. Thus, the system throughput can be improved while maintaining the same QoS-guaranteed services. Mathematical analysis and simulation are used to evaluate the proposed scheme. Simulation and analysis results confirm that the proposed scheme can recycle 35% of the unused bandwidth on average. By analyzing the factors affecting the recycling performance, three scheduling algorithms are proposed to improve the overall throughput.
1. INTRODUCTION
The Worldwide Interoperability for Microwave Access (WiMAX), based on the IEEE 802.16 standards [1] [2], is designed to facilitate services with high transmission rates for data and multimedia applications in metropolitan areas. The physical (PHY) and medium access control (MAC) layers of WiMAX have been specified in the IEEE 802.16 standard. Many advanced communication technologies, such as Orthogonal Frequency-Division Multiple Access (OFDMA) and multiple-input and multiple-output (MIMO), are embraced in the standards. Supported by these modern technologies, WiMAX is able to provide large service coverage, high data rates and QoS-guaranteed services. Because of these features, WiMAX is considered a promising alternative for last-mile broadband wireless access (BWA). In order to provide QoS-guaranteed services, the subscriber station (SS) is required to reserve the necessary bandwidth from the base station (BS) before any data transmissions. In order to serve variable bit rate (VBR) applications, the SS tends to keep the reserved bandwidth to maintain the QoS-guaranteed services. Thus, the amount of reserved bandwidth may be more than the amount of transmitted data and may not be fully utilized all the time. Although the amount of reserved bandwidth is adjustable via bandwidth requests (BRs), the adjusted bandwidth takes effect no earlier than the next coming frame, so the unused bandwidth in the current frame has no chance to be utilized. Moreover, it is very challenging to adjust the amount of reserved bandwidth precisely. The SS may be exposed to the risk of
degrading the QoS of its applications due to an insufficient amount of reserved bandwidth. To improve the bandwidth utilization while maintaining the same QoS-guaranteed services, our research objective is twofold: 1) the existing bandwidth reservation is not changed, so that the same QoS-guaranteed services are maintained; and 2) the bandwidth utilization is increased by utilizing the unused bandwidth. We propose a scheme, named Bandwidth Recycling, which recycles the unused bandwidth while keeping the same QoS-guaranteed services without introducing extra delay. The general concept behind our scheme is to allow other SSs to utilize the unused bandwidth left by the current transmitting SS. Since the unused bandwidth is not supposed to occur regularly, our scheme allows SSs with non-real-time applications, which have more flexibility in their delay requirements, to recycle the unused bandwidth. Consequently, the unused bandwidth in the current frame can be utilized. This is different from bandwidth adjustment, in which the adjusted bandwidth is enforced no earlier than the next coming frame. Moreover, the unused bandwidth is released only temporarily (i.e., only in the current frame) and the existing bandwidth reservation does not change. Therefore, our scheme improves the overall throughput while providing the same QoS-guaranteed services. According to the IEEE 802.16 standard, SSs scheduled on the uplink (UL) map have transmission opportunities in the current frame; those SSs are called transmission SSs (TSs) in this paper. The main idea of the proposed scheme is to allow the BS to schedule a backup SS for each TS. The backup SS stands by for any opportunity to recycle the unused bandwidth of its corresponding TS. We call the backup SS the complementary station (CS).
2. MOTIVATION AND RELATED WORK
Bandwidth reservation allows IEEE 802.16 networks to provide QoS-guaranteed services. The SS reserves the required bandwidth before any data transmissions. Due to the nature of VBR applications, it is very difficult for the SS to make an optimal bandwidth reservation. It is possible that the amount of reserved bandwidth is more than the demand; therefore, the reserved bandwidth cannot be fully utilized. Although the reserved bandwidth can be adjusted via BRs, the updated reservation takes effect
no earlier than the next coming frame, and there is no way to utilize the unused bandwidth in the current frame. In our scheme, the SS releases its unused bandwidth in the current frame, and another SS pre-assigned by the BS has the opportunity to utilize this unused bandwidth. This improves the bandwidth utilization. Moreover, since the existing bandwidth reservation is not changed, the same QoS-guaranteed services are provided without introducing any extra delay. Many research works related to bandwidth utilization improvement have been proposed in the literature. In [4], a dynamic resource reservation mechanism is proposed; it can dynamically change the amount of reserved resource depending on the actual number of active connections. The investigation of dynamic bandwidth reservation for hybrid networks is presented in [3]. The authors evaluated the performance and effectiveness for the hybrid network, and proposed efficient methods to ensure optimal reservation and utilization of bandwidth while minimizing the signal blocking probability and signaling cost. In [5], the authors enhanced the system throughput by using concurrent transmission in mesh mode. The authors in [6] proposed a new QoS control scheme by considering MAC-PHY cross-layer resource allocation. A dynamic bandwidth request-allocation algorithm for real-time services is proposed in [7]. The authors predict the amount of bandwidth to be requested based on the backlogged amount of traffic in the queue and the rate mismatch between packet arrival and service rates, in order to improve the bandwidth utilization. The research works listed above improve the performance by predicting the traffic arriving in the future. Instead of prediction, our scheme allows SSs to accurately identify the portion of unused bandwidth and provides a method to recycle it. It improves the bandwidth utilization while keeping the same QoS-guaranteed services and introducing no extra delay.
3. PROPOSED SCHEME
The objectives of our research are twofold: 1) the same QoS-guaranteed services are provided by maintaining the existing bandwidth reservation; and 2) the bandwidth utilization is improved by recycling the unused bandwidth. To achieve these objectives, our scheme, named Bandwidth Recycling, is proposed. The main idea of the proposed scheme is to allow the BS to pre-assign a CS for each TS at the beginning of a frame. The CS waits for possible opportunities to recycle the unused bandwidth of its corresponding TS in this frame. The CS information scheduled by the BS resides in a list, called the complementary list (CL). The CL includes the mapping relation between each pair of pre-assigned CS and TS. As shown in Fig. 1, each CS is mapped to at least one TS. The CL is broadcast following the UL map. To maintain backward compatibility, a broadcast CID (B-CID) is attached in front of the CL. Moreover, a stuffed byte value (SBV) is transmitted after the B-CID to distinguish the CL from other broadcast DL transmission intervals. The UL map, including the burst profiles and offsets of each TS, is received by all SSs within the network. Thus, if an SS is on both the UL map and the CL, the necessary information (e.g., burst profile) residing in the CL may be reduced to the mapping information between the CS and its corresponding TS. The BS only specifies the burst profiles for the SSs that are scheduled only on the CL.

3.1 Protocol

According to the IEEE 802.16 standard, the allocated space within a data burst that is unused should be initialized to a known state. Each unused byte should be set to a padding value (i.e., 0xFF), called the stuffed byte value (SBV). If the size of the unused region is at least the size of a MAC header, the entire unused region is initialized as a MAC PDU, and the padding CID is used in the CID field of the MAC PDU header. In this research, we intend to recycle the unused space for data transmissions. Instead of padding the whole unused portion of the bandwidth, in our scheme a TS with unused bandwidth transmits only an SBV and a releasing message (RM), as shown in Fig. 1. The SBV is used to inform the BS that no more data are coming from the TS. The RM comprises a generic MAC PDU with no payload, as shown in Fig. 2. The mapping information between the CL and the UL map is based on the basic CID of each SS, so the CID field of the RM should be filled with the basic CID of the TS. Since there is an agreed modulation for transmissions between the TS and the BS, the SBV can be transmitted via this agreed modulation. However, there is no agreed modulation between the TS and the CS. Moreover, the transmission coverage of the RM should be as large as possible in order to maximize the probability that the RM is received successfully by the CS.

To maximize the transmission coverage of the RM, one possible solution is to increase the transmission power of the TS while transmitting the RM. However, power may be a critical resource for the TS and should not be increased dramatically. Therefore, without increasing the transmission power of the TS, the RM should be transmitted via BPSK, which has the largest coverage among all modulations supported in the IEEE 802.16 standard. For example, Fig. 3 illustrates the physical locations of the BS, TS and CS. The solid circle represents the coverage of QPSK, which is the modulation for data transmissions between the BS and the TS. When the TS has unused bandwidth, it transmits an SBV via this modulation (i.e., QPSK) to inform the BS that no more data are coming from the TS. It is easy to observe that the corresponding CS is out of the QPSK coverage. In order to maximize the coverage of the RM without increasing the transmission power of the TS, the TS transmits the RM via BPSK, whose coverage is represented by the dashed circle. The radius of the dashed circle is KL, where L is the distance between the TS and the BS and K is the ratio of the transmission range of BPSK to the transmission range of QPSK, depending on the transmission power. Assume all channels are in good condition. As long as the CS is within the coverage of BPSK, it can receive the RM successfully and start to recycle the unused bandwidth.
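For illustration, the following Python sketch captures the two checks described above: whether the unused part of a TS burst is large enough to carry an SBV plus an RM, and whether a CS lies within the modelled BPSK coverage of radius KL around the TS. The function names, the point representation and the use of the TS-BS distance as the QPSK range are simplifying assumptions of this sketch, not part of the standard or the proposed protocol.

import math

MAC_HEADER_BYTES = 6    # size of a generic IEEE 802.16 MAC header
SBV = 0xFF              # stuffed byte value used as padding in the standard

def ts_releases_burst(allocated_bytes, queued_bytes):
    # A TS signals a release only when the unused part of its burst can
    # hold at least the SBV plus an RM (a generic MAC header, no payload).
    unused = allocated_bytes - queued_bytes
    return unused >= 1 + MAC_HEADER_BYTES

def cs_can_hear_rm(ts_xy, cs_xy, ts_bs_distance, k):
    # The RM is sent via BPSK without raising the TS transmit power, so its
    # coverage is modelled as a circle of radius K*L around the TS, where L
    # is the TS-BS distance and K is the BPSK/QPSK range ratio (good channels
    # are assumed, as in the text).
    return math.dist(ts_xy, cs_xy) <= k * ts_bs_distance

# Example (hypothetical numbers): with K = 1.5 and a TS 800 m from the BS,
# a CS 1000 m away from the TS still receives the RM, since 1000 <= 1200.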
Fig. 1. Messages to release the unused bandwidth within a UL transmission interval
Fig. 2. The format of the RM. (HT: Header Type, EC: Encryption Control, EKS: Encryption Key Sequence, Rsv: Reserved, CI: CRC Indicator, LEN: Length, CID: Connection ID, LSB/MSB: Least/Most Significant Byte, HCS: Header Check Sequence.)

3.2 Scheduling Algorithm
Assume Q represents the set of SSs serving non-real-time connections (i.e., nrtPS or BE connections) and T is the set of TSs. Due to the TDD constraint that UL and DL operations cannot be performed simultaneously, we cannot schedule an SS whose UL transmission interval overlaps with that of the target TS. For any TS, St, let Ot be the set of SSs in Q whose UL transmission intervals overlap with that of St. Thus, the possible corresponding CS of St must be in Q \ Ot, and all SSs in Q \ Ot are considered as candidates of the CS for St. A scheduling algorithm, called the Priority-based Scheduling Algorithm (PSA) and shown in Algorithm 1, is used to schedule the SS with the highest priority as the CS. The priority of each candidate is decided based on the scheduling factor (SF), defined as the ratio of the current requested bandwidth (CR) to the current granted bandwidth (CG). An SS with a higher SF has more demand for bandwidth, so we give it a higher priority. The highest priority is given to the SSs with zero CG. Non-real-time connections include nrtPS and BE connections. The nrtPS connections should have higher priority than the BE connections because of their QoS requirements. The priority of CS candidates, from high to low, is therefore: nrtPS with zero CG, BE with zero CG, nrtPS with non-zero CG and BE with non-zero CG. If more than one SS has the highest priority, we select the one with the largest CR as the CS in order to decrease the probability of overflow.
4. ANALYSIS

The percentage of potentially unused bandwidth within the reserved bandwidth is critical to the potential performance gain of our scheme. We investigate this percentage for VBR traffic, which is widely used today. Additionally, in our scheme, each TS should transmit an RM to inform its corresponding CS when it has unused bandwidth. However, the transmission range of the TS may not be able to cover the corresponding CS; it depends on the location and the transmission power of the TS. It is possible that the unused bandwidth cannot be recycled because the CS does not receive the RM, which reduces the benefit of our scheme. In this section, we mathematically analyze the probability that a CS receives an RM successfully. Obviously, this probability affects the bandwidth recycling rate (BRR), which stands for the percentage of the unused bandwidth that is recycled. Moreover, the performance analysis is presented in terms of the throughput gain (TG). At the end, we evaluate the performance of our scheme under different traffic loads.
Fig. 3. An example of the corresponding locations of the TS, BS and CS.

Algorithm 1 Priority-based Scheduling Algorithm
Input: T, the set of TSs scheduled on the UL map; Q, the set of SSs running non-real-time applications.
Output: a CS scheduled for each TS in T.
For i = 1 to |T| do
  a. St <- TSi.
  b. Qt <- Q \ Ot.
  c. Calculate the SF of each SS in Qt.
  d. If any SS in Qt has zero granted bandwidth then
       if any of those SSs carries nrtPS traffic then
         choose the one carrying nrtPS traffic with the largest CR
       else
         choose the one with the largest CR
     else
       choose the one with the largest SF, breaking ties by the largest CR.
  e. Schedule the chosen SS as the corresponding CS of St.
End For
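To make the selection rule concrete, the following Python sketch implements the priority order and tie-breaking described by Algorithm 1. The record kept per candidate, the function names and the overlap bookkeeping are hypothetical illustrations, not part of the proposed protocol.

from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class CandidateSS:
    # Hypothetical record the BS keeps for each CS candidate.
    ss_id: int
    service: str      # 'nrtPS' or 'BE'
    cr: float         # current requested bandwidth (CR)
    cg: float         # current granted bandwidth (CG)

    @property
    def sf(self) -> float:
        # Scheduling factor: ratio of requested to granted bandwidth.
        return float('inf') if self.cg == 0 else self.cr / self.cg

def pick_cs(candidates: List[CandidateSS]) -> CandidateSS:
    # Priority tiers, high to low: nrtPS with zero CG, BE with zero CG,
    # nrtPS with non-zero CG, BE with non-zero CG; within a tier the SS
    # with the larger SF wins, and ties are broken by the larger CR.
    def rank(ss: CandidateSS):
        tier = (0 if ss.service == 'nrtPS' else 1) + (0 if ss.cg == 0 else 2)
        return (tier, -ss.sf, -ss.cr)
    return min(candidates, key=rank)

def schedule_css(tss: List[int], q: List[CandidateSS],
                 overlaps: Dict[int, Set[int]]) -> Dict[int, int]:
    # For each TS, pick a CS among the SSs in Q \ Ot, i.e., those whose
    # UL transmission intervals do not overlap with the TS's interval.
    assignment = {}
    for ts in tss:
        candidates = [s for s in q if s.ss_id not in overlaps.get(ts, set())]
        if candidates:
            assignment[ts] = pick_cs(candidates).ss_id
    return assignment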
4.1 Analysis of potential unused bandwidth

In our traffic model, based on [8], the time interval between arriving packets of the VBR traffic is considered to be exponentially distributed, so the steady-state behavior of the traffic model can be characterized by a Poisson distribution. Let \lambda and \lambda_{max} be the mean and maximal amount of data arriving in a frame, respectively. Suppose X represents the amount of data arriving in a frame and p(X) is its probability distribution.

Based on the traffic generation rate, applications can be classified into two types: constant bit rate (CBR) and variable bit rate (VBR). Since CBR applications generate data at a constant rate, SSs rarely adjust the reserved bandwidth. As long as a reasonable amount of bandwidth is reserved, it is hard to have unused bandwidth in this type of application. Therefore, our scheme has very limited benefit for CBR traffic. However, VBR applications generate data at a variable rate. It is hard for an SS to predict the amount of incoming data precisely in order to make an appropriate bandwidth reservation. Thus, in order to provide QoS-guaranteed services, the SS tends to keep the amount of reserved bandwidth to serve the possible bursty data arriving in the future. The reserved bandwidth may not be fully utilized all the time. Our analysis focuses on investigating the percentage of potentially unused bandwidth of VBR traffic.

When the SS intends to establish a new connection with the BS, this connection must pass the admission control in order to ensure that the BS has enough resources to provide QoS-guaranteed services. The policy can be considered as a set of predefined QoS parameters such as the minimum reserved traffic rate (Rmin), the maximum sustained rate (Rmax) and the maximum burst size (Wmax) [9] [10]. In our analytic model, the BS initially assigns the bandwidth B to each connection. The BS guarantees to support the bandwidth until reaching Rmin and optionally up to Rmax. Suppose Df represents the frame duration and W is the assigned bandwidth per frame (in bytes). Because of the admission control policy, the burst size that the BS schedules in each frame cannot be larger than Wmax. The relation between W and B can be formulated as:

W = B \cdot D_f \le W_{max}    (1)

Suppose X_{i-1} represents the amount of data arriving in frame i-1 (in bytes), where 1 \le i \le N-1 and N is the total number of frames we analyze. If there is unused bandwidth in frame i, then the amount of data in the queue must be less than the amount of assigned bandwidth. By considering the inter-frame dependence (i.e., the amount of data queued in the previous frame affects the amount of data in the queue in the current frame), this can be represented as the following condition:

X_{i-1} < W_i - \max\{0, Q_{i-1} - W_{i-1}\}    (2)

where Q_{i-1} is the amount of data stored in the queue before transmitting frame i-1, and W_i and W_{i-1} are the amounts of bandwidth assigned in frames i and i-1, respectively. Again, both W_i and W_{i-1} are at most W_{max}. The term \max\{0, Q_{i-1} - W_{i-1}\} represents the amount of queued data that arrived before frame i-1. As mentioned, X_{i-1} is the amount of data arriving in frame i-1, so X_{i-1} must be nonnegative. Consequently, the probability of having unused bandwidth in frame i, P_u(i), is derived as:

P_u(i) = \int_0^{W_i - \max\{0, Q_{i-1} - W_{i-1}\}} p(X) \, dX    (3)

Thus, the expected amount of unused bandwidth in frame i, E(i), can be derived as:

E(i) = \int_0^{W_i - \max\{0, Q_{i-1} - W_{i-1}\}} \big( W_i - \max\{0, Q_{i-1} - W_{i-1}\} - X \big) \, p(X) \, dX    (4)

Finally, by summing the expected unused bandwidth over all frames, the ratio of the total potentially unused bandwidth to the total reserved bandwidth in N frames, R_u, can be presented as:

R_u = \frac{\sum_{i=0}^{N-1} E(i)}{\sum_{i=0}^{N-1} W_i}    (5)
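As a numeric illustration of Eqs. (2)-(5), the Python sketch below replaces the integrals with sums over a Poisson arrival distribution, consistent with the traffic model above. The function names, the discretization and the per-frame inputs are assumptions of this sketch, not part of the paper's derivation.

import math

def poisson_pmf(k, lam):
    # Probability of k bytes (or units) arriving in a frame, mean lam.
    return math.exp(-lam) * lam**k / math.factorial(k)

def unused_bw_stats(w_i, w_prev, q_prev, lam):
    # Evaluate Eqs. (2)-(4): threshold U_i from Eq. (2), then the
    # probability and expected amount of unused bandwidth in frame i.
    u_i = w_i - max(0, q_prev - w_prev)
    if u_i <= 0:
        return 0.0, 0.0
    pu, e = 0.0, 0.0
    for x in range(0, math.ceil(u_i)):        # X_{i-1} < U_i
        p = poisson_pmf(x, lam)
        pu += p                                # Eq. (3)
        e += (u_i - x) * p                     # Eq. (4): unused amount U_i - X
    return pu, e

def recycling_potential(frames, lam):
    # Eq. (5): frames is a list of (w_i, w_prev, q_prev) tuples, one per frame.
    e_total = sum(unused_bw_stats(*f, lam)[1] for f in frames)
    w_total = sum(f[0] for f in frames)
    return e_total / w_total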
4.2 Performance analysis of the proposed scheme under different traffic loads

The traffic load in a network may vary dynamically. Thus, the network status can be classified into four stages: light, moderate, heavy and fully loaded. The performance of the proposed scheme may vary across these stages, so we investigate the performance of our scheme in each stage. Suppose Ball represents the total bandwidth supported by the BS. Assume Brt represents the bandwidth reserved by real-time connections and BRrt is the amount of additional bandwidth requested by them via BRs. Similarly, Bnrt represents the bandwidth assigned to non-real-time connections and BRnrt is the amount of additional bandwidth requested by them. The investigation of our scheme in each stage is shown as follows. All investigations are validated via simulation in Section 5.

1) Stage 1 (light load): This stage is defined as the case in which the total bandwidth demanded by the SSs is much less than the supply of
the BS. Formally, Ball >> Brt + Bnrt + BRrt + BRnrt. Since all BRs are granted in this stage, the BS schedules the CSs randomly. Moreover, every SS receives its desired amount of bandwidth. Therefore, for any given CS, Su, the probability of having data with which to recycle the unused bandwidth is small, which leads to a low Pr. Consequently, the probability that the CS recycles the unused bandwidth successfully is small, and the throughput gain of our scheme is not significant.

2) Stage 2 (moderate load): This stage is defined as equal demand and supply of bandwidth, i.e., Ball = Brt + Bnrt. In this stage, the BS can satisfy the existing demand but does not have available resources to admit new BRs. Since the currently desired bandwidth of every SS can be satisfied, the probability of the CS recycling the unused bandwidth may be higher than in Stage 1 but is still limited, and the throughput gain is still insignificant.

3) Stage 3 (heavy load): This stage is defined as the case in which the BS can satisfy the demand of the real-time connections but does not have enough bandwidth for the non-real-time connections. However, there are no rejected BRs in this stage. This can be expressed as Ball = Brt + aBnrt, where 0 <= a < 1. Since the bandwidth for non-real-time connections has been shrunk, there is a high probability that the CS accumulates non-real-time data in its queue. This leads to a higher Pr and Precycle. Thus, the throughput gain can be more significant than in Stages 1 and 2.

4) Stage 4 (full load): This stage describes a network with the heaviest traffic load. The difference between Stages 3 and 4 is that there are rejected BRs in Stage 4. It means that the probability of SSs accumulating non-real-time data in their queues is much higher than in Stage 3. Therefore, both Pr and Precycle are significantly high, and our scheme achieves its best performance in this stage.
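To make the four stage definitions concrete, here is a small Python sketch that classifies a snapshot of the quantities Ball, Brt, Bnrt, BRrt and BRnrt. The explicit margin used to approximate "much less than the supply" and the rejected-BR flag are assumptions of this sketch.

def classify_load(ball, brt, bnrt, br_rt, br_nrt,
                  rejected_brs=False, light_margin=2.0):
    # Rough classification of the four stages described above.
    if rejected_brs:
        return "Stage 4 (full load)"          # some BRs are rejected
    if ball >= light_margin * (brt + bnrt + br_rt + br_nrt):
        return "Stage 1 (light load)"         # Ball >> total demand including BRs
    if ball >= brt + bnrt:
        return "Stage 2 (moderate load)"      # existing demand met, no room for new BRs
    return "Stage 3 (heavy load)"             # Ball = Brt + a*Bnrt with a < 1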
5. RESULTS
Fig. 5. Simulation results of UBR.
Our simulation is conducted using Qualnet 4.5 [11]. In this section, we first present our simulation model, followed by the definition of the performance metrics used for measuring the network performance. The simulation results are shown in the third part of this section. At the end, we provide the validation of the theoretical analysis against the simulation results.

Fig. 5 presents the percentage of unused bandwidth in our simulation traffic model (i.e., the UBR). It shows the room for improvement available by implementing our scheme. From the simulation results, we conclude that the average UBR is around 38%. In the beginning, the UBR goes down, because each connection is still requesting bandwidth from the BS. As time goes on, the UBR starts to increase once the connections have received their requested bandwidth. After the 75th second of simulation time, the UBR increases dramatically due to the inactivity of the real-time connections. The purpose of having inactive real-time connections is to simulate a network with a large amount of unused bandwidth and to evaluate the improvement of the proposed scheme in such a network status. The evaluation is presented later in this section.

The simulation results of the recycling rate are presented in Fig. 6. From the figure, we observe that the recycling rate is very close to zero at the beginning of the simulation, because only a few connections transmit data during that time and the network has a light load. Therefore, only a few connections need to recycle the unused bandwidth from others. As time goes on, many active connections join the network, and the available bandwidth may not be able to satisfy the needs of the connections. Therefore, there is a high probability that a CS recycles the unused bandwidth, which leads to a higher BRR.

Fig. 7 shows the total bandwidth demand requested by the SSs during the simulation. In the figure, the dashed line indicates the system bandwidth capacity. During the simulation, the BS always allocates bandwidth to satisfy the demand of the real-time connections due to their QoS requirements. Therefore, the amount of bandwidth allocated to non-real-time connections may be shrunk. At the same time, new non-real-time data are generated, so non-real-time data accumulate in the queue. This is the reason that the demand for bandwidth keeps increasing.
Fig. 7. Simulation results of TG.
Fig. 6. Simulation results of BRR.

Fig. 7 presents the results of the TG calculated from the cases with and without our scheme. In the figure, the TG is very limited at the beginning of the simulation, which is similar to the results of the BRR. This reflects Stages 1 and 2 described in Section 4: there is no significant improvement from our scheme when the network load is light. As the traffic increases, the TG reaches around 15 to 20%. It is worth noting that the TG reaches around 20% at the 35th second of the simulation time, which matches the time at which the bandwidth demand reaches the system capacity shown in Fig. 8. Again, this confirms our earlier observation (Stages 3 and 4 in Section 4) that the proposed scheme can achieve a higher TG when the network is heavily loaded. After the 75th second, the TG increases dramatically. It shows that our scheme can achieve a significant improvement in TG when a large amount of unused bandwidth is available. We also investigated the delay in the cases with and without our scheme. By implementing our scheme, the average delay is improved by around 19% compared to the delay without our scheme, due to the higher overall system throughput achieved by our scheme. From the simulation results shown above, we conclude that the proposed scheme can not only improve the bandwidth utilization and throughput but also decrease the average delay. Moreover, the scheme achieves higher performance when the network is heavily loaded. This validates our performance analysis in Stages 1 to 3 in Section 4. Fig. 8 shows the throughput comparison between our scheme and the case with BRs defined in Section 4.
Fig. 8. Comparison with the case with BRs.

From the figure, we observe that the case with BRs maintains a higher throughput than the proposed scheme most of the time, but the achievable throughput of our scheme is higher. This is because the SS in the former case always requests bandwidth based on the amount of queued data, and the BS has to reserve a sufficient amount of bandwidth for BRs, which limits the amount of bandwidth available for data transmissions. Additionally, this comparison is based on the proposed scheduling algorithm, the Priority-based Scheduling Algorithm. The throughput of the proposed scheme can be enhanced further by the additional scheduling algorithms.
6. CONCLUSION
Variable bit rate applications generate data at varying rates. It is very challenging for SSs to predict the amount of arriving data precisely. Although the existing method allows the SS to adjust the reserved bandwidth via bandwidth requests in each frame, it cannot avoid the risk of failing to satisfy the QoS requirements. Moreover, the unused bandwidth that occurs in the current frame cannot be utilized by the existing bandwidth adjustment, since the adjusted amount of bandwidth can be applied, at the earliest,
in the next coming frame. Our research does not change the existing bandwidth reservation, ensuring that the same QoS-guaranteed services are provided. We proposed Bandwidth Recycling to recycle the unused bandwidth once it occurs. It allows the BS to schedule a complementary station for each transmission station. Each complementary station monitors the entire UL transmission interval of its corresponding TS and stands by for any opportunity to recycle the unused bandwidth. Besides the naive priority-based scheduling algorithm, three additional algorithms have been proposed to improve the recycling effectiveness. Our mathematical analysis and simulation results confirm that our scheme not only improves the throughput but also reduces the delay with negligible overhead, while satisfying the QoS requirements.
AUTHOR PROFILES
Venkata Subbareddy Pallamreddy has 9 years of teaching experience. He has published more than 15 papers in international journals. He is a board member of international journals and a member of various international journals. He is an expert in paper evaluation.

K. Vidya Sagar completed his B.Tech in 2008 and is pursuing his M.Tech at QIS College of Engineering & Tech, Ongole.
REFERENCES
[1] IEEE 802.16 WG, IEEE Standard for Local and Metropolitan Area Networks Part 16: Air Interface for Fixed Broadband Wireless Access Systems, IEEE Std 802.16-2004, pp. 1-857.
[2] IEEE 802.16 WG, IEEE Standard for Local and Metropolitan Area Networks Part 16: Air Interface for Fixed and Mobile Broadband Wireless Access Systems, Amendment 2, IEEE 802.16 Standard, December 2005.
[3] Jianhua He, Kun Yang and Ken Guild, A Dynamic Bandwidth Reservation Scheme for Hybrid IEEE 802.16 Wireless Networks, ICC'08, pp. 2571-2575.
[4] Kamal Gakhar, Mounir Achir and Annie Gravey, Dynamic resource reservation in IEEE 802.16 broadband wireless networks, IWQoS 2006, pp. 140-148.
[5] J. Tao, F. Liu, Z. Zeng and Z. Lin, Throughput enhancement in WiMax mesh networks using concurrent transmission, in Proc. IEEE Int. Conf. Wireless Commun., Netw. Mobile Comput., 2005, pp. 871-874.
[6] Xiaofeng Bai, Abdallah Shami and Yinghua Ye, Robust QoS Control for Single Carrier PMP Mode IEEE 802.16 Systems, IEEE Transactions on Mobile Computing, Vol. 7, No. 4, April 2008, pp. 416-429.
[7] Eun-Chan Park, Hwangnam Kim, Jae-Young Kim and HanSeok Kim, Dynamic Bandwidth Request-Allocation Algorithm for Real-time Services in IEEE 802.16 Broadband Wireless Access Networks, INFOCOM 2008, pp. 852-860.
[8] Thomas G. Robertazzi, Computer Networks and Systems: Theory and Performance Evaluation, Springer-Verlag, 1990.
[9] Kamal Gakhar, Mounir Achir and Annie Gravey, How Many Traffic Classes Do We Need in WiMAX?, WCNC 2007, pp. 3703-3708.
[10] Giuseppe Iazeolla, Pieter Kritzinger and Paolo Pileggi, Modelling quality of service in IEEE 802.16 networks, SoftCOM 2008, pp. 130-134.
[11] Qualnet, http://www.scalablenetworks.com/products/developer/new in 45.php
[12] Frank H.P. Fitzek and Martin Reisslein, MPEG-4 and H.263 Video Traces for Network Performance Evaluation, IEEE Network, Vol. 15, No. 6, pp. 40-54, November/December 2001.
[13] Patrick Seeling, Martin Reisslein and Beshan Kulapala, Network Performance Evaluation Using Frame Size and Quality Traces of Single-Layer and Two-Layer Video: A Tutorial, IEEE Communications Surveys and Tutorials, Vol. 6, No. 2, pp. 58-78, Third Quarter 2004.