WCNC 2007







Experiences using Gateway-Enforced Rate-Limiting Techniques in Wireless Mesh Networks

Kamran Jamshaid and Paul A.S. Ward
Shoshin Distributed Systems Group, Department of Electrical and Computer Engineering
University of Waterloo, Waterloo, Ontario N2L 3G1
Email: {kjamshai, pasward}@shoshin.uwaterloo.ca

Abstract— Gateway nodes in a wireless mesh network (WMN) bridge traffic between the mesh nodes and the public Internet. This makes them a suitable aggregation point for policy enforcement or other traffic-shaping responsibilities that may be required to support a scalable, functional mesh network. In this paper we evaluate two gateway-enforced rate-limiting mechanisms that aim to avoid congestion and support network-level fairness: Active Queue Management (AQM) techniques, which have previously been widely studied in the context of wired networks, and our Gateway Rate Control (GRC) mechanism. We evaluate the performance of these two techniques through simulations of an 802.11-based multihop mesh network. Our experiments show that the conventional use of AQM techniques fails to provide effective congestion control, because these mesh networks exhibit different congestion characteristics than wired networks. Specifically, in a wired network, packet losses under congestion occur at the router queue feeding the bottleneck link. By contrast, in a WMN, many geographically dispersed points of contention may exist due to asymmetric views of the channel state between different mesh routers. As such, gateway rate-limiting techniques like AQM are ineffective, as the gateway queue is not the only bottleneck. Our GRC protocol takes a different approach by rate limiting each active flow to its fair share, thus preserving enough capacity to allow the disadvantaged flows to obtain their fair share of the network throughput. The GRC technique can be further extended to provide Quality of Service (QoS) guarantees or to enforce different notions of fairness.

I. INTRODUCTION

In recent years Wireless Mesh Networks (WMNs) have emerged as a successful architecture for providing cost-effective, rapidly deployable network access in a variety of settings. We are investigating their use as infrastructure-based or community-based networks. Typically, these networks provide last-mile Internet access through mesh routers affixed to residential rooftops, forming a multihop wireless network. Clients connect to their preferred mesh router, either via wire or over a (possibly orthogonal) wireless channel. In this regard, a wireless-connected client views a WMN as just another WLAN. Any mesh router that also has Internet connectivity is referred to as a gateway. Gateway nodes provide wide-area access, which is then shared between all the nodes in the network.

We first highlight some key characteristics of these infrastructure-based mesh networks that distinguish them from other ad hoc wireless networks. As the mesh routers are typically fixed to building structures, there are no topological variations due to mesh-router mobility, though infrequent topology changes might still occur because of the addition, removal, or failure of mesh routers. Client mobility is likewise not relevant to this work, as such clients access the network per the standard WLAN mode of operation. This static topology also precludes other mobility requirements like battery-power conservation; these mesh routers are typically powered through the electricity grid. Finally, the traffic pattern in these networks is highly skewed, with most of the traffic either directed to or originating from the wired Internet through one of the gateway nodes. For the purposes of this paper, we restrict our analysis to single-gateway systems in which client access is non-interfering with mesh-router operation. This is consistent with systems developed by wireless equipment vendors such as Nortel [13], and is a generalization of the TAP model [4] from chains to arbitrary graphs.

Most WMNs use commodity 802.11 hardware because of its cost advantage. However, the CSMA/CA MAC demonstrates its limitations in a multihop network. Specifically, because of the hidden- and exposed-terminal problems [17], nodes may experience varying spatial (location-dependent) contention for the wireless channel. This produces an inconsistent view of the channel state among the nodes, resulting in throughput unfairness between different flows. This unfairness deteriorates with increasing traffic loads and multiple backlogged flows, eventually resulting in flow starvation for disadvantaged flows. In particular, nodes multiple hops away from the gateway starve while all the available network capacity is consumed by nodes closer to the gateway [8].

We are currently exploring the use of traffic-aggregation points like gateway nodes for policy enforcement and other traffic-shaping responsibilities that may support scalable, functional mesh networks. Given WMN traffic patterns, the gateway is a natural choice for enforcing fairness and bandwidth-allocation policies. It has a unified view of the entire network, and thus is better positioned to manage a fair allocation of network resources. In this paper, we compare the use of two gateway-enforced flow-control schemes: Active Queue Management (AQM) and our recently proposed Gateway Rate Control (GRC) mechanism [7].

One way to perform allocation of resources is to use queue-management techniques. AQM exploits the fact that congestion, in the form of queue buildups, typically occurs at network boundaries where flows from high-throughput links are aggregated across slower links. The gateway mesh node acts as a bridge between the wired high-speed public Internet and the shared-access multihop mesh network. If we consider the broadcast wireless medium to be a system bottleneck, then a similar scenario emerges in which a high-speed wired link feeds this shared bottleneck through the gateway router (Fig. 1), thus creating opportunities for potentially reusing the rich AQM research literature in this new networking domain. We defer description of relevant AQM techniques to Sect. IV.

Fig. 1. [The gateway bridging the wired Internet and the shared-access multihop mesh network.]

We have recently proposed GRC [7], a gateway-enforced rate-control mechanism that provides network-level fairness in a WMN. Unlike AQM techniques, we do not monitor queue sizes, but instead use a simple computational model that calculates the fair-share rate per active stream. We defer description of this model to Sect. V. By limiting the throughput of aggressive TCP sources to their fair share, we operate the network at traffic loads that preserve enough network capacity to allow disadvantaged distant nodes to obtain their fair-share throughput.

This paper describes our experiences testing AQM techniques in WMNs. We discovered that these techniques fail, as the gateway queue does not exhibit the congestion characteristics observed in wired routers that interface across a high-bandwidth and a low-bandwidth link. In this paper, we compare the performance of GRC against AQM. We also include additional experiments illustrating the performance of GRC in other mesh topologies.

The remainder of this paper is organized as follows. In Sect. II we first cover the background and the related work. In Sect. III we investigate the congestion characteristics of multihop mesh networks through a series of network simulations, and explain why AQM techniques fail to yield the expected results. In Sect. IV we describe RED and FRED, two popular AQM techniques that have successfully been used to provide congestion control and fairness in wired networks, and describe why FRED fails to yield the desired results. In Sect. V we review our GRC mechanism, which enforces implicit flow control by dropping or delaying excess traffic at the gateway. In Sect. VI we provide simulation results that compare the performance of FRED and GRC in a simple WMN topology. Finally, we conclude by observing what issues remain open.

II. RELATED WORK

Fairness issues have recently received significant attention. Gambiroza et al. [4] propose a time-fairness reference model that removes the spatial bias for flows traversing multiple hops, and propose a distributed algorithm that allows them to achieve their fair-rate allocation. In wireless networks, congestion is determined by measuring MAC-layer utilization, typically obtained through snooping [5], or by observing instantaneous transmission queue lengths ([14], [18]). Congestion control is then exercised through one of the following mechanisms: source rate limiting, where the sources rate limit themselves either according to neighborhood activity [5] or per some computational model [10] that takes into account the network topology and stream-activity information; hop-by-hop flow control, where nodes other than the source can also enforce rate control along the path to the destination by monitoring either local queue sizes [5] or "neighborhood" queue sizes [18]; and a prioritized MAC layer, where the MAC-layer backoff information is explicitly shared [1] or adjusted to allow prioritized access to nodes higher in the connectivity graph [5]. Rangwala et al. [14] have proposed a mechanism that allows a distributed set of sensor nodes to detect incipient congestion, communicate this to interfering nodes, and follow an AIMD rate-control mechanism for converging to the fair rate. There is also ongoing work in adapting TCP to multihop wireless networks (e.g., [15]). In contrast to these, our work explores a different approach by enforcing centralized flow control, which allows us to vary between different rate-control criteria without requiring any modifications to the wireless nodes.

There are a number of publications that focus on the mathematical modeling of a given topology for determining the optimal fair share per active flow. Another common model is the clique-graph model [12], which uses the link-contention graph to determine the maximal clique that bounds the capacity of the network. Our GRC mechanism uses the fair-share computational model of Li et al. [10], which is derived from the nominal capacity model of Jun and Sichitiu [9].

Finally, we observe that there has been little work discussing the applicability (or lack thereof) of AQM techniques for multihop wireless networks. One noticeable exception is Xu et al.'s [18] use of RED over a virtual distributed "neighborhood" queue comprising all nodes that contend for channel access. Each node computes the drop probability based on its notion of the size of this distributed queue, and asks its neighbours to drop packets in case congestion is detected. Our work, by contrast, explores the traditional use of AQM techniques at the wired-wireless boundary at the gateway interface.

III. 802.11 MULTIHOP NETWORKS UNDER VARIABLE TRAFFIC LOADS

Consider the simple "chain" topology shown in Fig. 2. All nodes are 200 m apart, thus allowing only the neighboring nodes to directly communicate with each other. This is consistent with extant WMN deployments. Using ns-2 [16], we simulate a traffic upload scenario in which we attach a variable-bit-rate UDP traffic generator to each mesh router, with traffic destined to the wired Internet through the gateway. We source rate limit (in consort) all nodes over a range of traffic-generation rates, starting from 0 to a rate that is well above the fair-share allocation per stream. The simulation uses the default ns-2 radio model [16]. The RTS/CTS handshake was disabled for these simulations. We assume that the wired interface on the gateway has zero loss with negligible latency, and that the link capacities are provisioned such that the wireless domain remains the bottleneck.

Fig. 2. [A simple 5-node chain topology.]

The throughput plot produced in this experiment is shown in Fig. 3. We observe that as the offered load increases, the throughput for each stream increases linearly until we hit the fair-share point. For the given topology, this corresponds to around 140 kbps. Increasing the traffic load beyond this fair-share rate produces network congestion (i.e., the traffic generated exceeds the carrying capacity of the network). At these higher traffic loads, we see an increasing unfairness experienced by the 2-hop flows (flows 1→GW and 4→GW), while the 1-hop flows that are within carrier-sense range fairly share the network bandwidth.

Fig. 3. [Offered load vs. throughput for the topology shown in Fig. 2.]

Our analysis of the simulation trace corroborates that this is primarily due to hidden-terminal problems that are exacerbated under increasing traffic load. Nodes 1 and 3 are hidden terminals (as are nodes 2 and 4). The use of the RTS/CTS handshake does not solve this problem, as these hidden terminals are outside transmission range: node 1 cannot hear node 3's RTS and cannot decode GW's subsequent CTS. Instead, node 1 can discover transmission opportunities only through random backoff. This produces an asymmetric view of the channel state between nodes 1 and 3. The degree of this asymmetry increases with increasing traffic loads, as node 1 experiences backoff far more frequently and to a greater degree, resulting in flow unfairness and subsequent starvation at high enough traffic loads. The same phenomenon is observed for node 4, which has to contend unfairly with node 2's transmissions.

Fig. 4 shows the throughput plot (averaged over 5 sec.) with TCP NewReno [2] sources attached to the mesh routers for the same topology. TCP builds up a large congestion window for advantaged nodes based on their favorable (though incorrect) local view of the channel state, allowing these nodes to inject traffic into the network beyond their fair-share rate at the cost of starving the 2-hop flows. This asymmetric view of the channel state, combined with TCP's greedy nature, results in complete starvation of the disadvantaged nodes 1 and 4. We wish to emphasize that link-layer issues are the root cause of the unfairness shown in Figs. 3 and 4; these cannot be completely resolved by higher-layer congestion-control protocols like TCP.

Fig. 4. [Per-flow TCP throughput over time for the topology shown in Fig. 2.]

IV. AQM TECHNIQUES

One way to perform allocation of resources is to use queue-management techniques. AQM takes a proactive approach towards congestion avoidance by actively controlling flow rates for various connections. Random Early Detection (RED) [3] is an example of such a queue-management protocol. RED gateways provide congestion avoidance by detecting incipient congestion through active monitoring of average queue sizes at the gateway. When incipient congestion is detected (the average queue size exceeds a certain threshold), the gateway can notify the connection through explicit feedback or by dropping packets. RED gateways require that the transport protocol managing those connections be responsive to congestion notification indicated either through marked packets or through packet loss. Adaptive data sources like TCP register this packet loss as an indication of congestion, and slow down by reducing their congestion-window size. RED gateways are typically used at network boundaries where queue build-ups are expected when flows from high-throughput networks are being aggregated across slower links.
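For concreteness, the RED accept/drop decision just described can be sketched as follows. This is a minimal illustrative reconstruction of the classic algorithm [3], not the simulation code used in this paper, and the parameter values are ours rather than the paper's:

```python
import random

class REDQueue:
    """Minimal sketch of a RED gateway queue [3]; parameters are illustrative."""

    def __init__(self, min_th=5, max_th=15, max_p=0.02, weight=0.002):
        self.min_th, self.max_th = min_th, max_th   # thresholds on the average queue (packets)
        self.max_p = max_p                          # drop probability reached at max_th
        self.weight = weight                        # EWMA weight for the average queue size
        self.avg = 0.0                              # average queue size
        self.queue = []

    def enqueue(self, packet):
        # Update the exponentially weighted moving average of the queue size.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            self.queue.append(packet)               # accept: no incipient congestion
            return True
        if self.avg >= self.max_th:
            return False                            # drop: sustained congestion
        # Between the thresholds, drop with probability rising linearly to max_p.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        if random.random() < p:
            return False
        self.queue.append(packet)
        return True
```

Note that every arriving packet sees the same drop probability regardless of which flow it belongs to; this flow-blindness is the property behind RED's fairness limitations discussed below.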
While RED gateways have proven effective in avoiding congestion, it has been shown that they provide little fairness improvement [11]. This is because RED gateways do not differentiate between particular connections or classes of connections [3]: all received packets (irrespective of flow or size) are marked with the same drop probability. The fact that all connections see the same instantaneous loss rate means that even a connection using less than its fair share will be subject to packet drops.

Flow Random Early Drop (FRED) [11] is an extension to the RED algorithm designed to reduce the unfairness between the flows. By using per-active-flow accounting, FRED ensures that the drop rate for a flow depends on its buffer usage [11]. A brief description of FRED is as follows. A FRED gateway uses flow classification to enqueue flows into logically separate buffers. For each flow i, it maintains the corresponding queue length qlen_i. It defines min_q and max_q, which respectively are the minimum and maximum number of packets individual flows are allowed to queue. As in RED, it also maintains min_th, max_th, and avg for the overall queue. All new packet arrivals are accepted as long as avg is below min_th. When avg lies between min_th and max_th, a new packet arrival is deterministically accepted only if the corresponding qlen_i is less than min_q; otherwise, the packet is dropped with a probability that increases with increasing queue size. In essence, FRED applies per-flow RED to create isolation between the flows.
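The per-flow accept/drop rule at the heart of FRED can be sketched as below. This is an illustrative reconstruction of the rule as described above (it omits FRED's max_q cap and the averaging update for brevity), not the ns-2 implementation used in the experiments:

```python
import random

class FREDQueue:
    """Minimal sketch of FRED's per-flow accept/drop rule [11]; illustrative parameters."""

    def __init__(self, min_q=2, min_th=5, max_th=15, max_p=0.02):
        self.min_q = min_q                # per-flow guaranteed buffer share (packets)
        self.min_th, self.max_th = min_th, max_th
        self.max_p = max_p
        self.avg = 0.0                    # averaged overall queue size (maintained as in RED)
        self.qlen = {}                    # qlen_i: per-flow queue length

    def enqueue(self, flow_id):
        qlen_i = self.qlen.get(flow_id, 0)
        if self.avg < self.min_th:
            accept = True                 # no incipient congestion: accept everything
        elif self.avg < self.max_th and qlen_i < self.min_q:
            accept = True                 # flow is under its guaranteed buffer share
        elif self.avg >= self.max_th:
            accept = False                # sustained congestion: drop
        else:
            # Flow already holds >= min_q packets: RED-style probabilistic drop.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            accept = random.random() >= p
        if accept:
            self.qlen[flow_id] = qlen_i + 1
        return accept
```

The point of the sketch is that, unlike RED, the drop decision depends on qlen_i: a flow buffering fewer than min_q packets is protected even when the aggregate queue is congested.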
V. GATEWAY RATE CONTROL

As with FRED, our recently proposed GRC technique [7] requires the gateway to perform flow classification for all the traffic entering the gateway. In contrast with FRED, it explicitly rate limits each flow to its fair share, using deterministic rather than probabilistic traffic policing. Our analysis in Sect. VI shows that this allows an equilibrium to be established in which each source can consume only its fair share of the network throughput. That is, when each source is rate limited to its fair share, the WMN is generally able to provide fair access to the gateway for all flows. Rate limiting leads to packet drops or delays for aggressive data sources; adaptive sources such as TCP respond by reducing their congestion windows, which frees up the wireless medium and provides an opportunity for starving nodes to transmit their packets.

Our gateway rate-control protocol consists of three steps:
1) Gather the information required to compute the fair-share bandwidth;
2) Compute the fair share for each stream;
3) Enforce the computed rate for each stream at the gateway.
We now describe the three steps.

A. Information Gathering

The type of information required depends upon the complexity of the computational model. For the work in this paper, we use the simple model of [9], which only requires neighborhood information. In general, we need some notion of the network topology, as well as information as to which links interfere with each other. The topology information can be extracted from routing protocols (e.g., link-state routing protocols like OLSR, or source-routing protocols like DSR). We also need stream-activity information, since there is no need to reserve bandwidth for nodes that are not transmitting. As all flows pass through the gateway, this can simply be determined by performing per-packet inspection.

We consider network demand to be binary: either a node is silent or its demand is insatiable. That is, a node either is not transmitting or will increase its transmission rate to the available bandwidth. This is generally true, and corresponds to TCP behavior. We also presume that routing is relatively static; it changes infrequently compared with traffic-demand changes. It would not be difficult to remove this assumption by simply recomputing the feasibility as routing changed.

B. Fair Share Computation

We adopt a restricted version of the model developed by Li et al. [10], which is derived from the nominal capacity model of Jun and Sichitiu [9]. The network is modeled as a connectivity graph with mesh nodes as vertices and wireless links as bidirectional edges. A link interferes with another link if either endpoint of one link is within transmission range of either endpoint of the other link. That is, the set of all links that interfere with a given link, referred to as the collision domain of that link, are those within two hops of either endpoint of the link, defined by transmission range rather than interference range. Assuming that none of these links can transmit simultaneously actually over-estimates link contention; using transmission range rather than interference range, on the other hand, under-estimates it. The presumption (borne out by detailed simulation studies) is that the overall model is approximately correct.

The model assumes that the links within a collision domain cannot transmit simultaneously. Link usage is determined by routing and demand. Given the stream activity, we can then compute the load over each link, and in turn compute the load in each collision domain, which is a function of the usage of the links within that domain. It is then sufficient to determine the bottleneck collision domain, which is simply the domain with the greatest load; the fair share is determined by dividing the link capacity by that load.
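The bottleneck-collision-domain computation can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: two links are taken to interfere when an endpoint of one is within transmission range (here, one hop) of an endpoint of the other, link loads count the flows crossing each link, and the 1400 kbps nominal capacity in the example is an assumed figure chosen only to make the arithmetic land near the 140 kbps fair-share point observed in Sect. III:

```python
def hops(graph, src):
    """BFS hop counts from src over an undirected adjacency dict."""
    dist, frontier = {src: 0}, [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def fair_share(graph, link_load, capacity):
    """Fair share = capacity / load of the bottleneck collision domain."""
    dist = {n: hops(graph, n) for n in graph}

    def interferes(l1, l2):
        # An endpoint of one link within one hop of an endpoint of the other.
        return any(dist[a].get(b, 99) <= 1 for a in l1 for b in l2)

    # Load of each link's collision domain = sum of loads of interfering links.
    domain_loads = [
        sum(ld for other, ld in link_load.items() if interferes(link, other))
        for link in link_load
    ]
    return capacity / max(domain_loads)

# 5-node chain GW-1-2-3-4, every mesh node sending one flow to the gateway:
chain = {"GW": ["1"], "1": ["GW", "2"], "2": ["1", "3"], "3": ["2", "4"], "4": ["3"]}
loads = {("GW", "1"): 4, ("1", "2"): 3, ("2", "3"): 2, ("3", "4"): 1}
print(fair_share(chain, loads, 1400.0))  # -> 140.0
```

In this example the bottleneck is the collision domain of link (1, 2), which contains all four links for a load of 4+3+2+1 = 10 flow-units, so each flow's fair share is one tenth of the nominal capacity.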
C. Fair Share Enforcement

To enforce the fair-share rate, the gateway node sorts all incoming packets by stream, placing them into a token-bucket-controlled FIFO. Each token bucket has an adjustable rate, releasing a packet from the FIFO after an average delay of lastPacketSize/lastRate from when it last released a packet.

VI. ANALYSIS

A. Performance comparison between AQM and GRC

We compared the performance of FRED against a simple Drop-Tail queue as well as against GRC. We used the simple 3-hop chain shown in Fig. 5, with downstream TCP flows emanating from the wired network and terminating at the mesh routers. We simulated the three algorithms using the ns-2 [16] simulator with the radio-model defaults described earlier in Sect. III; the ns-2 default parameters for FRED operation were used. We assume that the bandwidth of the wired Internet connection at the gateway exceeds the wireless MAC-layer bandwidth; i.e., the wireless domain remains the bottleneck (Fig. 1). As a result, the output queue on the wired interface of the gateway will never exceed one packet.

Fig. 5. [Simulation topology with downstream TCP flows emanating from the wired network and terminating at mesh routers.]
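The per-stream release rule of Sect. V-C — the next packet may leave lastPacketSize/lastRate seconds after the previous one — can be sketched as below. This is an illustrative reconstruction, not the authors' ns-2 code; the externally supplied clock (`now`) is our assumption about how such a queue would be driven:

```python
class RateLimitedFifo:
    """Sketch of GRC's per-stream FIFO: packets leave at the enforced fair rate."""

    def __init__(self, rate_bps):
        self.rate = rate_bps          # enforced fair-share rate (bits/s), adjustable
        self.fifo = []                # queued (packet, size_bits) tuples
        self.next_release = 0.0       # earliest time the next packet may leave

    def enqueue(self, packet, size_bits):
        self.fifo.append((packet, size_bits))

    def dequeue(self, now):
        """Release the head packet once the inter-packet delay has elapsed."""
        if not self.fifo or now < self.next_release:
            return None               # excess traffic is delayed, not forwarded
        packet, size_bits = self.fifo.pop(0)
        # The next packet must wait lastPacketSize / lastRate seconds.
        self.next_release = now + size_bits / self.rate
        return packet
```

For example, at an enforced rate of 140 kbps, releasing a 14000-bit packet imposes a 0.1 s gap before the next release, so a backlogged stream drains at exactly its fair share.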
Therefore, for our initial performance studies, we test the performance of these queue-management protocols only for downstream flows (i.e., flows originating in the wired domain and terminating at the mesh nodes), because the queue build-up required by FRED only takes place in this direction. The queue size at the gateway was set to 50 packets. Both FRED and GRC rely on the transport protocol being inherently responsive to congestion notification indicated through delayed or dropped packets. TCP is the canonical example of such a protocol, and dominates Internet traffic; as such, we use it, specifically TCP NewReno [2]. As quantitative measures of network-level fairness and starvation we use Jain's Fairness Index (JFI) [6] and the ratio min-throughput/avg-throughput, which estimates how far the worst-off flow falls below the average.
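Jain's index for throughputs x_1, …, x_n is (Σx)²/(n·Σx²); it equals 1 when all flows receive identical throughput and falls toward 1/n as flows starve. A small sketch of both fairness metrics (illustrative helpers, not the authors' analysis scripts):

```python
def jain_fairness_index(throughputs):
    """JFI [6]: (sum x)^2 / (n * sum x^2); 1.0 means perfectly fair."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

def starvation_ratio(throughputs):
    """min-throughput / avg-throughput; values near 0 indicate a starving flow."""
    return min(throughputs) / (sum(throughputs) / len(throughputs))

print(jain_fairness_index([100.0, 100.0, 100.0]))  # -> 1.0
print(jain_fairness_index([300.0, 290.0, 0.1]))    # close to 2/3: one of three flows starved
```

With three flows, one starved flow drives the JFI toward 2/3 even though the other two are nearly equal, which is why the min/avg ratio is reported alongside it.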
Table I summarizes our results. Drop-Tail exhibits the basic unfairness scenario, similar to the results we observed in Fig. 4, with flow 3 getting starved. The use of FRED does not prevent the starvation of flow 3. Though flow 1 transmits fewer packets with FRED, slightly reducing the load on the wireless medium, the extra available bandwidth is acquired by flow 2, as very little traffic is sent out for flow 3 because of the combined effect of the 802.11 contention window and the TCP congestion window.

We enabled queue monitoring at the gateway to explain this behavior. Fig. 6 shows the per-flow data arrival rate (not ACKs) in the FRED queue at the gateway during the simulation run. The queue space is evenly shared between the flows at the start of the simulation, but this sharing continues deteriorating during the simulation execution. New data packets are not being generated for flow 3 because ACKs for the previously transmitted ones have not been received (a loss rate of 39.6% for flow 3 ACKs with FRED). This is because the gateway acts as a hidden terminal for TCP ACKs generated by node 3; as discussed previously in Sect. III, this hidden-terminal scenario cannot be resolved using RTS/CTS, as nodes GW and 3 are out of each other's transmission range. Because of frequent collisions, node 3 repeatedly increases its contention window to a point where TCP timeouts occur, and the packets have to be retransmitted by the gateway. While the FRED experiment does cause some queue drops at the gateway, they do not slow down flows 1 and 2 sufficiently to preclude the starvation of flow 3. Indeed, the gateway queue is not the bottleneck, as shown by the zero queue-controlled drops in the 150 sec. simulation run with the Drop-Tail discipline; the FRED algorithm never reaches the minimum queue size necessary to start dropping packets.

Fig. 6. [New data packet arrival rate in the FRED queue, per flow.]

Only GRC is able to enforce absolute fairness between the flows. GRC, by contrast, does not lose packets at the gateway at this queue size, but does delay them sufficiently to cause flows 1 and 2 to infer loss and invoke congestion control. Reducing the queue size in GRC would cause explicit packet loss rather than delay. The aggregate bandwidth achieved by GRC is noticeably less than that of the other approaches, but this is a direct result of the fairness requirement [4].

B. GRC Evaluation

We tested the performance of GRC on a number of different chain, grid, and random topologies. These experiments represent the scenario in which every mesh router has an active TCP stream to the wired Internet via the gateway. As detailed results have been presented elsewhere [7], we present only a brief summary of our experiments in Table II. FS corresponds to the computed fair share per stream, while σ is the standard deviation of the average throughputs of all active TCP streams. Overall, we observed that our algorithm successfully operated the network at a capacity that meets the fair-share requirements of all active streams.
VII. CONCLUSION AND FUTURE WORK

WMNs, particularly those based on the 802.11 MAC, exhibit extreme fairness problems, requiring existing deployments to limit the maximum number of hops to the gateway to prevent distant nodes from starving. In this paper we evaluated the use of two gateway-enforced rate-limiting mechanisms to improve the fairness characteristics of these networks. Flows in these networks can starve, as packet drops can occur in any congested region of the physical network. Unlike in wired networks, this loss does not always occur at the queue interfacing the high-speed and the low-speed networks (the gateway node for WMNs), but can occur at any intermediate node that is disadvantaged due to an asymmetric view of the channel state between the mesh nodes. We found that AQM-based techniques fail to solve this problem: the bottleneck in these networks is not the gateway queue interfacing the wired and wireless domains, but physical geographical regions that are interspersed through the network and disadvantaged due to reasons such as hidden-terminal problems. As such, the use of AQM techniques shows negligible fairness improvement in multihop wireless networks. Protocols like our GRC fare better because they reduce the degree of this asymmetry by limiting aggressive TCP sources to their fair share, thus preserving enough network capacity to allow disadvantaged nodes to obtain their fair-share network throughput. Our GRC mechanism prevents channel capture by aggressive sources, and simulation results over various topologies demonstrate that our approach is effective in providing fair sharing of the network resources.

The simulations in this paper considered static, long-lived TCP flows. We have recently tested the performance of this algorithm with dynamic TCP flows, and by further cutting the convergence time we hope to extend it to web-like traffic. GRC as a framework is independent of the computational model discussed in Sect. V-B; any appropriate rate controller can be used instead. In particular, we are developing a feedback-based controller that adapts the enforced fair rate based on existing measured rates. As the quality of a link can vary rapidly and unpredictably over time, such a feedback-based controller would work even where the computational model is likely to be ineffective.

TABLE I. [Performance comparison and gateway-queue analysis using Drop-Tail queue, FRED queue, and GRC: per-stream average throughput (kbps), JFI, min-throughput/avg-throughput, % loss (collisions) for data and ACKs, and data packets received and dropped at the gateway queue, for streams S1-S3.]

TABLE II. [Quantitative fairness analysis of tested topologies (5-, 10-, and 15-hop chains; 3x3 and 4x4 grids; 5-, 10-, and 15-node random topologies): FS (kbps), σ/FS, JFI, and min-throughput/avg-throughput.]

REFERENCES

[1] V. Bharghavan, A. Demers, S. Shenker, and L. Zhang. MACAW: A media access protocol for wireless LANs. In Proc. of ACM SIGCOMM, 1994.
[2] S. Floyd and T. Henderson. The NewReno modification to TCP's fast recovery algorithm. RFC 3782, Internet Engineering Task Force, April 2004.
[3] S. Floyd and V. Jacobson. Random early detection gateways for congestion avoidance. IEEE/ACM Transactions on Networking, pages 397-413, 1993.
[4] V. Gambiroza, B. Sadeghi, and E. Knightly. End-to-end performance and fairness in multihop wireless backhaul networks. In Proc. of ACM MobiCom '04, pages 287-301, September 2004.
[5] B. Hull, K. Jamieson, and H. Balakrishnan. Mitigating congestion in wireless sensor networks. In ACM SenSys 2004, Baltimore, MD, November 2004.
[6] R. Jain, D. Chiu, and W. Hawe. A quantitative measure of fairness and discrimination for resource allocation in shared computer systems. DEC Report TR-301, September 1984.
[7] K. Jamshaid and P. A. S. Ward. Gateway rate control of wireless mesh networks. In Proc. of WiMeshNets, August 2006.
[8] J. Jun and M. Sichitiu. Fairness and QoS in multihop wireless networks. In Proc. of the IEEE Vehicular Technology Conference (VTC), October 2003.
[9] J. Jun and M. Sichitiu. The nominal capacity of wireless mesh networks. IEEE Wireless Communications, pages 8-14, October 2003.
[10] L. Li et al. Achieving fairness in 802.11-based wireless mesh networks. Ad Hoc Networks, submitted, 2006.
[11] D. Lin and R. Morris. Dynamics of random early detection. In Proc. of ACM SIGCOMM, 1997.
[12] T. Nandagopal, T. Kim, X. Gao, and V. Bharghavan. Achieving MAC layer fairness in wireless packet networks. In Proc. of ACM MobiCom '00, pages 87-98, 2000.
[13] Nortel Networks. Wireless mesh network: Extending the reach of wireless LAN, securely and cost-effectively (white paper).
[14] S. Rangwala, R. Gummadi, R. Govindan, and K. Psounis. Interference-aware fair rate control in wireless sensor networks. In Proc. of ACM SIGCOMM, pages 63-74, 2006.
[15] K. Tan, F. Jiang, Q. Zhang, and X. Shen. Congestion control in multihop wireless networks. In IEEE SECON, 2005.
[16] The Network Simulator ns-2. http://www.isi.edu/nsnam/ns/.
[17] F. Tobagi and L. Kleinrock. Packet switching in radio channels: Part II—The hidden terminal problem in carrier sense multiple-access and the busy-tone solution. IEEE Transactions on Communications, 23(12):1417-1433, December 1975.
[18] K. Xu, M. Gerla, L. Qi, and Y. Shu. Enhancing TCP fairness in ad hoc wireless networks using neighborhood RED. In Proc. of ACM MobiCom '03, pages 16-28, October 2003.