Search results for "Real-time computing"
Showing 10 of 366 documents
An improved noninvasive method for measuring heartbeat of intertidal animals
2013
Since its emergence two decades ago, the use of infrared technology for noninvasively measuring the heartbeat rates of invertebrates has provided valuable insight into the physiology and ecology of intertidal organisms. During that time period, the hardware needed for this method has been adapted to currently available electronic components, making the original published description obsolete. This article reviews the history of heartbeat sensing technology, and describes the design and function of a modern and simplified infrared heartbeat rate sensing system compatible with many intertidal and marine invertebrates. This technique overcomes drawbacks and obstacles encountered with previous …
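The article centres on the sensor hardware itself, but the rate estimate ultimately comes from the digitized infrared signal. Purely as an illustration, here is a minimal sketch of one way to extract beats per minute from such a signal via threshold crossings; the signal, sampling rate, and threshold are synthetic assumptions, not values from the article.

```python
# Minimal sketch: estimating heartbeat rate (BPM) from a digitized IR
# signal by counting rising threshold crossings. The signal, sampling
# rate, and threshold below are illustrative assumptions.

import numpy as np

def heartbeat_rate_bpm(signal, fs_hz, threshold=None):
    """Count rising threshold crossings and convert to beats per minute."""
    signal = np.asarray(signal, dtype=float)
    if threshold is None:
        # Simple adaptive threshold: midpoint between mean and max.
        threshold = 0.5 * (signal.mean() + signal.max())
    above = signal >= threshold
    # A beat is counted each time the signal rises through the threshold.
    rising_edges = np.count_nonzero(~above[:-1] & above[1:])
    duration_min = len(signal) / fs_hz / 60.0
    return rising_edges / duration_min

# Synthetic example: a 1.2 Hz "heartbeat" (72 BPM) sampled at 100 Hz for 30 s.
fs = 100.0
t = np.arange(0, 30, 1.0 / fs)
synthetic = 1.0 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
print(round(heartbeat_rate_bpm(synthetic, fs)))  # ~72
```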
Revisit of RTS/CTS Exchange in High-Speed IEEE 802.11 Networks
2005
IEEE 802.11 medium access control (MAC), called the distributed coordination function (DCF), provides two different access modes, namely 2-way (basic access) and 4-way (RTS/CTS) handshaking. The 4-way handshaking was introduced to combat the hidden terminal phenomenon. It has also been shown that such a mechanism can be beneficial even in the absence of hidden terminals, because of the reduction in collision time. We analyze the effectiveness of the RTS/CTS access mode in current 802.11b and 802.11a networks. Since the rates employed for control frame transmissions can be much lower than the rate employed for data frames, the assumption on the basis of the 4-way handshaking introd…
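As a rough illustration of the trade-off the paper revisits (not its actual model), the sketch below compares the time lost to a collision under basic access with the time lost under RTS/CTS, plus the handshake overhead itself, using assumed frame sizes and rates.

```python
# Minimal sketch (not from the paper): time lost to a single collision under
# basic access vs. RTS/CTS in 802.11, with illustrative frame sizes and rates.
# SIFS/DIFS, preambles, and ACKs are deliberately ignored to keep it simple.

def tx_time_us(bytes_, rate_mbps):
    return 8 * bytes_ / rate_mbps  # microseconds

data_bytes, rts_bytes, cts_bytes = 1500, 20, 14   # assumed payload/control sizes
data_rate, ctrl_rate = 54.0, 6.0                  # Mb/s: assumed data vs. control rates

# Basic access: a collision wastes the whole data frame transmission.
collision_basic = tx_time_us(data_bytes, data_rate)

# RTS/CTS: a collision is resolved after the short RTS frame.
collision_rts = tx_time_us(rts_bytes, ctrl_rate)

# Per-successful-exchange overhead added by the handshake itself.
handshake_overhead = tx_time_us(rts_bytes, ctrl_rate) + tx_time_us(cts_bytes, ctrl_rate)

print(f"collision cost, basic access : {collision_basic:7.1f} us")
print(f"collision cost, RTS/CTS      : {collision_rts:7.1f} us")
print(f"extra handshake overhead     : {handshake_overhead:7.1f} us")
```

With control frames sent at a much lower rate than data frames, the fixed handshake cost grows while the collision time saved shrinks, which is exactly why the benefit of the 4-way handshake needs re-examination at high data rates.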
The ATLAS detector control system
2012
The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC) at CERN, constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are controlled and monitored by the Detector Control System (DCS) using a highly distributed system of 140 server machines running the industrial SCADA product PVSS. Higher-level control system layers allow for automatic control procedures and efficient error recognition and handling, manage the communication with external systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS data …
Time course of central and peripheral fatigue differs when maintaining a constant-EMG task vs. a constant-torque task
2005
Hybrid Observer for Indoor Localization with Random Time-of-Arrival Measurements
2018
In this work, an indoor position estimation algorithm is proposed. The position is measured by means of a sensor network composed of fixed beacons placed in the indoor environment and a mobile beacon mounted on the object to be tracked. The mobile beacon communicates with all the fixed beacons by means of ultra-wideband signals, and the distance between them is computed by means of time-of-flight techniques. Moreover, inertial measurements are used when the position measurements are not available. Two main problems are considered in the proposed architecture: the fact that the beacons work at a lower update rate than the IMU, and that the mobile beacon can communicate wit…
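The abstract leaves the observer itself to the paper; purely as an illustration of fusing a low-rate position source with a high-rate IMU, here is a minimal 1-D Kalman-style predictor/corrector with assumed rates and noise values (not the paper's hybrid observer).

```python
# Minimal 1-D sketch (assumed model, not the paper's observer): fuse
# high-rate accelerometer readings with low-rate UWB position fixes
# using a constant-acceleration prediction and a Kalman-style correction.

import numpy as np

def predict(x, P, acc, dt, q=0.1):
    """Propagate [position, velocity] with a measured acceleration."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    x = F @ x + B * acc
    P = F @ P @ F.T + q * np.outer(B, B)   # simple process-noise model
    return x, P

def correct(x, P, z_pos, r=0.05):
    """Update the state with a UWB position measurement z_pos."""
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + r
    K = (P @ H.T) / S
    x = x + (K * (z_pos - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# IMU at 100 Hz, UWB fixes at 10 Hz: correct only every 10th step.
x, P, dt = np.zeros(2), np.eye(2), 0.01
for k in range(100):
    x, P = predict(x, P, acc=0.2, dt=dt)        # assumed constant acceleration
    if k % 10 == 9:                             # UWB measurement available
        true_pos = 0.5 * 0.2 * ((k + 1) * dt)**2
        x, P = correct(x, P, z_pos=true_pos + np.random.normal(0, 0.05))
print(x)  # estimated [position, velocity] after 1 s
```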
Optimal MAC PDU Size in IEEE 802.16
2008
In IEEE 802.16 networks, the number of errors and the MAC PDU size have an impact on network performance. We present a way to estimate the optimal PDU size and run a number of simulation scenarios to study these parameters and how they affect the performance of application protocols. The simulation results reveal that the channel bit error rate has a major impact on the optimal PDU size in IEEE 802.16 networks. The ARQ block rearrangement also influences the performance.
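A simple way to see why the bit error rate drives the optimal PDU size is to model a PDU as lost whenever any of its bits is corrupted; the sketch below (an assumed model, not the paper's) searches for the size that maximizes goodput.

```python
# Minimal sketch (illustrative model, not the paper's): for a given channel
# bit error rate, find the MAC PDU size that maximizes goodput when a single
# bit error forces retransmission of the whole PDU.

MAC_HEADER_BYTES = 6          # 802.16 generic MAC header size

def efficiency(pdu_bytes, ber):
    payload = pdu_bytes - MAC_HEADER_BYTES
    if payload <= 0:
        return 0.0
    p_ok = (1.0 - ber) ** (8 * pdu_bytes)   # probability the PDU arrives intact
    return (payload / pdu_bytes) * p_ok

def optimal_pdu_size(ber, max_bytes=2048):
    return max(range(MAC_HEADER_BYTES + 1, max_bytes + 1),
               key=lambda L: efficiency(L, ber))

for ber in (1e-6, 1e-5, 1e-4):
    L = optimal_pdu_size(ber)
    print(f"BER={ber:.0e}: optimal PDU size {L} bytes, efficiency {efficiency(L, ber):.2f}")
```

Larger PDUs amortize the header better but are more likely to contain an error, so the optimum shrinks as the BER grows.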
Link Adaptation Thresholds for the IEEE 802.16 Base Station
2008
The IEEE 802.16 technology defines a number of modulation and coding schemes that the base station can use to achieve the best trade-off between spectrum efficiency and the resulting application-level throughput. However, the 802.16 specification does not define any particular link adaptation algorithm, nor does it specify the SNR thresholds at which to switch between modulation and coding schemes. In this paper we consider a link adaptation model and conduct a number of simulation runs to find transition thresholds for the ARQ and HARQ retransmission mechanisms. All the simulations are done with the 802.16 extension for the NS-2 simulator.
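In practice such an adaptation algorithm reduces to a table of SNR thresholds, one per modulation and coding scheme. A minimal sketch with placeholder threshold values follows; the paper's point is precisely that these values are not standardized and must be found, e.g. by simulation.

```python
# Minimal sketch: a link-adaptation lookup that maps measured SNR to a
# modulation and coding scheme. The threshold values are illustrative
# placeholders, not results from the paper.

MCS_TABLE = [                     # (min SNR in dB, scheme) — assumed values
    (21.0, "64-QAM 3/4"),
    (18.0, "64-QAM 2/3"),
    (14.0, "16-QAM 3/4"),
    (11.0, "16-QAM 1/2"),
    (8.0,  "QPSK 3/4"),
    (5.0,  "QPSK 1/2"),
]

def select_mcs(snr_db):
    """Pick the most efficient scheme whose threshold the SNR clears."""
    for threshold, scheme in MCS_TABLE:
        if snr_db >= threshold:
            return scheme
    return "BPSK 1/2"             # most robust fallback

print(select_mcs(16.2))           # -> "16-QAM 3/4"
```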
Increasing the VoIP Capacity through MAP Overhead Reduction in the IEEE 802.16 OFDMA Systems
2010
One of the main issues with supporting VoIP service over 802.16 networks is the signalling overhead caused by the downlink MAP messages, which results from the frequent transmission of small packets. To decrease the MAP overhead, the 802.16 standard proposes mechanisms such as the compressed MAP and sub-MAPs. In this paper, we show by means of extensive dynamic simulations that sub-MAPs can dramatically reduce the signalling overhead associated with VoIP traffic and significantly improve the overall VoIP capacity. At the same time, since sub-MAPs are more sensitive to packet drops, they tend to increase the number of HARQ retransmissions in the downlink and the transmission delays in the uplink direction.
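A back-of-the-envelope sketch of why the MAP grows with the number of VoIP connections, using assumed message and frame sizes rather than figures from the paper:

```python
# Minimal back-of-the-envelope sketch (illustrative sizes, not from the
# paper): fraction of downlink capacity consumed by DL-MAP information
# elements when every VoIP connection needs one IE per 5 ms frame.

FRAME_MS = 5.0
DL_CAPACITY_BITS = 5.0e6 * FRAME_MS / 1000.0   # assumed 5 Mb/s downlink share
MAP_IE_BITS = 44                               # assumed size of one DL-MAP IE
MAP_FIXED_BITS = 104                           # assumed fixed DL-MAP header part

def map_overhead_fraction(n_voip_connections):
    map_bits = MAP_FIXED_BITS + MAP_IE_BITS * n_voip_connections
    return map_bits / DL_CAPACITY_BITS

for n in (10, 50, 100):
    print(f"{n:3d} VoIP connections -> MAP overhead {map_overhead_fraction(n):.1%}")
```

The real cost is higher still because the MAP is transmitted with the most robust modulation so every station can decode it, which is what makes compressed MAPs and sub-MAPs attractive.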
Adaptive Contention Resolution for VoIP Services in the IEEE 802.16 Networks
2007
In the IEEE 802.16 networks, a subscriber station can use the contention slots to send bandwidth requests to the base station. The contention resolution mechanism is controlled by the backoff start/end values and the number of request transmission opportunities. These parameters are set by the base station and announced to the subscriber stations in the management messages. In the case of VoIP services, it is critical that contention resolution completes within the specified time interval to meet the VoIP QoS requirements. Thus, it is the responsibility of the base station to set contention resolution parameters that ensure these requirements are met. This paper presents analytical c…
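For reference, the mechanism being tuned is a truncated binary exponential backoff over request transmission opportunities; below is a minimal simulation sketch under assumed parameters, not the paper's analytical model.

```python
# Minimal sketch (assumed model): truncated binary exponential backoff for
# 802.16 bandwidth-request contention. backoff_start/backoff_end are the
# exponents announced by the base station; deferral is counted in request
# transmission opportunities.

import random

def contention_attempts(backoff_start, backoff_end, p_collision, max_attempts=16):
    """Simulate one station's attempts until its request gets through.

    Returns (attempts, opportunities_waited), or None if the station gives up.
    """
    exponent = backoff_start
    waited = 0
    for attempt in range(1, max_attempts + 1):
        window = 2 ** exponent
        waited += random.randrange(window)          # defer a random number of slots
        if random.random() > p_collision:           # request heard by the base station
            return attempt, waited
        exponent = min(exponent + 1, backoff_end)   # collision: double the window
    return None

random.seed(1)
print(contention_attempts(backoff_start=3, backoff_end=6, p_collision=0.3))
```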
Adaptive contention resolution parameters for the IEEE 802.16 networks
2007
In the IEEE 802.16 networks, the base station allocates resources to subscriber stations based on their QoS requirements and bandwidth request sizes. A subscriber station can send a bandwidth request either in an uplink grant allocated by the base station or by taking part in the contention resolution mechanism. This paper presents analytical calculations for the parameters that control the contention resolution process in the IEEE 802.16 networks, in particular the backoff start/end values and the number of request transmission opportunities. The simulation results confirm the correctness of the theoretical calculations. They also reveal that the adaptive parameter tuning results…
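As a flavour of the kind of calculation involved (not the paper's derivation), here is a sketch that sizes the number of request transmission opportunities from a simple per-attempt collision model.

```python
# Minimal sketch (not the paper's derivation): if n stations each pick one of
# k request transmission opportunities uniformly at random, the chance that a
# given station's request goes through without collision is (1 - 1/k)**(n-1).
# This is the kind of quantity the base station can use to size k.

def success_probability(n_stations, k_opportunities):
    return (1.0 - 1.0 / k_opportunities) ** (n_stations - 1)

def opportunities_for_target(n_stations, target=0.9, k_max=1024):
    """Smallest k giving at least the target per-attempt success probability."""
    for k in range(1, k_max + 1):
        if success_probability(n_stations, k) >= target:
            return k
    return None

print(success_probability(20, 32))          # ~0.55 for 20 stations, 32 slots
print(opportunities_for_target(20, 0.9))    # k needed for 90% per-attempt success
```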