Motion Detection in WLANs using PHY Layer Information


Overview

The increasing popularity of WiFi brings new challenges in the design of wireless protocols, but at the same time creates exciting opportunities for exploiting WiFi technology in novel applications. On one hand, the smaller form factor of today's WiFi devices allows users to access wireless networks while mobile, calling for new, mobility-aware wireless protocols that can sustain high performance; on the other hand, the ubiquity of WiFi devices has recently raised interest in extending WiFi's capabilities beyond communication, e.g., to human-computer interaction. In this project, we demonstrate that fine-grained human motion detection can be enabled on commodity WiFi devices by exploiting PHY layer information available from today's WiFi chipsets.

In the first part of this project, we demonstrate how different client mobility modes can be distinguished using PHY layer information available at commodity APs, without any software modifications on the client. We identify four broad categories of client mobility. A stationary client is in the static mobility mode when there are no significant environmental changes affecting the channel between the AP and the client, and in the environmental mobility mode when the channel changes due to external movements. The wireless client itself may also be moving, a mode we call device mobility, of which we identify two broad and dominant categories in WLANs. Under micro-mobility, the user slowly moves the device while remaining stationary or within a small area, so the client's location stays confined. Under macro-mobility, device movement changes the client's location, as when the user walks from one location to another.
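This taxonomy maps naturally onto a two-stage decision: channel dynamics first separate static, environmental, and device mobility, and the distance trend then separates micro- from macro-mobility. The following minimal Python sketch illustrates the structure only; the boolean inputs are hypothetical stand-ins for the CSI- and ToF-based tests described next, not the actual algorithm.

    from enum import Enum, auto

    class MobilityMode(Enum):
        STATIC = auto()         # stationary client, stable channel
        ENVIRONMENTAL = auto()  # stationary client, channel perturbed by nearby movement
        MICRO = auto()          # device moves but stays within a small area
        MACRO = auto()          # device changes location as its user walks

    def classify(csi_changes: bool, device_moving: bool, distance_changes: bool) -> MobilityMode:
        # Illustrative two-stage decision: CSI dynamics first, distance trend second.
        if not csi_changes:
            return MobilityMode.STATIC
        if not device_moving:
            return MobilityMode.ENVIRONMENTAL
        return MobilityMode.MACRO if distance_changes else MobilityMode.MICRO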

We utilize Channel State Information (CSI) and Time-of-Flight (ToF) values available from off-the-shelf HP MSM APs to determine the client's mobility mode. We show that temporal changes in the CSI can distinguish among static, environmental, and device mobility. However, the CSI may change in similar ways under micro- and macro-mobility. To distinguish between these two device mobility modes, we observe that the client's distance from the AP changes significantly under macro-mobility, whereas under micro-mobility it seldom changes. The ToF reliably indicates the distance between the client and the AP because it captures the round-trip propagation time between the two. Furthermore, whether the client's distance shows an increasing or a decreasing trend indicates her relative heading, i.e., whether she is moving away from or towards the AP. In addition, we demonstrate how fine-grained mobility determination can be exploited to improve the performance of client roaming, rate control, frame aggregation, and MIMO beamforming. Our testbed experiments show that our mobility classification algorithm achieves more than 92% accuracy in a variety of scenarios, and that the combined throughput gain of all four mobility-aware protocols over their mobility-oblivious counterparts can exceed 100%.
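As a rough sketch of how these two PHY layer hints could be turned into decision metrics (window sizes, the exact correlation measure, and any thresholds are assumptions here, not the system's actual parameters): a normalized correlation between consecutive CSI snapshots quantifies channel stability, and the slope of a line fit through a window of ToF samples captures the distance trend and, via its sign, the client's relative heading.

    import numpy as np

    def csi_correlation(h_prev: np.ndarray, h_curr: np.ndarray) -> float:
        # Normalized correlation between two CSI amplitude snapshots (one
        # amplitude per OFDM subcarrier). Values near 1 indicate a stable
        # channel; lower values indicate channel change.
        a, b = np.abs(h_prev), np.abs(h_curr)
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def tof_trend(tof_ns: np.ndarray) -> float:
        # Least-squares slope through a window of ToF samples (in ns).
        # A slope near zero suggests micro-mobility; a clearly positive
        # (negative) slope suggests the client is moving away from
        # (towards) the AP, i.e., macro-mobility plus relative heading.
        t = np.arange(len(tof_ns))
        slope, _intercept = np.polyfit(t, tof_ns, 1)
        return float(slope)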

Fig 1: Client mobility classification scheme
Fig 2: WiDraw: estimated trajectories and core idea

In the second part of the project, we introduce WiDraw, the first hand motion tracking solution that can be enabled on existing mobile devices using only a software patch. Unlike existing hand tracking solutions based on wireless signals, WiDraw requires neither prior learning nor any wearable. WiDraw takes full advantage of the large number of WiFi devices in today's WLANs: it leverages the Angle-of-Arrival (AoA) values of incoming wireless signals at the mobile device to track the detailed trajectory of the user's hand in both Line-of-Sight and Non-Line-of-Sight scenarios. The intuition behind WiDraw is that whenever the user's hand blocks a signal coming from a certain direction, the signal strength of the AoA representing that direction experiences a sharp drop. Therefore, by tracking the signal strength of the AoAs, WiDraw can determine when and where such occlusions happen and, given a depth value, derive a set of horizontal and vertical coordinates along the hand's trajectory. The depth of the user's hand can itself be approximated from the drop in the overall signal strength. By combining the estimated depth with the horizontal and vertical coordinates, WiDraw tracks the user's hand in 3-D space with respect to the WiFi antennas of the receiver.
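A minimal sketch of this occlusion-tracking idea follows, assuming per-AoA received signal strength (RSS) readings keyed by (azimuth, elevation) pairs; the 3 dB drop threshold and the flat-plane projection are illustrative assumptions, not WiDraw's actual parameters or geometry.

    import math

    def detect_occlusions(curr_rss: dict, prev_rss: dict, drop_db: float = 3.0) -> list:
        # Directions whose RSS dropped sharply since the previous window;
        # each flagged (azimuth, elevation) pair is a direction the hand
        # is assumed to be blocking right now. drop_db is a hypothetical threshold.
        return [angle for angle, rss in curr_rss.items()
                if angle in prev_rss and prev_rss[angle] - rss >= drop_db]

    def angle_to_xy(azimuth_deg: float, elevation_deg: float, depth_m: float):
        # Project a blocked AoA onto the plane at the estimated hand depth,
        # yielding horizontal/vertical coordinates relative to the antennas.
        x = depth_m * math.tan(math.radians(azimuth_deg))
        y = depth_m * math.tan(math.radians(elevation_deg))
        return x, y

Connecting the (x, y) points of successive occlusions in time order then yields the estimated hand trajectory.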

The tracking granularity of WiDraw depends on the number of AoAs along the hand's trajectory. While periodic AP beacons already contribute several angles, the granularity can be further increased by employing lightweight probing mechanisms to obtain AoAs from neighboring client devices as well. We show that by utilizing the AoAs from up to 25 WiFi transmitters, a WiDraw-enabled laptop can track the user's hand with a median error lower than 5 cm. WiDraw's rate of false positives, i.e., detecting hand motion when none occurred, is less than 0.04 events per minute over a 360-minute period in a busy office environment. Experiments across 10 different users also demonstrate that WiDraw can be used to write words and sentences in the air, achieving a mean word recognition accuracy of 91%.

People

Faculty:
  • Dimitrios Koutsonikolas

External collaborators:

Doctoral Students:
  • Li Sun

Publications

Contact: Dimitrios Koutsonikolas      Email: dimitrio@buffalo.edu