DensePose From WiFi
Jiaqi Geng, Dong Huang, Fernando De la Torre 31 Dec 2022
Abstract
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radar. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns.

To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal, in combination with deep learning architectures commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with performance comparable to image-based approaches, by using WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
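To make the mapping described above concrete, the sketch below shows a minimal, illustrative network that takes WiFi amplitude and phase tensors and produces DensePose-style outputs: a part segmentation over 24 body regions (plus background) and per-region UV coordinate maps. This is not the authors' architecture; the layer choices, tensor shapes, and the `WiFiToDensePose` class are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch (not the paper's model): map WiFi amplitude/phase
# tensors to DensePose-style outputs (24-part segmentation + per-part UV maps).
# All shapes and layer sizes below are assumptions for illustration only.
import torch
import torch.nn as nn

class WiFiToDensePose(nn.Module):
    def __init__(self, num_parts: int = 24, out_hw: int = 64):
        super().__init__()
        # Encode stacked amplitude + phase "images" (2 channels, assumed laid out
        # as antenna-pairs x samples) into a spatial feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((out_hw, out_hw)),
        )
        # DensePose-style heads: part index per pixel, and U/V per part.
        self.part_head = nn.Conv2d(64, num_parts + 1, 1)  # +1 for background
        self.u_head = nn.Conv2d(64, num_parts, 1)
        self.v_head = nn.Conv2d(64, num_parts, 1)

    def forward(self, amplitude: torch.Tensor, phase: torch.Tensor):
        # amplitude, phase: (batch, H_csi, W_csi); stacked as two input channels.
        x = torch.stack([amplitude, phase], dim=1)
        feats = self.encoder(x)
        parts = self.part_head(feats)
        u = self.u_head(feats).sigmoid()  # UV coordinates normalized to [0, 1]
        v = self.v_head(feats).sigmoid()
        return parts, u, v

# Example usage with made-up CSI dimensions (assumed, for illustration):
model = WiFiToDensePose()
amp = torch.randn(1, 90, 100)  # e.g. antenna-pair x subcarrier rows by time samples
pha = torch.randn(1, 90, 100)
parts, u, v = model(amp, pha)
print(parts.shape, u.shape, v.shape)  # (1, 25, 64, 64), (1, 24, 64, 64), (1, 24, 64, 64)
```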