Yup, exactly. Gait is the main thing I've seen too. But there's a whole grab bag of papers about these applications with different methodologies. Techniques like topological data analysis and machine learning are much more fleshed out these days, so these applications don't need to rely as much on rigid models to be accurate. Could be that they're using subtle, noisy information from breathing patterns, for example.
"Device free", "localization", "identification" and "mm wave antenna arrays" are some useful keywords for checking this stuff out on Google Scholar. Some other alternatives for "device free" are "passive" and "adversarial". I think I remember one paper where they were identifying dozens of people at once with like 95%+ accuracy. I could be getting the details wrong on that one but it was along those lines.
For the high-level overview though, you can just read the 5G industry whitepapers; they pretty clearly spell out that there are privacy implications, and I'm sure these kinds of applications are exactly why.
Remember: higher frequency = more information density = finer resolution. That's also why these radio waves can't travel through as much stuff... they simply interact with more; more materials are opaque to them, and so they carry information about more interactions in their image.
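A rough back-of-envelope sketch of that intuition, using wavelength as a crude stand-in for the finest spatial detail a band can resolve (real resolution also depends on bandwidth, aperture, and processing; the numbers here are just c = f·λ):

```python
# Wavelength at a few common bands: mm-wave sees cm-to-mm-scale detail
# where sub-6 GHz Wi-Fi sees only ~decimeter-scale structure.
C = 299_792_458  # speed of light, m/s

for label, freq_hz in [("2.4 GHz Wi-Fi", 2.4e9),
                       ("5 GHz Wi-Fi", 5e9),
                       ("28 GHz mm-wave (5G)", 28e9),
                       ("60 GHz mm-wave (802.11ad)", 60e9)]:
    wavelength_cm = C / freq_hz * 100
    print(f"{label:28s} wavelength ~ {wavelength_cm:.2f} cm")
```

At 2.4 GHz the wavelength is about 12.5 cm; at 60 GHz it's about 0.5 cm, which is also why those waves scatter off (and are blocked by) much smaller obstacles.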
It's an approach to wireless engineering aiming to make it much easier for sensing and communications applications to use the same spectrum simultaneously. There are so many papers on the topic and some of them are more technical but a lot are focused on possible applications. The name "ISAC" is kind of an umbrella term to describe a bunch of different approaches both old and new.
For the passive sensing applications you can mix and match these keywords in Google Scholar:
For example: "device free localization" is a keyword for papers about tracking people who don't have phones. Another similar one would be "passive identification". You might also try "adversarial localization".
So basically gait analysis using WiFi power levels. At present it requires making the user walk through a specific choke point to measure the gait, which can then be compared against any other captured footage. Pretty neat!
> This proof-of-concept would be a breakthrough for healthcare, security, gaming (VR), and a host of other industries.
Similar capability is scheduled for new consumer routers in 2024 via Wi-Fi 7 Sensing / IEEE 802.11bf. Hundreds of previous papers include terms like these:
* "human-to-human interaction recognition"
* "device-free human activity recognition"
* "occupant activity recognition in smart offices"
* "emotion sensing via wireless channel data"
* "CSI learning for gait biometric sensing"
* "sleep monitoring from afar"
* "human breath status via commodity wifi"
* "device-free crowd sensing"
Many alarm systems have Gunn-diode microwave motion sensors in them. Replace those with high-resolution radar and networking and you get house-wide whole-body gesture recognition.
It's quite advanced and there are all kinds of different techniques that work, e.g. wavelet analysis, topological data analysis, deep learning, etc. Not much info on deployed applications, of course, not that I could find.
Some keywords for Google Scholar:
"Passive localization", "mm-wave antenna array", "device free localization", etc.
But I'm afraid every time I look there are more I haven't seen before, using different variations on the lingo, such as "passive sensing" and all kinds of different stuff.
I'm really hoping this tech gets more accessible (i.e. uses OTS components and shows up on 2-minute-papers along with a GitHub repository) for a variety of use-cases, e.g. not just detecting humans, but also detecting wiring and other hidden features in walls and floors.
The special thing about mm wave antenna arrays is that you don't need to have a phone on you to be identified by the unique fingerprint of your gait. There's enough information density in that spectrum to resolve your identity accurately even if you're not carrying a phone. And there are papers demonstrating how to do just that.
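To make the "gait fingerprint" idea concrete, here's a toy identification sketch. It assumes some upstream feature extractor (not shown) has already reduced the mm-wave returns to a small feature vector per walk; the feature names, enrolled values, and nearest-neighbour matching are all illustrative, since the actual papers typically learn features from micro-Doppler/CSI spectrograms with deep networks:

```python
import numpy as np

# Made-up enrolled gait signatures:
# [stride period (s), torso speed (m/s), Doppler spread (Hz)]
enrolled = {
    "alice": np.array([1.05, 1.30, 42.0]),
    "bob":   np.array([0.92, 1.55, 57.0]),
}

def identify(features: np.ndarray) -> str:
    """Return the enrolled identity whose gait signature is closest."""
    return min(enrolled, key=lambda name: np.linalg.norm(enrolled[name] - features))

print(identify(np.array([1.00, 1.32, 44.0])))  # matches alice's signature
```

The point is that no device on the person is involved anywhere: the "ID" is the body's own motion signature as seen by the antenna array.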
Anything that requires the ability to detect whether it's in the same physical space as another device or item of interest.
Some common examples are:
* guest mode discovery on a Chromecast
* pairing to videoconference systems (both Cisco and Polycom do this in their current-gen devices), building control, wireless presentation systems, etc
* a means for transmitting a 2FA code from a personal device for auth / payments
* micro-location service - particularly for temporary events or spaces where BLE is not possible (e.g. embedded in FOH audio at different stages at a music festival)
* proximity based peer discovery for mobile games
You'd be surprised at how common a technique it is. I regularly face environments where enough different devices are all attempting some form of ultrasonic pairing / localisation that there are interference issues.
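For a sense of how the ultrasonic channel works, here's a minimal sketch of encoding a short token as near-ultrasonic 2-FSK audio. The frequencies, symbol rate, and scheme are invented for illustration, not any vendor's actual protocol (real systems add sync, error correction, and robustness to room acoustics):

```python
import numpy as np

RATE = 48_000            # sample rate, Hz
SYMBOL_SEC = 0.05        # 50 ms per bit
F0, F1 = 18_500, 19_500  # tones for bit 0 / bit 1 (near-inaudible to adults)

def encode(bits: str) -> np.ndarray:
    """One pure tone per bit, concatenated into an audio buffer."""
    t = np.arange(int(RATE * SYMBOL_SEC)) / RATE
    return np.concatenate(
        [np.sin(2 * np.pi * (F1 if b == "1" else F0) * t) for b in bits])

def decode(audio: np.ndarray) -> str:
    """Pick the dominant frequency in each symbol window via FFT."""
    n = int(RATE * SYMBOL_SEC)
    freqs = np.fft.rfftfreq(n, 1 / RATE)
    bits = []
    for i in range(0, len(audio), n):
        peak = freqs[np.abs(np.fft.rfft(audio[i:i + n])).argmax()]
        bits.append("1" if abs(peak - F1) < abs(peak - F0) else "0")
    return "".join(bits)

token = "10110010"
assert decode(encode(token)) == token
```

With many devices chirping in overlapping bands and no shared coordination, the interference problems mentioned above follow pretty directly.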
Some weeks back I read a post here about detecting people in rooms by measuring how the physical body interferes with the WiFi signals. I wouldn't have imagined someone could extract useful information at such a small scale. Wow!
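The crudest version of that detection is surprisingly simple: a body moving through the propagation path churns the multipath, so received power gets noticeably noisier. A minimal sketch with synthetic RSSI data (real systems use per-subcarrier CSI, and the threshold and noise levels here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
empty_room = -60 + rng.normal(0, 0.3, 200)     # quiet, stable signal (dBm)
person_moving = -60 + rng.normal(0, 2.5, 200)  # multipath churn from motion

def motion_detected(rssi_window: np.ndarray, threshold_db: float = 1.0) -> bool:
    """Flag motion when signal variability exceeds a calibrated threshold."""
    return rssi_window.std() > threshold_db

print(motion_detected(empty_room))     # False
print(motion_detected(person_moving))  # True
```

Going from "something moved" to localization and identification is then a matter of more antennas, more bandwidth, and better models, which is exactly what the papers upthread demonstrate.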
Cool! I'm glad to see RFID and localization being merged together like this. I think this really shows that RFID is a much more suitable technology for this type of application than a computation-heavy CV algorithm.
There are dozens of papers like that specifically focused on mm-wave stuff, "passive" localization/identification/sensing. Gait doesn't go far enough to describe it. It's basically the equivalent of face recognition tech but in the mm-wave spectrum with standard router hardware.
The various mm-wave bands being used in 5G (and being proposed for "6G") are much denser (spatially and information-theoretically) than the < 5 GHz bands, but sparse enough compared to visible light that many obstacles are quite translucent.
mm-wave antenna arrays are kinda just big low frequency eyeballs.
Somewhere deep on my todo list is building a grid of ESP32s around the place reporting BLE signal strengths to a central node which can then do triangulation to work out where people are based on devices they typically carry with them (I for example am almost always wearing an Apple Watch, and carrying my phone).
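That grid idea can be sketched in a few lines: convert each node's reported RSSI to an approximate range with a log-distance path-loss model, then solve for position by linearised least squares against the known node positions. `RSSI_AT_1M` and the path-loss exponent `N` are environment-dependent guesses you'd calibrate per device, and the demo uses perfect ranges, which real RSSI emphatically is not:

```python
import numpy as np

RSSI_AT_1M = -59.0  # dBm at 1 m, typical-ish for BLE; calibrate per device
N = 2.0             # path-loss exponent (free space ~2, indoors 2-4)

def rssi_to_distance(rssi_dbm: float) -> float:
    """Log-distance path-loss model inverted for range in metres."""
    return 10 ** ((RSSI_AT_1M - rssi_dbm) / (10 * N))

def trilaterate(nodes: np.ndarray, dists: np.ndarray) -> np.ndarray:
    """nodes: (k, 2) known node positions; dists: (k,) estimated ranges.
    Subtracting the first circle equation from the rest linearises the
    system, which lstsq then solves in one shot."""
    x1, y1 = nodes[0]
    A = 2 * (nodes[1:] - nodes[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + (nodes[1:] ** 2).sum(axis=1) - x1 ** 2 - y1 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

nodes = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0], [6.0, 6.0]])
true_pos = np.array([2.0, 3.0])
dists = np.linalg.norm(nodes - true_pos, axis=1)  # perfect ranges for the demo
print(trilaterate(nodes, dists))  # ~ [2. 3.]
```

In practice indoor RSSI is noisy enough that you'd smooth over time (e.g. a Kalman filter) and over-determine with as many ESP32 nodes as possible, which is why a grid beats a minimal triangle.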
It's already on the market, it's just not usually sold as a standalone product like Leap Motion. Through-wall imaging using WiFi is old news. Also, for gesture detection specifically, radar chips are often preferred over hacking WiFi firmware and bolting on extra signal processing.