The whole README is heavily AI-edited (the final output is all by AI), and the worst thing is that the image diagrams seem to be generated (likely with 4o), see for example
https://github.com/MaliosDark/wifi-3d-fusion/blob/main/docs/...
"Wayelet CSi tensas"
That makes me question the authenticity of the project.
The badges are ridiculous. There’s a YAML badge in there.
There is even a "License: Not identifiable by Github" badge, despite there being a LICENSE file clearly visible in the list of files.
Although I'm more surprised a repo that appears to be merely a week old already has 245 stars.
AI-generated READMEs are something I've noticed a lot lately. They make it really difficult to differentiate the weekend projects from the "we took the time to create proper documentation" projects. The AI copy also makes it hard to trust the instructions and examples given.
The interface and code smacks of Claude. It's basically someone's AI pet project wrapping legitimate third-party tools.
This "wrapping 3rd party tools" thing is a weird kind of criticism to me. Like, name one project that could not be described that way?
Blender, Godot, Audacity, Firefox, Git, Linux, ... I could name 100 projects that could not be described that way. Most couldn't. There are only a few projects that I can think of that are really just wrappers (even though they add a lot of value), e.g:
* Handbrake, wraps ffmpeg (it does more stuff but that's the main thing most people use it for)
* Ollama, wraps llama.cpp
>There are only a few projects that I can think of that are really just wrappers (even though they add a lot of value), e.g:
it's more common than you give it credit for.
gparted, cups, 7zip, baobab, all the *commander file tools, almost all CD/DVD/BD burning software, nearly every media player that touches ffmpeg (vlc, mplayer), almost every VM GUI, almost every firewall GUI, Time Machine, duplicati, sabnzbd... the list goes on forever.
Linux fits too if you're talking about the OS rather than the kernel.
If you want to talk at a lower level, then Python is really just a wrapper for lots of other shit; similarly, pytorch/cuda are wrappers for a bunch of ugly C.
pretty languages are wrappers for ugly languages, ugly languages are wrappers for assembly, assembly is a wrapper for machine code.
It's wrapped turtles all the way down.
I agree that libraries are a thing, all problems in CS can be solved with a layer of indirection, etc. I also have no issue with AI-gen projects if they're good.
In this case, they posted a README full of nonsense diagrams, didn't fix the broken characters in their UX, and breezed over the complexity of the dependencies (ESP-CSI is very cool but requires specific hardware, with two ESP devices and external antennas). Feels sloppy.
My thoughts exactly. I was initially impressed with how detailed the readme was; I had never seen anything like that before. But it seems it's AI generated, and I am not impressed anymore. I am not even sure it's all authentic anyway.
That repo has more GitHub badges than a North Korean general has medals on their uniform...
https://www.youtube.com/watch?v=-ea2-kt8ox4&t=4s
lol!
Here is a link to a video showing what the estimated output looks like.
We built this system at the UofT WIRLab back in 2018-19 https://youtu.be/lTOUBUhC0Cg
And link to paper https://arxiv.org/pdf/2001.05842
Ok let's say I'm making a robot spider in my garage half the size of a Tesla and with as much horsepower. I'm putting nvidias new Jetson brain as the chip. If I use enough of these can I replace a lidar package for autonomous control?
It's open source; fork around and find out ;)
I'm dying to know though, what's the practical resolution like? Can it tell the difference between my cat and a bag I dropped, or is it more like "a blob moved over there"?
On one hand, the potential privacy invasions enabled by this technology (e.g. Xfinity (of course Comcast) a few months ago[1]) are pretty scary.
On the other hand, the technology seems potentially extremely useful. I've had an interest in pose estimation for many years, but doing it with normal cameras seems tricky to do reliably because of the possibility for visual occlusion (both from the body itself and from other objects). I'm curious to see if I can use this for something like tracking my posture while I use my computer so I can avoid back pain later in life.
[1] https://news.ycombinator.com/item?id=44426726
Posture (how you position your body) isn't the cause or prevention of back pain.
Your muscles need strengthening, strengthening comes from movement, movement comes from mobility.
But you are right in that it is an interesting hammer to find nails for
If you want good posture and want to prevent back problems and pain, just do resistance-based training (get a good coach and/or physical therapist to get started). There are a lot of exercises for strengthening the back and the neck in particular. It is never too late to start.
I scrolled through two pages of badges and hit counters. I have to be honest, that makes me very scared to run the underlying code.
This is what 1998 felt like.
Github is the new geocities!
The UI looks like it was built by a Hollywood set designer
I'm interested but am also incredibly dubious. Not because it seems impossible but the opposite. On one hand, an open source repo like this taking a hackable, extensible approach should be praised, but the "Why Built WiFi-3D-Fusion" section[0] gives me very, very bad vibes. Here are some excerpts I especially take issue with:
> "Why? Because there are places where cameras fail, dark rooms, burning buildings, collapsed tunnels, deep underground. And in those places, a system like this could mean the difference between life and death."
> "I refuse to accept 'impossible.'"
WiFi sensing is an established research domain that has long struggled with line of sight requirements, signal reflection, interference, etc. This repo has the guise of research, but it seems to omit the work of the field it resides in. It's one thing to detect motion or approximately track a connected device through space, but "burning buildings, collapsed tunnels, deep underground" are exactly the kind of non-standardized environments where WiFi sensing performs especially poorly.
I hate to judge so quickly based on a readme, but I'm not personally interested in digging deeper or spinning up an environment. Consider this before aligning with my sentiment.
[0] https://github.com/MaliosDark/wifi-3d-fusion/blob/main/READM...
What I want to know is whether you need multiple senders and receivers, or whether you can just run it on an ESP32 and it can visualize? Usually you need a sender and a receiver to make sense of it all?
I didn't see any reference to a sender or actively blasting RF from the same access point. I think the approach relies on other signal sources creating reflections to a passively monitoring access point and attempting to make sense of that.
Seems like it is based on this paper from CVPR 2024:
https://aiotgroup.github.io/Person-in-WiFi-3D/
Frankly I'm shocked it's possible to do this with that level of resolution.
5GHz WiFi has a wavelength of ~6cm, and 2.4GHz ~12.5cm. Anything achieving finer resolution is a result of interferometry or a non-WiFi signal. Mentioning this might not add much substance to the conversation, but it felt worth adding.
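(As a sanity check on those numbers, the free-space wavelength is just the speed of light divided by the frequency; a quick sketch:)

```python
# Free-space wavelength: lambda = c / f
C = 299_792_458  # speed of light in m/s

def wavelength_cm(freq_ghz: float) -> float:
    """Return the free-space wavelength in centimeters for a given frequency in GHz."""
    return C / (freq_ghz * 1e9) * 100

print(f"2.4 GHz: {wavelength_cm(2.4):.1f} cm")  # ~12.5 cm
print(f"5.0 GHz: {wavelength_cm(5.0):.1f} cm")  # ~6.0 cm
```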
This resolution is probably enough, as they use human skeleton pose estimators and human movement pattern detectors too.
I do actually really want this, to integrate into Home Assistant. I don't want to have to put a bunch of mm-wave detectors around the house to see where people are, I want to use the emitters and receivers I've already got. The current alternatives aren't that great.
The US military has been using tech like this for years. Some public, some not. The stuff not public is supposedly pretty good (bits and pieces of info have slipped in various publications).
If you’re interested in this stuff, check out Lumineye.