I don't think these are useful at all. What gave me a much better understanding was implementing a simple network that approximates a 1D function like sin, or learning how image blurring works with kernels, and then moving into ML/AI from there.
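For anyone who wants to try the exercise this comment recommends, here's a minimal sketch: a one-hidden-layer network fit to sin(x) with plain NumPy and hand-written gradients. All the choices (32 hidden units, tanh, learning rate, step count) are arbitrary illustrations, not anything from the linked resource.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(x)

H = 32  # hidden width (arbitrary)
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    h = np.tanh(x @ W1 + b1)           # forward pass
    pred = h @ W2 + b2
    err = pred - y                     # d(0.5*MSE)/d(pred), up to 1/N
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)     # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1     # plain gradient descent
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Writing the backward pass by hand like this is exactly the step that makes framework autograd feel less magical afterward.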
Thank you for saying this. I often find these "glib" explanations of ML stuff very frustrating as someone coming from an Applied Math background. It makes me feel a bit crazy and alone to see what appears to be a certain kind of person saying "gosh" at various "explanations" when I just don't get it.
Obviously this is beautiful as art, but it would also be useful to understand how exactly these visualizations help the people who think they do. "Useful" to me means gaining a new ability to extrapolate in task space (aka "understanding").
This is a fantastic educational resource! Visual animations like these make understanding complex ML concepts so much more intuitive than just reading equations.
The neural network visualization is particularly well done - seeing the forward and backward passes in action helps build the right mental model. Would be great to see more visualizations covering transformer architectures and attention mechanisms, which are often harder to grasp.
For anyone building educational tools or internal documentation for ML teams, this approach of animated explanations is really effective for knowledge transfer.
Nice! I made my own version of this many years ago, with a very basic manim animation
https://www.jerpint.io/blog/2021-03-18-cnn-cheatsheet/
Years back I worked on some animated ML articles, my favorites being: https://mlu-explain.github.io/neural-networks/ and https://mlu-explain.github.io/decision-tree/
not deep learning but this oldie is a goodie too (since we are sharing favorites): https://narrative-flow.github.io/exploratory-study-2/
I originally had it saved as [[ https://www.r2d3.us/visual-intro-to-machine-learning-part-1/ ]] but it seems that link is gone?
I worked on something similar but specifically for transformer architecture: https://transformer.sujayk.me/
On Safari mobile it shows a modal that can't be scrolled or closed.
Yup, I'd say you learn more by doing the math by hand (which shouldn't be that surprising).
You should add dilated conv and conv_transpose to the list.
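To make the request concrete, here's a hedged 1D NumPy sketch of the two ops the comment mentions: a dilated convolution (kernel taps spaced `d` apart) and a transposed convolution (each input element "stamps" a scaled kernel into the output). These are illustrative loop implementations, not how any framework actually computes them.

```python
import numpy as np

def dilated_conv1d(x, k, d):
    """Valid cross-correlation with kernel taps spaced d apart."""
    span = (len(k) - 1) * d + 1            # effective receptive field
    out = np.zeros(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(k[j] * x[i + j * d] for j in range(len(k)))
    return out

def conv_transpose1d(x, k, stride=1):
    """Each input element adds a scaled copy of k into the output."""
    out = np.zeros((len(x) - 1) * stride + len(k))
    for i, v in enumerate(x):
        out[i * stride : i * stride + len(k)] += v * k
    return out

x = np.array([1., 2., 3., 4., 5.])
k = np.array([1., 1.])
dilated_conv1d(x, k, d=2)      # taps 2 apart: [1+3, 2+4, 3+5] = [4, 6, 8]
conv_transpose1d(x, k, 2)      # stride 2 spreads the input out
```

The "stamping" view of conv_transpose is also why it upsamples: a stride-2 transposed conv maps 5 inputs to 10 outputs here.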
Shameless plug for my writeup about convolutions: https://jlebar.com/2023/9/11/convolutions.html
Nice work. A while back, I learned convolutions from similar animations, Vincent Dumoulin and Francesco Visin's gifs:
https://github.com/vdumoulin/conv_arithmetic
Here is the GitHub link for anyone wanting to star the repo: https://github.com/animatedai/animatedai
I feel like these are helpful, and I think the calculus oriented visualizations of convex surfaces and gradient descent help a lot as well.
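The gradient-descent-on-a-convex-surface picture this comment mentions boils down to a few lines. Here's a minimal sketch on an illustrative bowl f(x, y) = x² + 2y² (my choice of function, not from the article): each step moves opposite the analytic gradient and the iterate slides to the minimum at the origin.

```python
import numpy as np

def grad(p):
    """Analytic gradient of f(x, y) = x**2 + 2*y**2."""
    x, y = p
    return np.array([2 * x, 4 * y])

p = np.array([3.0, -2.0])   # arbitrary starting point on the bowl
lr = 0.1
for _ in range(100):
    p = p - lr * grad(p)    # step downhill, opposite the gradient
```

Animating the sequence of `p` values on a contour plot of f is essentially what those visualizations show.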
amazing resource!