It doesn't really matter what the people here say. The world is moving in a certain direction, and the people here are not deciding that direction, though they may be unhappy with it. Online opinion is just a wave, noise.
If you care about the future, what people say on the Internet is not worth your time. Just make it happen.
My vision for the future includes greatly reducing the power requirements for AI by rethinking computing using first principles thinking. No attempt at this so far has been willing to go far enough and ditch the CPU or RAM. FPGAs got close, but they went insane with switching fabrics and special logic blocks. Now they've added RAM, which is just wrong.
Edit/Append: I've had this idea [1] forever (since the 1990s, possibly earlier... I don't have notes going that far back). Imagine the simplest possible compute element, the look-up table, arranged in a grid. Architectural optimizations I've pondered over time led me to a 4-bits-in, 4-bits-out look-up table, with latches on all outputs and a clock signal. This prevents race conditions by slowing things down. The gain is that you can now just clock a vast 2D array of these cells with a two-phase clock (like the colors on a chessboard) and it's a universal computer, Turing complete, but one you can actually think about without your brain melting down.
The problem (for me) has always been programming it and getting a chip made. Thanks to the latest "vibe coding" stuff, I've gotten out of analysis paralysis, and have some things cooking on the software front. The other part is addressed by TinyTapeout, so I'll be able to get a very small chip made for a few hundred dollars.
Because the cells are only connected to their neighbors, the runs are all short and low-capacitance, so you can really, REALLY crank up the clock rate, or save a lot of power. Because the grid is uniform, you won't have the hours- or days-long "routing" problems that you have with FPGAs.
If my estimates are right, it will cut the power requirements for LLM computing by 95%.
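A toy simulator makes the two-phase scheme concrete. This is an illustrative sketch, not the author's BitGrid code: the one-bit-per-neighbor wiring, the wrap-around edges, and the example rotate LUT are all my own assumptions.

```python
# Toy simulator for a grid of 4-in / 4-out LUT cells with two-phase
# "chessboard" clocking. Illustrative only; the wiring and example
# LUT are assumptions, not the actual BitGrid design.

W, H = 4, 4  # grid dimensions

def rotate_lut():
    # Example 16-entry table: rotate the 4 input bits left by one,
    # so a bit arriving from one neighbor leaves toward the next.
    return [((v << 1) | (v >> 3)) & 0xF for v in range(16)]

# One LUT per cell; here every cell gets the same table.
cells = [[rotate_lut() for _ in range(W)] for _ in range(H)]

# Latched 4-bit output of every cell. Bits 0..3 are read by the
# north, east, south, and west neighbors respectively.
out = [[0] * W for _ in range(H)]

def inputs_of(y, x):
    # Gather one bit from each of the four neighbors (edges wrap).
    n = (out[(y - 1) % H][x] >> 0) & 1
    e = (out[y][(x + 1) % W] >> 1) & 1
    s = (out[(y + 1) % H][x] >> 2) & 1
    w = (out[y][(x - 1) % W] >> 3) & 1
    return n | (e << 1) | (s << 2) | (w << 3)

def half_step(phase):
    # One clock phase: only cells whose chessboard color matches the
    # phase latch new outputs. Their inputs come exclusively from
    # opposite-colored cells, whose latches are frozen during this
    # phase -- that is what rules out race conditions.
    new = [row[:] for row in out]
    for y in range(H):
        for x in range(W):
            if (x + y) % 2 == phase:
                new[y][x] = cells[y][x][inputs_of(y, x)]
    out[:] = new

# Inject a single bit and run a few full clock cycles.
out[0][0] = 0b0001
for _ in range(8):
    half_step(0)  # "black" squares latch
    half_step(1)  # "white" squares latch
```

At 4 bits in and 4 bits out, each cell is just a 16-entry, 4-bit-wide table (64 bits of configuration), and a full clock cycle is two alternating latch phases; the hard part, as noted above, is programming such a fabric, not simulating it.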
> greatly reducing the power requirements for AI by rethinking computing using first principles thinking.
I feel some affinity for this statement! Although what I've said in the past was more along the lines of "rethinking our approach to (artificial) neural networks from first principles" and not necessarily the foundations of computing itself. That said, I wouldn't reject your position out of hand at all!
It certainly feels like we've reached a point where there may be an opportunity to stop, take stock, look back, revisit some things, and maybe do a bit of a reset in some areas.
Might be a crazy statement, but I believe Meta is on the right track. Right now, I think most people can clearly see that more and more people are getting addicted to the little device in their hand.
The "Metaverse" is going to be a more interactive, immersive extension of that device. I also believe that Meta's superintelligence team isn't necessarily about achieving AGI, but rather, creating personable, empathetic LLMs. People are so lonely and seeking friendship that this will be a very big reason to purchase their devices and get tapped into this world.
The observation about smartphone addiction is certainly valid, with studies showing average daily screen time exceeding 7 hours for many users, driven by algorithmic engagement.
BUT: while the Metaverse could theoretically extend that immersion, Meta's execution record suggests caution. Initiatives like Horizon Worlds have struggled with user adoption and technical hurdles, indicating the Metaverse may not evolve from current devices as seamlessly as envisioned.
On the superintelligence front, focusing on empathetic LLMs for companionship taps into real societal issues like rising loneliness (e.g. reports from the WHO highlight it as a global health threat).
This approach risks exacerbating dependency rather than alleviating it, potentially creating echo chambers of artificial interaction over genuine human bonds.
So yes, Meta shows some promise in these areas, but success is anything but assured. Their previous massive investments have largely failed to deliver the transformative changes they hyped.
I think AI is going to accelerate the standardization of processes and make bigger businesses more efficient and profitable. Smaller firms are toast: with growth in their core markets capped, the behemoths will diversify into everyone else's.
At the end of the day, I see it as a repeat of the 1920s, good and bad. Technology will drive discontent until we figure out how to tame it.
The world gets hotter, political control of nations continues to flip back and forth between conservative and progressive ideologies, Koreas don't unify, water shortages intensify, year of the Linux desktop.
> What's foremost on your mind with regard to the human predicament, and how we move forward?
> what's your vision for the future?
Honestly, I consider those two pretty different questions. At the very least, I'd approach them very differently in terms of time-scale. What's "top of mind" for me is more about the short-term threats I perceive to our way of life, whereas my "vision for the future" is - to my way of thinking - more about how I'd like things to be in some indeterminate future (that might never arrive, or might arrive long after my passing).
To the first question then: what's on my mind?
1. The rise of authoritarianism and right-wing populism, both in the US and across the world.
2. The increasing capabilities of artificial intelligence systems, and the specter of continued advances exacerbating existing problems of unequal wealth / power imbalances / injustice / etc.
Combine (1) and (2) and you have quite a toxic stew on your hands in the worst case. Now I'm not necessarily predicting the worst case, but I wouldn't bet money that I couldn't afford to lose against it either. So worst case, we wind up in a prototypical cyberpunk dystopia, or something close to it. Only probably less pleasant than the dystopias we are familiar with from fiction.
And even if we don't wind up in a straight-up "cyberpunk dystopia", one has to wonder what's going to happen if fears of AI replacing large numbers of white-collar jobs come true. And note that that doesn't have to happen tomorrow, or next year, or 5 years from now. If it happens 15, 25, or 50 years from now, the impact could still be profound. So even for those of you who are dismissive of the capabilities of current AI systems, I encourage you to think about the big picture and run some mental simulations with different rates of change and different time scales.
The hopeful version: People get their head out of a phone only to realize life is more than the next dopamine hit.

The dystopian version: The logical conclusion of what is detailed in the paragraphs above. Where being addicted to a handheld device is not only normal, but expected. Where "what it is that we actually want" is not an individual choice, but a corporate one. Where the idea of technofascism is introduced as "silly to suggest" and then normalized as "but we all know that that's where things should be headed in many cases." (see above)
If people care about the future, they need to destroy companies like BlackRock and Nestlé.
[1] Every mention of BitGrid here on HN - https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
interesting. can you expand on this?
> So yeah, what's your vision for the future?
The hopeful version:

The dystopian version: Every minor action in life gets intermediated via some VC-backed group of 22-year-olds who clip the ticket and then retire to become reactionaries.