https://theoldreader.com has been my go-to since Google Reader was killed. It's pretty good at sussing out the RSS feed of random blogs if one exists, too.
Sorry for the random question, but I’ve been trying to get more into RSS, and figure it’s worth asking someone who has a lot of experience - is there a reliable way to find an RSS feed for a given site, assuming it has one? Or is it a set of heuristics you try?
Are there good tools to RSSify sites that don’t have one?
Awesome, thanks! Especially for the pointers to those rssifiers.
For the first question, I should clarify that I'm hoping to just ingest these RSS feeds myself in various scripts. But yeah, makes sense that most of the good feed readers mostly take care of that.
Websites usually link to their RSS feed using a <link> element in the head of the page.
Browsers used to detect this and show an RSS icon near the address bar if the website you were viewing had a feed - and you could click the icon to see more details and subscribe.
With decent RSS apps, you can generally just paste in the URL of any page (or the site's homepage) and they will take care of examining the HTML to find the URL of the actual feed.
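If you're scripting this yourself (per the question upthread), autodiscovery is small enough to hand-roll. A minimal sketch in stdlib Python -- the URL is just an example, and real pages have edge cases this ignores:

    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    class FeedLinkFinder(HTMLParser):
        """Collect hrefs from <link rel="alternate"> tags that point at feeds."""
        def __init__(self):
            super().__init__()
            self.feeds = []

        def handle_starttag(self, tag, attrs):
            a = {k: (v or "") for k, v in attrs}
            if (tag == "link" and "alternate" in a.get("rel", "").lower()
                    and a.get("type", "").lower() in FEED_TYPES and a.get("href")):
                self.feeds.append(a["href"])

    def discover_feeds(page_url):
        with urllib.request.urlopen(page_url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        finder = FeedLinkFinder()
        finder.feed(html)
        # hrefs are often relative, so resolve them against the page URL
        return [urljoin(page_url, href) for href in finder.feeds]

    # print(discover_feeds("https://example.com/"))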
I use Folo, which has RSSHub built in. You simply search for a source you want, or add your own with a known URL for everyone to use. Otherwise you can use RSSHub with a reader of your choice.
That's actually what I've been doing, but sites that very clearly should have an RSS feed (specifically, our local governments' event calendar pages), don't, so I thought there might be some other route/heuristic/whatever that I've been missing :-(.
Depends how you define lost. I still use it every day.
Is it a popular mainstream thing? No. Does every single site offer feeds for every reasonable thing you could want to subscribe to? No. But does it still work quite well for those that want to use it? Yes.
What "war"? RSS is an open standard and still going strong. It doesn't need to win or compete or whatever business words from warfare are hyped nowadays. It just needs to exist. The genie is already out of the bottle, for 20+ years.
Discord, WhatsApp, and iMessage are all messaging applications that aren't directly related to the use case for RSS.
That leaves Twitter and Instagram as the two major sites for which RSS would be applicable, but which don't natively offer RSS feeds. And a cursory web search reveals the large number of solutions people have come up with for subscribing to content from Twitter and Instagram via RSS, indicating that there's significant demand for it.
It's also worth noting that with Twitter in decline, the main competitors gaining traction, BlueSky and Mastodon, do both natively offer RSS feeds.
On top of that, the entire podcasting ecosystem is fundamentally based on RSS, and it's still the primary mechanism for syndicating blog content.
Many moons ago I tried out a service [0] that did this with Pocket articles (although I used to send to Pocket via RSS). It was pretty good! It didn't last long though.
I suspect it's easier now to nail the layout if AI can read content before it goes to print.
AI is indeed a crucial part in solving the two most difficult challenges -- typesetting and curation, although we'll probably do things that don't scale for a little while before fully automating.
I sort of love this, but immediately wonder about curation.
My feeds are pretty unpredictable - sometimes I have 40 new articles in a day, sometimes just a few. The cheapness of digital consumption and interface makes it viable for me to skim titles and read, defer, or dismiss at my judgement. I don't want the entire feed printed out - not viable.
But if some SaaS is curating my feeds for me, I fear it'll turn into another algorithmized something optimizing for what exactly? At least the first-pass filter is explicitly set by me - feeds I subscribe to.
Curious to hear your thoughts on it, and wishing you luck.
Yeah- I get about 300 new items each day in my feed... of which about 1% on average are worth reading in full. There is a lot of duplication as well- many sites will cover a new gadget announcement, but I only need to read one to get the full scoop. Printing this would be overwhelming- and many of those sites are summaries of "source documents" (papers, release notes, etc) that I want to jump to.
I am sure people use RSS in many different ways though, it just doesn't seem useful to me.
I've had this same idea! Of course, it remains an idea never taken out of the garage. Are you delivering broadsheet, or formatting a printable file for users to print at home?
Typesetting is a challenge so broadsheet vs tabloid is undetermined, but whatever it ends up being, it will be delivered to the door. The newspaper paper is a crucial part, I believe.
I have had this idea pitched to me many times over the years, with requests to build a simple prototype practically forced into my dev queue .. but I always resist it.
The last time someone tried to convince me this was a good idea was just after the iPhone was announced, and before everyone and their monkey had a super computer in their pocket. It seemed like a good idea at the time, so we almost started - but my advice to the punter then was "lets see what the mobile phone industry looks like next year" .. well that put a pin in it.
Nowadays, I'm not so sure I'd be so willing to do this - again, because it requires the user do the printing - but if you were to, say, make this into a vending machine product, which users can walk up to in the street and walk away with a custom 'zine full of their own interests, you might be onto something.
Here in Europe we have a lot of old telephone booths converted into mini neighborhood free libraries. I've often wondered whether it would make sense to put a public printer in those libraries and let people print things .. seems like this would be a revolutionary new product to make, with printable broadsheets based on a custom RSS, an obvious killer app .. assuming someone can be found to maintain the printers.
(Off to find thermal paper for my ClockworkPi, which I always wanted to turn into a custom RSS printer in the toilet...)
Not yet, but we'll need beta testers. If you're interested and in a large metro area please reach out to ofek [at] nestful [dot] app mentioning said metro.
Even being aware that such a thing as "RSS" exists nowadays, implies a pretty high level of technical sophistication. Why would such users go out of their way to use up print stock, wait for it to be delivered, incur the energy/fuel costs of such delivery, etc. instead of reading it on their screen?
I’ve thought of this (worked in book sales, so the Espresso printers were around for print-on-demand books).
Recently I’ve been living in a cottage town and thought of this idea again… rather than reading on phones or tablets, people could read printed books with their favourite articles or blogs. But I think the actual distribution system would be the killer; unless it’s at a big resort, the transportation will kill the idea.
Let me blow your mind: Betamax was not better quality than VHS. There are many things that can explain why people believed that one was better than the other.
People confused Betamax with Betacam, Sony’s professional grade recording medium, which is absolutely better quality.
People conflated VHS’ long-play modes, which slowed the tape for extra recording time at the expense of quality, with its standard quality. That of course made those recordings look terrible. Betamax did not initially have this capability.
People listened to Sony’s own marketing. When they couldn’t compete on features, they banked on their reputation.
"When Betamax was introduced in Japan and the United States in 1975, its Beta I speed of 1.57 inches per second (ips) offered a higher horizontal resolution (approximately 250 lines vs 240 lines horizontal NTSC), lower video noise, and less luma/chroma crosstalk than VHS, and was later marketed as providing pictures superior to VHS's playback. However, the introduction of Beta II speed, 0.79 ips (two-hour mode), to compete with VHS's two-hour Standard Play mode (1.31 ips) reduced Betamax's horizontal resolution to 240 lines.[7]"
https://en.wikipedia.org/wiki/Videotape_format_war#Picture_q...
In tests done by Technology Connections, the difference was so small as to be inconsequential. It was technically better at its slowest speed, but you could barely perceive the difference and more importantly Sony disabled the feature in the vast majority of machines sold. People wanted more than 60 minutes out of one tape. They wanted 2 hours.
My local thrift store sells VHS decks for $12 and you can get pretty good movies for $2. Contrast that to compact cassette decks, which start at twice that and have a good chance of being non-functional. That place has the complete works of Barbra Streisand, but if you want music that anybody would want on cassettes, the sky is the limit for collectibles.
My impression is that the quality of VHS isn't terrible. The video is worse than DVD of course, but a lot of DVDs have nerfed soundtracks because they were mixed assuming you're going to play the 5.1 mix on a 2-channel system. Any deck you get now is going to support VHS Hi-Fi, and if you have a 5.1 system with some kind of Dolby Pro Logic, the soundtrack of a good VHS can be better than the soundtrack of an average DVD. (Blu-ray often has better sound not because the technology is better but because the 5.1 soundtrack is more likely to really be a 5.1 soundtrack.)
There are a few more things I didn't like about DVD: the blocky artifacts that you often see in the background (doors, bookcases, etc). Some of the earlier scenes in The Matrix are particularly bad. Fire/explosions are also very poor.
Beyond this, some discs bake a 16:9 movie into the 4:3 format, losing significant fidelity. Batman Begins was nearly unwatchable.
This of course doesn't get into the sound quality/mixing issues you mention... I wish they'd had something closer to H.265 at the time, as I don't mind a blurry background nearly as much as blotchy/blocky artifacts at similar or smaller sizes. A 2 GB H.265 movie from Blu-ray looks dramatically better than a 4+ GB DVD movie.
Sony charging exorbitant licensing fees to manufacturers of Betamax equipment also didn't help, a lesson it took Sony a few more decades and proprietary formats to finally learn.
Betamax's "standard" playback was better than VHS's "standard" playback... the issue was VHS "standard" could get something like 2 hours onto a typical tape while Betamax got about an hour. For actual content, Betamax tapes were recorded in an extended-play format, while most VHS tapes were in standard play. This dramatically reduced Betamax quality to be comparable or worse.
We started with a Betamax player. I think one underappreciated reason for VHS's win was that you could put a movie on one VHS tape, whereas the Betamax required two (at least at the time it mattered). And in an era of movie rental stores, that made a difference. Both in terms of logistics, but also in terms of the consumer having to load a new tape halfway through a movie.
I'm not entirely sure that the blog is totally correct in this part. I've always been fascinated by censorship (especially when it comes to movies/entertainment) and have seen/read a bunch of stuff about VHS in the 80's. Here in the UK we had a term, 'video nasties', for a bunch of horror films that were banned (Evil Dead being a very prominent example). Anyway, the general consensus I've come away with after watching/reading all the documentaries about that stuff is that the reason the porn industry used VHS and ultimately won the format war is that the prudish Japanese execs at Sony would not allow 'smut' on their precious new format, and if the industry could not license that format then it had to use VHS. The porn industry didn't choose VHS over Betamax; VHS was the only option available to it.
The only real advantage VHS had was that JVC broadly licensed the tech so anyone could manufacture devices and/or tapes while Sony heavily restricted Beta.
I, like many others on here, use RSS every day. In Thunderbird I have a whole bunch of feeds I subscribe to, one of which is this very website - Hacker News. I even made my own HackerNews extension in Thunderbird to make it even easier/quicker to open the links from the feed. RSS is great, I check them all throughout the day as I do my emails, all in the same app.
I'm still hoping for AI agents to mature to a point where they can be universal scrapers for my RSS. Have a headless client scraping and interpreting websites in the background... burn excess CPU cycles and dead dinosaurs to replicate the universal RSS dream.
I chatted with Dean Hachamovitch at a blogging conference as he was copying our (Firefox) tabbed browsing and RSS implementations. Soon after, a MS lawyer reached out to me to ask what they needed to do to re-use our RSS icon in the upcoming IE 7 release. We gave them the okay. I still have the jacket he gave me with "Longhorn loves RSS" on it.
I'd like to see a "new" RSS standard based around newline delimited json, where the summary text is a minor extension to GFM (to support left/right/spread images, minimal formatting, basically match medium.com options). This can allow a common reader to do a display that renders to their own liking (colors, font, etc).
Beyond this, maybe a framework to show a single header ad on the reader giving the revenue credit and money to the original content site.
The reason for newline-separated JSON is simply that you can do a partial content download in the reader... the most recent 100 KB or 2 MB or whatever... the most recent is on top, which allows a site to publish more than just the latest items without forcing you to grab everything. Or maybe just standardize a since=(iso-style-datetime) or last=## (number of articles) parameter.
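To make that concrete, here's a hedged sketch of a reader for such a hypothetical newest-first NDJSON feed, using an HTTP Range request to grab only the first chunk. The format and field names are invented for illustration, not any existing standard:

    import json
    import urllib.request

    def fetch_recent(feed_url, max_bytes=64 * 1024):
        """Fetch only the first max_bytes of a newest-first NDJSON feed."""
        req = urllib.request.Request(
            feed_url, headers={"Range": "bytes=0-%d" % (max_bytes - 1)})
        with urllib.request.urlopen(req) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        lines = body.split("\n")
        if not body.endswith("\n"):
            lines = lines[:-1]  # the byte range may cut the last entry in half
        return [json.loads(line) for line in lines if line.strip()]

    # Hypothetical usage, with invented field names:
    # for entry in fetch_recent("https://example.com/feed.ndjson"):
    #     print(entry["published"], entry["title"])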
I still think there is a future for web publishing - from indie to corporate - if people stop feeding the algorithm machine with both sides of the supply and demand market, and move it elsewhere.
People found the web more boring, because it became more boring.
They found the algorithm more interesting, because it allowed them to see what was going on with people they barely knew (from former school mates they'd lost touch with to celebrities without press filtering), and that was compelling.
But there's a next phase available to us, which is to make the web more interesting, entertaining and compelling again.
I love that b3ta.com still exists. I love that metafilter.com is moving on. I think it's great that web comics I love still publish to RSS.
I just think more of us need to provide more demand, and more people will wake up to supply, and the flywheel will start to turn.
RSS beat ICE, and it can beat Meta and X if people want it to, albeit for different reasons.
Then Google killed it. They made a great product, Google Reader, then killed it, and after that huge numbers of RSS feeds just faded away.
Ironically, my Microsoft feeds are pretty active, xkcd is still there, and The Daily WTF is still going strong... but a lot of my feeds are just dead.
The author seems to live in a bubble where people are aware of RSS feeds. This article is the first time I'd heard of ICE. While multiple companies are listed as being behind ICE, no examples are given of websites that actually provided a feed for it.
Meanwhile, RSS is barely relevant today. For decades (Youtube turned 20 this year), people have had access to feeds curated by "the algorithm" operated by a commercial interest (hoping to maximize the amount of ads you look at); and most people seem to prefer it that way, if they're even aware of alternatives.
Interesting, I’d never heard of that ICE. Seems that it could be considered a very very early idea in line with ActivityPub, which I also don’t really know much about.
I think, as someone who has an RSS feed on my blog, that RSS is a total mess and Atom was probably the better choice.
Maybe even some modern JSON based format would be OK, but maybe that’s what ActivityPub is?
Anyway, after dealing with the mess of images and inline HTML with CDATA in RSS, I have complete fatigue of the whole endeavour.
Because people care about publishing or getting updates, not about the thing delivering them being valid XML or not.
RSS works. Atom splitting the standard into two probably did more harm than good. In the end it doesn't matter since every reader supports both and both do the job well.
>All RSS had to do to weather ICE, Twitter, AI, and whatever comes next
RSS did not weather Twitter. Social media is huge compared to RSS. It turned out that singular recommendation feeds are able to push URLs around better than needing every site to build in feeds themselves and then still requiring someone to turn those feeds into a singular feed for the user.
First, RSS has a bit more friction. Smashing the follow button on Twitter et al. is faster than adding the feed to your RSS reader of choice, unless your OS has support for a default RSS app.
Second, discoverability. Just like with any distributed system vs monolithic platform, you need to find what to read yourself. For some niches this works well. If you are a software developer/hacker, you are more familiar with blogs in your area of interest. But if you have a wide range of interests you’d need to find the blogs yourself and hope their RSS feed is well formatted.
Third, the algorithm. A monolithic platform can do more to try to mix in new content based on your interests and intelligently mix up the content from sources you follow. This is of course controversial because feed algorithms can also try to cram bullshit into your feed or hide important stuff from you or create an echo chamber. But in the best case scenario they can also expose you to new sources of content you wouldn’t have found otherwise. An RSS reader would mean it is up to you to do this discovery which is more friction.
And ultimately content creators realized that they get more eyeballs on their stuff by using platforms like Facebook, Medium, Instagram, Twitter, than on blogs especially since blogs tend to be then repackaged by blog spam bots, Google’s AMP, and now LLMs.
So IMO RSS is just too manual and requires too much work. And of course since you can’t effectively advertise through it there is less incentive for creators and platforms to support it.
BlueSky and Mastodon both support RSS feeds. The loss from Google Reader dying was huge, more so than Twitter, but it’s probably balanced by the growth in Podcasts.
RSS feels like a cable. Cables won! Because you need them to power your devices and pipe your home internet. Cables lost! Because of 5G and WiFi. Maybe cables don't care, they just do their job.
So did Twitter pre-Elon. I moved a number of "public personalities" with high-volume feeds from my follow list to my RSS reader. I liked what Merlin Mann, or Parker Molloy, or John Green had to say, but I wasn't going to interact with them, and their loquaciousness made it hard to keep up with people I followed there that I actually knew and interacted with.
Then I remembered that Twitter was once referred to as "micro blogging," so I put those folks in my blog list on Feedbin, and was happy again.
I don't think that market is zero-sum, so the question is not about who "won", it's whether any player lost. Despite Twitter being big, RSS is still widely used and, perhaps more importantly, widely supported and thus usable. That counts as weathering it in my book.
I was going to say, RSS is not as big as I remember it being back in the late 2000s. I remember people having RSS clients, myself included. Now I can't remember the last time I used one. Where RSS is most prominent, I guess, is podcast feeds, which are based on RSS to my understanding.
OTOH, I can't imagine not using an RSS reader. I'm sitting here with Liferea on my desktop connected to my TT-RSS server, which I use to manage pretty much everything I subscribe to: blogs, podcasts, YouTube channels, webcomics, subreddits, and several aggregators including HN. Having to access all of those separately via their own websites sounds like a nightmare.
Back when Twitter was less controversial, I remember tons of techie folks gleefully saying that they didn't bother with RSS any more because Twitter was better.
RSS is alive and well. It’s rare that I find an interesting website where RSS makes sense and it doesn’t exist. Even if they don’t advertise it, popping the website’s address into a feed reader tends to be enough to find it. Even Mastodon and Bluesky profiles have RSS feeds.
Mozilla decided to remove its fantastic live bookmarks feature, which seamlessly integrated RSS within bookmarks, in 2018 with Firefox 64. Someone then made an extension, which was ported to Chromium, and then back again to Firefox because the original one was abandoned.
A dedicated extension is needed to have that feature back. Chrome needs one as well, and so does Edge; only Vivaldi and Opera come with built-in feed readers. There are of course standalone applications, but those seem to be a niche nowadays.
I found an old RSSOwl OPML file from 2014 last week and decided to see what's still up. I found some RSS readers on Flathub but sadly, the majority of what I was visiting back then has died.
Ofc I did not mean they literally killed it. But Google improved and pushed Reader to the point where it was the ubiquitous client, then terminated it, leaving everyone at the mercy of poor clones or paid options. This was at the precise time social media platforms gained traction, and while some kept up RSS, the majority of personal publishing moved to Facebook and later other closed platforms.
I still use RSS daily for keeping up with websites, but for 99% of internet users it is now either a technology embedded in an application, or just not heard of.
So in that sense, the poster child of "Web 2.0" was taken out back and kneecapped.
A lot of discussion around RSS revolves around the format for the data/metadata (e.g. the Atom feud) but the real problem with it is this:
To consume an RSS feed you poll it. There are two polling speeds: too fast and too slow, and it's possible to be both at the same time.
Note the struggles of this Karen to turn RSS from a simple stateless protocol into a complex stateful protocol, and she'll ban you if you ever reset your cache and rerun your fetcher because your cache got corrupted or you suspect it might have been corrupted.
You really want to have a stream of feed items and to be able to: (1) replay the whole stream all the way from or to the beginning and (2) query "what got added after time t?" and just get that. ActivityPub accomplishes this but people don't really like it. For Dave Winer it is all blub but even if he doesn't believe in the Fedi, he's on it.
I use a paid service because it does all the polling for you and hits your webhook whenever a new feed item appears. My webhook is about 15 lines of Python running as a Lambda function that posts items to an SQS queue, and my YOShInOn RSS reader just drains the queue at its convenience. The pricing at 10 cents/feed/month is a bargain for high-volume feeds like MDPI, arXiv, and The Guardian [1], but unfortunately I can't really afford to subscribe to 2000 little blogs that post maybe once a week at that rate. I wish there were more Planets.
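Roughly the shape of such a webhook, if you're curious -- the queue URL and payload handling here are illustrative, not the exact code:

    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/feed-items"  # example

    def handler(event, context):
        # API Gateway hands the pushed feed item to the Lambda as the request body.
        item = json.loads(event["body"])
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(item))
        return {"statusCode": 200, "body": "queued"}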
It’s a pity that this is the bottom-most comment, and equally it’s a shame that the slur “Karen” made its way into the wider lexicon, spoiling an otherwise informative remark.
Correct me if I’m wrong, but is Winer’s somewhat recent effort with FeedLand any different from Planet?
(2) I really am mad at Rachel for this and that's from someone who's been writing webcrawlers [1] since 1998, been an RSS innovator, and been responsible for complex systems when they fail.
(3) Maybe I am missing it but I don't see an actual feed for that URL, I see only an OPML file. Dave is really gay [2] for OPML files but I'm not, because I still have to work to fetch all the items. Yet, visually the OPML file and blogroll look like a planet, and you're not the first person who's pointed to Dave's blogroll as a solution as opposed to the problem that I see it as.
(4) Looking at the head of the list I think "Daily Kos" and "404 Media" suck, but I already subscribe to many of them, like "Ars Technica" -- looking at the tail of the list I see there are gems that I'm not getting. If those things were getting aggregated topically it would please both me and Rachel.
> If those things were getting aggregated topically it would please both me and Rachel.
I think I catch what you’re getting at and it reminds me of what seems to be a primary gripe with RSS—compromising an otherwise decent spec as far as I can tell—discoverability and curation.
I can’t grasp the more technical issues surrounding polling just yet so I can only intuitively get how it corresponds with the other issues. In spite of this, and leaning again on my intuition, I didn’t get a good feeling from Rachel’s blog post when I first came across it either.
And yeah I’m not sure how FeedLand works under the hood, I just matched Wikipedia's description of a planet with what I knew FeedLand looks like on the frontend.
I also know there’s been some “folksonomic” efforts for curating lesser-known feeds, but to be honest I’m not fond of what these networks of folks appear to be into. Compare your distaste for Daily Kos and 404 Media (which I share) to the vibe I’m trying to avoid on that front.
Don't know what you're so angry with Rachel about. I provide feeds for readers, and many of them are way too greedy and frequently do hammer websites, often unnecessarily. Mix in all the rampant AI bots and you've got a recipe for an extremely expensive server bill.
Getting angry at someone who's trying to keep their server costs down to provide you something for free is kind of weird.
- I've looked at the abyss of bankruptcy from server bills. I actually think it's a hell of a lot worse than Rachel does, I thought that 15 years ago, and I've suffered worse
- The whole discussion around RSS has been naive in every respect from the very beginning, for instance Dave Winer thinking we care about him publishing a list of the tunes he listens to back when you couldn't actually listen to them. (I'll grant that in the age of Apple Music, Spotify, and YouTube, such things may have caught up with him.) There are never any systems thinkers, just people who see it all through a keyhole and can't imagine at all that anyone else sees it another way.
- To use a Star Wars analogy, Google is the evil empire that blew up our planet with a Deathstar 10 years ago and now we're living on the wreckage in an asteroid belt. I see the AI upstarts as the Rebel Alliance that at the very least reset the enshittification cycle back to the beginning and created some badly needed competition. Opponents of A.I. crawlers are brainwashed/living in the matrix and de facto defending Google's monopoly, slamming the door on the exits from enshittification that A.I. agents offer (e.g. they look at the content and drop out the ads!). They think they're the rebel alliance but are really the stormtroopers suppressing the resistance until the Deathstar charges up for the next shot.
- Rachel's supposedly some high-powered systems programmer or sysadmin or something but is just as naive as Winer. We're supposed to "just" use a cache. Funny, the reason why the web won out over every other communication protocol is that you can "just" use curl. curl by its nature can't support caching because it's a simple program that just runs in one process, and if you wanted to have a cache you'd have to have a complex locked data structure which would force a bunch of trade-offs... like I am downloading something over my slow ADSL connection that takes two hours but I also need to clear my cache, so do I force the download to abort, or hang up the script that needs the cache cleared for two hours? curl is "cheap and cheerful" because you just don't have to deal with all the wacky problems that "clear the cache" or "disable the cache" clears up like a can of Ubik. But in the age of "Vibe Coding", solutions that almost work are all the rage, except when you finally realize they didn't actually work, you clear your cache and rerun and BAM, you get banned from Rachel's blog because you hit it twice in 24 hours.
Web browsers are some of the most complicated software in existence, ahead of compilers (they contain compilers) and system runtimes (they are a system runtime), right up there with operating systems (minus having to know things that aren't in the datasheet to write device drivers, at least). For the first 15 years you could not trust the cache if you were writing web applications; somewhere around the 2010s I finally realized you could take it for granted that the cache works right. I guess implementations and the standards improved over time, but all this complexity is some of the reason why it is just Google's world and we all live in it, and there are just two big techs that can make a browser engine plus one unaccountable and out-of-touch foundation.
So I wish Rachel would just find an answer to the cost problems (Cloudflare R2?), or give up on publishing RSS, or advocate ActivityPub, rather than assume we care what she says enough to follow her rules for her blog without seriously confronting what a system-wide solution would look like for the problems RSS tries to solve and the problems it poses.
This is really not much of an issue if both sides implement HTTP caching (If-Modified-Since or ETags). Atom also adds pagination, which allows you to keep all old items accessible to supporting readers while cutting down the main feed to just the last entry. The little bandwidth a well-managed feed takes is really not worth giving up the ability to host the feed statically, which the overly complex ActivityPub can't do.
HTTP might be better in 2025 than it used to be, but historically the cache is as much a problem as it is a solution. That is, when you have heisenbugs the answers are frequently "clear the cache" [1] and "don't use the cache" [2], and it's often a better strategy to "rip and archive the whole site right now and come back in six months and make a fresh rip" if you're working at a large scale and can tolerate being up to six months out of date.
In general database-backed sites struggle to implement if-modified-since since a fully correct implementation has to traverse the graph of all database objects that are looked up in the request which costs about as much as… the request. Your cache needs a cache, and you can make it work at the system level if you have a modified-date cache and always invalidate it properly. If you are doing that you might as well materialize a static web site fronting your site the way the WordPress supercache works — then you'll find your site crashing a lot less!
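A sketch of that "cache for your cache" idea, assuming a single site-wide timestamp that every write path bumps (names and framing are illustrative):

    from datetime import datetime, timezone
    from email.utils import format_datetime, parsedate_to_datetime

    _last_modified = datetime.now(timezone.utc)  # bumped by every write path

    def touch():
        """Call this from every handler that mutates feed-visible data."""
        global _last_modified
        _last_modified = datetime.now(timezone.utc)

    def feed_headers(if_modified_since):
        """Answer If-Modified-Since without traversing the object graph."""
        if if_modified_since:
            try:
                if _last_modified <= parsedate_to_datetime(if_modified_since):
                    return 304, {}
            except (TypeError, ValueError):
                pass  # malformed header; fall through to a full response
        return 200, {"Last-Modified": format_datetime(_last_modified, usegmt=True)}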
I’ll admit ActivityPub is complex, but HTTP caching is complex in a particularly poisonous way, in that there are so many alternate paths. An ActivityPub system (client+server) could be 100% correct but an HTTP-based system might not have a clear line around it and might never be quite right. A stateless system could run for 10 years without trouble; it might take you 10 years to realize a stateful system was feeding you corrupted data.
> In general database-backed sites struggle to implement if-modified-since since a fully correct implementation has to traverse the graph of all database objects that are looked up in the request which costs about as much as… the request.
That's only true if your implementation favors editing performance over GET request performance ... in which case you get what you ordered. If your feed requires any database connections then that's a choice you made.
I disagree that HTTP caching is complex. The basic primitives are:
- An indication until when the returned document can be used without checking for updates.
- A way to check if a cached document is still valid (based on either modification time or id).
This provides everything you need to implement an efficient feed reader.
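As a sketch, those two primitives come down to a couple of response headers; hashing the feed body for an ETag here is just one choice among several:

    import hashlib

    def feed_response(feed_bytes, if_none_match=None):
        etag = '"%s"' % hashlib.sha256(feed_bytes).hexdigest()[:20]
        headers = {
            "Cache-Control": "max-age=900",  # don't even re-check for 15 minutes
            "ETag": etag,                    # after that, revalidate with this
        }
        if if_none_match == etag:
            return 304, headers, b""  # still valid: no body to transfer
        return 200, headers, feed_bytes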
Do you know (or have an idea about public discussions indicating) why ActivityPub never ended up being used for feeds? AP seems to mostly be Mastodon and Lemmy but it obviously has a bunch of fixes for problems inherent in RSS, Atom, and OPML.
I’ve seen a lot of Dave Winer saying his blub is good enough and, abstractly, that ActivityPub is too complex. There’s pixelfed too. Practically, if I want to have an ActivityPub feed for my blog I might host it on Mastodon, and if the character limit is too little you can have a server like
If anything characterizes the RSS community it is a lack of imagination and rejection of the last 25 years of work in relevance, ML and UX. RSS readers are still failing with the same failing interfaces that failed 25 years ago.
That said, the people who are still using it feel satisfied and other than Rachel that aren’t a lot of people who will try to stop you if you do try something ambitious. You just gotta poll poll poll and poll poll poll some more or pay somebody to poll poll poll for you.
I know. Perhaps I should have said "one of those misanthropic online personalities who kills 1000 contributions to open source you never knew you would have had because of their public bad attitude" and the fact that they have supporters is why Facebook wins and open systems lose.
RSS isn't a format that's super-helpful for publishers. There are a variety of reasons why. But it's an absolute dream for consumers. And that's what makes it so awesome, so powerful.
Case in point: I saw someone had unsubscribed from one of my email newsletters, and when I went to go read the "reason why" field, they'd filled out: "subscribed to the RSS feed instead."
That's right, my email newsletter has an RSS feed (thanks Buttondown!), and they prefer to receive the newsletter that way rather than via email. And can I blame them? Absolutely not! I love RSS. Is it better for my vanity to have their email address in my database instead, rather than some nebulous XML file going out to who-knows? Of course. But again, this format keeps on winning year after year because it's one of the best consumer-first features of the open web.
The rise of email-only newsletters has been irritating. Thankfully a lot of readers (I use Inoreader) let you create mailboxes that just turn into entries in your reader.
https://kill-the-newsletter.com/
I like the idea from 404 Media of providing a paid full-text RSS feed. This makes it a lot better for publishers.
Yes, Talking Points Memo also has full text RSS feeds for paid subscribers.
That's an extremely rare edge case that fails to justify your point spectacularly.
I thought the comment said "it was an absolute dream for consumers" but actually it says "it's". Sorry to burst your bubble, but if you ask any normal person who does not spend 10 hours on HN per week, chances are they have never heard of the term RSS in their life.
I think there are a lot of us out there in fact. Managing a queue of feeds in an RSS reader is much more pleasant than having newsletters mixed with email. Separation of purpose is a good thing for most, imho
Does not help that the browsers killed native RSS support. To bring up the infamous example, many normal people were using Google Reader.
Nearly everyone I know has a favorite podcast. Even if they never heard of RSS, they are probably using it.
Oops, apparently people don't like me saying some truth out loud.
- I still use RSS
- Some major platforms still provide RSS, which makes me use them (I do not use Twitter, because it does not provide RSS)
- If not for RSS I would not be using Reddit
- the moment a platform drops RSS, I drop the platform
Links:
[0] https://github.com/rumca-js/Django-link-archive - my own RSS reader
I forgot Reddit ever had RSS, and I think they're doing their best to forget it, too.
Viewing the source of a subreddit on old.reddit.com shows an RSS link; viewing it on the new domain does not.
During the whole API debacle all the RSS feeds in my reader got rate limited or blocked, so I just stopped using Reddit. Maybe I'll give it another go if they actually start allowing RSS again.
I've given up on Reddit, after all of their moves that seemed to be explicitly hostile to their users. I know some people still get value out of it, and I'm happy for them, but I'm not particularly interested anymore.
It's more bots / paid actors than real conversations at this point anyway. They're just milking the honeypot for that LLM training money until it runs out.
Is there a community that you’ve instead joined?
Not really; I read Hacker News more, now, and spend more time in my Discords & Slacks that I was already in.
But I also have a child & a job, so in general my available time to goof around online is less.
Most people seem to be in Discord. The quality is just as bad as Reddit, but at least it's not overrun with bots and there isn't an upvote system causing an echo chamber effect.
The quality of the content is terrible too now that you get banned for saying the wrong words. And I'm not talking about "woke" stuff - you can get a permanent site-wide ban for saying things like Elon Musk is a Nazi, or just at random.
Just add .rss to the end of the URL to get RSS.
I mean, I know how to get there. I was pointing out that Reddit itself doesn't really consider RSS a first-class citizen of how to get to its content.
I feel like some unsung hero is quietly maintaining it, trying their best not to draw any attention. With how walled-off social media has become and how unadvertised the feature is, I thought it'd be gone by now.
if I REALLY care, then I end up making an agent in my Huginn instance (and hope there's no Cloudflare etc. verification in front that blocks scraping):
https://github.com/huginn/huginn
RSS has more of a commercial problem. You can’t put ads in it, so sites are incentivized to force a site visit. Which in turn forces them to withhold the bulk of the value from the feed itself, i.e. just include the first sentence or two. Which kills the usefulness of the feed as anything more than headlines and a link. Headlines in turn are all clickbait these days, so those don’t have much info density either.
> You can’t put ads in it
That's a silly thing to say. Of course you can put ads in it, since it allows linking to things. What you probably mean is that it's not as easy as embedding some Google ads markup in your sidebar.
What you can't do is add all sorts of invasive tracking to RSS to confirm that the user saw the ad and that it wasn't filtered out. You have to get more creative with wording that works the ad into the descriptions for the articles, and even then, there's no guarantee.
Advertisers love to burn money, but they draw the line at not being able to verify that the spend did what was promised.
You can add an image, can’t you? So the situation is not worse than email, and there’s plenty of tracking there (that good email clients block, but that doesn’t matter in a world where almost everyone uses the Gmail web UI).
Of course you could manually put ads in your RSS feed. What you can't do is use an ad network (3rd party javascript), but if RSS was actually popular, that could be solved.
> that could be solved
Let's not. Please.
It's a little worse than email.
With email, you normally use unique image and link URLs for each recipient, so you generally know who's opened the email and what they've clicked and can map that to their email address and whatever other information you have about them.
With RSS, you generally don't have any information about who's accessing the feed other than an IP address. It is possible to require users to log in and receive a unique RSS URL, which is what podcasts often do to give paid subscribers access to paywalled episodes, but that's not common for web RSS.
The exact same techniques used for email can be used for RSS. You could generate unique links for RSS too, based on requester headers, the same way web fingerprinting works. There'd be a bit of computational overhead compared to serving a static XML file, but it seems easily doable.
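A hedged sketch of what that could look like: hash a few request headers into an opaque requester id and route item links through a hypothetical logging redirect, much like email tracking links:

    import hashlib
    from urllib.parse import quote

    def requester_id(headers):
        """Boil a few request headers down to an opaque fingerprint."""
        raw = "|".join(headers.get(h, "") for h in
                       ("User-Agent", "Accept-Language", "X-Forwarded-For"))
        return hashlib.sha256(raw.encode()).hexdigest()[:16]

    def tracked_link(item_url, rid):
        # The /r/ redirect endpoint is invented for illustration: it would log
        # the requester id, then forward the reader to the real article.
        return "https://example.com/r/%s?to=%s" % (rid, quote(item_url, safe=""))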
The small problem was that, the way feeds worked in practice, you had various services like Reader and Feedly caching the source feed and consolidating everything for their users. Multiple startups were built around this.
Even the injected-ads idea was tried with companies like FeedBurner, later acquired by Google.
Not that much different from any other form of content aggregation. Web links posted to HN or Reddit also either strip out individualized links or conflate everything together under the same link. There are plenty of solutions to this.
If you're generating feeds on the fly with tracking metadata based on the requester, you can identify aggregators, and treat them equivalently to social media platforms where users circulate normal web links. You still get click-throughs to the underlying content from the end users, and you'll know the aggregator was the referrer.
Many podcasts use direct ad injection using metadata from the request to the enclosure link -- that seems to work well enough, and seems like the sort of thing that could be used for other content than just audio and video.
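A simplified sketch of that flow. Real systems stitch audio segments together server-side; this one just picks a pre-rendered episode variant (each with a different ad baked in) from request metadata at enclosure-fetch time. The variant files and geo header are hypothetical:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    AD_VARIANTS = {"US": "/files/ep42_ad_us.mp3", "DE": "/files/ep42_ad_de.mp3"}
    DEFAULT = "/files/ep42_ad_generic.mp3"

    class EnclosureHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Assume a CDN or geo-lookup layer has set this header upstream.
            country = self.headers.get("X-Country-Code", "")
            self.send_response(302)  # redirect the podcast app to the chosen file
            self.send_header("Location", AD_VARIANTS.get(country, DEFAULT))
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), EnclosureHandler).serve_forever()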
I really really really object to being tracked in emails with poisoned links (without being told or having a sensible opt-out, usually, so also illegal under GDPR I believe) and it is one reason that I will not sign up to them.
You can have that on your website and put a summary and a link to your website in the feed. That's been a common approach for a while.
If RSS had been more common, I imagine the bigger RSS readers (bearing in mind one of them was from Google!) would also have standardised on other ways of tracking clicks and ad views and all the rest. There just weren't enough people interested in RSS to make any of that worthwhile.
I say this as a user of RSS and someone who publishes a (very sporadic) RSS feed. It's a niche, because most people don't want to curate their own feeds.
This is definitely false; see Facebook's pivot to video.
Wasn't that famously based on advertising network lies?
FWIW, there were attempts to build an RSS Ads product at Google: https://www.eweek.com/c/a/Search/Google-Puts-RSS-Advertising...
Wherever ads are the only way to create revenue streams, that should be considered a commercial problem in itself. It should be way easier to pay (and charge) securely for services like this by now.
Most ads seem to be unwanted, but enough of them seem to work to make the nuisance worthwhile. People regularly stumble upon content randomly, and get exposed to ads.
Paying for content is a conscious action, it has a higher activation threshold than just clicking mindlessly on something that looks fun.
Then, transactions are expensive; micropayments are not a thing.
Subscriptions alleviate that a bit. Large middlemen alleviate it even more: Apple and Google can make micropayments like $0.50 viable within their ecosystems, so apps or in-app purchases can be tiny, and paying users can have ads removed. Attempts to do something similar for websites never took off, sadly.
I only put a summary of each item in my RSS feeds because I do not want to be redundantly sending the same body data over and over but I do want a complete history available easily in at least some versions of the feeds. And some of the primary content is audio (or video) so cannot be dumped into the RSS usably.
There are also some efficiency-related shortcomings. I'd wager that most feed readers either implement conditional requests incorrectly, or they don't implement conditional requests at all. Polling rates also tend to be stupid, on the order of 1-30 minutes with no regard for how often any given feed actually has new posts. This creates server-side pressure to make your feed as small as possible, which always means excluding content.
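For reference, a correct conditional request isn't much code. A minimal sketch of the client side, remembering the validators between polls:

    import urllib.request
    from urllib.error import HTTPError

    def poll(feed_url, state):
        """Return new feed bytes, or None if the server says nothing changed."""
        headers = {}
        if state.get("etag"):
            headers["If-None-Match"] = state["etag"]
        if state.get("last_modified"):
            headers["If-Modified-Since"] = state["last_modified"]
        req = urllib.request.Request(feed_url, headers=headers)
        try:
            with urllib.request.urlopen(req) as resp:
                state["etag"] = resp.headers.get("ETag")
                state["last_modified"] = resp.headers.get("Last-Modified")
                return resp.read()
        except HTTPError as e:
            if e.code == 304:
                return None  # cached copy is still current; nothing transferred
            raise

    # state = {}   # persist between polls
    # body = poll("https://example.com/feed.xml", state)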
Yes indeedy. My one-update-per-month feed gets pulled over 10 times per hour by Podbean for example:
https://www.earth.org.uk/note-on-site-technicals-99.html#202...
10 times per hour? Yikes.
I'm a small enough deal that I can call out specific readers at the top of a new post. A couple weeks ago I put up a post that started with the aside "hey one of you has a Turkish IP and you're pulling the whole feed every 10 minutes, please stop doing that."
I think that I probably average < 1 listener by any sensible metric! B^>
(The podcast where I capture primarily audio material such as voice or sonification that might as well also be presented in this alternative channel form...)
I'd be interested to listen to your podcast if you were to ping me its URL via one of the contact methods in my bio...
The first point sounds like an implementation issue rather than a protocol one. I also don't agree that most readers have this problem.
Polling rate also has nothing to do with frequency of updates if you care to receive those updates in a timely manner. I haven't seen a reader default to 30 minutes or less.
Probably in both cases you just notice the bad implementations more because they make more requests.
And Atom supports pagination so you can limit the main feed url to be just one entry while still allowing for clients to retrieve older ones.
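For reference, a rough sketch of what an RFC 5005 paged feed looks like: the main document carries only the newest entry, and a rel="next" link points at older pages (all URLs here are placeholders):

    <?xml version="1.0" encoding="utf-8"?>
    <feed xmlns="http://www.w3.org/2005/Atom">
      <title>Example Blog</title>
      <id>https://example.com/feed</id>
      <updated>2025-01-01T00:00:00Z</updated>
      <link rel="self" href="https://example.com/feed.xml"/>
      <!-- older entries live on the linked page(s) -->
      <link rel="next" href="https://example.com/feed-page2.xml"/>
      <entry>
        <title>Newest post</title>
        <id>https://example.com/posts/newest</id>
        <updated>2025-01-01T00:00:00Z</updated>
        <link href="https://example.com/posts/newest"/>
      </entry>
    </feed>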
My only Atom feeds are already short, and the paging mechanism would I think be hard to do in my static site, but yes, thanks for the reminder!
Why can't you put ads on RSS? Either in the story itself (by the site or aggregator) or as a "promoted item" in the feed (by the aggregator). If anything, the Google Discover (or whatever it's called) is not too different, just that you don't control which exact news sources you're subscribed to.
I can imagine an alternate timeline where Google Reader turned into a sort of Twitter (or FB or IG) feed.
I'm just here to say that I'm still bitter about Google Reader. :-(
Ads are a later innovation layered on top of existing web standards. A newer generation of changeable, scripted ads emerged that was not compatible with XML feeds. The old generation of ads was still compatible but not scalable, similar to how podcast sponsor ads work (immutable after publishing), and so did not get much traction.
Some podcasts do dynamic ad insertion. It has a plethora of inconveniences, but it does exist. I’d rather it didn’t but here we are.
Podcasts ads are definitely mutable after publishing. Dynamic ad insertion has been a thing for years. If you download an old Stuff You Should Know podcast, you will get a new ad even based on where you were when you downloaded it.
And where you(r GeoIP data) were as well. I remember a lot of local car dealer ads being on that show for a while.
Sounds about right.
Podcasts inject ads into the content: from RSS you get the link and description of the episode, and inside the episode are ads.
I guess that's why RSS is still a thing for podcasts? :-)
why not? you can format ads as an entry in the rss feed. in fact it would not even bother me. i could train my rss reader to detect the ad based on keywords and mark it, and even if not, i'd just skip over it manually, mark it as read, and it's gone. as long as the frequency of ads is not too much, that is better than an ad on a website that is permanently visible.
It's more of a one-way problem. Authors don't know how far the RSS feed reached, nor who reads the articles.
Readers don't have a standard way to reply to the author (the way email provides one).
FreshRSS on Android will fetch the full article. Such a good feature; I wish more applications had it.
I host freshRSS and it's been amazing for me.
Also https://lireapp.com on iOS and macOS, has optional local cache of text and images for offline reading of RSS feeds.
Man, RSS still brings me so much nostalgia. Anyone still feel the pain when Google discontinued its RSS reader?
Feedly works more or less the same. I have no issues with it.
> Anyone still feel the pain when Google discontinued its RSS reader?
Yes, but mostly because of a lost opportunity.
I was working on my own web based reader when Google made a significant upgrade to their reader. It was similar to what I had made, so I thought it would be foolish to compete with Google and stopped working on it.
I wonder where RSS would be now if Google had not discouraged potential competitors.
John Gruber (Daring Fireball) has made his entire living for 20 years by putting a once-a-week sponsored post in his RSS feed, alongside the full content of his posts from his site.
I hate that the self hosting newsletter does this.
> Massive tech companies tried to own syndication. They failed.
Well, RSS won the battle, but lost the war.
Yet I still use it on all devices and nothing beats it. Moved to Feedly when Google Reader died.
For apple ecosystem best client is https://reederapp.com/classic/
Arguably the best is NetNewsWire, which has been around in various forms for over 20 years and is still developed today https://netnewswire.com
https://theoldreader.com has been my go-to since google reader was killed. It's pretty good at sussing out the rss feed of random blogs if one exists, too.
NetNewsWire doesn’t have an in-app browser, which can be a dealbreaker (it was for me, last I tried it).
Cannot agree more.
Sorry for the random question, but I’ve been trying to get more into RSS, and figure it’s worth asking someone who has a lot of experience - is there a reliable way to find an RSS feed for a given site, assuming it has one? Or is it a set of heuristics you try?
Are there good tools to RSSify sites that don’t have one?
> is there a reliable way to find an RSS feed for a given site, assuming it has one?
Any half-decent feed reader app will do it for you after just pasting the website’s address.
> Are there good tools to RSSify sites that don’t have one?
https://openrss.org
https://rss-bridge.org
https://createfeed.fivefilters.org
And for newsletters:
https://notifier.in
https://kill-the-newsletter.com
Awesome, thanks! Especially for the pointers to those rssifiers.
For the first question, I should clarify that I'm hoping to just ingest these RSS feeds myself in various scripts. But yeah, makes sense that most of the good feed readers mostly take care of that.
Websites usually link to their RSS feed using a <link> element in the head of the page.
Browsers used to detect this and show an RSS icon near the address bar if the website you were viewing had a feed - and you could click the icon to see more details and subscribe.
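For example, feed autodiscovery keys off markup like this in a page's <head> (the URLs are placeholders):

    <link rel="alternate" type="application/rss+xml"
          title="Example Blog (RSS)" href="https://example.com/feed.xml">
    <link rel="alternate" type="application/atom+xml"
          title="Example Blog (Atom)" href="https://example.com/atom.xml">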
I use this Firefox addon which replicates that functionality: https://addons.mozilla.org/en-GB/firefox/addon/feed-preview/
FreshRSS is a good self-hosted RSS feed reader, and you can configure it to scrape non-RSS webpages for updates too: https://danq.me/2022/09/27/freshrss-xpath/
Great tip on the <link>, thanks a lot! Also the pointer to FreshRSS, I might end up running an instance of that in our basement.
I use RSSHub Radar which finds both native feeds and some RSS-ified feeds for websites that don't support it. https://github.com/DIYgod/RSSHub-Radar
Ah this is great, thanks!
RSSHub radar to detect rss feeds. And you can write handlers for RSSHub to RSSify websites. Both open source.
With decent RSS apps, you can generally just paste in the URL of any page (or the site's homepage) and they will take care of examining the HTML to find the URL of the actual feed.
I use Folo which has Rsshub built in. You simply search for a source you want, or add your own with a known URL for everyone to use. Otherwise you can use Rsshub with a reader of your choice.
Check the source code. Look for "rss". If that returns too many hits, then search for "application/rss+xml".
That's actually what I've been doing, but sites that very clearly should have an RSS feed (specifically, our local governments' event calendar pages), don't, so I thought there might be some other route/heuristic/whatever that I've been missing :-(.
Exactly the approach that I've been using for years. Manual, but works!
Google makes an extension for it - https://chromewebstore.google.com/detail/rss-subscription-ex...
You can link it to your reader so you just click the button and it adds the feed into it.
I use RSS inside Telegram using a bot (should work with Matrix, Teams, etc. as well). It allows syncing read state across devices and gives nice previews.
Depends how you define lost. I still use it every day.
Is it a popular mainstream thing? No. Does every single site offer feeds for every reasonable thing you could want to subscribe to? No. But does it still work quite well for those that want to use it? Yes.
What "war"? RSS is an open standard and still going strong. It doesn't need to win or compete or whatever business words from warfare are hyped nowadays. It just needs to exist. The genie is already out of the bottle, for 20+ years.
Lost it to who or what? What other feed syndication protocols are in widespread use? RSS is everywhere, and I don't see anything else comparable.
Discord. Reddit. Hacker News. WhatsApp. iMessage. Ex-Twitter. Instagram.
Reddit and HN fully support RSS.
Discord, WhatsApp, and iMessage are all messaging applications that aren't directly related to the use case for RSS.
That leaves Twitter and Instagram as the two major sites for which RSS would be applicable, but which don't natively offer RSS feeds. And a cursory web search reveals the large number of solutions people have come up with for subscribing to content from Twitter and Instagram via RSS, indicating that there's significant demand for it.
It's also worth noting that with Twitter in decline, the main competitors gaining traction, BlueSky and Mastodon, do both natively offer RSS feeds.
On top of that, the entire podcasting ecosystem is fundamentally based on RSS, and it's still the primary mechanism for syndicating blog content.
So RSS is not just alive and well, it's thriving.
It’s still alive and making strong steps towards a comeback in recent years.
This is especially timely, as I'm currently building a service that lets you receive your RSS feed as a physical newspaper.
Many times this sort of meta information reveals much more than expected
Many moons ago I tried out a service [0] that did this with Pocket articles (although I used to send to Pocket via RSS). It was pretty good! It didn't last long though.
I suspect maybe it's easier now to nail the layout if AI can read the content before it goes to print.
[0] https://www.bfoliver.com/2014/paperlater
Thanks for the heads up about paperlater!
AI is indeed a crucial part in solving the two most difficult challenges -- typesetting and curation, although we'll probably do things that don't scale for a little while before fully automating.
I sort of love this, but immediately wonder about curation.
My feeds are pretty unpredictable - sometimes I have 40 new articles in a day, sometimes just a few. The cheapness of digital consumption and interface makes it viable for me to skim titles and read, defer, or dismiss at my judgement. I don't want the entire feed printed out - not viable.
But if some SaaS is curating my feeds for me, I fear it'll turn into another algorithmized something optimizing for what exactly? At least the first-pass filter is explicitly set by me - feeds I subscribe to.
Curious to hear your thoughts on it, and wishing you luck.
Yeah, I get about 300 new items each day in my feed... of which on average about 1% are worth reading in full. There is a lot of duplication as well: many sites will cover a new gadget announcement, but I only need to read one of them to get the full scoop. Printing this would be overwhelming, and many of those sites are summaries of "source documents" (papers, release notes, etc.) that I want to jump to.
I am sure people use RSS in many different ways though, it just doesn't seem useful to me.
You got it exactly right, curation and typesetting are the most challenging aspects of it. Experimenting with different solutions...
Maybe you first get the summary on your phone and you decide what to be printed?
I've had this same idea! Of course, it remains an idea never taken out of the garage. Are you delivering broadsheet, or formatting a printable file for users to print at home?
Typesetting is a challenge so broadsheet vs tabloid is undetermined, but whatever it will be it will be delivered to the door. The newspaper paper is a crucial part, I believe.
I have had this idea pitched to me many times over the years, with requests to build a simple prototype practically forced into my dev queue .. but I always resist it.
The last time someone tried to convince me this was a good idea was just after the iPhone was announced, and before everyone and their monkey had a supercomputer in their pocket. It seemed like a good idea at the time, so we almost started - but my advice to the punter then was "let's see what the mobile phone industry looks like next year" .. well, that put a pin in it.
Nowadays, I'm not so sure I'd be so willing to do this - again, because it requires the user to do the printing - but if you were to, say, make this into a vending machine product, which users can walk up to in the street and walk away with a custom 'zine full of their own interests, you might be onto something.
Here in Europe we have a lot of old telephone booths converted into mini neighborhood free libraries. I've often wondered whether it would make sense to put a public printer in those libraries and let people print things .. seems like this would be a revolutionary new product to make, with printable broadsheets based on a custom RSS, an obvious killer app .. assuming someone can be found to maintain the printers.
(Off to find thermal paper for my ClockworkPi, which I always wanted to turn into a custom RSS printer in the toilet...)
That's a damn good idea. I could have used something like that for my elderly parents: something that came by snail mail to notify them of sports events and whatnot.
There are already some similar projects that use a thermal printer to achieve this.
This sounds interesting. Do you have anything to show yet?
Not yet, but we'll need beta testers. If you're interested and in a large metro area please reach out to ofek [at] nestful [dot] app mentioning said metro.
Even being aware that such a thing as "RSS" exists nowadays, implies a pretty high level of technical sophistication. Why would such users go out of their way to use up print stock, wait for it to be delivered, incur the energy/fuel costs of such delivery, etc. instead of reading it on their screen?
I’ve thought of this too (I worked in book sales, so the Espresso printers were around for print-on-demand books).
Recently I’ve been living in a cottage town and thought of this idea again... rather than reading on phones or tablets, people could read printed books with their favourite articles or blogs. But I think the actual distribution system would be the killer; unless it's at a big resort, the transportation will kill the idea.
Let me blow your mind: Betamax was not better quality than VHS. There are many things that can explain why people believed that one was better than the other.
People confused Betamax with Betacam, Sony’s professional grade recording medium, which is absolutely better quality.
People conflated VHS’s long-play modes (which slow the tape for even longer recording at the expense of quality) with its standard quality. Those modes of course made recordings look terrible. Betamax did not initially have this capability.
People listened to Sony’s own marketing. When they couldn’t compete on features, they banked on their reputation.
How do you quantify quality?
"When Betamax was introduced in Japan and the United States in 1975, its Beta I speed of 1.57 inches per second (ips) offered a higher horizontal resolution (approximately 250 lines vs 240 lines horizontal NTSC), lower video noise, and less luma/chroma crosstalk than VHS, and was later marketed as providing pictures superior to VHS's playback. However, the introduction of Beta II speed, 0.79 ips (two-hour mode), to compete with VHS's two-hour Standard Play mode (1.31 ips) reduced Betamax's horizontal resolution to 240 lines.[7]" https://en.wikipedia.org/wiki/Videotape_format_war#Picture_q...
In tests done by Technology Connections, the difference was so small as to be inconsequential. It was technically better at its slowest speed, but you could barely perceive the difference and more importantly Sony disabled the feature in the vast majority of machines sold. People wanted more than 60 minutes out of one tape. They wanted 2 hours.
https://www.youtube.com/watch?v=_oJs8-I9WtA
by measuring signal fidelity?
Looking at it. Which is what really matters.
If you want to collect obsolete formats and you have a TV with analog inputs VHS is probably your best thing to get into. This place
https://mastodon.social/@UP8/114286077399818803
sells VHS decks for $12, and you can get pretty good movies for $2. Contrast that with compact cassette decks, which start at twice that and have a good chance of being non-functional. That place has the complete works of Barbra Streisand, but if you want music that anybody would actually want on cassette, the sky is the limit for collectables.
My impression is that the quality of VHS isn't terrible. The video is worse than DVD of course but a lot of DVDs have NERFed soundtracks because they mixed them assuming you're going to play their 5.1 mix on a 2-channel system. Any deck you get now is going to support VHS Hi-Fi and if you have a 5.1 system with some kind of Dolby Pro Logic the soundtrack of a good VHS can be better than the soundtrack of an average DVD. (Blu-Ray often has better sound not because the technology is better but because the 5.1 soundtrack is more likely to really be a 5.1 soundtrack)
There are a few more things I didn't like about DVD. I don't like the blocky artifacts that you often see in the background (doors, bookcases, etc.); some of the earlier scenes in The Matrix are particularly bad. Fire/explosions also render very poorly.
Beyond this, there's the practice of baking a 16:9 movie into the 4:3 format, losing significant fidelity. Batman Begins was nearly unwatchable.
This of course doesn't get into the sound quality/mixing issues you mention... I wish they'd had something closer to H.265 at the time, as I don't mind a blurry background nearly as much as blotchy/blocky artifacts at similar or smaller file sizes. A 2 GB H.265 movie from Blu-ray looks dramatically better than a 4+ GB DVD movie.
> This place sells VHS decks for $12
> Any deck you get now is going to support VHS Hi-Fi
When you say "VHS deck", do you mean something other than a VCR?
Sony charging exorbitant licensing fees to manufacturers of Betamax equipment also didn't help, a lesson it took Sony a few more decades and proprietary formats to finally learn.
The real beta killer feature was that VHS extended recording mode could fit an entire NFL game on a single tape.
Betamax's "Standard" playback was better than VHS's "standard" playback... the issue was VHS "standard" could get something like 2 hours to a typical tape and BetaMax was like half an hour. For actual content, BetaMax tapes were recorded in an extended play format, while most VHS tapes were in Standard. This dramatically reduced BetaMax quality to be comparable or worse.
https://www.youtube.com/watch?v=FyKRubB5N60
We started with a Betamax player. I think one underappreciated reason for VHS's win was that you could put a movie on one VHS tape, whereas the Betamax required two (at least at the time it mattered). And in an era of movie rental stores, that made a difference. Both in terms of logistics, but also in terms of the consumer having to load a new tape halfway through a movie.
A classic case of completely ignoring UX. UX beats technical merits, every single time.
Porn went VHS and later on you could fit a whole movie on one cassette.
That was it
This is discussed and dismissed in the first paragraph of the article.
I'm not entirely sure the blog is totally correct on this part. I've always been fascinated by censorship (especially when it comes to movies/entertainment) and have seen/read a bunch of stuff about VHS in the '80s. Here in the UK we had a term, "video nasties", for a bunch of horror films that were banned (Evil Dead being a very prominent example). Anyway, the general consensus I've come away with after watching/reading all the documentaries about that stuff is that the porn industry used VHS (and VHS ultimately won the format war) because the prudish Japanese execs at Sony would not allow 'smut' on their precious new format; if the industry could not license that format, it had to use VHS. The porn industry didn't choose VHS over Betamax. VHS was the only option available to it.
Yep, it was Sony policing the content on their media - a story as old as time.
It's (a part of) what killed MiniDisc too. It was _the_ coolest format ever invented, but Sony gatekept it so hard nobody used it.
Nothing more Cyberdeck-y than using a MD to transfer data :)
https://www.youtube.com/watch?v=hGVVAQVdEOs => (false, true)
They were both terrible quality. The thing with VHS is playtime. One movie could fit onto one tape.
The only real advantage VHS had was that JVC broadly licensed the tech so anyone could manufacture devices and/or tapes while Sony heavily restricted Beta.
Laserdisc also had that annoyance; max duration about an hour per side.
I, like many others on here, use RSS every day. In Thunderbird I have a whole bunch of feeds I subscribe to, one of which is this very website - Hacker News. I even made my own HackerNews extension in Thunderbird to make it even easier/quicker to open the links from the feed. RSS is great, I check them all throughout the day as I do my emails, all in the same app.
Small self-promotion: https://github.com/Olshansk/rss-feeds
As far as I can tell, it's become the de facto home for Anthropic-related RSS feeds.
You'd think RSS was dead, but I released this earlier this year and it's at 100 stars.
I'm still hoping for AI agents to mature to the point where they can be universal scrapers for my RSS: a headless client scraping and interpreting websites in the background, burning excess CPU cycles and dead dinosaurs to replicate the universal RSS dream.
RSS died so many times. But as my google traffic is steadily declining with AI overviews, my RSS readership has exploded.
I’ve been using the internet since 1995 and I’ve never even heard of ICE. Crazy
I chatted with Dean Hachamovitch at a blogging conference as he was copying our (Firefox) tabbed browsing and RSS implementations. Soon after, an MS lawyer reached out to me to ask what they needed to do to re-use our RSS icon in the upcoming IE 7 release. We gave them the okay. I still have the jacket he gave me with "Longhorn loves RSS" on it.
1 – I had no idea that Buttondown had an active blog.
2 – It feels like RSS is one of those topics where the same old observations and opinions get raised, but no new ground is ever tread for or against it.
I’m not sure whether these projects address my wish for something new here, but I think they’re cool.
* https://matklad.github.io/2025/06/26/rssssr.html
* https://github.com/rsdoiel/antenna
I'd like to see a "new" RSS standard based around newline-delimited JSON, where the summary text is a minor extension of GFM (to support left/right/spread images and minimal formatting, basically matching medium.com's options). This would allow a common reader to render everything to the user's own liking (colors, font, etc.).
Beyond this, maybe a framework to show a single header ad on the reader giving the revenue credit and money to the original content site.
The reason for newline-separated JSON is simply that you can do a partial content download in the reader... the most recent 100 KB or 2 MB or whatever... the most recent entries are on top, which lets a site publish more than just the latest items without forcing you to grab everything. Or maybe just standardize a since=(iso-style-datetime) or last=## (number of articles) parameter.
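To make the idea concrete, here's a hypothetical sketch (every field name is invented): each line is one self-contained entry, newest first, so a reader can fetch just the head of the file and drop a possibly truncated final line.

    {"id": "https://example.com/p/3", "published": "2025-05-01T12:00:00Z", "title": "Newest post", "summary_gfm": "A **short** teaser", "url": "https://example.com/p/3"}
    {"id": "https://example.com/p/2", "published": "2025-04-20T09:30:00Z", "title": "Older post", "summary_gfm": "Another teaser", "url": "https://example.com/p/2"}

A client-side partial fetch might then look like this in Python:

    import json
    import requests

    resp = requests.get(
        "https://example.com/feed.ndjson",
        headers={"Range": "bytes=0-102399"},  # just the newest ~100 KB
        timeout=30,
    )
    lines = resp.text.splitlines()
    if resp.status_code == 206 and lines:
        lines = lines[:-1]  # a ranged read may cut the last record mid-line
    entries = [json.loads(line) for line in lines if line.strip()]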
Just a couple loose thoughts on this.
I still think there is a future for web publishing - from indie to corporate - if people stop feeding the algorithm machine with both sides of the supply and demand market, and move it elsewhere.
People found the web more boring, because it became more boring.
They found the algorithm more interesting, because it allowed them to see what was going on with people they barely knew (from former school mates they'd lost touch with to celebrities without press filtering), and that was compelling.
But there's a next phase available to us, which is to make the web more interesting, entertaining and compelling again.
I love that b3ta.com still exists. I love that metafilter.com is moving on. I think it's great that web comics I love still publish to RSS.
I just think more of us need to provide more demand, and more people will wake up to supply, and the flywheel will start to turn.
RSS beat ICE, and it can beat Meta and X if people want it to, albeit for different reasons.
Then google killed it, they made a great product, Google Reader, then killed it, and then after that huge amounts of RSS feeds just faded away.
Ironically, my Microsoft feeds are pretty active, xkcd is still there, and The Daily WTF is still going strong... but a lot of my feeds are just dead.
The author seems to live in a bubble where people are aware of RSS feeds. This article is the first I'd even heard of ICE in the first place. While multiple companies are listed as being behind ICE, no examples are given of websites that actually provided a feed for it.
Meanwhile, RSS is barely relevant today. For decades (Youtube turned 20 this year), people have had access to feeds curated by "the algorithm" operated by a commercial interest (hoping to maximize the amount of ads you look at); and most people seem to prefer it that way, if they're even aware of alternatives.
Then Microsoft took out revenge by adding the worst RSS integration ever to Windows/Outlook.
Interesting, I’d never heard of that ICE. Seems that it could be considered a very very early idea in line with ActivityPub, which I also don’t really know much about.
I think, as someone who has an RSS feed on my blog, that RSS is a total mess and Atom was probably the better choice.
Maybe even some modern JSON based format would be OK, but maybe that’s what ActivityPub is?
Anyway, after dealing with the mess of images and inline HTML with CDATA in RSS, I have complete fatigue of the whole endeavour.
> Maybe even some modern JSON based format would be OK
That’s what JSON Feed is. It’s supported by several RSS readers.
https://www.jsonfeed.org
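For a sense of its shape, a minimal JSON Feed looks roughly like this (the title and URLs are placeholders):

    {
      "version": "https://jsonfeed.org/version/1.1",
      "title": "Example Blog",
      "home_page_url": "https://example.com/",
      "feed_url": "https://example.com/feed.json",
      "items": [
        {
          "id": "https://example.com/posts/1",
          "url": "https://example.com/posts/1",
          "title": "Hello",
          "content_html": "<p>Hi there.</p>",
          "date_published": "2025-01-01T00:00:00Z"
        }
      ]
    }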
> but maybe that’s what ActivityPub is?
No, that’s for social networking.
https://en.wikipedia.org/wiki/ActivityPub
RSS was so badly designed that early versions weren't actually valid XML. I never understood why anyone used it after Atom was created.
Because people care about publishing or getting updates, not about the thing delivering them being valid XML or not.
RSS works. Atom splitting the standard into two probably did more harm than good. In the end it doesn't matter since every reader supports both and both do the job well.
>All RSS had to do to weather ICE, Twitter, AI, and whatever comes next
RSS did not weather Twitter. Social media is huge compared to RSS. It turned out that singular recommendation feeds are able to push URLs around better than needing every site to build in feeds themselves and then still requiring someone to turn those feeds into a singular feed for the user.
I think there are a couple of things here.
First, RSS has a bit more friction. Smashing the follow button on Twitter et al. is faster than adding the feed to your RSS reader of choice, unless your OS supports a default RSS app.
Second, discoverability. Just like with any distributed system vs monolithic platform, you need to find what to read yourself. For some niches this works well. If you are a software developer/hacker, you are more familiar with blogs in your area of interest. But if you have a wide range of interests you’d need to find the blogs yourself and hope their RSS feed is well formatted.
Third, the algorithm. A monolithic platform can do more to try to mix in new content based on your interests and intelligently mix up the content from sources you follow. This is of course controversial because feed algorithms can also try to cram bullshit into your feed or hide important stuff from you or create an echo chamber. But in the best case scenario they can also expose you to new sources of content you wouldn’t have found otherwise. An RSS reader would mean it is up to you to do this discovery which is more friction.
And ultimately content creators realized that they get more eyeballs on their stuff by using platforms like Facebook, Medium, Instagram, Twitter, than on blogs especially since blogs tend to be then repackaged by blog spam bots, Google’s AMP, and now LLMs.
So IMO RSS is just too manual and requires too much work. And of course since you can’t effectively advertise through it there is less incentive for creators and platforms to support it.
BlueSky and Mastodon both support RSS feeds. The loss from Google Reader dying was huge, more so than Twitter, but it’s probably balanced by the growth in Podcasts.
RSS feels like a cable. Cables won! Because you need them to power your devices and pipe your home internet. Cables lost! Because of 5G and WiFi. Maybe cables dont care, they just do their job.
So did Twitter pre-Elon. I moved a number of "public personalities" with high-volume feeds from my follow list to my RSS reader. I liked what Merlin Mann, or Parker Molloy, or John Green had to say, but I wasn't going to interact with them, and their loquaciousness made it hard to keep up with people I followed there that I actually knew and interacted with.
Then I remembered that Twitter was once referred to as "micro blogging," so I put those folks in my blog list on Feedbin, and was happy again.
I do miss the glory days of Twitter, tbh.
Most people consume those services via the app or website and not via the RSS feed.
I don't think that market is zero-sum, so the question is not about who "won", it's whether any player lost. Despite Twitter being big, RSS is still widely used and, perhaps more importantly, widely supported and thus usable. That counts as weathering it in my book.
(In contrast, ICE did not weather RSS.)
I was going to say, RSS is not as big as I remember it being back in the late 2000s. I remember people having RSS clients, myself included. Now I can't remember the last time I ever used one. Where RSS is most prominent I guess is podcast feeds which were based on RSS to my understanding.
OTOH, I can't imagine not using an RSS reader. I'm sitting here with Liferea on my desktop connected to my TT-RSS server, which I use to manage pretty much everything I subscribe to: blogs, podcasts, YouTube channels, webcomics, subreddits, and several aggregators including HN. Having to access all of those separately via their own websites sounds like a nightmare.
I read HN top submissions through RSS \o/ I have a lot of other feeds too in my reader. I don’t think I could function without it, newswise.
> which were based on RSS to my understanding.
They still are in most cases.
A podcast, by definition, is an RSS feed -- if it doesn't have RSS, it's not a podcast.
Back when Twitter was less controversial, I remember tons of techie folks gleefully saying that they didn't bother with RSS any more because Twitter was better.
And then Google and Facebook killed RSS.
RSS is alive and well. It’s rare that I find an interesting website where RSS makes sense and it doesn’t exist. Even if they don’t advertise it, popping the website’s address into a feed reader tends to be enough to find it. Even Mastodon and Bluesky profiles have RSS feeds.
Mozilla decided to remove its fantastic live bookmarks feature, which seamlessly integrated RSS within bookmarks, in 2018 with Firefox 64. Someone then made an extension, which was ported to Chromium and then back again to Firefox because the original was abandoned.
A dedicated extension is needed to get that feature back. Chrome needs one as well, as does Edge; only Vivaldi and Opera come with built-in feed readers. There are of course standalone applications, but those seem to be a niche nowadays.
I found an old RSSOwl OPML file from 2014 last week and decided to see what's still up. I found some RSS readers on Flathub, but sadly the majority of what I was visiting back then has died.
Ofc I did not mean they literally killed it. But Google improved and pushed Reader to the point where it was the ubiquitous client, then terminated it, leaving everyone at the mercy of poor clones or paid options. This happened at precisely the time social media platforms were gaining traction, and while some kept up RSS, the majority of personal publishing moved to Facebook and later to other closed platforms.
I still use RSS daily for keeping up with websites, but for 99% of internet users it is now either tech embedded invisibly in an application or just something they've never heard of.
So in that sense, the poster child of "Web 2.0" was taken out back and kneecapped.
One can say that, but can you really kill RSS when it is a protocol? It is an easier way to keep track of all the updates from different sites.
Bluesky is basically RSS on JSON.
My having to prune my subscriptions says otherwise.
I'm sure there's still some people using a Ford Model T to drive around as well.
Whatever ICE was, I'm sure it wasn't thinking about RSS. RSS remains a trivial, unhierarchical link dump.
A lot of discussion around RSS revolves around the format for the data/metadata (e.g. the Atom feud) but the real problem with it is this:
To consume an RSS feed you poll it. There are two polling speeds: too fast and too slow, and it's possible to be both at the same time.
Note the struggles of this Karen to turn RSS from a simple stateless protocol into a complex stateful one; she'll ban you if you ever reset your cache and rerun your fetcher because your cache got corrupted, or because you suspect it might have been corrupted.
http://rachelbythebay.com/w/2022/03/07/get/
You really want to have a stream of feed items and to be able to: (1) replay the whole stream all the way from or to the beginning and (2) query "what got added after time t?" and just get that. ActivityPub accomplishes this but people don't really like it. For Dave Winer it is all blub but even if he doesn't believe in the Fedi, he's on it.
I really like
https://superfeedr.com/
because it does all the polling for you and hits your webhook whenever a new feed item appears. My webhook is about 15 lines of Python running as a Lambda function that posts items to an SQS queue and my YOShInOn RSS reader just drains the queue at its convenience. The pricing at 10 cents/feed/month is a bargain for high volume feeds like MDPI, arXiv, and The Guardian [1] but unfortunately I can't really afford to subscribe to 2000 little blogs that post maybe once a week at that rate. I wish there were more Planets.
https://en.wikipedia.org/wiki/Planet_(software)
[1] AWS costs would be trivial in comparison even if it got out of the free tier
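For anyone curious, a webhook like that really can stay tiny. A guess at its shape (the queue URL env var and the payload's "items" field are my assumptions, not Superfeedr's documented schema):

    import json
    import os

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = os.environ["FEED_QUEUE_URL"]  # hypothetical configuration

    def handler(event, context):
        # The push service POSTs new items as JSON; forward each one
        # to SQS for the reader to drain at its convenience.
        body = json.loads(event.get("body") or "{}")
        for item in body.get("items", []):
            sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(item))
        return {"statusCode": 200, "body": "ok"}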
It’s a pity that this is the bottom-most comment and equally it’s a shame that the slur “Karen” made its way into the White lexicon spoiling an otherwise informative remark.
Correct me if I’m wrong, but is Winer’s somewhat recent effort with FeedLandⁱ any different from Planet?
ⁱ: https://feedland.org/?username=scripting
(1) Some people read from the bottom up
(2) I really am mad at Rachel for this and that's from someone who's been writing webcrawlers [1] since 1998, been an RSS innovator, and been responsible for complex systems when they fail.
(3) Maybe I am missing it, but I don't see an actual feed at that URL; I see only an OPML file. Dave is really gay [2] for OPML files, but I'm not, because I still have to work to fetch all the items. Yet visually the OPML file and blogroll look like a planet, and you're not the first person to point to Dave's blogroll as a solution, as opposed to the problem I see it as.
(4) Looking at the head of the list, I think "Daily Kos" and "404 Media" suck, but I already subscribe to many of the others, like "Ars Technica". Looking at the tail of the list, I see there are gems I'm not getting. If those things were aggregated topically, it would please both me and Rachel.
[1] that don't crash your server
[2] in a good sense!
> If those things were getting aggregated topically it would please both me and Rachel.
I think I catch what you’re getting at and it reminds me of what seems to be a primary gripe with RSS—compromising an otherwise decent spec as far as I can tell—discoverability and curation.
I can’t grasp the more technical issues surrounding polling just yet so I can only intuitively get how it corresponds with the other issues. In spite of this, and leaning again on my intuition, I didn’t get a good feeling from Rachel’s blog post when I first came across it either.
And yeah I’m not sure how FeedLand works under the hood, I just matched Wikipedia's description of a planet with what I knew FeedLand looks like on the frontend.
I also know there have been some "folksonomic" efforts at curating lesser-known feeds, but to be honest I'm not fond of what those networks of folks appear to be into. Compare your distaste for Daily Kos and 404 Media (which I share) to the vibe I'm trying to avoid on that front.
> I really am mad at Rachel
Don't know what you're so angry with Rachel about. I provide feeds for readers, and many of them are way too greedy and frequently hammer websites, often unnecessarily. Mix in all the rampant AI bots and you've got a recipe for an extremely expensive server bill.
Getting angry at someone who's trying to keep their server costs down to provide you something for free is kind of weird.
It's a 10,000 word rant at least but...
- I've looked at the abyss of bankruptcy from server bills. I actually think it's a hell of a lot worse than Rachel does, I thought that 15 years ago, and I've suffered worse
- The whole discussion around RSS has been naive in every respect from the very beginning; for instance, Dave Winer thinking we care about him publishing a list of the tunes he listens to, back when you couldn't actually listen to them (I'll grant that in the age of Apple Music, Spotify, and YouTube, such things may have caught up with him). There are never any systems thinkers, just people who see it all through a keyhole and can't imagine at all that anyone else sees it another way.
- To use a Star Wars analogy, Google is the evil empire that blew up our planet with a Death Star 10 years ago, and now we're living on the wreckage in an asteroid belt. I see the AI upstarts as the Rebel Alliance that, at the very least, reset the enshittification cycle back to the beginning and created some badly needed competition. Opponents of AI crawlers are brainwashed/living in the Matrix, de facto defending Google's monopoly and slamming the door on the exits from enshittification that AI agents offer (e.g. they look at the content and drop out the ads!). They think they're the Rebel Alliance but are really the stormtroopers suppressing the resistance until the Death Star charges up for the next shot.
- Rachel's supposedly some high-powered systems programmer or sysadmin or something, but she is just as naive as Winer. We're supposed to "just" use a cache. Funny, the reason the web won out over every other communication protocol is that you can "just" use curl. curl by its nature can't support caching, because it's a simple program that runs in one process; if you wanted a cache you'd have to have a complex locked data structure, which would force a bunch of trade-offs... like: I am downloading something over my slow ADSL connection that takes two hours, but I also need to clear my cache, so do I force the download to abort, or hang up the script that needs the cache cleared for two hours? curl is "cheap and cheerful" because you just don't have to deal with all the wacky problems that "clear the cache" or "disable the cache" clears up like a can of Ubik. But in the age of "vibe coding", solutions that almost work are all the rage, except when you finally realize they didn't actually work, you clear your cache and rerun, and BAM, you get banned from Rachel's blog because you hit it twice in 24 hours.
Web browsers are some of the most complicated software in existence, ahead of compilers (they contain compilers) and system runtimes (they are a system runtime), right up there with operating systems (minus having to know things that aren't in the datasheet to write device drivers, at least). For the first 15 years you could not trust the cache if you were writing web applications; somewhere around the 2010s I finally realized you could take it for granted that the cache works right. I guess implementations and standards improved over time, but all this complexity is part of the reason why it is Google's world and we all live in it, why there are just two big techs that can make a browser engine, plus one unaccountable and out-of-touch foundation.
So I wish Rachel would just find an answer to the cost problems (Cloudflare R2?), or give up on publishing RSS, or advocate ActivityPub, rather than assume we care what she says enough to follow her rules for her blog, without seriously confronting what a system-wide solution to the problems RSS tries to solve (and the problems it poses) would look like.
This is really not much of an issue if both sides implement HTTP caching (If-Modified-Since or ETags). Atom also adds pagination, which allows you to keep all old items accessible to supporting readers while cutting the main feed down to just the last entry. The little bandwidth a well-managed feed takes is really not worth giving up the ability to host the feed statically, which the overly complex ActivityPub can't do.
You are talking about this sort of thing?
https://datatracker.ietf.org/doc/html/rfc5005
HTTP might be better in 2025 than it used to be, but historically the cache is as much a problem as it is a solution. That is, when you have heisenbugs, the answers are frequently "clear the cache" [1] and "don't use the cache" [2], and it's often a better strategy to "rip and archive the whole site right now and come back in six months to make a fresh rip" if you're working at a large scale and can tolerate being up to six months out of date.
In general, database-backed sites struggle to implement If-Modified-Since, since a fully correct implementation has to traverse the graph of all database objects that are looked up in the request, which costs about as much as… the request. Your cache needs a cache. You can make it work at the system level if you have a modified-date cache and always invalidate it properly, but if you are doing that, you might as well materialize a static website fronting your site, the way the WordPress supercache works; then you'll find your site crashing a lot less!
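A minimal sketch of that modified-date-cache idea, assuming Flask and a feed-wide timestamp that the publishing path bumps on every edit (everything here is illustrative, not a drop-in design):

    from email.utils import format_datetime, parsedate_to_datetime

    from flask import Flask, Response, request

    app = Flask(__name__)
    feed_last_modified = None  # timezone-aware datetime, bumped on every edit

    def render_feed():
        return "<feed/>"  # stand-in for the real renderer that hits the database

    @app.route("/feed.xml")
    def feed():
        ims = request.headers.get("If-Modified-Since")
        if ims and feed_last_modified:
            try:
                if parsedate_to_datetime(ims) >= feed_last_modified:
                    return Response(status=304)  # cheap: no database work at all
            except (TypeError, ValueError):
                pass  # malformed header; fall through to a full response
        resp = Response(render_feed(), mimetype="application/atom+xml")
        if feed_last_modified:
            resp.headers["Last-Modified"] = format_datetime(feed_last_modified, usegmt=True)
        return resp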
I’ll admit ActivityPub is complex, but HTTP caching is complex in a particularly poisonous way, in that there are so many alternate paths. An ActivityPub system (client + server) could be 100% correct, but an HTTP-based system might not have a clear line around it and might never be quite right. A stateless system could run for 10 years without trouble; with a stateful system, it might take you 10 years to realize it was feeding you corrupted data.
[1] fix it... for now
[2] ... and they're no longer part of your life
> In general database-backed sites struggle to implement if-modified-since since a fully correct implementation has to traverse the graph of all database objects that are looked up in the request which costs about as much as… the request.
That's only true if your implementation favors editing performance over GET request performance ... in which case you get what you ordered. If your feed requires any database connections then that's a choice you made.
I disagree that HTTP caching is complex. The basic primitives are:
- An indication until when the returned document can be used without checking for updates.
- A way to check if a cached document is still valid (based on either modification time or id).
This provides everything you need to implement an efficient feed reader.
Do you know (or have an idea about public discussions indicating) why ActivityPub never ended up being used for feeds? AP seems to mostly be Mastodon and Lemmy, but it obviously has a bunch of fixes for problems inherent in RSS, Atom, and OPML.
I’ve seen a lot of Dave Winer saying his blub is good enough and, abstractly, that ActivityPub is too complex. There’s Pixelfed too. Practically, if I want to have an ActivityPub feed for my blog, I might host it on Mastodon, and if the character limit is too little you can use a server like
https://writefreely.org/
If anything characterizes the RSS community, it is a lack of imagination and a rejection of the last 25 years of work in relevance, ML, and UX. RSS readers still ship with the same interfaces that failed 25 years ago.
That said, the people who are still using it feel satisfied, and other than Rachel there aren't a lot of people who will try to stop you if you do try something ambitious. You just gotta poll poll poll and poll poll poll some more, or pay somebody to poll poll poll for you.
Her name is actually Rachel, not Karen.
I know. Perhaps I should have said "one of those misanthropic online personalities who kills 1,000 open source contributions you never knew you would have had, because of their public bad attitude". The fact that they have supporters is why Facebook wins and open systems lose.
Rachel is absolutely not like this in my experience. How rude!