Google is killing the open web

(wok.oblomov.eu)

184 points | by thm 2 days ago ago

182 comments

  • ants_everywhere 2 days ago ago

    > in 2023, Google renames their chatbot from Bard to [Gemini][gemini] thereby completely eclipsing the 4-year-old independent protocol by the same name; this is possibly coincidental, which would make it the only unintentional attack on the open web by Google in the last 15 or so years —and at this point even that is doubtful;

    So the theory is that Google chose the name of its AI -- easily one of the hardest and most revenue-impacting naming decisions it's made in years -- in order to create a name collision with a protocol nobody's heard of that's trying to revive GOPHER?

    This is so obviously false that you have to re-read the rest of the article with the knowledge that the author is misunderstanding what they're seeing.

    Much of what the author describes is increasing security and not wanting to work with XML.

    • a day ago ago
      [deleted]
    • morkalork 2 days ago ago

      Yahoo Gemini Ads team in shambles right now when nobody remembers they even existed.

    • TheCraiggers 2 days ago ago

      I suppose the definition of intentional is a bit murky here.

      Yeah, you're right that Google probably didn't look at a list of open web technologies they disagree with and pick one for their new tool. I guess that's what I'd call "malicious intention".

      I'm sure that, however the name was picked, Google's lawyers looked for prior uses of the name. I'm sure it came up, and Google shrugged its shoulders in indifference. Maybe someone brought up the fact this would hurt some open standard or whatever, but nobody in power cared. Is this the same kind of malicious? Probably not, but it still shows that Google doesn't care about the open web and the collateral damage they cause.

      • ants_everywhere 2 days ago ago

        surely the lawyers found far more name clashes with Bard than Gemini

        https://books.google.com/ngrams/graph?content=gemini%2Cbard&...

        • TheCraiggers 2 days ago ago

          Ok, but were they open web standards? That's the topic at hand here. Regardless, that just shows Google doesn't care.

          • drysart a day ago ago

            Why are 'open web standards' specifically deserving of greater name protection than literally everything else?

            • queenkjuul a day ago ago

              Because Google participates in creating web standards

      • jaredklewis 2 days ago ago

        Have you ever tried to name something? Anything that isn’t an outright racial slur is in use by something. Usually dozens or hundreds of things. Some protocol that almost no one has even heard of is going to be very low on the list of conflicts to avoid. This has nothing to do with the open web.

      • criddell 2 days ago ago

        Gemini? You mean the name used by Gemini Data Inc 8 years before the Gemini protocol was launched?

        • Centigonal a day ago ago

          Nah, they're probably referring to the Gemini cryptocurrency exchange, which started in 2014.

    • owebmaster 2 days ago ago

      Did you at least read the excerpt you posted? It says the opposite of your conclusion; this is not even the worst interpretation of what was said, it's plain false.

  • isodev 2 days ago ago

    I love the little historical overview in the post. With more than 25 years of hindsight, the push against user-centred standards is so obvious. W3C is always better than whatever Kool-Aid-du-jour the big corps want you to drink, because (at the very least) someone actually thinks "how is this going to affect the people using it?" as opposed to Google/Apple's approach of "how is this going to affect our revenue?"

    • StopDisinfo910 2 days ago ago

      To be honest, in my recollection, in 2013 what the W3C was doing was actually seen as user hostile and HTML5 was seen as a good thing for users.

      Part of the community really hated XHTML and its strictness. I remember Mozilla being at the vanguard then rather than Google.

      I think the situation was and is a lot more messy and complicated than what the article presents but presenting it fully would make for a less compelling narrative.

      As is I don’t really buy it personally.

      • newyorkahh 2 days ago ago

        Hostile actions would be IE’s strategy for monopolizing the browser in the 90s, Google paying Apple and Mozilla to monopolize search starting in 2004, and killing off Reader in 2013.

        Taking over standards groups is a gray area with tradeoffs. It helped Google preserve monopoly in search but clearly devs and the web benefited as well.

        XHTML2 was panned because it was super strict without clear benefits. Keeping HTML backwards compatible is clearly a very good thing. I don’t fully understand the author’s passion for XSLT: it’s cumbersome and it wasn’t popular with devs.

        I agree with the headline and some aspects but XML is a bad hill to die on and much of the writing is hyperbolic and more than a little out of touch.

      • int_19h 2 days ago ago

        I've been there at the time, and the pushback against XHTML always struck me as disingenuous. XHTML was not at all difficult to write! The only real argument against it was that it wasn't always valid HTML, and browsers didn't want to support it specifically, so when people published XHTML pages it would sometimes break if the browser tried to interpret it as HTML. But they have broken HTML backwards compatibility so much worse many times since then...

      • queenkjuul a day ago ago

        I do agree reality was a lot more messy, but i also think it still paints a compelling case that Google in particular acted how it did (mostly) to shape the web to its own best interests.

        That it wasn't literally "Google railroaded WHATWG/W3C/everyone else to get what it wanted" doesn't mean Google didn't take advantage of the situation to kill open web standards to its own benefit. I imagine Mozilla, for instance, went along with as much as they did because Google accounted for most of their revenue.

      • Devasta 2 days ago ago

        > Part of the community really hated XHTML and its strictness.

        A big part of this is that people were concatenating XML together manually, to predictable disaster.

        Nowadays they use JSX and TypeScript, far more strict than XML ever was, and absolutely love it.

        • isodev 2 days ago ago

          > Nowadays they use JSX and TypeScript

          And we're already moving away from that, landing us into HTMX/hypermedia and other fancy tools which aren't really concerned with JSX. So things come and go, but standards stay to keep things working and options available for people with different constraints. It's not up to Google to be deciding all that just by themselves.

    • api 2 days ago ago

      Nobody pays for anything with user centric standards. If software were free to produce and services were free to run this would work, but it doesn’t. Software in particular is incredibly time consuming and expensive, especially if you want to make it usable.

      • wobfan 2 days ago ago

        > Nobody pays for anything with user centric standards.

        ??? Why do you think this?

        • api 2 days ago ago

          Do people buy chat apps? Web browsers? Web servers? Web content? Clients or servers for other open standards?

          No, which means you’ll never see them get the level of polish or investment that closed stuff gets. Because when it’s closed you can make people pay or monetize it with advertising.

          I’m not cheering for this. Don’t shoot the messenger. I’m pointing out why things are this way.

          A major problem is that while free software efforts can build working software, it often takes orders of magnitude more work to make software mere mortals can use. That kind of UI/UX polish is also the work programmers hate doing, so you have to pay them to do it. Therefore closed stuff always wins on UI/UX. That means it always takes the network effect. UX polish is the moat that free has never been able to cross.

          • newyorkahh 2 days ago ago

            You’re right, but browsers are free because their cost is a drop in the bucket compared to the profits a monopolized browser status quo provides: for Windows/Office in the 90s and for search/ads with Google. MS started it with free IE and Google improved upon their strategy.

          • queenkjuul a day ago ago

            If the "spy on users and sell the data" business model were illegal, you bet your ass people would pay for chat. People were paying per message to send SMS once upon a time!

          • JumpCrisscross 2 days ago ago

            > Do people buy chat apps? Web browsers? Web servers? Web content?

            Yes. (Slack. Orion. Since when were servers free?)

            The web basically fractures into people who watch ads and complain about paywalls and those who don’t.

            • scarface_74 2 days ago ago

              People don’t buy Slack. Corporations do. They also buy Teams…

              • JumpCrisscross 2 days ago ago

                > People don’t buy Slack. Corporations do.

                One, corporate cash is just as good as people cash. Two, people absolutely paid for WhatsApp before it was acquired. And three, I am a people and I personally pay for Microsoft 365 and on occasion have used Teams.

                • moritzwarhier 2 days ago ago

                  > people absolutely paid for WhatsApp before it was acquired

                  Wasn't that a one-time payment of 1$?

                  No, I wouldn't pay for WhatsApp.

                  • JumpCrisscross 2 days ago ago

                    > Wasn't that a one-time payment of 1$?

                    I think it was $1/year.

                    > I wouldn't pay for WhatsApp

                    Plenty wouldn’t have. There are ad and data-supported models for them.

                • scarface_74 2 days ago ago

                  B2B sales by definition is where the buyer is not the user. The software doesn’t have to be anything the end user wants or have a good user experience. In corporate sales, it often just has to be in the upper-right quadrant of Gartner’s Magic Quadrant.

                  They definitely weren’t bought by corporations because they care about open standards or great UX.

                  • JumpCrisscross 2 days ago ago

                    > weren’t bought by corporations because they care about open standards or great UX

                    OP said open products lose because they lack “UI/UX polish.”

                    • JustExAWS 2 days ago ago

                      And how many B2B apps have you used that have “polish”? Slack is okay. But at the end of the day, it’s another crappy Electron app.

                      • JumpCrisscross 2 days ago ago

                        > how many B2B apps have you used that have “polish”? Slack is okay. But at the end of the day, it’s another crappy Electron app

                        Sure. My point is polish isn’t a reason closed source sells and attracts investment. Folks will pay for terrible UX. (Including users.)

                        • aspenmayer a day ago ago

                          Closed source sells because open source devs don't know sales or marketing. In many cases, developers are the only users that the devs even acknowledge.

                          Just look at the successful/popular open source projects. There are nearly no paid open source apps, though most software is turning into software as a service.

                          Open source is built in such a way as to make outside investment very difficult to justify by most private investors. Why pay good money for something you already get for free? This is a flawed metaphor, because investors aren't purchasing anything, as investment isn't a transaction, but I think that's why we don't see more sales and investment in open source. It seems fundamentally ill-suited toward those aims and ends.

                          I think successful open source businesses are outliers, and as such are pretty interesting. The only recently founded one I can think of that does hardware is Flipper Zero. I'm sure there are others.

                          I'd be curious about who others think are the outliers in this reading, as those are folks whose work I'd love to hear about.

              • queenkjuul a day ago ago

                People buy Discord Nitro, though

            • api 2 days ago ago

              Slack is an example of a user-centric open protocol?

              Slack proves my point. It's closed and vertically integrated and people pay for it. Nobody paid for the open precursors to Slack so they stagnated.

              • 2 days ago ago
                [deleted]
      • piva00 2 days ago ago

        People definitely do pay for it when it's available. Even more to the point, at the core of this issue is that people would prefer alternatives that are open, where their data can easily be ported to a competitor's service if it's better, which directly affects the bottom line of companies that push against open standards.

        I think you got it clearly reversed in your mind...

        • api 2 days ago ago

          They prefer it but they don’t pay for it.

          • piva00 2 days ago ago

            They also don't pay for the non-standards stuff, so what's your point? Chrome, Facebook, Instagram. And for paid services like Apple's, there's not even an alternative following open standards.

            They don't pay because these companies offer no services that follow open standards, exactly because companies wouldn't be able to lock users into their solutions if open standards were commonly deployed and used...

            • aspenmayer a day ago ago

              It's kind of a chicken and egg problem. Let's say for the sake of the argument that you can check out all the data and metadata from all of the sites you want to. Now what? Where would you check it in?

              "Build it and they will come" kinda falls flat when there's no there there to, you know, build it. It's like advocating for building a highway to the middle of nowhere because in our mind the field of dreams is inside all of us, so the center of the universe would be ideal, but there are already things built there and we want folks to appreciate the game and our collective love of it, so we had to build it way out here. Open standards are one part of "building it," but not the whole of it, so it might be a bit premature to be asking where everybody is. You have to draw the rest of the owl.

              We're building sandcastles in the sky. What is the point of all these column inches if it doesn't lend itself to building the destinations that you wish to visit? The best defense is a good offense. Whining and complaining can help identify a problem and motivate others to share your view that the problem exists and is worth solving. Making new year's resolutions and telling folks about it isn't actually doing the work.

              Community organizing is like step 0. Now comes the actually hard part, being the change you want to see in the world.

  • sharpfuryz 2 days ago ago

    The article is about the intentional killing of XSLT/XML in the browser. I think it is evolutionary: devs switched to JSON, and AI agents don't care at all since they can handle anything; XML just lost naturally, like GOPHER.

    • isodev 2 days ago ago

      The problem is not XML vs. JSON. This is not about choosing the format to store a node app's configuration. This is about an entire corpus of standards, protocols that depend on this. The root problem for me is:

      1) Google doing whatever they want with matters that affect every single human on the planet.

      2) Google running a farce of a "public feedback" program where they don't actually listen, or in this case ask for feedback after the fact.

      3) Google not being truthful or introspective about the reasons for such a change, especially when standardized alternatives have existed for years.

      4) Honestly, so much of standard, interoperable "web tech" has been lost to Chrome's "web atrocities" and IE before that... you'd think we've learned the lesson to "never again" have a dominant browser engine in the hands of a "for profit" corp.

      • rapnie 2 days ago ago

        Yes, this is the real issue, and it is a pity so many comments delve into json vs. xml and not into the title stating that "google is killing the open web". A new stage of the web is forming where Big Tech AI isn't just chatbots but has matured to offer fully operational end-to-end services, all AI-operated and served, up to tailor-made domain-specific UI. Then the corporations, winners in their market, don't need the open web anymore to slurp data from. All open web data absorbed, fresh human creativity now flows in exclusively via these services, directly feeding the AI systems.

        • quietbritishjim 2 days ago ago

          There are a lot of comments focusing more on the specifics of XML and XSLT because that's what much of the article laboriously drones on about, despite its general title.

      • StopDisinfo910 2 days ago ago

        The narrative would be more compelling to me if Google hadn’t failed to impose their technology on the web so many times.

        NaCl? Mozilla won this one. Wasm is a continuation of asm.js.

        Dart? It now compiles to Wasm but has mostly failed to replace js while Typescript filled the niche.

        Sure, Google didn’t care much for XML. They had a proper replacement for communication and simple serialisation internally in protobuf, which they never actually tried to push for web use. Somehow JSON ended up becoming the standard.

        I personally don’t give much credit to the theory of Google as a mastermind patiently undermining the open web for years via the standards.

        Now if we talk about how they have been pushing Chrome through their other dominant products and how they have manipulated their own products to favour it, I will gladly agree that there is plenty to be said.

        • int_19h 2 days ago ago

          > NaCL? Mozilla won this one. Wasm is a continuation of asm.js.

          And yet the design of wasm is the way it is to a large extent because of V8 limitations and Google's pushback on having to do any substantial changes for the sake of a clean design.

        • kbelder a day ago ago

          AMP pages' miserable failure. There are a lot of Google failures.

    • diggan 2 days ago ago

      > XML just lost naturally, like GOPHER

      Lost? The format is literally everywhere and a few more places. Hard to say something lost when it's so deeply embedded all over the place. Sure, most developers today reach for JSON by default, but I don't think that means every other format "lost".

      Not sure why there is always such a focus on who is the "winner" and who is the "loser"; things can co-exist just fine.

      • sharpfuryz 2 days ago ago

        Do you use it daily in the browser?

        • jeroenhd 2 days ago ago

          Tons of APIs and applications work with XML. XSLT less so; that's more of a backend language.

        • aragilar 2 days ago ago

          Yes, RSS.

          • TiredOfLife 2 days ago ago

            Which browser? Firefox and Chrome have no support

            • aragilar 19 hours ago ago

              Which is why people are up in arms about XSLT, as you can provide previews of the feed via it.
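
              For anyone unfamiliar with the mechanism: a feed links a stylesheet with an xml-stylesheet processing instruction, and the browser renders the transformed result instead of raw XML. A minimal sketch (the stylesheet name feed.xsl is made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Browsers with XSLT support fetch feed.xsl and render its output
     instead of showing the raw feed markup. -->
<?xml-stylesheet type="text/xsl" href="feed.xsl"?>
<rss version="2.0">
  <channel>
    <title>Example feed</title>
    <item>
      <title>Hello world</title>
      <link>https://example.com/hello</link>
    </item>
  </channel>
</rss>
```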

        • MrVandemar 2 days ago ago

          Immaterial. Whether the answer is 'yes' or 'no' makes no actual difference: gopher still exists, is still a thing, is still successful. It feels like you're just trying to move the goal-posts, redefine what 'lose' means, and lure the poster into a "gotcha".

          • sharpfuryz 2 days ago ago

            It's not about a "gotcha." Browsers once supported the GOPHER protocol but dropped it around a decade ago. This serves as an analogy: if users don't use XSLT/XML daily, browsers may eventually drop support for XSLT, because supporting features costs money.

            • MrVandemar 19 hours ago ago

              That's not a great analogy. Firefox once supported RSS feeds as live bookmarks and dropped that, and not because people didn't use it; people did use it and bemoaned its loss for years afterwards.

    • mattlondon 2 days ago ago

      +1. I think XML "lost" some time ago. I really doubt anyone would choose to use it for anything new these days.

      I think, from my experience at least, that we keep getting these "component reuse" ideas coming around: "oh, you can use Company X's schema to validate your XML!", "oh, you can use Company X's custom web components in your web site!", etc. Yet it rarely, if ever, seems to get used. It very rarely feels like components/schemas/etc. can be reused outside of their intended original use cases, and when they can, they are either so trivially simple it's hardly worth the effort, or so verbose, cumbersome, and abstracted from trying to be all things to all people that they are a real pain to work with. (And for the avoidance of doubt, I don't mean things like Tailwind et al. here.)

      I'm not sure who keeps dreaming these things up with this "component reuse" mentality but I assume they are in "enterprise" realms where looking busy and selling consulting is more important than delivering working software that just uses JSON :)

      • NoboruWataya 2 days ago ago

        It may be that nobody would choose XML as the base for their new standard. But there are a ton of existing standards built around XML that are widely used and important today. RSS, GPX, XMPP, XBRL, XSLT, etc. These things aren't being replaced with JSON-based open standards. If they die, we will likely be left without any usable open standards in their respective niches.

        • roenxi 2 days ago ago

          Looking at the list, what actually jumps out at me is there is probably a gap in the world of standards for a JSON-based replacement to RSS. Looking it up someone came up with the idea of https://www.jsonfeed.org/ and hopefully it gains traction.

          In hindsight, it is hard to imagine a JSON-based RSS-style standard struggling to catch on. The first project for every aspiring JS developer would be adding a feed to their website.
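
          For reference, a JSON Feed document is deliberately minimal. A sketch of a one-item feed, following the jsonfeed.org 1.1 spec (the URLs are placeholders):

```json
{
  "version": "https://jsonfeed.org/version/1.1",
  "title": "Example blog",
  "home_page_url": "https://example.com/",
  "feed_url": "https://example.com/feed.json",
  "items": [
    {
      "id": "https://example.com/hello",
      "url": "https://example.com/hello",
      "content_text": "Hello world"
    }
  ]
}
```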

      • bryanrasmussen 2 days ago ago

        Probably nobody would choose it for anything new because the sweet spots for XML usage have all already been taken. That said, if someone were to say "hey, we need to redo some of these standards", they could of course find ways to make JSON work for some standards that are XML nowadays. But for a lot of them JSON would be the absolute worst choice, and if you were redoing them you would use XML to redo them.

        example formats that should not ever be JSON

        TEI (https://tei-c.org/), EAD (https://www.loc.gov/ead/), and DocBook (https://docbook.org/)

        are three obvious ones.

        basically anything that needs to combine structured and unstructured data and switch between the two at different parts of your tree are probably better represented as XML.
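
        To make the mixed-content point concrete, here is a small sketch using Python's standard-library ElementTree (the paragraph fragment is made up). Prose interleaves naturally with inline markup in XML; a tree-of-objects view has to scatter that prose across .text and .tail attributes, which is roughly what any JSON encoding would have to model explicitly:

```python
import xml.etree.ElementTree as ET

# A mixed-content paragraph: unstructured prose wrapping a structured element.
para = ET.fromstring('<p>See the <a href="https://example.com">spec</a> for details.</p>')

# The prose is split across .text (before the child) and .tail (after it).
print(repr(para.text))                    # 'See the '
link = para.find("a")
print(repr(link.text), repr(link.tail))   # 'spec' ' for details.'

# Reassembling the readable sentence means walking text and tails in order.
print("".join(para.itertext()))           # See the spec for details.
```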

        • ongy 2 days ago ago

          EAD does indeed look like a good example of why we shouldn't use XML.

          • bryanrasmussen 2 days ago ago

            hah hah yeah, these scoped content examples would be a joy to do in JSON

            https://www.loc.gov/ead/tglib1998/tlin125.html

            • ongy 2 days ago ago

              Yes. A sane schema that actually encapsulates the data would be a lot easier to read.

              Earlier I had only seen the mix of values in body and values in tags. With one even being a tag called "value".

              Thanks for showing more examples of XML being used to write unreadable messes.

              • bryanrasmussen 2 days ago ago

                you must find reading HTML a slog.

                • ongy 2 days ago ago

                  I do. That's why I have a browser render it to a format that makes sense for human consumption.

                  Granted, HTML actually makes sense in its XML-ish form (I don't remember if it's technically compliant), since it weaves formatting into semantically uninterrupted text.

                  If that's not the case, I don't see a real benefit to use XML over anything sane (not yaml... Binary formats depending on use case)

                  • bryanrasmussen a day ago ago

                    >I do. That's why I have a browser render it to a format that makes sense for human consumption.

                    I guess if that's the standard, then reading any data format is also a slog because, hey, most data and document formats get rendered as something for "human consumption". That said, a programmer often has to read the format without the rendering, so, your witty reply aside, I guess you must find that task a slog where HTML is concerned.

                    This is too bad, because most mixed content formats, like EAD, HTML, etc., are like that, and if you want humans to be able to write content such as a paragraph with a link inside it, you're going to write it as mixed content, because that works best based on millions of programmer and editor hours over decades, and JSON would be crap for it.

                    Is it super great? Nope, but it's the best way of writing document formats (highly technical, with a mix of structured and unstructured content) that we currently know of, in the same way that democracy is the worst form of politics except for all the others, and like multiple other things in the world that suck but are better than all the alternatives.

                    I didn't say EAD was great, I said it was better than JSON for what it needed to do, part of which is having humans write mixed content.

                    Believe me I have certainly seen people who have been JSON enthusiasts try to replicate mixed content type documents in JSON and it has always ended up looking at least as bad as any XML but without all the tooling to make it easier to write XML and with a tendency to brittleness because in doing mixed content in JSON you are going to have to do a lot of character escaping.

                    I'm going to end off here with the observation that I doubt you are actually acquainted with the workflows of editors, writers, publishing industries and the use of markup formats in any sort of long running type of company using these things? I just have a feeling on this matter. You seem like your technical area of expertise is not in the area you are critiquing? Some of these companies are actually quite technically advanced, so I'm just putting that out there that you might not be as aware of the requirements of parts of the world that use things that you would build in a superior manner if only you were given the task to do so.

      • bayindirh 2 days ago ago

        > I really doubt anyone would chose to use it for anything new these days.

        I use it to store complex 3D objects. It works surprisingly well.

      • temporallobe 2 days ago ago

        XML might have “lost” but it’s still a format being used by many legacy and de novo projects. Transform libraries are also alive and well, some of them coming with hefty price tags.

      • weinzierl 2 days ago ago

        "I really doubt anyone would chose to use it for anything new these days."

        Funny how we went from "use it for everything" (no matter how suitable) to "don't use it for anything new" in just under two decades.

        To me XML as a configuration file format never made sense. As a data exchange format it has always been contrived.

        For documents, together with XSLT (using the excellent XPath) and the well thought out schema language RelaxNG it still is hard to beat in my opinion.
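
        A taste of why XPath is pleasant, using the subset that ships in Python's standard-library ElementTree (the document is made up):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<library>"
    "  <book year='2004'><title>XSLT Cookbook</title></book>"
    "  <book year='2011'><title>JSON at Work</title></book>"
    "</library>"
)

# Declarative selection: every title of a book published in 2004.
titles = [t.text for t in doc.findall(".//book[@year='2004']/title")]
print(titles)  # ['XSLT Cookbook']
```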

      • ekianjo 2 days ago ago

        LLMs produce much more consistent XML than JSON because JSON is a horrible language that can be formatted in 30 different ways with tons of useless spaces everywhere, making for terrible next token prediction.
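
        For what it's worth, the formatting variability is easy to demonstrate with Python's stdlib json module: one object, several equally valid but byte-for-byte different encodings:

```python
import json

data = {"name": "Gemini", "year": 2023}

# The same object has many valid textual encodings.
print(json.dumps(data))                         # {"name": "Gemini", "year": 2023}
print(json.dumps(data, separators=(",", ":")))  # {"name":"Gemini","year":2023}
print(json.dumps(data, indent=4))               # four-line pretty-printed form

# All of them round-trip to the same value.
assert json.loads(json.dumps(data, indent=4)) == data
```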

        • AgentME a day ago ago

          Uh can't XML have whitespace throughout it just as much as JSON? They seem pretty similar in this respect.

              <foo    bar="x"  baz  = "y" />
      • MrVandemar 2 days ago ago

        > I really doubt anyone would chose to use it for anything new these days.

        Use the correct tool for the job. If that tool is XML, then I use it instead of $ShinyThing.

        • bryanrasmussen 2 days ago ago

          I have a hilarious example of this. I was hired to consult at a large company that had "configurators": applications that decided all sorts of things if you were building a new factory and needed to use this company's stuff for it. For example, one configurator was a search for replacement parts in your area; if you were building a factory in Asia but wanted to use a particular part that has export restrictions from the U.S. where it is manufactured, you would use this tool to pick out an appropriate replacement part made somewhere in Asia.

          They had like 50 different configurators built at different times using different tech etc. (my memory is a bit fuzzy here as to how many they had etc. but it was a lot) So of course they wanted to make a solution for putting their codebase together and also make it easy to make new configurators.

          So they built a React application to take a configurator input format that would tell you how to build this application and what components to render and blah blah blah etc.

          Cool. But the configurator format was in JSON so they needed to make an editor for their configurator format.

          They didn't have a schema or anything like this they made up the format as they went along, and they designed the application as they went along by themselves, so application designed by programmers with all the wonder that description entails.

          That application at the end was just a glorified tree editor that looked like crap and, of course, had all sorts of functionality, behavior and design mixed in with its need to check constraints for outputting a particular JSON structure at a particular point. Also programmed in React.

          There were about 10 programmers, including several consultants, who had worked on this for over a year when I came along, and they were also shitting bricks because they had only managed to port over 3 configurators; every time they ported a new one they needed to add new functionality to the editor and the configurator compiler, and there was talk of redesigning the whole configurator editor because it sucked to use.

          Obviously the editor part should have been done in XML. Then people could have edited the XML by learning to use XMLSpy, they could have described their language in XML Schema real easy, and so forth.

          But no they built everything in React.

          The crowning hilarity: this application would at most ever be used by about 20 people in the world, and probably by no more than 10.

          I felt obligated by professional pride (and also by the fact that I could see no way this project could keep being funded indefinitely, so it was to my benefit to make things work) to explain how XML would be a great improvement over this state of affairs, but they wouldn't hear of it.

          After about 3 months on it was announced the project would be shut down in the next year. All that work wasted on an editor that could probably have been done by one expert in a month's time.

    • bayindirh 2 days ago ago

      XML is not just a file format. It's a complete ecosystem built around that file: protocols, verifiers, file formats built on top of XML.

      You can take XML and convert it to anything. I use it to model 3D objects, for example, and the model allows for some neat programming tricks while being efficient and, more importantly, human readable.

      Except for being small, JSON is the worst of both worlds: a hacky K/V store, at best.

      • ongy 2 days ago ago

        Calling XML human readable is a stretch. It can be with some tooling, but JSON is easier to read both with tooling and without. How readable the serialization is depends somewhat on the schema, but I know significantly fewer people who can parse an XML file by sight than JSON.

        Efficiency is also... questionable. It requires full Turing-machine power even to validate, IIRC (it surely does to fully parse). By which metric is XML efficient?

        • bayindirh 2 days ago ago

          By efficiency, I mean it's text and compresses well. If we mean speed, there are extremely fast XML parsers around; see this page [0] for the state of the art.

          For hands-on experience, I used rapidxml for parsing said 3D object files. A 116K XML file is parsed instantly (the rapidxml library's aim is speed parity with strlen() on the same file, and they deliver).

          Converting the same XML to my own memory model took less than 1ms including creation of classes and interlinking them.

          This was on 2010s-era hardware (a 3rd-generation i7-3770K, to be precise).

          Verifying the same file against an XSD would add some milliseconds, not more. Considering the core of the problem might take hours on end, torturing memory and CPU, a single 20ms overhead is basically free.

          I believe JSON's and XML's readability is directly correlated with how the file is designed and written (incl. terminology and formatting), but to be frank, I have seen both good and bad examples of both.

          If you can mentally parse HTML, you can mentally parse XML. I tend to learn to parse any markup and programming language mentally so I can simulate them in my mind, but I might be an outlier.

          If you're designing a file format based on either for computers only, approaching Perl-level regular-expression unreadability is not hard.

          Oops, forgot the link:

          [0]: https://pugixml.org/benchmark.html

        • StopDisinfo910 2 days ago ago

          > Calling XML human readable is a stretch.

          That’s always been the main flaw of XML.

          There are very few use cases where you wouldn't be better served by an equivalent, more efficient binary format.

          You will need a tool to debug XML anyway as soon as it gets a bit complex.

          • bayindirh 2 days ago ago

            A simple text editor of today (Vim, Kate) can sanity-check an XML file in real time. Why debug?

            • StopDisinfo910 2 days ago ago

              Because issues with XML are pretty much never sanity-check failures. After all, XML is pretty much never written by hand, but by tools which will most likely produce valid XML.

              Most of the time you will actually be debugging what's inside the file to understand why it caused an issue, and finding out whether that comes from the writing or the receiving side.

              It's pretty much like a binary format, honestly. XML basically has all the downsides of one with none of the upsides.

              • bayindirh 2 days ago ago

                I mean, I found it pretty trivial to write parsers for my XML files, which are not simple ones, TBH. The simplest one contains a bit more than 1,700 lines.

                It's also pretty easy to emit, "I didn't find what I'm looking for under $ELEMENT" while parsing the file, or "I expected a string but I got $SOMETHING at element $ELEMENT".

                Maybe my view is skewed because I've worked with XML files for more than a decade, but I never spent more than 30 seconds debugging an XML parsing process.

                Also, this was one of the first parts I "sealed" in the said codebase and never touched again, because it worked, even when the incoming file was badly formed (by erroring out correctly and cleanly).
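                A minimal sketch of that defensive style using only Python's stdlib; the element names (Scene, Mesh, Name) are made up for illustration, not taken from the commenter's actual format:

```python
import xml.etree.ElementTree as ET

def require(parent, tag):
    # Fail with a message naming both the missing tag and its parent,
    # in the spirit of "I didn't find what I'm looking for under $ELEMENT".
    child = parent.find(tag)
    if child is None:
        raise ValueError(f"I didn't find <{tag}> under <{parent.tag}>")
    return child

root = ET.fromstring("<Scene><Mesh><Name>cube</Name></Mesh></Scene>")
mesh = require(root, "Mesh")
print(require(mesh, "Name").text)  # cube
```

                Once the happy path and every error path are verified, a parser like this can be "sealed" and left alone.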

                • StopDisinfo910 2 days ago ago

                  > It's also pretty easy to emit, "I didn't find what I'm looking for under $ELEMENT" while parsing the file, or "I expected a string but I got $SOMETHING at element $ELEMENT".

                  I think we are actually in agreement. You could do exactly the same with a binary format, without having to deal with the cumbersomeness of XML, which is my point.

                  You are already treating XML like one, writing errors in your own parsers and "sealing" it.

                  What’s the added value of xml then?

                  • bayindirh a day ago ago

                    > cumbersomeness of xml...

                    Telling the parser to navigate to the first element named $ELEMENT, checking a couple of conditions, and assigning values in a defensive manner is not cumbersome in my opinion.

                    I would not call parsing binary formats cumbersome (I'm a demoscene fan, so I aspire to match their elegance and performance in my codebases), but it's not the pragmatic approach for this particular problem.

                    So, we arrive at your next question:

                    > What’s the added value of xml then?

                    There are several benefits. Let me try to explain.

                    First of all, it's a self-documenting text format. I don't need extensive documentation for it. I have a spec, but someone opening the file in a text editor can see what it is and understand how it works. When half (or most) of the users of your code are non-CS researchers, that's a huge plus.

                    Speaking of non-CS researchers, these folks will be the ones generating these files from different inputs. Writing XML in any programming language, incl. Fortran and MATLAB (not kidding), is 1000 times easier than writing a binary blob.

                    Extending the file format I developed on top of XML is extremely easy. You change a version number, maybe add a couple of paths to your parser, and you're done. If you feel fancy, allow for backwards compatibility, or just throw an error if you don't like the version (this is for non-CS folks mostly; I'm not that cheap). I don't need to work with nasty offsets or slight behavior differences causing me to pull my hair out.

                    Preservation is much easier, too. Scientific software rots much quicker than conventional software, so keeping the file format readable is better for preservation.

                    "Sealing" in that project's parlance means "verify and don't touch it again". When you're comparing your results against a ground truth to 32 significant digits, you don't poke around leisurely. If it works, you add a disclaimer that the file is "verified at YYYYMMDD" and it's closed for modifications unless necessary. The same principle also applies for performance reasons.

                    So, building a complex file format on top of XML makes sense. It makes the format accessible, cross-platform, easier to preserve, and more.

          • scotty79 a day ago ago

            With this you get an efficient binary format and the generality of XML:

            https://en.m.wikipedia.org/wiki/Efficient_XML_Interchange

            But somehow Google forgot to implement this.

        • int_19h 2 days ago ago

          It's kinda funny to see "not human readable" as an argument in favor of JSON over XML, when the former doesn't even have comments.

          • queenkjuul a day ago ago

            And yet, it's still easier for me to parse with my eyes

      • mortarion 2 days ago ago

        I mean, at least JSON has a native syntax to indicate an array, unlike XML which requires that you tack on a schema.

        <MyRoot> <AnElement> <Item></Item> </AnElement> </MyRoot>

        Serialize that to a JavaScript object, then tell me, is "AnElement" a list or not?

        That's one of the reasons why XML is completely useless on the web. The web is full of XML that doesn't have a schema because writing one is a miserable experience.
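        The ambiguity is easy to demonstrate with a quick stdlib-only Python sketch (the document shapes are made up for illustration):

```python
import json
import xml.etree.ElementTree as ET

# JSON: the container type is explicit in the syntax itself.
data = json.loads('{"AnElement": [{"Item": None}]}'.replace("None", "null"))
assert isinstance(data["AnElement"], list)  # unambiguous

# XML: without a schema, a single child element is indistinguishable
# from a one-element list -- the syntax alone cannot tell you.
root = ET.fromstring("<MyRoot><AnElement><Item/></AnElement></MyRoot>")
elements = root.findall("AnElement")
print(len(elements))  # 1 -- but is AnElement a scalar or a one-item list?
```

        A serializer that maps XML to JavaScript objects has to guess, and guesses differently once a second sibling appears.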

        • bayindirh 2 days ago ago

          This is why you can have attributes on a tag. You can make an XML file self-explanatory.

          Consider the following example:

              <MyRoot>
                <AnElement type="list" items="1">
                  <Item>Hello, World!</Item>
                </AnElement>
              </MyRoot>
          
          Most parsers have type-aware parsing, so if somebody puts a string where you expect an integer, you can get an error, nil, or "0", depending on your choice.

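          A sketch in Python of how a consumer might honor that convention; the `type`/`items` attribute names come from the example above, but the helper logic is my own, not from any particular library:

```python
import xml.etree.ElementTree as ET

doc = """
<MyRoot>
  <AnElement type="list" items="1">
    <Item>Hello, World!</Item>
  </AnElement>
</MyRoot>
"""

root = ET.fromstring(doc)
el = root.find("AnElement")
# The attribute tells the consumer to treat the children as a list,
# even when only one <Item> is present.
if el.get("type") == "list":
    items = [item.text for item in el.findall("Item")]
else:
    items = el.find("Item").text
print(items)  # ['Hello, World!']
```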
          • dminik a day ago ago

            I had the displeasure of parsing XML documents (into Rust) recently. I don't ever want to do this again.

            JSON, for all its flaws, is beautifully simple in comparison. A number is either a number or the document is invalid. Arrays are just arrays and objects are just objects.

            XML on the other hand is the wild west. This particular XML beast had some difficulty sticking to one thing.

            Take for instance lists. The same document had two different ways to do them:

              <Thing>
                <Name>...</Name>
                <Image>...</Image>
                <Image>...</Image>
              </Thing>
            
              <Thing>
                <Name>...</Name>
                <Images>
                  <Image>...</Image>
                  <Image>...</Image>
                </Images>
              </Thing>
            
            Various values were scattered between attributes and child elements with no rhyme or reason.

            To prevent code reuse, some element names were namespaced, so you might have <ThingName /> and <FooName />.

            To round off my already awful day, some numbers were formatted with thousands separators. Of course, these can change depending on your geographical location.

            Now, one could say that this is just the fault of the specific XML files I was parsing. And while I would partially agree, the fact that the format makes this possible is a sign of its quality.

            Since there's no clear distinction between objects and arrays you have to pick one. Or multiple.

            Since objects can be represented with both attributes and children you have to pick one. Or both.

            Since there are no numbers in XML, you can just write them out any way you want. Multiple ways is of course preferable.
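            The thousands-separator problem in particular forces every consumer to carry a guess like this (a hypothetical helper, not from any real schema or library):

```python
def parse_loose_number(text: str) -> float:
    # Assume "," is a thousands separator -- which is wrong for locales
    # where "1.234" means one thousand two hundred thirty-four.
    return float(text.replace(",", ""))

print(parse_loose_number("12,345.67"))  # 12345.67
```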

            • jll29 a day ago ago

              The file you got sounds neither valid nor well-formed. It might not even be XML.

              I know you describe a real-life situation, but if XML gets abused it's not XML's fault, just like it's not JSON's fault if JSON gets abused.

              • dminik a day ago ago

                Could you elaborate why you think so?

                As far as I can tell, the file was a fully valid XML file. The issue is that doesn't really tell you (or guarantee) much.

                There's just no one specific way to do a thing.

            • bayindirh 21 hours ago ago

              There's a trade-off and tension between simplicity and flexibility. In recent days, the post titled "I prefer RST over Markdown" has surfaced again [0][1], showing the same phenomenon clearly.

              Simple formats are abuse-proof because of their limitations, and that makes perfect sense in some cases (I'm a Markdown fan, for example, but prefer LaTeX for serious documents). Flexible formats are more prone to abuse and misuse. XML is extremely flexible and puts the burden of designing and sanity-checking the file format on its producers and consumers. This is why it has a couple of verification standards built on top of it.

              I personally find it very unproductive to yell at a particular file format because it doesn't satisfy some users' expectations out of the box. The important distinction is whether it provides the capability to address those expectations or not. XML has all the bells and whistles, and then some, to craft sane, verifiable, and easily parseable files.

              I also strongly resist the notion of making everything footgun-proof. Not only does it stifle creativity and progress, it makes no sense; we should ban all kinds of blades, then. One shall read the documentation of the thing one intends to handle before starting. The machine has no brain; we shall use ours instead.

              I'm guilty of it myself. Some of my v1 code holds some libraries very wrong, but at least I reread the docs and correct those parts iteration by iteration (and no, I don't use AI in any form or shape for learning or code refactoring).

              So if somebody misused any format and made it miserable to parse, I'd rather put the onus on the programmer who implemented the file format on top of that language, not on the language itself (XML is a markup language).

              The only file format I prefer not to use is YAML [2]. The problem is its "always valid" property. This puts YAML into the "Risk of electric shock. Read the manual, and read it again, before operating this" category. I'm sure I could make it work if I needed to, but YAML's time hasn't come for me yet. I'd rather use INI or TOML (INI++) for configuring things.

              [0]: https://news.ycombinator.com/item?id=41120254

              [1]: https://news.ycombinator.com/item?id=44934386

              [2]: https://noyaml.com/

      • agos 2 days ago ago

        it's a lot of things, none of them in the browser anymore

        • bayindirh 2 days ago ago

          RSS says hi!

          • agos 2 days ago ago

            as much as it pains me to say it, that is also a sailed ship

            • bayindirh 2 days ago ago

              I still follow feeds, my blog's RSS feed gets ~1.5K fetches every day.

              How is it a sailed ship?

              • agos 2 days ago ago

                how many of those 1.5K you think are using a web browser to read that feed?

                • bayindirh 2 days ago ago

                  The platform I use doesn't give statistics on that (I don't host my blog myself), but I assume the number is >0, since there are a lot of good, free, browser-based RSS readers.

    • jon-wood 2 days ago ago

      > AI agents don't care at all

      And I don't care at all about the feelings of AI agents. That a tool that's barely existed for 15 minutes doesn't need a feature is irrelevant when talking about whether or not to continue supporting features that have been around for decades.

    • vidarh 2 days ago ago

      Agreed. Having actually built and deployed an app that could render entirely from XML with XSLT in the browser: I wouldn't do it again.

      Conceptually it was beautiful: We had a set of XSL transforms that could generate RSS, Atom, HTML, and a "cleaned up" XML from the same XML generated by our frontend, or you could turn off the 2-3 lines or so of code used to apply the XSL on the server side and get the raw XML, with the XSLT linked so the browser would apply it.

      Every URL became an API.

      I still like the idea, but hate the thought of using XSLT to do it. Because of how limited it is, we ended up having e.g. multiple representations of dates in the XML, because trying to format dates nicely in XSLT for several different uses was an utter nightmare. This was pervasive: there was no realistic prospect of making the XML independent of formatting considerations.

      • scotty79 a day ago ago

        XSLT is much nicer to use if you just create a very simple templating language that compiles to XSLT. A subset of XSLT already has the structure of a typical templating language. It can even be done with regexps.

        Then simplicity becomes a feature. You can write your page in pretty much pure HTML, or even pure HTML if you use comments or custom tags for block markers. Each template is simple and straightforward to write and read.

        And while a different date format seems like a one-off thing you'd prefer to deal with as late as possible in the stack, if you think more broadly, like addressing a global audience in their respective languages and cultures, you want to support that on the server, so the data (dates, numbers, labels) lands on the client in the correct language and culture. Then doing just dates, and perhaps numbers, in the browser is inconsistent.

        If browsers implemented https://en.m.wikipedia.org/wiki/Efficient_XML_Interchange the web would get double-digit percent lighter, faster, and more accessible to humans and AI.

        But that would let you filter out ads orders of magnitude easier. So it won't happen.

        • vidarh a day ago ago

          > You can write your page in pretty much pure HTML, or even pure HTML if you use comments or custom tags for block markers.

          That's exactly what we didn't want. The XSL encoded the view. The "page" was a pure semantic representation of the data in XML that, wherever possible, was a direct projection of the models stored and serialized internally in our system, and the XSL generated each of the different views, be it HTML, RSS, Atom, or a condensed/simplified XML view. The latter was necessary largely because the "raw" XML data was more verbose than needed due to the deficiencies of XSL.

          It's possible it'd be more pleasant to use XSL your way, but that way wouldn't have solved any issues we had a need to solve.

          > you want to support that on the server so the data (dates, numbers, labels) lands on the client in the correct language and culture.

          That would've meant the underlying XML would need to mix view and model considerations, which is exactly what we didn't want.

          Today I'd simply use a mix of server-side transformations, CSS, and web components to achieve the same thing rather than try to force XSL to work for something it's so painful to use for.

          • scotty79 14 hours ago ago

            Sorry, I misspoke. What I had was that the contents of the page were served to the browser as XML. The browser automatically requested the appropriate XSLT to convert the XML to XHTML to display it nicely. Basically the same thing that you had, except that I didn't need feeds.

            What I wanted to say is that I didn't write the XSLT by hand. Instead I was writing XHTML files with just a few short, convenient markers, and my regexp-based "compiler" converted them into the various nasty xsl tags, generating the output XSLT that was served.

            For example `$name` was converted to `<xsl:value-of select="name" />` and `@R:person` was converted to `<xsl:for-each select="person">` and `@E` was converted to `</xsl:for-each>`

            Basically there were 5 tags that were roughly equivalent to

            with, for, else, if, end

            `with` descended into a child in XML tree if the child was present and displayed enclosed XHTML

            `for` displayed copy of the enclosed XHTML for each copy of its argument present in XML, descending into them

            `else` displayed enclosed XHTML only if the argument node didn't exist in XML

            `if` displayed the enclosed XHTML only if the argument element existed

            Neither `if` nor `else` descended into the argument node in the XML; `with` and `for` did. The XSLT used relative paths everywhere.

            `end` marked the end of each respective block.
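            A toy version of that regexp "compiler" in Python; the `$name`, `@R:`, and `@E` markers come from the comment above, while the exact XSLT output strings are my guess at plain equivalents:

```python
import re

# Marker -> XSLT rewrite rules, applied as plain regexp substitutions.
RULES = [
    (r"\$(\w+)", r'<xsl:value-of select="\1" />'),  # $name
    (r"@R:(\w+)", r'<xsl:for-each select="\1">'),   # repeat block open
    (r"@E\b", r"</xsl:for-each>"),                  # block close
]

def compile_template(src: str) -> str:
    for pattern, repl in RULES:
        src = re.sub(pattern, repl, src)
    return src

print(compile_template("<ul>@R:person<li>$name</li>@E</ul>"))
# <ul><xsl:for-each select="person"><li><xsl:value-of select="name" /></li></xsl:for-each></ul>
```

            The full five-marker language (with/for/else/if/end) would add conditional rules in the same style, e.g. mapping to `<xsl:if test="...">`.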

            > That would've meant the underlying XML would need to mix view and model considerations, which is exactly what we didn't want.

            The thing is, what you are serving to the browser is always a View-Model. If you serve a Model instead, you are just "silently casting" it to a View-Model because at that moment they are identical. Sooner or later the need will arise to imbue this with some presentation-specific information attached on the fly to data from your data-layer Model.

            I no longer try to use XSLT either, but I think web components are a completely orthogonal tool. Your XSLT could still generate web components if you like. You could even "hydrate" the generated HTML with React for interactivity.

            What XML+XSLT was solving was basically skipped entirely by programmers. Instead of taking care about separating concerns, performance, and flexibility, they just threw into the browser a server-side-generated soup that is only one step away from a compiled binary, and called it a day.

    • int_19h 2 days ago ago

      Ironically LLMs are actually better at processing and especially outputting correct XML than they are at JSON.

    • zzo38computer a day ago ago

      I think JSON is generally better than XML (although XML is better for some things, mostly it isn't), but JSON is not so good either; I think DER format is much better.

    • _heimdall 2 days ago ago

      The only reason AI agents don't care about XML is because the developers decided, yet again, to attempt to recreate the benefits of REST on top of JSON.

      That's been tried multiple times over the last two decades and it just ends up with a patchwork of conventions and rules defining how to jam a square peg into a round hole.

      • afavour 2 days ago ago

        Many years ago I tried very hard to go all-in on XML. I loved the idea of serving XML files that contain the data and an XSLT file that defined the HTML templates that would be applied to that XML structure. I still love that idea. But the actual lived experience of developing that way was a nightmare and I gave up.

        "Developers keep making this bad choice over and over" is a statement worthy of deeper examination. Why? There's usually a valid reason for it. In this instance JSON + JS framework of the month is simply much easier to work with.

        • bilog 2 days ago ago

          Most of the issues with using client-side XSLT stem from browsers not having updated their implementations since v1, nor their tooling to improve debugging. Both issues would be resolved by improving the implementations and tooling, as pointed out by several commenters on the GH issue.

          • FeepingCreature 2 days ago ago

            That kind of demonstrates why XSLT is a bad idea as well though. JSON has its corner cases, but mostly the standard is done. If you want to manipulate it, you write code to do so.

            • _heimdall 2 days ago ago

              JSON correlates to XML rather than XSLT. As far as I'm aware, XML as a standard is already done as well.

              XSLT is more related to frontend frameworks like react. Where XML and JSON are ways of representing state, XSLT and react (or similar) are ways of defining how that state is converted to HTML meant for human consumption.

              • bilog 2 days ago ago

                Also, fun fact, XSLT 3 relies on XPath 3.1 which can also handle JSON.

                • _heimdall 2 days ago ago

                  Ah, so it can! I've only ever used XSLT with built-in browser support, never even realized the latest would allow JSON to be rendered with XSLT!

              • FeepingCreature 2 days ago ago

                Yes but React is not built into the browser, that's kinda my point.

                • _heimdall 17 hours ago ago

                  XSLT is a browser spec and part of the web platform, though; React never went through that process.

                  For what it's worth, XSLT 3.0 can apparently work with JSON as well, if your main concern from a couple of comments up is XML vs JSON.

        • molteanu 2 days ago ago

          "Choice" is a big word here. It would imply "we've weighed the alternatives, the pros and cons, we've tested and measured different strategies and implementations, and we came to this conclusion: [...]". You know, like science and engineering.

          While oftentimes what actually happens is: "oh, this thing seems to be working. And it looks easy. Great! Moving on.."

        • scotty79 a day ago ago

          XSLT is much nicer to use if you just create a very simple templating language that compiles to XSLT. A subset of XSLT already has the structure of a typical templating language. It can even be done with regexps. Then simplicity becomes a feature. You can write your page in pretty much pure HTML, or even pure HTML if you use comments or custom tags for block markers. Each template is simple and straightforward to write and read.

        • 2 days ago ago
          [deleted]
      • sharpfuryz 2 days ago ago

        People have been building things differently for the last 10 years, using JSON/gRPC/GraphQL (that's why replacing complex formats like XML/WSDL/SOAP with just JSON is a bad idea), so why train (spend money on) AI for legacy tech?

    • 2 days ago ago
      [deleted]
  • myfonj 2 days ago ago

    Coincidentally spotted this source code in some Microsoft Windows Server® 2022 Remote Desktop Web Access thingy yesterday:

        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet type="text/xsl" href="../Site.xsl"?>
        <?xml-stylesheet type="text/css" href="../RenderFail.css"?>
    
        <RDWAPage 
            helpurl="http://go.microsoft.com/fwlink/?LinkId=141038" 
            (…)
    
    So I doubt XSLT is going away any time soon.
  • zzo38computer a day ago ago

    I also think Google is doing many bad things with it (although many of these things are not specific to Google, they are doing most of it): removing stuff, and also adding stuff that just makes it worse.

    Many of the things they add, or that other things are replaced with, seem to mostly benefit Google (and sometimes Cloudflare), rather than actually helping you. This is true of the new Web Authentication systems as much as of other things. (And they seem to want to make you use bloated JavaScript even if neither the author nor the reader wants to.)

    > in 2025 Google announces a change in their Chrome Root Program Policy that within 2026 they will stop supporting certificate with an Extended Key Usage that includes any usage other than server

    I agree that Google should not have done that, but it is often more useful to use different certificates for clients anyways.

    While I think XML is generally not as good as other formats (I think DER is generally better), it works better than some other formats for some things. This is not a reason to get rid of XSLT, though; it is useful. There are other reasons not to require it (e.g. to simplify implementations, though they are currently too complicated mainly due to the newer stuff anyway), but that does not mean that it cannot be used, that it cannot be implemented, etc. (For example, a static site generator might convert XML+XSLT to HTML if you need it, while also providing the original XML+XSLT files to anyone who wants them, thereby making both server-side and client-side use work.)

    • simultsop a day ago ago

      I find it really interesting how much effort everyone puts into nagging about a bad thing rather than ignoring it and working on alternatives. I guess everyone is doing the bad decision maker a favor. People are attracted to negative phenomena, and here you go, another one, and it's rewarding the OP.

      • zzo38computer a day ago ago

        Yes, it would be better to work on alternatives, and I have done some of these things (and so have some other people). However, that won't fix the WWW (or Chrome or Google); it just means there is an alternative (which is still a good thing to have, though). Sometimes, however, they even try to prevent any alternative that is actually good (or sometimes just do so as a consequence of existing specifications, rather than deliberately trying to).

  • billy99k 2 days ago ago

    They have been killing open anything for a long time, very much like Microsoft. As an example, they have the power to block email for a large portion of the Internet. This is used for good, like blocking spam and scams, but also for bad, like blocking political viewpoints they don't like.

    The same can be said about their search engine. This most likely has already altered the outcomes of elections and should have been investigated years ago.

  • ymolodtsov 2 days ago ago

    Coders have this tendency to value ideology over practicality. What matters is something that works and people use, not a theoretical picture of how it could have worked in an alternative timeline.

    • isaacremuant 2 days ago ago

      Actually, control means practicality. Linux won the server wars and that was a combination of ideology AND practicality.

      If a company breaks something so only their path works, it's short-term practicality to use it and long-term practicality to fight for an alternative that keeps control in developers' hands.

      Monopolies are terrible for software developers. Quality and customisation tend to go down, which means less value for the Devs.

    • Devasta 2 days ago ago

      > Coders have this tendency to value ideology over practicality.

      It would be a horrible existence to value anything else. What reason is there to get up in the morning if you think things couldn't be better?

  • SEOCurmudgeon a day ago ago

    It was named Gemini because it was developed by the twin teams at Google, Google Brain and DeepMind. That's the only reason.

  • pjmlp 2 days ago ago

    It has already been the ChromeOS Application Platform for quite some time now.

    Every Chrome installation or related fork, plus Electron shipments, counts.

  • lifeinthevoid 2 days ago ago

    Google is a corporation maximizing shareholder value. That this goal is not aligned with serving the greater good and freedom should come as no surprise.

  • jaredcwhite a day ago ago

    XML for document content (like, the whole point of markup) = awesome.

    XML for app configuration or basic data transfer formats = horrible.

    Unfortunately I fear so many people got burned by the latter that they forgot (or missed entirely) all the greatness of the former.

    • jaredcwhite a day ago ago

      P.S. Google has a bazillion dollars but can't figure out how to maintain a new secure XSLT library, or update to a newer one which already exists? The usage argument is dumb… maybe a lot more sites would find good uses for XML/XSLT if this stuff were actually maintained and promoted properly!

      • AgentME a day ago ago

        Why would Google want to bother? Who actually uses XSLT today for making webpages? Why should browsers spend effort on supporting XML+XSLT-based pages in addition to HTML+CSS-based pages?

  • myfonj 2 days ago ago

    Other anecdotal experiences with Google and their specific attitude towards user/developer needs:

    - Stable Array.sort (2008–2018): "Of course it doesn't have to be stable, the spec does not dictate it right now, it is good for performance, and some other browser even started to do this like we do": http://crbug.com/v8/90

    - Users don't userstyle (2015–): "Of course we absolutely can and will remove this feature from the core, despite it being mandated by several specifications": https://bugs.chromium.org/p/chromium/issues/detail?id=347016

    - The SMIL murder attempt was addressed in the OP article (I think they hold a similar sentiment towards MathML) but luckily was eventually retracted. I guess/hope this XSLT affair will have a similar "storm in a teacup" trajectory.

  • ozgrakkurt 2 days ago ago

    “The reason implementations are riddled with CVEs is neglect”

    IMO this misses the point a bit. If it is neglected, is going to keep producing bugs, and not many people are developing on it, then maybe it makes sense to kill it.

    This also means new browsers won’t have to implement it maybe?

    • bilog 2 days ago ago

      Because those neglecting it are the same people who want to remove it. So it's not “we want to remove it because it's neglected”, but “we want to remove it so we'll neglect it”. This is a pretty standard M.O. for the destruction of the commons.

      If you look at the WHATWG GH issue, you'll see that two distinct, modern, maintained implementations of XSLT, one of which is in Rust (so considerably less likely to be affected by memory bugs), have been proposed as alternatives to what's currently used in WebKit and Blink. The suggestions have been ignored without explanation, because the neglect is the point.

  • gslin 2 days ago ago
  • Devasta 2 days ago ago

    Getting rid of XSLT from the browser would be a mistake, no doubt about it.

    You can see it clear as day in the github thread that they weren't asking permission, they were doing it no matter what, all their concerns about security just being the pretext.

    It would have been more honest of them to just tell everyone to go fuck themselves.

    • JimDabell 2 days ago ago

      > their concerns about security just being the pretext.

      It seems entirely reasonable to be concerned about XSLT’s effects on security:

      > Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.

      https://www.offensivecon.org/speakers/2025/ivan-fratric.html

      https://www.youtube.com/watch?v=U1kc7fcF5Ao

      • youngtaff 2 days ago ago

        AFAIK browsers rely on an old version of xslt libraries and haven’t upgraded to newer versions

        They also seem to be putting pressure on the library maintainer resulting in them saying they’re not going to embargo security bugs

    • simonw 2 days ago ago

      What do you think their real reason for wanting to remove XSLT is, if not what they claim?

      • aragilar 2 days ago ago

        They don't want to support it (because of their perceived cost-benefit ratio for what they're interested in developing/maintaining), and if it is removed from the browser standards then they aren't required to support it (as opposed to driving people to other browsers). One could ask why they push WebUSB and similar "standards", given those would seem (to me) to be a much greater security issue.

      • El_Camino_Real 2 days ago ago

        Why side with the megacorps on every thread, even when it doesn't relate to the big hotness of large language models?

      • jacquesm 2 days ago ago

        To increase the depth of their moat. XSLT would allow anybody with a minimum of effort to extract semantic information from the web.

        • jeroenhd 2 days ago ago

          XSLT is a terrible tool for that job. RDF combined with something like SPARQL is much closer to that, and makes for one of the greatest knowledge processing tools nobody ever uses.

          XSLT is designed to work on XML while HTML documents are almost always SGML-based. The semantics don't work the same, and applying XML engines to HTML often breaks things in weird and unexpected ways. Basic HTML parsing rules like "a <head> tag doesn't need to be closed and can simply be auto-closed by a <body>" will seriously confuse XML engines. To effectively use XSLT to extract information from the web, you'd first need to turn HTML into XML.
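          To illustrate the point, a small Python sketch (stdlib only, illustrative markup): an XML parser rejects tag-soup HTML that an HTML parser accepts without complaint.

          ```python
          from html.parser import HTMLParser
          import xml.etree.ElementTree as ET

          # Valid HTML: <head> is implicitly closed when <body> begins.
          snippet = "<html><head><title>t</title><body><p>hi</html>"

          # An XML parser refuses the unclosed tags outright...
          try:
              ET.fromstring(snippet)
              xml_ok = True
          except ET.ParseError:
              xml_ok = False

          # ...while an HTML parser walks the same tag soup happily.
          class TagCollector(HTMLParser):
              def __init__(self):
                  super().__init__()
                  self.tags = []

              def handle_starttag(self, tag, attrs):
                  self.tags.append(tag)

          collector = TagCollector()
          collector.feed(snippet)

          print(xml_ok)          # False
          print(collector.tags)  # ['html', 'head', 'title', 'body', 'p']
          ```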

          • oefrha 2 days ago ago

            Hey, it works great on the dozens of XHTML websites lying around. Dozens!

          • int_19h 2 days ago ago

            XSLT is designed to work on the XML Infoset, which is basically just an abstract tree of elements with attributes. Which is why XSLT has e.g. HTML output method, even though you use XML snippets to generate it. If you already have logic to parse HTML into a tree, it's trivial to run XSLT on it. Indeed, most recent version of XSLT uses the same trick to process JSON even.

          • aragilar 2 days ago ago

            I think it's the other way round, it's XML -> HTML not HTML -> XML.

        • JimDabell 2 days ago ago

          > XSLT would allow anybody with a minimum of effort to extract semantic information from the web.

          XSLT has been around for decades so why are you speaking in hypotheticals, as if it’s an up-and-coming technology that hasn’t been given a fair chance yet? If it hasn’t achieved that by now, it never will.

        • jon-wood 2 days ago ago

          I feel like this is overly conspiratorial. Likely they want to remove it because it's a pain to support, and used by an ever shrinking proportion of the internet. I don't even necessarily think removing support is a terrible thing; if you want to turn XML into HTML or whatever with XSLT you're still very welcome to do so, you just might have to do it server side rather than expecting every web browser to do it for you.

        • mschuster91 2 days ago ago

          > a minimum of effort

          That is not a combination of words that should be mentioned in the same sentence as the word XML or, even worse, XSLT.

          XML has its value in enterprise and reliable application development because the tooling is very old, very mature and very reliable. But it's not something taught in university any more, it's certainly not taught in "coding bootcamps", simply because it's orders of magnitude more complex than JSON to wrap your head around.

          Of course, JSON has jsonschema, but in practice most real-world usages of JSON just don't give a flying fuck.

      • Devasta 2 days ago ago

        There are other implementations of XSLT available besides libxslt, some even in Javascript. Security is something that could be overcome and they wouldn't need to break styling on RSS feeds or anything, it could be something like how FF has a js for dealing with PDFs.

        It doesn't need to be some big conspiracy: they see the web as an application runtime instead of being about documents and information, don't give a fuck about XML technologies, don't use them internally and don't feel anyone else needs to.

  • scotty79 a day ago ago

    When XSLT revival happens, don't let them forget to implement this:

    https://en.m.wikipedia.org/wiki/Efficient_XML_Interchange

    It's XML at the size of Brotli-compressed JSON.

  • stackedinserter 2 days ago ago

    Google is evil, but man, I never missed XSLT. I'm old enough to remember it and it gives me war flashbacks.

    The good thing is that it makes you strong and resilient to pain over time. It's painfully unreadable. It's verbose (ask ChatGPT to write a simple if statement). Loops? Here's your for-each, and that's all we have. Debugging is for the weak; stdout is your debugger.

    It's just shit tech, period. I hope devs that write soul harvesting surveillance software at Google go to hell where they are forced to write endless xslt's. Maybe that's the reason they want to remove it from Chrome.

    • jeroenhd 2 days ago ago

      I don't really get the hatred for XSLT. It's not the most beautiful language, I'll give you that, but it's really not as bad as people make it out to be.

      I can't imagine wanting to use anything more complex than a for-each loop in XSLT. You can hack your way into doing different loops but that's like trying to implement do/while in Haskell.

      Is it that I've grown too comfortable with thinking in terms of functional programming? Because the worst part of XSLT I can think of is the visual noise of closing brackets.

      • stackedinserter 2 days ago ago

        Probably you never _worked_ with XSLT (which is good for you). Very simple things quickly become 1K of unreadable text.

        E.g. showing the last element of the list with different styling:

        ```
        <xsl:for-each select="items/item">
          <xsl:choose>
            <xsl:when test="position() = last()">
              <span style="color:red;"><xsl:value-of select="."/></span>
            </xsl:when>
            <xsl:otherwise>
              <span style="color:blue;"><xsl:value-of select="."/></span>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:for-each>
        ```

        Or ask chatgpt to count total weight of a shipping based on xml with items that have weights. I did and it's too long to paste here.

        > It's not the most beautiful language, I'll give you that, but it's really not as bad as people make it out to be.

        TBH I can say that about any language or platform that I ever touched. The ZX Spectrum is not that bad, although it has its limits. That 1960s 29-bit machine is not that bad, it just takes time to get used to it. C++ is not that bad for web development, it's totally doable too.

        The thing is that some technologies are more suitable for modern tasks than others; you'll just do much, much more (and better) with a JSON model and JS code than with XSLT.
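        For comparison, a sketch of the last-element styling from the XSLT example above in plain JavaScript (illustrative data and inline styles, mirroring the XSLT version rather than any production code):

        ```javascript
        // Style the last item differently from the rest.
        const items = ["alpha", "beta", "gamma"];

        const html = items
          .map((text, i) =>
            `<span style="color:${i === items.length - 1 ? "red" : "blue"};">${text}</span>`)
          .join("");

        console.log(html);
        // <span style="color:blue;">alpha</span><span style="color:blue;">beta</span><span style="color:red;">gamma</span>
        ```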

  • colesantiago 2 days ago ago

    What can we do to stop Google killing the open web other than complaining?

    One way is to tell everyone to use Firefox (uBlock origin works there)

    It is still an issue that the Mozilla Foundation is still 80% funded by Google though, so this needs to be solved first.

    Somehow Firefox needs to be moved away from Mozilla if they cannot find an alternative funding source other than Google.

    • ozgrakkurt 2 days ago ago

      You can donate to some project like ladybird or servo if they take donations. Or contribute.

      • colesantiago 2 days ago ago

        Ladybird looks promising, but I don't see any donation form for this, only sponsorships.

        If that is the case, we need to come together and donate thousands to ladybird en masse.

        It might take around ~30 years for adoption but it is a start.

    • rapnie 2 days ago ago

      Use the open web yourself, build on it, align with standards and help them mature, and participate in open standards bodies.

      • colesantiago 2 days ago ago

        How long will this take us?

        Don't you think Google and the other big tech companies already have massive influence in the W3C and web standards?

    • pjmlp 2 days ago ago

      Stop using Chrome and Electron, but of course no one will, it was devs that made them what they are today.

    • zoobab 2 days ago ago

      Ask antitrust authorities to dissolve them properly in acid.

    • andersmurphy 2 days ago ago

      That would have been true pre Mozilla implosion.

    • ymolodtsov 2 days ago ago

      Firefox is used by 1% of users.

    • ekianjo 2 days ago ago

      If you read the article you will see that Mozilla supports the removal of XSLT. So switching to Firefox, which also turned off RSS support several years ago, is hardly a good choice.

    • izacus 2 days ago ago

      You need to admit to yourself that maintaining a critical piece of software like a web browser costs a lot of work and money, and start figuring out where and how you'll fund people who do more than complain.

      Developing software is hard - and OSS hasn't found a way to do hard things yet.

  • jmclnx 2 days ago ago

    To me, when Google renamed Bard to Gemini, they should have been dragged into court. But the Gemini people have no funds for this, so big money wins. At the least, though, a trademark complaint could have been filed.

    In any case, I do not use Google at all unless forced. My old Gmail address is a "dump": if a site asks for an email, they get that one. I only log into Gmail to delete the "spam" I get.

  • 18 hours ago ago
    [deleted]
  • gennarro 2 days ago ago

    Site not loading? Maybe the open web isn’t all it’s cracked up to be? /s

    • wltr 2 days ago ago

      Might not load if you're in Ukraine and the server you're trying to access is located in Russia; it's blocked at the country level. This works very well for me: I can easily distinguish someone pretending to be in the EU while actually being in Russia. I have no idea whether this is the case here, but it does not load for me either.

      • cyberlimerence 2 days ago ago

        It's hosted in Italy, as per the DNS record. I don't think you can register .eu domain if you're not based in Europe/EU.

        • wltr a day ago ago

          As I’ve stated, I have no idea if that’s the case. It might have been slashdotted. Having a .eu domain and a Russian host are two separate things, as you may know. I was talking about the latter.

    • andersmurphy 2 days ago ago

      Loading fine for me.

  • EVa5I7bHFq9mnYK 2 days ago ago

    -

    • JumpCrisscross 2 days ago ago

      > here is ChatGpt summary

      Please don’t do this.