One thing that I think this discussion is highlighting to me is that there's very little support in the web standard (as implemented by browsers) for surfacing resources to users that aren't displayable by the browser.
Consider, for example, RSS/Atom feeds. Certainly there are <link /> tags you can add, but since none of the major browsers do anything with those anymore, we're left dropping clickable links to the feeds where users can see them. If someone doesn't know about RSS/Atom, what's their reward for clicking on those links? A screenful of robot barf.
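For reference, the discovery markup is just a couple of tags in the page's <head> (the feed paths here are hypothetical); nothing surfaces them to users unless the browser or a feed reader chooses to:

    <link rel="alternate" type="application/rss+xml"  title="RSS feed"  href="/feed.rss">
    <link rel="alternate" type="application/atom+xml" title="Atom feed" href="/feed.atom">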
These resources in TFA are another example of that. The government or regulatory bodies in question want to provide structured data. They want people to be able to find the structured data. The only real way of doing that right now is a clickable link.
XSLT provides a stopgap solution, at least for XML-formatted data, because it allows you to provide that clickable, discoverable link, without risking dropping unsuspecting folks straight into the soup. In fact, it's even better than that, because the output of the XSLT can include an explainer that educates people on what they can do with the resource.
If browsers still respected the <link /> tag for RSS/Atom feeds, people probably wouldn't be pushing back on this as hard. But what's being overlooked in this conversation is that there is a real discoverability need here, and for a long time XSLT has been the best way to patch over it.
> One thing that I think this discussion is highlighting to me is that there's very little support in the web standard (as implemented by browsers) for surfacing resources to users that aren't displayable by the browser.
Really wish registerProtocolHandler were more popular. And I really wish registerContentHandler hadn't been dropped!
Web technology could be such a nexus of connectivity. We could have the web interacting with so much, offering tools for so much. Alas, support has largely gotten worse decade by decade. And few have taken up the chance.
Bluesky largely uses at:// URLs. Eventually we could probably argue for native support for the protocol. But web+at:// is permissionless: tools like https://pdsls.com can become web-based tools with almost no effort, if they want.
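Registering a handler for a web+ scheme is nearly a one-liner. A minimal sketch; the handler URL is hypothetical, and in practice it generally has to be same-origin with the registering page:

    // Must run in a secure context; custom schemes need the "web+" prefix.
    // "%s" is replaced with the full URL of the link the user activated.
    navigator.registerProtocolHandler(
      "web+at",
      "https://example.com/resolve?uri=%s"
    );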
at:// urls were unfortunately mis-designed as explained in https://github.com/bluesky-social/atproto-website/issues/417
The suggestion to move the RSS/Atom feed links to a hidden link element is a horrible one for me, and presumably for others who want to copy the feed URL and paste it into their podcast applications. It also adds another layer of indirection: an application has to fetch and inspect the page before it can find the feed.
Part of the reason HTML5/LS was created was to preserve the behaviour of existing sites and malformed markup, such as omitted html/head/body tags or missing closing tags. I bet some of those quirks had usage comparable to XSLT's on the web.
You're right, it's not a great flow! And while many podcast/feed reader applications support pasting the URL of the page containing the <link /> element, that still leaves the problem of advertising that one can even do that, or that there's a feed available in the first place.
Flash removal broke multiple government sites. I couldn't take a required training course for a few months after flash support was removed and the site was taken offline for an upgrade.
I’m sure ActiveX and Silverlight removal did too. And iframes not sharing cross domain cookies. And HTTP mixed content warnings. I get it, some of these are not web specs, but some were much more popular than XSLT is now.
The government will do what they do best, hire a contractor to update the site to something more modern. Where it will sit unchanged until that spec too is removed, some years from now.
Flash was never a web standard. XSLT is.
It’s a big attack surface that receives no scrutiny due to very low use, in exchange for minimal benefits. That tradeoff was a lot more tilted in Flash’s favor, but removing it was still the right choice.
What's the practical difference to users and site maintainers?
Maybe I'm missing something here, but can't XSLT be processed server side instead of browser side?
It seems like a very easy fix for the handful of websites that still use it.
XSLT is often used on low-power IoT devices, which don't have the resources to render server-side.
RSS/Atom feeds can use them. How does it make sense to maintain two versions of the same data on the server?
Exactly. The Atom feed of my website declares an XSLT stylesheet which transforms it to HTML. That way it can be served directly to, and rendered prettily by, a web browser (see https://paul.fragara.com/feed.xml). For the curious, the XSLT can be found here: https://gitlab.com/PaulCapron/paul.fragara.com/-/blob/master...
Btw, you can also apply an XSLT sheet to an XML document using standard JavaScript: https://developer.mozilla.org/en-US/docs/Web/API/XSLTProcess...
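A minimal sketch of that API in use, assuming xmlDoc and xslDoc are XML Documents you've already parsed (e.g. with DOMParser):

    const proc = new XSLTProcessor();
    proc.importStylesheet(xslDoc);  // compile the stylesheet once
    const frag = proc.transformToFragment(xmlDoc, document);
    document.body.append(frag);     // show the transformed output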
There are no easy fixes for government sites.
There would be no reason to fix this if the Chrome people had kept up their end of the bargain by supporting the standard. We can quibble as to whether or not XSLT should have been part of the standard to begin with, but it IS part of the standard.
Google says it's "too difficult" and "resource intensive" to maintain...but they've deliberately left that part of the browser to rot instead of incrementally upgrading it to a modern XSLT standard as new revisions were released so it seems like a problem of their own making.
Given their penchant for user-hostile decisions, it's hard to give the Chrome team the benefit of the doubt here that this is being done purely for maintainability and better security (especially given their proposal to just offload it to a JS polyfill).
Commercial enterprises can only support standards if it's commercially viable.
It's commercially beneficial to make the web standard so complex that it's more or less impossible to implement, since it lets you monopolise the browser market. However, complexity only protects incumbents if you can persuade enough people to use the overcomplicated bits. If hardly anyone uses it, like XSLT, then it's a cost for the incumbent which new entrants might get away without paying. So there's no real upside for Google in supporting it. And you can't expect commercial enterprises to do something without any upside.
Here are some of the common sense rules for evolving a standard:
1. Keep the standards simple. Avoid adding features if you can. Standards define implementations. Don't invert that pattern and make the standards morbidly obese.
2. Keep the features orthogonal. Don't create multiple ways of doing the same thing. Make sure that each feature plays well with the others.
3. Maintain backwards compatibility. Don't break anything that depends on your standard. Don't frustrate your implementers and their customers with an endless game of whack-a-mole.
4. All the above are on a best-effort basis. Exceptions are acceptable under exceptional circumstances.
For some reason, the WHATWG has the diametrically opposite belief on all the above. Perhaps they should be called the Web upside-down standards. You have no problem adding features to the standards faster than anyone can read them. But maintaining and upgrading an old feature is somehow too far beyond your capability to justify keeping it around. I guess it's back to uni for me to figure out how I got this so wrong.
I don't understand how WHATWG decides to remove XSLT, contradicting the 30+ years of never break the web doctrine... And simultaneously doesn't want to fix the typeof null specification bug because of, wait for it, Microsoft Exchange 2003 relying on that.
This makes absolutely no sense.
We could've had such a nice language. The efforts for a cleaner language and web platform API were there, but doctrine always said no because of legacy and people have moved on to alternatives now.
WHATWG have removed features before, e.g. frameset, font, and applet elements from HTML. All of them were rarely used and had better alternatives available.
Easy: the WHATWG hasn't decided to remove XSLT. They are starting the process of deciding now.
Simple: XSLT is a giant attack surface implemented entirely in C, and no browser maintainer cares to expend resources on maintaining it (pretty sure every browser uses libxslt).
It really should just be compiled to WASM and used with some sort of DOM bridge.
The big sticking point is not the JS access; that's mildly easy to implement.
It’s that currently you can open an XML file (including feeds) with an associated stylesheet and the stylesheet gets applied, which can be used to render an HTML document on the client side from an xml source like a feed.
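Concretely, the hookup is a processing instruction in the XML prologue; a minimal sketch (paths and feed content hypothetical):

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
    <feed xmlns="http://www.w3.org/2005/Atom">
      <!-- entries; the stylesheet decides how they render -->
    </feed>

Open that file directly in a browser today and you get the transformed HTML rather than the raw XML tree; that's the behavior on the chopping block.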
Recent and related:
"Remove mentions of XSLT from the html spec" - https://news.ycombinator.com/item?id=44952185 - Aug 2025 (522 comments)
Should we remove XSLT from the web platform? - https://news.ycombinator.com/item?id=44909599 - Aug 2025 (96 comments)
Browser vendors really screwed the pooch on XSLT, the same way that MNG got sidelined in the past. They integrated an early version of a tech and then completely failed to modernize it.
There's potentially a real need for something like XSLT + XForms for low-to-no JS interactivity.
Even a basic JS-free, HTML-modifying operation for WebForms would go a long way towards that (e.g. insert a row, delete the element matching a given ID on click, etc.).
Context: until fairly recently, I worked on an implementation of XForms (as the project creator, and leading most of the architecture/design). Prior to that, I inherited maintenance of another implementation, which utilized XSLT[1] as a core aspect of its design.
I’m curious if you could describe more about what you envision. I have a difficult time imagining:
1. How the stateful/interactive aspects of XForms (such as insert/delete, as you mention) would be implemented client side without JS.
2. How XSLT factors into that vision.
It might be lack of imagination on my part! Admittedly, I don’t know everything about XSLT and maybe it has more interactive capabilities than I’m aware of. But at least as far as I know, when a browser vendor did implement XForms functionality it still ultimately depended on JS to achieve certain behaviors. It’s been a while since I was looking into this, but IIRC it looked conceptually similar to XUL. And granted, that might mean the JS is provided by a browser, and encapsulated from user space similar to browsers’ use of shadow DOM for standard form controls now.
1: I also have good reason to believe my work on this prior project helped keep XSLT alive in Chrome the last time there was a major push to deprecate it! Albeit totally inadvertently: my work just happened to correlate very closely to a spike in usage metrics around the same time, which was then cited as a reason to hold off on deprecation.
Even the most basic style of interaction would be amazing:
1. Associate a form element with a non-JS action, e.g. add-element, remove-element, modify-element.
2. Allow those actions to make use of <template> elements and/or XPath selectors when adding or modifying elements.
3. Add a new <input type=id> (or similar) that auto-generates unique UUIDs for form rows.
A mockup of what we'd get, though it's actually focused on pure HTML (it would be XML-compatible, however). This is 100% a straw-man, probably not even fully-self-consistent, but giving an idea of what I would want:
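Something along these lines, where every element and attribute is invented for illustration (none of this exists in any spec):

    <!-- Invented syntax throughout: "action", "target", and type=id -->
    <template id=row>
      <li><input type=id name=row-id>  <!-- would auto-generate a UUID per row -->
          <input name=title>
          <button action=remove-element target="closest(li)">Remove</button></li>
    </template>
    <ul id=rows></ul>
    <button action=add-element template=row target="#rows">Add a row</button>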
Some existing standards/specs/proposals I cribbed this from:
- https://html.spec.whatwg.org/multipage/scripting.html#the-te...
- (defunct) https://html.spec.whatwg.org/multipage/form-elements.html#th...
By the way, I just noticed the embedded JSON viewer has disappeared from Firefox and Chrome?
The XML viewer is still there, there are colors and you can collapse the nodes.
XML was an abomination in terms of format but there were some really good ideas in the early web. I remember you could point to a stylesheet to apply to an XML file.
I really wish we could apply CSS stylesheets to a JSON document.
XML and XSLT: an elegant tool for a more civilized age. Anything that came later was worse: YAML and JSON; while dark forces bastardized XML and silently killed XHTML.
Try parsing an HTML page these days - heck, if you're lucky, the Anubis girl will let you in to see the JavaScript trash soup, maybe even taste it.
Seems like a pretty cut and dried case for removal of a feature that is not only not used but even in cases where it might be useful the sites have html versions and don't expect people to click on the xml links.
I'm not sure I understand -
did you mean:
it seems like a pretty cut and dried case for removal (of a feature that is not only not used but even in cases where it might be useful the sites have html versions and don't expect people to click on the xml links) because it will break many government sites?
or
it seems like a pretty cut and dried case for removal (of a feature that is not only not used but even in cases where it might be useful the sites have html versions and don't expect people to click on the xml links) because the feature is not used etc. etc. ?
They should switch, then. This notion that we can't peel away browser spec complexity because it will break some websites is the same reason ActiveX continued to wreak havoc long after it should have. I found multiple security vulnerabilities in XSLT back in my pen-testing days, and even if it didn't increase the threat surface, the browser spec needs to be simplified anyway.
If you read the first comment, it seems that these examples were cherry-picked alternative formats. Trying to drum up outrage, I guess.
I don't care one way or the other about XSLT, but fucking hell, I would like a boatload more intellectual honesty in the world. Being angry is not a good reason to cherry-pick your data. It is a reason to step away from the argument and cool down before you re-engage with honesty and clarity.
Is there a reason this can't move into an extension? It seems like JS in the browser could implement rendering. I think simplifying the standard makes sense.
I used to use XSLT all the time, but I had forgotten all about it and haven't used it in years. It was perfect to do a quick SQL query with "for xml auto" and then add an XSLT stylesheet to it. Instant report.
XML + XSLT should never have been allowed client side. Not sure why no one decided to force this to be a server-side operation, or at least require it to be done in JS.
BUT since it's there, don't take it out. Or at least take it out of the spec and then require all browser vendors to auto-install a plugin that restores the missing feature, and warn that the page relies on a plugin or something.
Brian's comment here is really important for bystanders to be able to understand the process: https://github.com/whatwg/html/issues/11582#issuecomment-320...
Removals sometimes take a decade or more, or sometimes don't happen at all. Just because the vendors would like to remove something, doesn't mean they can.
For example, MutationEvents were deprecated in 2011 and just removed last year.
So this is just the beginning of the process. Even the PR to remove XSLT from the spec doesn't mean it's being merged soon. Removal from the spec is different from removal from engines.
i’m strongly in favor of simplifying the standard
You can polyfill this with JavaScript though?
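Roughly, yes; the tricky part is the declarative case, where the browser applies the stylesheet on navigation. A sketch of the shape such a polyfill might take, under some loud assumptions: it leans on XSLTProcessor, which is itself slated for removal, so a real polyfill would have to bundle its own engine (e.g. libxslt compiled to WASM), and the url argument is assumed absolute:

    async function renderXmlWithStylesheet(url) {
      const parse = (t) => new DOMParser().parseFromString(t, "application/xml");
      const xmlDoc = parse(await (await fetch(url)).text());
      // Find <?xml-stylesheet type="text/xsl" href="..."?> in the prologue.
      const pi = [...xmlDoc.childNodes].find(
        (n) => n.nodeType === Node.PROCESSING_INSTRUCTION_NODE &&
               n.target === "xml-stylesheet");
      // Crude pseudo-attribute parsing; fine for a sketch, brittle in general.
      const href = pi && /href="([^"]+)"/.exec(pi.data)?.[1];
      if (!href) return; // no stylesheet declared: nothing to do
      const xslDoc = parse(await (await fetch(new URL(href, url))).text());
      const proc = new XSLTProcessor();
      proc.importStylesheet(xslDoc);
      document.body.replaceChildren(proc.transformToFragment(xmlDoc, document));
    }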
It's incorrect to say there are no removals, as we do not have <MARQUEE> anymore.
Yes we do.
https://caniuse.com/mdn-html_elements_marquee
The marquee element is deprecated but is supported by all major Web browsers.
XSLT isn't a tool for surveillance capitalism, nor for glossy product brochure presentation, nor for captive passive doomscrolling video experiences, so it must be actively excised from the global knowledge network hypermedia standard.
This is a fair response to the point:
https://github.com/whatwg/html/issues/11582#issuecomment-321...
Frankly I think this is a tempest in a teapot, and the primary reason people are complaining is because Google is sponsoring the idea, not because it's going to harm users in some tangible way.
How is it a fair response? Isn't it desirable to have a format that can be used by machines and humans at the same time?
XSLT accomplishes what json-ld and semantic html never managed.
The response says that for every provided XML document URL, there is an existing equivalent HTML document URL that accomplishes the same outcome for the end user and that is strongly emphasized by the site's UX. The fact that humans rarely access the XML documents via common browsers is also evident from the available usage metrics.
The question of abstract efficiency via reuse is academic. If the XML documents were the ones that users accessed most of the time, or were the only documents available, that might change the analysis. But that isn't the case.
Having a reasonably complete and well-performing set of XML tools available in the browser is really nice.
The counterpoint seems to be "well it has JavaScript so we don't need actual features since you can theoretically write anything in JS," but one of the nicest things about it was having that toolkit available to JavaScript. You can spin up a DOM for arbitrary XML and apply a bunch of natively-compiled, fast tools to it, coordinating and supplementing that with JS. Then present that to the UI as HTML. It's very nice.
I'm on team "the right move is to upgrade these, not remove them".
If they want to remove things to simplify the spec I've got a list I'd suggest, but it's mostly stuff added in the last decade.
I'm sympathetic to the argument that the feature is an elegant one. But at the same time, maintaining it is not cost-free; every line of code written, imported, or linked poses an ongoing "tax" and/or risk. This is a garden-variety product management decision: is it worth the ongoing maintenance and risk? As a responsible PM, you have to weigh the costs against the benefits, and I think they've presented sufficient evidence users have voted "no" on the benefits.
As I said in another thread, not everything of value or benefit gains meaningful adoption. A rational actor doesn't preserve everything of value for its own sake. At best, we call people who do "museum curators," and at worst, "hoarders."
The people stewarding the Web platform have neglected a lot of nice things, among them this: XSLT 3.0, two major versions ahead of the 1.0 that's in browsers, is eight years old.
That it still has any use on notable sites at all seems like a decent cue to at least try not neglecting it before declaring nobody wants to use it.
... however, it's mostly useful for doing things that owners of walled Web platforms have no interest in, like working with interoperable protocols & formats. They've been killing those through active moves and neglect for 15 years, so why would they do something that makes that easier & nicer?
By that same argument, we could deprecate PDF support, as almost every PDF content on the web has an equivalent and emphasized HTML version.
Or we could deprecate support for .txt and .xml in the browser itself, for the same reasons.
We obviously don't want that. It's valuable to have support for multiple formats, and it's especially valuable to have a single file that can be used by both machines and humans.
Metrics show that users frequently access text and PDF files in their browsers. They almost never access XML files, despite the purported benefit you mention.
Not everything that has a benefit is accepted by customers. Some stuff just doesn't sell.
Have you considered that the only users who access XML files might be a) at work (where you won't get telemetry) or b) techies, who also disable metrics?
You can't just use the metrics of "what do people who don't know how to disable telemetry use?" to make decisions for everyone.
Where's your proof that it's in sufficient use to justify maintaining the functionality? If you have such data, take it to the decision makers. Nobody's going to make decisions based on hypotheticals in the presence of data.
As more than one VP has said at my company: "In God I trust. Everyone else needs data."
https://www.congress.gov/bill/117th-congress/house-bill/3617 isn't really a comparable link to https://www.congress.gov/117/bills/hr3617/BILLS-117hr3617ih....
But I think that was maybe just a mistake? Maybe he meant https://www.congress.gov/bill/117th-congress/house-bill/3617...
Google isn't even "sponsoring" the idea. Mason's just the one who opened these issues and PRs after the meetings. Lots of platform maintainers from various vendors are interested in this. They all have rotting XSLT implementations with security holes.
I've got to admire Mason's chill in actually giving concrete replies to what are mostly ad-hominem attacks around this issue.
I dunno, my snarkometer is going off.
> Thanks for raising these 6 examples of sites publishing XML files. We can add it to the existing list of 357 sites
This feels like an email I'd get from HR. The point of the topic was the ease in finding those 6, not to discuss those 6 specifically. Maybe being direct pisses people off more than this corporate-styled language, though.
I think he's responding to the fact that someone felt the need to create a new issue instead of attaching the information to the existing issue. It makes managing the work more difficult.
The very first issue was locked and limited to collaborators.
It's also worth noting that mfreed7 of Chromium (the person who initiated the proposal) is not the one who came up with the 357 number; that was zcorpan of Mozilla ( https://github.com/whatwg/html/issues/11523#issuecomment-320... ), so mfreed7 is talking about the list they got from zcorpan.
You’re saying a nice guy, who is not obligated to engage and chooses to, is not allowed to mock an intellectually dishonest issue?
No kidding, the entire thread seems to be "you don't know what you're doing and you haven't done any research," in the face of responses saying "correct, this is the beginning of the process, where we do the research."
Then some random tag-along guy, presumably arriving after this hit HN, chimes in with "I like that it doesn't have ads," which has literally nothing to do with the issue, lol.
It's great for admin areas.
I'm totally sure the discussion on the github issue is nothing but civil and not at all filled with righteous keyboard warriors decrying the evils of mustache-twiddling corporate villains out to destroy the web...
> righteous keyboard warriors
Worst comic ever!
Honestly, I hope they remove it. And I hope they remove more pointless/useless standards kept around for backward compatibility; it's the antithesis of progress. We cannot keep loading the browser with more and more standards forever; either we stop development or we begin to remove things.
"Is it possible that software is not like anything else, that it is meant to be discarded: that the whole point is to always see it as a soap bubble?" - Alan Perlis
Governments don't seem to be paying particular attention to which web sites they're killing, I don't see why we should provide them the courtesy.
How did this become a case of us vs them? Government websites are meant for our benefit.