59 comments

  • gddbvxmm 4 days ago

    This week, Google Cloud paid out their highest bug bounty yet ($150k) for a vulnerability that could have been prevented with ASI [0]. Good to see that Google is pushing forward with ASI despite the performance impact, because it would benefit the security of every hosting company that uses Linux/KVM, not just big tech's cloud providers.

    [0] https://cyberscoop.com/cloud-security-l1tf-reloaded-public-c...

  • WhyNotHugo 4 days ago

    When enabling this new protection, could we potentially disable other mitigation techniques that become redundant, and thereby regain some performance?

    • bjackman 3 days ago

      Yes! The numbers in the posting don't account for this.

      Before doing this, though, you need to be sure that ASI actually protects all the memory you care about. The version that currently exists protects all user memory, but if the kernel copies something into its own memory, that copy is unprotected. So that needs to be addressed first (or some users might tolerate the risk).
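
      As a concrete starting point (my illustration, not from the posting): you can dump the kernel's own view of which mitigations are currently active before deciding what's redundant. A minimal C sketch, assuming the standard Linux sysfs layout:

          /* Print each known CPU vulnerability and the kernel's reported
           * mitigation status (e.g. "Mitigation: PTI", "Not affected"). */
          #include <dirent.h>
          #include <stdio.h>

          int main(void)
          {
              const char *dir = "/sys/devices/system/cpu/vulnerabilities";
              DIR *d = opendir(dir);
              struct dirent *e;
              char path[512], line[256];

              if (!d) { perror(dir); return 1; }
              while ((e = readdir(d)) != NULL) {
                  if (e->d_name[0] == '.')
                      continue;
                  snprintf(path, sizeof(path), "%s/%s", dir, e->d_name);
                  FILE *f = fopen(path, "r");
                  if (f && fgets(line, sizeof(line), f))
                      printf("%-24s %s", e->d_name, line);  /* status has \n */
                  if (f)
                      fclose(f);
              }
              closedir(d);
              return 0;
          }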

  • Eridrus 4 days ago

    My understanding was that many of the fixes for speculative-execution issues themselves led to performance degradation. Does anyone know the latest on that and how this compares?

    Are these performance-hit numbers inclusive of turning off the other mitigations?

    • snvzz 4 days ago

      There's essentially only one way[0] to fix timing side channels.

      The RISC-V ISA has an effort to standardize a timing fence[1][2] to take care of this once and for all (see the sketch after the links).

      0. https://tomchothia.gitlab.io/Papers/EuroSys19.pdf

      1. https://lf-riscv.atlassian.net/wiki/spaces/TFXX/pages/538379...

      2. https://sel4.org/Summit/2024/slides/hardware-support.pdf
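
      A rough sketch of how a kernel might use such a fence on its context-switch path. Everything here is an assumption drawn from the linked proposal: the "fence.t" mnemonic is a draft, has no ratified encoding, and current toolchains will not assemble it:

          /* Hypothetical temporal fence (draft RISC-V extension; mnemonic
           * and semantics taken from the proposal, not a shipped ISA). */
          static inline void temporal_fence(void)
          {
              /* Flushes/pads all microarchitectural timing state so the
               * incoming thread cannot observe the outgoing thread. */
              asm volatile("fence.t" ::: "memory");
          }

          void context_switch_tail(void)
          {
              /* Issue before handing the hart to the next thread. */
              temporal_fence();
          }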

      • eigenform 3 days ago

        I'm all for giving programmers a way to flush state, and maybe this is just a matter of taste, but I wouldn't characterize this as "taking care of the problem once and for all" unless there's a [magic?] way to recover from the performance trade-off that you'd see in "normal" operating systems (i.e. not seL4).

        It doesn't change the fact that when you implement a RISC-V core, you're going to have to partition/tag/track resources for the threads you want separated. Or, if you're keeping shared state around, you're going to be doing things like "flush all caches and predictors on every context switch" (can't tell if that's more or less painful).

        Anyway, that all still seems expensive and hard regardless of whether or not the ISA exposes it to you :(
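
        For a sense of what "flush all predictors on every context switch" looks like in practice, here's a sketch using the x86 analogue rather than RISC-V: an IBPB (Indirect Branch Prediction Barrier) issued via MSR write when switching address spaces. The MSR constants match the Intel SDM; the surrounding function is illustrative and would have to run in ring 0:

            #define MSR_IA32_PRED_CMD  0x00000049U
            #define PRED_CMD_IBPB      (1ULL << 0)

            /* wrmsr is privileged: kernel context only. */
            static inline void wrmsr64(unsigned int msr, unsigned long long val)
            {
                asm volatile("wrmsr" : : "c"(msr),
                             "a"((unsigned int)val),
                             "d"((unsigned int)(val >> 32)));
            }

            static void flush_indirect_predictors(void)
            {
                /* Invalidates indirect-branch predictor state so the
                 * incoming task can't observe or steer the outgoing
                 * task's predictions. */
                wrmsr64(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
            }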

        • snvzz 3 days ago

          The research they've done (multiple papers; they have published and presented more than I linked) proves that hardware help is necessary.

          That is, it's not about reducing cost, but about being able to kill timing side channels at all.

    • bjackman 3 days ago

      These numbers are all vs. a completely unmitigated system. And this is an extra-expensive version of ASI that does more work than is really needed on this HW, to ensure we can measure the impact of the recent changes. (Details of this are in the posting.)

      So I should probably post something more realistic and compare against the old mitigations. This will make ASI look a LOT better. But I'm being very careful to avoid looking like a salesman here. It's better that I risk making things look worse than they are than risk having people worry I'm hiding issues.

      • Eridrus 3 days ago

        Not sure if you wrote this article, and I appreciate an engineering desire to undersell. But if this is faster than what people actually run in practice, the takeaway is different than if it is slower. So I think you're doing folks a disservice by not comparing to a realistic baseline in addition to an unmitigated one.

    • 0cf8612b2e1e 4 days ago

      Furthermore, if the OS-level mitigations are in place, would the hardware ones be disabled?

  • api 4 days ago

    That's still a really massive hit. It would only make sense in very high-security environments.

    Honestly, running system services in VMs, or using an OS like Qubes, would be cheaper and just as good. The VM hit is much smaller: less than 1% in some cases on newer hardware.

    • gpapilion 4 days ago

      It makes sense in any environment where two workloads from two parties share compute, i.e. public clouds.

      The protection here is to ensure the VMs are isolated. Without it, there is the potential to leak data across guests via speculative execution.
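
      To make "leak via speculative execution" concrete: the measurement primitive behind many of these cross-guest attacks is just a timed load. A minimal, self-contained flush+reload probe (x86-64, GCC/Clang intrinsics; illustrative only, not an exploit):

          #include <stdint.h>
          #include <stdio.h>
          #include <x86intrin.h>

          static uint64_t time_load(volatile uint8_t *p)
          {
              unsigned aux;
              uint64_t t0 = __rdtscp(&aux);
              (void)*p;                     /* the probed load */
              uint64_t t1 = __rdtscp(&aux);
              return t1 - t0;
          }

          int main(void)
          {
              static uint8_t probe[64];

              _mm_clflush(probe);
              _mm_mfence();
              uint64_t miss = time_load(probe);  /* cold: memory latency */
              uint64_t hit  = time_load(probe);  /* warm: cache latency */

              /* The miss/hit gap is the signal an attacker decodes. */
              printf("miss=%llu cycles, hit=%llu cycles\n",
                     (unsigned long long)miss, (unsigned long long)hit);
              return 0;
          }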

    • eptcyka 4 days ago

      VMs suffer from memory-use overhead. It would be cool if the guest kernel cooperated with the host on that.

      • jeroenhd 4 days ago

        There's KSM, which should help: https://pve.proxmox.com/wiki/Kernel_Samepage_Merging_(KSM)

        It probably works best when running VMs with the same kernel and software versions.
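
        For context, KSM only scans memory a process has explicitly opted in, which hypervisors like QEMU do for guest RAM. A minimal, runnable sketch of the opt-in (the scanner itself is toggled via /sys/kernel/mm/ksm/run):

            #define _GNU_SOURCE           /* for MADV_MERGEABLE */
            #include <stdio.h>
            #include <string.h>
            #include <sys/mman.h>

            int main(void)
            {
                size_t len = 16 * 4096;
                char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (buf == MAP_FAILED) { perror("mmap"); return 1; }

                /* Identical page contents are what ksmd deduplicates. */
                memset(buf, 0x5a, len);

                if (madvise(buf, len, MADV_MERGEABLE) != 0)
                    perror("madvise");    /* kernel may lack CONFIG_KSM */
                else
                    printf("marked %zu bytes as mergeable\n", len);
                return 0;
            }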

        • infogulch 4 days ago

          But that just seems to reintroduce the same problem:

          > However, while KSM can reduce memory usage, it also comes with some security risks, as it can expose VMs to side-channel attacks. ...

      • traverseda 4 days ago

        It will! For Linux hosts and Linux guests, if you use virtio and memory ballooning.

        • shortrounddev2 4 days ago

          This was an issue for me a few years ago running Docker on macOS. macOS required you to allocate memory to Docker ahead of time, whereas Windows/Hyper-V was able to use memory ballooning in WSL2.

      • api 4 days ago

        It's possible to address this to some extent with memory-ballooning drivers, etc.

    • russdill 4 days ago

      Look at it this way: any time a new side-channel attack comes out, the situation changes. Having this as a mitigation that can be turned on is helpful.

    • riedel 4 days ago

      From reading the article, that seems to be exactly the feeling of the people involved as well. The question is whether they are on track towards, e.g., the 1% eventually.

    • bjackman 3 days ago

      The next steps should make this much faster. Google's internal version generally gives us a sub-1% hit on everything we measure.

      If the community is up for merging this (which is a genuine question - the complexity hit is significant), I expect it to become the default everywhere, and for most people it should be a performance win vs. the current default.

      But, yes. Not there right now, which is annoying. I'm hoping the community is willing to start merging this anyway, trusting that we can get it really fast later. But they might say "no, we need a full prototype that's super fast right now", which would be fair.

  • kookamamie 4 days ago

    Windows suffers from similar effects when Virtualization-Based Security is active.

    • Avamander 4 days ago

      At the same time, VBS is one of the biggest steps forward in terms of Windows kernel security. It's actually considered a proper security boundary.

      • munchlax 3 days ago

        Funny that they called it VBS.

        That's not something I'd easily associate with a step forward in security.

    • transpute 4 days ago

      Hypervisor overhead should be low: https://www.howtogeek.com/does-windows-11-vbs-slow-pc-games/

      What kind of workloads have noticeably lower performance with VBS?

      • jeroenhd 4 days ago

        It was measured to have a performance impact of up to 10%, with even higher numbers for the nth-percentile lows: https://www.tomshardware.com/news/windows-vbs-harms-performa...

        Overhead should be minimal, but something is preventing it from working as well as it theoretically should. AFAIK Microsoft has been improving VBS, but I don't think it's completely fixed yet.

        BF6 requiring VBS (or at least "VBS-capable" systems) will probably force games to find ways to deal with VBS as best they can, but for older titles it's not always a bad idea to turn off VBS to get a less stuttery experience.

      • kookamamie 4 days ago

        We're working on HPC / graphics / computer-vision software and noticed a particularly nasty issue with VBS enabled just last week. Although, it has to be mentioned, that was on Win10 Pro.

        • kachapopopow 4 days ago

          This most likely comes from the IOMMU - disable it.

          • jychang 3 days ago

            That'd break a lot of GPU setups.

            • kachapopopow 3 days ago

              Only if you want to virtualize it or run VMs. For VBS, disabling the IOMMU simply turns off hardware PCIe memory-space isolation. (With the IOMMU on, each PCIe device gets an isolated memory buffer.)

    • lenerdenator 4 days ago

      Anything that runs on an ISA that has certain features has these effects, IIRC.

  • Traubenfuchs 4 days ago

    Sometimes something in me starts to wonder whether this regularly occurring slowing of chips through exploit mitigations is deliberate.

    All of big tech wins: CPUs get slower, and we need more vCPUs and more memory to serve our JavaScript slop to end customers. The hardware companies sell more hardware, the cloud providers sell more cloud.

    • gpapilion 4 days ago

      I think it's more pragmatic. We can eliminate hyperthreading to solve this, or increase memory safety at the cost of performance. One is a 50% hit in terms of vCPUs; the other is now sub-50%.
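
      For reference, "eliminating hyperthreading" is a runtime toggle on Linux. A minimal sketch using the standard sysfs control file (present since roughly kernel 4.19; needs root):

          #include <stdio.h>

          /* Turn SMT (hyperthreading) off at runtime via sysfs. */
          int main(void)
          {
              FILE *f = fopen("/sys/devices/system/cpu/smt/control", "w");
              if (!f) { perror("smt/control"); return 1; }
              /* Accepted values include "on", "off" and "forceoff". */
              if (fputs("off", f) == EOF || fclose(f) == EOF) {
                  perror("write");
                  return 1;
              }
              puts("SMT disabled");
              return 0;
          }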

      • Traubenfuchs 4 days ago

        They also need some phony justifications though.

        Can't just turn off hyperthreading.

    • Avamander 4 days ago

      These types of mitigations have the biggest benefit when resources are shared. Do you really think cloud vendors want to lose performance to CPU or other mitigations when they could literally sell those resources to customers instead?

      • bzzzt 4 days ago

        They don't lose anything, since they sell the same instance, which just performs worse with the mitigations on. Customers end up paying because they need more instances.

        • nebezb 4 days ago

          Every CPU that isn’t pegged at 100% all the time is leaving money on the table. Some physical CPU capacity is reserved, some virtual CPU capacity is reserved, the rest goes to ultra-high-margin elastic compute that isn’t sold to you as a physical or virtual CPU. They sell it to you as “serverless,” it prints cash, and it absolutely depends on juicing every % of performance out of the chips.

          edit: “burstable” CPUs are a fourth category relying on overselling the same virtual CPU while intelligently distributing workloads to keep them at 100%.

        • robertlagrant 4 days ago

          I imagine they're unable to squeeze as many instances onto their giant computers, though.

          • tracker1 4 days ago

            There are 3-4 year old servers with slower/fewer cores still operating fine, alongside newer servers doing the same. The generational improvements seem to outweigh a lot of the mitigations in question, not to mention the higher levels of parallel work.

    • depingus 4 days ago

      Sometimes it's fun to engage in a little conspiratorial thinking. My 2 cents: that TPM 2.0 requirement on Windows 11 is about to create a whole ton of e-waste in October (Windows 10 EOL).

      • e2le 4 days ago

        I'm not so sure. Many people still ran Windows XP/7 long after the EOL date. Unless Chrome, Steam, etc. drop support for Windows 10, I don't think many people will care.

        • depingus 4 days ago

          The home PC market is insignificant. The real volume is in corporate and government systems that will never run EOL Windows.

          Side Note: Folks, don't run EOL operating systems at home. Upgrade to Linux or BSD, and your hardware can live on safely.

          • tsimionescu 3 days ago

            There are many, many Windows XP systems still running today in corporate and probably government environments too. Even more Win 7 ones. There will be special contracts, workarounds, waivers, etc. - all to avoid changing the OS.

          • Avamander 4 days ago

            > Folks, don't run EOL operating systems at home.

            Especially not EOL Windows.

      • AlienRobot 4 days ago

        Hey, it's not nice to call Linux users "e-waste."

    • bzzzt 4 days ago

      Why would big tech do this when customers bring it upon themselves by building Javascript slop?

      • worthless-trash 4 days ago

        Big tech isn't running their stack on JS.

        • bzzzt 4 days ago

          Maybe, but their cloud customers certainly are.

          • tatersolid 3 days ago

            All the large cloud-hosted infra I've encountered in my career was written in JIT- or AOT-compiled languages (C#, Java, Golang, etc.). This is basically necessary at any sort of scale.

          • surajrmal 3 days ago

            Cloud usage is dominated by larger companies with much older codebases that predate modern JS backend development.

          • worthless-trash 3 days ago

            As long as the customer pays, why wouldn't they promote an option which makes them more profit?