Both approaches revealed the same conclusion: Memory Integrity Enforcement vastly reduces the exploitation strategies available to attackers. Though memory corruption bugs are usually interchangeable, MIE cut off so many exploit steps at a fundamental level that it was not possible to restore the chains by swapping in new bugs. Even with substantial effort, we could not rebuild any of these chains to work around MIE. The few memory corruption effects that remained are unreliable and don’t give attackers sufficient momentum to successfully exploit these bugs.
This is great, and a bit of a buried lede. Some of the economics of mercenary spyware depend on chains with interchangeable parts, and countermeasures targeting that property directly are interesting.
In terms of Apple Kremlinology, should this be seen as a step towards full capability-based memory safety like CHERI ( https://en.wikipedia.org/wiki/Capability_Hardware_Enhanced_R... ), or more as Apple signaling that it thinks it can get by without something like CHERI?
That's Apple and here is Google (who have been at memory safety since the early Chrome/Android days):
Google folks were responsible for pushing on Hardware MTE ... It originally came from the folks who also did work on ASAN, syzkaller, etc ... with the help and support of folks in Android ... ARM/etc as well.
I was the director for the teams that created/pushed on it ... So I'm very familiar with the tradeoffs.
...
Put another way - the goal was to make it possible to have the equivalent of ASAN flipped on and off when you want it.
Keeping it on all the time as a security mitigation was a secondary possibility, and has issues besides memory overhead.
For example, you will suddenly cause tons of user-visible crashes. But not even consistently. You will crash on phones with MTE, but not without it (which is most of them).
This is probably not the experience you want for a user.
For a developer, you would now have to force everyone to test on MTE-enabled phones when there are only ~1mn of them. This is not likely to make developers happy.
Are there security exploits it will mitigate? Yes, they will crash instead of be exploitable. Are there harmless bugs it will catch? Yes.
...
As an aside - It's also not obvious it's the best choice for run-time mitigation.
> We believe memory safety protections need to be strictly synchronous, on by default, and working continuously.
FWIW, I presume this is "from experience"--rather than, from first principles, which is how it comes off--as this is NOT how their early kernel memory protections worked ;P. In 2015, with iOS 9, Apple released Kernel Patch Protection (KPP), which would verify that the kernel hadn't been modified asynchronously--and not even all that often, as I presume it was an expensive check--and panic if it detected corruption.
https://raw.githubusercontent.com/jakeajames/rootlessJB/mast...
> First let’s consider our worst enemy since iOS 9: KPP (Kernel Patch Protection). KPP keeps checking the kernel for changes every few minutes, when the device isn’t busy.
> That “check every now and then” thing doesn’t sound too good for a security measure, and in fact a full bypass was released by Luca Todesco and it involves a design flaw. KPP does not prevent kernel patching; it just keeps checking for it and if one is caught, panics the kernel. However, since we can still patch, that opens up an opportunity for race conditions. If we do things fast enough and then revert, KPP won’t know anything ;)
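The flaw the quote describes can be made concrete with a toy model (illustrative Python, not Apple's implementation; all names here are made up): a checker that only samples the kernel periodically never observes a patch that is applied and reverted between checks.

```python
# Toy model of the KPP race: a periodic integrity check misses a
# patch that is applied and reverted between check intervals.
class ToyKernel:
    def __init__(self):
        self.text = "original"    # live kernel text
        self.golden = "original"  # known-good copy

    def integrity_check(self):
        """KPP-style async check: only sees corruption visible *right now*."""
        return self.text == self.golden  # True == looks healthy

def attack(kernel):
    kernel.text = "patched"            # do the malicious work...
    did_work = kernel.text == "patched"
    kernel.text = "original"           # ...then revert before the next check
    return did_work

k = ToyKernel()
assert k.integrity_check()  # healthy before the attack
assert attack(k)            # attacker patched and used the kernel
assert k.integrity_check()  # the periodic check still sees nothing wrong
```

A synchronous design, by contrast, would trap on the patching write itself, leaving no window to revert.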
I have some inside knowledge here. KPP was released around the time KTRR on A11 was implemented to have some small amount of parity on <A11 SoCs. I vaguely remember the edict came down from high that such a parity should exist, and it was implemented in the best way they could within a certain time constraint. They never did that again.
> FWIW, I presume this is "from experience"--rather than, from first principles, which is how it comes off
I interpreted that as what they came up with when first looking at/starting to implement MTE, not their plan since $longTimeAgo.
Apple has certainly gotten better about security, and I suspect things like what you listed are a big part of why. They were clearly forced to learn a lot by jailbreakers.
> There has never been a successful, widespread malware attack against iPhone. The only system-level iOS attacks we observe in the wild come from mercenary spyware ... to target a very small number of specific individuals and their devices. Although the vast majority of users will never be targeted in this way..
Correct me if I'm wrong, but the spyware that has been developed certainly could be applied at scale at the push of a button with basic modification. They just have chosen not to at this time. I feel like this paragraph is drawing a bigger distinction than actually exists.
Neither Apple nor Google truly knows how widespread attacks on their products have been, despite portraying it as if they have perfect insight into it. They're claiming to know something they cannot. GrapheneOS has published leaked data from exploit developers showing they're much more successful at exploiting devices and keeping up with updates than most people believe. We have access to more than what we've published, since we don't publish it without multiple independent sources to avoid leaks being identified.
These tools are widely available, and it cannot be generally known when they're used whether it's data extraction or remote exploitation. Catching exploits in the wild is the exception to the rule; otherwise exploit development companies would have a much harder job, needing to keep making new exploits after they're heavily used. They wouldn't value a single exploit chain nearly as much as they do if it stopped working after it was used 50k times. Law enforcement around the world has access to tools like Cellebrite Premium, which are used against many people crossing borders, at protests, etc. That is usage at scale. There's far less insight into remote exploits, which don't have to be distributed broadly to be broadly used.
I wonder why XcodeGhost doesn't count as a successful, widespread malware attack against iPhone. WeChat was infected. It was before iOS had pasteboard protections. ( https://en.wikipedia.org/wiki/XcodeGhost )
It's mainly there as a swipe at Android. I don't think it really relates to the rest of the article (and, with no insight but with my conspiracy theory hat on, was included to peddle the merits of their App Store model).
Absolutely. It is awful lawyer twinkie talk. But the fact that we get such a detailed article/press release on MIE, Apple's new tech, speaks to its validity and confidence, which is plainly great for all of us.
> In 2018, we were the first in the industry to deploy Pointer Authentication Codes (PAC) in the A12 Bionic chip, to protect code flow integrity in the presence of memory corruption. The strong success of this defensive mechanism in increasing exploitation complexity left no doubt that the deep integration of software and hardware security would be key to addressing some of our greatest security challenges.
There have been multiple full-chain attacks since the introduction of PAC. It hasn’t been a meaningful attack deterrent because attackers keep finding PAC bypasses. This should give you pause as to how secure EMTE actually is.
To be fair, they didn't claim it to be a meaningful attack deterrent. They said "success...in increasing exploitation complexity".
Sure, the whole sentence is a bit of a weird mess. Paraphrased: it made exploits more complex, so we concluded that we needed a combined SW/HW approach. What I read into that is that they're admitting PAC didn't work, so they needed to come up with a new approach and part of that approach was to accept that they couldn't do it using either SW or HW alone.
Then again... I don't know much about PAC, but to me it seems like it's a HW feature that requires SW changes to make use of it, so it's kind of HW+SW already. But that's a pointless quibble; EMTE employs a lot more coordination and covers a lot more surface, iiuc.
It’s my understanding that this won’t protect you in the case where the attacker has a chance to try multiple times.
The approach would be something like: go out of bounds far enough to skip the directly adjacent object, or do a use-after-free with a lot of grooming, so that you get a chance of getting a matching tag. The probability of getting a matching tag is 1/16.
But this post doesn’t provide enough details for me to be super confident about what I’m saying. Time will tell! If this is successful, then the remaining exploit chains will have to rely on logic bugs, which would be super painful for the bad guys.
Even with Android MTE, one of the workarounds was probabilistic attacks on the small tag size, which imply multiple tries. One of the big distinctions here is uniform synchronous enforcement, so writes trap immediately and not on the next context switch.
The other 15/16 attempts would crash though, and a bug that unstable is not practically usable in production, both because it would be obvious to the user / send diagnostics upstream and because when you stack a few of those 15/16s together it's actually going to take quite a while to get lucky.
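The arithmetic behind "stacking a few of those 15/16s" is simple enough to sanity-check (Python; pure probability, no assumptions about the real allocator): each 4-bit tag guess succeeds with probability 1/16, independent guesses multiply, and the expected number of attempts, each failure a crash, is the reciprocal.

```python
# Chance that a chain of k independent 4-bit tag guesses all succeed,
# and the expected number of attempts (each failed attempt is a crash).
def chain_success_prob(k, tag_bits=4):
    return (1 / 2**tag_bits) ** k

def expected_attempts(k, tag_bits=4):
    # geometric distribution: expected trials until first success = 1/p
    return 1 / chain_success_prob(k, tag_bits)

assert chain_success_prob(1) == 1/16
assert expected_attempts(2) == 256   # two stacked guesses: ~256 tries
assert expected_attempts(3) == 4096  # three: ~4096 crashes on average
```

So even a short chain of probabilistic steps means thousands of user-visible crashes before one lucky run, which is exactly the noise a stealthy attacker cannot afford.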
With EU chat control, the state will be on my device, having access to everything they want, decide what I can and cannot do. Once Google forces WEI on us, the whole web will get locked down.
And secure boot and now MIE will make sure we can never take back our freedom.
> ...attackers must not be able to predict tag values that the system will choose. We address this issue by frequently re-seeding the underlying pseudo-random generator used to select new tags.
This point could use more explanation. The fundamental problem here is the low entropy of the tags (only 4 bits). An attacker who randomly guesses the tags has 1/16 chance of success. That is not fixed by reseeding the PRNG. So I am not sure what they mean.
An attacker can guess, and has a 1/16 probability of guessing right, but they only get one chance: if the guess is wrong, the process terminates (if it's a user process) or the kernel panics (if it's in the kernel), so at the next opportunity there will be a different tag to guess.
Four bits provide too few possibilities. Since memory allocations happen millions of times per minute, the chance of collisions grows very quickly, even with periodic reseeding.
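The scale of that concern can be quantified with basic probability (Python sketch; this ignores countermeasures such as excluding neighbors' tags, which real allocators can apply): the chance that at least one of n candidate allocations happens to share a given 4-bit tag climbs quickly with n.

```python
# With 4-bit tags (16 values), probability that at least one of n
# other allocations shares a given allocation's tag, assuming
# independent uniform tag assignment (an idealization).
def p_some_match(n, tags=16):
    return 1 - ((tags - 1) / tags) ** n

assert abs(p_some_match(1) - 1/16) < 1e-12
assert p_some_match(16) > 0.5    # a few dozen candidates flips the odds
assert p_some_match(100) > 0.99  # with many candidates a match is near-certain
```

The saving grace, per the comment above, is that the attacker cannot survey those candidates: probing a wrong one crashes the target.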
"There has never been a successful, widespread malware attack against iPhone. ..."
b your iphones BEEN pwned for YEARS and it was done in minutes LOL. gtfoh
with help from ChatGPT:
Apple claims “never been a successful iPhone malware attack” Reality: WireLurker, Masque, XcodeGhost, YiSpecter, jailbreak 0-days, Pegasus/Predator/Reign 0-clicks.
>Google took a great first step last year when they offered MTE to those who opt in to their program for at-risk users. But even for users who turn it on, the effectiveness of MTE on Android is limited by the lack of deep integration with the operating system that distinguishes Memory Integrity Enforcement and its use of EMTE on Apple silicon.
>With the introduction of the iPhone 17 lineup and iPhone Air, we’re excited to deliver Memory Integrity Enforcement: the industry’s first ever, comprehensive, always-on memory-safety protection covering key attack surfaces — including the kernel and over 70 userland processes — built on the Enhanced Memory Tagging Extension (EMTE) and supported by secure typed allocators and tag confidentiality protections.
Of course it is a little disappointing not to see GrapheneOS's efforts in implementing [1] and raising awareness [2] recognised by others but it is very encouraging to see Apple making a serious effort on this. Hopefully it spurs Google on to do the same in Pixel OS. It should also inspire confidence that GrapheneOS are generally among the leaders in creating a system that defends the device owner against unknown threats.
>The presence of EMTE leaves Spectre V1 as one of the last avenues available to attackers to help guide their attacks, so we designed a completely novel mitigation that limits the effective reach of Spectre V1 leaks — at virtually zero CPU cost — and forces attackers to contend with type segregation. This mitigation makes it impractical for attackers to use Spectre V1, as they would typically need 25 or more V1 sequences to reach more than 95 percent exploitability rate — unless one of these sequences is related to the bug being exploited, following similar reasoning as our kalloc_type analysis.
Nope. I don't know why just checking the tags during speculation wouldn't stop Spectre V1, at least for cross-type accesses? I mean, it's not that simple because your program won't crash if speculation has mismatched tags. Which means you can try as many times as you want until you get lucky. But that's certainly not a "completely novel mitigation", so I'm sure I'm missing something obvious.
Perhaps the real problem is that you can use speculation to scan large amounts of memory for matching tags, some of which would be different types, so you need something to handle that?
> Arm published the Memory Tagging Extension (MTE) specification in 2019 as a tool for hardware to help find memory corruption bugs. MTE is, at its core, a memory tagging and tag-checking system, where every memory allocation is tagged with a secret; the hardware guarantees that later requests to access memory are granted only if the request contains the correct secret. If the secrets don’t match, the app crashes, and the event is logged. This allows developers to identify memory corruption bugs immediately as they occur.
This is great. Ever since Oracle introduced SPARC ADI into Solaris and their Linux SPARC workloads, I keep looking forward to "C Machines" becoming a common way to fix C language issues.
Unfortunately, as in many other cases, Intel botched their MPX design, so only evolutions of MTE and CHERI are around.
The problem with PowerPC AS tagging was that it relied entirely on the trap instruction. If you could control execution at all, you could skip the trap instruction and it did nothing. This implementation, by my reading, essentially adds a synchronous trap instruction after every single load and store, which builds a real security boundary (even compared to Android MTE, where reads would trap but writes were only checked at the next context switch).
The big difference with this seems to be that it is an actual security mechanism to block "invalid" accesses, whereas the tagged memory extensions only provided pointer metadata and it was up to the OS to enforce invariants.
> Extensions provide no security. [...] The tagged memory extensions don't stop you from doing anything.
SPARC ADI was a predecessor to ARM MTE. ARM MTE has been available and used in production for several years now. ADI is also 4 bit but with 64 byte granularity rather than 16 byte.
Substantially less complex and therefore likely to be substantially easier to actually use.
CHERI-Morello uses 129-bit capability objects to tag operations, has a parallel capability stack, capability pointers, and requires microarchitectural support for a tag storage memory. Basically with CHERI-Morello, your memory operations also need to provide a pointer to a capability object stored in the capability store. Everything that touches memory points to your capability, which tells the processor _what_ you can do with memory and the bounds of the memory you can touch. The capability store is literally a separate bus and memory that isn't accessible by programs, so there are no secrets: even if you leak the pointer to a capability, it doesn't matter, because it's not in a place that "user code" can ever touch. This is fine in theory, but it's incredibly expensive in practice.
MIE is a much simpler notion that seems to use N-bit (maybe 4?) tags to protect heap allocations, and uses the SPTM to protect tag space from kernel compromise. If it's exactly as in the article: heap allocations get a tag. Any load/store operation to the heap needs to provide the tag that was used for their allocation in the pointer. The tag store used by the kernel allocator is protected by SPTM so you can't just dump the tags.
If you combine MIE, SPTM, and PAC, you get close-ish to CHERI, but with independent building blocks. It's less robust, but also a less granular system with less overhead.
MIE is both probabilistic (N-bits of entropy) and protected by a slightly weaker hardware protection (SPTM, which to my understanding is a bus firewall, vs. a separate bus). It also only protects heap allocations, although existing mitigations protect the stack and execution flow.
Going off of the VERY limited information in the post, my naive read is that the biggest vulnerability here will be tag collision. If you try enough times with enough heap spray, or can groom the heap repeatedly, you can probably collide a tag with however many bits of entropy are present in the system. But, because the model is synchronous, you will bus fault every time before that, unlike MTE, so you'll get caught, which is a big problem for nation-state attackers.
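The tag-in-pointer scheme described in this comment can be sketched as a toy model (Python; hypothetical names, and in real EMTE the tag lives in the pointer's top byte and the check happens in hardware): allocations get a random 4-bit tag, the pointer carries it, and any mismatched access traps synchronously.

```python
import secrets

TAG_BITS = 4

class TaggedHeap:
    """Toy model of a tagging allocator: each allocation gets a random
    4-bit tag, every access must present the matching tag, and a
    mismatch traps synchronously (modeled here as an exception)."""
    def __init__(self):
        self.tags = {}   # address -> current tag
        self.data = {}   # address -> stored value

    def alloc(self, addr):
        tag = secrets.randbelow(2**TAG_BITS)
        self.tags[addr] = tag
        self.data[addr] = 0
        return (addr, tag)  # the "pointer" carries the tag

    def free(self, addr):
        # retag on free so stale (use-after-free) pointers always mismatch
        self.tags[addr] = (self.tags[addr] + 1) % 2**TAG_BITS

    def load(self, ptr):
        addr, tag = ptr
        if self.tags[addr] != tag:
            raise MemoryError("tag check failed")  # synchronous trap
        return self.data[addr]

heap = TaggedHeap()
p = heap.alloc(0x1000)
assert heap.load(p) == 0  # matching tag: access allowed
heap.free(0x1000)
try:
    heap.load(p)          # stale pointer: tag mismatch traps immediately
    assert False
except MemoryError:
    pass
```

The synchronous trap is the point: unlike a deferred check, the stale access above never returns data, so every failed probe is a crash the attacker cannot hide.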
I think hackers are not ready for the idea that unhackable hardware might actually be here. Hardware that will never have an exploit found, never be jailbroken, never have piracy, outside of maybe nation-state attacks.
Xbox One, 2013? Never hacked.
Nintendo Switch 2, 2025? According to reverse engineers... flawlessly secure microkernel and secure monitor built over the Switch 1 generation. Meanwhile NVIDIA's boot code is formally verified this time, written in the same language (SPARK, an Ada subset) used for nuclear reactors and airplanes, on a custom RISC-V chip.
iPhone? iOS 17 and 18 have never been jailbroken; now we introduce MIE.
I would deeply, strongly caution against using public exploit availability as any evidence of security. It’s a bad idea, because hundreds of market factors and random blind luck affect public exploitability more than the difficulty of developing an exploit chain.
Apple are definitely doing the best job that any firm ever has when it comes to mitigation, by a wide margin. Yet, we still see CVEs drop that are marked as used in the wild in exploit chains, so we know someone is still at it and still succeeding.
When it comes to the Xbox One, it’s an admirable job, in no small part because many of the brightest exploit developers from the Xbox 360 scene were employed to design and build the Xbox One security model. But even so, it’s got little rips at the seams in public: https://xboxoneresearch.github.io/games/2024/05/15/xbox-dump...
I think the nature of the scene changed and exploits and jailbreaks are kept to small groups, individuals or are sold.
For example, I might know of an unrelated exploit I'm sitting on because I don't want it fixed and so far it hasn't been.
I think the climate has become one of those "don't correct your adversary when they make mistakes" types of things versus an older culture of release clout.
As the ability to make remote-controlled hardware unhackable increases, the power asymmetry between those who can create such hardware and the masses who cannot will drastically increase. I leave it as an exercise for the audience as to what the equilibrium implications are for the common man, especially in western countries where the prior equilibrium was quite different.
This is the opposite of fun computing. This is commercial computing, whose only use case is making sure that people can send/receive money through their computers securely. I love being able to peek/poke inside and look at my processes' RAM, or patch the memory of an executable. All this sounds pretty impossible on Apple's locked-down systems.
They're not so much general purpose computers anymore as they are locked down bank terminals.
It's all fun and games until somebody else patches the RAM of your device, and sends your money away from your account.
More interesting is how to trace and debug code on such a CPU, because what a debugger often does is exactly patching an executable in RAM, peeking and poking inside, etc. If such an interface exists, I wonder how it is protected; do you need extra physical wires like JTAG? If it does not, how do you even troubleshoot a program running on the target hardware?
I think if you want to tinker with hardware, you shouldn't buy Apple. It's designed for people who use it as a means to an end, and I think that's a good thing for most people (including me). I want to bank on hardware that I can trust to be secure. Nothing wrong with building your own linux box for play time though.
If you like using debuggers, don't worry, MTE gives you a lot more chances to use them since it finds a lot more crashes. It doesn't stop you writing to memory though, as long as it's the correct type.
PAC may stop you from changing values - or at least you'd have to run code in the process to change them.
Bingo. None of this is for users. Apple somehow managed to put on a marketing mask of user respect when they’re at least as user abusive as anyone else.
Meanwhile, Google is doing all it can to weaken Android safety by withholding images and patches, and by failing to fully segregate applications from each other. The evidence is linked below:
Look, I’m an iOS user but this seems like flame-bait to me without any technical details. I’ve seen a lot of Google blog posts about security improvements over the years so that seems like a very sweeping assertion if you’re not going to support it.
> This is great ...
https://news.ycombinator.com/item?id=39671337
Google Security (ex: TAG & Project Zero) does so much to tackle CVEs, but with MTE the mothership dropped the ball so hard.
RIP Vigilant Labs
Okay a bit drastic, I don’t really know if this will affect them.
Yeah it’s hard to get these things right the first time.
Maybe, maybe not. But it seems fair to point out. Certainly if it was as exposed as, say, Windows, then there would have been many.
> It hasn’t been a meaningful attack deterrent because attackers keep finding PAC bypasses.
Correction: it forces attackers to find PAC bypasses. They are not infinite.
Haha, just because there's been bypasses doesn't mean it hasn't been effective.
This is really impressive.
I'm sure Apple/ARM model is vastly more sophisticated, but skimming thru made me think of the Burroughs large system memory tagging architecture: https://en.wikipedia.org/wiki/Burroughs_large_systems_descri...
> MIE will make sure we can never take back our freedom.
Is the implication here that making phones more secure is... bad? Because it makes jailbreaks harder to develop?
That is, unless we balkanize our systems and services.
What is WEI?
> ...attackers must not be able to predict tag values that the system will choose. We address this issue by frequently re-seeding the underlying pseudo-random generator used to select new tags.
This point could use more explanation. The fundamental problem here is the low entropy of the tags (only 4 bits). An attacker who randomly guesses the tags has 1/16 chance of success. That is not fixed by reseeding the PRNG. So I am not sure what they mean.
At attacher can guess, and has a 1/16 probability to guess right, but they have only one chance to guess because if you guess wrong, the process terminates (if it's a user-process) or the kernel panics (if it's in the kernel), so in the next opportunity you'll have it will be a different tag to guess.
Four bits provide too few possibilities. Since memory allocations happen millions of times per minute, the chance of collisions grows very quickly, even with periodic reseeding.
"There has never been a successful, widespread malware attack against iPhone. ..."
b your iphones BEEN pwned for YEARS and it was done in minutes LOL. gtfoh
With help from ChatGPT: Apple claims there has "never been a successful iPhone malware attack." Reality: WireLurker, Masque Attack, XcodeGhost, YiSpecter, jailbreak 0-days, and the Pegasus/Predator/Reign 0-clicks.
iPhones pwned for yrs — by kids in pajamas
> ... With Enhanced MTE, we instead specify that accessing non-tagged memory from a tagged memory region requires knowing that region’s tag, ...
I got a bit confused when reading this. What does it mean to "know the tag" if the memory region is untagged?
I believe they mean the source region's tag, rather than the destination.
>Google took a great first step last year when they offered MTE to those who opt in to their program for at-risk users. But even for users who turn it on, the effectiveness of MTE on Android is limited by the lack of deep integration with the operating system that distinguishes Memory Integrity Enforcement and its use of EMTE on Apple silicon.
>With the introduction of the iPhone 17 lineup and iPhone Air, we’re excited to deliver Memory Integrity Enforcement: the industry’s first ever, comprehensive, always-on memory-safety protection covering key attack surfaces — including the kernel and over 70 userland processes — built on the Enhanced Memory Tagging Extension (EMTE) and supported by secure typed allocators and tag confidentiality protections.
Of course it is a little disappointing not to see GrapheneOS's efforts in implementing [1] and raising awareness [2] recognised by others but it is very encouraging to see Apple making a serious effort on this. Hopefully it spurs Google on to do the same in Pixel OS. It should also inspire confidence that GrapheneOS are generally among the leaders in creating a system that defends the device owner against unknown threats.
[1] https://grapheneos.org/releases#2023103000 [2] https://xcancel.com/GrapheneOS/status/1716946325277909087#m
Apple has been working on this for years. It's not like they started thinking about memory tagging when Daniel decided to turn it on in GrapheneOS.
This is by far the best selling point of the new series of devices.
>The presence of EMTE leaves Spectre V1 as one of the last avenues available to attackers to help guide their attacks, so we designed a completely novel mitigation that limits the effective reach of Spectre V1 leaks — at virtually zero CPU cost — and forces attackers to contend with type segregation. This mitigation makes it impractical for attackers to use Spectre V1, as they would typically need 25 or more V1 sequences to reach more than 95 percent exploitability rate — unless one of these sequences is related to the bug being exploited, following similar reasoning as our kalloc_type analysis.
Did they ever explain what that mitigation does?
https://mastodon.online/@ezhes_/115175838087995856
Nope. I don't know why just checking the tags during speculation wouldn't stop Spectre V1, at least for cross-type accesses? I mean, it's not that simple because your program won't crash if speculation has mismatched tags. Which means you can try as many times as you want until you get lucky. But that's certainly not a "completely novel mitigation", so I'm sure I'm missing something obvious.
Perhaps the real problem is that you can use speculation to scan large amounts of memory for matching tags, some of which would be different types, so you need something to handle that?
(talking out of my butt here)
Full title is "Memory Integrity Enforcement: A complete vision for memory safety in Apple devices"
> Arm published the Memory Tagging Extension (MTE) specification in 2019 as a tool for hardware to help find memory corruption bugs. MTE is, at its core, a memory tagging and tag-checking system, where every memory allocation is tagged with a secret; the hardware guarantees that later requests to access memory are granted only if the request contains the correct secret. If the secrets don’t match, the app crashes, and the event is logged. This allows developers to identify memory corruption bugs immediately as they occur.
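The mechanism in that quote can be caricatured in a few lines (a toy model of mine, not Apple's implementation; real MTE tags 16-byte granules in hardware and carries the tag in unused upper pointer bits, roughly bits 56-59):

```python
# Toy model of MTE-style tag checking. Illustration only: real hardware
# performs this check in the load/store path, not in software.
GRANULE = 16

class TagFault(Exception):
    """Stands in for the hardware fault that crashes the app."""

class TaggedMemory:
    def __init__(self, size):
        self.data = bytearray(size)
        self.tags = [0] * (size // GRANULE)   # one 4-bit tag per granule

    def allocate(self, addr, length, tag):
        """Tag every granule of a new allocation and return a 'pointer'
        that carries the tag alongside the address."""
        for g in range(addr // GRANULE, (addr + length) // GRANULE):
            self.tags[g] = tag
        return (tag, addr)

    def load(self, ptr):
        tag, addr = ptr
        if self.tags[addr // GRANULE] != tag:  # the hardware check
            raise TagFault(f"tag mismatch at {addr:#x}")
        return self.data[addr]

mem = TaggedMemory(256)
p = mem.allocate(0, 32, tag=7)
mem.load(p)                  # correct tag: access granted
stale = (3, 0)               # forged or stale pointer with the wrong tag
try:
    mem.load(stale)
except TagFault:
    print("crash: tag check failed")
```

A freed-and-reallocated region would get a fresh tag, so a dangling pointer still carrying the old tag faults on its next access — that's the use-after-free detection the quote describes.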
This is great. Ever since Oracle introduced SPARC ADI into Solaris and their Linux SPARC workloads, I keep looking forward to "C Machines" becoming a common way to fix C language issues.
Unfortunately, as in many other cases, Intel botched their MPX design, so only evolutions of MTE and CHERI remain.
Is this only available on iPhone 17 for now?
Available on all the models announced today: Air and 17/17 Pro (A19 chip and above).
1988 called and wants its memory tagging back: https://www.devever.net/~hl/ppcas !
But yeah, this was supported for the longest time by IBM, basically. It's nice to see it getting more widespread.
The problem with PowerPC AS tagging was that it relied entirely on the trap instruction. If you could control execution at all, you could skip the trap instruction and it did nothing. This implementation, by my reading, essentially adds a synchronous trap instruction after every single load and store, which builds a real security boundary (even compared to Android MTE, where reads would trap but writes were only checked at the next context switch).
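The synchronous-vs-deferred distinction in that comment can be sketched with a toy contrast (mine, not real MTE semantics): under deferred checking, a wrong-tag write only sets a sticky fault bit that is inspected later, so multiple corrupting writes land before anything traps.

```python
# Toy contrast: synchronous checking traps on the first bad write;
# deferred ("asynchronous") checking only notices a sticky fault bit
# at a later check point, modeled here as the end of the run.
def run_writes(writes, tag, expected_tag, synchronous):
    landed = []             # writes that actually reached memory
    fault_pending = False
    for value in writes:
        if tag != expected_tag:
            if synchronous:
                return landed, "trapped immediately"
            fault_pending = True    # recorded, but execution continues
        landed.append(value)
    if fault_pending:
        return landed, "trapped at next check"
    return landed, "ok"

landed, outcome = run_writes([1, 2, 3], tag=5, expected_tag=9,
                             synchronous=False)
print(len(landed), outcome)   # prints "3 trapped at next check"
landed, outcome = run_writes([1, 2, 3], tag=5, expected_tag=9,
                             synchronous=True)
print(len(landed), outcome)   # prints "0 trapped immediately"
```

In the deferred case the attacker's writes have already corrupted state by the time the fault is reported — which is why uniform synchronous enforcement is the property the article keeps emphasizing.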
The big difference here seems to be that this is an actual security mechanism that blocks invalid accesses, whereas the tagged memory extensions only provided pointer metadata and it was up to the OS to enforce invariants.
> Extensions provide no security. [...] The tagged memory extensions don't stop you from doing anything.
SPARC ADI was a predecessor to ARM MTE. ARM MTE has been available and used in production for several years now. ADI is also 4 bit but with 64 byte granularity rather than 16 byte.
Nitpick: the AS/400 in 1988 didn't use PowerPC. I believe it had its own proprietary memory with tag bits included.
The first RS-64 with the PowerPC AS extensions came out in 1995.
The tagging mechanism reminds me of generational indices like https://floooh.github.io/2018/06/17/handles-vs-pointers.html, but I’m a bit out of my depth
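The analogy holds up reasonably well: a generational handle pairs a slot index with a generation counter that is bumped on free, so a stale handle fails the comparison, much as a dangling pointer carrying an old tag fails the tag check. A minimal sketch in the spirit of the linked article (simplified: no free-list slot reuse):

```python
# Minimal generational-index pool. A handle is (index, generation);
# freeing a slot bumps its generation, so outstanding handles into the
# freed slot are detected — analogous to a freed allocation getting a
# fresh memory tag under MTE.
class Pool:
    def __init__(self):
        self.values = []
        self.generations = []

    def alloc(self, value):
        self.values.append(value)
        self.generations.append(0)
        return (len(self.values) - 1, 0)

    def free(self, handle):
        index, gen = handle
        assert self.generations[index] == gen, "double free / stale handle"
        self.generations[index] += 1   # invalidates outstanding handles
        self.values[index] = None

    def get(self, handle):
        index, gen = handle
        if self.generations[index] != gen:
            return None                # stale handle: use-after-free caught
        return self.values[index]

pool = Pool()
h = pool.alloc("object")
print(pool.get(h))     # prints "object"
pool.free(h)
print(pool.get(h))     # prints "None": stale handle rejected
```

The main difference from MTE is that a generation counter can be as wide as you like, so staleness detection is deterministic rather than a 1-in-16 probabilistic check.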
How does this compare to CHERI?
Substantially less complex and therefore likely to be substantially easier to actually use.
CHERI-Morello uses 129-bit capability objects to tag operations, has a parallel capability stack, capability pointers, and requires microarchitectural support for a tag storage memory. Basically with CHERI-Morello, your memory operations also need to provide a pointer to a capability object stored in the capability store. Everything that touches memory points to your capability, which tells the processor _what_ you can do with memory and the bounds of the memory you can touch. The capability store is literally a separate bus and memory that isn't accessible by programs, so there are no secrets: even if you leak the pointer to a capability, it doesn't matter, because it's not in a place that "user code" can ever touch. This is fine in theory, but it's incredibly expensive in practice.
MIE is a much simpler notion that seems to use N-bit (maybe 4?) tags to protect heap allocations, and uses the SPTM to protect tag space from kernel compromise. If it's exactly as in the article: heap allocations get a tag. Any load/store operation to the heap needs to provide the tag that was used for their allocation in the pointer. The tag store used by the kernel allocator is protected by SPTM so you can't just dump the tags.
If you combine MIE, SPTM, and PAC, you get close-ish to CHERI, but with independent building blocks. It's less robust, but also a less granular system with less overhead.
MIE is both probabilistic (N-bits of entropy) and protected by a slightly weaker hardware protection (SPTM, which to my understanding is a bus firewall, vs. a separate bus). It also only protects heap allocations, although existing mitigations protect the stack and execution flow.
Going off of the VERY limited information in the post, my naive read is that the biggest vulnerability here will be tag collision. If you try enough times with enough heap spray, or can groom the heap repeatedly, you can probably collide a tag with however many bits of entropy are present in the system. But, because the model is synchronous, you will bus fault every time before that, unlike MTE, so you'll get caught, which is a big problem for nation-state attackers.
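The two checking disciplines contrasted in that comment can be caricatured side by side (my sketch, heavily simplified from both architectures): CHERI validates every access against an unforgeable capability's bounds and permissions, deterministically, while an MIE/MTE-style check only compares a small tag, so a lucky guess passes.

```python
# Caricature of the two checking disciplines described above.

# CHERI-style: each access is checked against a capability's bounds and
# permissions. Deterministic — no amount of guessing gets around it.
def cheri_check(cap, addr, is_write):
    base, length, perms = cap
    in_bounds = base <= addr < base + length
    allowed = ("w" in perms) if is_write else ("r" in perms)
    return in_bounds and allowed

# MIE/MTE-style: access succeeds iff the pointer's small tag matches the
# memory's tag. Probabilistic — a 4-bit tag passes 1 guess in 16.
def mte_check(ptr_tag, mem_tag):
    return ptr_tag == mem_tag

cap = (0x1000, 0x100, "r")           # read-only capability over 256 bytes
print(cheri_check(cap, 0x1040, is_write=False))  # True: in bounds, readable
print(cheri_check(cap, 0x1040, is_write=True))   # False: no write permission
print(mte_check(7, 7))   # True
print(mte_check(3, 7))   # False — but 1 in 16 random guesses would pass
```

This is the trade-off in one screenful: CHERI buys determinism at the cost of a parallel capability machinery, while tag matching keeps the hardware simple and leans on synchronous crashes to punish wrong guesses.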
https://saaramar.github.io/memory_safety_blogpost_2022/ is a nice article which goes into this topic for MTE in the past.
Is MTE restricted to the newest (17) iPhone models or does it work on the older ones too?
Just the newest ones.
I wonder if these protections will apply to macOS as well.
The hardware for it isn't there yet, but I assume when new Macs ship it will be enabled there.
I think hackers are not ready for the idea that unhackable hardware might actually be here. Hardware that will never have an exploit found someday, never be jailbroken, never have piracy, outside of maybe nation-state attacks.
Xbox One, 2012? Never hacked.
Nintendo Switch 2, 2025? According to reverse engineers... a flawlessly secure microkernel and secure monitor built over the Switch 1 generation. Meanwhile, NVIDIA's boot code is formally verified this time, written in the same language (SPARK, an Ada subset) used for nuclear reactors and airplanes, on a custom RISC-V chip.
iPhone? iOS 17 and 18 have never been jailbroken; now we introduce MIE.
I would deeply, strongly caution against using public exploit availability as any evidence of security. It’s a bad idea, because hundreds of market factors and random blind luck affect public exploitability more than the difficulty of developing an exploit chain.
Apple are definitely doing the best job that any firm ever has when it comes to mitigation, by a wide margin. Yet, we still see CVEs drop that are marked as used in the wild in exploit chains, so we know someone is still at it and still succeeding.
When it comes to the Xbox One, it’s an admirable job, in no small part because many of the brightest exploit developers from the Xbox 360 scene were employed to design and build the Xbox One security model. But even still, it’s still got little rips at the seams even in public: https://xboxoneresearch.github.io/games/2024/05/15/xbox-dump...
I think the nature of the scene changed and exploits and jailbreaks are kept to small groups, individuals or are sold.
For example, I might know of an unrelated exploit I'm sitting on because I don't want it fixed and so far it hasn't been.
I think the climate has become one of those "don't correct your adversary when they make mistakes" types of things versus an older culture of release clout.
Saying "never" is too bold. But it's definitely getting immensely difficult.
There are still plenty of other flaws besides memory unsafety to exploit. I doubt that we'll see like a formally proven mainstream OS for a long time.
>Xbox One, 2012? Never hacked.
Not publicly :)
Israeli companies and agencies will surely find a way.. even if software/hardware might really be unhackable, it seems people will never be..
> iPhone? iOS 17 and 18 have never been jailbroken; now we introduce MIE.
So far as you know. There's a reason they call them zero-day vulnerabilities.
As the ability to make remote-controlled hardware unhackable increases, the power asymmetry between those who can create such hardware and the masses who cannot will drastically increase. I leave it as an exercise for the audience what the equilibrium implications are for the common man, especially in Western countries where the prior equilibrium was quite different.
This looks amazing, I cannot wait to see how attackers pivot.
https://xkcd.com/538/
If we are checking every pointer at runtime, how isn't this dog slow?
The chip does it by itself, in parallel to its other operations.
This is the opposite of fun computing. This is commercial computing, whose only use case is making sure that people can send/receive money through their computers securely. I love being able to peek/poke inside and look at my processes' RAM, or patch the memory of an executable. All this sounds pretty impossible on Apple's locked-down systems.
They're not so much general purpose computers anymore as they are locked down bank terminals.
It's all fun and games until somebody else patches the RAM of your device, and sends your money away from your account.
More interesting is how to trace and debug code on such a CPU, because what a debugger often does is exactly patching an executable in RAM, peeking and poking inside, etc. If such an interface exists, I wonder how it is protected; do you need extra physical wires like JTAG? If it doesn't, how do you even troubleshoot a program running on the target hardware?
I think if you want to tinker with hardware, you shouldn't buy Apple. It's designed for people who use it as a means to an end, and I think that's a good thing for most people (including me). I want to bank on hardware that I can trust to be secure. Nothing wrong with building your own linux box for play time though.
If you like using debuggers, don't worry, MTE gives you a lot more chances to use them since it finds a lot more crashes. It doesn't stop you writing to memory though, as long as it's the correct type.
PAC may stop you from changing values - or at least you'd have to run code in the process to change them.
Bingo. None of this is for users. Apple somehow managed to put on a marketing mask of user respect when they’re at least as user abusive as anyone else.
Meanwhile, Google is doing all it can to weaken Android safety by withholding images and patches, also by failing to fully segregate applications from each other. The evidence is linked below:
(1) AOSP isn't dead, but Google just landed a huge blow to custom ROM developers: https://www.androidauthority.com/google-not-killing-aosp-356...
(2) Privacy-Focused GrapheneOS Warns Google Is Locking Down Android: https://cyberinsider.com/privacy-focused-grapheneos-warns-go...
(3) GrapheneOS exposes Google's empty promises on Android security updates: https://piunikaweb.com/2025/09/08/grapheneos-google-security...
Look, I’m an iOS user but this seems like flame-bait to me without any technical details. I’ve seen a lot of Google blog posts about security improvements over the years so that seems like a very sweeping assertion if you’re not going to support it.
More like Manufacturer Integrity Enforcement the way Apple makes things.
What’s the real benefit for regular/power users?
If you are targeted by advanced spyware then it makes it more difficult for their exploit to work.