Swap on servers somewhat defeats the purpose of ECC memory: your program state is now subject to a complex IO path that is not end-to-end checksum-protected. You also get unpredictable performance.
So typically: swap off on servers. Do they have a server story?
That's a really good point that had never occurred to me.
Edit: I think that using ZFS for your /tmp would solve this. You get error-corrected memory writing to a checksummed file system.
ZFS /tmp is probably fine, but swapping to ZFS on Linux is dicey AIUI; there's an unfortunate possibility of deadlock https://github.com/openzfs/zfs/issues/7734
Ah, thanks for pointing that out - wasn't aware.
So maybe another filesystem with heavy checksums could be used? Btrfs or dm-crypt with integrity over ext4?
Why not dm-integrity?
I can see it now: pro ECC SATA and M.2 SSDs
The mentioned periodic clean-up of tmp files is not enabled out of the box in the case of an upgrade from previous Debian versions, see https://www.debian.org/releases/trixie/release-notes/issues.... .
The third mitigating feature the article forgot to mention is that tmpfs can get paged out to the swap partition. If you drop a large file there and forget it, it will all end up in the swap partition if applications are demanding more memory.
Fedora did this long before Debian. I remember doing a wget of an .iso file to /tmp and my entire Wayland session being killed by the OOM killer.
I still think it's a terrible idea.
Use `/var/tmp` if you want a disk-backed tmp.
I thought /var/tmp is for applications while /tmp is for the user.
what swap partition?
I meant this sort of jokingly. I think I have a few Linux systems that were never configured with swap partitions or swapfiles.
I'm with you. I don't swap. Processes die. OOM. Linux can recover and not lose data. Just unavailable for a moment.
The Linux OOM killer is kinda sketchy to rely on. It likes to freeze up your system for long periods of time as it works out how to resolve the issue. Then it starts killing random PIDs to try to reclaim RAM, like system-wide Russian roulette.
It's especially janky when you don't have swap. I've found adding a small swap file of ~500 MB makes it work much better; even for systems with half a terabyte of RAM, this helps reduce the freezing issues.
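For reference, a minimal sketch of adding a small swap file like the one described above (the /swapfile path and 512 MB size are just illustrative):

    # Create and enable a ~512 MB swap file
    fallocate -l 512M /swapfile     # some filesystems need: dd if=/dev/zero of=/swapfile bs=1M count=512
    chmod 600 /swapfile             # swap must not be world-readable
    mkswap /swapfile                # write the swap signature
    swapon /swapfile                # enable it immediately
    echo '/swapfile none swap sw 0 0' >> /etc/fstab   # keep it across reboots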
Yeah. I always disable overcommit (notwithstanding that Linux cannot provide perfectly accurate strict memory accounting), and I'd prefer not to use swap, but Linux VM maintainers have consistently stated that they've designed and tuned the VM subsystem with swap in mind. Is swap necessary in the abstract? No. Is swap necessary on Linux? No. But don't be surprised if Linux doesn't do what you'd expect in the absence of swap, and don't expect Linux to put much if any effort into improving performance in the absence of swap.
I've never run into trouble on my personal servers, but I've worked at places that have, especially when running applications that tax the VM subsystem, e.g. the JVM and big Java apps. If one wonders why swap would be useful even if applications never allocate, even in the aggregate, more anonymous memory than system RAM, one of the reasons is the interaction with the buffer cache and eviction under pressure.
Install earlyoom or one of its near-equivalents. That mostly solves the problem of it freezing up the system for long periods of time.
I haven't personally seen the OOM killer kill unproductively - usually it kills either a runaway culprit or something that will actually free up enough memory to help.
For your "even for systems with half a terabyte of RAM", it is logical that the larger the system, the worse this behaviour is, because when things go sideways there is a lot more stuff to sort out and that takes longer. My work server has 1.5TB of RAM, and an OOM event before I installed earlyoom was not pretty at all.
> For your "even for systems with half a terabyte of RAM", it is logical that the larger the system, the worse this behaviour is, because when things go sideways there is a lot more stuff to sort out and that takes longer. My work server has 1.5TB of RAM, and an OOM event before I installed earlyoom was not pretty at all.
I meant it more in the sense that it doesn't have to be more than a few hundred MB even for large RAM. It's not the size of the swap file that makes the difference, but its presence, and advice of having it be proportional to RAM are largely outdated.
Swapping still occurs regardless. If there is no swap space the kernel swaps out code pages instead. So, running programs. The code pages then need to be loaded again from disk when the corresponding process is next scheduled and needs them.
This is not very efficient and is why a bit of actual swap space is generally recommended.
Which is a great reason to have a big swap file now.
Note though that if you don't have swap now, and enable it, you introduce the risk of thrashing [1]
If you have swap already it doesn't matter, but I've encountered enough thrashing that I now disable swap on almost all servers I work with.
It's rare, but when it happens the server usually becomes completely unresponsive, so you have to hard reset it. I'd rather the application trying to use too much memory be killed by the OOM killer so I can ssh in and fix that.
[1] https://docs.redhat.com/en/documentation/red_hat_enterprise_...
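For completeness, a minimal sketch of how "no swap on this server" is usually enforced (the sed pattern is a rough illustration; check /etc/fstab and any systemd swap units by hand):

    swapoff -a                                  # turn off all active swap now
    sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab   # comment out swap entries; .bak backup kept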
That's not true. Without swap, you already have the risk of thrashing. This is because Linux views all segments of code which your processes are running as clean and evictable from the cache, and therefore basically equivalent to swap, even when you have no swap. Under low-memory conditions, Linux will happily evict all clean pages, including the ones that the next process to be scheduled needs to execute from, causing thrashing. You can still get an unresponsive server under low memory conditions due to thrashing with no swap.
Setting swappiness to zero doesn't fix this. Disabling swap doesn't fix this. Disabling overcommit does fix this, but that might have unacceptable disadvantages if some of the processes you are running allocate much more RAM than they use. Installing earlyoom to prevent real low memory conditions does fix this, and is probably the best solution.
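A minimal sketch of setting up earlyoom on a Debian-style system (the /etc/default/earlyoom location and the 5% threshold are assumptions; check your distro's packaging):

    apt install earlyoom
    systemctl enable --now earlyoom
    # Optionally tighten the threshold, e.g. start killing below 5% available memory:
    # echo 'EARLYOOM_ARGS="-m 5"' > /etc/default/earlyoom && systemctl restart earlyoom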
Disabling swap on servers is de-facto standard for serious deployments.
The swap story needs a serious upgrade. I think /tmp in memory is a great idea, but I also think that particular /tmp needs swap support (ideally with compression, ZSWAP), but not the main system.
Swap always seemed more meant for desktop use. On servers you need to provision the real memory the application stack expects.
Pretty much all the guidelines about swap partitions out there reference old allocator behaviour from way over a decade ago - where you'd indeed typically run into weird issues without having a swap partition, even if you had enough RAM.
Short (and inaccurate) summary was that it'd try to use some swap even if it didn't need it yet, which made sense in the world of enough memory being too expensive, and got fixed at the cost of making the allocator way more complicated when we started having enough memory in most cases.
Nowadays typically you don't need swap unless you work on a product with some constraints, in which case you'd hand tune low memory performance anyway. Just don't buy anything with less than 32GB, and you should be good.
plenty of footguns in that general advice: local in-memory storage services with default configs, etc.
This is why I’m running with overcommit 2 and a different ratio per server purpose.
…though I’m not sure why we have to think about this in 2025 at all.
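A minimal sketch of that configuration (the 80% ratio is just an example value, to be tuned per server purpose):

    sysctl -w vm.overcommit_memory=2   # strict accounting: refuse allocations beyond the commit limit
    sysctl -w vm.overcommit_ratio=80   # commit limit = swap + 80% of RAM
    # Persist across reboots:
    printf 'vm.overcommit_memory = 2\nvm.overcommit_ratio = 80\n' > /etc/sysctl.d/90-overcommit.conf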
I'm assuming that you monitor the service closely for OOM and then adjust with demand?
yeah pretty much, also configuring memory limits everywhere where apps allow it. some software also handles malloc failures relatively gracefully, which helps a whole lot (thank you postgres devs)
Actually quite handy and practical to know about, specifically in the context of a "low end box" where I personally would prefer that RAM exist for my applications and am totally fine with `/tmp` tasks being a bit slow (let's be real, the whole box is "slow" anyway, and slow here is some factor of "VM block device on an SSD" rather than 1990s spinning rust).
I'm surprised to discover that tmpfs hasn't already been used for /tmp for a long time; that change is nice.
But the auto-cleanup feature looks awful to me. Be it desktops or servers, even machines with uptimes of more than a year, I never saw /tmp filled just by forgotten garbage. Only sometimes filled by unzipping a too-big file or something like that, but that gets noticed on the spot.
It used to be the place where you could store caches or other things like that which would hold until the next reboot. It seems so arbitrary, and a source of random unexpected bugs, to have files there automatically deleted after some seemingly random time.
I don't know where this feature comes from, but when stupid risky things like this show up, I would easily bet that it is again a systemd "I know best what is good for you" broken feature shoved down our throats...
And if it's coming from systemd, expect that one day it will accidentally delete important files from you, something like following symlinks to your home dir or your NVMe EFI partition...
> I never saw the case of tmp being filled just by forgotten garbage.
It might have more to do with the type of developers I've worked with, but it happens all the time. Monitoring complains and you go in to check, and there it is: gigabytes of junk dumped there by shitty software or scripts that can't clean up after themselves.
The issue is that you don't always know what's safe to delete if you're the operations person and not the developer. Periodically auto-cleaning /tmp is going to break stuff, and it will be easier to demand that the operations team disable auto-cleanup than to get the issue fixed in the developers' next sprint.
Autocleaning: get the last accessed time from a file and only auto-clean files not accessed in the last n hours, e.g. 24 hours? Should be reasonably safe.
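A rough sketch of that idea with find (assuming atime is usable at all; with the common relatime mount option, access times are only updated sporadically):

    # Delete regular files under /tmp not accessed for more than 24 hours,
    # then remove any directories left empty.
    find /tmp -xdev -type f -atime +0 -delete
    find /tmp -xdev -mindepth 1 -type d -empty -delete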
I tried out variations on this on my daily driver setups. The design choices here were likely threefold:
Store tmpfs in memory: volatile, but limited to free RAM or swap, and swap writes to disk anyway
Store tmpfs on a dedicated volume: since we're going to write onto disk anyway, make it a lightweight special-purpose file system that's committed to disk
On disk tmpfs but cleaned up periodically: additional settings to clean up - how often, what should stay, tie file lifetime to machine reboot? The answers to these questions vary more between applications than between filesystems, therefore it's more flexible to leave clean up to userspace.
In the end my main concern turned out to be that I lost files that I didn't want to lose, either to reboot cleanup, on timer cleanup, etc. I opted to clean up my temp files manually as needed.
A tmpfs itself is basically a ramdisk by definition. I assume you mean /tmp when you say tmpfs?
Yes. I'm not careful lately.
If you've got swap set up then stale files will get written back to disk, so at least they're not occupying RAM indefinitely just because they're stored on tmpfs.
It’s still not an ideal solution though.
I agree about the auto cleanup; I discovered it a few days after using /tmp as a ramdisk for a Yocto build. Lost a few patches, but nothing significant.
We did this song and dance in RHEL. It's fine. Just use /var/tmp if you need persistent tmp storage. GNOME and X and tmux will not make you swap, and if they do, run Xfce instead.
Why this change? Writing to it will be faster than disk, but idk, if RAM is a precious commodity I'd rather it was just a part of the disk I was writing to.
It's a dumb idea that came from the systemd people. They've never explained properly why it's a good idea, but it's the systemd default and for some reason distros defer to that.
If I am satisfied with my disk speed, why would I want to use system memory? What are the specific use cases where this is warranted?
Computers like a Raspberry Pi, where the OS is on an SD card, will hugely benefit.
Yup. There's lots of advice out there about how to reduce write cycles and increase the lifetime of SD cards. This post has a bunch of ideas, and tmpfs is definitely on the list. https://raspberrypi.stackexchange.com/a/186/32611
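For example, a typical fstab line for this (the 256M cap is an arbitrary example; without size= a tmpfs defaults to half of RAM):

    tmpfs  /tmp  tmpfs  defaults,noatime,nosuid,nodev,mode=1777,size=256M  0  0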
Technically it'll have some impact on the number of write cycles your disk goes through, and marginally reduce the level of wear.
Most disks have enough write cycles available that you'll be fine anyway, but it's a tiny benefit.
The part that's more likely to bite people here and that's easily overlooked is that files in /var/tmp will survive a reboot but they'll still be automatically deleted after 30 days.
As someone who sort of needs to juice all my ram, this is annoying but at least it can be turned off.
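Turning it off amounts to overriding the shipped tmpfiles.d entries; a sketch, assuming the usual upstream defaults (exact ages vary by distro):

    # /usr/lib/tmpfiles.d/tmp.conf typically contains something like:
    #   q /tmp      1777 root root 10d
    #   q /var/tmp  1777 root root 30d
    # A file with the same name in /etc/tmpfiles.d takes precedence,
    # and '-' in the age field disables age-based cleanup:
    printf 'q /tmp 1777 root root -\nq /var/tmp 1777 root root -\n' > /etc/tmpfiles.d/tmp.conf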
I feel like this is mixing agendas. Is the goal freeing up /tmp more regularly (so you don't inadvertently rely on it, to save space, etc.) or is the goal performance? I feel like with modern NVMe (or just SSD) the argument for tmpfs out of the box is a hard one to make, and if you're under special circumstances where it matters (e.g. you actually need RAM speeds or are running on an SD card or eMMC) then you would know to use a tmpfs yourself.
(Also, sorry but this article absolutely does not constitute a “deep dive” into anything.)
'systemctl mask tmp.mount' - the most important command to run in these situations.
It's a really bad idea to put /tmp into memory. Filesystems already use memory when available and spill to disk when memory is under pressure. If they don't do this correctly (which they do) then fix your filesystem! That will benefit everything.
You'd think that, but in ext4 the first write to a new file will hit the disk (the code mentions it is a workaround for something). Btrfs does it correctly.
Sounds like fixing that ext4 problem could make a lot of things go faster.
Using the example from the article, extracting an archive: surely that use case is entirely not possible using in-memory /tmp? What happens if you're dealing with a not-unreasonable 100 GB archive?
Who runs around with 100gb+ of swap?!
Use `/var/tmp` if you want a disk-backed tmp. Not sure why the article omits that.
Who runs around with a 100gb+ /tmp partition?
Our default server images come with a 4.4GB /tmp partition...
I run a script that rotates my /tmp/ each day, so I can access yesterday's tmp files at /tmp/20250828/ and so on.
My /tmp is my default folder for downloads and temporary work. It will grow 100GB+ easily.
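A minimal sketch of a daily /tmp rotation like the one described above (the dated-directory naming and running it from cron are assumptions about that setup):

    #!/bin/sh
    # Move everything currently at the top of /tmp into a dated subdirectory,
    # leaving earlier dated directories (e.g. /tmp/20250828) in place.
    day=$(date +%Y%m%d)
    mkdir -p "/tmp/$day"
    find /tmp -mindepth 1 -maxdepth 1 ! -name '20[0-9][0-9][0-9][0-9][0-9][0-9]' \
        -exec mv -t "/tmp/$day" {} +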
Sure, but note that your usecase goes specifically against fhs and posix specs:
>Programs must not assume that any files or directories in /tmp are preserved between invocations of the program.
>Although data stored in /tmp may be deleted in a site-specific manner, it is recommended that files and directories located in /tmp be deleted whenever the system is booted.
https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s18.htm...
Now you can obviously use your filesystem whichever way you like, but I would say Debian shouldn't have to take into consideration uses which are outside the general recommendations/specs.
Programs shouldn't assume that about /tmp; a user relying on it as advised here is fine.
For a long time my default download folder was /dev/shm. It is (or was?) the memory-backed tmpfs, and everything would just be gone after a reboot. Now I can just use /tmp.
Even used something similar on my Windows PC: had a B: drive 1 GB in size that was my download folder. Automated cleanup made easy.
This reminded me of the spacebar heating xkcd: https://xkcd.com/1172/
(not making fun of the workflow or anything, it's just that changes like tmpfs breaking stuff very much holds true)
Your use case sounds more like "scratch" folder, not really what /tmp is meant for.
I'm still a fan of polyinstantiated /tmp and PrivateTmp (systemd). This may confuse/annoy admins who are not aware of namespaces, but I know that it definitely closes the attack vector of /tmp abuse by bad actors.
https://www.redhat.com/en/blog/polyinstantiating-tmp-and-var...
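For anyone curious what the systemd side looks like in practice, a minimal drop-in sketch (nginx is just a placeholder unit name):

    mkdir -p /etc/systemd/system/nginx.service.d
    printf '[Service]\nPrivateTmp=yes\n' > /etc/systemd/system/nginx.service.d/private-tmp.conf
    systemctl daemon-reload
    systemctl restart nginx.service
    # The service now sees its own empty /tmp and /var/tmp in a private mount namespace.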
Why is there no write through unionfs in Linux? Feels like a very useful tool to have. Does no one else need this? Have half a mind to write one with an NFS interface.
EDIT: Thank you, jaunty. But all of these are device level. Even bcachefs was block device level. It doesn't allow union over a FUSE FS etc. It seems strange to not have it at the filesystem level.
Do you mean that you can mark files for which the underlying filesystem is still used? As far as I remember there were experiments with that about 20 years ago, but it was decided that the added complexity wasn't worth it. The implementation that replaced all of that has been very stable (unlike the ones before) and I'm using it heavily, so I think they had a point. Some write-through behavior can be scripted on top of that.
EDIT: So, Wikipedia lists overlayfs and aufs as active projects, and unionfs predates both. Maybe unionfs v2 is what replaced all that? Maybe I'm hallucinating...
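The stable in-kernel implementation referred to above is presumably overlayfs; a minimal mount sketch (paths are placeholders), which also shows why it isn't write-through:

    mkdir -p /mnt/lower /mnt/upper /mnt/work /mnt/merged
    mount -t overlay overlay \
        -o lowerdir=/mnt/lower,upperdir=/mnt/upper,workdir=/mnt/work /mnt/merged
    # Reads fall through to lowerdir; any write copies the file up into upperdir,
    # so the lower filesystem itself is never modified.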
Dm-cache! https://www.kernel.org/doc/Documentation/device-mapper/cache...
Files in tmpfs will swap out if your system is under memory pressure.
If that happens, reading the file back is DRAMATICALLY slower than if you had just stored the file on disk in the first place.
This change is not going to speed things up for most users; it will slow things down. Instead of caching important files, you waste memory on useless temporary files. Then the system swaps them out to get cache back, and then they're really slow to read back.
This change is a mistake.
Why would reading the data back from swap be slower at all -- much less "DRAMATICALLY" so -- than saving the data to disk and reading it back?
Because swapping back in happens 4kb at a time
It's also because a filesystem is much more likely to have consecutive parts of a file stored consecutively on disc, whereas swap is going to just randomly scatter 4kB blocks everywhere, so you'll be dealing with random access read speed instead of throughput read speed.
Why?
Because of page size. Its treated like any other page.
This doesn't really make sense. If /tmp was an on-disk directory the same memory pressure that caused swapping would just evict the file from the page cache instead, again leading to a cache miss and a dramatically slower read.
Most systems probably aren't having problems with insufficient RAM nowadays though, are they? And this will reduce wear on your SSD.
Also, you can easily disable it: https://www.debian.org/releases/trixie/release-notes/issues....
If you're running it in a VM you might not have all that luxurious RAM.
When my Linux VM starts swapping I have to either wait an hour or more to regain control, or just hard restart the VM.
What distro are you running? systemd-oomd kills processes a bit quicker than what came before (a couple minutes of a slow, stuttery system). Still too slow for a server you'd want to have back online as quickly as possible.
At least now when I run out of memory it kills processes that consume the most memory. A few years back it used to kill my desktop session instead!
Right, that's traditionally been because the X server has typically had a fairly large footprint, and therefore has been very attractive for the oom killer. But in the last 15 years or so, some heuristics have been applied to deliberately discourage the oom killer from killing "important things".
I install earlyoom on systems I admin. It prevents the low-memory thrashing by killing things while the system is still responsive, instead of when the system is in a state that means it'll take hours to recover.
Right, but if it's a VM, it's probably provisioned by something like ansible/terraform? If so, it's quite easy to add an init script that will disable this feature and never have to worry about it again.
On small VPS systems with 512 MB or 1 GB you're more likely to notice (if /tmp is actually used by what's running on the system).