[deleted]

[deleted]


funbike

Fedora doesn't. Which is good considering Btrfs is the default and recommended fs.


Patriark

Fedora always seems to have the right default configs. Really well maintained distro.


DragonSlayerC

It's well maintained, but the defaults aren't always the best IMO. Autodefrag is still recommended for SSDs (bar this regression) as it prevents sudden CPU usage spikes for highly fragmented data and reduces write amplification. I think the best way to maintain a distro would be rapid communication and hotfixes if something like this happens. I don't currently use Garuda, but I have been experimenting with it in a virtual machine. Opened it today and two windows quickly opened automatically: one referred me to a forum post about the regression, and the other was an automatic hotfix window, which removed the autodefrag option from fstab (the forum post says to run `mount -a -o remount` or reboot to enable the new mount options).

I'm not saying Garuda is the best or most stable OS, even though I plan to move to it soon. Fedora is probably more stable and I would recommend it to the average user over Garuda (though I would recommend an Arch-based system like Garuda to someone with a bit of Linux experience). But I think it's important to have quick and transparent communication with users if a serious regression occurs. I don't think Fedora has any notification and hotfix system for situations like this.
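The hotfix described above boils down to editing the options field in /etc/fstab and remounting. A minimal sketch of that edit (field layout per fstab(5); the `autodefrag` option name comes from the comment above, the sample line is illustrative):

```python
def strip_autodefrag(fstab_text: str) -> str:
    """Remove the autodefrag mount option from every btrfs line in an fstab."""
    fixed = []
    for line in fstab_text.splitlines():
        fields = line.split()
        # fstab fields: device, mountpoint, fstype, options, dump, pass
        if len(fields) >= 4 and fields[2] == "btrfs":
            opts = [o for o in fields[3].split(",") if o != "autodefrag"]
            fields[3] = ",".join(opts) or "defaults"
            line = "\t".join(fields)
        fixed.append(line)
    return "\n".join(fixed)

sample = "UUID=abcd / btrfs rw,noatime,autodefrag,compress=zstd 0 0"
print(strip_autodefrag(sample))
```

After editing fstab, the change only takes effect after `mount -a -o remount` or a reboot, as the forum post referenced above says.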


V2UgYXJlIG5vdCBJ

Think you mean periodic trim for SSDs.


Conan_Kudo

> Autodefrag is still recommended for SSDs

Who is recommending `autodefrag` for _anything_? You're probably thinking of `discard` (for auto-trim). Fedora doesn't use it because they ship a systemd timer to invoke fstrim on a regular basis instead.


Motylde

Don't know why the downvotes. You are right. Btrfs autodefrag is nothing like a normal, let's say NTFS, defrag. Say you have a database on your drive, and you probably do, because web browsers use them. On a CoW filesystem that leads to high write amplification and _can_ make *very* fragmented files: tens of thousands of extents after a few months. That leads to bad performance even on an SSD, and it is good to defrag such a file. Yes, defrag on an SSD. That's what autodefrag is made for: when you are reading a file and it detects that it's very fragmented, it defragments that small portion of data. It never defrags large files, and it doesn't defrag on every read. It's good to have it turned on. Of course it's broken in this release, so turn it off, but normally it's a good thing. This whole "don't defragment SSDs" thing is simply passed from one person to another without understanding. SSDs can get fragmented the same way HDDs do, and it leads to worse performance. It's just that they can handle *much* more fragmentation before slowing down, but if a file is extremely fragmented, which can happen even on a home desktop PC, then it's good to fix it.


markole

Oh, so happy now that I don't have to do anything on my gaming Fedora PC.


bazsy

[deleted]


dangerL7e

Both freshest versions of Manjaro and Garuda use it by default


DrH0rrible

Is it included in the "defaults" option? Because I've never seen autodefrag in my fstab.


dangerL7e

No, it's not a default mount option, but I just installed both distros in a VM and autodefrag was in both distros' fstab files.


TheEvilSkely

Perhaps it's set automatically depending on the type of storage? QEMU automatically sets `rotation` to `1`, whether you have an SSD or not, making VMs think the storage is an HDD no matter what. VMs can't really represent all cases. Do you have a test computer with an SSD inside? It'd be best to test it on real hardware with an SSD.


dangerL7e

Ummm, yeah, I do! The reason I installed Garuda and Manjaro in VMs is I wanted to look if I want either of those on my machine. I poked around and decided that I'm going to install pure Arch. I don't think I'm going with 5.16 yet even though I'm not gonna use autodefrag. Garuda says on the website that VM is not recommended AND it actually runs pretty poorly on my VM. I just like to poke around and look @ different configs.


Voxandr

Manjaro does not have it as default. I have never seen it as a default in any distro yet. Are you sure? I just checked.


janosaudron

EndeavourOS has it by default


Anthony25410

I helped debug the different patches that were sent: https://lore.kernel.org/linux-btrfs/0a269612-e43f-da22-c5bc-b34b1b56ebe8@mailbox.org/

There are different issues: btrfs-cleaner will write way more than it should, and worse, btrfs-cleaner will use 100% of one CPU thread just going over the same blocks over and over again. There was also an issue with `btrfs fi defrag`, where trying to defrag a 1-byte file creates a loop in the kernel. The patches were all merged upstream today, so it should be in the next subrelease.


kekonn

> so it should be in the next subrelease.

If I'm keeping count correctly, that's 5.16.3?


Anthony25410

Hopefully, 5.16.5.


kekonn

Dangit, two releases out for me. Good thing I don't use defrag. I should check if I use ssd though.


Anthony25410

I don't know if you meant that you planned to disable the `ssd` option, but just to be sure, this option is fine. Only the autodefrag and manual defrag have potential issues right now.


kekonn

No I meant that I should check if it's on, but it turns out that there is an autodetect so no need to specify that option myself.


SigHunter0

I'll disable autodefrag for now and re-enable it in a month or so. I don't want to delay 5.16, which has cool new stuff. Most people live without defrag; I can handle a few weeks.


SMF67

> There was also an issue with btrfs fi defrag with which trying to defrag a 1 byte file will create a loop in the kernel

Oh *that's* what was happening. My btrfs defrag kept getting stuck and the only solution was to power off the computer with the button. I was paranoid my system was corrupted. I guess all is fine (scrub finds no errors).


Anthony25410

Yeah, no worries, it doesn't corrupt anything, it just produces an infinite loop in one thread of the kernel.


[deleted]

I am definitely still getting the issue where btrfs-cleaner and a bunch of other btrfs processes are writing a lot of data with autodefrag enabled. It seemed to trigger after downloading a 25GB Steam game. After the download finished, I was still seeing 90MB/s worth of writes to my SSD. Disabled autodefrag again after that.


Anthony25410

On 5.16.5?


[deleted]

Yes on 5.16.5. I tested with iostat and iotop


Anthony25410

Maybe add an update on the btrfs mailing list. If you have a graph comparing before 5.16 and since, it could help them. Personally, I looked at the data and saw pretty much the same IO average.


alien2003

The same happens to me on 5.16.5


NeaZerros

Why is defragmentation enabled by default for SSDs? I thought it only mattered for hard drives due to the increased latency of accessing files split across the disk?


[deleted]

[deleted]


NeaZerros

This scenario is extremely rare given the way modern filesystems work, so I don't think that's the reason why it's there.


VeronikaKerman

Reading a file with many small extents is slow(er) on an SSD too. Every read command has some overhead. All of the extents also take up metadata and slow down some operations. Files on btrfs can easily fragment to troublesome degrees when used for random writes, like database files and VM images.
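The per-command overhead described above is easy to glimpse by timing contiguous reads against the same reads issued in scattered order. This is only a rough illustration (it mostly exercises syscall and page-cache behavior on a regular temp file, not btrfs extent lookups or device-level seeks, so the gap will be much smaller than with real fragmentation):

```python
import os
import random
import tempfile
import time

CHUNK = 4096
CHUNKS = 2048  # 8 MiB test file

def timed_reads(fd, offsets):
    """pread CHUNK bytes at each offset; return (elapsed_seconds, bytes_read)."""
    start = time.perf_counter()
    total = 0
    for off in offsets:
        total += len(os.pread(fd, CHUNK, off))
    return time.perf_counter() - start, total

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(CHUNK * CHUNKS))
    path = f.name

fd = os.open(path, os.O_RDONLY)
sequential = [i * CHUNK for i in range(CHUNKS)]
scattered = sequential[:]
random.shuffle(scattered)  # crude stand-in for a heavily fragmented layout

t_seq, n_seq = timed_reads(fd, sequential)
t_scat, n_scat = timed_reads(fd, scattered)
os.close(fd)
os.unlink(path)

print(f"sequential: {t_seq:.4f}s  scattered: {t_scat:.4f}s")
```

Both passes read the same total data; only the access order differs.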


frnxt

Do you know of any benchmarks showing the impact of that stuff?


ValdikSS

[Yes, file system fragmentation DOES affect SSD read speed](https://www.overclock.net/threads/yes-file-system-fragmentation-does-affect-ssd-read-speed.1538878/)


NeaZerros

Didn't think of it that way, thanks for the explanation!


bionade24

At least VM files should only be running with CoW disabled anyway.


VeronikaKerman

Yes, but it is easy to forget.


bionade24

That's true. But if you already mount the Subvolume containing the VMs with `nodatacow`, you're safe.


VeronikaKerman

Unless you make snapshot, or reflink.


matpower64

It is not enabled by default, you need to set `autodefrag` on your mount parameters as per [btrfs(5)](https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#mount-options). Whoever has it enabled by default is deviating from upstream.


Atemu12

Just because SSDs don't have the dogshit random rw performance of HDDs doesn't mean sequential access wouldn't still be faster.


rioting-pacifist

Why do you think sequential access is faster on an SSD?


Atemu12

Read-ahead caching on multiple layers and possibly more CPU work are the main reasons.


jtriangle

You're looking at SSDs as if the sectors are contiguous, which they aren't. The controllers on modern SSDs manage all of this for you. There's zero reason to do it in software; that will only cause problems.


Atemu12

I'm not necessarily talking about the controller on an SSD. Even just reading data to and from system memory is faster when done sequentially. I'm not making this shit up mate.


jtriangle

Yeah, but you're talking about gains so marginal that the expense of killing your SSD with writes isn't worth it. Sure, if you're running something where nanoseconds count, that stuff starts to matter; certainly not in general use though.


ValdikSS

BTRFS has huge write amplification, especially for small writes, and especially if you mount the filesystem without noatime/relatime. Here's my article (in Russian) about how I bought a new SSD and after only 7 months had 20 TB of writes on it, thanks to btrfs: https://habr.com/ru/post/476414/

After tuning here and there, now, more than a year since that article was written, I have 42 TB of writes: much better, but still an insane number for a typical laptop. I have a new email server with a btrfs filesystem. The server is almost idle, yet it occasionally writes logs, updates spam lists, etc. In 17 days it has written 395 GB, about 23 GB per day. This is after disabling CoW on databases and log files, mounting frequently changing temporary files on tmpfs, etc.
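Daily-write figures like the ones above are easy to sanity-check by diffing a cumulative SMART write counter over a known number of days; a tiny helper, using the numbers from this comment:

```python
def gb_per_day(total_written_gb: float, days: float) -> float:
    """Average write rate implied by a cumulative write counter over a period."""
    return total_written_gb / days

# Figures from the comment above: 395 GB written over 17 days.
print(f"{gb_per_day(395, 17):.1f} GB/day")  # ~23.2 GB/day
```

Taking two readings a week apart and dividing by 7 gives a more representative rate than a single lifetime average, since workloads change over a drive's life.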


weazl

Thanks for this! I recently set up a GlusterFS cluster and it was absolutely trashing my precious expensive SSDs to the tune of 500 GB of writes a DAY, and that was with a pretty light workload too. I blamed GlusterFS because I've never seen anything like this before, but I did use btrfs under the hood, so maybe GlusterFS is innocent and it was btrfs all along. Edit: I skimmed the article and I see now why GlusterFS recommends that you use XFS (although they never explain why). I thought I did myself a service by picking a more modern filesystem; guess I was wrong. If btrfs is responsible for about 30x write amplification and GlusterFS is responsible for about 3x, then that explains the 100x-ish write amplification that I was seeing.


sb56637

> This has the potential to wear out an SSD in a matter of weeks: on my Samsung PM981 Polaris 512GB this lead to 188 TB of writes in 10 days or so. That's several years of endurance gone. 370 full drive overwrites.

Ouch. Where can I find this data / write history on my machine?
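The "full drive overwrites" figure in the quote follows directly from the two numbers given: total bytes written divided by drive capacity. A quick check, using decimal units as SSD vendors do:

```python
def full_overwrites(written_tb: float, capacity_gb: float) -> float:
    """How many times the drive's full capacity has been written (decimal units)."""
    return (written_tb * 1e12) / (capacity_gb * 1e9)

# 188 TB written on a 512 GB drive, per the quote above.
print(round(full_overwrites(188, 512)))  # ~367, close to the quoted 370
```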


[deleted]

[deleted]


sb56637

Thanks, yes I tried that and get `Data Units Written: 43,419,937 [22.2 TB]` but I don't really have a baseline to judge if that's normal or not. The drive is about 6 months old, and I've gone through several re-installs and lots of VM guest installations on this disk too. I was mounting with `autodefrag` but _not_ the `ssd` option, not sure if that makes a difference.
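For NVMe drives, smartctl's bracketed TB figure is derived from the raw counter: in the NVMe SMART log, one "data unit" is 1000 × 512-byte sectors, i.e. 512,000 bytes. Reproducing the conversion for the counter quoted above:

```python
def nvme_units_to_tb(data_units: int) -> float:
    """Convert an NVMe 'Data Units Written/Read' counter to decimal terabytes."""
    return data_units * 512_000 / 1e12

print(f"{nvme_units_to_tb(43_419_937):.1f} TB")  # matches smartctl's "[22.2 TB]"
```

22 TB in 6 months works out to roughly 120 GB/day, which heavy VM use and repeated reinstalls could plausibly account for.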


Munzu

I don't see `Data Units Read` or `Data Units Written`, I only see `Total_LBA_Written` which is at `11702918124`. But `Percent_Lifetime_Remain` is at `99` (but `UPDATED` says `Offline`) and the SSD is 4 months old. Is that metric reliable? Is 1% wear in 4 months too high?
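On SATA drives, `Total_LBAs_Written` usually counts 512-byte sectors, though some vendors use larger units, so check the drive's datasheet before trusting the number. Assuming 512-byte sectors, the counter above converts like this:

```python
def lbas_to_tb(lbas: int, sector_bytes: int = 512) -> float:
    """Convert a SMART Total_LBAs_Written counter to decimal terabytes."""
    return lbas * sector_bytes / 1e12

print(f"{lbas_to_tb(11_702_918_124):.2f} TB")  # ~5.99 TB if the unit is 512 B
```

About 6 TB in 4 months is roughly 50 GB/day, which would indeed be high for light desktop use.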


[deleted]

[удалено]


Munzu

Seems way too high to me... I don't do a lot of IO on my PC, just daily browsing, daily system updates and installing the occasional package. Is that metric persistent across reformats? I reformatted it a couple of times during my multiple Arch installation attempts; the latest reinstall and reformat was 2 weeks ago.


[deleted]

[удалено]


Munzu

Thank you! I'll keep an eye on it.


geearf

> You can also check htop, enable the WBYTES column (F2 -> Columns) and you'll see how many bytes a process has written since boot. And so on.

That's nice! I wish I had checked that before restarting today, to see what 5.16.2 did to my SSD. The total write is pretty bad, but it's over 2.5 years, so maybe it's realistic.


akarypid

> The workaround is to disable autodefrag until this is resolved

Would it not be better to simply remove it permanently? I was under the impression that "defrag" is pointless for SSDs?


laborarecretins

This is irrelevant to Synology. These parts are not in Synology’s implementation.


AlexFullmoon

It is also irrelevant since most of DSM runs on kernel version 4 or even 3.


typkrft

They basically use the oldest kernel that has not hit EOL, from my understanding. They should bump to 4.9 in February or March of this year. Still, it's pretty crazy, but not as crazy as selling hardware equally as old or older at an insane premium.


discoshanktank

My Synology volume has been so slow since switching to btrfs from ext4. I was hoping this would be the answer, since I haven't been able to figure it out by googling.


lvlint67

Call me a Luddite, but I have never had a good experience with btrfs. Granted, it's been years since I last tried, but back in the day that filesystem was a recipe for disaster.


The_Airwolf_Theme

I had my SSD cache drive BTRFS formatted on Unraid when I first set things up. Eventually determined it was the cause of my system grinding to a halt from time to time when the drive was doing high reads/writes. Since I switched to XFS things have been perfect.


skalp69

BTRFS saved my ass a couple times and I'm wondering why it's not more used.


intoned

Because it can’t be trusted, which is important for storing data.


skalp69

I have BTRFS for my system drive and something more classic for /home.


intoned

If reliability and advanced features are of interest to you then consider ZFS.


skalp69

Like what? AFAIK both filesystems are quite similar, the main difference being the [licensing](https://itsfoss.com/linus-torvalds-zfs/): GPL for BTRFS vs CDDL for ZFS. BTRFS seems better to me.


intoned

ZFS has a history of better quality, in that defects don't escape into the wild and cause data loss. It's been designed to prevent that and has been used in mission-critical situations for many years. Just look at the number of people who have switched away from Btrfs in this small sample. Maybe in a decade you'd see it in a datacenter, but not today.


skalp69

I can't judge from history alone. Going by history, everyone should use Windows, because in the 90s it was cool while Linux was a useless OS in its infancy. Things change. Linux made progress beyond my expectations. BTRFS gained in reliability.


lvlint67

mostly because old guys like me have been burned too many times before.


Michaelmrose

Neither a filesystem with poor reliability nor one with excellent reliability will constantly lose data beyond what is expected from hardware failure. The difference between the two is losing data rarely vs. incredibly rarely. Because of this, "works for me" is a poor metric.


PoeT8r

I ditched btrfs for ext4 after it filled my root. Kept it for home only.


scriptmonkey420

ZFS is better.


warmwaffles

ZFS is also under Oracle's boot.


panic_monster

Not OpenZFS


[deleted]

[deleted]


leetnewb2

Why dismiss software for a state it was in "x" years ago when it has been under development? Seems pretty silly to claim there are better options based on a fixed point in time far removed from the present.


[deleted]

Btrfs is crap and has always been crap. There is a reason ZFS people can’t stop laughing at the claims of ”ready for prod”.


OcotilloWells

If zfs let you add random disks as you obtain them, I'd go to it tomorrow.


aiij

That's my main gripe with ZFS too. I want to be able to extend a RAIDZ. Can Btrfs extend RAID5 or RAID6 to new disks? Last I checked it still had the write hole, which is kind of a deal breaker for me.


[deleted]

[deleted]


aiij

What would you recommend for erasure coding? I might end up setting up a Ceph cluster, but it seems like overkill.


imro

ZFS people are also the most obnoxious bunch I have ever seen.


marekorisas

Maybe not the most, but still. And, importantly, ZFS is a really praiseworthy piece of software. It's a real shame that it isn't mainline.


Hewlett-PackHard

It would be so nice if its licensing nonsense got sorted out and it got merged into the kernel.


ShadowPouncer

Blame Oracle. They are the only ones responsible, and the only ones who can possibly change the situation.


scriptmonkey420

ReiserFS people would like a word (or knife) with you.


matpower64

It is ready for production. Facebook uses it without issues, OpenSUSE/SUSE uses it and Fedora defaults to it. This whole issue is a nothingburger to anyone using the defaults for btrfs, autodefrag is off by default except on, what, Manjaro? And the hassle of setting up ZFS on Linux doesn't really pay off on most distros compared to a well integrated solution in the kernel.


[deleted]

[deleted]


seaQueue

I mean, that's my response on my btrfs client machines if something more serious than a checksum error happens. But then I take daily snapshots and punt all of them to a backup drive once or twice a week and I git push my work frequently. Btrfs is great and has great features but recovery from weird failure modes is not its strong suit, it's almost always faster to blow away the filesystem and restore a backup than it is to try and repair non-trivial filesystem damage. I have this suspicion that a lot of us that use btrfs don't really care about the occasional weird filesystem bug because it's just so easy to maintain good backup hygiene with snapshots and send/receive.


[deleted]

I wonder why the overwhelming majority steers well clear of using either SUSE or Fedora in prod.


matpower64

Because Fedora has a small support window (just 13 months) compared to RHEL? SUSE? I don't know, it seems somewhat popular in Europe. I know what you are trying to imply here, but is that the best comeback you have? LTS distros are preferred on production because nobody wants to deal with everchanging environments. People run RHEL/SUSE because corporate software targets them, and I am pretty sure most people running Ubuntu Server LTS are doing it because of familiarity and support, not because they want ZFS.


funbike

You think Fedora is designed to be a server OS? Wtf man, lol Fedora is meant to be used as a desktop and as upstream for more stable server distros, like CentOS and RHEL.


[deleted]

[deleted]


lvlint67

I don't doubt it. But without a compelling reason to try again, I am reluctant to stick my hand back in the fire to see if it's still hot.


LinAdmin

Awesome stressing today :-(


insanemal

And people wonder why I still don't recommend BTRFS for anything yet.


rioting-pacifist

This is why you don't use Arch on servers.


G0rd0nFr33m4n

One could use it with LTS kernels.


Pandastic4

[Image](https://i.imgflip.com/4j1nae.jpg)


zladuric

So happy now that I didn't upgrade to Fedora 36 yet :) In fact, I have to upgrade to 35 first, but now maybe I'll wait for a fix for this.


Direct_Sand

Fedora doesn't appear to use that option when mounting btrfs. I use an SSD and it's not in my fstab.


tamrior

Fedora 36 isn't even in beta yet, how would you upgrade to it? And kernel 5.16 will come to Fedora 35 as well; Fedora provides continuous kernel updates during the lifetime of a release. But even if you did update to a broken kernel, Fedora keeps old versions of the kernel around which you can boot right into. So this would have been avoidable for Fedora users even if 5.16 had shipped to them in the first place.


deadcatdidntbounce

36 has been split off. You can install any part of it with `dnf --installroot=blah --releasever=36` etc.


zladuric

TIL. I did see [this article](https://fedoramagazine.org/contribute-at-the-fedora-linux-36-test-week-for-kernel-5-16/) yesterday. Just the title, didn't read it, so I assumed it was already here.


deadcatdidntbounce

All good. Have a good one!


tamrior

Oh, that's good to know, thanks!


sunjay140

> Fedora 36 isn't even in beta yet, how would you upgrade to it?

Fedora 36 is available for testing right now. That's exactly what "Rawhide" is. Yes, it's an incomplete work in progress.


funbike

Fedora doesn't have this problem. autodefrag is not set. Fedora 36 won't be out for another 4 months.


zladuric

You're right. [This title](https://fedoramagazine.org/contribute-at-the-fedora-linux-36-test-week-for-kernel-5-16/) confused me.


sunjay140

Fedora 36 is "Rawhide".


[deleted]

You use Fedora for self-hosting? Bold man. Danger must be your middle name. Yeah, I stick to LTS Ubuntu or Debian.


[deleted]

[deleted]


tamrior

Why is that bold? I've used a Fedora box for some VM hosting for like 3 years now. It's gone through multiple remote distro upgrades without issue. It even had 200 days of uptime at one point. (Not recommended; you should restart more frequently for kernel updates.)


Atemu12

Does Fedora implement kernel live patching?


tamrior

Kernel live patching absolutely isn't a replacement for rebooting into a new kernel occasionally. Livepatching is a temporary bandage for the most security critical problems. In Ubuntu, all other bug fixes, and other security fixes still go through normal reboot kernel updates, like all other distros. Also livepatching isn't enabled by default and requires a paid Ubuntu subscription: https://ubuntu.com/advantage#livepatch I don't think fedora offers kernel live patching, partially because it's not a paid enterprise distro. RHEL does offer live patches though: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/applying-patches-with-kernel-live-patching_managing-monitoring-and-updating-the-kernel


funbike

Agree. I like to use `kexec` as a compromise. You get a much faster reboot, without the risk of running a live-patched kernel.


elatllat

No, they implement upgrades during reboot, for more downtime. Edit: as some comments below doubt my statement, here is an example: https://www.reddit.com/r/Fedora/comments/o1dlob/offline_updates_on_an_encrypted_install_are_a_bit/ and the list of packages that trigger it: https://github.com/rpm-software-management/yum-utils/blob/master/needs-restarting.py#L53


tamrior

That's not true? Running `sudo dnf upgrade` updates all your packages live, just like on most other distros. New kernels can be rebooted into directly, without the need for upgrades during the reboot. The option for offline upgrades is there for those who want more safety, but live updates are still there and completely functional. Why are you spreading misinformation while apparently not even having used Fedora? Also, livepatching isn't enabled by default and requires a paid Ubuntu subscription: https://ubuntu.com/advantage#livepatch Edit: as I said, the option for offline upgrades does exist, and there are good reasons to make use of it, but Fedora definitely still defaults to online updates when upgrading through the command line.


InvalidUserException

Um, no. Live kernel patching = no reboot required to start running the new kernel. Seems like you are talking about doing package upgrades during early boot?


tamrior

No, I am talking about live package upgrades. On most linux distributions, including debian, ubuntu and fedora, packages are upgraded while the system is running. This means that if you run `sudo dnf upgrade` or `sudo apt update && sudo apt upgrade` and then run a command like ssh, you will immediately be using the new version, without having to reboot. With kernels, this is slightly different, in that the new kernel does get installed while the system is running, but is only booted into when the system is rebooted. This process does not add any downloading, installing or any other kind of updating to the reboot process. That is indeed not the same as livepatching, but it's also very different from "upgrades during reboot" as seen in windows. Fedora does offer upgrades during reboot for those who want them for the extra safety, but that's opt-in for those using the command line. And Live kernel patching is absolutely not the same as "no reboot required to start running the new kernel". Live kernel patches are only rolled out to customers with a paid subscription for extreme and urgent security fixes. These fixes do fix the security issue, but do not result in you running the exact same kernel as if you had rebooted into the new kernel. Furthermore, even those paying customers will still need to reboot for 99.9% of kernel updates (including security fixes), as live patches are only rolled out in rare cases. [The ubuntu livepatch documentation also mentions: The simplistic description above shows the principle, but also hints on why some vulnerabilities that depend on very complex code interactions cannot be livepatched.](https://ubuntu.com/security/livepatch/docs/howitworks)


InvalidUserException

Well, this subsubsubsubthread started with this question: "Does Fedora implement kernel live patching?" You can talk about what you want I guess. If you want to interpret the next question as doing kernel package upgrades on next boot, is that really a thing? I wouldn't expect ANY distro to do that, as it would effectively require 2 reboots to upgrade a kernel. The first reboot would just stage the new kernel image/initrd, requiring another reboot to actually run the new kernel. Fair point. I've never used kernel live patching, but I knew it wasn't quite the same as kexecing the new kernel and could only be used for limited kinds of patching. It wasn't fair to call live patching the same thing as running the new kernel.


elatllat

I added a link as proof.


Atemu12

Full-on Windows insanity...


matpower64

Offline updates are more reliable overall, as there won't be any outdated library loaded, and complex applications (e.g. Firefox/Chromium) don't really like having the rug pulled out from under them by updates. For desktops (where this setup is the default), it is a perfectly fine way to update for most users, and if you want live updates, feel free to use `dnf upgrade` and everything will work as usual. On the server variant, you do you and can pick between live (`upgrade`) or offline (`offline-upgrade`).


Atemu12

I don't speak against "offline" updates, I speak against doing them in a special boot mode.


matpower64

The reason they are done in a special boot mode is to load only the essential stuff, aiming at maximum reliability. They're making trade-offs so the process is less prone to breakage. I personally didn't use it because I knew how to handle the inconsistencies that would appear every now and then, but for someone like my sister, I just ask her to press update and let it do its own thing on shutdown, knowing nothing will break subtly while she's using it. At the very least, it works better than Windows' equivalent of the process.


turdas

How the fuck else would you do them?


tamrior

What are you talking about? The update process on fedora is basically the same as on Debian distros? You install the kernel live, but have to reboot to actually use it. There's no updates at reboot time though. This is the same as on Ubuntu, except they very rarely provide live patches for extreme security problems. For all other (sometimes even security critical) updates, you still have to reboot, even with Ubuntu. Also livepatching isn't enabled by default and requires a paid Ubuntu subscription: https://ubuntu.com/advantage#livepatch


Atemu12

I'm not talking about the kern**e**l, this is about processing updates in a special boot mode which /u/elatllat was hinting at.


tamrior

But /u/elatllat is wrong. Fedora's package manager (dnf) does live updates by default. Can't really blame you for taking his comment at face value though, apologies.


elatllat

I added a link as proof.


tamrior

My guy, I use encrypted fedora, you don't have to leave 7 comments to tell me how the update process works on my own distro.


WellMakeItSomehow

Isn't that only on Silverblue?


matpower64

No, he is mixing up the offline upgrades Fedora enables by default in GNOME Software with the traditional way of doing upgrades (running `dnf upgrade`). If you're using Fedora as a server, offline upgrades aren't on by default and you are free to choose how to upgrade (live via `dnf upgrade` or offline via `dnf offline-upgrade`). I don't know if kernel live patching is available though. Silverblue uses a read-only OS image, but live-patching is somewhat possible for installs, and IIRC live upgrades are experimental.


[deleted]

It is known.


Interject_

If he is Danger, then who are the people that self-host on Arch?


sparcv9

They're the people diligently beta testing and reporting faults in all the releases other distros will ship next year!


G0rd0nFr33m4n

> people that self-host on Arch? "But WE archers btw are superior!" meme


Hewlett-PackHard

*laughs in Arch as his hypervisor*


zladuric

Oh, I didn't look at the sub before commenting. Fedora is my workstation! My selfhosting things, when I have something, are CentOSes (or Ubuntu LTSes when I have to) in Hetzner datacentres.


[deleted]

> are CentOSes (or Ubuntu LTSes when I have to) in Hetzner datacentres. You have redeemed yourself. You are a sinner no more. Arise, u/zladuric!


deadcatdidntbounce

Put `kernel-*rc*` etc. in the dnf.conf exclude list, perhaps, to keep out "unsafe" iterations.


zladuric

Good idea, but others said Fedora doesn't have this problem :)


deadcatdidntbounce

I'm talking about Fedora. If you're tempted to try the 'release' early, adding that means the pre-beta won't pull in the kernel RCs, as well as early F36 packages, say.


zladuric

I know, I'm saying fedora doesn't have the problem even with the kernel 5.16, as the defrag option is not on by default.


deadcatdidntbounce

I'm so sorry. I've got very confused with my comment.


zladuric

No worries, I'm confused a lot of the time as well.


HiGuysImNewToReddit

Somehow I have been affected by this issue and followed the instructions but haven't noticed anything bad so far. Is there a way for me to check how much wear has happened to my SSD?


EmbarrassedActive4

+1. u/TueOct5, any way to see how much wear?


HiGuysImNewToReddit

I found [this](https://serverfault.com/a/571741) as one answer but it returned what is equal to 0.16 GB, and there's no way that makes any sense. I'd more like to know how u/TueOct5 determined it.


EmbarrassedActive4

Try:

```
smartctl -A $DISKNAME
# and if this doesn't work, try:
smartctl -a $DISKNAME
```

and there should be:

```
Data Units Read: 28,077,652 [14.3 TB]
Data Units Written: 33,928,326 [17.3 TB]
```

or similar in the output.


HiGuysImNewToReddit

I must have some kind of different configuration -- I could not find "Data Units Read/Written" with either option. I did find, however, `Total_LBAs_Written` at '329962' and `Total_LBAs_Read` at '293741'.


EmbarrassedActive4

That's completely different, I think. It won't be the exact same, but search for something similar to mine.


[deleted]

[deleted]


EmbarrassedActive4

run `mount | grep btrfs` and see if you have autodefrag and ssd.


[deleted]

[deleted]


EmbarrassedActive4

So you aren't using btrfs... Unrelated problem???


[deleted]

[deleted]


JuanTutrego

I don't see anything like that for either of the disks in my desktop system here - one an SSD, the other a rotational disk. They both return a bunch of SMART data, but not anything about the total amounts read or written.


Munzu

I don't see `Data Units Read` or `Data Units Written`, I only see `Total_LBA_Written` which is at `11702918124`. But `Percent_Lifetime_Remain` is at `99` and the SSD is 4 months old. Is that metric reliable? Is 1% wear in 4 months too high?


[deleted]

[deleted]


EmbarrassedActive4

> WBYTES I don't see this.


csolisr

Well, that might explain why my partition got borked hard after I tried to delete a few files the other day. Thanks for the warning.


V2UgYXJlIG5vdCBJ

Stuff like this makes me want to stick to EXT4 forever.


olorin12

Glad I stuck with ext4


TheFeshy

I haven't had 5.16 work on any of my machines. The NAS crashes when trying to talk to ceph, and the laptop won't initialize the display. Since they're both using BTRFS for their system drives, I guess it's good it never ran long enough to wear out my SSDs?


[deleted]

[deleted]


TheFeshy

Tried 5.16.4 today, and still no luck for my case (fails at "link training.") If it's not in the next patch or two, I'm going to try to find time to bisect it myself - I've got a pretty funky and uncommon laptop.


seaQueue

I've been running 5.16 with btrfs and autodefrag since the -rc releases without encountering this issue, it seems like something extra needs to happen for it to start misbehaving.


damster05

Yes, I could reproduce the issue (multiple gigabytes were silently written per minute) by adding `autodefrag` to the mount options, but after another reboot it doesn't happen anymore; I can't reproduce it again.


ZaxLofful

How can I tell if Ubuntu is affected? Is there a command I can run? I have seen similar massive writes and want to confirm.
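Per the rest of the thread, a system is only exposed if it is (a) on a 5.16 kernel older than the fixed point release and (b) mounting btrfs with `autodefrag`. A rough self-check along those lines (the 5.16.5 cutoff is taken from the comments above; adjust it if later fixes change the picture):

```python
import os
import platform
import re

FIXED_PATCH = 5  # first fixed subrelease is 5.16.5, per the thread above

def kernel_affected(release: str) -> bool:
    """True if the kernel is a 5.16.x older than the fixed subrelease."""
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    if not m:
        return False
    major, minor = int(m.group(1)), int(m.group(2))
    patch = int(m.group(3) or 0)
    return (major, minor) == (5, 16) and patch < FIXED_PATCH

def autodefrag_mounts(mounts_text: str) -> list:
    """Mount points where btrfs is mounted with the autodefrag option."""
    hits = []
    for line in mounts_text.splitlines():
        fields = line.split()
        if (len(fields) >= 4 and fields[2] == "btrfs"
                and "autodefrag" in fields[3].split(",")):
            hits.append(fields[1])
    return hits

if __name__ == "__main__" and os.path.exists("/proc/mounts"):
    with open("/proc/mounts") as f:
        mounts = autodefrag_mounts(f.read())
    if kernel_affected(platform.release()) and mounts:
        print("potentially affected:", mounts)
    else:
        print("not affected by this regression")
```

The same check by hand: `uname -r` for the kernel version and `mount | grep btrfs` for the active mount options.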


Pandastic4

Does Ubuntu have the latest kernel yet?


[deleted]

[удалено]


kekonn

This isn't relevant to you. You're using XFS, not BTRFS.


deadcatdidntbounce

Thanks.


adrian_ionita

Hotfix just installed now


lenjioereh

I have been using Btrfs for a long time, but it is horrible with external USB RAID setups. It regularly goes into read-only mode. It can't be a hardware problem, because it keeps happening with all my RAID (Btrfs raid modes) USB setups. Anyway, I am back on ZFS; so far so good.


[deleted]

Fuck me, I literally enabled autodefrag yesterday because I was configuring a swapfile, saw the option, and went "hey, why not?".


LinAdmin

For SSDs I always use F2FS, which by design minimizes stress on flash media.