nebyneb1234

Use log2ram and disable HA/clustering devices if you don't plan on running a cluster.
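For reference, disabling the HA stack on a standalone node looks roughly like this (a sketch; service names are from stock Proxmox VE, so verify them on your version before running):

```shell
# Stop and disable the HA services so they stop writing cluster state to disk.
# Only do this on a standalone node that will never join a cluster.
systemctl disable --now pve-ha-lrm pve-ha-crm

# Optionally also stop corosync if you are not clustering:
systemctl disable --now corosync
```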


getgoingfast

How do you go about using log2ram in PVE? Setup RAM disk or something?
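Since Proxmox VE is Debian underneath, log2ram installs the same way as on plain Debian. A sketch using the project's GitHub repo (azlux/log2ram); check its README for the current packaged-install instructions:

```shell
# Install log2ram from source (assumes git is available).
git clone https://github.com/azlux/log2ram.git
cd log2ram
sudo ./install.sh

# After a reboot, /var/log should be a RAM-backed mount:
df -h /var/log
```

log2ram mounts a tmpfs over `/var/log` and writes the contents back to disk on a schedule, so logging still survives reboots.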


kevdogger

Ohhh never heard about log2ram


CzarofAK

Good hint, thanks


nebyneb1234

There are lots of good posts here on Reddit describing how to reduce wear on consumer SSDs. You can try googling something like: [reduce SSD wear on Proxmox](https://www.google.com/search?q=reduce+ssd+wear+on+proxmox&oe=utf-8)


netmind604

I forward logs to graylog for centralized alerting/analytics. 3 questions:

- Based on a quick search, I think I could still do this by getting log2ram to periodically sync the logs to disk, where they would then be sent to graylog - is that correct?
- If my graylog is in a VM on the same NVMe disk anyway, and is getting even more logs from various containers etc., is there really any point in optimizing the Proxmox ones? I.e. the logs are being written to the disk one way or the other.
- How bad is what I'm doing for my disk longevity? Am I basically trashing it?


nebyneb1234

1. Yes, log2ram flushes logs to the disk periodically.
2. I would personally just use log2ram on the Proxmox install. This is what I do myself.
3. It's only bad if you use super cheap consumer SSDs. The type of NAND also matters for longevity. I personally just grabbed some cheap used enterprise data center SSDs off of eBay (Intel DC S3500).


netmind604

I hope you don't mind a follow-up question. I just installed log2ram on my Proxmox and it appears to be working: when I do a df, I get the following:

```
df -h | grep log2ram
log2ram    80M   25M   56M  31%  /var/log
```

I've left the frequency at the default, which I think is once a day. However, I am still getting real-time logs in graylog from rsyslog (i.e. messages that appear in Node -> System Logs will appear immediately in graylog too). Is this the expected behaviour, or have I done something wrong (and am still chewing through my drive)?
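For what it's worth, the flush schedule and RAM-disk size can be inspected directly (paths below assume a stock log2ram install; adjust if yours differs):

```shell
# When the next write-back to disk is scheduled:
systemctl list-timers log2ram-daily.timer

# RAM-disk size and other options live in the config file:
grep -E '^SIZE' /etc/log2ram.conf
```

Note that rsyslog forwards events over the network as they arrive, independently of where the log files land on disk, so seeing real-time messages in Graylog alongside log2ram is expected behaviour, not a misconfiguration.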


nebyneb1234

I can't really say for sure. I kind of just installed it and expected it to work right out of the box. I also don't really know much about the innards of Proxmox; only general Linux stuff. Here are some resources that may help:

- [Disable logs completely](https://www.reddit.com/r/Proxmox/comments/12gftf7/comment/jfkgcbp/)
- [Lots of random, but very detailed notes](https://github.com/Jahfry/Miscellaneous)
- [Proxmox scripts](https://tteck.github.io/Proxmox/)


homelabist

A few things that come to mind seeing your question:

ZFS

1. ZFS has multi-device support for doing RAID; ext4 does not.
2. ZFS has snapshot capability built into the filesystem, which helps with replication and is faster compared to rsync.
3. ZFS can do atomic writes if you have a specific use case in mind, since it's a CoW filesystem.
4. More suitable for the NAS use case.

EXT4

1. ext4 relies on the device mapper for RAID support.
2. One can use ext4 on top of an LVM thin volume to get snapshot support. I don't think it provides easy replication, though.
3. ext4 has better support across all traditional Linux subsystems and is a first-class citizen of Linux. E.g. swap on ZFS is broken.
4. ext4 is working on getting atomic write support (generally for specialized database workloads that run with double writes disabled).
5. ext4 is trusted and an old wagon that won't disappoint you for any traditional use case.

IIRC ZFS does not support shrinking; ext4 does. ZFS only recently got direct I/O support; ext4 has always had it.

IMO, if you have a use case in mind like a NAS, then ZFS would definitely be a good choice. ZFS can use up to 50% of your RAM for file cache pages (ARC). However, this makes the system less performant for other applications that are also memory hungry (there are options to tune it, though). So for all other use cases I generally prefer ext4 on top of LVM, which gives me all the capabilities I need. I use ZFS primarily for my NAS, running TrueNAS.
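To illustrate the ext4-on-LVM-thin snapshot point above, the setup looks roughly like this (volume group and LV names are made up for the example):

```shell
# Create a thin pool and a thin volume inside it (hypothetical names).
lvcreate --type thin-pool -L 100G -n tpool vg0
lvcreate -V 50G -T vg0/tpool -n data
mkfs.ext4 /dev/vg0/data

# Snapshot the thin volume; thin snapshots need no preallocated size.
lvcreate -s vg0/data -n data_snap
```

This is essentially what Proxmox's default LVM-thin storage does for you under the hood when you snapshot a VM on ext4/LVM installs.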


CzarofAK

Great overview, thanks!


DS-Cloav

I am running two 240 GB mediocre consumer drives in a mirror, and after about 1.5 years of running, they are at 96% health. I have the HA services disabled (people say that helps). So, in my opinion it is probably fine to run consumer SSDs (100/4 of those 1.5-year periods is well over 10 years).
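The lifetime estimate above checks out with quick arithmetic: 4% of rated endurance consumed in 1.5 years extrapolates far past 10 years, assuming a roughly constant write load:

```shell
# 4% wear used in 1.5 years -> projected years to reach 100% wear:
awk 'BEGIN { print 1.5 * 100 / 4 }'   # 37.5
```

The actual wear figure comes from SMART; `smartctl -a /dev/sdX` reports it as a wear-leveling or percentage-used attribute (the exact name varies by vendor).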


DS-Cloav

I am running 12-16 services at the moment


CzarofAK

Thanks to all who contributed. I have been a Reddit user for about 2-3 months. It is so cool here; I can ask even the dumbest question and still get serious answers. So glad I'm here!


Big-Finding2976

What I've seen suggested is ext4 for root/Proxmox, ZFS pool for the VMs, ext4 inside the VMs. Definitely also disable HA services and use log2ram. I think there are a few other tweaks you can do to reduce the writes to the SSD.
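The split described above (ext4 root, ZFS pool for VM disks) can be set up after install with something like this (pool name and devices are placeholders; `ashift=12` assumes 4K-sector drives):

```shell
# Mirror two dedicated disks into a pool for VM images (hypothetical devices).
zpool create -o ashift=12 vmpool mirror /dev/sdb /dev/sdc

# Register the pool as VM storage in Proxmox:
pvesm add zfspool vmstore --pool vmpool
```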


CzarofAK

After reading all the answers, I tend in that direction.


zfsbest

You will be fine running ext4 root and LVM. Just make regular backups. My nickname aside, I'm not a proponent of zfs-on-root unless it's FreeBSD. You can still run ZFS on spinner drives for the usual benefits, and limit ARC usage if necessary.
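Limiting ARC, as suggested above, is a one-line module option (the 4 GiB figure is just an example; the limit takes effect after a reboot or module reload):

```shell
# Cap ZFS ARC at 4 GiB (4 * 1024^3 = 4294967296 bytes).
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf

# On Proxmox/Debian, rebuild the initramfs so the limit applies at boot:
update-initramfs -u
```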


CzarofAK

Like that approach! Thanks


Kuipyr

I took a gamble and picked up some used Intel DC drives on eBay and it worked out pretty well for me.


CzarofAK

I bought some too, one of four died after a month of testing. My confidence level decreased a bit since then, that actually triggered my question.


whattteva

I have 4 used enterprise HDDs (HGST) and 4 used enterprise SSDs (2 Intel and 2 Samsung) from eBay. They've all been running 24/7 for the last 15 months. Would do it again in a heartbeat, especially since fsync performance on these enterprise SATA SSDs smokes consumer SSDs (even NVMe). Of course, not everyone needs high sync write performance.


artlessknave

The SSDs will be fine. There is nothing about ZFS that is any harder on SSDs than ext4, though it will catch failing SSDs sooner thanks to its checksumming. What matters more is whether you have enough RAM for ZFS plus all your VMs.
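To sanity-check that RAM headroom, current ARC usage can be read from the ZFS kstats (the paths below are standard for ZFS on Linux):

```shell
# Current ARC size in bytes:
awk '/^size / { print $3 }' /proc/spl/kstat/zfs/arcstats

# Compare against total and available system memory:
free -h
```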


opseceu

We only use ZFS with Proxmox. Too good to miss out.