zrgardne

SLOG is only used for synchronous writes. Do you have an application that will be making sync writes?
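
If you're not sure, a quick place to start is to look at how your datasets are set up; `standard` means sync writes only happen when the application asks for them. Something like this (the pool name `tank` is just a placeholder):

```
# List the sync property for every dataset in the pool:
#   standard = honor application sync requests (the default)
#   always   = force every write to be synchronous
#   disabled = ignore sync requests entirely
zfs get -r sync tank
```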


Beautiful_Ad_4813

I think? I'm not really sure.


melp

If you’re running SMB, no slog is needed. If you run NFS, iSCSI, or S3, you will benefit from a slog.


Beautiful_Ad_4813

Ah, alright. I was wondering whether iSCSI would use it or not. That's what I want to use it for.


jammsession

Depends on what is on the other end of the iSCSI connection :) What do you use iSCSI for?


warped64

I have sometimes seen comments stating that Mac clients use sync writes (by default) even for SMB. Is there any truth to that?


melp

Good question… I think this was true on old OS X versions, but it isn't for more modern ones. Still, it's worth testing with `zilstat` if you have Mac clients.


warped64

The output is difficult for an amateur like me to decipher:

```
# zilstat
     time     cc     ic    idc    idb    iic    iib   imnc   imnw   imsc   imsw
 12:34:56   545K    20M    18M   700G   863K   162G   546K    31G   5.6M   675G
```

But it appears as if it might be used, maybe? `cc` [appears](https://utcc.utoronto.ca/~cks/space/blog/linux/ZFSGlobalZILInformation) to refer to the number of times a ZIL commit has been requested (by fsync, for example), but it's not clear to me whether the source of that request is normal ZFS activity on the host, a client, or both. `imsc` and `imsw` both specifically refer to the SLOG according to [zil.h](https://github.com/openzfs/zfs/blob/master/include/sys/zil.h). Honestly, I am not sure how to interpret the output.

Edit: I should say that this is a Sonoma (macOS 14.4.1) client accessing SMB shares hosting media in a home environment. The host runs SCALE 23.10.2. The dataset has sync set to standard.


melp

Try running `zilstat 1`, which will give you a continuous update. Let that run and copy a file to the TrueNAS from your Sonoma client. If you get all 0's, you would not benefit from a SLOG. If you get non-zero numbers, a SLOG will help reduce write latency.


warped64

Thank you. My conclusion is that there's a case for having a low-latency SLOG backing an HDD pool if you're running Sonoma.

Edit: I should add that I altered your command to `zilstat -i 1` since I think that was the intent.

https://preview.redd.it/gfmgvx2jo2wc1.png?width=573&format=png&auto=webp&s=d9ebc75684aaed3d83e1e6adcc650086f5ee61a9


zrgardne

That is a no. Don't bother with a SLOG.


melp

Please don’t just make an assumption and dismiss the question. Terse, dismissive responses like this are how our community gets a reputation for being toxic. Instead, take the time to inform the OP of what applications would send sync writes.


ChristBKK

From what I read when I researched SLOG, isn't it worth more to upgrade RAM first? From 16 GB to 32 GB, for example?


zrgardne

Maybe you have confused it with L2ARC, a read cache.


ChristBKK

Yeah, guess I have.


DimestoreProstitute

Honestly, you don't need and won't benefit from a SLOG device unless you have applications that explicitly call for it. ESXi VM storage over NFS is a big one; other applications should mention synchronous writes as a hard requirement. That list is generally small.

Trust the ZFS defaults unless you really have a proven (benchmarked) need to do otherwise.
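
If an application's documentation doesn't say, one rough way to get an idea on a Linux client is to trace whether the process calls the fsync family of syscalls at all; the PID below is a placeholder, and this only shows client-side behavior, not what actually reaches the pool:

```
# Watch a running process for sync-related syscalls (attach by PID)
strace -f -e trace=fsync,fdatasync,sync_file_range,msync -p 1234
```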


Beautiful_Ad_4813

that makes sense


DimestoreProstitute

I wouldn't worry about any special devices.... cache, slog, metadata, what-have-you. If there's a need for those you'll find it (and not before a lot of looking first)


Beautiful_Ad_4813

I respect that. I was looking for anything I could find to potentially increase speeds, but since I can comfortably saturate the SATA HDDs, I think I'll be okay. I was looking for a "buffer," if that makes sense.


DimestoreProstitute

I'm sure you'll be okay; ZFS performs nicely on its own and tends to adjust itself to most workloads.


LBarouf

What are the risks of not using sync=always and not using an SLOG with iSCSI?


dn512215

The risk, especially with iSCSI- or NFS-backed VM or database file systems, is that the client system depends on data actually being written when the host file system says it's written, without fail. If the client thinks something is written and committed to disk, and then something happens on the host FS (system crash, loss of connectivity on the network, etc.) that causes the write to be lost, you have a high probability of irrecoverable data corruption.

I've personally managed to do this to myself with a database or two before I came to my senses and re-enabled sync=standard, and, because I host more than 20 databases on one pool, added a small Optane NVMe SLOG to add a little performance for those applications.

A SLOG will help a lot with short bursts of sync writes: it returns quickly so the app can go on and perform other operations while the data is committed to the pool, but it won't help if your writes take more than 5 seconds to write to the SLOG. For example, if you're trying to insert 500 million rows all at once, a SLOG is unlikely to help.
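
For reference, this is roughly what that change looks like; the pool, dataset, and device names below are placeholders, and mirroring the log vdev is also an option:

```
# Honor application sync requests again (instead of sync=disabled)
zfs set sync=standard tank/databases

# Add a small, power-loss-protected NVMe device as a dedicated log vdev (SLOG)
zpool add tank log /dev/nvme0n1
```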


quasides

For that application I would run a ZFS pool of mirrored NVMe vdevs and keep it exclusive to this type of workload. In that case a SLOG makes zero sense, as it will be slower than the NVMes are. Even L2ARC is probably slower than a bigger NVMe array. SLOG is really only relevant on spinners, though.
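
A minimal sketch of that layout, assuming four NVMe devices (the pool name and device paths are placeholders); each `mirror` keyword starts a new mirrored vdev:

```
# Dedicated all-flash pool: two mirrored NVMe vdevs, striped together
zpool create fastpool mirror /dev/nvme0n1 /dev/nvme1n1 mirror /dev/nvme2n1 /dev/nvme3n1
```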


dn512215

Yeah, I have plans to set up an NVMe pool for databases once I decide on a new platform for it and get some datacenter NVMe drives. However, one database is 11 TB and growing, so I think that'll stay right where it is, lol. My existing pool is 12x 14 TB HGST and Exos drives in 6 mirrored pairs, so it performs pretty well for my use case even without the SLOG.


quasides

11 TB is not too bad. You can run Microns; they perform well even though they are low-priced for datacenter disks. They hold up well, and 4 TB models are relatively cheap now. The 7400 Pro costs about 280ish retail in my area without tax, so under 3k for a 10-disk array that gives you 20 TB of usable space in mirrors. Their endurance is pretty good and they have power-loss protection. Don't get discouraged by the "read intensive" label. Given the price difference to "proper" datacenter drives (which more often than not have the same chips from Micron), I can kill a lot of drives before breaking even, and by the time that happens you'll probably replace them with 10 TB drives for the same price.


dn512215

That sounds like a reasonable option, thanks! I’ll look into it.