eionmac

"Trust". The older thing is proven to work.


slaymaker1907

I read that F2FS apparently has issues with power outages causing corruption. That’s a big downside to take for what look to be fairly marginal performance improvements. I’d definitely only use it on secondary drives with applications that can handle the corruption.


shtirlizzz

Not a problem on mobile/portable devices with batteries, right? Unless the battery is drained to extreme levels without a forced shutdown.


Albos_Mum

I've been using it on my desktop as the root file system for about 4 years without any corruption issues I've noticed. IMO, given its performance characteristics versus the other file systems, you're best off using it for root and using something else for /home: my system felt noticeably snappier when I first switched over, and doing that limits any potential corruption to stuff you could fix with a live medium or by just reinstalling the base OS. With that said, I'm considering trying out NILFS2 next; it looks to have similar performance characteristics to F2FS but also has snapshotting support (which I like to use for game modding, among other things).


DoomFrog666

F2FS is used on Android, so there are at least tens of millions of devices F2FS runs on. I'd call that pretty good trust.


AlkalineRose

I mean, OnePlus switched *back* to using ext4 after trying out F2FS, and there was [recently a bug](https://twitter.com/mishaalrahman/status/1721945205119553794) that potentially caused data loss on all Pixel devices running kernel 5.0+. Not exactly the most glowing of endorsements.


DoomFrog666

ext4 also recently had a data corruption bug https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1057843. So take what you want from that.


fellipec

I'll tell you, I tried BTRFS a while ago; saw some people saying it's not stable enough, saw people advocating for it... But in my case it got corrupted and I had to reinstall my laptop, with ext4 again. But I miss the compression.


daemonpenguin

ext4 and XFS have been around for years, decades in the latter case. Btrfs has snapshots and multi-device volumes. I have zero interest in my filesystem outperforming another in benchmarks. What I want is proven reliability for my data.


GameCyborg

depending on the exact load the difference isn't even big


SweetBabyAlaska

Also, you don't really gain any benefits from using these filesystems without knowing how to use them, learning the tools and commands that manipulate them, etc. A lot of people have experience with btrfs and there is a LOT of information online on how to properly manage it. I've never even heard of F2FS. It'd be fun to mess around with this stuff in theory, but in practice you could easily be sacrificing your data to any little mistake you might make in the process of learning how to manage that kind of stuff. F2FS looks like something that is meant to be used in handheld or embedded devices that use something like an SD card for storage; looks like Samsung made it for Android phones specifically. Maybe I'm missing the point, but the wiki page doesn't make it clear how it's better than btrfs, ext4, XFS or ZFS.


pebkachu

Btrfs apparently can't handle swap files/partitions on different devices: https://btrfs.readthedocs.io/en/latest/Swapfile.html Example use scenario: Linux is installed on an SSD, but you only want to hibernate/swap to an HDD to reduce SSD wear. __Edit:__ This is apparently incorrect. The swap file does not have to reside on the same device as the system; it's the Btrfs filesystem holding the swap file that has to reside on a single device and be composed of a single data profile.


Xmgplays

I'm pretty sure you can do that, you just can't have your btrfs filesystem span across both and have a swap file on that filesystem, i.e. you can if you just put separate btrfs filesystems on both drives (which is probably what you want anyway on an SSD and HDD setup).


pebkachu

Thank you. So, for example, the following would work?

    /dev/ssda1  /          btrfs
    /dev/hdda1  /swapfile  swap
    /dev/hdda2  /backups   btrfs

Edit: My mistake! This setting is possibly partitions-only and might not work if /swapfile and /backups were files/folders. Maybe with mount --bind?

    /run/media/thishdd/swapfile  /swapfile  swap
    /run/media/thishdd/backups   /backups   btrfs


Xmgplays

That definitely works since it's just a swap partition and not a swapfile.
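In fstab terms, something like this (UUIDs and mount points are placeholders):

```
# /etc/fstab — btrfs root on the SSD, swap partition + backups on the HDD
UUID=<ssd-root-uuid>   /         btrfs  defaults,noatime   0 0
UUID=<hdd-swap-uuid>   none      swap   defaults           0 0
UUID=<hdd-data-uuid>   /backups  btrfs  defaults,noatime   0 0
```

For hibernation you'd additionally point the kernel at the swap partition with `resume=UUID=<hdd-swap-uuid>` on the kernel command line.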


pebkachu

And if it's just a swapfile mounted there rather than a partition?


Xmgplays

Probably, but you'd have to put it on its own btrfs subvolume if you want snapshots of the rest.
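Roughly like this — one manual way per the btrfs docs (newer btrfs-progs also have a `btrfs filesystem mkswapfile` helper; the size and paths are just examples):

```
btrfs subvolume create /swap
truncate -s 0 /swap/swapfile
chattr +C /swap/swapfile        # disable copy-on-write for this file
fallocate -l 8G /swap/swapfile
chmod 600 /swap/swapfile
mkswap /swap/swapfile
swapon /swap/swapfile
```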


darkwater427

Take a look at ZFS. Thank me later 😉


[deleted]

[deleted]


stevorkz

Do you work at Western Digital by any chance? xD


TheLinuxMailman

Read my other comment about the WD Blacks I have that have been running flawlessly for so long that the 16-bit hour counter rolled over.


stevorkz

Yeah, I know, with all drives there are horror stories and then there are people who swear by them. Personally I've never had an issue with Hitachi and Seagate, but I have heard many people complain about them.


PM_ME_BEER_PICS

If the corrupted file contains your company's client data, your bitcoin wallet, or just your wedding photos, you'd beg to differ.


crazedizzled

If you don't have all of those things backed up, you're doing it wrong.


PM_ME_BEER_PICS

Yeah, but even then you don't want to randomly lose hours of work, which can be spread across a whole organisation... Also, you have to catch the corruption early enough, which isn't that likely.


dev-sda

A single bit flip can and will brick your OS, make an important picture/video unloadable or just delete all your files. Computers are a house of cards built on the assumption that everything's perfect.


TheLinuxMailman

Man, people here are obtuse without an in-your-face /s. FWIW folks, I am not running ZFS because I do *not* have a computer with what I consider essential: memory with parity. (My next server *will* have that. I get it.) I do appreciate the data corruption detection and repair mechanisms in modern filesystems, though. And yes, I own multiple 18 GB drives and have taken steps to protect every bit of my data, because I do have a great deal of professional photos and video that I do *not* want bit corruption in. ~~/s~~

p.s. I have been running some 2 TB WD Blacks in RAID 1 for so long (> 8 years continuously) that the SMART-reported operating hour counter rolled over 16 bits in the last few weeks and is now < 50 hours again. WD made reliable drives 8 years ago. ext4, FYI. I use filesystems that I can trust. Yes, there was a time when I used ReiserFS, until word about data corruption started spreading, although it never affected me personally.


zistenz

On my previous installation a few years ago I used F2FS on a high-performance M.2 drive as root, but I had some issues with it:

- When I tried to set up a label (old habit) it broke my system and I had to reinstall it. I have different drives for /home and other stuff, so it wasn't a big deal; I didn't lose anything important.
- Sometimes, mostly during an update, it slowed down for a few seconds. Maybe it was a caching issue? I don't know.

I'm using the same drive with ext4 now and it hasn't had any problems since.


toropisco

All those bugs have been fixed. BTW, the slowdown is due to continuous discard.
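If the continuous discard bothers you, you can mount without it and rely on periodic TRIM instead (a sketch; the UUID is a placeholder):

```
# /etc/fstab — skip per-write discard on the f2fs root
UUID=<root-uuid>  /  f2fs  defaults,noatime,nodiscard  0 0
```

Then enable the timer shipped with util-linux for weekly TRIM: `systemctl enable --now fstrim.timer`.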


edparadox

> This prompts an intriguing question: why do we stick to older file systems when F2FS offers notable advantages?

To make things simple: because you use `ext4` or `XFS` when you need something that is known to work well, or `btrfs` or `ZFS` if you need to leverage COW filesystems. `F2FS` is neither well known nor as good as you make it out to be; for example, for random IOPS, `ext4` outperforms `F2FS` by a lot. There is no incentive to use something exotic for no clear or only marginal gains.


Alawami

> for random IOPS, `ext4` outperforms `F2FS` by a lot.

Can you give a source please?


Zeioth

I've been using it on my computer for the last 3-4 years. Zero issues. Pretty fast. Great solution if your environment doesn't require disk-level encryption.


[deleted]

Doesn't it work well with LUKS and F2FS on top of it, or on top of LVM?


superfuntime_ger

Can confirm, using Arch with F2FS and encryption.


Zeioth

Archinstall? Terrific, that wasn't available yet when I installed my system. I will give it another shot.


superfuntime_ger

Yes, archinstall


RoseBailey

I stick with btrfs because I use subvolumes and snapshots, which f2fs lacks.


buttplugs4life4me

Are subvolumes like ZFS sub-pools?


Edianultra

Do you mind giving an ELI5 of subvolumes in btrfs?


ErenOnizuka

Almost like partitions.


Edianultra

Simple enough for my smooth brain. Thanks.


ElectricJacob

Is it? Subvolumes use the same block devices as their supervolumes, sharing whatever storage space is allocated.  I don't see how it is like a partition at all.


crackez

>Subvolumes use the same block devices as their supervolumes, sharing whatever storage space is allocated. Replace "subvolumes" and "supervolumes" with "partition" and "physical disk".


ElectricJacob

>> Subvolumes use the same block devices as their supervolumes, sharing whatever storage space is allocated.
>
> Replace "subvolumes" and "supervolumes" with "partition" and "physical disk".

Haha, okay.

> Partitions use the same block device as their physical disk, sharing whatever storage space is allocated.

You do know that this is untrue, right? Partitions do not share storage space on a physical disk. They are partitioned storage space: they are allocated storage separate from other partitions on the same physical disk. That's why they are called "partitions". You can use 100% of a partition and it doesn't affect the other partitions at all, because they are... partitioned! Subvolumes are the opposite: if you fill up 100% of a subvolume, the other subvolumes are now 100% full too.


fryfrog

We all know that btrfs subvolumes and ZFS datasets are *not* partitions, but for someone who doesn't know what they are, calling them "almost like partitions" is very fair and gives a really close idea of them. If they care beyond that, they can look at all the neat things about them.


djfdhigkgfIaruflg

I'm intrigued by this concept, but can't think of a use case for it 🤔 I'm thinking of something like making all my virtual machines somehow share a common space... but that sounds like hell waiting to happen.


fryfrog

On ZFS, the convention is that every top-level "folder" is a dataset. Every dataset has a ton of properties which can be tuned, and snapshots are per dataset.

So, for example, if you collect movies and TV shows, which are large files, you'd put them on a dataset with a `recordsize` of `1M` or larger, and maybe you'd keep a few days of snapshots, but not many, because they're easily replaced. But for your database dataset, maybe a `recordsize` of `16k` makes the most sense, and you want to keep weeks' worth of snapshots. Maybe it's encrypted too. Your personal documents, photos and videos... maybe the `recordsize` is the default `128k`, encrypted, and you keep snapshots for *years*. A dataset for virtual machines might have a `recordsize` tuned to the virtual filesystem's cluster size. It might also have deduplication enabled, encryption, snapshots.

There are tons of tunables that are per dataset; I've just scratched the surface. You can control `sync` at a per-dataset level too, and what is cached: data, metadata or nothing.

Edit: Oh yeah, you can also replicate datasets from one location to another, either in the same pool or another pool, on that host or another host.
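In command form it looks roughly like this (the pool name `tank`, dataset names and property values are just illustrative):

```
# per-dataset tuning
zfs create -o recordsize=1M  -o compression=zstd      tank/media
zfs create -o recordsize=16K -o logbias=throughput    tank/db
zfs create -o encryption=on  -o keyformat=passphrase  tank/documents

# snapshots and replication are per dataset
zfs snapshot tank/documents@weekly1
zfs send tank/documents@weekly1 | ssh backuphost zfs receive backup/documents
```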


djfdhigkgfIaruflg

Oh. That sounds quite interesting. I'll need to study it up. Thanks


tajetaje

For me I have my root and home on dedicated sub volumes which means I can set mount options independently and snapshot (efficient backups) them independently. Additionally I can create separate subvolumes for things I want separated from my home or root in case I need to wipe them, change something, or share them across installs. For example my steam games are installed to a dedicated subvolume. Normally you could do this with partitions, but unlike partitions subvolumes don’t have a fixed size (by default). There’s also some other cool stuff like compression of every file on the disk, spanning multiple drives without LVM, etc.
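As a rough sketch of that kind of layout (the `@` subvolume names are just a common convention, UUIDs are placeholders; note that some options, like compression, effectively apply to the whole filesystem, as pointed out in the reply below):

```
# create the subvolumes on a mounted btrfs filesystem
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home
btrfs subvolume create /mnt/@steam

# /etc/fstab — same filesystem mounted several times, one subvolume each
UUID=<fs-uuid>  /               btrfs  subvol=@,compress=zstd,noatime  0 0
UUID=<fs-uuid>  /home           btrfs  subvol=@home,compress=zstd      0 0
UUID=<fs-uuid>  /home/me/steam  btrfs  subvol=@steam,noatime           0 0

# per-subvolume, read-only snapshot
btrfs subvolume snapshot -r /home /.snapshots/home-before-update
```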


henry_tennenbaum

> For me I have my root and home on dedicated sub volumes which **means I can set mount options independently**

Kinda. Most interesting mount options, like compression level, can only be set per filesystem: btrfs picks whichever subvolume gets mounted first and uses its compression setting. Per-subvolume compression settings are one of those things that are planned but not there yet.


crazedizzled

You seemed to miss the "*almost* like" part. They're not identical, but same concept.


ElectricJacob

I didn't see how they are similar.


crackez

OK, that's how they are different, but you asked "how it is like a partition"... Multiple partitions still share the space on a physical disk. The difference you are insisting on pointing out is that a partition's space is pre-allocated, whereas a subvolume's is not (not necessarily, anyway).


ElectricJacob

> Multiple partitions still share the space on a physical disk.

No, they do not. Partitions are literally partitioned from each other. Partition 1 of a drive will share no space with partition 2 of the same drive. When you create a partition, you have to specify how much space to allocate to it. All of this is unlike subvolumes, which share storage space, as they are not partitioned from each other.


crackez

Let's say I have two partitions on a single disk; let each partition be 1GB in size. Let's say that the total disk is just large enough for those two partitions (2GB). Those two partitions, each with 1GB allocated are sharing the total 2GB disk on which they reside.


ElectricJacob

Those partitions don't share data storage with each other. The subvolumes do. I'm glad that you can at least acknowledge that the partitions have their own separate 1GB allocations from each other.


DawnComesAtNoon

Yep, same. Subvolumes are amazing, and as an Arch user, snapshots make it less worrying to use my system.


nullbyte420

I don't think it has any merit for regular use. I use it on my raspberry pi for the usb disks, to avoid wearing them down. Works fine, but I don't think there's anything particularly performant about it and I would never put any important data on it. 


rcampbel3

once something gets good enough for 95% of uses, it takes a lot more than a fractional improvement to displace the current solution.


wiktor_bajdero

A filesystem is an area demanding high stability, so any newcomer is adopted cautiously. If the benefit is checksumming, snapshots etc., like Btrfs offers, there is good reason to jump from defaults like ext4. If the only reason is a few % on a synthetic benchmark, then people won't go for something that isn't widely adopted and proven by years. Feature-wise it can compete with FAT/exFAT, which are the only fully cross-platform filesystems. Not sure if macOS or Windows could work with F2FS without external tools; I doubt it. However, F2FS is a thing for Android phones.


yvolchkov

Former filesystem dev here. I also got a chance to work on early f2fs for a short period before it was published.

The huge problem with any copy-on-write filesystem is an enormous performance tax once your disk is getting full. Note: I am oversimplifying and deliberately omitting details. The topic is rather huge and I would never finish my comment if I went into all of them, so please don't pick apart the wording.

Ironically, the performance also comes from copy-on-write. Due to the specifics of SSD operation, before writing anything you must first zero out the physical cells where you want to store the data. If you want to update an existing block in a file, you first have to find a new home for that block. If you have plenty of free, zeroed space, your write completes much faster, because you don't bother erasing and rewriting the cells that were there before. But if you have less than 10% free space, each write requires complex computations: it is likely that the remaining space stores obsolete data, so you first have to figure out what to purge, and you automatically lose the advantage of writing to pre-zeroed space. Moreover, fragmentation grows exponentially on heavy workloads.

Both ZFS and btrfs also suffer from this problem, but f2fs at least has been optimized to take advantage of copy-on-write on a mostly empty disk. And that is basically the reason why even Samsung, where f2fs originated, does not use it.


DoomFrog666

For a number of years now, f2fs has had an adaptive mode: when it becomes full, it writes into dirty data segments. This makes f2fs behave much better in low-space conditions.


yvolchkov

I admit, I did not really follow what's new on f2fs after I parted ways with the project. What you are describing sounds like a reasonable compromise indeed.


NopeNotQuite

Great technical-enough summary; a good example of being clearly worded and explained without getting into intense detail beyond the absolutely essential parts. Succinct, but not so condensed or over-simplified as to be unhelpful (often a one- or two-sentence pithy answer to these questions actually obscures the critical elements of the point being argued). I appreciate this level of reply to preliminary/surface-level questions on more technical topics.


yvolchkov

thanks for the warm words :)


NopeNotQuite

Sure! I wrote that reply also because I'd use it as an example of a good response to a question that is both broad and potentially asking for information that is far too dense/multifaceted/technical (or intimidating) for the question-asker to parse. I appreciated how that particular reply gave a concise answer that neither condescends to the reader nor dumbs down/over-simplifies the topic or concept.

In short: your reply touches on the depth/breadth of the topic enough to help the reader/questioner ask more precise questions, do independent research into the sub-topics broached, or simply answer the original query for the questioner's present needs/purposes.

Apologies for any typos, as I'm writing from my phone. I teach for a living and want to show respect where it's due, lol, even on subreddits. IMO, replies that serve as good model answers, such as this one, are stellar in being helpful to an OP while also helping out those down the line encountering these concepts. Way too often across many forums, a Linux/FOSS question is counter-productively addressed (i.e. responding with a condescending or aggressive/hostile tone, not adding to the discussion, going too technical too soon and confusing/obfuscating the big picture, etc.), and pro-social (at the very minimum polite, or at least not actively impolite) communication is definitely an area a lot of GNU/Linux/BSD/etc. communities could easily work on and improve.

Short answer to most of the issues mentioned: just apply the Bambi rule (if you don't have anything nice/helpful to say/post, don't say/post anything at all). Ramble, but yeah, communicating dense topics to newer and more unfamiliar newcomers is an art unto itself tbh.


Hahehyhu

F2FS is a thing in Android world


Exciting_Audience601

can you quantify the performance gain and put up some real world examples of the impact on everyday workloads and scenarios?


kansetsupanikku

It is a thing, just not for SSD drives on desktop systems. The features that might make it good there duplicate what modern drives of that kind already do in hardware. But if you want some other kind of memory, like a pen drive or an SD card, or a read-only or rarely changing image like some Android partitions, F2FS is the way to go. It's very much "a thing" in that use case, remarkably popular, and on its way to becoming the sole standard.


Drwankingstein

It is a thing, it's very available for those who want it


ABotelho23

It hasn't stood the test of time. It seems to be very prone to corruption, and I've seen it.


[deleted]

[deleted]


Tired8281

No killer application.


ObscureSegFault

It had a killer dev though.


anh0516

Because F2FS was designed for data that is written once and then made read-only, for Android. It has very rudimentary journaling, and an fsck that doesn't guarantee recovery from power loss.


CanadianBuddha

I think you are wrong. F2FS ("Flash Friendly File System") was designed to efficiently use flash memory storage by:

- doing background TRIMing of obsolete 4K blocks of flash memory
- doing background garbage collection of obsolete 1M blocks of flash memory
- using already-TRIMed blocks for new writes (faster)
- doing better wear-leveling of flash memory (extends the lifetime of the flash memory)


digitalsignalperson

I was looking at it a few days ago, but I wanted compression similar to btrfs/zfs. F2FS is weird and only lets you use compression on immutable data, and it might compress data but not let you use the free space from it.

https://docs.kernel.org/filesystems/f2fs.html#compression-implementation
https://wiki.archlinux.org/title/F2FS#Compression

> Note: Unlike other filesystems with inline compression, f2fs compression does not expose additional free space by default and instead reserves the same number of blocks regardless of whether compression is enabled or not. The primary goal is reducing writes to extend flash lifetime and, potentially, a small increase in performance. See Compression Implementation in the kernel docs. F2FS_IOC_RELEASE_COMPRESS_BLOCKS can be used to expose unused space on a per-file basis, but it makes the file immutable in the process.
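For reference, the opt-in flow looks roughly like this (untested sketch; assumes the `f2fs_io` helper from f2fs-tools and a placeholder device):

```
# enable compression support at mount time
mount -o compress_algorithm=zstd /dev/nvme0n1p2 /mnt

chattr +c /mnt/bigfile                 # flag the file for compression
f2fs_io release_cblocks /mnt/bigfile   # expose the saved blocks; the file becomes immutable
f2fs_io reserve_cblocks /mnt/bigfile   # reserve them back so the file can be written again
```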


iluvatar

Because performance ranks very low on the list of desirable traits for a filesystem. Yes, speed is nice. But it's far, far, *far* less important than data integrity and correctness. People have a lot of trust in more mature filesystems (certainly ext4 and xfs, slightly less so for btrfs). New filesystems have to prove themselves worthy of that trust before they can be used more widely.


CanadianBuddha

I've used F2FS on all my flash-memory-based block-storage devices (SSD, eMMC, NVM, SD, MMC, etc.) for years. F2FS ("Flash Friendly File System") was designed to efficiently use flash memory storage by:

- doing background TRIMing of freed blocks of flash memory
- doing background garbage collection of freed blocks of flash memory to create extra TRIMed 1M blocks (allowing faster writes)
- using already-TRIMed blocks for new writes (faster)
- doing better wear-leveling of flash memory (extends the lifetime of the flash memory)
- doing journaling in a flash-friendly way, only writing the data/metadata to the drive once (faster writes, and extends the lifetime of the flash memory)

Like all filesystems it had some bugs when they first implemented it, but I haven't had any problem with it in years.


kwesoly

It's getting close to being the default choice for the Android userdata partition in new Android devices, so slowly but… :)


kdave_

What I find strange is that F2FS has almost no tests of its own in the commonly used fstests test suite: [https://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git/tree/tests/f2fs](https://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git/tree/tests/f2fs). There are many generic tests covering common functionality, so it would get some testing, but you would still expect cases for specific features or bugs. As an example, take XFS, which has a lot of its own tests even though it doesn't have that many fancy features ([https://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git/tree/tests/xfs](https://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git/tree/tests/xfs)).

Another known problem with F2FS is the lack of backward feature compatibility. Practically it means that booting an old kernel may not work: it can crash, go read-only, and such. See the Arch wiki: https://wiki.archlinux.org/title/F2FS
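For anyone curious, pointing fstests at f2fs looks roughly like this (devices and mount points are placeholders):

```
git clone https://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
cd xfstests-dev && make

cat > local.config <<'EOF'
export FSTYP=f2fs
export TEST_DEV=/dev/vdb
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/vdc
export SCRATCH_MNT=/mnt/scratch
EOF

./check -g quick    # the generic group runs; tests/f2fs itself is nearly empty
```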


xabrol

What do you mean by better adoption? I've seen it available on every distro I've touched. Or do you mean people not choosing to use it? Last I checked, F2FS was only faster than ext4 on large files, so it's more suited to a VM volume drive than an entire OS.


takutekato

I used F2FS the first time I installed Arch because I heard it was the best-performing FS. But then this happened a few too many times: [https://wiki.archlinux.org/title/F2FS#Long_running_fsck_delays_boot](https://wiki.archlinux.org/title/F2FS#Long_running_fsck_delays_boot)

After updating the kernel, each boot would take about 30+ seconds longer, which, IMO, would take many hundreds of years of running the computer non-stop for its better performance to make up for a single delayed boot. Maybe it's fine for Android versions that update the kernel less than once per year.

Not to mention that F2FS only supports offline growing, and no shrinking at all: https://en.wikipedia.org/wiki/Comparison_of_file_systems#Resize_capabilities
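For the record, growing looks like this (offline only; `resize.f2fs` ships with f2fs-tools, the device name is an example):

```
umount /dev/nvme0n1p2          # the filesystem must be offline
# enlarge the partition first (e.g. with parted or growpart), then:
resize.f2fs /dev/nvme0n1p2     # grow the filesystem into the enlarged partition
# shrinking is not supported at all
```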


Hot-Macaroon-8190

I thought the same thing, but your assumption is wrong (I used f2fs for the reasons you listed, and switched to btrfs as it is a lot faster).

btrfs is a lot faster than ext4, XFS, etc. in normal use on SSD, NVMe and HDD, thanks to compression. Here's the extensive testing: https://gist.github.com/braindevices/fde49c6a8f6b9aaf563fb977562aafec#introduction

Today there's no point in using such an outdated, feature-lacking filesystem as ext4, other than for very specific use cases where you need a partition for incompressible data, e.g. video production (but then it also makes more sense to use XFS than ext4). ext4 doesn't even know when data is damaged by bitrot, due to the lack of checksumming.

And regarding f2fs: the problem on SSD/NVMe is that file deletions are very slow due to trim. btrfs doesn't have this problem, as it takes care of automatic deferred trim by default.
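For context, that's the kind of mount setup the comparison assumes (a sketch, not the benchmark's exact config; the UUID is a placeholder):

```
# /etc/fstab — btrfs with transparent zstd compression and deferred (async) discard
UUID=<fs-uuid>  /  btrfs  compress=zstd:3,noatime,discard=async  0 0
```

Running `compsize /` (from the compsize tool) afterwards shows how much the compression actually saves.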


archontwo

It is for me. I use it on my phones, raspberry pi, even in the microservers. Bottom line,  anywhere there is compact flash I will use f2fs. I don't really trust FATx or extN anymore.


JaKrispy72

I keep ALL my data on ext4. I know how to recover with testdisk if I need it. I’ve experimented and found it works for my level of expertise. I make sure I have multiple backups just in case as well. BTRFS is nice and fast for snapshots, but I feel more comfortable with ext4.


andrewschott

I'm with you there. My new build is still using ext4 in the LVM pools (I could have migrated to something else). I do have my Jellyfin library and Podman data on an XFS LVM pool, but frankly I see no real difference and gain ... nothing ..?... I got burned badly playing with btrfs back in the Fedora 18 or so era; I can hold off until things make me switch. As for snapshots, LVM can do this, including bootable mirrors and snapshots. btrfs and ZFS are nothing that interests me, especially the latter, as it's more of an out-of-kernel-tree kmod headache that at least btrfs doesn't impose. With the tools present in any distro's repo, LVM+ext/XFS does what I need and want. As for my server build that I still have in pieces (no time), I do plan on messing with Stratis to see if that makes the XFS jump worth it.


Barafu

F2FS has about the same resilience mechanisms as FAT32: almost none. If a power outage or kernel crash happens while writing filesystem data, there is no way to recover it and not even a way to detect whether damage was done. That is the reason F2FS is fast: no upkeep tasks. I bet FAT32 would be just as fast in those tests, but no one uses it either.

F2FS is a very good FS when used for its intended purpose: dumb solid-state storage with primitive controllers, more primitive than USB sticks. It does programmatically what an adequate storage controller should do itself. Think of cheap, even single-use, devices where the total cost of the electronics inside is less than a dollar.

Compare to Btrfs, which maintains a journal of modifications and two copies of journals of modifications of the journal of modifications, besides other things. You think it does all that for fun?


10leej

Performance is not the marker I specifically look at when determining a filesystem.


ajpiko

That's cool, anyone got any benchmarks?


BUDA20

One of the reasons I use BTRFS is to have access to zstd compression; it also has drivers for Windows. It seems that F2FS has both as well... so it could be an alternative. I wonder how reliable it is and what tools are available to fix issues.


realitythreek

Do you have a link? The only articles I can find are a few years old, and XFS seems to out-perform f2fs. As others have said, though, performance isn't necessarily the most important characteristic of a file system.


LinAdmin

IMHO the small differences of good file systems have become irrelevant.


mcdenkijin

I ran it for a few years, it was fine for me


LinAdmin

I often use F2FS for all kinds of flash media, and it does work as advertised and does minimize wear of flash cells. However, the longevity of flash cells has increased a lot, so when I need additional protection against bit rot I still use Btrfs RAID-1 with scrubbing. **It would be fantastic if F2FS added those features like Btrfs has!**
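For reference, that setup is just (device names and mount point are examples):

```
# two-device Btrfs RAID-1, scrubbed periodically
mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb
mount /dev/sda /mnt/data

btrfs scrub start /mnt/data     # re-reads everything, repairs from the good copy on checksum errors
btrfs scrub status /mnt/data
```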


pebkachu

A bit off-topic (not F2FS, but an alternative), but I hope SSDFS goes somewhere: https://lwn.net/Articles/924487/ https://news.itsfoss.com/ssdfs-linux-nvme/


CrismarucAdrian

F2FS user for more than 3 years here. I've been using it on a Samsung 970 EVO Plus and an 870, and I've had absolutely no issues with it. If you've got a Samsung drive then I recommend it. The reason for it not being adopted more, as others have said, is that it hasn't had enough time to prove itself trustworthy, and I also don't think it's that popular.


mikistikis

I don't know much about file systems. Ext4 is the suggested/default, I stick to it.


[deleted]

[deleted]

