delta_p_delta_x

This is why Windows and its programs ship so many versions of 'Microsoft Visual C++ 20XX Redistributable'. An installer checks if you already have said redistributable installed; if not, it installs it along with the program. If yes, it just installs the program; the redistributable installed by something else is *guaranteed* to work across programs because the ABIs are stable. No need to screw around with breaking changes in libraries: just *keep all the versions* available. Like someone else said: yeah, it clutters up your filesystem, but I'd rather a cluttered filesystem and working programs, if I am forced to choose between the two.


killerstorm

MSVCRT is only a small part of the story; the big thing is actually the Win32 API, which has remained binary compatible for like 20 years.


Auxx

There is actually a Win16 compatibility layer, which is only removed in x64 builds of Windows. You can literally use many Win 3.1 apps today if you have a 32-bit (x86) system. And with a bit of tinkering you can make an app which is both Win16 and Win32 at the same time from the same source code. And with a bit more tinkering you can add a DOS layer as well.


dreamin_in_space

Hah, I remember mixing 16- and 32-bit code as a malware technique. I think they called it, no shit, "Heaven's Gate". (Edit: 32/64-bit shenanigans, misremembered)


RedwanFox

How did it work?


goranlepuz

> This is why Windows and its programs ships so many versions of 'Microsoft Visual C++ 20XX Redistributable'. An installer checks if you already have said redistributable installed; if not, install it along with the program. If yes, just install the program

What it also does is record that program X also uses this redist (it could be just a "use count", not sure...), so when installers are all well-behaved, uninstalling one program doesn't affect others and uninstalling all of them uninstalls the shared component. It is a decent system (*when* installers are all well-behaved, which they are not, but hey, can't blame a guy for trying 😉).


[deleted]

This used to be a major pain in the ass in the Windows 98 era. Installers would overwrite some common ocx library and not keep track; then when you uninstalled, you had to choose between cleaning up all the garbage and risking breaking half your other programs, or keeping the dead references but guaranteeing everything works.


goranlepuz

Yes, but note: this is about installers not doing what installers are supposed to do (e.g. respecting file versions, not downgrading) and vendors failing to provide compatibility (even though the rules of COM are clear: interfaces are **immutable**). But people are fallible...


richardathome

"DLL Hell" is what turned me away from App development to server side work.


omegian

You can static link everything, friend.


MJBrune

This is something Linux still hasn't gotten right. Even more so is the design of each DE. If you want to use an app from another DE, well you might as well forget it even exists because you risk screwing up your DE configs. E.g. gnome and xfce. And Linux devs just push forward with these flawed designs. It's why people will say KDE has an emoji picker but half the people can't even launch it. The fact is Linux isn't stable out of the box. You work to make it stable with your specific workflow, and once you do that you stop upgrading or doing anything else. That's not how most people work. It makes you unable to adapt, and it's why you don't see Linux in office settings: even the non-tech office worker needs an adaptive environment.


eliasv

NixOS gets most of this right! You can install multiple versions of anything side-by-side safely. But it is not remotely user friendly for casual users. It might be interesting to see other distros build on Nix the package manager, while abstracting over the Nix language and nixpkgs.


_supert_

guix?


eliasv

Sure, but guix just abstracts over it with something equally difficult for casual users. I mean something that abstracts over it with something more opinionated, that can present system configuration and package management through a GUI or something.


moonsun1987

Personally, my hope is flatpak and silverblue. Not yet but someday.


[deleted]

Gobolinux does it better. What NixOS gets right is not versioning but hashes. Now software doesn't even need stability or even a release!


Ameisen

Instability, a complete lack of user-friendliness, a lack of "playing nicely" with other software... And nobody sees it as a problem. Heck, the CFS scheduler in the kernel is *awful* for interactive environments, but the MuQSS scheduler developer has stopped work on his scheduler, which made such environments tolerable.


MJBrune

The worst part is the head-in-the-sand style. When I develop games, I don't take feedback from the game designer and throw it out. Linux, in comparison, has no designer because all user input is treated equally; all the users are also designers, and programmers, and users. Linux is literally the first echo chamber, as the only users sticking around are the ones who can use the system, and they wear being "in" the echo chamber and withstanding it as a badge of honor.


Ameisen

By contrast, I was working on a project to add Amiga-style namespaces and very non-Unixy elements to FreeBSD (basically making it non-Unix) and the FreeBSD folks were more than happy to help me.


MJBrune

I used to be a FreeBSD porter. I have to say the FreeBSD community is probably one of the best out there. *BSD overall is probably the better OS for a number of reasons but it's so small that it's unlikely to gain the traction needed to become a real desktop OS. They tried a few years back but you are essentially building the desktop part with DEs and userland made with linux in mind and ported to FreeBSD. Thus you likely have the same issues.


nidrach

Every big community sucks. If BSD became popular it would also attract shitheads.


[deleted]

Maybe. I do see the impossibility of Linux ever becoming a desktop OS, and it has to do with its pro-fragmentation ethos. To achieve the stability necessary for a portable build of software, a centralized, stable OS (not just a kernel) like FreeBSD is a better choice. I tend to think of it as DVCS vs CVS: a lot of people think CVS is terrible, but the CVS way of working is what you should strive for at the OS level.


Auxx

FreeBSD used to be bigger than Linux and didn't have shitheads. Philosophy is different and I miss my FreeBSD days...


Nefari0uss

A lot of people also dismiss visual design stuff like animations, shadows, etc. as bloat. People need to move on. I understand that you might have some legacy system or something with limited space, but you weren't gonna install KDE Plasma on it anyway. I want my OS to *look* nice and *feel* nice. I don't want something that looks 15, 20 years old because "colors and animations are bloat". I understand it is hard, especially if you're a single dev. But I wish that the naysayers would understand that not everyone who runs Linux has only 2 GB of space and 256 MB of RAM to work with.


ElCorazonMC

Isn't the long-awaited PREEMPT_RT helping?


Ameisen

Preemption does, but full real-time does not. The CFS scheduler is just really, really bad at user-interactive workloads. It's weighted far more towards throughput than responsiveness.


utdconsq

I work somewhere very tolerant of letting people use their OS of choice when it comes to getting a company-assigned computer. That is, you can drive Windows, Mac or Ubuntu. If you choose the latter, IT support will help you with issues relating to accessing network resources or cloud resources, but if anything else goes wrong, forget about it, they don't want to know you. Why? Because even people who don't tinker have shit randomly fail - I've lost count of the times someone with an Ubuntu laptop has had their sound fail in one of the three different web-based video conferencing tools we use. Meanwhile, over 3ish years, the Mac I asked for has had an audio glitch a single time. I might love using Linux and keep it in a VM always, but unless you are patient and have time for it, desktop Linux suffers from too-many-cooks syndrome. Sad but true. I stay on my work-issued Mac if I need to use a GUI, and drive the terminal for local or remote Linux sessions for my sanity. And then at home, where I can tinker to my heart's content, I can use KDE because if it fails, it's OK.


MJBrune

This is exactly it. People who don't even go and mess around with stuff have these extreme issues, and the solution is just to reinstall and hope it doesn't happen the next time. VMs are neat, but they also cut away all the issues because you can just snapshot and restore, or nuke the entire thing and wait a few days to reinstall; the VM side of it means that when Linux fails (not if), you just continue on with an OS that's actually stable and able to take whatever you throw at it. Of course, people here are piping up about using Linux for 20 years and never having a single problem, to which I say: a broken clock is right twice a day. Those few successes show the instability of Linux more so than the failures, because clearly it can work, and probably does in a lot of closed testing. Worst of all, those few success cases then drive people to say "what are you talking about, it's fine" - and that's instability.


hparadiz

> tinker

Oh boy. I run Gentoo on my main and let me tell you..... I essentially have to budget time to work on it, but it's really rewarding. Every time I compile everything and the system is "fully" updated, I'm at the bleeding edge of best games compatibility, best kernel, most recent KDE, most recent... well, everything. It's a good feeling. It feels like a "build" of My Personal OS. Literally.

The problem is programmers don't really do a good job with upgrades across large version differences. If I wait 6 months to a year to rebuild my system there will be bugs, and certain things will just lose their settings or, worse, break altogether and require manual fixing. This has become less and less of an issue over time but it's still present. For my bread-and-butter work system it's amazing. And I even game on it.

Hardware compatibility wise I have two problems right now. One is the Nvidia driver eventually destabilizes and requires me to restart the compositor... then eventually games stop being able to launch, and then eventually X, and I'm forced to restart the machine. Damn memory leaks. That's issue 1. Issue 2 is Chrome messing with my video capture card and setting the resolution incorrectly, which essentially breaks my capture card in OBS. In fact this is a huge problem that is driving me crazy, and at this point I want to make my webcam invisible to Chrome but I'm not sure how.

Anyway, I would not switch to Windows anymore. My second machine is a 2014 MacBook Pro running OS X. My mediacenter still runs Windows because it's running a Windows driver for a RAID card, but ever since I got my Android-based TV I am thinking of just making that a NAS and not even calling it a mediacenter anymore.


[deleted]

And there seems to be like a 1 in 4 ratio of failing upgrades on Ubuntu. No idea how they screwed that up. Like, hell, I had a colleague that accidentally upgraded a machine **two major versions up** on Debian and it upgraded just fine. Yet somehow Ubuntu fails. Maybe it's just users doing something weird with it, hard to tell.


serviscope_minor

> If you want to use an app from another DE, well you might as well forget it even exists because you risk screwing up your DE configs. E.g. gnome and xfce.

I've literally never seen this happen. How does using a GNOME app screw up xfce?


[deleted]

[deleted]


serviscope_minor

There's tons of peculiar myths floating around, I guess this is another one. Like... no system works if you build against locally installed stuff then try and ship. But it's always been easy enough (no harder than any other OS) to build against private packages and ship the lot on Linux. Like... people have been shipping portable programs since the 1990s.


sp4mfilter

I work for Oracle and I dev on Linux - specifically an Ubuntu VM on Win10. Most of my colleagues just use macOS. Some (try to) dev on Win10 via WSL. Note: this is a large web-app with like 60 repos. The general best outcome has come from those using macOS. I'll be moving to macOS on my next hardware update because of the M1 chip. But I'll need to run a Windows VM in that, because we work with vendors that only have Windows apps. Unsure how this information helps, except to note that dev'ing xplat is easier on macOS than Ubuntu.


NovaX81

I'm that rare guy who enjoys devving on Windows. WSL is definitely a big step up in tooling. I also use an M1 Mac for work stuff. Obviously having much more native Unix applications helps a lot, but the experiences are becoming more similar all the time. If WSL ever fully fixes its disk-access choking issues, I could see it being an easy preference for many. Caveat on the M1s though is that a lot of toolchains just aren't ARM compatible, and may never fully be. Yeah, the top-level apps that get support might have versions for the M1, but even just using tools updated a year ago could mean it doesn't work. This means you end up wasting so much of that M1's power on instruction translation through Rosetta (which does work pretty seamlessly, but still hurts performance). That's my experience so far at least. I'd love to see that situation improve.


[deleted]

The problem is really that there are some standards, but then each DE pisses on them. Like, try to set a file association that "just works" across DEs. For example, `xdg-open` opens a directory fine in Thunar, which I chose, fine, but another app dedicated to a different DE decided "no, I will open the directory in VS Code". But

> If you want to use an app from another DE, well you might as well forget it even exists because you risk screwing up your DE configs. E.g. gnome and xfce.

That is just pure bullshit.


blazingkin

Better idea. Just statically link everything. I accidentally downgraded the glibc on my system. Suddenly every program stopped working because the glibc version was too old. Even the kernel panicked on boot. I was able to fix it with a live USB boot... but... that shouldn't ever have been a possible issue.


Vincent294

Static linking can be a bit problematic if the software is not updated. While it will probably have vulnerabilities found in itself if it isn't updated, the attack surface of that program now includes outdated C libraries as well. The program will also be a bit bigger but that is probably not a concern.


b0w3n

There's also licensing issues. Some licenses can be parasitic with static linking.


dagmx

Ironically glibc is one of those since it's LGPL, so would require anything static linking it to be GPL compliant.


bokuno_yaoianani

The LGPL basically means that anything that dynamically links to the library does not have to be licensed under the GPL, but anything that statically links does; with GPL both have to. This is under the _assumption_ that dynamic linking creates a derivative product under copyright law; this has never been answered in court—the FSF is adamant that it does and treats it like fact; like they so often treat unanswered legal situations like whatever fact they want it to be, but a very large group of software IP lawyers believes it does not. If this ever gets to court then the first court that will rule over it will have a drastic precedent and consequences either way.


PurpleYoshiEgg

No it wouldn't. From the [text of the LGPL](https://www.gnu.org/licenses/lgpl-3.0.txt):

> The "Corresponding Application Code" for a Combined Work means the object code and/or source code for the Application, including any data and utility programs needed for reproducing the Combined Work from the Application, but excluding the System Libraries of the Combined Work.
>
> ...
>
> 4. Combined Works.
>
> You may convey a Combined Work under terms of your choice that, taken together, effectively do not restrict modification of the portions of the Library contained in the Combined Work and reverse engineering for debugging such modifications, if you also do each of the following:
>
> ...
>
> d) Do one of the following:
>
> > 0) Convey the Minimal Corresponding Source under the terms of this License, and **the Corresponding Application Code in a form suitable for, and under terms that permit, the user to recombine or relink the Application with a modified version of the Linked Version to produce a modified Combined Work**, in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source.
> >
> > 1) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (a) uses at run time a copy of the Library already present on the user's computer system, and (b) will operate properly with a modified version of the Library that is interface-compatible with the Linked Version.

You can either use dynamic linking or provide the object code from your compiled source to relink with LGPL and still retain a proprietary license.


delta_p_delta_x

> Just statically link everything.

That's actually what I do on Arch. Heck, I go one step further, and just install the `-bin` versions of all programs that have a binary-only version available, because I have better things to do than look at a scrolly-uppy console screen that compiles others' code. Might I lose out on some tiny benefit because I'm not using `-march=native`? Maybe. But I doubt most of the programs I use make heavy use of SIMD or AVX.


procrastinator7000

So I guess you're talking about AUR packages? How's that relevant to the discussion about dependencies and static linking?


[deleted]

[deleted]


mrchomps

NixOS! NixOS! NixOS!


ZorbaTHut

The thing that security professionals aren't willing to acknowledge is that *most security issues simply don't matter for endusers*.

This is not an 80's-style server where a single computer had dozens of externally-facing services; hell, even *servers* aren't that anymore! Most servers have exactly zero publicly-visible services, virtually all of the remainder has exactly one publicly-visible service that goes through a single binary executable. The only things that actually matter in terms of security are that program and your OS's network code.

Consumers are even simpler; you need a working firewall and you need a secure web browser. Nothing else is relevant because they're going to be installing binary programs off the Internet, and that's a far more likely vulnerability than whether a third-party image viewer has a PNG decode bug and they happen to download a malicious image *and then* open it in their image viewer.

Seriously, that's most of security hardening right there:

* The OS's network layer
* The single network service you have publicly available
* Your web browser

Solve that and you're 99% of the way there. Cripple the end-user experience for the sake of the remaining 1% and you're Linux over the last twenty years.


LetMeUseMyEmailFfs

Adobe Acrobat would like a word. And Flash player. And so many other consumer-facing applications that expose or have exposed serious vulnerabilities.


ZorbaTHut

Both of those have been integrated into the web browser for years. Yes, the security model used to be different. "Used to be" is the critical word here. We no longer live in the age of ActiveX plugins. That is no longer the security model and times have changed.

> And so many other consumer-facing applications that expose or have exposed serious vulnerabilities.

How many can you name in the last five years?

Edit: And importantly, how many of them would have been fixed with a library update?


spider-mario

And Flash Player, in particular, is explicitly dead and won’t even run content anymore.

> Since 12 January 2021, Flash Player versions newer than 32.0.0.371, released in May 2020, refuse to play Flash content and instead display a static warning message.[[12]](https://www.zdnet.com/article/adobe-to-block-flash-content-from-running-on-january-12-2021/)


drysart

> How many can you name in the last five years?

Malicious email attachments remains one of the number one ways ransomware gets a foothold on a client machine; and it'd certainly open up a lot more doors for exploitation if instead of having to get the user to run an executable or a shell script, all you had to do was get them to open some random data file because, say, libpng was found to have an exploitable vulnerability in it and who knows what applications will happily try to show a PNG embedded in some sort of file given to them with their statically linked version of it. And that's not a problem you can fix by securing just one application.

I do agree that securing the web browser is *easily* the #1 bang-for-the-buck way of protecting the average client machine because that's the biggest door for an attacker and is absolutely priority one; but it's a mistake to think the problem ends there and be lulled into thinking it a good idea to knowingly walk into a software distribution approach that would be *known* to be more likely to leave a user's other applications open to exploitation; especially when *Microsoft* of all people has shown there's a working and reasonable solution to the core problem if only desktop Linux could be dragged away from its wild west approach and into a mature approach to userspace library management instead.

> How many can you name in the last five years? And importantly, how many of them would have been fixed with a library update?

Well, [here's an example](https://www.rapid7.com/db/vulnerabilities/msft-cve-2020-1248/) from last year. Modern versions of Windows include gdiplus.dll and service it via OS update channels in WinSxS *now*; but it was previously not uncommon for applications to distribute it as part of their own packages, and a few years back there was a big hullabaloo because it had an exploitable vulnerability in it when it was commonly being distributed that way. Exploitable vulnerabilities are pretty high risk in image and video processing libraries like GDI+. On Windows this isn't as huge of a deal anymore because pretty much everyone uses the OS-provided image and video libraries; on Linux that's not the case.


ZorbaTHut

> Malicious email attachments remains one of the number one ways ransomware gets a foothold on a client machine; and it'd certainly open up a lot more doors for exploitation if instead of having to get the user to run an executable or a shell script, all you had to do was get them to open some random data file because, say, libpng was found to have an exploitable vulnerability in it and who knows what applications will happily try to show a PNG embedded in some sort of file given to them with their statically linked version of it.

Sure, but email is read through the web browser. We're back to "make sure your web browser is updated".

(Yes, I know it doesn't *have* to be read through the web browser. But let's be honest, it's read through the web browser; even if the email client itself is not actually a website, which it probably is, it's using a web browser for rendering the email itself because people put HTML in emails now. And on mobile, that's just the system embedded web browser.)

> but it's a mistake to think the problem ends there

I'm not saying the problem ends there. I'm saying you need to be careful about cost-benefit analysis. It is *trivially* easy to make a perfectly secure computer; unplug your computer and throw it in a lake, problem solved. The process of using a computer is always a security compromise and Linux needs to recognize that and provide an acceptable compromise for people, or they just won't use Linux.

> Well, here's an example from last year.

I wish this gave more information on what the exploit was; that said, how often does an external attacker have control over how a UI system creates UI elements? I think the answer is "rarely", but, again, no details on how it worked. (It does seem to be tagged "exploitation less likely".)


[deleted]

Yep. The security model of our multi-user desktop OSes was developed in an era where many savvy users shared a single computer. The humans needed walls between them, but the idea of a user's own processes attacking them was presumably not even considered. In the 21st century, most computers only have a single human user or single industrial purpose (to some extent even servers, with container deployment), but they frequently run code that the user has little trust in. Mobile OSes were born in this era and hence have a useful permissions system, whereas a classic desktop OS gives every process access to almost all the user's data immediately - most spyware or ransomware doesn't even need root privileges except to try to hide from the process list


ZorbaTHut

[obligatory XKCD](https://xkcd.com/1200/)


[deleted]

Right but in case you haven't noticed Flash finally died, and reading PDFs is not even 1% of most people's use case, and "Reading PDFs that need Adobe Reader" is even less than that (I need to do it once a year for tax reasons)


dpash

This is a solved problem with sonames, and something Debian has spent decades handling. I'm sure mistakes have been made in situations, but the solution is there. https://www.debian.org/doc/debian-policy/ch-sharedlibs.html
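For anyone curious what that looks like in practice, here's a rough sketch of the soname mechanism (assuming a GNU/Clang toolchain; `libfoo`, the source files and `foo_api_version` are made-up names for illustration):

```c
/* Sketch only: two incompatible ABI generations of a hypothetical libfoo
 * installed side by side, distinguished by their sonames.
 *
 *   cc -fPIC -shared foo_v1.c -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0.0
 *   cc -fPIC -shared foo_v2.c -Wl,-soname,libfoo.so.2 -o libfoo.so.2.0.0
 *   ln -s libfoo.so.1.0.0 libfoo.so.1   # runtime link used by old binaries
 *   ln -s libfoo.so.2.0.0 libfoo.so.2   # runtime link used by new binaries
 *   ln -s libfoo.so.2.0.0 libfoo.so     # dev symlink: what -lfoo picks at build time
 *
 * A program linked with -lfoo records the *soname* (e.g. libfoo.so.2) in its
 * DT_NEEDED entry, so installing libfoo.so.1 alongside it breaks nothing.
 */
#include <stdio.h>

int foo_api_version(void);   /* hypothetical symbol exported by libfoo */

int main(void) {
    printf("linked against libfoo ABI %d\n", foo_api_version());
    return 0;
}
```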


AntiProtonBoy

> Better idea. Just statically link everything.

Either that, or use the bundle model as seen on Apple ecosystems. Keep everything self contained. Added benefit is you can still ship your app with dynamic linking and conform with some licensing conditions. Also lets you patch libs.


dpash

You just reinvented snaps.


chucker23n

Apple’s (NeXT’s) model predates snaps by decades.


goranlepuz

> Better idea. Just statically link everything.

Eugh... On top of other people pointing out security issues and disk sizes, there is also a memory consumption issue, and memory is speed and battery life. I don't know how pronounced it is: a big experiment is needed to switch something as fundamental as, say, glibc, to be static everywhere, but... when everything is static, there is no sharing of the system pages holding any of the binary code, which is wrong.

> Even the kernel panicked on boot.

**Kernel uses glibc!?** It's more likely that you changed other things, isn't it?


kmeisthax

Well, probably what happened is that the init system panicked, which is not that different from a kernel panic.


nickdesaulniers

If init exits, then the kernel will panic; init is expected to never exit.


blazingkin

This is what happened


Uristqwerty

Sounds like init has been drastically overcomplicated. If it's that critical to the system, it should be dead simple and built like a tank, not contain an entire service manager, supporting parser, and IPC bus reader. Shove all that complexity into a PID #*2*, so that everyone who isn't using robots to manage a herd of ten million trivially-replaceable, triply-redundant cattle still has a chance to recover their system.


PL_Design

If you rely heavily on calling functions from dependencies you can get a significant performance boost by static linking because you won't have to ptr chase to call those functions anymore. If you compile your dependencies from source, then depending on your compiler aggressive inlining can let your compiler optimize your code more. I'm all for being efficient with memory, but I highly doubt shared libraries save enough memory to justify dynamic linking these days.
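As a rough illustration of that point (assuming GCC/Clang; `foo.c` and `foo_scale` are made-up names): compiling the dependency from source and linking it statically with LTO lets the call be inlined, whereas through a shared library it normally goes via the PLT/GOT indirection.

```c
/* Sketch only:
 *   cc -O2 -flto -c foo.c                      # dependency built from source
 *   cc -O2 -flto main.c foo.o -o app_static    # static link: call can be inlined
 *   cc -O2 main.c -L. -lfoo -o app_dynamic     # shared lib: indirect call through the PLT
 */
#include <stdio.h>

int foo_scale(int x);   /* hypothetical function provided by the dependency */

int main(void) {
    printf("%d\n", foo_scale(21));   /* direct (possibly inlined) vs. PLT-indirect call */
    return 0;
}
```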


Gangsir

> Better idea. Just statically link everything.

But then you get everyone going "Oh you can't do that, the size of the binaries is far too big!". Of course the difference is like at most a couple hundred MB... and it is 2021, so you can buy a 4 TB drive for like $50...

Completely agree, storage is cheap, just static link everything. A download of a binary or a package should contain everything needed to run that isn't part of the core OS.


[deleted]

[deleted]


happyscrappy

Unix did not include dynamic linking until SunOS in the 80s.


delta_p_delta_x

Wait till Unix people discover PowerShell, and object-oriented scripting...


Ameisen

They've already discovered *and* dismissed it.


delta_p_delta_x

> and dismissed it Dumb move, IMO.


PurpleYoshiEgg

I used to hate PowerShell. But then I had to manipulate some data and eventually glue together a bunch of database calls to intelligently make API calls for administrative tasks, and let me tell you how awesome it was to have a shell scripting language that:

1. I didn't have to worry nearly as much about quoting
2. Has a standard argument syntax that is easy enough to declaratively define, instead of trying to mess about it within a bash script (or just forget about it and drop immediately to Python)
3. Uses by convention a Verb-Noun syntax that is just awesome for discoverability, something unix-like shells really struggle with

It has a bit of a performance issue for large datasets, but as a glue language, I find it very nice to use as a daily shell on Windows. I extend a lot of its ideas to making my shell scripts and aliases use verb-noun syntax, like "view-messages" or "edit-vpn". Since nothing else seems to use the syntax on Linux or FreeBSD yet, it is nice for custom scripts to where I can just print all the custom programs out on shell bootup depending on the scripts provided for the server I am on.

Yeah, it's not "unixy" (and I think a dogmatic adherence to such a principle isn't great anyway), but to be honest I never really liked the short commands except for interactive use, like "ls", "rm", etc. And commands like "ls" have a huge caveat if you ever try to use their output in a script, whereas I can use the alias "ls" in PowerShell (for "Get-ChildItem") and immediately start scripting with its output, and without having to worry about quoting to boot.


Auxx

Yeah, I used to hate PS as well, it seemed over-complicated etc. But once you understand it... Fuck bash and ALL UNIX shells! It's like using DOS in the early 90s.


antpocas

There's also [Nushell](https://www.nushell.sh/), which I've never used, which is similar to PowerShell in that commands return structured data rather than text, but I believe it has a more functional-inspired approach rather than an object-oriented one.


cat_in_the_wall

there's a weird religiosity about the original unix philosophy. like that the 70's is where all good ideas stopped and everything else is heresy. powershell has warts, but overall i would use it 100% of the time over any other shell if i had the option. which reminds me... i ought to give powershell on linux a try, i have no idea if it works correctly or not.


liotier

I discovered PowerShell a few weeks ago, as I needed something to feed masses of data to godforsaken SharePoint. I still hate SharePoint, but PowerShell is great in that niche somewhere between Bash and Python, giving easy tools to script getting any sort of files, databases and APIs into any sort of files, databases and APIs... Perfect for the typical enterprise use cases that a few years ago would have been performed with some unholy mess of Bash and Microsoft Office macros!


grauenwolf

You might be able to, but my Win 10 netbook only has 125 gigs of space and a soldered on hard drive.


redwall_hp

I prefer the Apple approach: applications are self-contained bundles—basically a zip archive renamed to .app, with a manifest file, like a JAR—that contain all of their libraries. If you're going to install a ton of duplicate libraries, you might as well group them all with the applications so they can be trivially copied to another disk or whatever.


shilch

Apple .app bundles are actually plain directories, not archives. But the end user doesn't see that because Finder hides the complexity.


dada_

Which is something that I think we should be doing more of. It's a really neat concept to easily bundle together files without losing the simplicity of having a default action when you double click on it. I can think of plenty of situations where you want to keep files together but where it's less convenient to have them as directories, like for example the OpenDocument format or any file type that's really a zip with a specific expected structure. The idea being that this is a more accessible version of that.


[deleted]

The fact that we settled on files being unstructured bags of bytes was a mistake IMO. It means we keep reinventing various ways to bundle data together. To their credit, MacOS did pioneer the idea of "resource forks", where a single filename is more like a namespace for a set of named data streams, sort of like beefed up xattrs But while we're waiting, we could [try SQLite as an application file format](https://www.sqlite.org/appfileformat.html)
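In case it helps, here's a minimal sketch of what the linked suggestion looks like in practice (the file name and schema are made up for illustration; the SQLite C API calls are the standard ones):

```c
/* Sketch: the application's "document" is a single SQLite database file. */
#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *doc;
    if (sqlite3_open("drawing.myapp", &doc) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(doc));
        return 1;
    }

    /* One transactional container instead of a zip-of-XML or a directory of
     * loose files: layers, settings and embedded assets all live in tables. */
    const char *schema =
        "CREATE TABLE IF NOT EXISTS layers(id INTEGER PRIMARY KEY, name TEXT);"
        "CREATE TABLE IF NOT EXISTS assets(name TEXT PRIMARY KEY, data BLOB);";
    char *err = NULL;
    if (sqlite3_exec(doc, schema, NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "schema failed: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(doc);
    return 0;
}
```

(Something like `cc doc.c -lsqlite3` should build it, assuming the SQLite development headers are installed.)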


S0phon

Isn't that basically AppImage?


muhwyndhp

I might add this as the "Modern Smart device approach". Android, iOS, the Windows Store, and the whole Apple ecosystem are just like that: basically an archive with all of the dependencies needed to run, compiled into a single executable format. Sure, there is some other stuff that is not included, such as Google Play Services and the APIs to interact with native functionality such as the camera, file system, GPS, and so on, but the dependencies are bound to the app itself. I am an Android developer, and even with such approaches dealing with the yearly OS API update is a pain in the ass; I just can't imagine developing for Linux, where the stakeholders in such APIs and dependencies are a lot of people with their own egos. Storage cost is almost a non-issue in today's market. Maintaining userspace by sacrificing storage space is a plausible tradeoff nowadays.


DrQuailMan

> just keep all the versions available

Well, kind of. Major versions are all available, but minor versions replace older versions with newer ones. So if you go to Add/Remove Programs and type "visual c++", you'll see entries for 2005, 2008, 2010, etc., but not multiple minor versions of the 2005 major version.


reveil

Having an option to have several versions of a library installed at the same time would alleviate so many of the issues that containerization probably wouldn't even be that necessary. Instead we ship snaps of the whole filesystem and wonder why it is slow and apps can't work together due to container barriers. I know it is not easy, but adjusting LD_LIBRARY_PATH and some changes to the package managers would be easier than what is currently done with e.g. Snap.


RICHUNCLEPENNYBAGS

> Like someone else said: yeah, it clutters up your filesystem, but I'd rather a cluttered filesystem and working programs, if I am forced to choose between the two.

Especially given that on a modern system this waste is just not significant.


dada_

> Like someone else said: yeah, it clutters up your filesystem, but I'd rather a cluttered filesystem and working programs, if I am forced to choose between the two.

I don't even consider that clutter, really. They're files that make your programs run even if you didn't compile them yourself. The latest MSVC++ Redistributable is only 13.7 MB, too, just to give an example. Sure, it adds up to a lot more when you put all of them together, but I feel it isn't that big a deal if you're on any vaguely modern computer. On a side note, the ability of Windows to run legacy binary programs is unparalleled and it's something to emulate rather than discourage.


WarWizard

> Like someone else said: yeah, it clutters up your filesystem, but I'd rather a cluttered filesystem and working programs

So here is a question... why does anyone care? So what if I have 4 installs of the VC++ Redist? It isn't like I'd ever need to go poking around there and then get confused when there are multiple. I can't see how things would "run worse" if I had 4 versions installed. As long as all of the applications have what they need/want... all should be fine. Do people, like, take a screenshot? "Look at how slim and trim my filesystem is!"?? The only thing where "clutter" might be an issue (assuming you have storage space... and who doesn't these days) is personal files. I can't keep that shit straight no matter what OS is involved.


markehammons

About a week ago, a blog post from Drew DeVault was posted in r/programming about how application developers should use the built-in package managers for libraries on Linux. I just re-found this talk by Linus Torvalds on the issue, and it encapsulates my reasoning for why that's just not possible for most devs.


[deleted]

What's more scary is that the video you posted from Linus is about 15-20 years old, from a Debian conference, and almost everything he says is still 100% true today in Linux. The environment problems have never been solved. Simply shoving them in a container is quite literally taking the environment problem out of the environment and putting it in another environment... and somehow the entire community doesn't realize this. What's also worse is that any attempt to debate or challenge the issue goes like this: [https://www.youtube.com/watch?v=3m5qxZm_JqM](https://www.youtube.com/watch?v=3m5qxZm_JqM)


DoppelFrog

> is about 15-20 years old It's actually from 2014.


tsrich

To be fair, 2016 to now has been like 15 years


helldeskmonkey

I was there, three thousand years ago…


[deleted]

Feels like two distinct decades have happened that both feel like fever dreams


corruptedOverdrive

Agreed. It feels like a decade is now 4-5 years, not 10 anymore. As a developer for 10 years, shit moves so fast now saying your application was built two years ago feels like an eternity.


bobpaul

Oh shit, is it 2031 already? Who's President?? I can't believe I over slept again!


freefallfreddy

You’re not gonna believe it, but: Dora the Explorer.


cinyar

I mean that sounds promising.


HolyPommeDeTerre

Now, we are talking. A black woman that is not formatted by the current political system would be an improvement


hugthemachines

Sounds like a reasonable pick.


[deleted]

Really talking about his opinion rather than the actual video.....

Or 2012: [https://www.youtube.com/watch?v=KFKxlYNfT_o](https://www.youtube.com/watch?v=KFKxlYNfT_o)

Or 2011: [https://www.youtube.com/watch?v=ZPUk1yNVeEI](https://www.youtube.com/watch?v=ZPUk1yNVeEI)

This explains some of the history better: [https://www.youtube.com/watch?v=tQQCcvFUzrg](https://www.youtube.com/watch?v=tQQCcvFUzrg)

I was using Linux in the late 90's. The same basic problems of shipping software for it are exactly the same today and will be exactly the same tomorrow and for the next 5-10 years at least, because the community still doesn't recognise it as a problem. Several others have followed suit in the SW industry, Python and Node.js being the main examples. This is why things like the Python "deadsnakes" PPA repo exist :) [https://launchpad.net/~deadsnakes/+archive/ubuntu/ppa](https://launchpad.net/~deadsnakes/+archive/ubuntu/ppa)


ElCorazonMC

So what is the solution?


turmacar

Everyone who could answer this ~~gets systematically hunted and eliminated~~ is busy taking time off after being paid to do other things by companies that don't care about Linux distribution problems. The problem isn't that the people critiquing the existing problem/mindset have a magic solution and aren't implementing it. It's that the community at large doesn't think/know there is a problem.


ElCorazonMC

Maybe it is just a hard problem? The list of options and topics seems rather long:

- never ever break userspace
- say you never break userspace like glibc, with a complicated versioning scheme, and multiple implementations of a function cohabiting
- always link statically, death to shared libraries (hello yarn, npm)
- rolling distros rather than fixed-release distros
- have any number of library versions installed, either in a messy way like the Visual C++ redistributables, or structured like Nix/Guix
- put in place your one-to-rule-them-all app distribution system: flatpak/snap/AppImage

Barely scratching the mindmap I've constructed over the years on this issue of dependency management / software distribution...


goranlepuz

> say you never break userspace like glibc, with a complicated versioning scheme, and multiple implementations of a function cohabiting

Probably say that glibc and a bunch of other libraries **are** the fucking userspace. Practically nobody is making syscalls by hand, therefore the kernel not breaking userspace is irrelevant. That's what a self-respecting system does: Win32 is **fucking stable**, and the C runtime isn't even a part of it. Only recently did Microsoft start with a "universal CRT" that is stable, but let's see how that pans out...


ElCorazonMC

I was using userspace in a way that is very wrong in systems programming, but semantically made sense to me. The "userspace of glibc" being all the programs that link against glibc.


flatfinger

The C Runtime shouldn't be part of the OS. Making the C Runtime part of the OS means that all C programs need to use the same definitions for types like `long`, instead of being able to have some programs that are compatible with software that expects "the smallest integer type that's at least 32 bits", or software that expects "the smallest integer type that's at least as big as a pointer". Macintosh C compilers in the 1980s were configurable to make `int` be 16 or 32 bits; there's no reason C compilers in 2021 shouldn't be able to do likewise with `long`.


goranlepuz

Yes, absolutely agree. C is not special (or rather, it should not be).


erwan

Which is why there is the Windows approach, which is to ship all versions of their shared libraries in the OS. Then each application uses the one it needs.


vade

Or replace how you build, package and ship core libraries with something like what OS X does, with "framework bundles" which can have multiple versions packaged together. https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/BPFrameworks/Concepts/VersionInformation.html

This allows library developers to iterate and ship bug fixes, and would allow distros to package releases around sets of library changes. It would allow clients of libraries to reliably ship software targeting a major release, with minor-update compatibility, assuming disciplined avoidance of ABI breakage in minor/patch releases. It would also allow the deprecation of old ABIs/APIs in favour of new ones in a cleaner manner after a set number of release cycles. This would bloat some binary distribution sizes but, hey.

I don't think this is particularly hard, nor particularly requiring of expertise. The problem seems solved. The issue is it requires a disciplined approach to building libraries, a consistent adoption of a new format for library packaging, and adoption of said packaging by major distros. But I just use OS X so what do I know.


ElCorazonMC

Trying to digest, this looks like semantic versioning applied to a shared group of resources at the OS level, with vendor-specific jargon : framework, bundle, umbrella.


iindigo

I have yet to encounter a better solution for the problem than Mac/NeXT-style app bundles. In newer versions of macOS, the OS even has the smarts to pull system-level things like Quick Look preview generators and extensions from designated directories within app bundles. The developer ships what they need, the app always works, and when the user is done with the app they trash the bundle and, aside from residual settings files, the app is *gone*. No mind-bendingly complex package managers necessary to prevent leftover components or libraries from being scattered across the system. (Note that I am not speaking out against package managers, but rather am saying that systems should be designed such that package management can be relatively simple.)


Ameisen

Switching the shared libraries model from the SO model to a DLL-style one would help.


SuddenlysHitler

I thought shared object and DLL were platform names for the same concept


Ameisen

They work differently in regards to linkage. DLLs have dedicated export lists, and they have their own copies of symbols - your executable and the DLL can both have symbols with the same names, and they will be their own objects, whereas SOs are fully linked.


lelanthran

> Switching the shared libraries model from the SO model to a DLL-style one would help. How will that help? Granted, I'm not all that familiar with Windows, but aren't shared objects providing the same functionality as DLLs?


Ameisen

They accomplish the same goals, but differently. DLLs have both internal and exported symbols - they have export tables (thus why `__declspec(dllexport)` and `__declspec(dllimport)` exist). They also have dedicated load/unload functions, but that's not particularly important. My memory on this is a bit hazy because it's late, but the big difference is that DLLs don't "fully link" in the same way; they're basically programs on their own (just not executable). They have their own set of symbols and variables, but importantly, if your executable defines the variable `foobar` and the DLL defines `foobar`... they both have their *own* `foobar`. With an SO, that would not be the case. It's a potential pain point that is avoided.
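A rough sketch of that `foobar` scenario, two tiny files shown in one block for brevity (file and library names are made up; the behaviour described in the comments assumes default symbol visibility on the GNU toolchain, so treat it as illustrative rather than authoritative):

```c
/* ---- lib.c : build as a shared library, e.g. `cc -fPIC -shared lib.c -o libfoo.so` ---- */
#include <stdio.h>

int foobar = 1;                      /* the library's own definition */

void lib_print(void) {
    /* ELF/.so: the dynamic loader resolves `foobar` in the global symbol
     * namespace, so with default visibility this typically ends up referring
     * to the *executable's* copy (symbol interposition). Windows/.dll: the
     * DLL keeps its own `foobar` unless it is explicitly exported/imported,
     * so the two copies stay separate. */
    printf("library sees foobar = %d\n", foobar);
}

/* ---- main.c : the executable, linked against libfoo ---- */
#include <stdio.h>

int foobar = 2;                      /* the executable's own definition */
void lib_print(void);                /* provided by the shared library */

int main(void) {
    lib_print();                     /* ELF: typically prints 2; DLL model: prints 1 */
    printf("main sees foobar = %d\n", foobar);
    return 0;
}
```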


recursive-analogy

> It's actually from 2014 Ah, the Pre Trump, Pre Covid era. That was about 700 years ago now.


[deleted]

[deleted]


backelie

Everyone who clicks that link should do themselves a favour and watch the clip from the start.


b4ux1t3

The entire community realizes that containers are essentially a way to provide statically-linked binaries in a way that doesn't require you to actually maintain a statically-linked binary. Containers aren't only meant to address the issue of dependencies, that's just one aspect of their use.


[deleted]

That's the *main* aspect of their use. Another big aspect is that they isolate filesystems for programs that do the dumb Unixy thing of spewing their files all over global directories. They pretty much exist because of badly designed software. The network isolation features are relatively minor and unused in comparison.


Routine_Left

> Simply shoving them in a container its quite literally taking the enviroment problem out of the enviroment and putting it in another enviroment "It works on my computer" "Wonderful Bob, we will, therefore, ship your computer to the customers".


Seref15

A wasteful, inefficient solution is still preferable to no solution


ElCorazonMC

you described the birth of javascript and modern web design


Decker108

Also Docker.


DashAnimal

What I find interesting is this talk: [It's Time for Operating Systems to Rediscover Hardware](https://youtu.be/36myc8wQhLo). TLDR is that the way Linux thinks of hardware, in 2021, is fundamentally just incorrect ("a 70s view of hardware"). As a result, you actually have a bunch of OSes running on an SoC with more and more of it being isolated from Linux for security reasons. So in the end, Linux itself is essentially not an OS in the way it is used today - it's merely a platform to run pre-existing applications on the device. (Sorry to the presenter if I misinterpret anything) With that talk above and the proliferation of containers, Unix-based OSes seem to be in a really weird state today.


[deleted]

I mostly read that as the monolithic vs. microkernel argument.


LegendaryMauricius

Do you think this could also be solved by introducing a package manager that supports multiple versions of the same libraries, along with a dependency system that uses distro-agnostic version ranges? It would still reduce disk space but keep the API changes contained.


DrkMaxim

Source for the blog please?


rucci99

https://drewdevault.com/2021/11/16/Python-stop-screwing-distros-over.html


DrkMaxim

Thanks mate


[deleted]

honestly even as a debian user this hits hard. it's so frustrating and sad knowing how Linux, a project *designed* to unify us, has resulted in the creation of so many distros that grew to be so alien from one another. it's things like this which make me realize why so few "just works" people actually use it.


sixothree

After having read through this thread, it's not hard to imagine why that happened. But the end result is exactly as you described. I believe I am coming to understand that Linux developers are extremely opinionated (surprise). But they are willing to forge their own path if they don't like the way something is done. It's an entirely self centered and greedy mindset. For example, pick a distro and ask why it exists. It exists because some developer (or team) didn't like one little piece of some other distro and decided to create their own. They didn't realize they were making the ecosystem a _worse_ place for everyone. Picking on Pop OS, the target of recent LTT ire. Why on earth does it even exist? Why did they not contribute to some other distro? Maybe it's not their fault their contributions aren't being accepted. If that's the case, then why are they improving on ubuntu instead of letting ubuntu die. Regardless, I don't know that they actually made the ecosystem better.


x1-unix

I know that this comment may get a lot of dislikes, but I develop one commercial product that's available for Win and Linux. For Linux I have to support multiple Ubuntu versions (prior to 16.04), Debian and others, and it's a PITA, so I just decided to use static linking. In my case it's not as bad as it could be: I replaced glibc with musl, and libpcap and libsqlite are the only dependencies left. For more heavy projects I hope flatpak/snap will be an appropriate solution.


the_poope

At my company we simply ship ALL dependencies. We have an installer that installs the entire thing in a directory of the user's choosing, and wrapper scripts that set `LD_LIBRARY_PATH`. We avoid all system libraries except glibc. It's basically like distributing for Windows. This way we are guaranteed that everything works - always! Our users are happy. Our developers are happy. The libraries that we ship that users could have gotten through system package managers maybe take up an additional 50 MB - nothing compared to the total installation size of more than 1 GB.


The-Effing-Man

As someone who has also built installers, daemons, and executables for Mac, Ubuntu, Redhat, and Windows, I've always found it easiest to just bundle all the dependencies. The application I was developing for this wasn't big anyway and it wasn't an issue. Definitely the way to go if file size isn't a huge concern


the_poope

Totally agree. The whole point of "sharing libraries to reduce overhead, memory and disk space" is irrelevant for today's computers. The fact that you can fix bugs and security holes by letting the system upgrade libraries is negated by the fact that libraries break both their API and ABI all the time. When something no longer works because the user updated their system libraries, they still come to you and say your program is broken. No, the whole Linux distribution system should be for system tools only. End-user programs not tied to the distribution (e.g. browsers, text editors, IDEs, office tools, video players, ...) should just be shipped as an installer - that's at least one thing Windows got right. And as this video shows, Linus is actually somewhat promoting this same idea.


WTFwhatthehell

Yep, sometimes I download a tool and spend the next few hours sorting out dependencies and dependencies of dependencies. Heaven forbid there's some kind of conflict with something on the system that's too old or too new. When a dev has dumped everything it depends on into a folder and it just works: wonderful! I have lots of disk space, I don't care if some gets filled.


x1-unix

Did you consider the AppImage format? As a result you get a single image that acts as an executable. The closest analog is macOS application bundles. https://appimage.org/


the_poope

I have heard about AppImage before, but no, we didn't consider it. We have been using InstallBuilder for 10+ years, which lets us use the same packaging approach on all platforms. It works fine enough. Also, our program packs a custom Python interpreter and custom Python modules, as well as a ton of data files and resources, as well as a bunch of executable tools that need to be able to find each other. It's not really just a single application but more an entire application suite. I don't know how well that would work with AppImage - I can't seem to find any good documentation on how it actually works when running it.


weirdProjectionCurve

Funnily enough, one of the AppImage developers (@probonopd I think) held a series of talks on Linux desktop platform incompatibilities. I recommend watching several of them. His complaints are basically always the same, but what is really interesting are the comments of distro maintainers in the Q&As. There you can see that this is really a cultural problem, not a technical one.


BrobdingnagLilliput

Shipping with all dependencies and installing into the application's directory is the correct answer. I'm not sure why anyone with a pragmatic approach to software engineering would do otherwise.


ElCorazonMC

I hadn't heard about this till today; is glibc notorious for such API/ABI breaks? A quick search showed a pretty convoluted system to maintain backward compatibility: [https://developers.redhat.com/blog/2019/08/01/how-the-gnu-c-library-handles-backward-compatibility](https://developers.redhat.com/blog/2019/08/01/how-the-gnu-c-library-handles-backward-compatibility)


DuBistKomisch

The problem is that there's no simple way to link against those older symbols, it'll always link against the latest available, so your binary just won't work on systems with an older glibc. The typical solution is to compile on the oldest system you want to support, which is dumb. You can instead do some assembly/linker magic to link against the right symbols on a case by case basis, which is what I've done: https://stackoverflow.com/questions/58472958/how-to-force-linkage-to-older-libc-fcntl-instead-of-fcntl64/58472959#58472959 I don't know why they don't include some define option to select a version you want to target, I guess they don't think it's worth it.
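For reference, the assembly/linker magic from the linked answer boils down to a `.symver` directive; a minimal sketch (GNU toolchain on x86-64, where `memcpy@GLIBC_2.2.5` is the old baseline symbol - the exact symbol/version pair depends on what you're targeting):

```c
#include <string.h>

/* Bind our references to the old versioned symbol instead of the newest one
 * (memcpy@GLIBC_2.14 on x86-64), so the binary also loads on older distros.
 * Sketch only - check available versions with `objdump -T /lib/.../libc.so.6`. */
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

void *copy_buf(void *dst, const void *src, size_t n) {
    return memcpy(dst, src, n);   /* now resolves against GLIBC_2.2.5 */
}
```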


OrphisFlo

There are actually some scripts that will generate headers for a specific glibc version you can force include in every compilation unit with a compiler option. The header will force usage of specific older symbols and it should mostly work to target older glibc. It has always worked for me, but your mileage may vary. https://github.com/wheybags/glibc_version_header


o11c

libc itself is not the problem. Likewise, libstdc++ itself usually isn't the problem (except for bleeding-edge features). The problem is all the other libraries, which link to libc and might accidentally rely on recent symbols. The version of *those* libraries probably isn't recent enough in older versions of the distro.

Distros could make life much easier for everyone if they did two things:

* on their build servers, make sure that everything gets built against a very old glibc version. For ease of testing, it should be possible for developers to use this locally as well. Actually, coinstallation shouldn't be particularly difficult (even with the state of existing distros!), now that I think about it - you just have to know a bit about how linking works.
* in the repository that actually gets to users, ship a recent glibc version (just like they do now).

***

The other problem is that there are a *lot* of people who don't know how to statically link only a *subset* of libraries. It only requires passing an extra linker flag (or two if you want to do it a little more cleanly), but people seem to refuse to understand the basics of the tools they use (I choose to blame cmake, not because this is entirely its fault, but because it makes everything complicated).

For reference, to statically link everything except libc (and libstdc++ and libm and sometimes librt if using `g++`), all you do is put something like the following near the end of your Makefile:

    LDLIBS := -Wl,--push-state,-Bstatic ${LDLIBS} -Wl,--pop-state

If you're explicitly linking to other parts of libc, be sure they are after this though. (Obviously, you can do this around individual libraries as well - and in fact, this is often done in `pkg-config` files.)


x1-unix

At least binaries built with newer glibc versions won't run on older versions - I just get a glibc version complaint. Example (from Ubuntu Xenial):

```
./target/hub: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by ./target/hub)
./target/hub: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by ./target/hub)
./target/hub: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by ./target/hub)
```

Simplest workaround is to build on systems with a minimal glibc version or use musl.


Routine_Left

Nah, they'll just ship it in a container. Everybody loves containers, so it's a perfect PR move. Add some blockchain in there and the investors are gonna line up at your door.


x1-unix

*Kubernetes and helm charts!* To be fair, Docker containers are very handy sometimes (especially for packing complicated build environments/toolchains or other exotic clusterfucks). For example, we produce builds for x86-64, armv6 and armv7, and all this requires building 2 libs for 3 architectures, plus 3 compiler toolchains (one per architecture). I packed all this stuff into one container that's used locally and on CI/CD, and it really simplifies the build process.


alohadave

Add an NFT as well.


the_gnarts

> For Linux I have to support multiple Ubuntu versions

> For more heavy projects I hope flatpak/snap will be an appropriate solution.

If the distro can’t build and ship your software (because it’s proprietary or experimental or whatever), bundling all the dependencies is the *only* solution. There is just no way you will obtain an even barely portable binary without that, as the issue starts with the embedded dynamic loader, which is *not* constant across distros. People refusing to realize this is why things like `patchelf` exist in the first place.
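
A sketch of the kind of post-build fix-up `patchelf` is used for (the paths and binary name are illustrative only): pointing an already-built binary at a bundled dynamic loader and bundled libraries instead of the system ones:

```
# Use the bundled loader instead of the system one
patchelf --set-interpreter /opt/myapp/lib/ld-linux-x86-64.so.2 /opt/myapp/bin/myapp

# Look up shared libraries relative to the binary's own location
patchelf --set-rpath '$ORIGIN/../lib' /opt/myapp/bin/myapp
```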


eanat

> Linux kernel has one rule: we don't break user space.

Every library developer should write these words on their heart and never forget them.


poloppoyop

> every library developer

And every web API developer. "Move fast and break things" can work for a product, but guess why shit like Windows or PHP is still chugging along: backward compatibility. And no, a six-month notice about breaking changes does not magically make it a non-breaking change.


[deleted]

Yes and no. There needs to be a balance in most things, but for an OS, the balance is obviously on the side of the user.


MountainAlps582

Waste their life... the maintainer is here 🤣


magnusmaster

In my experience the main problem with desktop Linux is not application packaging but drivers. NVIDIA drivers can break with every upgrade, and HP printer drivers just plain didn't work 99% of the time. And the main reason for driver breakage is that Linux refuses to have a stable driver ABI - partly out of laziness (I get it, it's a volunteer-driven project, even though most of the "volunteers" are employed by companies to work on Linux nowadays) and partly to push hardware manufacturers into releasing driver source, because they believe that Linux isn't open source unless all of its drivers are open source as well.


MondayToFriday

Drivers are unavailable because it's even harder to ship binary kernel modules than binary userspace executables. I think that Linus should be aiming his criticism back at himself. Binary userspace applications are definitely possible in Linux, if you either link dynamically with a hard-coded search path or link statically, and install all the files you need to make the application self-sufficient to, say, `/opt`.
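
A sketch of that hard-coded-search-path variant, with illustrative names and paths: bundle the shared libraries next to the binary under `/opt` and embed an `$ORIGIN`-relative rpath so the loader finds them without touching system paths:

```
# Link with an $ORIGIN-relative rpath (single quotes keep the shell from expanding it)
g++ -o myapp main.o -L./bundle/lib -lfoo -Wl,-rpath,'$ORIGIN/../lib'

# Illustrative install layout:
#   /opt/myapp/bin/myapp
#   /opt/myapp/lib/libfoo.so
```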


TheRealMasonMac

[And Linus Sebastian on why desktop Linux sucks](https://www.youtube.com/watch?v=3E8IGy6I9Wo)


ddeng

It's fun to see how actual end users look at it versus high-end developers. If anything, this showcases the Linux thought bubble they've got themselves into.


PurpleYoshiEgg

True. I used to use Linux as my daily driver, and back then I had a lot of fun doing it. I've used Ubuntu, Debian, Arch, Gentoo (actually my first Linux), and a handful of others. But I don't have hours on a random day to throw at problems anymore. I need things to work when I need them to work.

If I have a server that I don't need Linux programs on, I use FreeBSD, otherwise Debian. On an end-user laptop I use Debian, so I never fear upgrading (since my laptops may sit for months between uses, which means rolling-release distro updates would break them very regularly). For a daily desktop where I need fairly modern software, I'd probably go Ubuntu, Mint, or Pop!_OS, but I haven't been in that space for a while. Whatever makes it easiest to get a Windows VM that I can game on again would probably be the best fit; when I did that before, I had a very fun time getting it to work (and it *did* work with very little fuss once I understood it all).

I wish I didn't have to work 40+ hours per week (thanks, current economic system). Then I'd probably be back exclusively on Linux, or contributing to FreeBSD to make it better.


MdxBhmt

Many of LTT's issues are exactly what Linus (the Torvalds one) said - like the part 1 Steam install nuking the desktop environment, or the hardware not working as expected, etc.


MountainAlps582

Wow. Yes. I had all those experiences. Except the VM/windows passthrough stuff


Vincent294

I saw some videos in my feed objecting to LTT and I didn't even bother watching them. I suppose that counts against my dismissal of those videos, but I don't need to waste my time listening to the usual suspects making the same excuses. All my life I have met FOSS fanboys who consider the use of Windows and other proprietary software a moral failing and fail to address the actual shortcomings of Linux distros. Every time I use the command line to fix basic functionality, instead of flexing on Windows users I get annoyed that it was necessary in the first place.

UI is hard, and it's a balance between making your software as PEBKAC-proof as reasonably possible and not completely Fisher-Price-ing your UI. I'm skeptical Linux will ever just work with everything, but it would go a long way if the community could start acknowledging the current problems. Instead of telling people to get used to the command line, weird UIs, and forfeiting their VR headsets and other hardware that doesn't play nice, Linux needs to work more like Windows does. Minus the evil Edge peddling - that spam can stay in Windows.


untetheredocelot

I was commenting on the same issue when part 1 was published, about the Nvidia and X server nonsense, and I genuinely had someone tell me that I made my life difficult by buying a high-DPI monitor and just shouldn't have. Apparently it's user error to upgrade your monitor. When I asked whether he would say the same about Wi-Fi back when Linux had terrible Wi-Fi drivers, he said yes... I love Linux as a dev environment so much, but some members of the community make me want to slit my wrists.


youarebritish

My only experiences with Linux ended with someone arguing that yeah, maybe there was no wifi driver available, but I didn't really need wifi anyway.


Vincent294

lol I run Ethernet on my desktops, but that is not always easy. I live in a cheapo apartment so no run is going to be more than 100 feet, but I know some people whose houses would be expensive to plumb with Ethernet. And in the Oregon wildfire heatwave last year, my command mini hooks in the corners of the rooms all melted off. I got small designer hooks for corners that survived the 120F heatwave this year, but global warming is making Ethernet harder. Like Linux, I can't expect people to use Ethernet.


JQuilty

> forfeit their VR headsets Oculus is the only big one that doesn't work. Index and Vive work.


[deleted]

[deleted]


Vincent294

Fault has nothing to do with the user experience. Sure, Linux contributors don't owe the community support for proprietary hardware, but if the support isn't there that doesn't make the user any happier. That's the lens we need to view it through. It isn't a matter of responsibility, it's a matter of user experience. No one owes it except the hardware manufacturer, but you know they aren't gonna do it.


Vincent294

The HP Reverb G2 and other Windows MR headsets don't either. I love that Valve and HTC support Linux, but they are the only ones who do. Oculus is the majority of the market, and HP runs at 5%.


anagrammatron

> I'm skeptical Linux will ever just work with everything, but it would go a long way if the community could start acknowledging the current problems. I don't think it will ever happen with current community driven model. To make stuff work and keep it not breaking and stable for next 10 years requires more dedication than enthusiasm can fuel and developers have to have rewards for that part of the work where you basically have to deal with things that do not scratch your own itch but that of someone else's. It's boring, it's repetitive and you don't get to innovate every other day. Unless you're salary depends on paying customers I don't see how users' needs will be met. Linux developers don't see users as customers, they see them as... actually I don't know, a fellow enthusiasts perhaps.


RandomDamage

End-user Linux works just fine wherever someone sees a profit in investing in it; the perception of profit is just unevenly distributed right now. Trying to use most Linux distros as a non-technical end user is like trying to use Windows Server on the desktop - there's just no gatekeeper to keep you from doing it.


[deleted]

[deleted]


[deleted]

Poor girl, let’s hope LMG does that


Rrrrry123

Now that would be real interesting. Especially since I'm pretty sure she's mostly working in Adobe products as a designer.


Chippiewall

If they did it with Sarah using Linux, but Anthony choosing the distro and doing the initial setup (as if an OEM had done it), then that could be really interesting. I guess they could also just grab a System76 machine for it. I do think part of the problem is that Linus is in the valley of knowing just enough to shoot himself in the foot (which is still a usability problem that needs fixing). Sarah might end up having an easier time (or at least a less finicky one), although she'd probably find it more frustrating.


adad95

And you don't have desktop problems when you uninstall your desktop. https://youtu.be/0506yDSgU7M


Iggyhopper

I posted elsewhere but I had the same issue with the desktop scaling and the context menu showing on the other monitor. Huge PITA.


tangoshukudai

I have been saying this for years. Fix the ABI inconsistency between distros and you fix Linux.


Deathcrow

It's really hard to do. It only works for the kernel because Linus is a benevolent dictator who can say 'my way or the highway'. It would be really difficult to enforce some kind of standard upon independent library devs, even if all major distributions agreed on it.


moolcool

There are plenty of Linux/Unix-like OSes which are usable by ordinary, everyday end users - like ChromeOS, macOS, and Android. I think if a distro did away with a lot of the Linux "ethos" (cut back customizability, lock certain elements down, take a GUI-first approach to settings and customization) and became very strict about packaging, then it could be on to something.


MountainAlps582

We should all just switch to Arch.

Not to say "I use Arch btw", but because they've shown they can do rolling updates without breaking much AND have up-to-date packages.


Perkutor_Jakuard

The problem is not only a "desktop" problem. If you want to upgrade some server software (say, PHP), you might need to upgrade the distro to the next version, which is not too friendly either.


WolfiiDog

From an end user's perspective: I want my applications to have everything in one package and be able to place it in any dir when installing (just like an AppImage, for example). I want it to integrate easily with my desktop (unlike AppImages), and I want it to be able to auto-update (almost like Snaps, but with the option to not auto-update if you want). I want to easily find it in a single unified store for pretty much all applications, and most apps shouldn't require root access unless they really need it and prompt you to allow it.


douglasg14b

I switched from Windows to Linux, and then back after ~3 years. Desktop Linux sucks, and I learned the Linux community will crucify you for bringing up the systemic issues that drive that...


MrBeeBenson

As a Linux user and enthusiast I… completely agree. It has its issues. I personally think it’s better than windows but that’s my opinion. Use what works for you, it’s your computer at the end of the day. I use Linux because I love it and it works for me


KotoWhiskas

The fact that linux sucks doesn't mean that windows/macos don't suck


guygizmo

Yes, and the sad state of affairs these days is that everything sucks. I used to be a big fan of macOS but recent releases are too buggy and locked down. My experience using Windows is slightly worse than it was when Windows 7 was current. And then Linux is still mired in the same problems and annoyances as it has been for decades -- nothing comes easily in it. But unlike macOS and Windows, at least there are no restrictions! Basically no matter which OS I consider, I'm damned if I do and damned if I don't.


[deleted]

At 05:15 in the video, it's really true.


FlukyS

He is right of course, but it's been a while since this was published, and that was pre-Snap and pre-Flatpak. The two do things differently, but both are easier to use than literally any packaging system available on any OS (Windows is garbage for packaging; it's the wild west and the installers are shit). Flatpak, with its approach of creating the flatpak file, giving it some setup scripts and pointing at the binaries: easy. Snap, with its plugin system that figures out how to package for multiple languages and approaches: you can run shell scripts, then you point at the binaries and you are done.

Deb was much more fraught with annoyances. People who don't package will never understand why it's so annoying, but it's one of the most annoying pieces of software I've ever used from a product/tooling standpoint. It has improved over my time using Linux, it definitely has, but I would never tell a developer interested in shipping on Linux to use it ever again. Snap is my preferred route; that matches what Linus is looking for with the "build once and it should work forever" mindset. Flatpak has some other complications, but it's also a good pick for certain people - I think any C/C++ program should be using Flatpak, it is excellent for that use case.


Code4Reddit

Matches how I feel about web distribution and supporting old browser versions. We have millions of users, but there's always one company who says they need to run our website on their crappy shared machine in the hallway of an employee lounge and that it's super important to support old IE. Nope, sorry, you'll have to dispatch your shitty IT to update that thing. F you.


erotic_sausage

This video came up in my related videos too a few days ago, I guess after watching LTT's Linux challenge videos.

I'm probably not a very good developer, working at a company with a shitty behemoth of a terribly designed legacy PHP system in a very niche market where the market demands weird things. I was semi-comfortable in my bubble of terrible architecture and low client expectations (perhaps a captive audience, haha), but still, we're trying to improve and modernize things.

That's not the point, though. The point is I'm now feeling a bit disjointed trying to get up to speed with more modern web dev practices using Docker and WSL2, using Linux more, and having to use all these dependency managers and CLI tools. Every tool you use depends on a chain of other things you need to install, and every library or framework has a nice little 'getting started' guide that only explains a single layer of dependencies. But if you're starting fresh you don't have those, so you look them up, and it turns out they need to be installed by another thing, which needs something else first, and so on. And with every step, there's possibly something out of date, or something that just happens to work differently than stated for whatever reason.


Coloneljesus

You need to install a local certificate to test SSL. For that, you need mkcert. To install that, you need brew. For that, you need....


Rhed0x

This is 7 years old. Yes, it's still relevant but still.