• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • Dualbooting is possible and easy: just gotta shrink the Windows partition and install Linux next to it. Make sure not to format the whole thing by mistake, though. A lot of Linux installers want to format the disk by default, so you have to pick manual mode and make sure to shrink (not delete and re-create!) the Windows partition.
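    In case it helps, here's a rough sketch of the shrink check from a live USB. The device name (/dev/sda2) is hypothetical; verify yours with lsblk first:

```shell
# Device names below are hypothetical; check yours with lsblk first.
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINTS               # find the Windows (NTFS) partition
sudo ntfsresize --no-action --size 120G /dev/sda2   # dry run: can the FS shrink to 120G?
# Only if the dry run succeeds: resize for real, then shrink the partition
# itself (e.g. with gparted) to match the new filesystem size.
```

    The dry run is the important part: it refuses to proceed if the NTFS filesystem has data past the target size, which is exactly the "don't destroy Windows" safety net you want.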

    As for its usefulness, however… Switching the OS is incredibly annoying. Every time you want to do that you have to shut down the system completely and boot it back up. That means you have to stop everything you’re doing, save all your progress, and then try to get back up to speed 2 minutes later. After a while the constant rebooting gets really old.

    Furthermore, Linux is a completely different system that shares only some surface-level things with Windows. Switching to it basically means re-learning how to use a computer almost from scratch, which is also incredibly frustrating.

    The two things combined very quickly turn into a temptation to just keep using the more familiar system. (Been there, done that.)

    I think I’ll have to agree with people who propose Virtual Machines as a solution.

    Running Linux in a VM on Windows would let you play around with it, tinker a little and see what software is and isn’t available on it. From there you’ll be able to decide if you’re even willing to dedicate more time and effort to learning it.

    If you decide to continue, you can dual boot Windows and Linux: not so much to switch between the two, but to have a way to back out of the experiment.

    Instead, the roles of the OSes could be reversed: a second copy of Windows could be installed in a VM, which, in turn, would run on Linux.

    That way, you’d still have a way to run some more picky Windows software (that is, software that refuses to work in Wine) without actually booting into Windows.

    This approach would maximize exposure to Linux, while still allowing you to back out of the experiment at any moment.
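    As a sketch of the first step, here's how creating a Linux test VM looks from the command line with VirtualBox's VBoxManage (the VM name, sizes, and ISO path are all made up; the GUI does the same thing):

```shell
# All names, sizes and paths here are hypothetical.
VBoxManage createvm --name linux-test --ostype Ubuntu_64 --register
VBoxManage modifyvm linux-test --memory 4096 --cpus 2
VBoxManage createmedium disk --filename linux-test.vdi --size 40960   # ~40 GB
VBoxManage storagectl linux-test --name SATA --add sata
VBoxManage storageattach linux-test --storagectl SATA --port 0 --device 0 \
    --type hdd --medium linux-test.vdi
VBoxManage storageattach linux-test --storagectl SATA --port 1 --device 0 \
    --type dvddrive --medium ubuntu.iso
VBoxManage startvm linux-test
```

    The nice part of the VM route is that deleting the whole experiment is one `VBoxManage unregistervm linux-test --delete` away.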


  • S410@kbin.social to Linux@lemmy.ml · I dislike wayland · 4 months ago

    Wayland has its fair share of problems that haven’t been solved yet, but most of those points are nonsense.

    If that person lived a little over a hundred years ago and wrote a rant about cars vs horses instead, it’d go something like this:

    Think twice before abandoning Horses. Cars break everything!
    Cars break if you stuff hay in the fuel tank!
    Cars are incompatible with horse shoes!
    You can’t shove your dick in a car’s mouth!

    The rant you’re linking makes about as much sense.




    You’re linking a post… From 2010. AMD replaced radeon with their open-source driver (AMDgpu) in 2015. That’s what pretty much any AMD GPU that came out in the last 10 years uses now.

    Furthermore, the AMDgpu drivers are in-tree drivers, and AMD actively collaborate with the kernel maintainers and developers of other graphics related projects.

    As for Nvidia: their kernel modules are better than nothing, but they don’t contain a whole lot in terms of actual implementation. Before, we had a solid black box; now, with those modules, we know that the black box has around 900 holes and what goes in and out of them.

    Furthermore, if you look at the page you’ve linked, you’ll see that “the GitHub repository will function mostly as a snapshot of each driver release”. While the possibility of contributing is mentioned… Well, it’s Nvidia. It took them several years to finally give up trying to force EGLStreams and implement GBM, which was already adopted as the de facto standard by literally everybody else.

    The modules are not useless. Nvidia tend to not publish any documentation whatsoever, so it’s probably better than nothing and probably of some use to the nouveau driver developers… But it’s not like Nvidia came out and offered to work on nouveau to make it up to par with their proprietary drivers.


  • k, so for the least used hardware, linux works fine.

    Yeah, basically. Which raises a question: how can companies with much smaller market share justify providing support, while Nvidia, a company that dominates the GPU market, can’t?

    The popular distros are what counts.

    Debian supports several DEs with only Gnome defaulting to Wayland. Everything else uses X11 by default.

    Some other popular distros that ship with Gnome or KDE still default to X11 too. Pop!_OS, for example. Zorin. SteamOS too, technically. EndeavourOS and Manjaro are similar to Debian, since they support several DEs.

    Either way, none of those are Wayland exclusive and changing to X11 takes exactly 2 clicks on the login screen. Which isn’t necessary for anyone using AMD or Intel, and wouldn’t be necessary for Nvidia users, if Nvidia actually bothered to support their hardware properly. But I digress.

    Worked well enough for me to run into the dozen of other issues that Linux has

    Oh, it’s in no way perfect. Never claimed it was.

    I like most people want a usable environment. Linux doesn’t provide that out of the box.

    This depends both on the distro you use and on what you consider a “usable environment”.

    If you extensively use Office 365, OneDrive, need ActiveDirectory, have portable storage encrypted with BitLocker, etc., then, sure, you won’t have a good experience with any distro out there. Even if you don’t, grabbing a geek-oriented distro (e.g. Arch or Gentoo) or a barebones one (e.g. Debian) means you, again, won’t have the best experience.

    A lot of people, however, don’t really do a whole lot on their devices. The most widely used OS in the world, at this point in time, is Android, of all things.

    If all you need to do is use the web and, maybe, edit some documents or pictures now and then, Linux is perfectly capable of that.

    Real life example: I’ve switched my parents onto Linux. They’re very much not computer savvy, and Gnome, with its minimalistic, mobile-device-like UI and very visual, app-store-like program manager, is significantly easier for them to grasp. The number of issues they ask me to deal with has dropped by… a lot. Actually, every single issue this year was the printer failing to connect to the Wi-Fi, so I don’t suppose that counts as a technical issue with the computer, does it?

    wacom tablets

    I use Gnome (Wayland) with an AMD GPU. My tablet is plug and play… Unlike on Windows. Go figure.





  • OpenSUSE + KDE is a really solid choice, I’d say.

    The most important Linux advice I have is this: Linux isn’t Windows. Don’t expect things to work the same.
    Don’t try too hard to re-configure things that don’t match the way things are on Windows. If there isn’t an easy way to get a certain behavior, there’s probably a reason for it.






  • Not OP, but I have the same setup.

    I have BTRFS on /, which lives on an SSD, and ext4 on an HDD, which is /home. BTRFS can do snapshots, which is very useful in case an update (or my own stupidity) bricks the system. Meanwhile, /home is filled with junk like cache files, games, etc., which doesn’t really make sense to snapshot, but that’s actually secondary. Spinning rust is slow and BTRFS makes it even worse (at least on my hardware), which, in itself, is enough reason to avoid using it there.
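    For illustration, the pre-update snapshot step looks roughly like this (the /.snapshots location is my assumption here; tools like snapper automate the whole thing):

```shell
# /.snapshots is a hypothetical destination on the same btrfs filesystem.
sudo btrfs subvolume snapshot -r / /.snapshots/pre-update   # read-only snapshot
sudo btrfs subvolume list /                                 # verify it exists
# If an update bricks the system, restore by snapshotting the saved copy back
# over the default subvolume (exact steps depend on the subvolume layout).
```

    Snapshots are cheap because btrfs is copy-on-write: the snapshot only starts consuming space as the live filesystem diverges from it.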





  • On my devices like PCs, laptops or phones, syncthing syncs all my .rc files, configs, keys, etc.

    For things like servers, routers, etc. I rely on OpenSSH’s ability to send over environment variables to carry my aliases and functions.
    On the remote I have
    [ -n "$SSH_CONNECTION" ] && eval "$(echo "$LC_RC" | { { base64 -d || openssl base64 -d; } | gzip -d; } 2>/dev/null)"
    in whatever is loaded when I connect (.bashrc, usually)
    On the local machine
    alias ssh="$([ -z "$SSH_CONNECTION" ] && echo 'LC_RC=$(gzip < ~/.rc | base64 -w 0)') ssh"

    That’s not the best way to do that by any means (it doesn’t work with dropbear, for example), but for cases like that I have other non-generic, one-off solutions.
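    For what it’s worth, the gzip + base64 round-trip those two lines rely on can be sanity-checked locally, no SSH needed (the rc content here is made up):

```shell
# Fake rc content standing in for ~/.rc
rc='alias ll="ls -l"'
# Local side: compress and encode, like the alias does
encoded=$(printf '%s' "$rc" | gzip | base64 -w 0)
# Remote side: decode and decompress, like the .bashrc one-liner does
decoded=$(printf '%s' "$encoded" | base64 -d | gzip -d)
[ "$decoded" = "$rc" ] && echo "round-trip ok"
```

    The `-w 0` matters: it keeps the base64 payload on a single line, so it survives being passed as an environment variable value.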