• 1 Post
  • 32 Comments
Joined 1 year ago
Cake day: June 15th, 2023


  • By the time you’re ready to buy a new card, Nvidia might be working well under Wayland. They’ve already made significant changes in the past couple of years, like implementing GBM support and hardware-accelerated XWayland. To my understanding, this MR will also fix some of the remaining issues once it lands. I don’t know how much more work needs to be done after that, but the fact that they are cooperating with the free software ecosystem at all is a good sign.

    Perhaps more importantly, the free nouveau driver can now experimentally reclock Nvidia GPUs from the 2000 series onwards. With this breakthrough, it is possible that nouveau + NVK will be able to compete with the proprietary driver in the near future. If/when we have a well-supported free driver, we will probably have proper Wayland support as well.

    I’m not really in a hurry to switch to Nvidia. I’ve been quite happy with my AMD cards so far. But it’s definitely a good thing to have the option to buy from any vendor.


  • Clarification: In my previous comment I meant that the implementation was antiquated, which is why it was causing many problems.

    Although I do think that desktop icons in general are outdated, because they’re designed around a desktop metaphor that is itself outdated. Our use of computers has changed vastly over time and the original metaphors are irrelevant to today’s newcomers. Yet most desktop environments are still replicating the same 30-year-old ideas. It’s because we’re used to them (which I understand is a valid reason), not because they are necessarily the most pleasant or the most efficient.







  • I would perhaps have liked a more distribution-agnostic method of running NVMe-TCP in a way that the OS would not have to be booted.

    From the pull request:

    This all requires that the target mode stuff is included in the initrd of course. And the system will then stay in the initrd forever.

    I think that’s as minimal a boot target as you can reasonably get, or in other words you’re as far away from booting the OS as you can get.

    So now the question is whether this uses any systemd-specific interfaces beyond the .service and .target files. If not, it should not take much effort to create a wrapper init script for the executable and run it on non-systemd distros.
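
    To give a rough idea of what I mean, a minimal SysV-style wrapper could look something like the sketch below. This assumes the target-mode functionality lives in a single long-running binary; I’m using /usr/lib/systemd/systemd-storagetm purely as a placeholder path, and the real binary name, location and flags may differ, so check the PR before copying anything.

        #!/bin/sh
        # /etc/init.d/storagetm -- hypothetical wrapper for non-systemd distros
        DAEMON=/usr/lib/systemd/systemd-storagetm   # placeholder path
        PIDFILE=/run/storagetm.pid

        case "$1" in
          start)
            # expose local block devices as NVMe-TCP targets, daemonized into the background
            start-stop-daemon --start --background --make-pidfile \
              --pidfile "$PIDFILE" --exec "$DAEMON"
            ;;
          stop)
            start-stop-daemon --stop --pidfile "$PIDFILE"
            ;;
          *)
            echo "Usage: $0 {start|stop}" >&2
            exit 1
            ;;
        esac

    Whether something this simple is enough depends on how much the unit relies on socket activation, device dependencies and the like; those would have to be replicated by hand.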



  • I hate partitions. Moving and resizing partitions is not fun if you don’t correctly predict exactly how much space you need. If you really want the modularity, use btrfs subvolumes instead (rough sketch below). IMPORTANT: While it is definitely feasible, whether existing subvolumes can be retained might depend on the distro installer! Check before you commit to this approach!

    Also, consider using LVM or multi-device btrfs to make the drives act as one filesystem. That way you never have to worry about where to place your files to balance the load, but it might make removing or replacing a drive in the future harder.
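
    For what it’s worth, the btrfs versions of both ideas look roughly like this; the mountpoint, subvolume names and device names are just placeholders, so adapt them to your setup:

        # subvolumes instead of separate partitions (with the filesystem mounted at /mnt)
        btrfs subvolume create /mnt/@home
        btrfs subvolume create /mnt/@var
        # then mount each one via fstab with an option like  subvol=@home

        # a two-device filesystem that acts as one: data spread across both drives,
        # metadata mirrored (be explicit rather than relying on the multi-device defaults)
        mkfs.btrfs -d single -m raid1 /dev/sdX /dev/sdY

        # replacing a drive later is a single (long-running) online operation
        btrfs replace start /dev/sdY /dev/sdZ /mnt

    The LVM equivalent would be pvcreate/vgcreate/lvcreate, but I find the btrfs route simpler since there is one less layer to manage.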


  • But HTTPS will stop them from seeing the content you actually see on the web.

    Sure, but that was true for your ISP as well. I’m not questioning what data you’re leaking. I’m saying that it’s the same data and you only change who you leak it to if you choose to use a VPN.

    It seems like you’ve thought about it and you have made an informed choice. That’s great and I don’t have anything to argue against here. The only reason I commented is that there seems to be a trend of “just use a VPN and your data is protected” mentality, especially with all the ads in gaming/tech-related content. There was no way for me to know whether you, or the other users who would read your initial comment, were aware that using a VPN doesn’t magically protect your data if you don’t know who your provider is, so I thought I’d point it out.


  • Thanks for the explanation. I don’t really know how flash storage works. The fundamental idea of the problem I described would still apply, though, as long as the input block size for dd spans more than one page of the underlying storage.

    For example, say that exactly three pages fit in a block. If dd attempts to read pages A, B and C (ABC) and fails to read B, you would want the corresponding part zeroed in the output to preserve the offsets of all the other pages (A0C). But instead dd reads whatever it can for the entire block, then pads the rest of the block size with zeroes, effectively moving C forward (AC0). So essentially you magnify errors.


  • Thanks for the input, guys. I consider my issue resolved.

    As for the specific question I had: with conv=noerror,sync, dd can fill the blocks that failed to read with zeroes. However, this puts the zeroes at the end of the block and not over the exact bits/bytes that failed to read, meaning that a read error will invalidate the rest of the block.

    But the consensus across the sources I searched seems to be to use ddrescue instead of dd.
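
    For future reference, the two approaches being compared look roughly like this; the device and file names are placeholders:

        # dd: skip unreadable blocks and pad each short block with zeroes
        dd if=/dev/sdX of=disk.img bs=64K conv=noerror,sync status=progress

        # ddrescue: reads the easy areas first, records the bad areas in a map file,
        # and keeps every sector it could read at its correct offset in the image
        ddrescue -d /dev/sdX disk.img disk.map

    The map file also lets ddrescue resume and retry only the bad areas on later runs, which is exactly what you want on a dying drive.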


  • I have already done an rsync copy. I noticed that some files failed to transfer and thought that maybe the drive is failing. Wanting to debug and possibly rescue some more data (e.g. parts of big files that failed to transfer completely) without messing with the original copy, I tried dd, and that’s how we got here.

    Also, this was a Windows system that was used daily by a family member and has a lot of installed background/tray services with saved logins. I imagine I could figure out everything there is to keep in an rsync clone, but it might be easier to have an image that I can attach to a VM and inspect “internally”.

    So I don’t need the clone strictly speaking but it would be nice to have. Plus, I would like to know the answer for the future as well.
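
    In case it helps anyone later, the two ways I’d try to inspect such an image are sketched below; the partition number, paths and memory size are assumptions, and whether Windows actually boots happily inside a VM is another matter:

        # attach the image read-only and mount the Windows partition
        # (here assumed to be the second one, hence loop0p2)
        losetup --find --show --read-only --partscan disk.img
        mount -o ro /dev/loop0p2 /mnt/win

        # or boot it in a throwaway VM; -snapshot discards all writes,
        # so the image itself stays untouched
        qemu-system-x86_64 -enable-kvm -m 4G -snapshot -drive file=disk.img,format=raw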






  • Personally I don’t care so much about the things that Linux does better, but rather about the abusive things it doesn’t do. No ads, surveillance, forced updates etc. And it’s not that Linux just happens to not do that stuff. It’s that the decentralized nature of free software acts as a preventative measure against those malicious practices. On the other side, your best interests always conflict with those of a multi-billion-dollar company, practically guaranteeing that the software doesn’t behave the way you want. So Windows is as unlikely to become better in this regard as Linux is to become worse.

    Also the ability to build things from the ground up. If you want to customize Windows, you’re always trying to replace or override or remove stuff. Good luck figuring out whether you’ve left something running in the background, adding overhead at best and conflicting with what you actually want to use at worst. This isn’t just some hypothetical: for example, I’ve had Windows make an HDD-era PC completely unusable because a background telemetry process would keep the C: drive at 100% utilization. It was a nightmarish experience to debug and fix because even opening the Task Manager wouldn’t work most of the time.

    Having gotten the important stuff out of the way, I will add that even for things you technically can do on both platforms, it is worth considering whether they are equally likely to foster thriving communities. Sure, I can replace the Windows shell, but am I really given options of the same quality and longevity as the most popular Linux shells? When a proprietary Windows component takes an ugly turn, is it as likely that someone will develop an alternative if it means they have to build it from the ground up, compared to the Linux world where you would start by forking an existing project, e.g. how people who didn’t like GNOME 3 forked GNOME 2? The situation is nuanced, and answers like “there exists a way to do X on Y” or “it is technically possible for someone to solve this” don’t fully cover it.


  • I think you misunderstood what I was saying. I’m not saying Wayland magically makes everything secure. I’m saying that Wayland allows secure solutions. Let’s put it simply:

    • Wayland “ignores” all the issues if that’s what you want to call it
    • Xorg breaks attempts to solve these issues, which is much worse than “ignoring” them

    You mentioned apps having full access to my home directory. Apps don’t have access to my home directory if I run them in a sandbox. But using a sandbox to protect my SSH keys or Firefox session cookies is pointless if the sandboxed app can just grab my login details as I type them and do the same or more harm than it could with the contents of my home directory. Using a sandbox is only beneficial on Wayland. You could potentially use nested Xorg sessions for everything, but that’s more overhead, introduces all the same problems as Wayland (screen capture/global shortcuts/etc.), and has none of the Wayland benefits.
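
    To make it concrete: something like the following Flatpak setup takes the home directory away from an app, but on Xorg the X11 socket it still needs exposes every other window and all global input, so the sandbox gains you little. The app ID is just a placeholder.

        # deny the app access to the home directory
        flatpak override --user --nofilesystem=home org.example.App

        # ...but on Xorg it still needs the shared X socket to draw anything at all
        flatpak override --user --socket=x11 org.example.App
        # and any client of that X server can observe global input and other clients' windows,
        # so the keystrokes you type elsewhere are still within reach.
        # Under Wayland the same app only ever sees its own surfaces and its own input.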

    And given how garbage the modern state of sandboxing still is

    I’m not talking about “the current state” or any particular tool. One protocol supports sandboxing cleanly and the other doesn’t. You might have noticed that display server protocols are hard to replace, so they should support what we want, not only what we have right now. If you don’t see a difference between not having a good way to do something right now and never allowing for the possibility of doing it in a good way, let’s just end the discussion here. If those are the same to you, no argument or explanation matters.

    If you actual want to solve this issue you have to provide secure means to do all those task.

    Yes, that’s exactly the point. Proposed protocols for these features allow a secure implementation to be configured. You could, for example, have a DE that asks you for every single permission an app requests. You don’t automatically get a secure implementation, but it is possible. There might be issues with the Wayland protocol development process, or a lack of interest/manpower among DE/WM developers, or many other things that lead to subpar or missing solutions to current issues, but these are not inherent, unsolvable issues of the protocol.