I’m the administrator of kbin.life, a general purpose/tech orientated kbin instance.

  • 0 Posts
  • 55 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • I think people’s experience with PLE (powerline Ethernet) will always be subjective. In the old flat we were in, where I needed it, it would drop the connection all the time; it was unusable.

    But I’ve had them run totally fine in other places. Noisy power supplies that aren’t even in your place can cause problems. Any kind of impulse noise (bad contacts on an old style thermostat for example) and all kinds of other things can and will interfere with it.

    Wi-Fi is always a compromise too. But if wiring directly is not an option, the OP needs to choose their compromise.



  • OK, one possibility I can think of: at some point, files may have been created under a path that is now a mount point, so the mount is hiding folders that are still there on the root partition.

    You can remount just the root partition elsewhere by doing something like

    # create a temporary mount point and bind-mount the root filesystem there,
    # so anything hidden underneath other mounts becomes visible again
    mkdir /mnt/rootonly
    mount -o bind / /mnt/rootonly

    Then use du or similar to see whether the numbers more closely resemble the values seen in df. I’m not sure whether the graphical filesystem viewer you used can see files hidden this way, so it’s probably worth checking just to rule it out.

    Anyway, if you see bigger numbers in /mnt/rootonly, then check the mount points (like /mnt/rootonly/home and /mnt/rootonly/boot/efi). They should be empty; if not, those are likely files/folders that are being hidden by the mounts.
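
    A quick way to compare (just a sketch; the du flags are GNU coreutils, and the two mount-point paths are the ones mentioned above, so adjust to your actual layout):

    # what df reports for the root filesystem
    df -h /
    # apparent usage seen through the bind mount; -x stays on this one filesystem
    du -shx /mnt/rootonly
    # these should be empty or tiny; anything sizeable here is being hidden by the real mounts
    du -sh /mnt/rootonly/home /mnt/rootonly/boot/efi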

    When finished you can unmount the bound folder with

    umount /mnt/rootonly

    Just an idea that might be worth checking.




  • Well, I run an NTP stratum 1 server handling 2,800 requests a second on average (3.6Mbit/s total average traffic), a Flightradar24 reporting station, plus some other rarely used services.

    The fan only comes on during boot; I’ve never heard it in normal operation. Load average sits at 0.3-0.5, most of that from FR24. Chrony usually takes <5% of a single core.

    It’s pretty capable.
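
    As a rough sanity check on the traffic figure (back-of-envelope only: it assumes a 48-byte NTP payload plus ~28 bytes of UDP/IPv4 headers per packet, and one request plus one response per query):

    # 2800 queries/s x 2 packets x ~76 bytes x 8 bits
    echo $(( 2800 * 2 * (48 + 28) * 8 ))   # ~3.4 Mbit/s before link-layer overhead

    That lands in the same ballpark as the 3.6Mbit/s average quoted above.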


  • I would agree. It’s useful to know how all the parts of a GNU/Linux system fit together, but the maintenance can be quite heavy in terms of security updates. So I’d advise doing it as a project, but not actually making real use of it unless you want to dedicate time to it going forwards.

    For a useful compile-it-yourself experience, Gentoo handles updates and does all the work for you.



  • Well, yes and no. It depends on whether you consider the Linux kernel to be what makes it Linux or not.

    For any operating system there are the kernel components and user space components. The GUI in any operating system is going to be user space.

    They also suggest it’s a “minimalized” Linux microkernel. I kinda agree with this approach: why reinvent the wheel when you can cherry-pick parts of the existing Linux kernel to build your foundations? The huge caveat in my mind is that the scheduler of modern OSes is what they were complaining about most, and I bet the scheduler is one of the things they took from the Linux kernel.

    As for the rest of the project, I don’t think there’s enough meat in this article to say much, and the very limited free version seems a bit too limited to give a good sense of how useful it would be.

    I’ll wait until I’m told I need to port X aspect of my job to DBOS to see if it became a thing or not. :P



  • But isn’t that the point? You pay a low fee for inconvenient access to storage in the hope you never need it. If you have a drive failure you’d likely want to restore it all, in which case the bulk restore pricing isn’t terrible, and the other option is losing your data.

    I guess the question of whether this is a service for you is how often you expect a NAS (that likely has redundancy) to fail, be stolen, destroyed, etc. I would expect it to be less often than once every 5 years. If the price to store 12TB for 5 years and then restore 12TB after 5 years is less than the storage on other providers, then that’s a win, right? The bigger thing to consider is whether you’re happy to wait for the data to become available, but for a backup you want back and can wait for, it’s probably still good value. Using the 12TB example:

    Backblaze, simple cost: $6/TB x 12TB = $72/month, which over a 5-year period would be $4320. Depending on upload speed, fees for the number of operations during backup and restore might push that up a bit, but not by any noticeable amount, I think.

    For Amazon Glacier I priced up (I think correctly; their pricing is overly complicated) two modes: flexible access and deep archive. The latter is probably suitable for a NAS backup, although of course you can only really add to it, not easily remove/adjust files, so over time your total stored would likely exceed the amount you actually want to keep. Some complex “diff” techniques could probably be used here to minimise this waste.

    Deep archive
    12288 put requests @ $0.05 = $614.40
    Storage: 12288 GB at $12.17/month x 60 months = $729.91
    12288 get requests @ $0.0004 = $4.92
    Retrieval: 12288 GB @ $0.0025/GB = $30.72 (if bulk restore is possible)
    Retrieval: 12288 GB @ $0.02/GB = $245.76 (if bulk is not possible)

    Total: $1379.95 (bulk) / $1594.99 (non-bulk)

    Flexible
    12288 put requests @ $0.03 = $368.64
    Storage: 12288 GB at $44.24/month x 60 months = $2654.21
    12288 get requests @ $0.0004 = $4.92
    Retrieval: 12288 GB @ $0.01/GB = $122.88

    Total: $3150.65

    In my mind, if you just want to push large files from a high-capacity NAS somewhere they can be restored on some rainy day in the future, deep archive can work for you. I do wonder, though, if they’re storing this stuff offline on tape or something similar, how they bring back all your data at once. But that seems to me to be their problem and not the user’s.

    Do let me know if I got any of the above wrong. This is just based on the tables on the S3 pricing site.
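
    For anyone who wants to re-run the sums, here is a rough sketch using the same per-unit rates as above (the $0.00099/GB-month Deep Archive storage rate is the one implied by the $12.17/month figure, the $6/TB-month is the Backblaze figure, and one request per GB is assumed as in the table; check all of these against the current pricing pages):

    awk 'BEGIN {
      gb = 12288; months = 60
      # S3 Glacier Deep Archive with bulk restore, using the rates above
      deep = gb*0.05 + gb*0.00099*months + gb*0.0004 + gb*0.0025
      # Backblaze at $6/TB-month for comparison
      b2 = (gb/1024) * 6 * months
      printf "deep archive (bulk): $%.2f\n", deep   # ~$1380
      printf "backblaze:           $%.2f\n", b2     # $4320.00
    }'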



  • I mean, technically you could use unsigned 32-bit if you don’t need to handle dates before 1970. But yes, the best course of action now is to use 64 bits; the cost is pretty much nothing on modern systems.

    I’m just cautious of people judging software from a time with different constraints and expectations, with the current yardstick.

    I also wonder what the practical problem will be. People playing Ghost Recon in 2038 will be “retro” gaming it, and there should be an expectation of such problems. The question is whether it would prevent you loading or saving the file.
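
    For illustration, here is where the limits actually fall (using GNU date; the unsigned case is the “don’t need pre-1970 dates” option mentioned above):

    # signed 32-bit time_t tops out at 2^31 - 1 seconds after the epoch...
    date -u -d @$(( 2**31 - 1 ))   # Tue 19 Jan 2038 03:14:07 UTC
    # ...one second later it wraps to the most negative value, back before 1970
    date -u -d @$(( -(2**31) ))    # Fri 13 Dec 1901 20:45:52 UTC
    # unsigned 32-bit buys you until 2106, at the cost of pre-1970 dates
    date -u -d @$(( 2**32 - 1 ))   # Sun  7 Feb 2106 06:28:15 UTC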


  • It’s not poorly written software just because it is old. Likewise, the Y2K bug is often put down to bad programming, but at the time the software with the Y2K bug was written, memory was measured in kilobytes, and a lot of accounting and banking software was written when 64K was the norm. Oh, and I’ll tell you now: I know of at least some accounting software that is based on code written for the 8088 and has been wrapped and cross-compiled so many times it’s unrecognisable, but I know that 40-year-old code is still there.

    So two digits for the year was best practice at the time, and when the software vulnerable to the 2038 bug was written, 32-bit epoch dates were best practice.

    Now, software written today doing the same could of course be considered bad, but it’s not a good blanket statement.


  • “Fuck all of that. Linux desktop really could use a benevolent dictator that has some vision and understanding what the average user wants.”

    It already has these. They’re called Linux distros. They decide the combination of packages that make up the end-to-end experience, and they’re all aimed at different types of user.

    Why are none explicitly aimed at the average Windows user? I suspect there’s one major reason: the average Windows user is incapable of installing an operating system at all, and new PCs invariably come with Windows pre-installed. This isn’t a slight on them, by the way; it’s just that most computer users don’t want or need to know how anything works. They just want to turn it on, post some crap on Twitter/X, then watch cat videos. They don’t have an interest in learning how to install another operating system.

    Also, a distro aimed at an average Windows user would need to be locked down hard: no choice of window manager, no choice of X11/Wayland, no ability to install applications outside the distro’s carefully curated repository, plus MAYBE independently installed Flatpak/other pre-packaged things. Allowing otherwise creates a real risk of the system breaking on the next big upgrade. I don’t think most existing Linux users would want to use such a limiting distro.

    Unless Microsoft really crosses a line, to the extent that normal users actually don’t want anything to do with Windows, I cannot imagine things changing too much.



  • I think that’s the main problem. You could make a Linux distro that works like Android and other embedded setups, but it would be locked down to only allow installations from an app store, and custom hardware would likely go unsupported, with no way to get a kernel update until the distro ships one.

    That would totally alienate the current Linux userbase, who are used to taking a distro, adding their own install sources, compiling some stuff from source, and upgrading the kernel or perhaps recompiling it themselves. Sure, an upgrade might break things, but they know how to fix it.

    The two types of user are worlds apart. I think Snap/Flatpak etc. come closer to a way to get Windows-esque setups, but again, for many experienced users those sacrifice too much in favour of convenience.


  • I think Linux blows Windows out of the water as a server operating system. I’ve been using it that way for over 25 years now.

    For desktop, there are a few problems. The first is that the average user cannot install an operating system, so unless it comes pre-installed they’re going to be out of luck. The second is that I’ve not found a distro that won’t occasionally just blow itself up on an upgrade: driver issues, circular dependencies, and all manner of other things that a normal user just doesn’t know how to deal with.

    Then you get to gaming, which is getting WAY better all the time. But there’s knowing what works and what doesn’t, which drivers to use, and which distro has most of the gaming stuff already sorted for you, not to mention the Wayland + NVIDIA issues that people are also talking about here. Also, I’ve never proven it, but in FPS games it feels like there’s just a bit more latency on Linux (although I think overall most games run smoother on Linux).

    I think the desktop is still great on Linux, but for mass consumption it still has a way to go, and I do wonder whether, while Windows exists and is preinstalled on everything, it will ever be more than a niche thing. Most users don’t know there’s an alternative, and for sure would have no clue how to go about installing it.


  • In all honesty, most business laptops will have a recent TPM anyway, simply because if you give employees laptops you damn well want BitLocker on them. Where I work they’re replaced every 2 years anyway. People lose laptops; it’s just a fact of life, and you want some protection for the data on there.

    Desktops, I’m not so sure. For home users there are of course very simple tools to make customised Win 11 boot USBs that remove the fake requirements. But I’d say the majority of users still couldn’t install an operating system at all, so if Windows cannot upgrade itself, they’ll sit on unsupported Win 10 or have to buy a new PC.

    If you can install Windows, you can install the customised one, I’d wager; the skill level is about the same.