• 0 Posts
  • 36 Comments
Joined 2 years ago
Cake day: November 28th, 2022



  • I’m sure the developers are competent, but the reason I care about the design decisions is the same reason a car’s electric brakes don’t interface with its infotainment system: the interface inherently creates opportunities for out-of-spec behaviour, and even if the introduced risk is tiny, the consequence is so bad that it’s worth avoiding.

    If you have to have an airbag controlled by software (ideally the mechanism is physical, like a pull tab), it should be an isolated real-time device whose only jobs are monitoring the accelerometer and triggering the airbag. If it also has to wait to hear back from another device about whether your subscription ran out before it even starts checking, then the risk of failure has to include that other device too.

    It can be done perfectly, but it’s software so of course it has bugs.
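
    To make that concrete, here is a rough sketch of what “its only jobs” looks like. This is purely illustrative userspace Rust, not real firmware: `read_accel_g()`, `fire_airbag()` and the 60 g threshold are made-up stand-ins for the actual sensor, squib driver and calibrated deploy criteria.

    ```rust
    use std::time::{Duration, Instant};

    /// Stand-in for sampling a real accelerometer (invented for this sketch).
    fn read_accel_g() -> f32 {
        2.0
    }

    /// Stand-in for firing the squib via a dedicated output (invented for this sketch).
    fn fire_airbag() {
        println!("airbag deployed");
    }

    const DEPLOY_THRESHOLD_G: f32 = 60.0; // illustrative figure only
    const SAMPLE_PERIOD: Duration = Duration::from_millis(1);

    fn main() {
        // The entire control loop: sample, compare, trigger.
        // No network, no authentication service, nothing else to wait on or fail.
        loop {
            let start = Instant::now();
            if read_accel_g() >= DEPLOY_THRESHOLD_G {
                fire_airbag();
                break;
            }
            // Fixed-rate loop keeps the timing behaviour predictable.
            std::thread::sleep(SAMPLE_PERIOD.saturating_sub(start.elapsed()));
        }
    }
    ```

    The moment that loop also has to ask another box whether a subscription is valid, every failure mode of that other box becomes a failure mode of the airbag.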



  • Yes, but also from an implementation perspective: if I’m making code that might kill somebody if it fails, I want it to be as deterministic and simple as possible. Under no circumstances do I want it:

    1. checking an external authentication service.
    2. connected to the internet in any way.
    3. split across multiple services that interact over an API. Hell, even FFIs would be in the “only if I have to” bucket.




  • I build Linux routers for my day job. Some advice:

    • your firewall should be an appliance first and foremost; you apply appropriate settings and then, other than periodic updates, you leave it TF alone (a rough ruleset sketch follows this list). If your firewall is on a machine that you regularly modify, you will one day change your firewall settings unknowingly. Put all your other devices behind said firewall appliance. A physical device is best, since correctly forwarding everything to your firewall otherwise falls into the “will one day unknowingly modify” category.

    • use open source firewall & routing software such as OpenWrt and pfSense. Any commercial router that actually keeps up to date and patches security vulnerabilities is one you cannot afford.
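
    The kind of ruleset I mean is short, static, and default-deny. A minimal sketch in nftables syntax; the interface names `lan0`/`wan0` and the allowed ports are assumptions for illustration, not a recommendation for any particular network:

    ```
    table inet filter {
        chain input {
            type filter hook input priority 0; policy drop;
            ct state established,related accept
            iif "lo" accept
            iif "lan0" tcp dport { 22, 443 } accept    # management from the LAN side only
        }
        chain forward {
            type filter hook forward priority 0; policy drop;
            ct state established,related accept
            iif "lan0" oif "wan0" accept               # LAN out to the internet; nothing unsolicited back in
        }
    }
    ```

    Once something like that is loaded on a dedicated box, the only routine touch it needs is updates.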


  • It opens the door to more manufacturers since there are no ISA licence fees. While the AMD/Intel duopoly is being fairly competitive at the moment, it really doesn’t have to be. Just think back to how bad it was from the late 2000s to 2015.

    I imagine a plethora of core designers, SoC vendors and platform creators filling their own niches: lowest cost, lowest power, HW accelerators, highest core count, etc.

    I don’t see the raw performance of AMD/Intel being surpassed soon, just because of the sheer total R&D years each has, but that doesn’t mean there aren’t other areas better suited to a different architectural approach.





  • NT is not the majority of Windows code though; for Windows to be multi-architecture, all of Windows needs to work on the new architecture: NT, drivers, and userspace.

    For Linux, if an existing userspace application doesn’t work on aarch64, somebody somewhere will build a port. For Windows, so much of the stack is proprietary that Microsoft is the only one able to build that port.

    Not because “Windows bad”, just a consequence of such a locked-down system, which doesn’t have anything open source to inherit.


  • Memory safety is likely to prevent a lot of bugs. Not necessarily in the kernel proper; I honestly don’t see it being used widely there for a while.

    Third-party drivers are where I see the largest benefit: there are plenty of manufacturers who will build a shitty driver for their device, say it targets Linux 4.19, and then never support or update it. I have seen quite a few third-party drivers through my work and I am not impressed: security flaws, memory leaks, disabling of sensible warnings. Having future drivers written in Rust would force these companies to build a working driver that doesn’t require months of trawling through to fix issues.

    Now that I think about it, in 10 years I’ll probably be complaining about massive unsafe blocks everywhere…
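
    Even so, contained unsafe is a big step up from C. A toy illustration, using a made-up register block rather than real kernel code: the unsafe pointer access stays in one small, auditable spot behind a safe, bounds-checked API.

    ```rust
    /// Pretend register block for a hypothetical device (names invented for the sketch).
    struct DeviceRegisters {
        buf: [u32; 4],
    }

    struct Driver {
        regs: DeviceRegisters,
    }

    impl Driver {
        /// Safe API: callers cannot read out of range, so the rest of the
        /// driver never has to reason about raw pointers at all.
        fn read_reg(&self, idx: usize) -> Option<u32> {
            if idx >= self.regs.buf.len() {
                return None; // a sloppy C driver would happily read past the end here
            }
            // The unsafe part is tiny and local; the invariant (idx in range)
            // is established directly above it.
            Some(unsafe { *self.regs.buf.as_ptr().add(idx) })
        }
    }

    fn main() {
        let drv = Driver { regs: DeviceRegisters { buf: [0xdead, 0xbeef, 0, 1] } };
        assert_eq!(drv.read_reg(1), Some(0xbeef));
        assert_eq!(drv.read_reg(7), None); // out-of-range read is caught, not undefined behaviour
    }
    ```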



  • I started using Linux maybe 5 years ago, just before DXVK and Proton became a thing. The difference between now and then for gaming is night and day.

    If it’s on Steam, there is a pretty good chance it’ll work. If it’s not on Steam, it still might work through Lutris.

    There are some holdouts like Riot Games, but I haven’t owned Windows in almost two years.


  • Interdependency is a large part of the issue: if you have an AUR package that breaks but has no other packages depending on it, you have a minor problem. If you have an AUR package that breaks and many packages depend on it, you have a major problem. Keep your libraries as unchanging as you can: out of the AUR if possible, and definitely not -git packages.

    An AUR PKGBUILD can also perform arbitrary actions to install the package. The security implication is obvious, but many also miss that, yes, as you install more AUR packages your system will diverge from the expected Arch state. Normally this is minor and fine, but it could trip you up here and there.
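
    A quick way to gauge your exposure, assuming pactree from pacman-contrib is installed (the package name below is just a placeholder):

    ```
    # List everything installed from outside the official repos (AUR packages and manual installs).
    pacman -Qm

    # For one of those packages, show what on the system depends on it.
    # A long reverse-dependency tree under an AUR library is the "major problem" case above.
    pactree -r some-aur-library
    ```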