
  • I was already a dev in a small IT consultancy by the end of the decade, and having ended up as “one of the guys you go to for web-based interfaces”, I did my bit pushing Linux as a solution, though I still had to use IIS on one or two projects (I even had to use Oracle Web Application Server once), mainly because clients trusted Microsoft (basically any large software vendor, such as Microsoft, IBM or Oracle) but did not yet trust Linux.

    That’s why I noticed the difference that Red Hat, with their Enterprise version and support plans, made to the acceptability of Linux.



  • CRT monitors internally use an electron gun which just fires electrons at the phosphor screen (from the back, obviously, and the whole assembly is one big vacuum chamber with the phosphor screen at the front and the electron gun at the back), using magnets to deflect the electron beam left/right and up/down.

    In practice the way it was used was to point it at the start of a line, where it would start moving towards the other side; after a few clock ticks it would start sending the line data, then, after as many clock ticks as there were points on the line, stop for a few ticks and sweep back to the start of the next line (and there was a wait period for this too).

    Back in those days, when configuring X you actually configured all of this in a text file, at a low level (literally the clock frequency, total lines, total points per line, empty lines before sending data - the top of the screen - and after sending data, as well as the off ticks from the start of each line before and after sending data), for each resolution you wanted to have (see the example modeline below).

    All this let you define your own resolutions and even shift the whole image horizontally or vertically to your heart’s content (well, there were limitations on things like the minimum and maximum supported clock frequency of the monitor and such). All that freedom also meant that you could exceed the capabilities of the monitor and even break it.
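
    To make that concrete (just an illustrative example, not from any particular setup of mine): each resolution got a “Modeline” in the X config file, and a fairly standard 1024x768 @ 60Hz one looks roughly like this:

        #                     clock   hdisp hsyncstart hsyncend htotal   vdisp vsyncstart vsyncend vtotal
        Modeline "1024x768"   65.00   1024  1048       1184     1344     768   771        777      806     -hsync -vsync

    The clock is in MHz, the horizontal numbers are the visible points per line plus where the “off ticks” and sync pulse fall, and the vertical ones are the visible lines plus the blank lines at the top and bottom; pick values outside what the monitor supports and, as said above, you could genuinely damage it.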


  • In the early 90s all the “cool kids” (for a techie definition of “cool”, i.e. hackers) at my University (a Technical one in Portugal with all the best STEM degrees in the country) used Linux - it was actually a common thing for people to install it on the PCs of our shared computer room.

    Later in that decade it was already normal for it to be used in professional environments for anything serving web pages (static or dynamic) along with Apache: Windows + IIS already had a smaller share of that market than Linux + Apache.

    If I remember correctly, in the late 90s Red Hat started providing their Enterprise version with things like support contracts - so beloved by corporates who wanted guarantees that if their systems broke the supplier would fix them - which did a lot to boost Linux use on the backend in non-tech but IT-heavy industries.

    I would say this was the start of the trend that would ultimately result in Linux dominating on the server-side.




  • Whilst a 100W delta seems unlikely, a 50W delta seems realistic: the kind of hardware you have in a NAS will use maybe 5W (about the same as a Raspberry Pi, possibly less) whilst the typical desktop PC uses significantly more even outside graphics mode (part of the reason to use Linux in text mode only is exactly to try and save power there). It mainly depends on what the desktop was used for before: a “gaming PC” with a dedicated graphics card from an old enough generation (i.e. with HW from before GPU manufacturers started competing on power usage) will use significantly more power than integrated graphics even when idle.

    That said, making it a “home server” as you suggest makes a lot of sense - if that thing is an “All In One” server (media server, NAS, print server, torrent download server and so on) loaded with software of your choice (and hence stuff that respects your privacy and doesn’t shove Ads in your face) it’s probably a superior solution to getting those things as separate standalone devices, especially in the current era of enshittification.


  • A NAS is basically some software running on a computer, so you can use a desktop as that computer, ideally with a light operating system (for example, Linux in text only mode).

    HOWEVER: desktops are designed for far higher computational loads than needed by a NAS, plus things like graphical user interfaces and direct connection of user peripherals such as mice, so even when idle they consume a lot more power than the kind of hardware used in a typical NAS.

    Also, the hardware in a good NAS will have things like extra higher-speed connectors for HDDs/SSDs (such as SATA), rather than you having to use slower stuff like USB.

    So keep in mind that a desktop as a NAS will consume significantly more power than a dedicated NAS (as the latter will probably be running on something like an ARM processor and have a power supply dimensioned for a couple of HDDs, not for a dedicated graphics card like a desktop has) and probably won’t fit as many disks.

    If you’re ok with most disks being accessed a bit more slowly and USB3 works for you (and, for example, if your NAS is on 100 Mbit Ethernet, it’s the network that’s the slowest thing, not USB3), then it’s usually better to use an old notebook rather than a desktop, because notebooks were designed to run off batteries and hence consume significantly less power.

    Frankly, I would advise against using an old desktop as a NAS, mainly because in a year or two of continued use you’ll have paid enough in extra electricity costs, versus a dedicated NAS, to cover a simple but decent dedicated NAS (see the back-of-envelope sketch below).
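
    Just to show where “a year or two” comes from (a rough sketch with assumed numbers: 50W of extra idle draw, around €0.25/kWh, and roughly €150 for a basic dedicated NAS):

        # Back-of-envelope payback estimate (all numbers are assumptions)
        extra_watts = 50                    # assumed extra idle draw of the desktop vs a small NAS
        price_per_kwh = 0.25                # assumed electricity price in EUR
        nas_cost = 150.0                    # assumed price of a simple dedicated NAS in EUR

        extra_kwh_per_year = extra_watts * 24 * 365 / 1000        # ~438 kWh
        extra_cost_per_year = extra_kwh_per_year * price_per_kwh  # ~110 EUR
        payback_years = nas_cost / extra_cost_per_year            # ~1.4 years

        print(f"~{extra_cost_per_year:.0f} EUR/year extra, payback in ~{payback_years:.1f} years")

    With cheaper electricity or a desktop that idles lower the payback stretches out, but the order of magnitude is the point.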



  • At the level of microcontrollers there is an entire range with the necessary radio HW and enough computing power and memory to have WiFi and a TCP stack, but not enough to fit Linux (stuff like the ESP8266, which has only 80KB of user data memory).

    Those things essentially run just the one application on top of some manufacturer-provided libraries (no OS, though if you really want one there’s an RTOS available), and that application can be something that receives commands via the network and activates some hardware via GPIO ports (see the sketch at the end of this comment).

    For example, smart LED lamps that can be controlled from a smartphone are made with this kind of HW.

    Mind you, recently somebody managed to get Linux to run on a top-range model of the most recent of these things (an ESP32-S3).

    So I wouldn’t presume that a syringe driver can even be made to run Linux, given that its functionality is simple enough to be implemented by a simple program that fits in that kind of microcontroller - which is likely what is actually inside it.
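
    Just to illustrate the kind of firmware I mean (a minimal sketch, not any real product’s code - the pin, port and credentials are made up), in MicroPython on an ESP8266 it boils down to: join the WiFi, listen on a socket, flip a GPIO pin when told to:

        import network, socket
        from machine import Pin

        relay = Pin(2, Pin.OUT)                 # GPIO2 driving the lamp/relay (pin choice is an assumption)

        wlan = network.WLAN(network.STA_IF)     # station mode: join the home WiFi
        wlan.active(True)
        wlan.connect("my-ssid", "my-password")  # placeholder credentials
        while not wlan.isconnected():
            pass

        srv = socket.socket()
        srv.bind(("0.0.0.0", 8080))             # arbitrary port for this sketch
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            cmd = conn.recv(16)                 # expects b"on" or b"off" from the phone app
            relay.value(1 if cmd.strip() == b"on" else 0)
            conn.close()

    The real products wrap this in the manufacturer’s SDK and some pairing/cloud protocol, but that’s the essence of it.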


  • That’s exactly my experience.

    I’ve been doing Linux since the early days, when Slackware fitted on a “few” floppy disks and you had to configure the low-level CRT display timings in a text file to get X Windows to work, and throughout my career I have used Linux abundantly, at some point even designing distributed high-performance software systems on top of it for investment banks.

    Nowadays I just don’t have the patience to solve stupid problems that are only there because some moron decided that THEY, after two bloody decades of it working fine, are the ones who truly have the perfect way of doing something that’s been done well enough for years (the kind of Dunning-Kruger-level software design expertise which is “oh so common” at the mid-point of one’s software development career and regularly produces, amongst other things, “yet another programming language where all the old lessons about the efficiency of the whole software development cycle and about maintainability have been forgotten”), and decided to reinvent it, so now instead of one well integrated, well documented solution there are these pieces of masturbatory “brilliance” barely fitting together, with documentation that doesn’t even match them properly.

    Just recently I got a ton of headaches merely getting language and keyboard configuration working properly in Armbian, because there was zero explanation associated with the choices and their implications; thousands of combinations of keyboard configurations (99.99% of which are not used or don’t even exist) were ordered alphabetically by almost-arbitrary names across 2 tables, with no emphasis on “commonly used” (clearly every user is supposed to be an expert on the ins and outs of keyboard layouts); and there were multiple tools, most of which didn’t work (some failing immediately due to missing directories, others failing after a couple of minutes, others only affecting X), while whatever documentation was there (online and offline) didn’t actually match.

    (It’s funny how the “genius” never seems to extend to creating good documentation or even proper integration testing.)

    Don’t get me wrong: I see Software Architecture-level rookie mistakes all the time in the design of software systems, frameworks and libraries in the for-profit sector (“Hi Google!!!”), but they seem to actually be more frequent in Open Source, because the barrier to entry for reinventing the wheel (again!) is often just convincing a handful of people with an equally low level of expertise.

    (anyway, rant over)