I wouldn’t be so sure about the lifetime - spinning up and spinning down put far more stress on the drive components than simply spinning at a constant rate.
I would say the vast majority of people (across all generations) either don’t know, or don’t really understand how extensive it (the monitoring) is and what the consequences of that are.
+1 for Debian - if you just want a stable, reliable system and don’t care about the latest and greatest features, there is no better choice
Downside: it’s entirely manual and not scalable whatsoever.
no that just sounds like a bug
Virtually all modern x86 chips work that way
they’re still pretty RISC, using fixed-width instructions and fairly simple encoding - certainly a hell of a lot simpler than the mess that is x86-64
Not to be confused with the longest paper ever published (see page 286)
Michelangelo’s David is a well-known marble statue which was carved using a chisel.
Yeah, although the neat part is that you can configure how much replication it uses on a per-file basis: for example, you can set your personal photos to be replicated three times, but have a tmp directory with no replication at all on the same filesystem.
What exactly are you referring to? It seems to me to be pretty competitive with both ZFS and btrfs, in terms of supported features. It also has a lot of unique stuff, like being able to set drives/redundancy level/parity level/cache policy (among other things) per-directory or per-file, which I don’t think any of the other mainstream CoW filesystems can do.
The recommendation for ECC memory is simply because you can’t be totally sure data won’t go corrupt with only the safety measures of a checksummed CoW filesystem; if data can silently be corrupted in memory, it could still go bad before being written out to disk, or while sitting in the read cache. I wouldn’t really say that’s a downside of those filesystems - rather, it’s simply a requirement if you really care about preventing data corruption. Even without ECC memory they’re still far less susceptible to data loss than conventional filesystems.
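To make the point concrete, here’s a toy sketch (not any real filesystem’s code) of why filesystem-level checksums can’t catch corruption that happens in RAM before the write:

```python
import hashlib

def write_with_checksum(data: bytes):
    # A checksumming CoW filesystem computes the checksum at write time.
    return data, hashlib.sha256(data).digest()

def verify(data: bytes, checksum: bytes) -> bool:
    return hashlib.sha256(data).digest() == checksum

original = b"precious family photo"

# Corruption *after* checksumming (e.g. on disk): a scrub catches it.
stored, csum = write_with_checksum(original)
assert not verify(b"X" + stored[1:], csum)

# Bit flip in RAM *before* the write: the checksum is computed over the
# already-corrupt data, so verification "succeeds" and the filesystem
# can never notice. Only ECC memory prevents this case.
flipped = b"X" + original[1:]
stored, csum = write_with_checksum(flipped)
assert verify(stored, csum)   # passes, yet the data is wrong
assert stored != original
```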
I considered a KVM or something similar, but I still need access to the host machine in parallel (ideally side-by-side so I can step through the code running in the guest from a debugger in my dev environment on the host). I’ve already got a multi-monitor setup, so dedicating one of them to a VM while testing stuff isn’t too big a deal - I just have to keep track of whether or not my hands are on the separate keyboard+mouse for the guest :)
Functionally it’s pretty solid (I use it everywhere, from portable drives to my NAS and have yet to have any breaking issues), but I’ve seen a number of complaints from devs over the years of how hopelessly convoluted and messy the code is.
I do this for testing graphics code on different OS/GPU combos - I have an AMD and Nvidia GPU (hoping to add an Intel one eventually) which can each be passed through to Windows or Linux VMs as needed. It works like a charm, with the only minor issue being that I have to use separate monitors for each because I can’t seem to figure out how to get the GPU output to be forwarded to the virt-manager virtual console window.
That is very slow - unless the drive is connected over USB, or failing, or something like that, a drive of that capacity should easily be able to handle much faster sequential writes. How is the drive connected, and is it SMR?
What exactly happens when you issue a TRIM depends on the SSD and how much contiguous data was trimmed. Some drives guarantee TRIM-to-zero, but there’s still no guarantee that the data is actually erased (it could just be marked as inaccessible, to be erased later). In general you should think of it more as a hint to the drive that these bytes are no longer needed, and that the drive firmware can do whatever it likes with that information to improve its wear-levelling ability.
Filling an SSD with random data isn’t even guaranteed to securely erase everything, as most SSDs are overprovisioned (they have more flash cells than the drive’s reported capacity, used for wear levelling and the like). Even if you overwrite the whole drive with random bytes, there’s a pretty good chance that a number of sectors won’t be overwritten, since the random bytes can end up going to previously unused sectors instead.
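A toy simulation makes this concrete (a deliberately simplified flash-translation-layer model, nothing like real firmware; the 100/110 cell counts are made up for illustration):

```python
# Toy FTL: 100 logical sectors backed by 110 physical cells (~10%
# overprovisioning), with writes remapped to fresh cells for wear levelling.
LOGICAL, PHYSICAL = 100, 110

cells = ["SECRET"] * PHYSICAL          # drive previously held sensitive data
mapping = list(range(LOGICAL))         # logical sector -> physical cell
free = list(range(LOGICAL, PHYSICAL))  # spare (overprovisioned) cells

def overwrite(lsn, data):
    # The write lands in a fresh cell; the old cell is only *queued* for
    # erasure - its contents are untouched for now.
    new = free.pop(0)
    free.append(mapping[lsn])
    mapping[lsn] = new
    cells[new] = data

# "Wipe" the drive by overwriting every logical sector:
for lsn in range(LOGICAL):
    overwrite(lsn, "RANDOM")

leftover = sum(1 for c in cells if c == "SECRET")
print(leftover)  # 10 physical cells still hold the old data
```

Every logical sector reads back as wiped, yet some physical cells still contain the old bytes - exactly the gap that forensic tools (or the drive’s own secure-erase command) care about.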
Nowadays, if you want to wipe a drive (be it solid state or spinning rust), you should probably be using secure erase - it’s likely to be much faster than simply overwriting everything, and it’s actually guaranteed to make all the data irrecoverable.
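For reference, a secure erase typically looks something like this (destructive, and the `/dev/sdX` / `/dev/nvmeXn1` device nodes are placeholders - double-check yours before running anything):

```shell
# SATA drives, via the ATA Security feature set: set a temporary
# password, then issue the erase.
sudo hdparm --user-master u --security-set-pass p /dev/sdX
sudo hdparm --user-master u --security-erase p /dev/sdX

# NVMe drives: format with a secure-erase setting
# (--ses=1 = user-data erase, --ses=2 = cryptographic erase).
sudo nvme format /dev/nvmeXn1 --ses=1
```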
I’ve taken to running apt inside eatmydata, which makes it run way faster since it doesn’t call fsync constantly. Granted, you could end up in an invalid state if the power goes out, but that’s what UPSs, laptop batteries and backups are for :)
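For anyone curious how that works: eatmydata (from the libeatmydata package) LD_PRELOADs a shim library that turns fsync()/fdatasync() into no-ops for the wrapped command, so dpkg stops flushing to disk after every file it unpacks:

```shell
# Wrap any command; sync calls inside it become no-ops.
sudo eatmydata apt-get dist-upgrade
```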
people always complain about nvidia drivers on linux, but personally my experience has never required anything more than sudo apt install nvidia-driver
I can assure you that before I set up Cloudflare, I was getting hit by SYN floods filling up the entire bandwidth of my home DSL2 connection multiple times a week.