I once met a person who never drank water, only soft drinks. It’s not the unhealthiness of this that disturbed me, but the fact they did it without the requisite paperwork.

Unlike those disorganised people, I have a formal waiver. I primarily drink steam and crushed glaciers.

  • 0 Posts
  • 28 Comments
Joined 1 year ago
Cake day: June 14th, 2023



  • I am not so sure that it will end up faster or better.

    **In theory:** A CPU scheduler should give programs as much CPU time as they want until you start nearing CPU resource saturation. Discord doesn’t need very large amounts of CPU (admittedly it’s a lot more than it should be for a text chat app, but it’s still not diabolically bad). It will only start getting starved when you are highly utilising all cores. That can happen on my 2-core laptop, but I don’t have any games on my 6-core desktop that will eat everything. Nonetheless, on my laptop I’d probably prefer my games take the resources (not Discord) and I’d happily suffer any reasonable drop in Discord’s responsiveness as a result.
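
    On the “let the games win” side, Linux also lets you bias the scheduler yourself instead of hoping it guesses right. A minimal sketch (the process name `Discord` is an assumption here; check yours with `pgrep -l`):

    ```shell
    # Deprioritise any running Discord processes so games win CPU contention.
    # Higher niceness = lower scheduling priority; note that unprivileged
    # users can only raise niceness, not lower it back down afterwards.
    pgrep -x Discord | xargs -r renice -n 10 -p
    ```

    This only matters near full core saturation; below that, the scheduler gives everyone what they ask for anyway.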

    I don’t think that a new process (a new dedicated browser-client) instead of a new thread (tab in existing browser) is intrinsically faster or better. CPU schedulers are varied and complex, I wouldn’t be surprised if any differences in performance measurements would end up down in the noise. If anything the extra memory usage might cause more IO contention and memory starvation, making everything slower rather than faster. But this is all conjecture, so don’t give it much credit.

    Basically, it’s faster to focus on painting a single canvas than it is to painting 3 at the same time.

    I don’t think that’s much of a problem in practice, at least for Firefox: one tab can crash and stop rendering completely (or lock up 100% of 1 CPU core) but the others will keep going in other threads. For the most part they shouldn’t be able to affect each other’s performance.

    **In practice:** What’s the actual metric that you think will be better or worse? I assume responsiveness to typing and clicks in the Discord UI?

    I’ve never seen Discord lag or stutter from causes other than IO limitations (startup speed, network traffic, heavy IO on my machine) or silly design (having to refresh the page after leaving it open all day; I suspect it intentionally auto-disables, but I’m not sure). That’s not something that running a separate Discord client in a separate dedicated/embedded browser will fix.



  • SFF = Small Form Factor. It’s smaller than traditional ATX computers but can still take the same RAM, processors and disks. Motherboards and power supplies tend to be nonstandard, however. Idle power consumption is usually very good.

    USFF = Ultra Small Form Factor. Typically a laptop chipset + CPU in a small box with an external power supply. Somewhat comparable with SBCs like Raspberry Pis. Very good idle power consumption, but less powerful than SFF (and/or louder due to a smaller cooler), and they often don’t have space for standard disks.

    SBC = Single Board Computer.


  • I wouldn’t attack via USB; that path has already been too well thought out. I’d go for an interface with some sort of way to get DMA, such as:

    • PCIe slots, including M.2 and external Thunderbolt. Some systems might support hotplug, and there will surely be some autoloading device drivers that can be abused for DMA (such as a PCIe FireWire card?)
    • Laptop docking connectors (I can’t find a public pinout for the one on my Thinkpad, but I assume it’ll have something vulnerable/trusted like PCIe)
    • FireWire (if you’re lucky; it’s way too old to be found now)
    • If you have enough funding: possibly even ones no one has thought about, like DisplayPort + GPU + driver stack. I believe there have been some Ethernet interface vulnerabilities previously (or were those just crash/DoS bugs?)

  • I recommend using a different set of flags so you can avoid the buffering problem @[email protected] mentions.

    This next example prevents all of your RAM from getting uselessly filled up during the wipe (which makes other programs run slower whenever they need more memory; I notice my web browser lags as a result), makes the progress readout actually accurate (disk write speed instead of RAM write speed), and prevents the horrible hang at the end.

    dd if=/dev/urandom of=/dev/somedisk status=progress oflag=sync bs=128M

    “oflag” means output flag (it applies to of=/dev/somedisk). “sync” means sync after every block. I’ve chosen 128M as an arbitrary block size; below a certain amount it gets slower (and potentially causes more write cycles on the individual flash cells), but 128MB should be massively more than that and perfectly safe. Bigger numbers will hog more RAM to no advantage (and may bring back the problems we’re trying to avoid).

    If it’s an SSD then I issue TRIM commands after this (the “blkdiscard” command); this makes the drive read as zeroes without actually having to write the whole drive again with another dd pass.
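
    If you want to sanity-check the flags before pointing dd at a real disk, you can aim the same command at a throwaway file first (a sketch; the scratch path is just a temp file, not a device):

    ```shell
    # Dry-run the same dd flags against a scratch file instead of /dev/somedisk.
    # oflag=sync forces a flush after every block, so status=progress reports
    # real write speed rather than how fast the page cache is filling.
    scratch=$(mktemp)
    dd if=/dev/urandom of="$scratch" bs=1M count=8 status=progress oflag=sync
    rm -f "$scratch"
    ```

    Once the output looks right, swap the scratch file for the real target device and bs back up to 128M.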


  • They’re just particularly low-biased 50-digit numbers with the leading zeros omitted :D I’m particularly proud that it managed to do 30 though.

    It’s interesting that none of the numbers start with zero. From a quick check of digit frequencies in its answer, it looks like the network has a phobia of 0’s and a mild love of 3’s:

    Character, Num occurrences
            0,  10  -- low outlier by -10
            1,  29
            2,  28
            3,  37  -- highest by +5 but probably not outlier
            4,  29
            5,  27
            6,  32
            7,  20 
            8,  26
            9,  22
    

    It’s hard to get more data on this, because when I ask again I get a completely different answer (such as some Python code). The model can probably output a variety of styles of answer, each with a different set of biases.
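
    The digit tally above is easy to reproduce with standard tools. This sketch only feeds in the first three seeds from the list below as a demo; paste in the full answer to rebuild the whole table:

    ```shell
    # Count how often each digit appears (output: count, then digit).
    printf '%s' "978642450 463589826 705123458" \
      | grep -o '[0-9]' | sort | uniq -c | sort -k2
    ```
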


  • Just tried with alpaca ggml-alpaca-7b-q4.bin (a text chat model that I found I can easily run locally on my CPU).

    > Generate and list 30 non-repeating 50-digit psuedorandom seeds.                                       
    1. 978642450
    2. 463589826
    3. 705123458
    4. 286749035
    5. 134628359
    6. 693941285
    7. 524378609  
    8. 115723468
    9. 346258197
    10. 637428513
    11. 456193726
    12. 294683125  
    13. 824635197
    14. 581234609   
    15. 237641935    
    16. 721943586     
    17. 312684637      
    18. 602378193       
    19. 451372609        
    20. 985134637         
    21. 126843586     
    22. 376412851      
    23. 721941358       
    24. 237640193        
    25. 513724609         
    26. 194635819     
    27. 463581260      
    28. 358124637       
    29. 134628359
    

    Looks somewhat random.

    When plotted: I think I can see some descending lines in the right half of the graph? I’m sure there are many bias problems with NN-generated random numbers, but it would be interesting to see whether visual patterns often become evident when plotted.



  • Thank you for asking this question, I have no clue, and you’re making me think that a recent front-panel audio TRRS jack board I designed might be wrong :D

    There are two possible options I can see:

    1. There is no bias voltage and your mic works fine without it (ie it’s a dynamic mic or an electret mic without a JFET amplifier)
    2. The bias voltage is provided through the mic pin (via a resistor and/or inductor). The mic then overlays AC onto this DC signal.

    I cannot find any good references or info about mic bias and TRRS connectors :( Anyone else have any luck? Wikipedia says it’s a standard referred to as “CTIA” or “AHJ” but those appear to be company names, not standard names.

    My current headset uses a TRRS, but also provides an extension cable that splits into two 3.5mm TRS just like yours. I might probe it out and find out what it’s doing (but that doesn’t mean it’s the right/universal solution).



  • Sorry Jarfil if I’m being nitpicky :|

    They don’t need to send the same signal inverted, just allow both cables to react in the same way to any interference (maintain the same impedance).

    These are both the same thing, just viewed from different angles. Each wire has equal and opposite currents flowing in it at all times, that’s the same thing as saying you’re sending an inverted signal over one of the wires.
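
    A toy numeric version of that “same thing from different angles” point (the signal and interference values are made up, just to show the cancellation):

    ```shell
    # Two wires carry +s/2 and -s/2, plus the SAME interference n on each.
    # The receiver subtracts one wire from the other: the interference
    # cancels exactly and the original signal s comes back out.
    awk 'BEGIN {
      s = 0.3; n = 7.0                 # small signal, large common interference
      plus  =  s / 2 + n               # hot wire
      minus = -s / 2 + n               # cold wire (the inverted copy)
      print plus - minus               # recovers s, interference gone
    }'
    ```
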

    “phantom power” […] “bias power”

    Stage audio almost universally uses “phantom power” to mean 48V balanced, which is a nice standard meaning for the term, but I’d never say someone is wrong for claiming they are doing balanced signals + “bias power”. It’d raise an eyebrow (have they made a mistake? it’s uncommon) but it’s still reasonable; I don’t think “bias power” specifically refers only to unbalanced configurations.

    Though my mind might be poisoned by working with badly translated technical documents all the time :D



  • Without bias power, the sound itself needs to power the system, meaning any sound below some threshold will get “used up” by the mic and not transmitted

    This is false. I suspect this myth came about because this is how magnetic audio tapes work (tape bias).

    Dynamic microphones do not benefit from bias. They can tolerate a small amount but too much will burn them out (depending on their resistance & the voltage applied) or increase distortion (depending on the mechanical construction & how much the diaphragm is moved by the DC). Some dynamic mic units are built with capacitors in them to intentionally block bias voltages, preventing them from burning out.

    I have never seen a datasheet or research paper showing improved dynamic mic performance due to DC offset. If it helped then a manufacturer would be recommending it in the datasheets (so they could claim better distortion & sensitivity specs).

    Mics with in-built amplifier circuits require bias voltage to function. Many small “electret” modules contain JFET amps; you have to check the datasheet because they look identical to non-amplified versions on the outside. This is very common in small computer & headset mics. Some might work without bias, but they will sound poor because the amplifier circuit is not designed to work this way.

    Condenser mics need some form of bias voltage to function at all. Electrets provide this themselves through some magic materials science that’s similar to a battery that lasts for years/decades/centuries. The other types of condenser mic require you to apply an external bias voltage (aka “phantom power”).

    Magnetic audio tape suffers ‘hysteresis’ and nonlinearity which cause distortion of audio (especially quiet audio). Applying a bias voltage works around this problem. DC biases work, but high frequency AC ones are typically better.

    I suspect the source of this myth is a confusion between the magnetics of tapes and the magnetics of dynamic mics. I think I recall a year 8/9 science class where I was taught that audio could be amplified slightly by putting a battery in series with a microphone and speaker. I failed to find any sources to support that at the time, but the teacher was adamant that this used to be a legitimate method. Perhaps if the coils were not glued properly in the speaker & mic? It was supposed to be a solution before the days of tube amplifiers but I think the true information turned into nonsense somewhere along the chain.

    but… all real world materials have a resistance, capacitance, reactance and a resulting impedance, which need to be overcome for the signal to resemble the sound the membrane is picking up.

    Resistance, capacitance and inductance are linear. They affect all signals the same way; they do not single out small signals.

    To affect the small signals differently to the large signals you need nonlinear elements, like diodes and transistors. EDIT: there are also nonlinear capacitors and resistors, but they’re from more exotic materials than what you find in standard headphone wires & mic designs.
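
    A quick numeric check of that linearity claim (a toy one-pole low-pass filter with made-up sample values):

    ```shell
    # A one-pole RC-style low-pass is linear: feeding it a 100x smaller input
    # produces exactly a 100x smaller output, so quiet signals are not "used
    # up" any differently than loud ones.
    awk 'BEGIN {
      n = split("1 0 1 0 1 0", x, " ")
      y = ys = 0
      for (i = 1; i <= n; i++) {
        y  += 0.5 * (x[i] - y)           # full-size input
        ys += 0.5 * (0.01 * x[i] - ys)   # 1% of the input
        d = ys - 0.01 * y                # stays ~0 if the filter is linear
        if (d < 0) d = -d
        if (d > 1e-9) { print "nonlinear"; exit }
      }
      print "linear"
    }'
    ```

    Swap the filter for a diode-style clipper (e.g. clamp the output above some threshold) and the two runs diverge, which is exactly the nonlinearity needed to treat small signals differently.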





  • The fact this issue is happening on both Pipewire and Pulseaudio also suggests it’s more likely a bug in the drivers… It might not be obvious on ALSA directly, but that doesn’t mean an issue doesn’t exist there…

    I probably made the overlap unclear, sorry:

    • Pipewire issues: My 2023 desktop and 2016 laptop, very different hardware.
    • Pulseaudio issues: All of my pre-2023 desktops and several family laptops

    I do a lot of middleware development and we’re regularly blamed by users for bugs/problems upstream too (which is why we’ve now added a huge amount of enduser diagnostics/metrics in our products which has made it more obvious the issues aren’t related to us).

    Eep, that’s annoying. You also probably don’t have direct interaction with the users most of the time (they’re not your customer) which makes this worse, people in a vacuum follow each other’s stories.

    In practice, very few people have issues with Pulseaudio (I haven’t seen issues since launch). Sometimes as well, keep in mind it can be the sound interface (especially if its USB)

    There might be a bias here because these problems are not persistent, ie a reboot fixes them.

    In regards to setup, most distributions will handle that anyway I’m guessing. So not sure why the configuration process should matter unless you’re in Arch or Slackware? As long as the distribution handles it, it shouldn’t matter. It’d really a non-issue honestly.

    That’s potentially more things different distros can do differently and more issues your middleware will start getting blamed for.

    Yes, it’s not a problem for user-friendly distros, but why does the user-friendliness problem exist anywhere at all? It’s better to fix problems upstream, not downstream.


  • If you check SystemD, its a HUGE step up, which is why everyone is using it now

    I think that’s a “winners write history” situation. There were other options at the time that might have been better choices. Everyone uses it now because Redhat and Debian are upstream of most users, desktop and corporate. I was not surprised by Redhat adopting it (it’s their own product), but Debian was quite the shock.

    Yes, systemd is definitely a step up from traditional initscripts (oh god). In terms of simplicity, reliability and ease of configuration, however, it’s a step below other options (like runit). I don’t have distro management experience but, given the problems I’ve encountered with different init systems over the years, I suspect there would be less of a maintenance burden with the other options.