• 4 Posts
  • 25 Comments
Joined 1 year ago
Cake day: June 9th, 2023

  • Everything Wordpress is heavily infested with that. However, you don’t have to let it impact you – it looks to me like they pressure commercial vendors to put their stuff under the GPL if they want to offer a free version, so there’s a robust ecosystem of actually-FOSS tooling for it. My experience has been that it’s always worked pretty well in practice; you just have to keep your nope-I’m-not-paying-for-your-paid-version goggles firmly affixed. (Also, side note, GPT does an excellent job of writing little functions.php snippets for you to enable particular custom functionality for your Wordpress install when you need it.)


  • Wordpress 1,000% (probably coupled with WooCommerce, though there are other options)

    I honestly don’t even know off the top of my head why you would use anything else (aside from some vague elitism connected to the large ecosystem of commercial crap, which has tainted the open source core by association) – it combines FOSS + easy + powerful + popular. You will have to tiptoe around some amount of crapware in order to keep it pure OSS though.


  • Yeah. I think it’s moderately likely that I’ll try to produce a little command-line tool that can do it effectively for deeply nested directories, with some attempt at making it cross-platform. To me it’s kind of weird that there’s no stock solution to this problem. I get that it’s actually a deceptively difficult problem to solve for a couple of different reasons, but that’s no reason to pass the difficulty on to the programmer instead of just presenting a clean, nice interface.

    Update: I looked around for something already-existing, and found watchman and fswatch… IDK, maybe I’ll try to talk one of them into letting me write a fanotify backend for those tools instead. It seems like it’s purely a Linux issue, and everything is simple on BSD/Mac/Windows, so maybe I’m just unlucky.


  • I think inotify’s limit is per system… and even if it weren’t, why would I want to take on the artificial challenge of making sure all the watchers stay set on the right directories as things change, instead of just recursively monitoring the whole directory? The whole point of asking the question was “hey, can something do this for me” as opposed to “hey, I’d like the opportunity to code up for myself a solution to this problem.” 🙂
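
    For what it’s worth, the Python watchdog library is roughly the shape of API I mean – on Linux it does the per-directory inotify bookkeeping itself and just gives you recursive=True. A minimal sketch (the path is a placeholder):

    import time
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    class PrintHandler(FileSystemEventHandler):
        # Called for every create/modify/move/delete anywhere under the watched tree
        def on_any_event(self, event):
            print(f"{event.event_type}: {event.src_path}")

    observer = Observer()
    # recursive=True is the point – the library keeps up with new subdirectories itself
    observer.schedule(PrintHandler(), "/path/to/watch", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()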





  • This is a long time coming TBH. It hasn’t made sense for at least 10-15 years for Microsoft to still be trying to “win” against Linux. When I see it, it just seems weird to me. It’s like your old grandpa who still talks about the “japs” when he sees someone driving a Toyota.

    Linux runs most of the smartphones in the world, and a BSD fork runs the rest. It’s done. No one is going to deploy Windows Server 2023 edition to run their web services unless something’s gone pretty badly wrong. We’re all focused on AI and cloud computing now, and have been for some time.

    The most critical thing a business can do to remain successful is recognize and adapt to the new reality.


  • Almost as if the whole endeavor is a ridiculous counterproductive waste of time.

    It would be possible to implement a “slur filter” on the reader’s side that automatically redacted a configurable list of bad words from any comment on any instance… but I suspect that the percentage of people who would enable it, and the general community feedback on it, wouldn’t be what the person who made the decision wants to hear. Doing it on the sender side provides a convenient pretense of “I’m doing a good thing here” because it prevents that feedback.
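
    Just to show how little machinery the reader-side version would actually take – a rough sketch in Python, with the word list and the replacement style obviously left up to the reader (none of this is anything Lemmy actually ships):

    import re

    # Hypothetical, reader-configured list of terms to hide
    blocked_terms = ["badword1", "badword2"]

    def redact(comment_text: str) -> str:
        # One case-insensitive pattern over whole words only
        pattern = re.compile(
            r"\b(" + "|".join(re.escape(term) for term in blocked_terms) + r")\b",
            re.IGNORECASE,
        )
        # Replace each hit with asterisks of the same length
        return pattern.sub(lambda m: "*" * len(m.group(0)), comment_text)

    print(redact("some comment containing badword1"))
    # -> some comment containing ********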


  • 1. Settings & Beta -> Data controls -> Export data
    2. Unzip
    Python 3.11.3 (main, Apr 21 2023, 11:54:59) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import json
    >>> # Load the exported conversation history
    >>> with open('conversations.json') as infile:
    ...     convos = json.load(infile)
    ... 
    >>> # Walk every message of every conversation and print any part that matches
    >>> for convo in convos:
    ...     for key, value in convo['mapping'].items():
    ...         message = value.get("message", None)
    ...         if message:
    ...             parts = message.get("content", {}).get("parts", [])
    ...             for part in parts:
    ...                 if 'text to search' in part:
    ...                     print(part)
    
    3. Customize to taste

  • You shouldn’t have to… as I understand it, if it’s showing up on your server, that means your server authenticated it. Given the general flakiness of all this software and Lemmy in particular, I wouldn’t rely on that too heavily, but that’s the theory.

    If you do want to double-check it yourself, I know partially how to do it. You don’t have to get the key from the database; it’s probably simpler and safer to get it from your user’s JSON. Here’s a super-basic script to dump a fediverse endpoint’s contents:

    import requests
    import json
    import sys
    
    def fetch_and_pretty_print(url, headers=None):
        # If headers are not provided, set default to fetch ActivityPub content
        if headers is None:
            headers = {
                'Accept': 'application/activity+json',
                'User-Agent': 'Fediverse dump tool via @[email protected]'
            }
        
        try:
            response = requests.get(url, headers=headers)
            response.raise_for_status()  # Raise an exception for HTTP errors
    
            # Try to parse JSON and pretty print it
            parsed_json = response.json()
            print(json.dumps(parsed_json, indent=4, sort_keys=True))
            
        except requests.RequestException as e:
            print(f"Error fetching the URL: {e}")
        except json.JSONDecodeError:
            print("Error decoding JSON.")
    
    if __name__ == '__main__':
        fetch_and_pretty_print(sys.argv[1])
    

    If I wanted to validate your comment, I would start by getting your public key via your user’s endpoint on your home server. I could save that script above as fetch, then run python fetch https://lemmy.mindoki.com/u/Loulou, and in among a bunch of other stuff I would see:

        "publicKey": {
            "id": "https://lemmy.mindoki.com/u/Loulou#main-key",
            "owner": "https://lemmy.mindoki.com/u/Loulou",
            "publicKeyPem": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArRwWZneP9efCrsymHDE2\nsJAHojjxE4A2Q3Hquwt7s/HPTAi3gKP7NKCRSH7XVPtGhieJdtDeoLMkitvZXCUX\nS1pZArTYihuLeOwbB+JrAHZpWr1sYpazspUPvl3MhDAOOCCAnSeqsMNPNd8QX1Tf\nN/3Bp4PRVmp9E968L61h93L5N3B7VxZ37kbzKFXrhmU6qFQbAoVQvHtojCD6WqR2\nMb84eJy5QBN+0SjvGR8LRE0iJZiwYvVXKNoEyOqr4Fw8YnELi3TYbfxX++0uXw97\ne+/rFgaa/QVCSopUbHkuX/ZfjzCdBAI+aqXsbmYLgdxdRDHur0k53aCh3u0t/IDL\nHQIDAQAB\n-----END PUBLIC KEY-----\n"
        },
    

    I don’t know off the top of my head how you could navigate your way to the fediverse JSON for your comment, or how to verify its signature once you find it (I tried to get the post by dumping your user’s outbox and the lemmy_support community’s outbox, but neither of those worked the way I expected it to), but that all might be a helpful starting point. I know that according to the docs, anything that was created by your user and then federated is supposed to be signed with that key so that other servers can authenticate it.
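
    If you do get that far, the last step should just be an ordinary RSA check against that public key. A rough sketch with the Python cryptography library – the part it skips is reconstructing exactly which bytes were signed, which is the piece I haven’t worked out:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def verify(public_key_pem: bytes, signed_bytes: bytes, signature: bytes) -> bool:
        # public_key_pem is the publicKeyPem string from the actor JSON, as bytes
        public_key = serialization.load_pem_public_key(public_key_pem)
        try:
            # Fediverse signatures are generally rsa-sha256 (PKCS#1 v1.5 + SHA-256)
            public_key.verify(signature, signed_bytes, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False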


  • Here’s quite a good overview. The short answer, I think, is that the signature is embedded into the JSON object representing the post / upvote / whatever, which then gets passed around server-to-server (and each server checks the signature against a public key it fetches from the original server). It’s not something you can get your head around just by asking a couple of simple questions, but it’s a pretty fascinating design once you do.
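
    To make that concrete, on Mastodon-style objects the embedded piece looks roughly like this (the values here are made-up placeholders, and Lemmy leans more on HTTP signatures for delivery, so treat it as illustrative):

    "signature": {
        "type": "RsaSignature2017",
        "creator": "https://example.social/users/alice#main-key",
        "created": "2023-08-01T12:00:00Z",
        "signatureValue": "base64-encoded-signature…"
    }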




  • You gotta have the concepts the machines are named after change as the nature of the machines changes (bonus points if the nature of the concept is analogous to the nature of the machine). E.g. if my main machines were planets, then when I added servers they would be named after space hardware (hubble, webb, iss, etc). Raspberry Pis can be ceres, eros, vesta, juno, etc. It genuinely helps your brain keep track of which machine corresponds to which concept or name, and it also frees up more names when you start having tons of machines in different categories.

    I’ve had tons of naming schemes over the years (chemical elements and classic video games were two that I used for different banks of machines) and that system has served me well.




  • This is a masterfully Orwellian post. So, Redhat is threatening their customers with withdrawal of support that they depend on quite deeply, if the customers exercise their rights under the GPL. In response, the community got upset. Redhat’s response is:

    I was shocked and disappointed about how many people got so much wrong about open source software and the GPL in particular —especially, industry watchers and even veterans who I think should know better. The details — including open source licenses and rights — matter, and these are things Red Hat has helped to not only form but also preserve and evolve.

    So, as of 15 years ago, the total value of what Redhat is selling was estimated at around 10 trillion dollars. The fraction of that that was created by Redhat is, fair play, higher than most companies that distribute FOSS software. They are, in terms of code, a significant contributor (especially in the kernel). But what they’re building on in the first place is this multi-trillion dollar thing that they got for free. The only caveat was that they need to maintain the same freedom for others that they made use of.

    So, when people ask them to do that, they say:

    I feel that much of the anger from our recent decision around the downstream sources comes from either those who do not want to pay for the time, effort and resources going into RHEL or those who want to repackage it for their own profit. This demand for RHEL code is disingenuous.

    I see. It’s yours, and we’re not allowed to repackage it for our own “profit.” Because:

    Simply repackaging the code that these individuals produce and reselling it as is, with no value added, makes the production of this open source software unsustainable.

    Got it.


  • I’m not asking them to make available the exact same code; nothing says they have to make RHEL available to anyone other than their customers. It’s conventional in the open source world to do so, but not required, and they’ve chosen not to because they have this business model of selling GPL software and making it difficult to obtain for free what they’re selling.

    Trying to make a profit through that business model is fine. Having that as their business model doesn’t give them the right to violate the license though. They are threatening their customers if their customers exercise their right to redistribute RHEL (with the apparent goal of making RHEL, the exact product, difficult to obtain for anyone other than their customers – basically building on other people’s work for free, without honoring the terms of free redistribution under which those people made their work available to Redhat).

    In GPL v2, the relevant text is in section 6:

    You may not impose any further restrictions on the recipients’ exercise of the rights granted herein.


  • If that were accurate, then what Redhat is doing would be fine. The issue is that they’ve been requiring that their customers not exercise their rights under the GPL to copy or share the source code that Redhat is providing, with the threat of cutting off their support if they do. There’s an unsettled argument over whether that is actually a violation of the license that grants them the ability to sell someone else’s work in the first place, or merely a gross violation of the spirit of the license, which most of the people who authored the source code they’re selling would be 100% opposed to. But it’s at least one of those things.

    The GPL exists so that companies can’t just take the code and contribute nothing back.

    This isn’t accurate, though. The GPL says nothing about contributing anything back in terms of authoring improvements or making them available. What it says is, you can redistribute our work, or even sell it, but you need to make sure that people who receive it from you also have those rights.

    I’m aware that Redhat is, comparatively speaking, a huge contributor to the FOSS ecosystem. But, if the amount of code they’ve written is huge, the amount that people outside Redhat wrote that they’re selling is gargantuan. I would be very surprised if as much as 5% of the code they’re selling to their customers was anything they authored. If they want to sell the other 95+%, I think it’s fair to ask that they obey the licensing that allows them to.


  • Right, this source is just weird. The story is 100% real, and honestly probably a problem to the extent that Microsoft and the Linux Foundation are even relevant anymore, but everything in it is told in such a hyperbolic style that it’s hard to even make sense of.

    just like the Open Source Initiative, where most of the money comes from Microsoft

    Is this true? This doesn’t sound true.

    and the official blog promotes Microsoft, its proprietary software, and Microsoft’s side

    https://blog.opensource.org/

    https://www.linuxfoundation.org/blog

    What is this even talking about? Where does whichever of these blogs this is talking about promote Microsoft’s proprietary software?

    in a class action lawsuit over GPL violations (with 9 billion dollars in damages at stake).

    I was really curious because I hadn’t heard of this. It turns out it’s the Github Copilot lawsuit. I could be wrong, but I’ve looked and I couldn’t find this $9 billion number anywhere else; it sounds like it’s arrived at by simply assuming that 1% of code that Copilot produces is infringing, and computing DMCA damages based on that 1%. It’s not really clear to me whether that argument was just an illustrative example of the scope of the problem, or whether they’re actually asking for $9 billion, but I tend to assume the former. In other venues when the litigants have been asked what remedy they want, they’ve said things like, “We’d like to see them train their AI in a manner which respects the licenses and provides attribution,” as opposed to “we want $9 billion.”

    Etc etc. I picked out a little excerpt, but the whole article is written like this, which makes me look at it sideways.