• 1 Post
  • 17 Comments
Joined 1 year ago
Cake day: June 10th, 2023


  • As you mentioned, with Fedora the best alternatives are the immutable spins. Updating means downloading a new base image and applying your overlays and additional installations to it; on the next reboot you start from that image. You can configure it to keep as many previous versions as you need and boot into those directly on startup. Since you never change your current image once it’s built, you can’t break a known good system. You can only ever break your next version, and in that case you just boot the previous one.

    I’ve created an Ansible playbook that configures a vanilla Kinoite the way I want it. No need to back up the system if I can recreate it with less than a megabyte of text files. Secrets are in my password vault, personal files are in my personal cloud and get synced to and from the laptop continuously. I would never go back to backing up system files as opposed to recreating the system with a playbook. That seems so wasteful in hindsight.
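
    To give a rough idea, such a playbook can be as small as the sketch below. This is only an illustration of the approach, not my actual playbook; the package, Flatpak and path names are placeholders you’d swap for your own.

    # Minimal sketch of a Kinoite post-install playbook (names are placeholders).
    - name: Configure a vanilla Kinoite install
      hosts: localhost
      connection: local
      tasks:
        - name: Layer extra packages onto the base image (applies on the next reboot)
          become: true
          ansible.builtin.command: rpm-ostree install --idempotent distrobox

        - name: Install desktop apps as user Flatpaks
          community.general.flatpak:
            name: org.mozilla.firefox
            state: present
            method: user

        - name: Sync dotfiles into place
          ansible.builtin.copy:
            src: dotfiles/
            dest: "{{ ansible_env.HOME }}/"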



  • Lichtblitz@discuss.tchncs.de to Selfhosted@lemmy.world · Stalwart v0.5.0 · 6 months ago

    Weird, I’ve never had problems over the past 15 years or so and I’ve been using VPS servers exclusively. Maybe my providers were reputable enough.

    I realize my evidence is only anecdotal, but that’s why I started with “in my experience”. Also, common blacklists are checked by the services I mentioned.



  • Lichtblitz@discuss.tchncs.de to Selfhosted@lemmy.world · Stalwart v0.5.0 · 6 months ago

    In my experience, this is nothing more than an urban legend at this point. There are great standards, like DMARC, DKIM, SPF, proper reverse DNS and more, that are much more reliable and are actually used by major mail servers. Pick a free service that scans the publicly visible parts of your email server, and one that accepts an email you send to them and generates a report. Make sure all checks are green. After an initial day or two of getting it right, I’ve never had trouble with any provider accepting mail, and the ongoing maintenance is very low.

    Mileage may vary with an unknown domain, large email volumes or suspicious content, though.
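
    For a rough idea, the DNS side of this usually boils down to a handful of records along these lines; the domain, selector, policy and key here are just placeholders, not a recommendation for your setup:

    example.com.                    IN  MX   10 mail.example.com.
    example.com.                    IN  TXT  "v=spf1 mx -all"
    _dmarc.example.com.             IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
    mail._domainkey.example.com.    IN  TXT  "v=DKIM1; k=rsa; p=<public key from your DKIM setup>"
    ; plus a PTR record for the server's IP that resolves back to mail.example.com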


  • Everyone keeps saying that but I just can’t see it. The only time my mail was rejected was because I didn’t know what I was doing at the beginning of my journey. Now, whenever I’ve changed my stack or done some major updates over the past 20 years or so, I just go to 2-3 sites that analyze my mail server from the outside and tell me if there is anything wrong. The free tier is always more than enough. Just make sure there is at least one service in the list where you send an email to a generated mailbox and have it analyzed. Just looking at the mail server is not enough to find all potential configuration issues.

    I aim for a 100% score. It’s time-consuming the first time around, but later it’s just a breeze.





  • Most of your points seem to be spot on from what I understand as well. However, I believe that the GDPR requirements can and should be baked into Lemmy itself. This would prevent the fragmentation you mentioned: a guarantee of removing user data as requested while federated, plus a guarantee to remove stale user data while defederated, since deletion requests won’t get through in that case. That would “just” leave the list of processors. This one can be very tricky, because you are not just sharing data with your home instance and its federated instances but also with the federated instances of those federated instances. The home instance has no way of learning about the 2nd-degree federation. I have no idea how to get that network of data sharing GDPR compliant, and I think this is the much more complicated part that your proposal also suffers from.


  • Sure thing.

    So there are two parts to all of this:

    1. Getting MediaWiki set up, properly configured and running.
    2. Having it securely accessible from the Internet (if needed), including SSL certificates.

    Part 1 is well covered by the MediaWiki release already. You only need to worry about the correct configuration. When you download the current version from the official MediaWiki page, you’ll notice that there is already a docker-compose.yml file in there. This gets you most of the way to your destination.

    Read the file and set the values of all variables you wish to override in a separate “.env” file in the same folder. It could look something like this:

    MW_SCRIPT_PATH=/w
    MW_SERVER=https://your-url.com
    MW_DOCKER_PORT=80
    MEDIAWIKI_USER=Admin
    MEDIAWIKI_PASSWORD=some_password
    XDEBUG_CONFIG=
    XDEBUG_ENABLE=true
    XHPROF_ENABLE=true
    MW_DOCKER_UID=1000
    MW_DOCKER_GID=1000

    Now you can just run docker-compose up and everything will be set up. When you visit your site for the first time, it should hold your hand, guide you through the configuration options and finally offer to let you download the LocalSettings.php file that contains all the decisions you’ve made. You can review and adjust it further and finally save it to the same folder as your docker-compose.yml file. Refresh the site and it should be accessible right away. I would say for a closed audience, these are the most important options to set:

    # The following permissions were set based on your choice in the installer
    $wgGroupPermissions['*']['createaccount'] = false;
    $wgGroupPermissions['*']['edit'] = false;
    $wgGroupPermissions['*']['read'] = false;
    

    These options prevent people from creating their own accounts (you will have to create accounts for them from the UI) and block people from viewing any pages without being logged in.

    If you do not wish to use SQLite but rather a dedicated DBMS (I strongly discourage you from getting into that trouble for smaller or even medium user bases), you will find more information on the page for alternative configuration recipes.

    If you would like to go into part 2, just ask and I’ll give you an overview of my setup here as well. I’m using docker-letsencrypt-nginx-proxy-companion.
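
    In broad strokes, the stack looks something like the sketch below. Treat it as an outline rather than my exact compose file: image tags, volume names and the domain are placeholders, and the companion project has since been renamed to acme-companion, so check its current documentation for the details.

    # Rough outline: MediaWiki behind nginx-proxy with the Let's Encrypt companion.
    services:
      nginx-proxy:
        image: nginxproxy/nginx-proxy
        container_name: nginx-proxy
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - certs:/etc/nginx/certs
          - vhost:/etc/nginx/vhost.d
          - html:/usr/share/nginx/html
          - /var/run/docker.sock:/tmp/docker.sock:ro

      acme-companion:
        image: nginxproxy/acme-companion
        environment:
          # Which container is the proxy, and the default account mail for Let's Encrypt.
          NGINX_PROXY_CONTAINER: nginx-proxy
          DEFAULT_EMAIL: you@your-url.com
        volumes:
          - certs:/etc/nginx/certs
          - vhost:/etc/nginx/vhost.d
          - html:/usr/share/nginx/html
          - acme:/etc/acme.sh
          - /var/run/docker.sock:/var/run/docker.sock:ro

      mediawiki:
        image: mediawiki   # or whatever image/build you ended up with in part 1
        environment:
          # Picked up by nginx-proxy and the companion to route traffic and issue the certificate.
          VIRTUAL_HOST: your-url.com
          LETSENCRYPT_HOST: your-url.com
        volumes:
          - ./LocalSettings.php:/var/www/html/LocalSettings.php
          - images:/var/www/html/images

    volumes:
      certs:
      vhost:
      html:
      acme:
      images: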



  • I’m running MediaWiki for a role-playing group in Docker. The difficult part was getting everything set up to obtain certificates from Let’s Encrypt and offer HTTPS without leaving Docker Compose. The great thing about this is that creating a backup or moving servers has become trivial. As long as you don’t expect your users to perform dozens or even hundreds of operations per second, I’d strongly advise sticking with SQLite to make your admin life that much easier. If you want, I’ll look up my full stack and post it here once I’m not on mobile any more.


  • Almost everything has been mentioned already, so I’ll just stick with the unusual: I host a private MediaWiki instance for note taking in my pen and paper rounds. It’s amazing once the other players got a bit more comfortable with using it well: templates, categories and articles. My only regret is that I didn’t set up a separate instance per gaming group.