Here I sit with my heart broken // Tried to take a shit, but only managed a fart

  • 2 Posts
  • 36 Comments
Joined 1 year ago
Cake day: June 5th, 2023


  • Why do you have to use NGINX? Caddy does the proxying to the Lemmy containers for you. That docker-compose.yml file is my entire deployment; there is no hidden NGINX container or config file that needs to be added. Just tear down your broken Lemmy deployment with docker compose down and delete the containers, then run docker compose up with my docker-compose.yml (after you edit the Postgres variables) and config.hjson in the same folder.


  • Oh shit, I forgot that your Caddy would be running on a bridge network by default, because mine is on the host network where all ports are already exposed to it! (It’s generally a bad idea to use the host network, so don’t do this if you’re only using Caddy with containers on the same network.) I edited the Gist to expose 80 and 443 for HTTP/S on that container; the updated file uses the same GitHub link. Really sorry about that!
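
    In other words, the only change is publishing Caddy’s HTTP/S ports in the compose file. A minimal sketch of what the caddy service looks like with that edit (everything else stays as in the Gist):

    services:
      caddy:
        image: caddy:2
        ports:
          - 80:80    # HTTP, needed for redirects and the ACME HTTP challenge
          - 443:443  # HTTPS
        # volumes and command unchanged from the Gist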



  • Yeah, the config file in the documentation sucks. I had to poke through several discussions on /c/selfhosting to find a config that wasn’t the extremely minimal one linked there. Your config.hjson is fine from what I can tell, although I’m not sure why you censored the hostname, since it’s supposed to be lemmy.emphisia.nl and not anything confidential.

    Honestly, I don’t have enough understanding of NGINX to debug its config, so I’ll just share my docker-compose.yml for leddit.danmark.party which worked correctly and federated out of the box, with a few adjustments to match your deployment. Note that you’ll have to tear down your existing deployment if you want to use this docker-compose.yml because they use the same ports.

    I should probably self-host my own pastebin
    version: "3.9"
    x-logging:
      &default-logging
      options:
        max-size: '10m'
      driver: json-file
    
    services:
      caddy:
        image: caddy:2
        volumes:
          - ./volumes/caddy:/data
          - ./volumes/caddy:/config
        # See Caddy's documentation for customizing this line
        # https://caddyserver.com/docs/quick-starts/reverse-proxy
        command:
          - /bin/sh
          - -c
          - |
            cat <<EOF > /etc/caddy/Caddyfile && caddy run --config /etc/caddy/Caddyfile
            
            {
              debug
            }
            
            (common) {
            	encode gzip
            	header {
            		-Server
            		Strict-Transport-Security "max-age=31536000; include-subdomains;"
            		X-XSS-Protection "1; mode=block"
            		X-Frame-Options "DENY"
            		X-Content-Type-Options nosniff
            		Referrer-Policy no-referrer-when-downgrade
            		X-Robots-Tag "none"
            	}
            }       
            
            # Lemmy instance
            lemmy.emphisia.nl {
              log
              import common
              reverse_proxy http://lemmy-ui:1234 # lemmy-ui
              
              @lemmy {
            		path /api/*
            		path /pictrs/*
            		path /feeds/*
            		path /nodeinfo/*
            		path /.well-known/*
            	}
             
             	@lemmy-hdr {
            		header Accept application/*
            	}
              
              handle @lemmy {
                reverse_proxy http://lemmy:8085 # lemmy
              }
              
              handle @lemmy-hdr {
                reverse_proxy http://lemmy:8085
              }
              
              @lemmy-post {
            		method POST
            	}
            
            	handle @lemmy-post {
            		reverse_proxy http://lemmy:8085
            	}
            }
            EOF
      lemmy:
        image: dessalines/lemmy:0.18.1-rc.9
        ports:
          - 8085:8536
        volumes:
          - ./lemmy.hjson:/config/config.hjson
        depends_on:
          - postgres
          - pictrs
        restart: always
        logging: *default-logging

      lemmy-ui:
        image: dessalines/lemmy-ui:0.18.1-rc.9
        ports:
          - 1234:1234
        environment:
          - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8085
          - LEMMY_UI_LEMMY_EXTERNAL_HOST=localhost:1236
        depends_on:
          - lemmy
        volumes:
          - ./volumes/lemmy-ui/extra_themes:/app/extra_themes
        restart: always
        logging: *default-logging

      postgres:
        image: postgres:15-alpine
        ports:
          - 5432:5432
        environment:
          - POSTGRES_USER=MyPostgresUser
          - POSTGRES_DB=MyPostgresDb
          - POSTGRES_PASSWORD=MyPostgresPassword
        volumes:
          - ./volumes/postgres:/var/lib/postgresql/data
        restart: always
        logging: *default-logging

      pictrs:
        image: asonix/pictrs:0.4.0-rc.7
        user: 991:991
        hostname: pictrs
        environment:
          - PICTRS__MEDIA__VIDEO_CODEC=vp9
          - PICTRS__MEDIA__GIF__MAX_WIDTH=256
          - PICTRS__MEDIA__GIF__MAX_HEIGHT=256
          - PICTRS__MEDIA__GIF__MAX_AREA=65536
          - PICTRS__MEDIA__GIF__MAX_FRAME_COUNT=400
        volumes:
          - ./volumes/pictrs:/mnt
        restart: always
        logging: *default-logging

      postfix:
        image: mwader/postfix-relay
        environment:
          - POSTFIX_myhostname=lemmy.emphisia.nl
        restart: always
        logging: *default-logging
    

  • I don’t use NGINX as my proxy server, but it’s a bit strange that you would need two configs for this while mine runs perfectly with one config and two open ports (:8536 for Lemmy-BE and :1234 for Lemmy-UI). And why are you using different versions of Lemmy-BE (18.1-rc9) and Lemmy-UI (18.1-rc4)?

    If you are using the default docker-compose.yml from the Lemmy repo, that part of the NGINX config uses https:// + the name of the Docker containers. And you always give NGINX the externally published port - the number on the left side of the colon in ports:, like 1234 in 1234:5678. The port on the right is only known inside the container it’s defined for.
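
    To put the same thing in compose terms (some-service is just a hypothetical name, the numbers are from the example above):

    services:
      some-service:
        ports:
          - 1234:5678  # left (1234): port published outside the container; right (5678): port the app listens on inside it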

    If it’s still broken after you correct the NGINX config, what do your docker-compose.yml and config.hjson look like? There are several versions of them floating around, and you might have combined incompatible versions with each other.







  • Unlike USB-C, USB-A only works on the host side, so you can’t do direct data transfers between two devices that both have USB-A ports. It’s much slower too. Electronic waste is not ideal, but it has to happen for a large-scale hardware upgrade; I try to reduce it by recycling my USB-A bricks and cables.

    I also cannot understand why, unless you use Apple devices exclusively, you would be happy that one company’s devices use a charging system completely different from every other device in the world. I don’t care if Lightning is better when it’s proprietary. If Apple “sticks two fingers up” and doesn’t put USB-C charging into the iPhone 15, I won’t be buying another device from them, because I’m tired of having to carry two different cables around - one USB-C for my laptop, Android phone, power bank, speaker and other devices, and one Lightning cable for nothing but the damn iPhone.




  • Saved this comment. It claims that the Lemmy frontend and backend are stateless and can be scaled arbitrarily, as can the web server; the media server (pict-rs) and the Postgres database are what actually limit scaling. I’m working on deploying Lemmy with external object storage to solve media storage scaling, and there are probably some database experts figuring out Postgres optimization and scaling as well. None of the instances are big enough to run into serious issues with vertical scaling yet, so this won’t be a problem for a while.
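
    For anyone curious, this is roughly what the object-storage part could look like for the pict-rs service, in the same compose style as my deployment above. The PICTRS__STORE__* variable names and all of the values here are my assumptions based on the PICTRS__ naming pattern, not something I’ve tested, so verify them against the pict-rs 0.4 documentation before relying on this:

      pictrs:
        image: asonix/pictrs:0.4.0-rc.7
        environment:
          # Assumed variable names and placeholder values - check the pict-rs 0.4 docs
          - PICTRS__STORE__TYPE=object_storage
          - PICTRS__STORE__ENDPOINT=https://s3.example.com
          - PICTRS__STORE__BUCKET_NAME=pictrs-media
          - PICTRS__STORE__REGION=us-east-1
          - PICTRS__STORE__ACCESS_KEY=changeme
          - PICTRS__STORE__SECRET_KEY=changeme
        volumes:
          - ./volumes/pictrs:/mnt
        restart: always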





  • Anyone can scrape data and corporations are already doing it. But data scraping is considered a legal gray zone and companies can be prevented from accessing data that they are not legally authorized to use, which is why companies like OpenAI retrieve their training data from data dumps and don’t just run web crawlers across the entire internet. A publicly announced platform with an appropriate clause in its Terms of Service can grant Meta the legal ownership of all data from the fediverse that arrives on their platform.



  • Sounds like the problem here is that your colleagues are your only social circle outside of family, rather than remote work itself being isolating. I think it’s unhealthy for work relationships to make up a significant part of your social life in general, because you’ll have a less rational perspective on your job when you associate it with friends. You might be reluctant to leave a job with poor compensation and hours because all of your friends are there, for example. My commute to work and back takes over 2 hours a day, and it’s much easier to be peer-pressured into working overtime when you can see everyone else doing so. All of this only benefits the employer. I’d rather work remotely and spend the saved time with people I choose to be with.