"Buy Me A Coffee"

  • 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • Yes it would. In my case though, I know all of the users that should have remote access, and I’m more concerned about unauthorized access than ease of use.

    If I wanted to host a website for the general public to use though, I’d buy a VPS and host it there. Then use SSH with private key authentication for remote management. This way, again, if someone hacks that server they can’t get access to my home lan.
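    If it helps, the key-only SSH setup is roughly this (the key name, user, and host are just examples; adjust for your VPS):

        # generate a key pair on your local machine
        ssh-keygen -t ed25519 -f ~/.ssh/vps_key

        # copy the public key to the VPS
        ssh-copy-id -i ~/.ssh/vps_key.pub user@your-vps

        # then on the VPS, turn off password logins in /etc/ssh/sshd_config:
        #   PasswordAuthentication no
        #   PermitRootLogin prohibit-password
        # and reload sshd afterwards (e.g. sudo systemctl reload ssh on Ubuntu)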


  • Their setup sounds similar to mine. But no, only a single service is exposed to the internet: wireguard.

    The idea is that you can have any number of servers running on your lan, etc… but in order to access them remotely you first need to VPN into your home network. This way the only thing you need to worry about security-wise is wireguard. If there’s a security hole / vulnerability in one of the services you’re running on your network, or in nginx, etc…, attackers would still need to get past wireguard before they could access your network.

    But here is exactly what I’ve done:

    1. Bought a domain so that I don’t have to remember my IP address.
    2. Set up DDNS so that the A record for my domain always points to my home IP.
    3. Run a wireguard server on my lan.
    4. Port forwarded the wireguard port to the wireguard server.
    5. Created client configs for all remote devices that should have access to my lan.

    Now I can just turn on my phone’s VPN whenever I need to access any one of the services that would normally only be accessible from home.

    P.S. There are additional steps I took to ensure that the VPN’s masquerade was disabled, that all VPN clients use my pihole, and that I can still get decent internet speeds while on the VPN. But that’s slightly beyond the original ask here.
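    If it helps, a stripped-down version of the wireguard server config looks something like this (the keys, addresses, and port are placeholders, not my real values):

        # /etc/wireguard/wg0.conf on the lan server
        [Interface]
        # VPN subnet, separate from the lan
        Address = 10.8.0.1/24
        # the UDP port forwarded in step 4
        ListenPort = 51820
        PrivateKey = <server-private-key>

        [Peer]
        # one of these per client config from step 5
        PublicKey = <phone-public-key>
        AllowedIPs = 10.8.0.2/32

    Each client config then sets its Endpoint to the domain from steps 1-2 (e.g. vpn.example.com:51820) and puts the home lan subnet in its AllowedIPs so that traffic gets routed through the tunnel.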


  • I’m also running Ubuntu on my main machine at home. (I have a Mac and do Android development for my day job.)

    But at home, I do a lot of website and backend dev.

    1. Code in VSCode
    2. Build using docker buildx
    3. Test using a local container on my machine
    4. Upload the tested code to a feature branch on git (self-hosted server)
    5. Download that same feature branch on a Raspberry Pi for QA testing.
    6. Merge that same code to develop.
        6a. That kicks off a CI build that deploys a set of docker images to DockerHub.
    7. Merge that to main/master.
    8. That kicks off another CI build.
    9. SSH into my prod machine and run docker compose up -d
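    In case it’s useful, the docker side of that is roughly this (the image name, tag, and port are just examples):

        # steps 2/3: build and test a local container
        docker buildx build -t myuser/myapp:dev .
        docker run --rm -p 8080:8080 myuser/myapp:dev

        # step 9: on the prod machine
        docker compose pull   # grabs the newer images the CI pushed, if the tag was reused
        docker compose up -d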

  • That looks like 8.8.8.8 actually responded. The ::1 is IPv6’s localhost, which seems odd. As for the wrong IPv4, I’m not sure.

    I normally see something like “requested 8.8.8.8 but 1.2.3.4 responded” if the router is forcing traffic to its own DNS servers.

    You can also specify the DNS server to use with nslookup, like: nslookup www.google.com 1.1.1.1, and see if you get any different answers from there. But what you posted doesn’t seem out of the ordinary other than the ::1.

    Edit: just for shits and giggles, also try nslookup xx.xx.xx.xx where xx.xx… is the wrong IP from the other side of the world, and see what domain it returns.
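    Concretely, something like this (the addresses here are just examples; swap in what you’re actually seeing):

        nslookup www.google.com              # uses whatever resolver your system is configured with
        nslookup www.google.com 1.1.1.1      # ask Cloudflare directly
        nslookup www.google.com 8.8.8.8      # ask Google directly
        nslookup 203.0.113.7                 # reverse lookup of the "wrong" address you got back

    If the first answer differs from what 1.1.1.1 / 8.8.8.8 give you directly, something on your network is rewriting DNS.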


  • Another thing that can be happening is that the router or firewall is redirecting all port 53 traffic to their internal DNS servers. (I do the same thing at home to prevent certain devices from ignoring my router’s DNS settings cough Android cough)

    One way you can check for this is to run “nslookup some.domain” from a terminal and see where the response comes from.
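    For reference, the redirect I mentioned is basically just a NAT rule along these lines (the lan interface name and the pihole address are placeholders, and your router’s firewall syntax may differ):

        # send any outbound DNS from lan clients to the local resolver instead
        iptables -t nat -A PREROUTING -i br-lan -p udp --dport 53 -j DNAT --to-destination 192.168.1.2:53
        iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 53 -j DNAT --to-destination 192.168.1.2:53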


  • The project is open source so you can see what they are logging, if you can read the code.

    But simply put, some things that are logged:

    • IPs are logged, but I don’t see them being associated with a user account. This looks to be mainly for rate limiting.
    • What posts/comments you’ve looked at are logged. This is so the UI can gray out posts you’ve already seen or mark replies to your own comments as read.

    From what I can tell, neither of these data points is federated, so only the instance you’re logged into has that information.

    ** Don’t use this as an exhaustive list. These are just the two items you specifically asked about and what I’ve seen looking through the code so far. **



  • Are you planning on modifying the lemmy backend or UI? If not, I would suggest:

    1. Comment out the sections about building the image from scratch.
    2. Go to DockerHub and find the newest tag that matches your system’s architecture and use that instead.

    If you can paste your docker-compose file we can more precisely tell you what needs to be changed.

    The default config expects that you’re cloning the entire GitHub repo, for the backend at least, and tries to compile it from scratch. You can instead just tell docker to use a precompiled image.

    Lastly, YAML is very picky about whitespace, so something might be indented incorrectly. Again, if we can see your docker-compose file we might be able to spot what’s wrong.
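    For reference, the swap usually looks something like this in the compose file (the tags here are just examples; grab the newest ones from DockerHub that match your architecture):

        services:
          lemmy:
            # replace the whole build: section with an image: line
            image: dessalines/lemmy:0.19.3
            restart: always
          lemmy-ui:
            image: dessalines/lemmy-ui:0.19.3
            restart: always

    Then docker compose pull && docker compose up -d will fetch the prebuilt images instead of compiling anything.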