• 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: July 22nd, 2023


  • There’s also the option of setting up a Cloudflare Tunnel and only exposing Immich over that tunnel. The HTTPS certificate is handled by Cloudflare, and you’d need to use the Cloudflare name servers as your domain’s name servers.

    Note that this means Cloudflare will proxy to you and essentially become a man-in-the-middle: You --HTTPS--> Cloudflare --HTTP--> homelab Immich. The connection between Cloudflare and your homelab could be encrypted as well, but Cloudflare remains the man-in-the-middle and can see all data that passes by.
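    The setup above can be sketched as a cloudflared config. Everything here is an assumption for illustration (the tunnel ID, file paths, hostname, and Immich’s port), not taken from the thread:

```
# ~/.cloudflared/config.yml (illustrative values only)
tunnel: <your-tunnel-id>
credentials-file: /home/user/.cloudflared/<your-tunnel-id>.json

ingress:
  # Only Immich is exposed through the tunnel; TLS terminates at Cloudflare.
  - hostname: immich.example.com
    service: http://localhost:2283
  # Everything else gets a 404.
  - service: http_status:404
```

    With this in place, something like `cloudflared tunnel route dns <your-tunnel-id> immich.example.com` creates the DNS record, which is why the domain has to use Cloudflare’s name servers.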






  • I’m all for it as long as you keep using your brain. A coworker of mine set something up on AWS that wasn’t working. Going through it, I found the error. He said he’d tried it using ChatGPT. He knows how to do it himself, and he knows the actual mistake was a mistake, but he trusted Amazon Q when it said the mistake was correct, even when double-checking.

    Trust, but verify.

    I’ve found it to be a helpful tool in your toolkit, just like being able to write effective search queries is. Copying scripts off the internet and running them blindly is a bad idea. The same holds for LLMs.

    It may seem like it knows what it’s talking about, but it can often talk out of its arse too…

    I’ve personally had good results with 3.5 on the free tier, unless you’re really looking for the latest data.







  • It’s looping back to itself? The Location header is pointing back to itself.

    Is it possible your backend is sending an HTTP 301 redirect back to Caddy, which forwards it to your browser?

    Possibly some old configuration on your backend, left over from the Let’s Encrypt setup beforehand? Can you check the logs from your backend and see what they’re sending back?

    I’m assuming the request might replace the Host header with the IP on your reverse proxy, and that your Nextcloud backend is replying with a redirect to https://nextcloud.domain.com:443

    Edit: I think this is the most incoherent message I wrote to date.

    I think your reverse proxy is forwarding the request to your Nextcloud, but replacing the Host header with the IP you specified as the reverse proxy target. As a result the request arrives at your Nextcloud with the IP as “Host”.

    Your Nextcloud installation is then sending back a 301 redirect to tell the client that they should connect to https://nextcloud.domain.com. This arrives through Caddy at your browser and goes through the same loop until you’ve reached the max redirects.

    Have a look at your Nextcloud backend HTTP logs to see what requests are arriving there and which Host (HTTP header) they’re trying to connect to on that IP.
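    If that diagnosis is right, a minimal Caddyfile sketch of the fix is to explicitly keep the browser’s Host header when proxying to the backend IP (the domain and IP below are placeholders; Caddy v2 syntax):

```
nextcloud.domain.com {
    reverse_proxy 192.168.1.10:80 {
        # Forward the original Host instead of the upstream IP,
        # so Nextcloud doesn't redirect back to the public name.
        header_up Host {host}
    }
}
```

    Note that Caddy v2 already preserves the incoming Host by default, so this mainly matters if something overrode it; Nextcloud’s `trusted_domains` in config.php also has to include the public hostname.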







  • fluckx@lemmy.world to Linux@lemmy.ml · New laptop · 6 months ago

    Tuxedo Computers could be a good fit, I think? It’s like System76, but from Germany. You can pick from a few OSes, including an Ubuntu fork they made (TUXEDO OS). You can tweak the laptop yourself (different CPUs, disk sizes, …) to fit your use case.

    https://www.tuxedocomputers.com

    Personally I’ve never bought from them, but a friend of mine has and he’s happy with his purchase.

    Note: I do not work for them, or am affiliated with them in any way.


  • If you created a new account you should have configured a root email address for it. That address should have received an email to log in and set the initial password, IIRC.

    You can get an estimate of what it’s going to cost by going to https://calculator.aws

    Uploading to AWS shouldn’t really cost much, unless you’re sending a lot of API PUT requests. Since they are backups, I’m going to guess the files are large and will be uploaded as multipart uploads, which will invoke multiple API calls per file.

    My suggestion would be to upload to S3 and have it automatically transition to Glacier for you using a lifecycle rule.
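    As a sketch, such a lifecycle rule looks roughly like this in the JSON that `aws s3api put-bucket-lifecycle-configuration` accepts (the bucket name, prefix, and rule ID are made up for illustration):

```
{
  "Rules": [
    {
      "ID": "backups-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [
        { "Days": 1, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

    Applied with something like `aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket --lifecycle-configuration file://lifecycle.json`.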

    Cost Explorer would be your best bet to get an idea of what it’ll cost you at the end of the month, as it can do a prediction. There is (unfortunately) no way to see how many API requests you’ve already made, IIRC.

    Going by the S3 pricing page, PUT requests are $0.005 per 1,000 requests (N. Virginia).

    Going by a docs example

    For this example, assume that you are generating a multipart upload for a 100 GB file. In this case, you would have the following API calls for the entire process. There would be a total of 1002 API calls. 
    

    https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html

    Assuming you’re uploading 10 × 100 GB according to the upload scheme mentioned above, you’d make 10,020 API calls, which would cost you about 10.02 × $0.005 ≈ $0.05.
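    Putting that arithmetic in a tiny script (the per-file call count and PUT price are taken from the docs example and pricing quoted above; storage, retrieval, and transfer costs are separate):

```python
# Rough request-cost estimate for S3 multipart uploads.
# From the docs example above: ~1002 API calls per 100 GB file
# (1 initiate + ~1000 part uploads + 1 complete), and PUT requests
# at $0.005 per 1,000 in N. Virginia.

CALLS_PER_100GB_FILE = 1002
PRICE_PER_1000_PUTS = 0.005  # USD, us-east-1


def upload_request_cost(num_files: int) -> float:
    """Return the PUT-request cost (USD) for uploading num_files 100 GB files."""
    total_calls = num_files * CALLS_PER_100GB_FILE
    return total_calls / 1000 * PRICE_PER_1000_PUTS


print(upload_request_cost(10))  # 10 files -> 10,020 calls, roughly $0.05
```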

    Then there would be the storage cost on Glacier itself, and the one day of storage on S3 before it transitions to Glacier.

    Retrieving the data will also cost you, as will downloading the retrieved data from S3 back to your device. If we’re talking about a lot of small files, you might incur some additional costs for the KMS key you used to encrypt the bucket.

    I typed all this on my phone, and it’s not very practical to research like this. I don’t think I’d be able to give you a 100% accurate answer even if I was on my PC.

    There are some hidden costs, which aren’t hidden if you know they exist.

    Note that (IMO) AWS is mostly aimed at larger organisations, and a lot of things (like VMs) are often cheaper elsewhere. It’s the combination of everything AWS does and can do that makes it worthwhile. Once you have your data uploaded to S3 you should be able to see a decent estimate in Cost Explorer.

    Note that extracting all that data back from S3 to your on-prem (or anywhere else), should you decide to leave AWS, will cost you a lot more than it cost you to put it there.

    Hope this helps!