I’m also going to put forward Tilda, which has been my preferred one for a while due to how minimal the UI is.
We all mess up! I hope that helps - let me know if you see improvements!
I think there was a special process to get Nvidia working in WSL. Let me check… (I’m running natively on Linux, so my experience doing it with WSL is limited.)
https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I’m sure you’ve followed this already, but it looks like you don’t want to install the Linux Nvidia drivers inside WSL, and only want to install the cuda-toolkit metapackage. I’d follow the instructions from that link closely.
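If you want a quick sanity check once that’s done, running these inside WSL should work; as I understand the setup, nvidia-smi comes through the Windows driver and nvcc comes with the toolkit:

nvidia-smi       # should list your GPU
nvcc --version   # should report the CUDA toolkit version you installed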
You may also run into performance issues within WSL due to the virtual machine overhead.
Good luck! I’m definitely willing to spend a few minutes offering advice/double checking some configuration settings if things go awry again. Let me know how things go. :-)
It should be split between VRAM and regular RAM, at least if it’s a GGUF model. Maybe it’s not, and that’s what’s wrong?
Ok, so using my “older” 2070 Super, I was able to get a response from a 70B parameter model in 9-12 minutes. (Llama 3 in this case.)
I’m fairly certain that you’re using your CPU or having another issue. Would you like to try and debug your configuration together?
Unfortunately, I don’t expect it to remain free forever.
No offense intended, but are you sure it’s using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.
On my RTX 3060, I generally get responses in seconds.
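One easy way to check: keep this running in a second terminal while a prompt is generating and watch whether GPU utilization and VRAM usage actually climb (assuming the Nvidia drivers and tools are installed):

watch -n 1 nvidia-smi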
My go-to solution for this is the Android FolderSync app with an SFTP connection.
I mean, sysvinit was just a bunch of root-executed bash scripts. I’m not sure if systemd is really much worse.
Systemd was created to allow parallel initialization, which other init systems lacked. And if you want proof that one processor core is slower than one plus n cores, you don’t need to compare init systems to show that.
Of course!
The Docker client communicates over a UNIX socket. If you mount that socket in a container with a Docker client, it can communicate with the host’s Docker instance.
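A rough sketch of what that looks like, assuming the official docker:cli image (any image with a Docker client inside would do):

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps

That docker ps runs inside the container but lists the containers on the host, because the client is talking to the host’s daemon through the mounted socket.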
It’s entirely optional.
There’s a container web UI called Portainer, but I’ve never used it. It may be what you’re looking for.
I also use a container called Watchtower to automatically update my services. Granted, there’s some risk there, but I wrote a script for backup snapshots in case I need to revert, and Docker makes that easy with image tags.
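For reference, my setup boils down to something like this; containrrr/watchtower is the image name as I remember it, so double-check before copying:

docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower

It needs the socket mounted for the same reason as above: it uses the host’s Docker instance to pull new images and recreate containers.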
There’s another container called Autoheal that will restart containers with failed healthchecks. (Not every container has a built-in healthcheck, but they’re easy to add with a custom Dockerfile or a docker-compose.)
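You can also bolt a healthcheck on at run time with plain docker run flags. Everything below is a placeholder example: myservice and myimage are made-up names, the curl command stands in for whatever “is this alive?” check makes sense, and the autoheal=true label is what Autoheal watches for by default, if I recall correctly:

docker run -d --name myservice \
  --health-cmd="curl -f http://localhost:8080/ || exit 1" \
  --health-interval=30s \
  --health-retries=3 \
  --label autoheal=true \
  myimage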
It’s really not! I migrated rapidly from orchestrating services with Vagrant and virtual machines to Docker just because of how much more efficient it is.
Granted, it’s a different tool to learn and takes time, but I feel like the tradeoff was well worth it in my case.
I also further orchestrate my containers using Ansible, but that’s not entirely necessary for everyone.
You can tinker inside the container in a variety of ways, but make sure to preserve your state outside the container in some way:
docker exec -it containerName /bin/bash
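For the “preserve your state” part, a bind mount or named volume when you start the container is the usual answer. Names and paths here are just placeholders:

docker run -d --name containerName -v /srv/containerName/data:/data someImage

Anything the service writes to /data then lives on the host, so you can destroy and recreate the container without losing it.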
Yes, you can set a variety of resource constraints, including but not limited to processor and memory utilization.
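For example (the numbers are arbitrary and someImage is a placeholder):

docker run -d --cpus="1.5" --memory="512m" someImage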
There’s no reason to “freeze” a container: if your state is in a host or volume mount, destroy the container, migrate your data, and resume it with a run command or docker-compose file. Different terminology and concept, but the same result.
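In practice that loop looks roughly like this, assuming the state lives in a named volume (all names here are placeholders):

docker stop containerName
docker rm containerName
# copy or move the volume/host directory here if you're migrating to another machine
docker run -d --name containerName -v containerData:/data someImage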
It may be worth it if you want to free up overhead used by virtual machines on your host, store your state more centrally, and/or represent your infrastructure as a docker-compose file or set of docker-compose files.
I bought a used 2018 model over a new current model because the current one lacks physical function keys.
Also, Dell, bring back Fn + Left for Home and Fn + Right for End!
Who looked at a great keyboard layout and decided, “I know! I’ll make this Developer Edition hardware more difficult to develop on!”
I’m using https://www.kavitareader.com/ with Moon+ Reader. Kavita supports OPDS feeds, which is perfect.
I’m exactly with you on everything you said.
It depends on the model you run. Mistral, Gemma, or Phi are great for a majority of devices, even with CPU or integrated graphics inference.