This note collects practical engineering notes on working with AI. For conceptual and philosophical notes see AI.
Local AI
Python and HuggingFace
See my note on Python, particularly Python#Machine setup, then browse various Hugging Face models on the Hub and try running them locally with the transformers library.
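A minimal sketch of what "run them locally" looks like with the transformers pipeline API, assuming torch and transformers are installed (pip install torch transformers). The model name gpt2 is just a small example checkpoint, not a recommendation; the first run downloads it.

```python
# Sketch: shortest path to running a Hugging Face model locally.

def pick_device() -> str:
    """'cuda' if torch can see a GPU, else 'cpu' (also the fallback
    when torch itself is not installed)."""
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "cpu"

if __name__ == "__main__":
    from transformers import pipeline
    # Downloads the checkpoint on first use, then runs fully locally.
    gen = pipeline("text-generation", model="gpt2", device=pick_device())
    print(gen("Local AI is", max_new_tokens=20)[0]["generated_text"])
```

The heavy part is guarded under `__main__` so importing the file costs nothing; swap in any text-generation checkpoint that fits your RAM/VRAM.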
Elixir and Bumblebee and LiveBook
Elixir LiveBook, together with Bumblebee, can run model backends locally. You can then easily create "smart cells" that do a bunch of the heavy lifting for you.
ollama
An amazingly simple and effective CLI for working with local LLMs.
ollama run also works as an attach: if the model is already running in a background process, it attaches to that process instead of starting a new one (which would eventually overwhelm the RAM on your machine).
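That background process also exposes a REST API, which is another way to reuse the already-loaded model. A small sketch, assuming a local ollama server on its default port 11434 and a pulled model; "llama3" is just an example name:

```python
# Sketch: querying the ollama background server over its REST API.
# This is the same long-lived process a second `ollama run` attaches to.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

def generate_request(model: str, prompt: str) -> urllib.request.Request:
    # Build (but do not send) a non-streaming /api/generate request.
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Requires a running ollama server with the model already pulled,
    # e.g. `ollama pull llama3` first.
    with urllib.request.urlopen(generate_request("llama3", "Say hi")) as resp:
        print(json.loads(resp.read())["response"])
```

Because requests go to the one resident server, you can hit the same loaded model from scripts and terminals without stacking extra copies in RAM.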
Ollama and Open WebUI
After setting up ollama I ran the Docker quick start for Open WebUI and it "just worked". At first it seemed like it hadn't, because the docker command runs detached; if you follow the container's logs (docker logs) you can see it is just taking time to pull images and start up.
LocalAI
I'm still experimenting with this.