Run a Local LLM with Ollama
You don't strictly need a big GPU: Ollama falls back to CPU inference, though a GPU with enough VRAM is considerably faster. As a rough guide from the Ollama README, plan on about 8 GB of RAM for 7B models, 16 GB for 13B models, and 32 GB for 33B models; 4-bit quantized variants (the default for most models Ollama ships) are what keep these numbers this low.
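Once a model is pulled (for example with `ollama pull llama3`) and the server is running, you can talk to it over Ollama's local REST API instead of the interactive `ollama run` prompt. Below is a minimal sketch using only the Python standard library; it assumes Ollama is serving on its default port 11434 and that `llama3` is the model you pulled (swap in any model name you actually have).

```python
# Minimal sketch: query a local Ollama server over its REST API.
# Assumes `ollama serve` is running on the default port 11434 and that
# the "llama3" model has been pulled (model name is an assumption --
# substitute whichever model you have locally).
import json
import urllib.request

def generate(prompt: str, model: str = "llama3") -> str:
    """Send a non-streaming /api/generate request and return the text."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Why is the sky blue?"))
```

Setting `"stream": False` trades responsiveness for simplicity; by default the API streams newline-delimited JSON chunks, which you'd parse line by line instead.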