Deploy LLM locally and integrate it into Obsidian

Ollama is a very handy tool for deploying LLMs locally, and it is easy for anyone to use: just download and install it.
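On Linux, for example, installation is (at the time of writing) a one-liner using the official install script; macOS and Windows users can instead grab the installer from the ollama website:

curl -fsSL https://ollama.com/install.sh | sh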

Install LLM#

I installed two models: the 8B version of Llama 3 and the 27B version of Gemma 2. To pull and run them, run respectively:

ollama run llama3
ollama run gemma2:27b
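The first run of each command downloads the model, so it can take a while. Afterwards you can check which models are available locally with:

ollama list

Ollama also serves an HTTP API on port 11434 by default; this is what Open WebUI and the Obsidian plugin described below talk to.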

After that, we probably want a more convenient interactive interface. I chose Open WebUI, installed with Docker:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

If you want Open WebUI to use an NVIDIA GPU, run this instead:

docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
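Whichever variant you use, it is worth confirming that the container (named open-webui in the commands above) actually started before heading to the browser:

docker ps --filter name=open-webui
docker logs -f open-webui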

After that, just open http://localhost:3000/ in your browser. The interface is quite similar to ChatGPT:

(Screenshot: the Open WebUI chat interface)

Integration with Obsidian#

First, download and enable the Ollama plugin from Obsidian's community plugin marketplace, then configure the commands and models you want to use. After configuring, remember to restart Obsidian once for the changes to take effect.

(Screenshot: the Ollama plugin settings in Obsidian)
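Under the hood, a command like this essentially wraps a prompt around the current selection and sends it to the local Ollama API. The request below is only an illustration (the prompt text is made up, and the plugin may format its requests differently), but it shows the kind of call that ends up at the default endpoint on port 11434:

curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Summarize the following text in Chinese: <selected text>", "stream": false}'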

Then, in a document where you want to use the LLM, press Ctrl+P (or select some text first and then press Ctrl+P) to bring up the command palette, search for the name of your command, such as the "Chinese summary" command shown in the settings screenshot above, and run it.

(Screenshot: invoking the command in Obsidian)
