Reprinted from Machine Heart.
To do a good job, one must first sharpen one's tools. The right tools can greatly boost our productivity and learning efficiency. Today we introduce a set of AI tools built around Obsidian: the local LLM deployment tool Ollama and two Obsidian plugins (BMO Chatbot and Ollama).
This toolset can help us analyze and summarize notes, generate titles, and write code; it can also spark creativity by continuing our writing and offering suggestions.
Introduction to Obsidian
Obsidian is currently one of the most popular note-taking tools, but its capabilities go far beyond that. You can use it not only as a notebook but also as your personal knowledge base and document productivity tool! Many people even refer to it as their "second brain."
Its advantages include: support for Markdown, a rich plugin ecosystem, customizable themes, Wiki-style document links, a built-in graph view, free core features, and full support for local storage.
These advantages have earned Obsidian a large user base worldwide. You can find many people sharing how they use it to study, write papers, draft novels, plan schedules, and manage nearly everything in life, and a sizable niche market has grown around the tool: templates, courses, and pre-configured vaults are all sold as products. That market is itself a testament to Obsidian's capabilities.
(Caption) Obsidian is a very popular topic on Bilibili and YouTube.
In short, if you are looking for a useful learning and productivity tool, or if you want to build a second brain for yourself, Obsidian is definitely worth a try!
Why Use LLMs in Obsidian?
There is no doubt that we are now in the era of large models. They can not only help us improve efficiency and productivity but also assist us in innovating and exploring more possibilities.
Many clever uses of LLM have already been developed, and here we briefly showcase some applications you can implement in Obsidian to help you discover and explore more interesting or useful uses.
The first example: while writing the previous paragraph I forgot the idiom "见微知著"; instead of reaching for a search engine or asking someone, I simply asked the chatbot in the sidebar and quickly got the answer I wanted.
This uses the BMO Chatbot plugin, which integrates an LLM into Obsidian as a chatbot. The plugin can also chat about the current document. As shown below, we asked the LLM to summarize an English report in Chinese and suggest some titles:
Of course, continuing a story is also a piece of cake. Below, we had the LLM continue Fredric Brown's famous work, often called the shortest story in the world:
The last person on Earth sat alone in a room. Suddenly, there was a knock on the door...
Here we used another plugin, Ollama, with a pre-configured command whose prompt was: "Based on the above content, continue the story. The continuation should be 200 words, maintaining the character style and leaving suspense for what follows."
You may also notice that this plugin runs a bit slowly. That is because it uses a locally installed LLM (the 8B Llama 3.1 model here), whose speed is limited by the local hardware.
Alright, that's it for the examples. Now let's see how to install and use these plugins and LLM.
Installing a Local LLM
For most of us, the LLMs that can run on a local computer cannot match the online services of large companies such as OpenAI. The biggest advantage of a local LLM, however, is data privacy and security: all computation happens on your own machine, so you never have to worry about your data being sent to a service provider.
Of course, if you don't care about the privacy of your notes, then using online services can also accomplish your tasks well, and you can completely skip this step.
To make local LLMs easy to use, we will rely on a tool called Ollama. Ollama is a very user-friendly tool for deploying LLMs locally, and anyone can use it. Just download and install it from: https://github.com/ollama/ollama/releases
Then browse Ollama's model library at https://ollama.com/library, choose a model that fits your needs and hardware, and run the corresponding command. For example, to install the instruction-tuned, Q8_0-quantized 8B-parameter version of Llama 3.1, run: ollama run llama3.1:8b-instruct-q8_0
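Once a model is running, Ollama also serves a local REST API (by default on http://localhost:11434), which is how editor plugins like the ones below typically talk to it. As a rough sketch, assuming the documented /api/generate endpoint, a request payload could be built like this in Python (the prompt text is just an example):

```python
import json

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    With stream=False the server returns a single JSON response
    instead of a stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": stream}

payload = build_generate_request(
    "llama3.1:8b-instruct-q8_0",
    "Summarize this note in one sentence.",
)
print(json.dumps(payload))
```

To actually send it, you would POST this JSON to `OLLAMA_URL` (e.g. with `urllib.request`) while the Ollama server is running; the plugins do the equivalent for you behind the scenes.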
You can also install several models of different sizes, or models fine-tuned for specific tasks (such as coding), letting you trade off speed against generation quality as needed.
Installing and Configuring the BMO Chatbot and Ollama Plugins
Both plugins are available in Obsidian's community plugin marketplace: search for them, then install and enable them.
Configuring the BMO Chatbot Plugin
In the General settings, you can select a locally installed model or configure an online one. As shown in the image below, I have Llama 3.1 and Llama 3 installed locally, and have also configured the OpenRouter API (which provides access to many models) and GLM-4-Flash, an online model from Zhipu AI. You can then set the maximum token count and the temperature (between 0 and 1; higher values generate more creative text), and choose whether to index the current note.
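To make the two knobs above concrete, here is a minimal sketch of how such settings could be mapped onto the sampling options Ollama accepts (`temperature`, and `num_predict` for the maximum number of generated tokens); the helper function itself is hypothetical, not part of either plugin:

```python
def to_ollama_options(max_tokens: int, temperature: float) -> dict:
    """Translate chat settings into Ollama sampling options.

    Temperature is clamped to the 0-1 range described above;
    num_predict is Ollama's name for the generation length cap.
    """
    return {
        "temperature": min(max(temperature, 0.0), 1.0),
        "num_predict": max_tokens,
    }

opts = to_ollama_options(max_tokens=512, temperature=1.3)
print(opts)  # an out-of-range temperature is clamped to 1.0
```

These options would travel in the `options` field of a request to the local Ollama API.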
In the Prompts section, you can set system prompts through notes.
In the API Connections area below, you can configure online models.
Once configured, you can use these LLMs through the right sidebar in Obsidian or by using the Ctrl+P/Cmd+P shortcut.
Besides the chatbot itself, this plugin can also rename the current document with the LLM and generate content from selected text used as a prompt.
Configuring the Ollama Plugin
The Ollama plugin only supports the local models installed through Ollama, but its advantage is that it can pre-configure commonly used prompt commands, which can then be easily called using Ctrl+P/Cmd+P.
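One way to picture such a pre-configured command is as a named prompt template that gets filled with the current note text or selection. The sketch below is purely illustrative; the `PromptCommand` class and its fields are hypothetical and not the plugin's actual data model:

```python
from dataclasses import dataclass

@dataclass
class PromptCommand:
    """A reusable prompt command, in the spirit of the plugin's
    pre-configured commands (hypothetical sketch)."""
    name: str
    template: str  # "{selection}" marks where the note text is inserted

    def render(self, selection: str) -> str:
        """Fill the template with the selected text to form the full prompt."""
        return self.template.replace("{selection}", selection)

continue_story = PromptCommand(
    name="Continue story",
    template=(
        "{selection}\n\n"
        "Based on the above content, continue the story. The continuation "
        "should be 200 words, maintaining the character style and leaving "
        "suspense for what follows."
    ),
)

prompt = continue_story.render(
    "The last person on Earth sat alone in a room. "
    "Suddenly, there was a knock on the door..."
)
```

Invoking the command via Ctrl+P/Cmd+P then amounts to rendering the template and sending the result to the local model.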
Below is an example of code generation:
Conclusion
Combining Obsidian with LLM tools can bring great convenience to our learning and productivity work. As a powerful note-taking tool, Obsidian not only supports a rich plugin ecosystem but also enhances our efficiency and creativity through local LLM deployment.
Installing and using the BMO Chatbot and Ollama plugins allows us to easily integrate LLM into Obsidian, enabling note analysis, summarization, title generation, content continuation, and more. This not only saves us time and effort but also sparks our creativity.
Of course, while using these tools, we should also pay attention to data privacy and security issues. Local LLM deployment ensures that our data does not leave our personal devices, thereby reducing the risk of data leakage.
In summary, Obsidian + LLM opens a new door, helping us make better use of technology, and better ourselves, in an age of information overload.