Set up AI locally with Ollama


EDIT: With SiYuan >= 3.0.0 the setup is much easier. Please see my latest comment.

It is possible to serve an LLM locally behind an OpenAI-compatible API, which makes local inference usable from SiYuan:

One-time setup

  1. Install Ollama
  2. Install LiteLLM
    • pipx install 'litellm[proxy]'
  3. Download the model: ollama run orca2 (the steps are consolidated in the sketch below)
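
A minimal sketch of the one-time setup on Linux, assuming pipx is already available (on macOS/Windows you can use the installer from https://ollama.com instead of the install script):

    # One-time setup (sketch)
    # Install Ollama via the official install script (Linux)
    curl -fsSL https://ollama.com/install.sh | sh
    # Install the LiteLLM proxy
    pipx install 'litellm[proxy]'
    # Download the orca2 model; ollama run also opens an interactive chat, exit it with /bye
    ollama run orca2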

Serve

ollama serve & litellm --model ollama/orca2
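
To check that the proxy is up, you can list the models it exposes through the OpenAI-compatible API. The port below is an assumption: recent LiteLLM versions default to 4000, older ones to 8000, so use whatever litellm prints at startup.

    # Sketch: confirm the proxy answers and see the model name it exposes
    curl http://localhost:4000/v1/models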

The settings in SiYuan are:

I entered a dummy OpenAI API key (any value works).

You can use any model provided by Ollama (or see the LiteLLM docs for even more supported models).
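
As a sanity check outside of SiYuan, you can reproduce the kind of request SiYuan will send with curl. The port and the placeholder key sk-dummy are assumptions; the model name matches the one passed to litellm above:

    # Sketch: an OpenAI-style chat completion against the local proxy
    curl http://localhost:4000/v1/chat/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer sk-dummy" \
      -d '{"model": "ollama/orca2", "messages": [{"role": "user", "content": "Say hello"}]}'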
