Deploying DeepSeek Locally

DeepSeek has released DeepSeek-R1, an open-source large language model whose performance is benchmarked against OpenAI o1. Ollama already supports DeepSeek, so this article deploys the DeepSeek model locally with Ollama, then builds a WebUI with Gradio to provide a chat interface for the model.

Installing Ollama

curl -fsSL https://ollama.com/install.sh | sh

The install script starts the ollama service automatically. To manage the service manually, run the following commands:

systemctl start   ollama.service # start the service
systemctl stop    ollama.service # stop the service
systemctl restart ollama.service # restart the service
systemctl enable  ollama.service # start at boot

To verify the service is up, run curl http://127.0.0.1:11434 — it should return "Ollama is running".

Ollama Usage

ollama help

Usage:
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  stop        Stop a running model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Download and Run a Model

ollama run deepseek-r1

>>> 你是谁
Hello! I am DeepSeek-R1, an AI assistant developed by the Chinese company DeepSeek. If you have any questions, I will do my best to help.

>>> /bye

ollama list

NAME               ID              SIZE      MODIFIED
deepseek-r1:latest 0a8c26691023    4.7 GB    1 day ago

Model Storage Location

/usr/share/ollama/.ollama/
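If you want models stored elsewhere, Ollama reads the OLLAMA_MODELS environment variable. A sketch of the line to add to the systemd unit (the path /data/ollama/models is just an example):

```
Environment="OLLAMA_MODELS=/data/ollama/models"
```

Reload systemd and restart ollama for the change to take effect.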

Other Models Available via Ollama

  • deepseek-r1
  • qwen2.5
  • qwen2.5-coder
  • llama3.2
  • llama3.2-vision
  • gemma2

Python Interface

Once running, the ollama service listens on local port 11434, which serves the HTTP API.

To allow access from other devices, add an environment variable to the service unit, then restart the service.

/etc/systemd/system/ollama.service

Environment="OLLAMA_HOST=0.0.0.0:11434"

After editing the unit file, apply the change with:

systemctl daemon-reload
systemctl restart ollama.service

Install the ollama library

pip install ollama

Python test code

import ollama

def chat_ollama(question, model='deepseek-r1'):
    text = ''
    stream = True  # set False to wait for the full response instead
    ollama_host = 'http://127.0.0.1:11434'
    client = ollama.Client(host=ollama_host)
    
    response = client.chat(model=model, stream=stream, messages=[
        {'role': 'user', 'content': question},
    ])
    
    if stream:
        # streaming: response is an iterator of partial chunks
        for chunk in response:
            content = chunk['message']['content']
            text += content
            print(content, end='', flush=True)
    else:
        # non-streaming: response is a single object with the full answer
        content = response['message']['content']
        text += content
        print(content)
    
    print('\n')
    return text

if __name__ == '__main__':
    chat_ollama('你是谁')
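client.chat accepts a list of role/content messages, so multi-turn conversations are expressed by prepending earlier turns before the new question. A sketch of a hypothetical helper (build_messages is not part of the ollama library):

```python
def build_messages(history, question):
    """Build an Ollama chat message list from prior (user, assistant) turns."""
    messages = []
    for user_msg, assistant_msg in history:
        messages.append({'role': 'user', 'content': user_msg})
        messages.append({'role': 'assistant', 'content': assistant_msg})
    # the new question goes last
    messages.append({'role': 'user', 'content': question})
    return messages

print(build_messages([('你好', '您好!')], '你是谁'))
```

The resulting list can be passed directly as the messages argument to client.chat.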

Output

Hello! I am DeepSeek-R1, an AI assistant developed by the Chinese company DeepSeek. If you have any questions, I will do my best to help.
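DeepSeek-R1 wraps its chain-of-thought in <think>...</think> tags before the final answer. If you only want the final answer when calling the API, a minimal sketch to strip the reasoning block:

```python
import re

def strip_think(text):
    # remove the <think>...</think> reasoning block DeepSeek-R1 emits
    answer = re.sub(r'<think>.*?</think>', '', text, flags=re.DOTALL)
    return answer.strip()

raw = '<think>\nThe user asks who I am.\n</think>\n\n您好!我是DeepSeek-R1。'
print(strip_think(raw))  # → 您好!我是DeepSeek-R1。
```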

Building a WebUI with Gradio

Gradio is an open-source Python library for quickly building interactive WebUIs for large language models, without writing web languages such as HTML, CSS, or JavaScript.

Installation and Testing

Install the gradio library

pip install gradio

Building on the previous example, add a UI created with Gradio:

import ollama
import gradio as gr

def chat_ollama(question, model='deepseek-r1'):
    text = ''
    stream = True
    ollama_host = 'http://127.0.0.1:11434'
    client = ollama.Client(host=ollama_host)
    
    response = client.chat(model=model, stream=stream, messages=[
        {'role': 'user', 'content': question},
    ])
    
    if stream:
        for chunk in response:
            content = chunk['message']['content']
            # replace the <think> tags, otherwise Gradio's Markdown
            # renderer treats them as HTML and hides the reasoning
            content = content.replace('<think>', 'think')
            content = content.replace('</think>', 'think')
            text += content
            print(content, end='', flush=True)
            yield text  # yield the full text so far, not just the new chunk
    else:
        content = response['message']['content']
        text += content
        print(content)
        yield text
    
    print('\n')

def chat_response(message, history):
    resp = chat_ollama(message)
    for r in resp:
        yield r

def webui():
    demo = gr.ChatInterface(fn=chat_response, type='messages', examples=['你好', '你是谁'])
    demo.launch(server_name='0.0.0.0')  # listen on all interfaces

if __name__ == '__main__':
    webui()
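Note that chat_ollama yields the accumulated text rather than each new chunk: gr.ChatInterface replaces the displayed message with every yielded value, so each yield must contain the whole response so far. The pattern in isolation:

```python
def accumulate(chunks):
    # Each yielded value is the full text so far, not just the delta,
    # because gr.ChatInterface redraws the message on every yield.
    text = ''
    for chunk in chunks:
        text += chunk
        yield text

print(list(accumulate(['Deep', 'Seek', '-R1'])))  # → ['Deep', 'DeepSeek', 'DeepSeek-R1']
```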

Access URL

http://127.0.0.1:7860

Gradio listens on port 7860 by default; since launch was given server_name='0.0.0.0', the UI is also reachable from other machines via the host's IP address.

The running result is shown in the figure below.

deepseek-gradio