# AI Large Model Integration Feature Installation
This page introduces the large model integration feature built into Liii STEM. This feature is exclusive to Liii STEM.
## Installation
### Supported Models
The currently supported models are:
- DeepSeek V3 (SiliconFlow, DeepSeek API)
- DeepSeek R1 (SiliconFlow, DeepSeek API)
- Qwen-2.5 72B (SiliconFlow)
- Qwen-2.5 7B (SiliconFlow)
### Pandoc: For macOS and Linux Only
Additionally, macOS and Linux users need to install pandoc before use.
```zsh
# macOS
brew install pandoc
```
Or
```bash
# Ubuntu
sudo apt-get install pandoc
```
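To confirm that pandoc is installed and visible on your PATH, you can run:

```bash
# Print the pandoc version; an error here means the install did not succeed
pandoc --version
```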
After installing pandoc, you need to update the LLM plugin in Liii STEM, as shown below:
Using Liii STEM's large model plugin also requires you to enter an API Key manually. We currently support the SiliconFlow and DeepSeek APIs; tutorials for obtaining the keys are in the sections Getting DeepSeek API and Getting SiliconFlow API below. The prices for each API are listed in the table below; users must purchase access themselves.
| Model Name | Context Length | Max Output Length | Input Price (per 1M Tokens) | Output Price (per 1M Tokens) | Key Capabilities & Features | Remarks |
|---|---|---|---|---|---|---|
| DeepSeek-V3 | 64K | 8K | ¥0.1 (cache hit) | ¥2 | High-performance general chat, code generation, logical reasoning, multilingual processing; cost-effective. | Input price is ¥1 on cache miss. |
| DeepSeek-R1 | 64K | 8K | ¥1 (cache hit) | ¥16 | Strong at math, code, and natural-language reasoning; supports model distillation; MIT license. | Input price is ¥4 on cache miss. |
| Qwen-2.5 72B | 1M | 8K | ¥3.6 (¥0.0036/K tokens) | ¥4.13 | 72B parameters; supports long-text processing; free for commercial use below 100M MAU. | Provided by SiliconFlow. Various quantized versions available (e.g., AWQ, GPTQ). |
| Qwen-2.5 7B | 1M | 8K | Free | Free | 7B parameters; suited to lightweight tasks; supports multi-GPU inference and quantized deployment. | Provided by SiliconFlow. GGUF, AWQ, GPTQ, etc. available; suitable for resource-limited environments. |
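To make the per-million-token pricing concrete: a single DeepSeek-V3 request that reads 100K input tokens (all cache hits) and generates 10K output tokens costs about 0.1 × ¥0.1 + 0.01 × ¥2 = ¥0.03.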
## Getting SiliconFlow API
Using Liii STEM's large model plugin requires you to manually enter an API Key. We currently support SiliconFlow and DeepSeek APIs. Below is the tutorial for obtaining the SiliconFlow API.
### Step 1: Apply for an account on the SiliconFlow official website
SiliconFlow official website: https://cloud.siliconflow.cn/i/h8qNv0VJ
Before use, you need to apply for an account on the SiliconFlow official website and obtain the corresponding API Key. Click to copy the key, as shown below:
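Optionally, you can verify the key from the command line before configuring it in Liii STEM. The endpoint and model name below are assumptions based on SiliconFlow's OpenAI-compatible API (not something Liii STEM itself requires); adjust them if SiliconFlow's documentation differs:

```bash
# Minimal sketch: one chat request through SiliconFlow.
# Replace <Your API Key> with the key you just copied.
curl https://api.siliconflow.cn/v1/chat/completions \
  -H "Authorization: Bearer <Your API Key>" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-ai/DeepSeek-V3",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```

A JSON response containing a chat reply confirms the key is valid.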
### Step 2: Open Liii STEM and configure the key
In Liii STEM, go to `Help -> Plugins -> LLM`, as shown below:
Then double-click to open the `llm_zh.tm` document.
Paste your copied API Key into the position shown below, then click save.
After saving, click `Literate -> Build buffer` in the menu. You can then enjoy our large model plugin!
## Getting DeepSeek API
Using Liii STEM's large model plugin requires you to manually enter an API Key. We currently support SiliconFlow and DeepSeek APIs. Below is the tutorial for obtaining the DeepSeek API.
### Step 1: Apply for an account on the DeepSeek official website
Before using DeepSeek V3 and R1, you need to apply for an account on the DeepSeek official website and obtain the corresponding API Key, as shown below.
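As with SiliconFlow, you can optionally test the key from the command line first. The endpoint and model name below are a sketch based on DeepSeek's OpenAI-compatible API; check DeepSeek's documentation if they differ:

```bash
# Minimal sketch: one chat request to the DeepSeek API.
# "deepseek-chat" targets DeepSeek V3; "deepseek-reasoner" targets R1.
curl https://api.deepseek.com/chat/completions \
  -H "Authorization: Bearer <Your API Key>" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```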
### Step 2: Open Liii STEM and configure the key
In Liii STEM, go to `Help -> Plugins -> LLM`, as shown below:
Then open the `llm_zh.tm` document.
Paste your copied API Key into the position shown below, then click save.
After saving, click `Literate -> Build buffer` in the menu. You can then enjoy our large model plugin!
## Proxy Configuration
In some network environments, it might be necessary to access the model API service through a proxy server. Liii STEM's LLM plugin supports configuring independent proxies for different API providers.
### Proxy Configuration Method
Proxy configuration can be done by opening `Help -> Plugins -> LLM`, viewing the detailed help documentation, and configuring directly in the document.
Alternatively, it can be done in the `$HOME/.liii_llm_key.json` file. Each API provider can have its own proxy settings.
### Basic Format
Proxy configuration uses the following format:
```json
"<Provider Domain>": {
  "api-key": "<Your API Key>",
  "proxy": {
    "http": "<HTTP Proxy Address>",
    "https": "<HTTPS Proxy Address>"
  }
}
```
### OpenAI Proxy Configuration Example
Below is an example of configuring a proxy for OpenAI:
```json
"openai.com": {
  "api-key": "Fill in your secret key here",
  "proxy": {
    "http": "http://127.0.0.1:7890",
    "https": "http://127.0.0.1:7890"
  }
}
```
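Note that both snippets above are fragments of a single top-level JSON object. As a sketch, a complete `$HOME/.liii_llm_key.json` holding keys for two providers might look like the following; the `deepseek.com` key name is an assumption patterned on the `openai.com` example, so check the plugin's help document for the exact provider domains:

```json
{
  "openai.com": {
    "api-key": "Fill in your OpenAI secret key here",
    "proxy": {
      "http": "http://127.0.0.1:7890",
      "https": "http://127.0.0.1:7890"
    }
  },
  "deepseek.com": {
    "api-key": "Fill in your DeepSeek secret key here"
  }
}
```

The `deepseek.com` entry omits the `"proxy"` field entirely, which is one of the two no-proxy options described in the next section.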
### Not Using a Proxy
If you do not need to use a proxy, there are two ways to handle it:
- Set `"proxy"` to an empty object:

```json
"proxy": {}
```

- Remove the `"proxy"` field entirely.
### Taking Effect
If you modify the configuration in the help document, you need to:
- Click the menu item `Literate -> Build buffer`.
- Restart Liii STEM for the configuration to take effect.
If you directly modify the `$HOME/.liii_llm_key.json` file:
- Save the file
- Restart Liii STEM for the configuration to take effect.
### Common Proxy Configuration Issues and Solutions
If the proxy does not work correctly after configuration, you might encounter the following common issues:
#### 1. Connection Timeout
Symptom: Requests to the API hang for a long time after the proxy is configured.
Solution:
- Check whether the proxy tool is running correctly.
- Try switching to a different proxy server or line.
- Confirm that the proxy address and port are correct (see the quick check after this list).
- Temporarily disable the firewall or security software to check whether it is blocking the connection.
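For the quick check mentioned above, you can send a request through the proxy yourself. The address below matches the earlier example; substitute your own:

```bash
# Fetch only the response headers through the proxy; a timeout here
# points to the proxy itself rather than Liii STEM
curl -x http://127.0.0.1:7890 -I https://api.deepseek.com
```

If this times out while a direct `curl -I https://api.deepseek.com` succeeds, the proxy is the problem.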
#### 2. Error During Use
Symptom: An error occurs when using the large model after configuring the proxy.
Solution:
- If your configured proxy listens using the HTTPS protocol, e.g., `https://127.0.0.1:7890`, ensure the proxy server actually supports HTTPS requests.
- Most local proxy tools (such as Clash, V2RayN, and Shadowsocks clients) expect HTTP or SOCKS connections on their listening ports, not HTTPS connections.
- Check the JSON format of the configuration file for errors, including commas, quotes, etc.; a quick validation command follows this list.
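For the JSON check in the last item, Python's built-in `json.tool` module gives a quick validation:

```bash
# Pretty-prints the file if it is valid JSON; otherwise reports the
# position of the first syntax error
python3 -m json.tool "$HOME/.liii_llm_key.json"
```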
Configuring a proxy is an important way to work around network access restrictions, especially for services like OpenAI. If the above methods do not solve your problem, you can contact us for more detailed help.