Step-by-Step: Running DeepSeek Locally in VSCode for a Powerful, Private AI Copilot
Original article by AI悦创 · February 10, 2025
This step-by-step guide will show you how to install and run DeepSeek locally, configure it with CodeGPT, and start leveraging AI to enhance your software development workflow, all without relying on cloud-based services.
To run DeepSeek locally, we first need to install Ollama, which allows us to run LLMs on our machine, and CodeGPT, the VSCode extension that integrates these models for coding assistance.
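As a sketch, both installs can be done from a terminal on Linux or macOS (the Ollama install script URL is the one documented on ollama.com; the CodeGPT extension ID shown is an assumption — search for "CodeGPT" in the VSCode Extensions view if the command fails):

```shell
# Install Ollama using the official install script (Linux/macOS;
# on Windows, download the installer from ollama.com instead)
curl -fsSL https://ollama.com/install.sh | sh

# Verify the install succeeded
ollama --version

# Install the CodeGPT extension from the command line.
# NOTE: the extension ID below is an assumption — confirm it in the
# VSCode Marketplace if it does not match.
code --install-extension DanielSanMedium.dscodegpt
```
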
Now that you have successfully installed both Ollama and CodeGPT, it’s time to download the models you’ll be using locally.
Chat model: *deepseek-r1:1.5b*, which is optimized for smaller environments and will run smoothly on most computers.
Autocompletion model: *deepseek-coder:1.3b*. This model utilizes Fill-In-The-Middle (FIM) technology, allowing it to make intelligent autocompletion suggestions as you write code. It can predict and suggest the middle part of a function or method, not just the beginning or the end.
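As a rough illustration of what FIM looks like under the hood, Ollama's local HTTP API accepts a `suffix` field alongside the `prompt`, asking the model to generate only the code in between. This is a sketch that assumes the Ollama server is running on its default port (11434) and the model has already been pulled; whether `suffix` is honored depends on the model's template:

```shell
# Ask deepseek-coder:1.3b to fill in the function body between the
# signature (prompt) and the call site (suffix)
curl -s http://localhost:11434/api/generate -d '{
  "model": "deepseek-coder:1.3b",
  "prompt": "def is_even(n):\n",
  "suffix": "\nprint(is_even(4))",
  "stream": false
}'
```

This is the same mechanism CodeGPT uses behind the scenes when it requests completions as you type.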
Navigate to the Local LLMs section in the sidebar.
From the available options, select Ollama as the local LLM provider.
Choose the model deepseek-r1:1.5b.
Click the Download button. The model will begin downloading automatically.
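If you prefer the terminal, the same chat model can be pulled and smoke-tested directly with the Ollama CLI; CodeGPT will pick up any model that `ollama list` reports as available:

```shell
# Pull the chat model manually instead of through the CodeGPT UI
ollama pull deepseek-r1:1.5b

# Confirm it is available locally
ollama list

# Quick smoke test from the terminal
ollama run deepseek-r1:1.5b "Explain what a linked list is in one sentence."
```
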
Once the download is complete, CodeGPT will automatically install the model. After installation, you’re ready to start interacting with the model.
You can now easily query the model about your code. Simply highlight any code within your editor, add extra files to your queries using the # symbol, and leverage powerful command shortcuts such as:
/fix — For fixing errors or suggesting improvements in your code.
/refactor — For cleaning up and improving the structure of your code.
/explain — To get detailed explanations of any piece of code.
This chat model is perfect for assisting with specific questions or receiving advice on your code.
Run the following command to pull the deepseek-coder:1.3b model:
ollama pull deepseek-coder:1.3b
This command will download the autocompletion model to your local machine.
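Once the pull finishes, the model is registered with the local Ollama daemon; a quick way to confirm the download succeeded:

```shell
# The pull succeeded if the model shows up in the local registry
ollama list | grep deepseek-coder
```
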
After the download completes, return to CodeGPT and navigate to the Autocompletion Models section.
Select deepseek-coder:1.3b from the list of available models.
Once selected, you can start coding. As you type, the model will begin providing real-time code suggestions, helping you complete functions, methods, and even entire blocks of code with ease.
Once you’ve set up the models, you can enjoy the full benefits of these powerful tools without relying on external APIs. By running everything locally on your machine, you ensure complete privacy and control over your coding environment. No need to worry about data leaving your computer; everything stays secure and private 👏