How to Run Deepseek R1 Locally
Introduction
Deepseek R1 is an open-source large language model (LLM) built for tasks like conversational AI, code generation, and natural language understanding. Running Deepseek R1 locally keeps your data on your own machine, reduces latency, and gives you full control over how the model is used. In this guide, we’ll cover how to set up Deepseek R1 on your local machine using Ollama.
What is Deepseek R1?
Deepseek R1 is an advanced language model trained to process and generate human-like text. It can be used for:
Text generation – writing articles, summaries, and other long-form content.
Code assistance – generating and debugging code.
Natural language understanding – analyzing and interpreting human input.
Question answering – providing context-based responses.
By running Deepseek R1 locally, you eliminate dependency on cloud services and gain full control over your workflow.
Now, let’s explore how to set up Deepseek R1 on your machine using Ollama.
How to set up Deepseek R1 locally using Ollama
Before we begin with the installation process, let’s first understand what Ollama is and why it is the preferred choice for running Deepseek R1 locally.
What is Ollama?
Ollama is a lightweight tool designed to simplify the process of running AI models locally. It offers:
Quick setup – It requires minimal installation steps. You can have your AI model up and running in no time.
Optimized resource usage – It efficiently manages memory, ensuring smooth performance.
Local inference – No internet connection is needed after setup.
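As a quick illustration of that resource management, once Ollama is installed (installation is covered below), you can check which models are currently loaded into memory with its ps subcommand:
ollama ps
The output lists each running model and its size in memory, so you can confirm resources are being used as expected.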
Now that we understand what Ollama is, let’s explore why it’s beneficial to use it to run Deepseek R1.
Why use Ollama?
Running Deepseek R1 with Ollama provides several advantages:
Privacy – Your data stays on your device.
Performance – Faster response times without cloud delays.
Customization – Ability to tweak model behavior for specific tasks.
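As a preview of that customization, once Deepseek R1 is installed (see the steps below), Ollama lets you define a tweaked variant of the model with a Modelfile. The variant name and system prompt below are purely illustrative:
# Modelfile – a custom Deepseek R1 variant (illustrative values)
FROM deepseek-r1
PARAMETER temperature 0.3
SYSTEM "You are a concise assistant that answers programming questions."
Build and run the variant with:
ollama create deepseek-r1-concise -f Modelfile
ollama run deepseek-r1-concise
ollama create registers the custom variant from the Modelfile, and ollama run starts it like any other model.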
With these benefits in mind, let’s move on to the installation process.
How to install Ollama
Follow these steps to install Ollama on your system:
macOS
Open Terminal and run:
brew install ollama
If Homebrew isn’t installed, visit brew.sh and follow the setup instructions.
Windows & Linux
Download Ollama from the official website.
Follow the installation guide for your operating system.
Alternatively, Linux users can install it via Terminal:
curl -fsSL https://ollama.com/install.sh | sh
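Whichever platform you use, you can confirm the installation by checking the version from the terminal:
ollama --version
If the command prints a version number, Ollama is installed correctly.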
With Ollama successfully installed, let’s move on to using Deepseek R1 on your local machine.
Running Deepseek R1 on Ollama
Once Ollama is installed, follow these steps to set up Deepseek R1 locally:
Step 1: Download the Deepseek R1 model
To begin using Deepseek R1, you first need to download the model. Run the following command in your terminal:
ollama pull deepseek-r1
For a smaller version, specify the model size:
ollama pull deepseek-r1:1.5b
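To confirm the download completed, you can list the models stored on your machine:
ollama list
The Deepseek R1 model you pulled (including its size tag, if you specified one) should appear in the output.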
After downloading the model, you’re ready to start using it.
Step 2: Start the model
Now that the model is downloaded, you need the Ollama server running so Deepseek R1 can respond to requests. Start it with the following command (if Ollama is already running in the background, for example via its desktop app, you can skip this step):
ollama serve
Then, run Deepseek R1:
ollama run deepseek-r1
To use a specific version:
ollama run deepseek-r1:1.5b
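Both commands open an interactive chat session directly in the terminal. When you’re finished, exit the session with the built-in command:
/bye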
Step 3: Interact with Deepseek R1
With the model running, you can now interact with it in the terminal. Try entering queries like the following:
ollama run deepseek-r1 "What is a class in C++?."
Experiment with different prompts to get a feel for the model’s strengths and how it can best serve your needs.
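Because ollama serve exposes a local HTTP API (on port 11434 by default), you can also query Deepseek R1 programmatically rather than through the interactive prompt. Here is a minimal sketch using curl against that API:
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "What is a class in C++?",
  "stream": false
}'
Setting "stream" to false returns the full response as a single JSON object, which is convenient for scripting.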
Conclusion
Running Deepseek R1 locally using Ollama offers a powerful and private AI solution. By following this guide, you can install, configure, and interact with Deepseek R1 seamlessly on your local machine. Whether for text generation, coding, or knowledge retrieval, Deepseek R1 provides an efficient AI experience without relying on cloud-based services.
If you want to deepen your understanding of AI models and their applications, check out Codecademy’s Generative AI for Everyone course to expand your knowledge and enhance your AI development skills.
Author
The Codecademy Team, composed of experienced educators and tech experts, is dedicated to making tech skills accessible to all. We empower learners worldwide with expert-reviewed content that develops and enhances the technical skills needed to advance and succeed in their careers.