At the beginning of 2025, DeepSeek hit the headlines. We were treated to a chatbot of Chinese origin, with performance close to ChatGPT or Claude, the highlight being that developing this generative AI had reportedly required far fewer resources than its American equivalents. Indeed, ChatGPT, Claude, and generative image AIs such as Midjourney require training their models on very expensive computer chips. DeepSeek claims to obtain a comparable result with far more modest hardware.
In reality, it turned out that ChatGPT, being multimodal (able to handle text, images, videos, and more), was much more complete. The fact remains that if what you mainly want is answers to your questions, DeepSeek often does the trick.
Why install DeepSeek locally?
One of the strong points of DeepSeek R1, besides the fact that this chatbot is free, is that it can be used locally, just like the French AI Mistral.
Having DeepSeek on your own computer has two main advantages:
- You can access DeepSeek even in a place without an internet connection, since all processing is carried out on your Mac or PC.
- Your data is never sent back to DeepSeek's servers.
Required conditions
If you plan to use DeepSeek locally, be aware that all the data this chatbot uses for its reasoning will be downloaded to your computer. You should therefore plan for between 4 GB and 40 GB of disk space, depending on whether you download a compressed model or not.
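Before downloading, you can check how much disk space is free. A minimal sketch, assuming a Unix-like system (macOS or Linux) where `df` and `awk` are available; the 40 GB threshold simply mirrors the upper estimate above:

```shell
# Report free space (in GB) on the disk holding the current directory.
# df -k prints sizes in 1 KB blocks; column 4 is the available space.
free_gb=$(df -k . | awk 'NR==2 {printf "%d", $4/1024/1024}')
echo "Free space: ${free_gb} GB"
if [ "$free_gb" -lt 40 ]; then
  echo "Warning: less than 40 GB free; consider a smaller (compressed) model."
fi
```

On Windows, the equivalent information is shown in File Explorer under the drive's properties.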
You also need a powerful PC, ideally with an efficient graphics card such as those from Nvidia.
You will also need a “framework” on your PC or Mac: an execution environment capable of hosting DeepSeek. Compatible options include llama.cpp, Ollama, LM Studio, and Text Generation WebUI. For this tutorial, we chose Ollama.
Downloading Ollama
Ollama is an open-source tool designed to run LLMs (large language models) easily. What makes it interesting is that it does not require advanced technical skills. In addition, as we will see, Ollama simplifies downloading, installing, and using DeepSeek, as well as alternative AI models such as Mistral.
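Beyond the interactive terminal session shown later in this tutorial, Ollama also serves a local HTTP API (on port 11434 by default) that scripts can query. A sketch of such a request, written as a dry run: the leading `echo` only prints the `curl` command instead of sending it, so remove it to actually query the model once Ollama is running and the model is downloaded:

```shell
# Build a request to Ollama's local /api/generate endpoint (dry run).
MODEL="deepseek-llm"                    # assumes this model has been downloaded
PROMPT="Explain large language models in one sentence."
echo curl -s http://localhost:11434/api/generate \
  -d "{\"model\":\"$MODEL\",\"prompt\":\"$PROMPT\",\"stream\":false}"
```

With `"stream":false`, the API returns the full answer in a single JSON response rather than token by token.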
Go to the site: https://ollama.com
Click Download, then choose your system: macOS, Linux, or Windows.
Once the Ollama installer has downloaded, click on the corresponding icon to carry out the actual installation of this framework. It should take about five minutes.
Once the installation is complete, a notification appears at the bottom of the screen: “Ollama is running. Click here”. Click on “here”.
Choosing the DeepSeek model in Ollama
Ollama is an application that runs from the computer's terminal. The terminal is launched as soon as you click on “here”.
Ollama invites you to launch your first model and suggests Meta's Llama 3.2 by default. However, what we want is to launch DeepSeek. Here is the exact command you need to type:
ollama run deepseek-llm
You will then see several cryptic messages scroll by before your eyes as Ollama downloads DeepSeek to your Mac or PC. If everything goes well, the message “success” appears.
And that’s it. You have DeepSeek on your computer. You just have to ask it questions from the command-line interface. Admittedly, it is not the most glamorous of interfaces, but at least you have an intelligent chatbot at your disposal whatever the context.
Use Deepseek locally
To use DeepSeek in a later session, launch Ollama and then open the command-line interface as follows:
- Launch the terminal application on Mac
- Launch CMD (the Command Prompt) on PC. In this environment, you may need to move to the appropriate directory: type cd .. repeatedly until you reach the root of the drive, then cd Windows and finally cd System32.
Then type ollama run deepseek-llm and voilà! You have access to DeepSeek locally. Be aware that if your hardware is underpowered, DeepSeek will be slow to answer your questions.
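If you prefer asking one-off questions from a script rather than typing into the interactive session, you can wrap the command in a small helper. This wrapper is hypothetical (not part of Ollama) and assumes the deepseek-llm model has been downloaded as described above:

```shell
# Hypothetical convenience wrapper: ask DeepSeek a single question and exit.
# Usage: ask your question here
ask() {
  # "$*" joins all the arguments into one prompt string.
  ollama run deepseek-llm "$*"
}
```

For example, `ask What is the speed of light` sends that question as a one-shot prompt instead of opening the interactive interface.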
If at some point you want to uninstall DeepSeek locally, type:
ollama rm deepseek-llm
Then uninstall Ollama from the system.
Use Mistral locally
You can also, once Ollama has been installed, load another model such as that of the French company Mistral. The commands to use are as follows:
To download Mistral:
ollama pull mistral
To use it:
ollama run mistral
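The two commands above can be combined into one short script. A sketch that also guards against Ollama being missing from your PATH, so the script fails with a clear message instead of a cryptic “command not found”:

```shell
# Download and then run Mistral, but only if the ollama command is available.
if command -v ollama >/dev/null 2>&1; then
  ollama pull mistral
  ollama run mistral
else
  echo "Ollama is not installed or not on your PATH (see https://ollama.com)." >&2
fi
```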
And that’s it! You have a chatbot that you can use at will, without needing an internet connection.