IDE with Local LLMs

In Running LLMs in Intel Laptops we installed and ran an LLM on a laptop and used it as our personal chatbot. Let’s push this further by using the LLM to help us write code, connecting our IDE (Integrated Development Environment) to the model.

In the previous article we installed the Qwen 2.5 7B Instruct model, but for our IDE use case we will be using another variant of the Qwen model called Coder. It is important to understand that a model can have several variants, each better suited for a specific task. For instance: Continue Reading
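To make the IDE connection concrete, here is a minimal sketch of the kind of request an IDE extension sends to a locally served model. It assumes the model is exposed behind an OpenAI-compatible API (as tools like Ollama and LM Studio provide); the model tag `qwen2.5-coder:7b` and the localhost URL in the comment are illustrative, not prescribed by the article.

```python
import json

# Sketch: build the chat-completion payload an IDE plugin would POST
# to a local, OpenAI-compatible endpoint. Nothing is sent here.
payload = {
    "model": "qwen2.5-coder:7b",  # the Coder variant, tuned for code tasks
    "messages": [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    "temperature": 0.2,  # low temperature keeps code suggestions more deterministic
}

# An IDE extension would POST this JSON to the local server, e.g.
# http://localhost:11434/v1/chat/completions when using Ollama.
print(json.dumps(payload, indent=2))
```

The point of the sketch is that switching from the Instruct variant to the Coder variant is, from the IDE's side, just a different value in the `model` field; the surrounding request shape stays the same.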

Running LLMs in Intel Laptops

AI (especially LLMs) is taking over the world right now. The pace of progress has been especially dizzying this year, with every month producing a new breakthrough in the technology (e.g. Opus 4.6, OpenClaw). As with all technologies, it is important to have a grasp of the basics. And what better way to learn the basics than trying to run an LLM on your own computer!

We are all familiar with popular AI tools (LLMs) like ChatGPT and Claude. These services run on what are called frontier models, the most capable models available on the market right now. But these models are usually paid, either per token or through a usage quota. If you want a “free” LLM, you have to run your own model on your own machine. For this, we need to use open source (or, more accurately, open weight) models. Fortunately, we have a wide selection of models: Continue Reading