Microsoft launches local LLM tool called Foundry Local
Today Microsoft announced something many of us have been hoping for: a fully local, on-premises LLM runtime called Foundry Local. If you work in an environment where cloud AI usage is limited or not allowed, this is a very welcome development.
Foundry Local is effectively the "local mode" of Microsoft's AI Foundry platform. You download a model and its supporting runtime to your own machine (Windows or Linux) and run everything locally, much as you would with Ollama or GPT4All. No cloud calls, no external dependencies, just a self-contained LLM system.
Getting started is straightforward. Microsoft provides an installation and configuration guide here:
https://learn.microsoft.com/en-us/azure/ai-foundry/foundry-local/get-started
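Based on my first read of that guide, setup boils down to a package install plus a couple of CLI commands. A sketch of the Windows flow below; the package ID and the model alias are my reading of the docs and may differ on your machine, so treat them as placeholders and check the guide:

```shell
# Install the Foundry Local CLI (Windows; the guide also covers other platforms).
winget install Microsoft.FoundryLocal

# Browse the local model catalog to see what's available for your hardware.
foundry model list

# Download (on first use) and start chatting with a model, fully on-device.
# "phi-3.5-mini" is an example alias, not a guaranteed name.
foundry model run phi-3.5-mini
```

These are one-time setup steps that depend on your environment, so I'd run them interactively rather than scripting them straight away.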
A few early notes:
- You can run models fully offline, which is excellent for secure environments.
- The tooling integrates nicely with .NET and Python.
- Models can be swapped and updated without affecting your applications.
- Performance depends heavily on your hardware (as expected).
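To illustrate the integration and model-swapping points above: the service exposes an OpenAI-compatible REST endpoint, so application code only needs the endpoint URL and a model name, and swapping models means changing one string. A minimal Python sketch using only the standard library; the port (5273) and the model alias are assumptions on my part, not values from the docs:

```python
import json
import urllib.request

# Assumed local endpoint -- check `foundry service status` for the real port.
LOCAL_ENDPOINT = "http://localhost:5273/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for the local service."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(model: str, prompt: str) -> str:
    """Send a prompt to the locally running model and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires `foundry model run <alias>` to have started the service first.
    # The alias here is illustrative only.
    print(ask("phi-3.5-mini", "Summarize Foundry Local in one sentence."))
```

Because the payload shape is the standard chat-completions format, swapping the model (or even pointing the same code at a cloud endpoint later) shouldn't require touching the application logic.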
If you've been experimenting with local AI workflows or want more control over your infrastructure, Foundry Local feels like an option many organizations will evaluate. I'm planning to test how well it fits into CI systems, offline development, and local indexing scenarios.
More thoughts will follow once I have run it with a few of my own datasets.