514: Running Local LLMs in VS Code
May 11th, 2026
55 mins 46 secs
About this Episode
In this episode, James and Frank dive into running AI coding models locally versus in the cloud: BYOK with OpenRouter, VS Code's chat/agent harness, model runners like Ollama and vLLM, and the practicality of running 27B models on an RTX 3090 with 4-bit quantization. They share hands-on takeaways: how recent inference engineering (MT/MTPLX) pushes local models to usable token rates, when automatic model selection makes sense, the cost and hardware trade-offs, and why local models can liberate your workflow while still needing smarter, unified tooling.
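If you want to follow along at home, here's a minimal sketch of the local setup discussed in the episode: chatting with a locally served model through Ollama's OpenAI-compatible endpoint. The `gemma2:27b` tag is an illustrative assumption, not a model the hosts necessarily used; Ollama's default tags are roughly 4-bit quants, which is what lets a 27B model squeeze into a 3090's 24 GB of VRAM.

```python
# Minimal sketch: talk to a local Ollama server via its
# OpenAI-compatible API. Assumes you've already run:
#   ollama pull gemma2:27b   (model tag is an illustrative choice)
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's local server
    api_key="ollama",  # required by the client library, ignored by Ollama
)

response = client.chat.completions.create(
    model="gemma2:27b",
    messages=[
        {"role": "user", "content": "Explain 4-bit quantization in one paragraph."}
    ],
)
print(response.choices[0].message.content)
```

The same local server can back VS Code's chat: recent Copilot Chat builds let you add Ollama as a BYOK provider under "Manage models", so the editor's agent harness talks to your 3090 instead of the cloud.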
Follow Us
- Frank: Twitter, Blog, GitHub
- James: Twitter, Blog, GitHub
- Merge Conflict: Twitter, Facebook, Website, Chat on Discord
- Music: Amethyst Seer - Citrine by Adventureface
⭐⭐ Review Us ⭐⭐
Machine transcription available at http://mergeconflict.fm
Support Merge Conflict