<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>vLLM on Caminho Solo</title>
    <link>https://www.caminhosolo.com.br/en/tags/vllm/</link>
    <description>Recent content in vLLM on Caminho Solo</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Sun, 29 Mar 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://www.caminhosolo.com.br/en/tags/vllm/index.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>vLLM: How to Serve LLMs in Production with High Throughput</title>
      <link>https://www.caminhosolo.com.br/en/2026/03/vllm-inference-production/</link>
      <pubDate>Sun, 29 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://www.caminhosolo.com.br/en/2026/03/vllm-inference-production/</guid>
      <description>TL;DR: vLLM is an open-source inference engine that delivers 2-4x higher throughput than traditional serving solutions, at 50-80% lower cost than external APIs for high-volume usage. Recommended for products exceeding 100k tokens/month.</description>
    </item>
  </channel>
</rss>