Baseten deploys models from a single config.yaml file. You point to a model on Hugging Face, choose a GPU, and Baseten builds a TensorRT-optimized container with an OpenAI-compatible API. No Python code, no Dockerfile, no container management. This tutorial deploys Qwen 2.5 3B Instruct, a small but capable LLM, to a production-ready endpoint on an L4 GPU.

Set up your environment

You need uv and the Truss CLI installed, plus a Baseten account with an API key.

Log in to Baseten

Generate an API key from Settings > API keys, then authenticate the Truss CLI:
truss login
Paste your API key when prompted:
💻 Let's add a Baseten remote!
🤫 Quietly paste your API_KEY:
You can skip the interactive prompt by setting BASETEN_API_KEY as an environment variable:
export BASETEN_API_KEY="paste-your-api-key-here"

Create the config

Create a project directory with a config.yaml:
mkdir qwen-2.5-3b && cd qwen-2.5-3b
Create a config.yaml file with the following contents:
config.yaml
model_name: Qwen-2.5-3B
resources:
  accelerator: L4
model_metadata:
  tags:
    - openai-compatible
trt_llm:
  build:
    base_model: decoder
    checkpoint_repository:
      source: HF
      repo: "Qwen/Qwen2.5-3B-Instruct"
    max_seq_len: 8192
    quantization_type: fp8
    tensor_parallel_count: 1
Each section of this config controls a different part of the deployment:
  • resources: Selects an L4 GPU (24 GB VRAM) for inference.
  • model_metadata.tags: The openai-compatible tag marks the deployment as serving an OpenAI-compatible API.
  • trt_llm: Tells Baseten to use Engine-Builder-LLM, which compiles the model with TensorRT-LLM for optimized inference.
  • checkpoint_repository: Points to the model weights on Hugging Face. Qwen 2.5 3B Instruct is ungated, so no access token is needed.
  • max_seq_len: Caps the combined prompt and generation length at 8,192 tokens per request.
  • quantization_type: fp8: Compresses weights to 8-bit floating point, roughly halving memory usage relative to 16-bit weights with negligible quality loss.
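To see why fp8 matters on a 24 GB card, here is a rough back-of-envelope calculation (the ~3.1 billion parameter count is an approximation; the exact figure depends on the checkpoint):

```python
# Approximate weight memory for a ~3.1B-parameter model at different precisions.
params = 3.1e9  # approximate parameter count for Qwen 2.5 3B

fp16_gb = params * 2 / 1e9  # 16-bit floats: 2 bytes per weight
fp8_gb = params * 1 / 1e9   # 8-bit floats: 1 byte per weight

print(f"fp16 weights: ~{fp16_gb:.1f} GB")  # ~6.2 GB
print(f"fp8 weights:  ~{fp8_gb:.1f} GB")   # ~3.1 GB

# On a 24 GB L4, fp8 leaves roughly 21 GB for the KV cache,
# activations, and runtime overhead, versus roughly 18 GB at fp16.
```

The freed memory goes mostly to the KV cache, which is what lets the engine serve longer sequences and more concurrent requests on the same GPU.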

Deploy

Push to Baseten:
truss push
You should see:
✨ Model Qwen-2.5-3B was successfully pushed ✨

🪵  View logs for your deployment at https://app.baseten.co/models/abc1d2ef/logs/xyz123
The logs URL contains your model ID: the string after /models/ (e.g. abc1d2ef). You’ll need this to call the model’s API. You can also find it in your Baseten dashboard.

Baseten now downloads the model weights, compiles them with TensorRT-LLM, and deploys the resulting container to an L4 GPU. You can watch progress in the logs linked above. When the deployment status shows “Active” in the dashboard, it’s ready for requests.
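If you’re scripting the deployment, the model ID can be pulled straight out of that logs URL and used to assemble the base URL for API calls. A small sketch using the example URL above (substitute the URL from your own push output):

```python
import re

# Example logs URL as printed by `truss push`.
logs_url = "https://app.baseten.co/models/abc1d2ef/logs/xyz123"

# The model ID is the path segment immediately after /models/.
model_id = re.search(r"/models/([^/]+)/", logs_url).group(1)
print(model_id)  # abc1d2ef

# Base URL for the production environment's OpenAI-compatible endpoint.
base_url = f"https://model-{model_id}.api.baseten.co/environments/production/sync/v1"
print(base_url)
```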
New accounts include free credits. This deployment uses an L4 GPU, one of the most cost-effective options available.

Call your model

Engine-based deployments serve an OpenAI-compatible API, so any code that works with the OpenAI SDK works with your model. Replace {model_id} with your model ID from the deployment output.
Install the OpenAI SDK if you don’t have it:
uv pip install openai
Create a chat completion:
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["BASETEN_API_KEY"],
    base_url="https://model-{model_id}.api.baseten.co/environments/production/sync/v1",
)

response = client.chat.completions.create(
    model="Qwen-2.5-3B",
    messages=[
        {"role": "user", "content": "What is machine learning?"}
    ],
)

print(response.choices[0].message.content)
You should see a response like:
Machine learning is a branch of artificial intelligence where systems learn
patterns from data to make predictions or decisions without being explicitly
programmed for each task...

What just happened

With a single config file, you deployed a production-ready LLM endpoint. Here’s what Baseten did:
  1. Downloaded the Qwen 2.5 3B Instruct weights from Hugging Face.
  2. Compiled the model with TensorRT-LLM, applying FP8 quantization for faster inference and lower memory usage.
  3. Packaged everything into a container and deployed it to an L4 GPU.
  4. Exposed an OpenAI-compatible API that handles tokenization, batching, and KV cache management automatically.
No model.py, no Docker setup, no inference server configuration. This config-only pattern works for most popular open-source LLMs, including Llama, Qwen, Mistral, Gemma, and Phi models.

Next steps

Engine configuration

Tune max sequence length, batch size, quantization, and runtime settings for your deployment.
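As a concrete illustration of the knobs involved, a variant of this tutorial’s build section might double the context window and skip quantization entirely (the no_quant value is an assumption; check the engine configuration reference for the supported quantization types):

```yaml
trt_llm:
  build:
    base_model: decoder
    checkpoint_repository:
      source: HF
      repo: "Qwen/Qwen2.5-3B-Instruct"
    max_seq_len: 16384           # doubles the context window; grows the KV cache
    quantization_type: no_quant  # assumed value; keeps full-precision weights
    tensor_parallel_count: 1
```

Longer sequences and unquantized weights both increase VRAM pressure, so a change like this may require a larger GPU than the L4.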

Custom model code

Add custom Python when you need preprocessing, postprocessing, or unsupported model architectures.

Autoscaling

Configure replicas, concurrency targets, and scale-to-zero for production traffic.

Promote to production

Move from development to production with truss push --promote.