
LLM Inference

Ongoing research · 2026

RL · LLM · Agents

BOLT: Budget-Optimal LLM Inference via Quantization, Adaptive Exits, and Test-Time Verification.

The Pareto Frontier of Open Inference

Open-weight LLM deployment is often a zero-sum game between memory, latency, and accuracy. Quantization and early-exit methods are usually studied in isolation; BOLT treats them as a single joint optimization problem. By co-tuning INT4 precision with adaptive layer-skipping, we recover the accuracy lost to the "reasoning tax" of compression through lightweight test-time verification.
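The joint optimization can be pictured as a search over knob combinations: measure (accuracy, effective compute) per setting, then pick the best point that fits a compute budget. A minimal sketch with illustrative toy numbers; the names `Setting` and `best_under_budget` are hypothetical, not from the project:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Setting:
    bits: int          # weight precision (e.g. 4 for INT4)
    exit_layer: int    # adaptive early-exit depth
    n_verify: int      # test-time verification samples
    accuracy: float    # measured task accuracy (illustrative)
    cost: float        # effective compute, relative FLOPs (illustrative)

def best_under_budget(settings, budget):
    """Return the highest-accuracy setting whose cost fits the budget."""
    feasible = [s for s in settings if s.cost <= budget]
    return max(feasible, key=lambda s: s.accuracy, default=None)

grid = [
    Setting(16, 32, 1, accuracy=0.82, cost=1.00),  # full precision baseline
    Setting(4, 32, 1, accuracy=0.74, cost=0.30),   # INT4 alone: "reasoning tax"
    Setting(4, 24, 1, accuracy=0.69, cost=0.22),   # INT4 + early exit
    Setting(4, 24, 4, accuracy=0.80, cost=0.55),   # + verification recovers accuracy
]
print(best_under_budget(grid, budget=0.6))
```

Under a 0.6-compute budget, the verified INT4 + early-exit setting wins despite each knob hurting accuracy in isolation, which is the co-tuning argument in miniature.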

The 3-Knob Stack: Efficiency without Collapse

Our research investigates the interplay between three distinct inference "knobs" across the Qwen2.5-7B and 14B architectures:

Quantization: INT4 weight precision to cut memory and bandwidth.
Adaptive exits: early exit from intermediate layers once a token prediction is confident.
Test-time verification: sampling several candidates and keeping the one a lightweight verifier scores highest.
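Of the three knobs named in the title, the adaptive-exit rule is the easiest to sketch: stop at the first layer whose intermediate prediction clears a confidence threshold. A minimal sketch, assuming per-layer LM-head logits are available; `adaptive_exit` and the threshold rule are illustrative, not the project's exact heuristic:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def adaptive_exit(layer_logits, threshold=0.9):
    """Return (exit_layer, token_id) at the first confident layer.

    layer_logits: one logit vector per transformer layer (shallow to deep).
    Exits early when the max softmax probability clears the threshold;
    otherwise falls through to the final layer's prediction.
    """
    for layer, logits in enumerate(layer_logits):
        probs = softmax(logits)
        if max(probs) >= threshold:
            return layer, probs.index(max(probs))
    probs = softmax(layer_logits[-1])
    return len(layer_logits) - 1, probs.index(max(probs))
```

The threshold is exactly where the method can break reasoning chains: a confidently wrong intermediate layer exits early and derails the rest of the chain, which is what the failure taxonomy below is designed to surface.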

Methodology & COLM-Next Benchmarking

Designed for reproducibility on a single-GPU (A100) budget, the project generates a multi-domain failure taxonomy across math (GSM8K), code (HumanEval), and long-context QA. We map the Accuracy vs. Effective Compute curve, identifying the regimes where verification compensates for quantization noise and where early-exit heuristics break complex reasoning chains.
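Mapping the Accuracy vs. Effective Compute curve amounts to extracting the Pareto frontier from measured (cost, accuracy) points: keep only points not dominated by something cheaper and at least as accurate. A minimal stdlib sketch with illustrative numbers:

```python
def pareto_frontier(points):
    """points: list of (cost, accuracy) pairs.

    Returns the Pareto-optimal subset sorted by cost: each kept point is
    strictly more accurate than every cheaper point.
    """
    frontier = []
    best_acc = float("-inf")
    for cost, acc in sorted(points):
        if acc > best_acc:  # not dominated by anything cheaper
            frontier.append((cost, acc))
            best_acc = acc
    return frontier

measured = [(1.0, 0.82), (0.3, 0.74), (0.22, 0.69), (0.55, 0.80), (0.35, 0.70)]
print(pareto_frontier(measured))
# → [(0.22, 0.69), (0.3, 0.74), (0.55, 0.80), (1.0, 0.82)]
```

Note that (0.35, 0.70) is dropped: it costs more than the (0.3, 0.74) setting yet scores lower, i.e. it is a dominated knob combination.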

// Execution Plan:
Prototype: Colab (bitsandbytes NF4)
Production: HPC Job Arrays (A100)
Metrics: Pareto Frontiers + Calibration + Failure Breakdown
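For the Colab prototype, the bitsandbytes NF4 path is exposed through the Transformers integration. A minimal configuration sketch (not run here: it downloads weights and needs a CUDA GPU); the Qwen/Qwen2.5-7B-Instruct checkpoint is an assumed choice:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 4-bit quantization config; double quantization also compresses the
# quantization constants themselves for a small extra memory saving.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",  # assumed checkpoint; any causal LM works
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```

The same script scales to the HPC job arrays by sweeping the `BitsAndBytesConfig` fields alongside the exit and verification knobs.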