
Commit e4a43da

remove attn_temperature_tuning in default user guide (#49)
Signed-off-by: Lu Fang <[email protected]>
1 parent 3eb4d4d commit e4a43da

1 file changed: +4 -6 lines changed

_posts/2025-04-05-llama4.md (+4 -6)

@@ -35,7 +35,7 @@ VLLM_DISABLE_COMPILE_CACHE=1 vllm serve meta-llama/Llama-4-Scout-17B-16E-Instruc
 ```
 VLLM_DISABLE_COMPILE_CACHE=1 vllm serve meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 \
   --tensor-parallel-size 8 \
-  --max-model-len 430000 --override-generation-config='{"attn_temperature_tuning": true}'
+  --max-model-len 430000
 ```
 
 On 8x H200 GPUs:
@@ -45,19 +45,17 @@ On 8x H200 GPUs:
 ```
 VLLM_DISABLE_COMPILE_CACHE=1 vllm serve meta-llama/Llama-4-Scout-17B-16E-Instruct \
   --tensor-parallel-size 8 \
-  --max-model-len 3600000 --override-generation-config='{"attn_temperature_tuning": true}'
+  --max-model-len 3600000
 ```
 
 * Maverick (up to 1M context):
 
 ```
 VLLM_DISABLE_COMPILE_CACHE=1 vllm serve meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 \
   --tensor-parallel-size 8
-  --max-model-len 1000000 --override-generation-config='{"attn_temperature_tuning": true}'
+  --max-model-len 1000000
 ```
 
-Note: we highly recommend to turn on attn_temperature_tuning to improve accuracy for long contexts longer than 32K tokens, and VLLM_DISABLE_COMPILE_CACHE=1 is required.
-
 **Multimodality:**
 
 The Llama 4 models excel at image understanding up to 8-10 images. By default, vLLM server accepts 1 image per request. Please pass `--limit-mm-per-prompt image=10` to serve up to 10 images per request with OpenAI-compatible API. We also recommend checking out our multi-image offline inference example with Llama-4 [here](https://github.com/vllm-project/vllm/blob/v0.8.3/examples/offline_inference/vision_language_multi_image.py).
@@ -74,6 +72,7 @@ While more performance enhancements are on the way, we believe the Llama 4 model
 
 * **Boost Performance & Context Length:** Set `--kv-cache-dtype fp8` to potentially double the usable context window and gain a performance boost. We observe little to no accuracy drop in relevant evaluations with this setting.
 * **Maximize Context Window (up to 10M):** To fully utilize the maximum context windows (up to 10M for Scout), we recommend serving across multiple nodes using tensor parallelism or pipeline parallelism. Follow our distributed inference guide [here](https://docs.vllm.ai/en/latest/serving/distributed_serving.html).
+* **Improve Long Context Accuracy (\>32K):** We highly recommend adding `--override-generation-config='{"attn_temperature_tuning": true}'` to improve accuracy for contexts longer than 32K tokens.
 
 **Other Hardware Support & Quantizations:**
 
@@ -108,4 +107,3 @@ We extend our sincere thanks to the Meta team for their implementation of the mo
 We also thank the AMD team for their support in enabling these models on MI300X: [Hongxia Yang](https://github.com/hongxiayang) and Weijun Jiang.
 
 The vLLM team’s performance benchmarks were run on hardware generously provided by Nebius and NVIDIA.
-
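
With this change, enabling `attn_temperature_tuning` moves from the default serving commands into the new "Improve Long Context Accuracy" Pro Tip. As an illustration only, a long-context Scout launch on 8x H200 that follows the updated guidance could look like the sketch below; it simply recombines flags that already appear in the post (the `--override-generation-config` override recommended for contexts beyond 32K tokens, plus the optional `--kv-cache-dtype fp8` tip), so treat it as an example rather than the post's exact command.

```
# Sketch only: recombines flags quoted in the post's Pro Tips.
# attn_temperature_tuning is the opt-in accuracy tweak for contexts longer than 32K tokens;
# --kv-cache-dtype fp8 is the optional KV-cache tip and can be dropped.
VLLM_DISABLE_COMPILE_CACHE=1 vllm serve meta-llama/Llama-4-Scout-17B-16E-Instruct \
  --tensor-parallel-size 8 \
  --max-model-len 3600000 \
  --kv-cache-dtype fp8 \
  --override-generation-config='{"attn_temperature_tuning": true}'
```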

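Similarly, the multimodality paragraph in the diff notes that the server accepts one image per request by default and that `--limit-mm-per-prompt image=10` raises the limit to 10 images. A minimal sketch that attaches this flag to the Scout command from the post (values copied from the diff; adjust parallelism and context length for your hardware):

```
# Sketch only: multi-image serving via the OpenAI-compatible API,
# per the post's multimodality note (up to 10 images per request).
VLLM_DISABLE_COMPILE_CACHE=1 vllm serve meta-llama/Llama-4-Scout-17B-16E-Instruct \
  --tensor-parallel-size 8 \
  --limit-mm-per-prompt image=10
```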