README.md (+7 -5)
@@ -42,6 +42,8 @@ We will try to rebase against upstream frequently and we plan to contribute thes
 ## About
 vLLM is a fast and easy-to-use library for LLM inference and serving.
 
+Originally developed in the [Sky Computing Lab](https://sky.cs.berkeley.edu) at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.
+
 vLLM is fast with:
 
 - State-of-the-art serving throughput
@@ -76,16 +78,16 @@ Find the full list of supported models [here](https://docs.vllm.ai/en/latest/mod
 
 ## Getting Started
 
-Install vLLM with `pip` or [from source](https://vllm.readthedocs.io/en/latest/getting_started/installation.html#build-from-source):
+Install vLLM with `pip` or [from source](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html#build-wheel-from-source):
 
 ```bash
 pip install vllm
 ```
 
-Visit our [documentation](https://vllm.readthedocs.io/en/latest/) to learn more.
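
For context on the install instructions this hunk updates: below is a minimal offline-inference sketch using vLLM's documented Python API after `pip install vllm`. The prompts, sampling values, and the `facebook/opt-125m` model name are illustrative placeholders, not part of this diff; any Hugging Face model supported by vLLM would work.

```python
from vllm import LLM, SamplingParams

# Example prompts to complete in a single batch.
prompts = ["Hello, my name is", "The capital of France is"]

# Sampling settings for generation (temperature, top-p, and length are illustrative values).
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a model; "facebook/opt-125m" is a small placeholder chosen for a quick test.
llm = LLM(model="facebook/opt-125m")

# Generate completions for all prompts in one call.
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(f"Prompt: {output.prompt!r} -> {output.outputs[0].text!r}")
```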