README.md (+20 -20)
@@ -1,7 +1,7 @@
-# ☄️ OpenGPT
+# ☄️ RunGPT
 
 <p align="center">
-  <a href="https://github.com/jina-ai/opengpt"><img src="https://github.com/jina-ai/opengpt/blob/main/.github/images/logo.png" alt="OpenGPT: An open-source cloud-native large-scale multimodal model serving framework" width="300px"></a>
+  <a href="https://github.com/jina-ai/rungpt"><img src="https://github.com/jina-ai/rungpt/blob/main/.github/images/logo.png" alt="rungpt: An open-source cloud-native large-scale multimodal model serving framework" width="300px"></a>
-**OpenGPT** is an open-source _cloud-native_ large-scale **_multimodal models_** (LMMs) serving framework.
+**RunGPT** is an open-source _cloud-native_ large-scale **_multimodal models_** (LMMs) serving framework.
 It is designed to simplify the deployment and management of large language models on a distributed cluster of GPUs.
 We aim to make it a one-stop solution for a centralized and accessible place to gather techniques for optimizing large-scale multimodal models and make them easy to use for everyone.
@@ -30,7 +30,7 @@ We aim to make it a one-stop solution for a centralized and accessible place to
 ## Features
 
-OpenGPT provides the following features to make it easy to deploy and serve **large multi-modal models** (LMMs) at scale:
+RunGPT provides the following features to make it easy to deploy and serve **large multi-modal models** (LMMs) at scale:
 
 - Support for multi-modal models on top of large language models
 - Scalable architecture for handling high traffic loads
@@ -41,13 +41,13 @@ OpenGPT provides the following features to make it easy to deploy and serve **la
 ## Updates
 
-- **2023-05-12**: 🎉We have released the first version `v0.0.1` of OpenGPT. You can install it with `pip install open_gpt_torch`.
+- **2023-05-12**: 🎉We have released the first version `v0.0.1` of RunGPT. You can install it with `pip install run_gpt_torch`.
 ## Supported Models
 
 <details>
 
-OpenGPT supports the following models out of the box:
+RunGPT supports the following models out of the box:
 
 - LLM (Large Language Model)
@@ -69,7 +69,7 @@ For more details about the supported models, please see the [Model Zoo](./MODEL_
 ## Roadmap
 
-You can view our roadmap with features that are planned, started, and completed on the [Roadmap discussion](https://github.com/jina-ai/opengpt/discussions/categories/roadmap) category.
+You can view our roadmap with features that are planned, started, and completed on the [Roadmap discussion](https://github.com/jina-ai/rungpt/discussions/categories/roadmap) category.
 
 ## Get Started
@@ -78,15 +78,15 @@ You can view our roadmap with features that are planned, started, and completed
@@ -117,7 +117,7 @@ We use the [stabilityai/stablelm-tuned-alpha-3b](https://huggingface.co/stabilit
 In most cases of large model serving, the model cannot fit into a single GPU. To solve this problem, we also provide a `device_map` option (supported by the `accelerate` package) to automatically partition the model and distribute it across multiple GPUs:
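In `accelerate`'s convention, a `device_map` is simply a mapping from submodule names to device ids. As a minimal illustrative sketch (the module names below are hypothetical, not taken from any real checkpoint), a partition across two GPUs can be expressed as a plain dictionary:

```python
# Illustrative sketch of an accelerate-style device_map: submodule names map
# to device ids. The module names here are hypothetical, not from a real model.
device_map = {
    "transformer.embeddings": 0,
    "transformer.layers.0": 0,
    "transformer.layers.1": 1,
    "lm_head": 1,
}

def devices_used(dm):
    """Return the sorted set of devices a device_map spreads the model over."""
    return sorted(set(dm.values()))

print(devices_used(device_map))  # the hypothetical model spans two GPUs
```

In practice such a map is usually computed automatically (e.g. `device_map="auto"` or `"balanced"`) rather than written by hand.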
 What's more, we also provide a [Python client](https://github.com/jina-ai/inference-client/) (`inference-client`) for you to easily interact with the server:
 
 ```python
-from open_gpt import Client
+from run_gpt import Client
 
 client = Client()
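The client snippet above is cut off at the hunk boundary. As a rough, hypothetical sketch of what a text-generation request to such a server might carry (the payload schema below is an assumption for illustration, not the documented `inference-client` wire format), one could assemble the request like this:

```python
# Hypothetical payload builder: the real inference-client schema is not shown
# in this diff, so the field names here are illustrative assumptions.
def build_generate_request(prompt, max_new_tokens=32, temperature=0.7):
    """Assemble a JSON-serializable text-generation request payload."""
    if not prompt:
        raise ValueError("prompt must be non-empty")
    return {
        "prompt": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }

request = build_generate_request("Once upon a time")
```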
@@ -206,7 +206,7 @@ To do so, you can use the `deploy` command:
docs/docs/index.md (+5 -5)
@@ -1,18 +1,18 @@
 # Quick start
 
-`opengpt` is an open-source _cloud-native_ large-scale **_multimodal models_** (LMMs) serving framework.
+`rungpt` is an open-source _cloud-native_ large-scale **_multimodal models_** (LMMs) serving framework.
 It is designed to simplify the deployment and management of large language models on a distributed cluster of GPUs.
 We aim to make it a one-stop solution for a centralized and accessible place to gather techniques for optimizing large-scale multimodal models and make them easy to use for everyone.
 
 ## Installation and setup
 
-To use `opengpt`, install it with `pip`:
+To use `rungpt`, install it with `pip`:
 
 <div class="termy">
 
 ```shell
-$ pip install open_gpt_torch
+$ pip install run_gpt_torch
 ```
 
 </div>
@@ -25,9 +25,9 @@ We use the [stabilityai/stablelm-tuned-alpha-3b](https://huggingface.co/stabilit