
Commit

Merge pull request #29 from Cypih/add_max_tokens
Add max_tokens variable
ricklamers authored Jan 6, 2024
2 parents 4744651 + 5aa53f3 commit 441b7e0
Showing 3 changed files with 12 additions and 9 deletions.
17 changes: 9 additions & 8 deletions README.md
@@ -48,14 +48,15 @@ Shell-AI will then suggest 3 commands to fulfill your request:
 ### Optional Variables
 
 1. **`OPENAI_MODEL`**: Defaults to `gpt-3.5-turbo`. You can set it to another OpenAI model if desired.
-2. **`SHAI_SUGGESTION_COUNT`**: Defaults to 3. You can set it to specify the number of suggestions to generate.
-3. **`OPENAI_API_BASE`**: Defaults to `https://api.openai.com/v1`. You can set it to specify the proxy or service emulator.
-4. **`OPENAI_ORGANIZATION`**: OpenAI Organization ID
-5. **`OPENAI_PROXY`**: OpenAI proxy
-6. **`OPENAI_API_TYPE`**: Set to "azure" if you are using Azure deployments.
-7. **`AZURE_DEPLOYMENT_NAME`**: Your Azure deployment name (required if using Azure).
-8. **`AZURE_API_BASE`**: Your Azure API base (required if using Azure).
-9. **`CTX`**: Allow the assistant to keep the console outputs as context allowing the LLM to produce more precise outputs. ***IMPORTANT***: the outputs will be sent to OpenAI through their API, be careful if any sensitive data. Default to false.
+2. **`OPENAI_MAX_TOKENS`**: Defaults to `None`. You can set the maximum number of tokens that can be generated in the chat completion.
+3. **`SHAI_SUGGESTION_COUNT`**: Defaults to 3. You can set it to specify the number of suggestions to generate.
+4. **`OPENAI_API_BASE`**: Defaults to `https://api.openai.com/v1`. You can set it to specify the proxy or service emulator.
+5. **`OPENAI_ORGANIZATION`**: OpenAI Organization ID
+6. **`OPENAI_PROXY`**: OpenAI proxy
+7. **`OPENAI_API_TYPE`**: Set to "azure" if you are using Azure deployments.
+8. **`AZURE_DEPLOYMENT_NAME`**: Your Azure deployment name (required if using Azure).
+9. **`AZURE_API_BASE`**: Your Azure API base (required if using Azure).
+10. **`CTX`**: Allow the assistant to keep the console outputs as context allowing the LLM to produce more precise outputs. ***IMPORTANT***: the outputs will be sent to OpenAI through their API, be careful if any sensitive data. Default to false.
 
 You can also enable context mode in command line with `--ctx` flag:
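The optional variables above are ordinary environment variables. A minimal sketch of how a tool like Shell-AI could read them in Python — the variable names and defaults come from the README list, but the `load_config` helper itself is illustrative and not part of this commit:

```python
# Illustrative helper: read the documented optional variables from an
# environment mapping, falling back to the README's stated defaults.
def load_config(env):
    return {
        "model": env.get("OPENAI_MODEL", "gpt-3.5-turbo"),
        "max_tokens": env.get("OPENAI_MAX_TOKENS"),        # None when unset
        "suggestions": int(env.get("SHAI_SUGGESTION_COUNT", "3")),
        "api_base": env.get("OPENAI_API_BASE", "https://api.openai.com/v1"),
        "ctx": env.get("CTX", "false").lower() == "true",  # parsing assumed
    }

import os

config = load_config(os.environ)  # in practice, read from the real environment
```

Passing a plain dict instead of `os.environ` makes the defaults easy to exercise in isolation.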
2 changes: 1 addition & 1 deletion setup.py
@@ -10,7 +10,7 @@
 
 setup(
     name='shell-ai',
-    version='0.3.21',
+    version='0.3.22',
     author='Rick Lamers',
     long_description=long_description,
     long_description_content_type='text/markdown',
2 changes: 2 additions & 0 deletions shell_ai/main.py
@@ -80,6 +80,7 @@ def main():
     prompt = " ".join(sys.argv[1:])
 
     OPENAI_MODEL = os.environ.get("OPENAI_MODEL", "gpt-3.5-turbo")
+    OPENAI_MAX_TOKENS = os.environ.get("OPENAI_MAX_TOKENS", None)
     OPENAI_API_BASE = os.environ.get("OPENAI_API_BASE", None)
     OPENAI_ORGANIZATION = os.environ.get("OPENAI_ORGANIZATION", None)
     OPENAI_PROXY = os.environ.get("OPENAI_PROXY", None)
@@ -116,6 +117,7 @@ def main():
             openai_api_base=OPENAI_API_BASE,
             openai_organization=OPENAI_ORGANIZATION,
             openai_proxy=OPENAI_PROXY,
+            max_tokens=OPENAI_MAX_TOKENS,
         )
     if OPENAI_API_TYPE == "azure":
         chat = AzureChatOpenAI(
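One detail worth noting when wiring this up yourself: `os.environ.get` returns a string (or `None`), so a value such as `"256"` may need converting to an integer before being handed to a client that expects a numeric `max_tokens` — depending on what coercion the client performs. A minimal sketch; `parse_max_tokens` is a hypothetical helper, not part of this commit:

```python
def parse_max_tokens(raw):
    """Convert the OPENAI_MAX_TOKENS environment string to an int,
    preserving None when the variable is unset."""
    # raw comes from os.environ.get, so it is either a string or None.
    return int(raw) if raw is not None else None
```

With this, `max_tokens=parse_max_tokens(OPENAI_MAX_TOKENS)` would pass either an integer or `None` to the chat client.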
