As far as I have seen, and please correct me if I'm wrong, the SDK doesn't have a way to programmatically retrieve the maximum number of tokens that can be generated in a single request.
I'd like to see these values added to the ChatModel enum, or another appropriate area.
Here are a few examples, in case I'm using the wrong terminology:
- `gpt-3.5-turbo` -> max of 4,096 tokens
- `gpt-4` -> max of 8,192 tokens
- `gpt-4-32k` -> max of 32,768 tokens
These values were pulled from an old project of mine, which in turn pulled them from an old copy of the OpenAI docs.
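For illustration, here is a rough sketch of the kind of thing I mean. The enum name, accessor methods, and overall shape below are hypothetical and not part of the SDK's actual API; the model names and token limits are the ones listed above:

```java
// Hypothetical sketch only -- not the SDK's real API.
// Maps a model ID to the maximum number of tokens it can generate
// in a single request (values from an old copy of the OpenAI docs).
public enum ChatModelLimits {
    GPT_3_5_TURBO("gpt-3.5-turbo", 4096),
    GPT_4("gpt-4", 8192),
    GPT_4_32K("gpt-4-32k", 32768);

    private final String modelId;
    private final int maxTokens;

    ChatModelLimits(String modelId, int maxTokens) {
        this.modelId = modelId;
        this.maxTokens = maxTokens;
    }

    public String modelId() {
        return modelId;
    }

    public int maxTokens() {
        return maxTokens;
    }
}
```

Something along these lines would let callers clamp or validate their `max_tokens` request parameter programmatically instead of hard-coding the limits themselves.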