The GPT-4 Turbo model, released as a preview in November 2023, supports a 128K-token context window, so the amount of text that can be passed as input in a single API call is effectively no longer a constraint, and the length of the prompt file is no longer a concern. However, since the output is still limited to 4,096 tokens, long articles still need to be processed by splitting the original text into fragments.
That said, the much larger input capacity means that, by processing the fragments sequentially and passing the already translated fragments to subsequent API calls, it might be possible to improve consistency of context and terminology when translating long texts, without relying on prompt-file tricks.
It's hard to predict how effective this approach would be, but it seems worth adding a new flag to enable this behavior.
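A minimal sketch of what such a flag could enable, assuming the OpenAI Python SDK (v1.x). The function name `translate_fragments`, the model name, and the prompt wording are illustrative assumptions, not this project's actual implementation:

```python
# Sketch only: sequential fragment translation, feeding earlier translations
# back into each call to keep context and terminology consistent.
# Assumes the OpenAI Python SDK >= 1.0 and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def translate_fragments(fragments: list[str], target_lang: str = "English") -> list[str]:
    """Translate fragments in order, passing prior translations as context."""
    translated: list[str] = []
    for fragment in fragments:
        messages = [
            {"role": "system",
             "content": f"Translate the user's text into {target_lang}. "
                        "Keep terminology consistent with the previously translated text."},
        ]
        if translated:
            # Pass the already translated fragments so later calls can reuse
            # the same terminology and stay consistent with earlier context.
            messages.append({
                "role": "system",
                "content": "Previously translated text:\n" + "\n".join(translated),
            })
        messages.append({"role": "user", "content": fragment})

        response = client.chat.completions.create(
            model="gpt-4-1106-preview",  # GPT-4 Turbo preview: 128K context, 4,096-token output
            messages=messages,
            max_tokens=4096,
        )
        translated.append(response.choices[0].message.content)
    return translated
```

Since earlier translations accumulate in the prompt, the input grows with each call, but the 128K context window should comfortably absorb that for typical article lengths.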