Consider adding "incremental translation" mode for consistency #12

Open
smikitky opened this issue Nov 22, 2023 · 0 comments
Labels
enhancement New feature or request

Comments

@smikitky (Owner)

The GPT-4 Turbo model, released as a preview in November 2023, supports a massive (128K-token) context window, so input size is effectively no longer a constraint for a single API call, and the length of the prompt file is no longer a concern. However, since the output is still capped at 4,096 tokens, long articles must still be processed by splitting the original text into fragments.

That said, since input size is effectively unbounded, processing the fragments sequentially and passing the already-translated fragments into each subsequent API call could improve consistency of context and terminology in long translations, without relying on prompt-file tricks.

It's hard to predict how effective this approach would be, but it seems worth adding a new flag to enable the behavior.
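A rough sketch of what such a mode might look like (the function and parameter names here are hypothetical illustrations, not this tool's actual API):

```python
def translate_incrementally(fragments, call_api):
    """Translate fragments in order, feeding prior (source, translation)
    pairs back into each call so the model can keep context and
    terminology consistent across the whole article.

    `call_api(context_pairs, fragment)` is a hypothetical stand-in for
    the real chat-completion request; `context_pairs` is the list of
    already-translated (source, translation) tuples.
    """
    done = []
    for fragment in fragments:
        translation = call_api(done, fragment)
        done.append((fragment, translation))
    return [t for _, t in done]


# Example with a fake "API" that records how much context each call saw:
def fake_api(context_pairs, fragment):
    return f"[{len(context_pairs)} prior] {fragment.upper()}"

print(translate_incrementally(["one", "two"], fake_api))
```

Note the trade-off: because each call carries all preceding fragments, input-token usage grows roughly quadratically with article length, which the large context window makes feasible but not free.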

@smikitky smikitky added the enhancement New feature or request label Nov 22, 2023