It properly escapes output from the LLM, and as best I can tell the LLM receives the tokens properly. However, when the UI renders the user input, HTML tags are not escaped, which can lead to weirdness.
Examples:
Inputting <html>:
Inputting <em>have</em>:
I can see why it might be nice to have user input text render as HTML, but it might lead to strange or unexpected behavior.
The text is actually rendered as markdown using marked.js. It would be kind of involved to try to decide which HTML tags to escape, especially since some language models may use HTML in their outputs. marked.js correctly handles literal text inside backticks, like `<html>`. So I'm a little unsure how much complexity this calls for?
Okay, so I think it should be addressed with the latest commit. User input will now be rendered without HTML formatting while the bot can still output formatted responses.
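The approach described above can be sketched roughly as follows. This is a minimal illustration, not the actual commit: `escapeHtml` and `renderMessage` are hypothetical names, and the bot path assumes a markdown renderer such as marked.js is available as `marked.parse`.

```javascript
// Escape HTML special characters so user input is shown literally
// instead of being interpreted as markup by the browser.
function escapeHtml(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Hypothetical render helper: user messages are escaped, while bot
// messages still go through the markdown renderer so the model can
// use formatting in its output.
function renderMessage(text, isUser, markdownRenderer) {
  return isUser ? escapeHtml(text) : markdownRenderer(text);
}
```

An even simpler alternative in the browser is to assign user input to `element.textContent` instead of `innerHTML`, which escapes it implicitly.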