Hi, great job on this work!

I was wondering how we could use this approach for a RAG system. For example, could we modify the self_responses to make use of RAG so that the system's answers are more factual? Alternatively, could you suggest any other approaches we could try to verify that an LLM answer (already generated by the RAG pipeline) is not a hallucination?

Thanks!
That's a great question. Do you want to detect or evaluate hallucinations in the RAG system?
One easy way is to modify the self_responses to incorporate your RAG pipeline, so that the self_responses themselves are generated by your RAG system; a rough sketch is below.
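A minimal sketch of that idea, assuming your retriever and LLM call are available as callables. The names `retrieve`, `generate`, and `rag_self_responses` are placeholders for illustration, not part of this repo:

```python
# Sketch (not this repo's API): generate the self_responses through your
# RAG pipeline instead of the bare LLM, then use them for the consistency check.
# `retrieve` and `generate` are placeholders for your own retriever / LLM call.
from typing import Callable, List

def rag_self_responses(
    question: str,
    retrieve: Callable[[str], List[str]],    # question -> retrieved contexts
    generate: Callable[[str, float], str],   # (prompt, temperature) -> answer
    n_samples: int = 5,
    temperature: float = 1.0,
) -> List[str]:
    """Sample several answers conditioned on the same retrieved contexts.

    Pass these samples wherever self_responses is expected, so the
    consistency score reflects the full RAG system, not the bare LLM.
    """
    contexts = retrieve(question)
    prompt = "Context:\n" + "\n".join(contexts) + f"\n\nQuestion: {question}\nAnswer:"
    # A higher temperature gives more diverse samples for the consistency check.
    return [generate(prompt, temperature) for _ in range(n_samples)]
```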
Or you could check the consistency of the retrieved contexts before generation. If they are consistent, the final response will probably be consistent as well, though you may still want to double-check; one possible heuristic is sketched below.
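For the retrieval-consistency check, one simple heuristic (again not part of this repo, and the 0.5 threshold is an arbitrary placeholder you would tune on your own data) is to compare the retrieved contexts against each other with sentence embeddings:

```python
# Flag retrievals whose contexts disagree with each other before generating.
from itertools import combinations
from sentence_transformers import SentenceTransformer, util

def contexts_are_consistent(contexts, threshold=0.5,
                            model_name="all-MiniLM-L6-v2"):
    model = SentenceTransformer(model_name)
    embeddings = model.encode(contexts, convert_to_tensor=True)
    # Mean pairwise cosine similarity across all retrieved contexts.
    sims = [
        util.cos_sim(embeddings[i], embeddings[j]).item()
        for i, j in combinations(range(len(contexts)), 2)
    ]
    return sum(sims) / len(sims) >= threshold if sims else True
```

A low mean similarity suggests the retrieved passages disagree with each other, which is a signal to re-retrieve or to treat the generated answer with more caution.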
Our method mainly aims to check whether the LLM hallucinates or not (with or without retrieved contexts). At the current stage, it can only detect hallucinations; it cannot guarantee that generated responses are free of them.