
Feat: New LLM evals metric added #66
Merged (2 commits) on Sep 11, 2024

Conversation

jaywyawhare
Contributor

Solved issue #32. [Closed previous PR]

@tarun-aiplanet added the component;evaluate label (New Evaluation metrics addition or modifications request for existing ones) on Jul 15, 2024
@tarun-aiplanet
Member

Solved issue #32. [Closed previous PR]

We are currently testing out the evals on different LLMs.

@jaywyawhare
Contributor Author

Ok sir

@taha-aiplanet
Collaborator

Hi. Faithfulness is how close the final output is to the retrieved chunk; we do not need ground truth for that. We are already doing that in Groundedness: we divide the generated response into statements, score each individual statement, and take the average score for the entire response. You may refer to this:

https://beyondllm.aiplanet.com/core-components/evaluation#groundedness
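As an illustration of the statement-level scoring described above, here is a minimal sketch, not the actual BeyondLLM implementation: split the generated response into statements, score each statement against the retrieved chunks, and average the scores. The `split_into_statements` splitter and the lexical-overlap judge are hypothetical stand-ins; in practice an LLM would judge whether each statement is supported by the retrieved context.

```python
from typing import Callable, List


def split_into_statements(response: str) -> List[str]:
    # Naive sentence split; a real pipeline would use an LLM or an NLP splitter.
    return [s.strip() for s in response.split(".") if s.strip()]


def groundedness_score(
    response: str,
    retrieved_chunks: List[str],
    score_statement: Callable[[str, List[str]], float],
) -> float:
    # Average of per-statement support scores, as described for Groundedness.
    statements = split_into_statements(response)
    if not statements:
        return 0.0
    scores = [score_statement(stmt, retrieved_chunks) for stmt in statements]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    # Trivial lexical-overlap judge standing in for an LLM verdict: a statement
    # counts as supported if at least half of its words occur in the context.
    def judge(stmt: str, ctx: List[str]) -> float:
        context_words = {w.strip(".,").lower() for w in " ".join(ctx).split()}
        stmt_words = [w.strip(".,").lower() for w in stmt.split()]
        hits = sum(w in context_words for w in stmt_words)
        return 1.0 if hits >= len(stmt_words) / 2 else 0.0

    chunks = ["BeyondLLM evaluates RAG pipelines with groundedness and relevance."]
    response = "BeyondLLM evaluates RAG pipelines. It was released in 1995."
    print(groundedness_score(response, chunks, judge))  # 0.5: one statement supported, one not
```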

Thanks a lot for the PR; looking forward to seeing more from you.

@jaywyawhare
Contributor Author

Added the fix as discussed. @tarun-aiplanet, please review it whenever you have time.

@tarun-aiplanet merged commit 0a3a30b into aiplanethub:main on Sep 11, 2024