
[Feat_Add] Addition of new LLM evals metric #32

Open
tarun-aiplanet opened this issue Apr 19, 2024 · 3 comments

Labels: component;evaluate, good first issue, help wanted

@tarun-aiplanet
Member

Beyond LLM supports 4 evaluation metrics: Context relevancy, Answer relevancy, Groundedness, and Ground truth.

We are looking to add support for new evaluation metrics for LLM/RAG responses:

  • Faithfulness
  • Correctness

Or any other research-based metric.
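
For anyone picking this up, here is a rough sketch of the shape an LLM-judged faithfulness metric could take. This is only an illustration, not BeyondLLM's actual evaluator interface: the `llm` callable, the sentence-level claim splitting, and the YES/NO verdict prompt are all assumptions for the sake of the example.

```python
# Hypothetical sketch of an LLM-judged faithfulness metric (not BeyondLLM's API).
# `llm` is any callable that maps a prompt string to a completion string.

def faithfulness(llm, context: str, answer: str) -> float:
    """Fraction of answer claims the judge LLM deems supported by the context."""
    # Naive claim splitting: one claim per sentence. A real metric would
    # likely use the LLM itself to decompose the answer into atomic claims.
    claims = [s.strip() for s in answer.split(".") if s.strip()]
    if not claims:
        return 0.0

    supported = 0
    for claim in claims:
        prompt = (
            f"Context:\n{context}\n\n"
            f"Claim: {claim}\n\n"
            "Can the claim be inferred from the context alone? Answer YES or NO."
        )
        verdict = llm(prompt).strip().upper()
        supported += verdict.startswith("YES")

    return supported / len(claims)
```

A score of 1.0 would mean every claim in the answer is grounded in the retrieved context; hallucinated claims pull the score toward 0.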

tarun-aiplanet added the good first issue, help wanted, and component;evaluate labels on Apr 19, 2024
@adityasingh-0803

I can work on lexical diversity.
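
For context, lexical diversity is usually measured with statistics like the type-token ratio (TTR): unique words divided by total words. A minimal sketch of what it could look like (the function name and tokenization are illustrative, not an existing API):

```python
import re

def type_token_ratio(text: str) -> float:
    """Lexical diversity as unique words / total words (type-token ratio)."""
    # Lowercase word tokens; punctuation is dropped.
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# A repetitive answer scores lower than a varied one:
# type_token_ratio("the cat sat on the mat")  -> 5/6 ~ 0.83
# type_token_ratio("good good good good")     -> 1/4 = 0.25
```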

@adityasingh-0803

@tarun-aiplanet please assign it

@tarun-aiplanet
Member Author

> lexical diversity

I have never heard of such a metric for evaluating LLMs. Can you provide a reference to a research paper? Also, you were assigned Perplexity for LLM evaluation. Is that done?
