[Feat_Add] Addition of new LLM evals metric #32
Labels: `component;evaluate`, `good first issue`, `help wanted`
Beyond LLM currently supports 4 evaluation metrics: Context relevancy, Answer relevancy, Groundedness, and Ground truth.

We are looking to add support for new evaluation metrics for assessing LLM/RAG responses, or any other research-based metric. A rough sketch of what a contribution could look like is given below.
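For contributors picking this up, here is a minimal sketch of the shape a new metric might take. The function name and signature are hypothetical, not Beyond LLM's actual interface; it implements a token-overlap F1 score (as used in SQuAD-style evaluation) against a ground-truth answer, as one example of a research-based metric:

```python
# Hypothetical sketch of a new evaluation metric -- the function name and
# signature are illustrative assumptions, not Beyond LLM's actual API.
# Implements token-level F1 (SQuAD-style) between a generated answer and
# a ground-truth reference.

from collections import Counter


def answer_f1(prediction: str, ground_truth: str) -> float:
    """Token-level F1 between a model answer and a reference answer."""
    pred_tokens = prediction.lower().split()
    truth_tokens = ground_truth.lower().split()
    if not pred_tokens or not truth_tokens:
        return 0.0
    # Count tokens shared between the prediction and the reference.
    common = Counter(pred_tokens) & Counter(truth_tokens)
    num_common = sum(common.values())
    if num_common == 0:
        return 0.0
    precision = num_common / len(pred_tokens)
    recall = num_common / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    print(answer_f1("Paris is the capital of France",
                    "The capital of France is Paris"))  # -> 1.0
```

A real contribution would also need to wire the metric into the existing evaluate component alongside the four current metrics; the scoring logic itself can stay a self-contained function like the one above.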