
UI for a comparison of Submissions across tasks / datasets #477

Open
mam10eks opened this issue Jul 24, 2023 · 7 comments

@mam10eks (Member)

I saw a very nice way to visualize submissions across multiple tasks / datasets in the entity linking tool GERBIL.

The UI of GERBIL had the following main features for this:

  • Select datasets (default: all)
  • Select submissions (default: all)
  • Select measures (I forgot what the default was; maybe all?)

For this, GERBIL rendered a table with the main results, but also some figures (e.g., a line plot with the tasks/datasets on the x-axis, sorted by decreasing effectiveness, and the effectiveness on the y-axis). A rough sketch of such a view follows below.
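To make the idea more concrete, here is a minimal sketch (in Python with pandas and matplotlib, purely illustrative and not GERBIL or TIRA code; all submission, dataset, and measure names below are made-up assumptions) of how such a comparison table and the accompanying line plot could be built:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical evaluation results: one row per (submission, dataset, measure).
results = pd.DataFrame([
    {"submission": "BM25",   "dataset": "robust04", "measure": "nDCG@10", "value": 0.43},
    {"submission": "BM25",   "dataset": "msmarco",  "measure": "nDCG@10", "value": 0.49},
    {"submission": "monoT5", "dataset": "robust04", "measure": "nDCG@10", "value": 0.53},
    {"submission": "monoT5", "dataset": "msmarco",  "measure": "nDCG@10", "value": 0.71},
])

# The main table: one row per submission, one column per dataset,
# cells hold the selected measure.
table = results[results["measure"] == "nDCG@10"].pivot(
    index="submission", columns="dataset", values="value"
)
print(table)

# The accompanying figure: datasets on the x-axis (sorted per submission by
# decreasing effectiveness), effectiveness on the y-axis, one line per submission.
for submission, row in table.iterrows():
    sorted_scores = row.sort_values(ascending=False)
    plt.plot(range(len(sorted_scores)), sorted_scores.values, marker="o", label=submission)
plt.xlabel("Dataset (rank per submission)")
plt.ylabel("nDCG@10")
plt.legend()
plt.show()
```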

mam10eks added the enhancement (New feature or request) label on Jul 24, 2023
@mam10eks (Member, Author)

This would be especially interesting for TIRA/TIREx (cc. @potthast, @heinrichreimer, @seanmacavaney).

@juhehehe, @Kavlahkaff: if you want, you can already start brainstorming the UI for this.

@potthast (Member)

Can you post some screenshots? It's not clear to me where to find them on the linked page.

@mam10eks (Member, Author) commented Sep 8, 2023

I also could not find the screenshots anymore.

Here is a sketch of what I remember from the talk:
[Sketch: 20230908_123526]

Does this sketch clarify the idea?

I would really love to have something like this.
Maybe this would be a good next todo for @juhehehe?

@mam10eks (Member, Author) commented Sep 8, 2023

So in the end it is basically a table.
There was also a way to draw the content of the table as a figure, but the table would be the main thing for me.

juhehehe self-assigned this on Sep 8, 2023
@janheinrichmerker (Contributor)

Yes, I agree, comparing the results of a bunch of selected approaches in this table-like fashion would also be the main use case for me.

Further nice-to-haves:

  • CSV export
  • online significance tests (for starters, just a t-test with a fixed alpha that automatically applies a multiple-testing correction based on the number of selected approaches; see the sketch after this list)
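A minimal sketch of what such an online significance test could look like (purely illustrative, not an existing TIRA feature; it assumes per-topic scores are available and uses a paired t-test from scipy with a Bonferroni correction over all tested pairs of selected approaches):

```python
from itertools import combinations
from scipy.stats import ttest_rel

# Hypothetical per-topic scores (e.g., nDCG@10) for the selected approaches.
per_topic_scores = {
    "BM25":    [0.41, 0.35, 0.52, 0.48, 0.39],
    "monoT5":  [0.55, 0.47, 0.60, 0.58, 0.51],
    "ColBERT": [0.53, 0.44, 0.61, 0.55, 0.50],
}

alpha = 0.05
pairs = list(combinations(per_topic_scores, 2))
# Bonferroni correction: the corrected alpha shrinks with the number of tested
# pairs, which grows with the number of selected approaches.
corrected_alpha = alpha / len(pairs)

for a, b in pairs:
    statistic, p_value = ttest_rel(per_topic_scores[a], per_topic_scores[b])
    significant = p_value < corrected_alpha
    print(f"{a} vs. {b}: p={p_value:.4f}, "
          f"significant at corrected alpha {corrected_alpha:.4f}: {significant}")
```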

@mam10eks (Member, Author)

I finished the REST endpoint for this and integrated it into the existing RunList.vue:

[Screenshot: Screenshot_20230911_124313]

Building on that, I think @juhehehe can finalize the UI.
(You can see this table on the submission pages.)

github-actions bot commented Jun 3, 2024

This issue has been marked stale because it has been open 60 days with no activity.

github-actions bot added the stale (Inactive issues) label on Jun 3, 2024
github-project-automation bot moved this to In Progress in Agenda on Aug 28, 2024
Labels: enhancement (New feature or request), stale (Inactive issues)
Projects: Agenda (Status: In Progress)
Development: No branches or pull requests

4 participants