
Get evaluation for attributes #127

Open
LittlePika opened this issue Oct 13, 2024 · 2 comments

Comments

@LittlePika

Hello everyone,

Thank you very much for providing the toolkit. Some papers show the evaluation broken down by attributes, e.g. camera motion, and in common.py the tag files are also read in, but I did not find any information about this in the documentation. How can I perform such an evaluation?

I would be very happy to receive an answer.
Many thanks in advance.

@lukacu
Collaborator

lukacu commented Oct 13, 2024

Could you let me know whether you are asking about a custom dataset? If so, the tags are indeed the key; you have to annotate the suitable frames.

Then, it all depends on the evaluation; some older approaches have this implemented in the toolkit. The new single-target analysis, based on anchor restarts, does not have a public implementation, but the approach is also a lot less sophisticated; we are talking about a weighted sum over the sequences. If this is what you are interested in, I can look into adding the code to the toolkit; it is an extension that uses a lot of toolkit code.
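For reference, the weighted-sum idea mentioned above can be sketched in a few lines. This is only a minimal illustration, not toolkit code: the per-frame tag format, the `failed` indicator, and the function name are assumptions, and a real analysis would read the tag files that common.py parses.

```python
# Sketch: per-attribute failure rate as a weighted average over sequences.
# Each frame carries a set of attribute tags; for every tag we average a
# per-frame failure indicator over only the frames annotated with that tag,
# so sequences contribute in proportion to their number of tagged frames.
from collections import defaultdict

def per_attribute_scores(sequences):
    """sequences: list of dicts with equal-length per-frame lists
    'tags' (set of str) and 'failed' (bool)."""
    totals = defaultdict(lambda: [0, 0])  # tag -> [failures, tagged frames]
    for seq in sequences:
        for tags, failed in zip(seq["tags"], seq["failed"]):
            for tag in tags:
                totals[tag][0] += int(failed)
                totals[tag][1] += 1
    return {tag: fails / frames for tag, (fails, frames) in totals.items()}

# Toy data: two sequences with hypothetical occlusion / camera-motion tags.
sequences = [
    {"tags": [{"occlusion"}, {"occlusion", "camera_motion"}, set()],
     "failed": [True, False, False]},
    {"tags": [{"camera_motion"}, {"camera_motion"}],
     "failed": [False, True]},
]
print(per_attribute_scores(sequences))
# occlusion: 1 failure over 2 tagged frames; camera_motion: 1 over 3
```

The same aggregation works for any per-frame score (accuracy, robustness) in place of the binary failure flag.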

@LittlePika
Author

Thank you for your fast response.
The question is not about a specific dataset. Let's use VOT 2018 as a common example, with a frame-wise labeled tag such as occlusion.
Based on my understanding of the graphs, e.g. Figure 3 (Failure rate with respect to the visual attributes.) and Figure 7 of the accompanying VOT Challenge report (https://prints.vicos.si/publications/365), it is possible to generate the results for the individual attributes separately for a detailed evaluation.

My question is how to obtain similar graphs directly using the VOT toolkit. Is it possible to generate them using a flag, similar to the HTML report?

Do I therefore understand your answer correctly that this is possible with the public legacy code (https://github.com/votchallenge/toolkit-legacy), but not (yet) in the toolkit?
If this is part of the additional code you mentioned, I would greatly appreciate it being made available.
Thank you very much.
