Get evaluation for attributes #127
Comments
Could you let me know if you're asking about a custom dataset? If so, the tags are indeed the key: you have to annotate the suitable frames. Then it all depends on the evaluation; some older approaches have this implemented in the toolkit. The new single-target analysis, based on anchor restarts, does not have a public implementation, but the approach is also a lot less sophisticated; we are talking about a weighted sum over the sequences. If this is what you are interested in, I can look into adding the code to the toolkit; it is an extension that uses a lot of toolkit code.
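To illustrate the idea of a weighted sum over the sequences: the sketch below is a hypothetical per-attribute evaluation, not the official toolkit code. It assumes each sequence provides a per-frame quality measure (here called `overlaps`) and per-frame attribute tags (as read from the tag files), and averages the measure over all frames annotated with a given tag, so longer tagged segments contribute proportionally more.

```python
# Hypothetical sketch (not the official VOT toolkit implementation):
# average a per-frame measure over the frames tagged with each
# attribute, pooled across sequences, so each sequence is weighted
# by the number of frames carrying that tag.

def per_attribute_scores(sequences):
    """sequences: list of dicts with per-frame 'overlaps' (floats)
    and 'tags' (a list of attribute names for each frame)."""
    totals = {}  # attribute -> summed overlap on tagged frames
    counts = {}  # attribute -> number of tagged frames
    for seq in sequences:
        for overlap, frame_tags in zip(seq["overlaps"], seq["tags"]):
            for tag in frame_tags:
                totals[tag] = totals.get(tag, 0.0) + overlap
                counts[tag] = counts.get(tag, 0) + 1
    return {tag: totals[tag] / counts[tag] for tag in totals}

scores = per_attribute_scores([
    {"overlaps": [0.8, 0.6, 0.4],
     "tags": [["camera_motion"], ["camera_motion", "occlusion"], ["occlusion"]]},
    {"overlaps": [0.5],
     "tags": [["camera_motion"]]},
])
# camera_motion pools 3 tagged frames across both sequences;
# occlusion pools 2 frames from the first sequence only.
```

The attribute names and data layout here are placeholders; in the toolkit the tags come from the per-sequence tag files that common.py reads in.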
Thank you for your fast response. My question is how to obtain similar graphs directly using the VOT Challenge toolkit. Is it possible to generate them with a flag, similar to the HTML report? Do I understand your answer correctly that this is possible in an evaluation with the public legacy code (https://github.com/votchallenge/toolkit-legacy), but not (yet) in the toolkit?
Hello everyone,
Thank you very much for providing the toolkit. In some papers you can see the evaluation broken down by attributes, e.g. camera motion. In common.py the tag files are also read in, but I did not find any information about this in the documentation. How can I perform such an evaluation?
I would be very happy to receive an answer.
Many thanks in advance.