-
@zjysteven has listed several key points that I also consider meaningful. I will also list two points that could be discussed.
-
Comments from @sunyiyou in our last meeting
-
Overview
As discussed with @Jingkang50, we would like to develop an enhanced version of OpenOOD. Compared to the current version, v2 will expand along the axes below.
(PS: Jingkang, I know you said you were about to lay out a plan, but I'm free these days, so I'm writing something down first.)
Ideas
Data
ID - OOD splits:
Outlier data for OE-based methods:
Model architectures
We definitely want to include baseline results for architectures beyond CNNs, e.g., transformers or even MLP-Mixer. The caveat is that these advanced models are often pre-trained on large-scale datasets, so we may want to decouple the effects of pre-training and model architecture in our experiments.
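One way to decouple the two factors would be a small factorial study in which every architecture is evaluated both from scratch and from pre-trained weights. A minimal sketch of such an experiment grid (all names here are hypothetical placeholders, not OpenOOD APIs):

```python
from itertools import product

# Hypothetical experiment grid: each architecture is evaluated both
# trained from scratch and initialized from pre-trained weights, so the
# two factors can be compared independently.
ARCHITECTURES = ["resnet50", "vit_b_16", "mlp_mixer_b16"]
PRETRAINING = [False, True]

def make_experiment_grid():
    """Return one config dict per (architecture, pre-training) combination."""
    return [
        {"arch": arch, "pretrained": pretrained}
        for arch, pretrained in product(ARCHITECTURES, PRETRAINING)
    ]

for cfg in make_experiment_grid():
    print(cfg["arch"], cfg["pretrained"])
```

Comparing rows of this grid that share an architecture would isolate the effect of pre-training; comparing rows that share a pre-training setting would isolate the architecture.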
Methods
Impact
Discussion
There are many other things we could do, for example, measuring the inference latency of each method.
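Latency could be measured with a simple wall-clock harness that warms up before timing and reports a median over repeated calls. A sketch, with a trivial stand-in for a method's real scoring function (both function names here are illustrative, not existing OpenOOD code):

```python
import time
import statistics

def measure_latency(score_fn, inputs, warmup=5, repeats=50):
    """Median per-call wall-clock latency (seconds) of an OOD scoring function."""
    for _ in range(warmup):          # warm up caches before timing
        score_fn(inputs[0])
    times = []
    for _ in range(repeats):
        for x in inputs:
            t0 = time.perf_counter()
            score_fn(x)
            times.append(time.perf_counter() - t0)
    return statistics.median(times)

# Trivial placeholder; a real method would run the model's forward pass here.
def msp_like_score(x):
    return max(x)

latency = measure_latency(msp_like_score, [[0.1, 0.7, 0.2]] * 8)
print(f"median latency: {latency * 1e6:.1f} microseconds")
```

For GPU-based methods the harness would also need device synchronization before each timestamp, since kernel launches are asynchronous.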
@Jingkang50 Feel free to leave comments, and let others take a look and join the discussion. I don't see the deadline for this year's NeurIPS Datasets & Benchmarks track, but if it is still in June, we might be able to catch it if we start early in February.