Behavior label annotation within SLEAP #785
Note: This was originally posted as an Issue/Feature request and converted into an Ideas discussion thread to serve as a place to discuss this frequently requested feature and current alternatives.

### Background

The goal of SLEAP is to predict landmark coordinates on images which can reflect the degrees of freedom afforded by an animal's morphology ("pose"). Changes in these landmark positions over time are what we call postural dynamics, and they can capture the movements that underlie a particular behavior.

Going from postural dynamics to behavior labels is a task called action recognition or behavior segmentation. Approaches that do this produce a label like "walking" or "grooming" (or probabilities thereof) for each frame or clip, given a sequence of poses or raw images (a toy sketch of this kind of downstream step follows the replies below).

SLEAP does not do action recognition. This is a downstream problem, distinct from pose tracking, and it comes with many differences in how you would design a machine learning system to solve it, many of which are application-dependent. We therefore decided to limit the scope of SLEAP to pose tracking only, leaving it up to users to decide how they want to perform downstream analyses. In the future we may explore options to provide an integrated action recognition solution available from within SLEAP, but in the meantime we recommend checking out the alternatives listed below. If you feel that this is something you would like to see in SLEAP and that the tools below can't address, let us know by commenting or upvoting this idea.

### Alternatives and Resources

While SLEAP does not do action recognition, it is compatible with several downstream tools that do. Here we list a few that we recommend:
Other tools and resources we recommend checking out:
There are a lot more tools out there that may be better suited to your individual needs. We recommend checking out OpenBehavior, which is dedicated to cataloguing systems for behavior capture and analysis.

Original replies:

Reply 1 by @talmo:

> Hey @kylethieringer,
>
> This would be a great feature to have and would definitely help to bootstrap some downstream behavior segmentation stuff.
>
> I'll add it to the list, but I'll just say that it might be a bit tricky to implement for a little bit (especially the segments). We can try for a basic version of it after the next stable version release.

Reply 2 by @talmo:

> I agree that our GUI framework is nice for this purpose and if you're using it already to do pose annotation, it would be helpful to be able to do everything in one place.
>
> ... but I'm going to close this for now, although we might revisit this functionality request in the future. It is a fairly large feature that we would need to add, and segmentation is a bit outside of the scope of what SLEAP is designed for.
>
> In the meantime, we suggest checking out [BENTO](https://github.com/neuroethology/bento) from Ann Kennedy's lab which is explicitly designed to do this and now supports SLEAP inputs as well (https://github.com/neuroethology/bento/pull/144).
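To make the distinction above concrete, here is a minimal sketch of what a downstream per-frame labeling step can look like, assuming poses exported via SLEAP's analysis HDF5 format (the `tracks` and `node_names` datasets and their layout follow SLEAP's documented analysis export; the filename and the speed threshold are placeholders). A real classifier, like the tools listed above, would use richer features and a learned model rather than a single threshold:

```python
import h5py
import numpy as np

# Load poses from a SLEAP analysis HDF5 export ("behavior.h5" is a placeholder).
with h5py.File("behavior.h5", "r") as f:
    # "tracks" is stored as (n_tracks, 2, n_nodes, n_frames); transposing
    # gives (n_frames, n_nodes, 2, n_tracks) for frame-major indexing.
    locations = f["tracks"][:].T
    node_names = [name.decode() for name in f["node_names"][:]]

# Toy "action recognition": label each frame of the first animal as
# "moving" or "still" based on its centroid speed.
centroid = np.nanmean(locations[..., 0], axis=1)           # (n_frames, 2)
step = np.linalg.norm(np.diff(centroid, axis=0), axis=1)   # px per frame
speed = np.concatenate([[0.0], step])

SPEED_THRESHOLD = 2.0  # px/frame; arbitrary, tune per dataset
labels = np.where(speed > SPEED_THRESHOLD, "moving", "still")
for frame_idx, label in enumerate(labels[:10]):
    print(frame_idx, label)
```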
---

Original request by @kylethieringer:
Hi!
I was wondering if it would be possible to add a feature to help annotate videos as I skim through them: something like hitting a hotkey to mark a frame (or a selection of frames) and having it added to an annotations table.
For example, this would help with manually marking behaviors, marking clips to analyze later, etc. I don't know if the annotations could then be incorporated into the .slp dataset file and accessed when exporting to an .h5, or if it would be easier to have an option to export the annotations to a .csv file directly from the GUI (see the sketch below).
Thanks!
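To pin down the kind of export being requested here, below is a minimal sketch of a hypothetical annotations .csv. The (video, start_frame, end_frame, label) schema and the example rows are assumptions for illustration, not anything SLEAP currently produces:

```python
import csv

# Hypothetical annotation records as they might come out of the requested
# table: one row per marked frame range. All values are illustrative.
annotations = [
    ("session1.mp4", 120, 180, "grooming"),
    ("session1.mp4", 305, 312, "walking"),
]

with open("annotations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["video", "start_frame", "end_frame", "label"])
    writer.writerows(annotations)
```

Keeping frame indices (rather than timestamps) would make it straightforward to line these labels up against the per-frame poses in the exported .h5.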