
June 11th or 12th (TBD), 2025, CVPR, Nashville (TN), USA.

Held in conjunction with the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2025.

Welcome to the 5th International Workshop on Event-Based Vision!

Important Dates

  • Paper submission deadline: March 12, 2025 (23:59h PST). Submission website (CMT)
  • Demo abstract submission: March 12, 2025 (23:59h PST)
  • Notification to authors: April 1, 2025.
  • Camera-ready paper: April 7, 2025 (as per CVPR website, deadline set by IEEE)
  • Early-bird registration deadline: April 30, 2025 (23:59h ET)
  • Standard registration begins May 1, 2025.
  • Workshop day: June 11th or 12th (TBD), 2025. Full-day workshop.

CVPRW 2023 edition photo by S. Shiba

Objectives

Event-based cameras are bio-inspired, asynchronous sensors that offer key advantages of microsecond temporal resolution, low latency, high dynamic range, and low power consumption. Because of these advantages, event-based cameras open frontiers that are unthinkable with traditional (frame-based) cameras, which have been the main sensing technology for the past 60 years. These revolutionary sensors enable the design of a new class of efficient algorithms to track a baseball in the moonlight, build a flying robot with the agility of a bee, and perform structure from motion in challenging lighting conditions and at remarkable speeds. In the last decade, research on these sensors has attracted the attention of industry and academia, fostering exciting advances in the field. The workshop covers the sensing hardware, as well as the processing, data, and learning methods needed to take advantage of these novel cameras. The workshop also considers novel vision sensors, such as pixel processor arrays, which perform massively parallel processing near the image plane. Because early vision computations are carried out on-sensor (mimicking the retina), the resulting systems have high speed and low power consumption, enabling new embedded vision applications in areas such as robotics, AR/VR, automotive, gaming, and surveillance.
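
As a concrete illustration of what "asynchronous" means in practice, the minimal sketch below treats an event stream as a list of (x, y, timestamp, polarity) tuples and accumulates a short time window into an image-like array. This is an SDK-agnostic sketch: the tuple layout and the helper name accumulate_events are illustrative assumptions, not tied to any particular camera or library.

```python
import numpy as np

def accumulate_events(events, height, width, t_start, t_end):
    """Sum signed event polarities inside the window [t_start, t_end).

    `events` is assumed to be an iterable of (x, y, t, polarity) tuples with
    pixel coordinates, a timestamp in microseconds, and polarity in {+1, -1}.
    """
    img = np.zeros((height, width), dtype=np.int32)
    for x, y, t, p in events:
        if t_start <= t < t_end:
            img[y, x] += 1 if p > 0 else -1  # +1: brightness increase, -1: decrease
    return img

# Toy usage: three synthetic events inside a 5 ms window.
events = [(10, 20, 1_000, +1), (11, 20, 2_500, -1), (10, 21, 4_000, +1)]
frame = accumulate_events(events, height=480, width=640, t_start=0, t_end=5_000)
print(frame[20, 10], frame[20, 11], frame[21, 10])  # 1 -1 1
```

Many of the algorithm classes listed below (optical flow, reconstruction, recognition, tracking) start either from this kind of dense aggregation or directly from the raw sparse event stream.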

Topics Covered

  • Event-based / neuromorphic vision.
  • Algorithms: motion estimation, visual(-inertial) odometry, SLAM, 3D reconstruction, image intensity reconstruction, optical flow estimation, recognition, segmentation, feature/object detection, visual tracking, calibration, action understanding, sensor fusion (video synthesis, events and RGB, events and LiDAR, etc.), model-based, embedded, or learning-based approaches.
  • Event-based representation, signal processing, and control.
  • Event-based active vision, event-based sensorimotor integration.
  • Event camera datasets and/or simulators.
  • Applications in: computational photography, robotics (navigation, manipulation, drones, obstacle avoidance, human-robot interaction,...), automotive, IoT, AR/VR (e.g., smart eyewear), space science, automated inspection, surveillance, crowd counting, physics, biology.
  • Novel hardware (cameras, neuromorphic processors, etc.) and/or software platforms, such as fully event-based systems (end-to-end).
  • New trends and challenges in event-based and/or biologically-inspired vision (SNNs, Reservoir Computing, etc.).
  • Efficient computing architectures for event-based processing (e.g., HD computing, state space models).
  • Near-focal plane processing, such as pixel processor arrays (PPAs).

A longer list of related topics is available in the table of contents of the List of Event-based Vision Resources.

Call for Contributions

Research papers

Research papers and demos are solicited on, but not limited to, the topics listed above.

  • Paper submissions must adhere to the CVPR 2025 paper submission style, format, and length restrictions. See the author guidelines and template provided by the CVPR main conference. These submissions are meant to represent novel contributions, i.e., unpublished work (submissions should not have been published, accepted, or be under review elsewhere). Accepted papers will be published open access through the Computer Vision Foundation (CVF) (see examples from CVPR Workshop 2023, 2021 and 2019). We encourage authors of accepted papers to write a paragraph about the ethical considerations and impact of their work.

  • For demo abstract submission, authors are encouraged to submit an abstract of up to 2 pages using the same template as CVPR 2025 paper submissions.

Courtesy papers (in the poster session)

We also solicit contributions of papers relevant to the workshop that have been accepted at the CVPR main conference or at other peer-reviewed conferences or journals. These contributions will be checked for suitability (soft review) and will not be published in the workshop proceedings. Papers should be submitted in single-blind format (e.g., the accepted version is fine) and should mention if and where the paper has been accepted / published. These contributions give visibility to your work and help build a community around the topics of the workshop.

Competitions / Challenges

1. Eye-tracking

We are excited to arrange a challenge focused on advancing event-based eye tracking, a key technology for driving innovations in interaction technology and extended reality (XR). While current state-of-the-art devices like Apple's Vision Pro or Meta’s Aria glasses utilize frame-based eye tracking with frame rates from 10 to 100 Hz and latency around 11 ms, there is a pressing need for smoother, faster, and more efficient methods to enhance user experience. By leveraging two different event-based eye tracking datasets (the Enhanced Ev-Eye dataset and the 3ET+ dataset), this challenge offers participants the opportunity to contribute to cutting-edge solutions that push beyond current limitations. Both datasets are readily available, have been ethically collected with full consent and strict privacy protections, and have been validated. Submissions will be evaluated on accuracy and model efficiency to ensure low latency. We believe the outcomes of this challenge will play an important role in shaping the future of XR and interaction technology by pushing the boundaries of what's possible in eye tracking.
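
The official evaluation protocol is defined on the challenge submission page. Purely as an illustration of the kind of accuracy metric involved, the hedged sketch below computes the fraction of predicted pupil centres that land within p pixels of the ground truth; the function name p_accuracy and the default p = 10 are assumptions for this example, not the official scoring code.

```python
import numpy as np

def p_accuracy(pred_xy, gt_xy, p=10.0):
    """Fraction of predictions within p pixels of the ground truth.

    pred_xy, gt_xy: (N, 2) arrays of pupil-centre coordinates in pixels.
    """
    errors = np.linalg.norm(np.asarray(pred_xy, float) - np.asarray(gt_xy, float), axis=1)
    return float((errors <= p).mean())

# Toy usage: two of the three predictions fall within 10 px of the ground truth.
print(p_accuracy([[100, 50], [108, 56], [140, 90]],
                 [[101, 50], [110, 60], [120, 80]]))  # ~0.667
```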

Challenge timeline:

  • Challenge start: February 10, 2025
  • Challenge end: March 15, 2025
  • Top-ranking teams will be invited to submit a factsheet, code, and paper after the competition ends; submission deadline: March 25, 2025
  • Top-ranking teams will be invited to co-author the challenge report; deadline: April 5, 2025
  • Paper review deadline: April 5, 2025

Contact:

2. Space-time Instance Segmentation (SIS) Challenge

MouseSIS Visualization

Overview:

  • Task: Predict mask-accurate tracks of all mouse instances from input events (and optional frames); a minimal mask-IoU sketch follows after this overview.
  • Data: This challenge is based on the MouseSIS dataset.
  • Two Tracks: (1) Frame + Events Track, and (2) Events-only Track.
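
The authoritative evaluation protocol is specified on the Codabench challenge page. The sketch below only shows per-frame mask IoU, the basic building block on which most mask-tracking metrics are built; the function name mask_iou is an illustrative assumption.

```python
import numpy as np

def mask_iou(pred_mask, gt_mask):
    """Intersection-over-union of two boolean (H, W) instance masks."""
    pred_mask = np.asarray(pred_mask, dtype=bool)
    gt_mask = np.asarray(gt_mask, dtype=bool)
    union = np.logical_or(pred_mask, gt_mask).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred_mask, gt_mask).sum() / union

# Toy usage: two overlapping 3x3 squares on a 10x10 canvas -> IoU = 4/14.
pred = np.zeros((10, 10), dtype=bool); pred[2:5, 2:5] = True
gt = np.zeros((10, 10), dtype=bool); gt[3:6, 3:6] = True
print(mask_iou(pred, gt))  # ~0.2857
```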

Challenge Page (Codabench)

Timeline:

  • February 7, 2025: Challenge opens for submissions
  • May 23, 2025: Challenge closes, final submission deadline
  • May 26, 2025: Winners announced. Top teams are invited to:
    • submit factsheets and code
    • collaborate on challenge report
    • present a poster at the CVPR workshop
  • June 6, 2025: Deadline for top teams to submit factsheets, code, and challenge report.
  • June 11-12, 2025: Results presentation (Posters) at CVPR 2025 Workshop on Event-based Vision

Contact: Friedhelm Hamann (f.hamann [at] tu-berlin [dot] de)


3. Event-Based Image Deblurring Challenge

Deblur with events

Overview:

This challenge focuses on leveraging the high-temporal-resolution events from event cameras to improve image deblurring. We hope that it will serve as a starting point for promoting event-based image enhancement to a broader audience and will contribute to the continued growth of the event-based vision community.

  • Task: Obtain a network design / solution that fuses events and images to produce high-quality deblurred results with the best performance, measured by PSNR (a minimal PSNR sketch follows after this list).
  • Data: This challenge is based on the HighREV dataset.
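
PSNR itself is a standard metric; the sketch below is a minimal reference implementation assuming 8-bit images (peak value 255). The official evaluation script on the challenge page remains authoritative.

```python
import numpy as np

def psnr(pred, gt, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    mse = np.mean((pred - gt) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy usage: a constant error of 5 gray levels gives PSNR ~= 34.15 dB.
gt = np.full((64, 64), 128, dtype=np.uint8)
pred = gt + 5
print(psnr(pred, gt))
```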

Challenge Page (CodaLab)

Timeline:

  • February 10, 2025: Challenge opens for submissions
  • March 15, 2025: Final test data release
  • March 21, 2025: Challenge ends; submission deadline to upload results on the final test data
  • March 22, 2025: Fact sheets and code/executable submission deadline
  • March 24, 2025: Preliminary test results release to the participants
  • April 1, 2025: Paper submission deadline for entries from the challenge
  • June 11-12, 2025: Results presentation (poster) at the CVPR 2025 NTIRE Workshop and/or the Workshop on Event-based Vision

Contact: Lei Sun (leo_sun [at] zju [dot] edu [dot] cn)


Speakers

Location

  • On site (Music City Center, Nashville TN): Room TBD

Schedule

The tentative schedule is the following:

| Time (local) | Session |
|---|---|
| 8:00 | Welcome. Session 1: Event cameras: Algorithms and applications I (Invited speakers) |
| 10:10 | Coffee break. Set up posters. |
| 10:30 | Session 2: Poster session: contributed papers, competitions, demos and courtesy presentations (as posters). |
| 12:30 | Lunch break |
| 13:30 | Session 3: Event cameras: Algorithms and applications II (Invited speakers) |
| 15:30 | Coffee break |
| 16:00 | Session 4: Hardware architectures and sensors (Invited speakers) |
| 17:45 | Award Ceremony and Final Panel Discussion |
| 18:00 | End |

Organizers

FAQs

Related Workshops

See also this link

Acknowledgements

The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.