
Lower-Latency Design Patterns #17

Open
12 tasks
Ishaan-Datta opened this issue Aug 30, 2024 · 0 comments
  • Design a pipeline architecture to process data through multiple stages, allowing for parallel execution of different processing steps.
  • Experiment with image streaming functionality in the NVIDIA DeepStream SDK and the ZED Camera SDK.
  • Implement lazy initialization (delay the initialization of non-essential components until they are required) if startup latency is too high.
  • Optimize inter-thread communication between the preprocessing, inference, and postprocessing nodes through shared memory or message passing.
  • Choose an appropriate serialization method based on your data size and throughput requirements. ROS2 supports multiple middleware (RMW) implementations, such as Fast DDS/FastRTPS and Cyclone DDS, and the choice of middleware affects serialization overhead.
  • Fine-tune QoS parameters such as reliability, durability, history depth, and deadline to optimize message delivery for your specific use case.
  • ros2 topic echo with QoS settings: use ros2 topic echo with custom QoS settings to observe message delivery under different profiles and estimate end-to-end latency.
  • Custom Benchmarking Code: Implement custom code to measure the time taken to serialize and deserialize messages. Compare the results with different message sizes and types.
  • ROS2 time synchronization: ensure that system clocks across machines are synchronized using the Network Time Protocol (NTP), so that timestamps from different nodes are comparable.
  • Utilize tools like rqt and rviz to visualize ROS2 messages, topics, and timings. Analyze graphical representations for identifying latency patterns.
  • Compare benchmark results across different runs, configurations, or nodes. Look for patterns and anomalies that may indicate areas for optimization.
  • Focus on optimizing the identified bottlenecks based on profiling results. Implement optimizations iteratively and re-run benchmarks to measure their impact. Continuously monitor the latency after each optimization.
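A minimal sketch of the staged-pipeline task above, using plain Python threads and bounded queues as a stand-in for ROS2 nodes/executors (the stage functions and names here are illustrative, not from any SDK):

```python
import queue
import threading

_DONE = object()  # sentinel marking the end of the frame stream

def _stage(fn, in_q, out_q):
    """Pull items from in_q, apply fn, push results to out_q."""
    while True:
        item = in_q.get()
        if item is _DONE:
            out_q.put(_DONE)
            return
        out_q.put(fn(item))

def run_pipeline(frames, preprocess, infer, postprocess):
    """Run preprocess/infer stages in parallel threads; postprocess in the caller."""
    q_pre, q_inf, q_out = (queue.Queue(maxsize=8) for _ in range(3))

    def _feed():
        for f in frames:
            q_pre.put(f)
        q_pre.put(_DONE)

    threads = [
        threading.Thread(target=_feed),
        threading.Thread(target=_stage, args=(preprocess, q_pre, q_inf)),
        threading.Thread(target=_stage, args=(infer, q_inf, q_out)),
    ]
    for t in threads:
        t.start()

    results = []
    while True:
        item = q_out.get()
        if item is _DONE:
            break
        results.append(postprocess(item))
    for t in threads:
        t.join()
    return results
```

Because each stage runs in its own thread, frame N can be preprocessed while frame N-1 is in inference; the bounded queues apply backpressure so a slow stage does not grow memory without limit.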
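For the lazy-initialization task, a small thread-safe wrapper sketch (the `Lazy` class is an assumption for illustration; in practice the factory would construct something expensive like a TensorRT engine or camera handle):

```python
import threading

class Lazy:
    """Defer construction of an expensive component until first use."""

    def __init__(self, factory):
        self._factory = factory
        self._lock = threading.Lock()
        self._value = None
        self._ready = False

    def get(self):
        if not self._ready:              # fast path: no lock once built
            with self._lock:
                if not self._ready:      # double-checked under the lock
                    self._value = self._factory()
                    self._ready = True
        return self._value
```

Wrapping non-essential components this way moves their construction cost out of node startup and onto the first callback that actually needs them.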
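For the custom-benchmarking task, a rough harness sketch; `pickle` is used here only as a placeholder serializer so the example is self-contained — in a real node you would time ROS2's own message serialization instead:

```python
import pickle
import time

def bench_roundtrip(msg, repeats=1000):
    """Return (serialize_s, deserialize_s) average per-message times."""
    t0 = time.perf_counter()
    blobs = [pickle.dumps(msg) for _ in range(repeats)]
    t1 = time.perf_counter()
    for b in blobs:
        pickle.loads(b)
    t2 = time.perf_counter()
    return (t1 - t0) / repeats, (t2 - t1) / repeats
```

Running this over a range of payload sizes (e.g., small pose messages vs. full image arrays) shows how serialization cost scales, which is the comparison the task above asks for.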
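For comparing benchmark results across runs and configurations, a small summary helper sketch (the statistics chosen — mean, median, p95, max — are one reasonable set for spotting latency anomalies, not a prescribed one):

```python
import statistics

def latency_summary(samples_ms):
    """Summarize a list of per-message latency samples (milliseconds)."""
    s = sorted(samples_ms)
    # nearest-rank style p95: index 95% of the way through the sorted samples
    p95 = s[min(len(s) - 1, round(0.95 * (len(s) - 1)))]
    return {
        "mean": statistics.mean(s),
        "median": statistics.median(s),
        "p95": p95,
        "max": s[-1],
    }
```

A mean that stays flat while p95/max grow across runs is a typical sign of intermittent stalls (e.g., GC pauses or QoS retransmissions) rather than uniformly slow processing.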
@vangeliq vangeliq added this to the MVP milestone Sep 2, 2024