This repository has been archived by the owner on Oct 25, 2023. It is now read-only.
Pierre Laclau edited this page Feb 14, 2018 · 32 revisions

General architecture

  • Installation
  • Main ROS message interactions (ugly, yes; any diagram donated by a charitable soul is gladly accepted ^^'): simple_architecture_links
  • Strategy launch procedure and game ending: system_init_workflow

Technical documentation

Note: want to create a new wiki page for a node or system component? Please follow the Wiki Guidelines when doing so!

Note 2: external nodes (existing packages from around the internet that we use in our system) are marked in italics.

Arduinos

  • asserv : a dedicated Arduino in the robot manages the wheels' trajectory and movement; it is also connected to the odometry system. Connected to ROS through the drivers_ard_asserv wrapper node.
  • others : manages all movement requests for servos, PWM devices and motors, and publishes all Arduino-connected sensors (e.g. belt sensors, color sensor) directly to ROS topics using rosserial.
  • hmi : used to display a selection interface on a mini-OLED screen with buttons for choosing the robot's strategy and team before the game starts. Also displays the estimated AI score.

ROS Nodes

  • feedback
    • webclient : web server that provides an easy way to manually publish on topics or send service/action requests, diagnose topic data and visualize many of the important nodes' elements.
    • rviz : we regularly use RViz, a well-known ROS tool that lets us visualize and debug our system with a real-time or simulated 3D render of the map, robots and virtual elements (collisions zones, waypoints, trajectories...).
    • sensors_simulator : simulates sensor data when the system is executed in simulation mode. Can also provide fake laser scans for processing/particle_filter.
  • ai
    • game_status : node that stores the main system statuses. Can be contacted by a node to start the game or halt the entire system (timer ended, critical failure).
    • scheduler : this is the robot's AI. It selects which actions to perform and adapts the next ones in case of problems or errors (enemy blocking the navigation path, node not responding, servo blocked...) by exploring a possibility tree defined by the user.
    • scripts : auxiliary node of ai/scheduler. Lets us execute complex actions that need proper coding and that wouldn't be feasible with a simple action tree in scheduler.
    • timer : node that keeps track of how much time is left during the game. Publishes a HALT message via ai/game_status to stop all node actions.
  • memory
    • map : a database manager that holds all map walls, elements, waypoints, objects and robot characteristics. Also holds the state of the main robot's containers (how many balls in the robot's main container, how many cubes in tower #3...). Can generate occupancy images of the walls and dangerous objects (used by navigation/pathfinder).
    • definitions : node that holds all public definition files of all the nodes in the system. Replies to file retrieval requests with the path to the right file version according to which robot the system is on (GR or PR in our case).
  • navigation
    • navigator : Node that manages a robot trajectory from point A to B. Dynamically adapts to events (enemy blocking the way, imminent collision...) and automatically tries different routes to the destination before giving up.
    • pathfinder : takes the map BMP file generated by memory/map, which contains the positions of walls and dangerous objects, and replies to navigation/navigator's requests with a valid navigation path avoiding all static and dynamic obstacles.
    • collisions : listens to all sensor data and uses the robot's current speed and path to predict collisions with any dangerous obstacle while navigating. Notifies navigation/navigator when a collision is detected.
  • movement
    • actuators : dispatches movement action requests (motor, servo and slider commands... anything other than the wheel control loop) to the right driver connected to the motor being commanded.
  • recognition
    • localizer : listens to the data from all sensors and publishes the most accurate possible estimate of the robot's position.
    • enemy_tracker : listens to sensor data to identify the enemy robots' positions, tracking them over time.
    • cp_recognizer : uses the RGB camera to recognize the construction plan (2018-specific).
    • cube_finder : uses the filtered data from lidar_objects and the raw data to find the cubes' positions (2018-specific).
  • processing
    • obstacle_detector : listens to the lidar scans published by urg_node and simplifies the scan points into segments and circles. The circles are tracked over time and thus published with an instantaneous speed.
    • belt_interpreter : listens to the raw belt data and determines whether the measured distances correspond to a static wall of the map or to an unknown object.
  • drivers
    • camera : publishes either a photo on demand or a continuous video stream from an RGB camera.
    • ax12 : connects to the AX12 servos and executes the movement commands received from movement/actuators.
    • ard_asserv : connects to the Arduino handling the wheel control loop and forwards the action requests from navigation/navigator. Reports the control loop's status and the odometry data.
    • ard_others : connects to the Arduino handling the motors and servomotors and forwards the action requests from movement/actuators. Returns the data from the sensors wired to that Arduino (e.g. the belt).
    • ard_hmi : connects to the Arduino handling the strategy/team color selection screen and forwards the selected options to ai/scheduler.
    • pico_flexx : connects to the ToF camera and publishes the raw 3D point cloud.
    • urg_node : Hokuyo LiDAR driver. Connects to the Hokuyo (automatically finds the port) and publishes the raw scans on the /scan topic.
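The possibility-tree exploration done by ai/scheduler can be sketched roughly as follows. This is a minimal illustrative sketch, not the actual UTCoupe implementation: the class, action names and rewards are all hypothetical, and the real node also handles re-planning, errors and ROS communication.

```python
# Minimal sketch of a possibility-tree explorer, in the spirit of
# ai/scheduler's action selection. Each action has an expected reward
# and a feasibility flag (e.g. False when a servo is blocked); the
# explorer picks the feasible branch with the highest total reward.

class Action:
    def __init__(self, name, reward=0, feasible=True, children=None):
        self.name = name
        self.reward = reward          # points expected from this action
        self.feasible = feasible      # False if the action cannot run
        self.children = children or []

def best_plan(action):
    """Return (total_reward, [action names]) for the best feasible branch."""
    if not action.feasible:
        return 0, []
    best_reward, best_path = 0, []
    for child in action.children:
        r, p = best_plan(child)
        if r > best_reward:
            best_reward, best_path = r, p
    return action.reward + best_reward, [action.name] + best_path

# Hypothetical 2018-style action tree.
tree = Action("start", children=[
    Action("push_cubes", reward=15, children=[
        Action("build_tower", reward=30, feasible=False),  # servo blocked
        Action("fetch_balls", reward=10),
    ]),
    Action("fire_balls", reward=20),
])

reward, plan = best_plan(tree)
print(reward, plan)  # → 25 ['start', 'push_cubes', 'fetch_balls']
```

Marking build_tower as infeasible makes the explorer fall back to the next-best branch, which is the kind of adaptation described above when a servo blocks or a node stops responding.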
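The memory/map and navigation/pathfinder pairing can be illustrated with a toy occupancy grid: the map side produces a grid where occupied cells are walls or dangerous objects, and the pathfinder answers requests with a route avoiding them. This BFS sketch is purely illustrative (the real pathfinder works on the BMP image generated by memory/map and uses its own algorithm); every name here is hypothetical.

```python
# Illustrative grid pathfinding in the spirit of navigation/pathfinder:
# the grid plays the role of the occupancy image from memory/map
# (1 = wall or dangerous object, 0 = free), and BFS returns a
# shortest path of free cells from start to goal, or None.
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search; returns a list of (row, col) cells or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route: navigator would try another approach or give up

grid = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],  # wall with a gap on the right
    [0, 0, 0, 0],
]
path = find_path(grid, (0, 0), (2, 0))
print(path)  # goes around the wall through the gap
```

Returning None instead of raising mirrors the request/reply contract described above: navigation/navigator is the one deciding what to do when no valid path exists.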
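The check performed by navigation/collisions can be sketched as follows: project a tracked obstacle forward using its instantaneous speed (as published by processing/obstacle_detector) and flag a collision if it comes too close to the robot's planned path. The function name, thresholds and numbers below are illustrative assumptions, not the real node's API.

```python
# Rough sketch of a collision prediction, in the spirit of
# navigation/collisions: an obstacle with a known position and
# instantaneous velocity is projected over a short time horizon and
# compared against the planned path waypoints.
import math

def predicts_collision(path, obstacle_pos, obstacle_vel,
                       safety_radius=0.3, horizon=2.0, dt=0.1):
    """path: list of (x, y) waypoints in metres; obstacle_vel: (vx, vy) in m/s."""
    t = 0.0
    while t <= horizon:
        # Projected obstacle position at time t (constant-velocity model).
        ox = obstacle_pos[0] + obstacle_vel[0] * t
        oy = obstacle_pos[1] + obstacle_vel[1] * t
        for (px, py) in path:
            if math.hypot(px - ox, py - oy) < safety_radius:
                return True   # here the real node would notify navigator
        t += dt
    return False

path = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]           # straight segment
moving_enemy = predicts_collision(path, (1.0, 1.0), (0.0, -0.6))
parked_enemy = predicts_collision(path, (1.0, 1.0), (0.0, 0.0))
print(moving_enemy, parked_enemy)  # → True False
```

An enemy driving toward the path triggers the prediction while a stationary one far from it does not, which matches the "notify navigator only when a collision is seen" behaviour described above.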