Overview
Cristobal was developed for a robotics challenge whose goal was to design a robot capable of overcoming various obstacles, adapting to dynamic environments, and executing vision-based tasks. The robot is built around a Raspberry Pi, which serves as its central processing unit, complemented by Mindstorms components for sensing and actuation.
The challenge environment comprises a series of sections, each presenting distinct obstacles: (1) an area with two pillars requiring the robot to perform an angular maneuver to avoid them, (2) an area featuring dynamic obstacles not present in the initial map, and (3) an open area where the robot must locate, acquire, and transport a red ball to a designated exit.

Camera view of a simplified scenario. The robot starts in the right area. The central area is shared with another robot and contains the exits.

Left: Representation of the robot's internal virtual map. The green rectangle indicates the starting point, while yellow rectangles denote goal regions. Red glyphs represent the robot's objective poses, and blue glyphs its measured poses.
Middle: Plots of measured (blue) and objective (red) velocities at each timestamp, prior to filtering and sensor fusion.
Right: Plots of the same velocities after Kalman Filter application, illustrating the signal smoothing effect.
Robot Structure
The robot's physical configuration includes:
- A core brick housing a Raspberry Pi and a communication board for actuator interfaces.
- A battery pack for device power.
- Two wheels equipped with rotary encoders and corresponding motors for locomotion.
- A pair of gripping tweezers driven by a dedicated motor for object acquisition.
- A camera providing visual perception capabilities.
- A gyroscope for angular motion sensing.
- A compass for orientation determination.
- A sonar sensor for measuring the distance to obstacles in front of the robot.
Software Architecture
The system's core is implemented as an asynchronous message-passing architecture, conceptually similar to ROS. This design leverages the Raspberry Pi's multi-core processor to increase computational throughput without compromising real-time performance. Because subsystems only synchronize when messages are exchanged, each one can run at its own maximum rate. This matters because vision processing is far more computationally expensive than odometry processing; coupling them in a single loop would reduce the odometry update rate and directly degrade the robot's self-localization.
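As a rough sketch of this message-passing design, the snippet below runs two subsystems in separate processes that communicate through queues; the topic names, rates, and message layout are illustrative assumptions, not the project's actual interfaces.

```python
import multiprocessing as mp
import time

def odometry_worker(pose_queue):
    """Publish pose estimates at a fixed, high rate (illustrative values)."""
    pose = (0.0, 0.0, 0.0)  # x, y, theta
    while True:
        # ... integrate encoder/gyroscope readings here ...
        pose_queue.put(("pose", time.time(), pose))
        time.sleep(0.02)  # ~50 Hz, independent of slower consumers

def vision_worker(pose_queue, detection_queue):
    """Consume poses and publish (much slower) detection results."""
    while True:
        topic, stamp, pose = pose_queue.get()  # blocks until a message arrives
        # ... run expensive image processing using the latest pose ...
        detection_queue.put(("ball_detection", stamp, None))

if __name__ == "__main__":
    pose_q, detection_q = mp.Queue(), mp.Queue()
    workers = [
        mp.Process(target=odometry_worker, args=(pose_q,), daemon=True),
        mp.Process(target=vision_worker, args=(pose_q, detection_q), daemon=True),
    ]
    for w in workers:
        w.start()
    time.sleep(1.0)  # let the pipeline run briefly for the example
```

Because each worker is a separate process, a slow vision iteration never blocks the odometry loop; at most it leaves messages queued.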

Diagram illustrating the system architecture.
Main System
This module manages the robot's external system connections and internal operations:
- Main Controller: Receives commands via an SSH connection.
- Logging: Stores data essential for debugging and performance analysis.
- Sensor Management: Handles connections and data acquisition from various onboard sensors.
- Motor Control: Commands the robot's motors using a PID control scheme, optimizing movement precision and stability.
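As an illustration of the PID scheme, a minimal velocity controller could look like the following; the gains, update period, and the 200 deg/s target are placeholder values, and the actual interface to the Mindstorms motors is omitted.

```python
class PID:
    """Basic discrete PID controller for a single motor velocity loop."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a wheel toward 200 deg/s using encoder feedback.
left_pid = PID(kp=0.8, ki=0.2, kd=0.05, dt=0.02)  # illustrative gains
command = left_pid.update(target=200.0, measured=180.0)
```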
Odometry System
This module processes and fuses odometry data from sensors to provide an aggregated pose estimation for other subsystems. Accurate and high-frequency pose estimation is critical, as all subsequent robot behaviors rely on this information.
The robot's state is defined by its current position, linear velocity, and angular velocity. Multiple sensors contribute measurements for state computation, and a Kalman Filter is employed to fuse and denoise them. The system also incorporates logic to detect and mitigate unreliable sensor readings (e.g., compass inaccuracies near magnetic fields), dynamically re-prioritizing the remaining sensors for state estimation in such scenarios.
Specifically, it leverages the following sensors:
- Wheel Encoders: Provide estimates of linear and angular velocity derived from each wheel's individual motion. Measurements can be erroneous when the wheels slip.
- Compass: Offers orientation measurements, but is susceptible to noise from the robot's internal magnetic fields or external structures.
- Gyroscope: Provides angular velocity measurements, though performance can be affected by thermal drift.
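The filter itself is easiest to see in one dimension. The sketch below fuses the angular velocity reported by the encoders and by the gyroscope into a single smoothed estimate; the noise parameters are made-up values, and the real filter also tracks position and linear velocity.

```python
class AngularRateKF:
    """One-dimensional Kalman Filter over the robot's angular velocity (illustrative)."""

    def __init__(self, q=0.01, r_encoder=0.5, r_gyro=0.1):
        self.x = 0.0  # estimated angular velocity
        self.p = 1.0  # estimate variance
        self.q = q    # process noise
        self.r = {"encoder": r_encoder, "gyro": r_gyro}

    def predict(self):
        # Constant-velocity model: the state is unchanged, uncertainty grows.
        self.p += self.q

    def correct(self, z, sensor):
        # A sensor flagged as unreliable can be skipped or given a larger r.
        k = self.p / (self.p + self.r[sensor])  # Kalman gain
        self.x += k * (z - self.x)
        self.p *= 1.0 - k
        return self.x

kf = AngularRateKF()
kf.predict()
kf.correct(0.42, "encoder")  # fuse the encoder-derived angular velocity
kf.correct(0.38, "gyro")     # then the gyroscope reading
```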
Additionally, an experimental mode integrates sonar data into the sensor fusion. This approach leverages the robot's internal map representation to compute the closest "surface" along the sonar's line of sight via ray casting. By comparing this computed value with actual sonar measurements, the system can potentially enhance linear motion estimation and detect dynamic environmental changes.
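A possible way to obtain the expected sonar reading is to step along the sensor's line of sight through the internal occupancy grid, as sketched below; the grid layout, cell size, and maximum range are assumptions made for the example.

```python
import math

def expected_sonar_range(grid, cell_size, x, y, theta, max_range=2.0, step=0.01):
    """Step along the sonar ray until an occupied cell (or max_range) is reached.

    grid[i][j] is True where the internal map marks a cell as occupied;
    (x, y, theta) is the robot pose in metres/radians; the result is in metres.
    """
    distance = 0.0
    while distance < max_range:
        px = x + distance * math.cos(theta)
        py = y + distance * math.sin(theta)
        i, j = int(py / cell_size), int(px / cell_size)
        out_of_bounds = i < 0 or j < 0 or i >= len(grid) or j >= len(grid[0])
        if out_of_bounds or grid[i][j]:
            return distance
        distance += step
    return max_range
```

A large mismatch between this expected value and the actual sonar reading then hints at a dynamic obstacle that is not yet on the map.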
Path Planning System
This system is responsible for computing the robot's subsequent movements based on its current location (provided by other systems) and its designated goal. It incorporates three distinct behaviors:
- Movement Compositor: Manages the sequential execution of linear and angular movements to guide the robot to a specified position. This includes enforcing constraints on angular displacement and speed to ensure stability.
- Path Planning: Utilizes a virtual voxel-based discretization of the robot's surrounding environment. Upon receiving a target position, an A* algorithm is executed to determine the shortest path (a sketch follows this list). Given the dynamic nature of the environment, the path is re-computed after every map update.
- Hunter Mode: When a red ball is detected within the designated hunting area, this mode receives the ball's image plane coordinates. It then computes the necessary movements to approach the ball and initiates the gripping mechanism.
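A compact version of the A* search over the discretized map could look like the sketch below; the 4-connected grid, unit step cost, and Manhattan heuristic are simplifying assumptions.

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path on a 2-D occupancy grid (True = blocked), 4-connected."""
    def heuristic(cell):  # Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(heuristic(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (current[0] + di, current[1] + dj)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]]:
                continue  # occupied cell
            new_cost = cost + 1
            if new_cost < g.get(nxt, float("inf")):
                g[nxt] = new_cost
                came_from[nxt] = current
                heapq.heappush(open_set, (new_cost + heuristic(nxt), new_cost, nxt))
    return None  # no path; the caller waits for the next map update and re-plans
```

When a dynamic obstacle is detected, the corresponding cells are marked as occupied and the search is simply re-run from the robot's current cell.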
Vision System
This system manages the robot's visual perception of its environment, serving two primary functions during the challenge:
- Ball Detection: Identifies a red ball within acquired images. For efficiency, images are first transformed into the CIELAB color space to better discriminate red shades. Pixels matching the color criteria are thresholded, nearby unmasked points are clustered, the radius of each cluster is computed, and the largest cluster is returned as the match (a sketch follows this list). The detected position is then relayed to the path planning system to facilitate ball acquisition.
- Place Recognition: Although the scenario is not ideal for full SLAM due to insufficient texture points, it features markers with known positions. The system attempts to locate these markers in its visual field using a point localization algorithm. If a sufficient set of matches is found, a Perspective-n-Point (PnP) algorithm is executed to determine the camera's pose relative to the markers. This pose information is subsequently transmitted to other systems for self-localization correction.
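The ball-detection step could be sketched with OpenCV roughly as follows; the a*-channel threshold, minimum cluster size, and the use of connected components in place of the project's exact clustering routine are assumptions.

```python
import cv2
import numpy as np

def detect_red_ball(image_bgr, a_threshold=150, min_area=50):
    """Return (cx, cy, radius) of the largest red cluster, or None."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    _, a_channel, _ = cv2.split(lab)
    # Red shades sit at high a* values; the exact threshold is an assumption.
    mask = cv2.threshold(a_channel, a_threshold, 255, cv2.THRESH_BINARY)[1]
    # Group nearby unmasked pixels into clusters (connected components here).
    count, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    best = None
    for label in range(1, count):  # label 0 is the background
        area = stats[label, cv2.CC_STAT_AREA]
        if area >= min_area and (best is None or area > stats[best, cv2.CC_STAT_AREA]):
            best = label
    if best is None:
        return None
    cx, cy = centroids[best]
    radius = float(np.sqrt(stats[best, cv2.CC_STAT_AREA] / np.pi))  # area-equivalent radius
    return cx, cy, radius
```

For place recognition, once marker detections in the image are paired with their known positions, the camera pose could be recovered with OpenCV's solvePnP, as sketched below; the marker coordinates and camera intrinsics are placeholders that would come from the map and calibration.

```python
def camera_pose_from_markers(object_points, image_points, camera_matrix, dist_coeffs):
    """object_points: Nx3 known marker positions; image_points: Nx2 detections."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, dtype=np.float32),
        np.asarray(image_points, dtype=np.float32),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)  # world-to-camera rotation matrix
    # Invert the transform to obtain the camera pose in the marker/world frame.
    return rotation.T, -rotation.T @ tvec
```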