Aircraft Assembly Robot
Design Lab at Rensselaer Capstone Project
Introduction
As part of my Spring 2024 senior capstone project, I collaborated with five other RPI students from different majors to further develop a robotic system designed to assist aircraft technicians with cable routing and installation during aircraft assembly.
​
This project aimed to streamline that labor-intensive process by enabling robots to navigate an aircraft assembly facility, pick up cables, and hold them in place, reducing physical strain on technicians.
​
At the start of the semester, we established key objectives, including demonstrating both manual and autonomous robot control, effective cable pickup, and obstacle detection using LIDAR. While the long-term goal is to deploy autonomous robots that work cohesively in a manufacturing plant, this semester’s efforts focused on developing a single robot to achieve these objectives.
Objectives Met
At the end of the semester, the team achieved significant milestones:
- Successfully integrated electronic control, enabling both keyboard and joystick operation of the robot’s base and arm using ROS2 Humble on a Raspberry Pi.
- Completed the integration of LIDAR and the drivetrain on the Raspberry Pi, demonstrating the robot’s ability to navigate and interact with its environment in a ROS2 Eloquent environment.
- Fully redesigned the robot’s electronics, including wiring, cooling, and mechanical mounting.
- Developed and tested a robust cable detection algorithm, ensuring reliable identification and handling of cables in the robot’s vicinity. The program is ready for deployment on any ROS2 system.
- Sourced, selected, and implemented a durable power source, incorporated safety fuses, and integrated an emergency stop mechanism to enhance operational safety and longevity.
- Designed and 3D-printed various mechanical components to house, connect, and support the robot’s subsystems.
My Role
My role focused on developing the LIDAR and motor control subsystems, enabling the robot to perceive its environment and navigate while avoiding obstacles. To achieve this, I worked with my team to determine which components we needed, how they would interact within the broader ROS software framework, and which hardware limitations we had to work around. I kept my teammates and our project engineer, who served as the team’s mentor, regularly updated on the progress of my subsystem and of the project as a whole.
​
Given that our team was incorporating new sensors and functionalities into the robot, we determined that a rework of the software integration and hardware components was necessary. After brainstorming several designs, we finalized the following robot system diagram:

Figure 1: Robotic System Diagram.
This modular design separates sensing, processing, and actuation, making expansion and debugging more straightforward. Because we opted for a simplified contour detection program instead of deep learning, the Raspberry Pi handles the LIDAR, camera, and joystick inputs, while motor control is offloaded to the ESP32 for smoother operation. While there is room for improvement, this setup aligns with our semester’s objectives.
​
While working on the LIDAR subsystem, we initially considered several mounting options, including placing it on the robotic arm, attaching it to the side of the robot, or using an external extension. However, these approaches introduced issues such as obstructed scanning, instability due to arm movement, and interference with other components. Given our need for 360° scanning and the compact design of the robot, we ultimately decided to elevate the arm using standoffs and a metal platform, allowing the LIDAR to be centrally mounted on the robot’s frame. This solution ensured full scanning capability, preserved arm mobility, and allowed the front-mounted camera to perform cable detection without obstruction.

Figure 2: Structural Design of the Robot.

Figure 3: Completed Robot.
With the LIDAR, camera, and electronics implemented, it was time to develop the system’s software. The robot uses ROS2 as the central framework for navigation and control. The team designed the arm, base, LIDAR, and computer vision subsystems to share data and commands with one another, allowing for seamless coordination and autonomous operation. The figure below shows a simplified version of the ROS2 system running on the Raspberry Pi. For more detail, please consult the document at the end of this page.

Figure 4: ROS2 System Graph.
In the diagram above, input devices such as the keyboard and controller provide two separate data streams for base control through distinct input nodes (base_keyboard_input and base_controller_input). These nodes publish MotionControl values (see the document at the end of the page for more details) to their corresponding topics (base_keyboard_head_vel and base_controller_head_vel). These values are then processed by an arbiter node (base_arbiter), which prioritizes and resolves conflicting commands to ensure smooth base operation. The processed MotionControl values are then sent to a driver node (base_driver), which converts high-level commands into low-level signals and transmits them to the ESP32 via USB.
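As an illustration of how such an arbiter might look, the sketch below is a minimal rclpy node that keeps the latest command from each input stream and republishes the higher-priority one. It is not the team’s actual implementation: std_msgs/Float32MultiArray (holding [heading, velocity, is_linear]) stands in for the custom MotionControl message, the output topic name base_head_vel is borrowed from the topic mentioned later on this page, and the joystick-over-keyboard priority policy is an assumption.

```python
# Hedged sketch of a base_arbiter-style node. Float32MultiArray stands in
# for the project's custom MotionControl message so the example stays
# self-contained; the priority policy and output topic are assumptions.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32MultiArray


class BaseArbiter(Node):
    def __init__(self):
        super().__init__('base_arbiter')
        # Latest command from each input stream (None until first message).
        self.keyboard_cmd = None
        self.controller_cmd = None

        self.create_subscription(Float32MultiArray, 'base_keyboard_head_vel',
                                 self.on_keyboard, 10)
        self.create_subscription(Float32MultiArray, 'base_controller_head_vel',
                                 self.on_controller, 10)
        self.pub = self.create_publisher(Float32MultiArray, 'base_head_vel', 10)
        self.create_timer(0.05, self.publish_selected)  # 20 Hz output (assumed)

    def on_keyboard(self, msg):
        self.keyboard_cmd = msg

    def on_controller(self, msg):
        self.controller_cmd = msg

    def publish_selected(self):
        # Assumed policy: the joystick wins whenever it has sent a command;
        # otherwise fall back to the keyboard stream. Staleness handling
        # (timing out old commands) is omitted for brevity.
        cmd = self.controller_cmd if self.controller_cmd is not None else self.keyboard_cmd
        if cmd is not None:
            self.pub.publish(cmd)


def main():
    rclpy.init()
    node = BaseArbiter()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```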
​
The arm is controlled using the Interbotix ROS2 library. Since it does not accept standard joystick input, a translation node (aircraft_ds4_interbotix_adapter) converts controller values into compatible arm movements.
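The sketch below illustrates the idea behind such a translation node under stated assumptions: it reads sensor_msgs/Joy, scales two stick axes into small end-effector displacements, and publishes them on a hypothetical arm_delta topic as a Float32MultiArray rather than calling the Interbotix API directly. The axis indices and scaling factor are placeholders, not the team’s mapping.

```python
# Hedged sketch of the aircraft_ds4_interbotix_adapter idea: map DS4 stick
# axes (sensor_msgs/Joy) to incremental arm motions. A Float32MultiArray
# [dx, dy, dz] on a hypothetical 'arm_delta' topic stands in for the real
# Interbotix interface; axis indices and scale are assumptions.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Joy
from std_msgs.msg import Float32MultiArray


class DS4ArmAdapter(Node):
    def __init__(self):
        super().__init__('aircraft_ds4_interbotix_adapter')
        self.create_subscription(Joy, 'joy', self.on_joy, 10)
        self.pub = self.create_publisher(Float32MultiArray, 'arm_delta', 10)
        self.scale = 0.01  # meters per tick at full stick deflection (assumed)

    def on_joy(self, msg):
        if len(msg.axes) < 2:
            return
        # Axis indices are controller-specific and assumed here.
        dx = self.scale * msg.axes[1]
        dy = self.scale * msg.axes[0]
        dz = self.scale * msg.axes[4] if len(msg.axes) > 4 else 0.0
        out = Float32MultiArray()
        out.data = [dx, dy, dz]
        self.pub.publish(out)


def main():
    rclpy.init()
    rclpy.spin(DS4ArmAdapter())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```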
​
The camera node (camera) processes visual data and publishes movement data for cable navigation. Meanwhile, LIDAR data is managed by the lidar_control node, which plays a key role in obstacle detection and avoidance. If necessary, it can publish to the all_stop topic, a safety feature designed to halt all robot movement.
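Because the design uses contour detection rather than deep learning, the rough OpenCV sketch below shows how a cable-like shape might be picked out of a frame and its horizontal offset reported for navigation. The threshold value, minimum contour area, and camera index are illustrative assumptions, not the team’s tuned parameters.

```python
# Hedged sketch of contour-based cable detection, in the spirit of the
# camera node described above. Thresholds and the offset heuristic are
# illustrative assumptions.
import cv2


def find_cable_offset(frame_bgr):
    """Return the horizontal offset (pixels) of the largest dark contour
    from the image center, or None if nothing cable-like is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Cables are assumed darker than the background, hence the inverted threshold.
    _, mask = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if cv2.contourArea(largest) < 500:  # reject small specks (assumed cutoff)
        return None
    x, _, w, _ = cv2.boundingRect(largest)
    return (x + w / 2.0) - frame_bgr.shape[1] / 2.0


if __name__ == '__main__':
    cap = cv2.VideoCapture(0)  # default camera index (assumed)
    ok, frame = cap.read()
    if ok:
        print('cable offset (px):', find_cable_offset(frame))
    cap.release()
```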
​
One of our goals was to implement autonomous navigation using LIDAR and SLAM. However, due to time constraints and the need to debug complex system issues toward the end of the semester, we opted for a simpler obstacle detection approach instead. This allowed us to ensure reliable operation while maintaining core functionality.
Implementing the LIDAR Subsystem
With the LIDAR sensor securely mounted and the ROS2 workflow planned, I began implementing directional obstacle detection. The ESP32 microcontroller manages motion control for the omnidirectional Mecanum wheels using encoder data from the four motors. Commands are sent to the ESP32 in the form of a MotionControl data type with the following parameters (a sketch of how these values might travel to the ESP32 appears after the list):
- Heading: The intended movement direction in radians.
- Velocity: The speed in centimeters per second.
- Linear/Angular: A Boolean flag indicating whether the Velocity value corresponds to linear (translational) or angular (rotational) movement.
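As a rough sketch of how these fields might be carried to the ESP32, the dataclass below mirrors the parameters listed above, and pack_frame shows one plausible serial framing sent with pyserial. The frame layout, port name, and baud rate are all assumptions; the actual wire protocol between base_driver and the ESP32 is not documented here.

```python
# Hedged sketch: pack the MotionControl fields above into a hypothetical
# serial frame for the ESP32. Frame format, port, and baud rate are assumed.
import struct
from dataclasses import dataclass

import serial  # pyserial


@dataclass
class MotionControl:
    heading: float     # intended movement direction, radians
    velocity: float    # speed, cm/s
    is_linear: bool    # True = translational, False = rotational


def pack_frame(cmd: MotionControl) -> bytes:
    # Hypothetical frame: 0xAA start byte, two little-endian floats,
    # one flag byte, and a simple XOR checksum.
    payload = struct.pack('<ffB', cmd.heading, cmd.velocity, int(cmd.is_linear))
    checksum = 0
    for b in payload:
        checksum ^= b
    return bytes([0xAA]) + payload + bytes([checksum])


if __name__ == '__main__':
    frame = pack_frame(MotionControl(heading=1.57, velocity=20.0, is_linear=True))
    with serial.Serial('/dev/ttyUSB0', 115200, timeout=1) as port:  # assumed port
        port.write(frame)
```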
​
The LIDAR sensor utilizes heading data from the motor drivers to dynamically adjust the scan region, ensuring that only obstacles in the robot’s current path trigger a stop command. For this application, the RPLIDAR A2M12 360° LIDAR sensor was chosen due to its affordability, lightweight design, and technical specifications, which include:
- A 0.2–12 m scan range, suitable for close- to medium-range sensing.
- An adjustable scan rate of up to 10 Hz (600 RPM).
- A sampling rate of 12,000 samples per second.
​
The ROS2 program below consists of nodes that publish LIDAR data, filter it down to the relevant scan ranges, and send stop or resume commands to the motors via a ROS2 topic.

Figure 5: LIDAR System Graph.
The LIDAR system consists of two primary ROS2 nodes:
- The SLLIDAR Node: Interfaces with the RPLIDAR A2M12 sensor and publishes LaserScan messages to the /scan topic.
- The LIDAR Node: Subscribes to /scan, processes the scan data, and publishes stop/resume commands to the /all_stop topic.
​
The LIDAR Node does not process all LaserScan readings but restricts detection to a specific heading based on robot trajectory data received from the /base_head_vel topic. This ensures that only obstacles directly in the robot’s intended path trigger a stop command, reducing false detections from surrounding objects.
The LIDAR Node’s logic follows (a minimal sketch appears after this list):
- If an obstacle is detected within the specified range and heading, it publishes True to /all_stop, signaling the motor system to halt.
- If no obstacle is detected in the path, it publishes False, allowing motion to resume.
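A minimal rclpy sketch of this logic is shown below. The ±30° window around the commanded heading, the 0.5 m stop distance, and the use of Float32MultiArray (data[0] = heading) in place of the custom MotionControl message on /base_head_vel are all assumptions made for illustration.

```python
# Hedged sketch of the lidar_control logic described above. Thresholds and
# the stand-in message type on /base_head_vel are assumptions.
import math

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from std_msgs.msg import Bool, Float32MultiArray


class LidarControl(Node):
    def __init__(self):
        super().__init__('lidar_control')
        self.heading = 0.0                  # commanded heading, radians
        self.window = math.radians(30.0)    # half-width of the scan window (assumed)
        self.stop_distance = 0.5            # meters (assumed threshold)

        self.create_subscription(Float32MultiArray, 'base_head_vel',
                                 self.on_heading, 10)
        self.create_subscription(LaserScan, 'scan', self.on_scan, 10)
        self.stop_pub = self.create_publisher(Bool, 'all_stop', 10)

    def on_heading(self, msg):
        self.heading = msg.data[0]

    def on_scan(self, scan):
        obstacle = False
        angle = scan.angle_min
        for r in scan.ranges:
            # Only consider beams near the commanded heading; skip readings
            # below the sensor's minimum valid range.
            diff = math.atan2(math.sin(angle - self.heading),
                              math.cos(angle - self.heading))
            if abs(diff) <= self.window and scan.range_min < r < self.stop_distance:
                obstacle = True
                break
            angle += scan.angle_increment
        self.stop_pub.publish(Bool(data=obstacle))


def main():
    rclpy.init()
    rclpy.spin(LidarControl())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```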
​
When a stop command is issued, each wheel’s dedicated PID controller gradually decreases its speed, enabling a smooth and controlled stop rather than an abrupt halt.
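The ramp-down itself happens inside the ESP32’s per-wheel PID loops, so the snippet below is only a conceptual illustration of decaying a speed setpoint toward zero in fixed steps; the step size is arbitrary.

```python
# Illustration only: the actual ramp-down runs in the ESP32 firmware. This
# shows the idea of stepping the commanded speed down to zero over several
# control ticks rather than stopping abruptly.
def ramp_to_stop(current_speed_cm_s, step_cm_s=5.0):
    """Yield successively smaller speed setpoints until the wheel stops."""
    speed = current_speed_cm_s
    while abs(speed) > step_cm_s:
        speed -= step_cm_s if speed > 0 else -step_cm_s
        yield speed
    yield 0.0


if __name__ == '__main__':
    # A wheel moving at 20 cm/s ramps down as: 15, 10, 5, 0
    print(list(ramp_to_stop(20.0)))
```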
Results
Near the end of the semester, we faced significant challenges integrating all subsystems. While each component, whether the arm, base, or LIDAR, worked properly in isolation, their interactions introduced unexpected issues. For example, as seen in the video below, communication between the remote controller and the base experienced delays or disconnections, and at times the arm activated on its own without any input 😨. I apologize for the low video and audio quality.
These issues stemmed from errors in our ROS2 code, where certain nodes were not activating or shutting down correctly (among other problems). After extensive debugging, we successfully resolved these issues, allowing us to fully integrate all subsystems in time for both our final demonstration and the Capstone Showcase, where all senior-year capstone projects were presented and evaluated.
​
As a result of our efforts, our team received the Outstanding Technical Approach Award, which recognized us as the top-performing Spring 2024 Capstone team for our innovation, technical execution, and teamwork.
​
The videos below showcase some of the completed robot’s functionality. While they provide a general overview, additional features—such as contour-based cable detection using the camera, the emergency E-stop and battery subsystem, and the full capabilities of the arm when integrated with the LIDAR and motors—are not fully shown here.

Figure 6: Motor and Arm Test Fail.

Figure 7: Driving and LIDAR Sensing Demonstration.

Figure 8: Arm Manipulability Demonstration.

Figure 9: Arm Pick Up Demonstration.
The images below are from the final day presenting the Capstone project.

Figure 10: Capstone Presentation Day Poster.

Figure 11: Robotic Aircraft Assembly Mentors and Team (Left-to-Right: Dr. Santiago Paternain, Dr. Alex Patterson, Aircraft Assembly Team, Senior Research Engineer Mr. Glenn Saunders, Senior Project Engineer Dr. Kannathal Natarajan).

Figure 12: Robotic Aircraft Assembly Team (Left-to-Right: Kalen Akins, Om Anavekar, Mark DiPilato, Alejandro Begara Criado, Kaiwen (Kevin) Yang, Kaihong Lin).
Acknowledgements
I would like to thank my teammates Kalen, Om, Mark, Kevin, and Kai for their incredible knowledge, collaboration, and dedication throughout the capstone project. I also wish to thank our mentors and advisors — Dr. Santiago Paternain, Dr. Alex Patterson, Dr. Kannathal Natarajan, and Mr. Glenn Saunders — for their valuable guidance and support. Lastly, I would like to thank the Design Lab at Rensselaer for providing the resources and environment that enabled us to successfully develop this project.