January 2025 – May 2025
The Autonomous Firefighting Vehicle (AFV) is intended to be a fully autonomous vehicle capable of traversing terrain, locating a fire source, and extinguishing it without human intervention, aside from a long-range ground station used to monitor the status of the vehicle.
During this semester, we developed autonomous travel for the AFV using path correction through a set of defined waypoints. This ongoing project was completed in the Spring of 2025 for the Electrical and Computer Engineering Capstone program at Oklahoma State University and was awarded the best OSU ECE capstone project for that semester.
AFV Ground Station Pathing Interface
Path correction for the AFV uses Model Predictive Path Integral (MPPI) control, in which a set of predicted paths is rolled out from randomly perturbed control sequences of steering and velocity, and a cost-weighted average of those sequences determines the next command. The sequence is initiated by a remote ground station, where an operator enters a set of waypoints to be sent over a radio link. The same radio connection updates the ground station on the position and orientation of the vehicle through a GUI.
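The weighting step at the heart of MPPI can be sketched in a few lines of Python. This is an illustrative model only: the unicycle dynamics, noise levels, and distance-to-waypoint cost below are assumptions for demonstration, not the AFV's actual implementation.

```python
import math
import random

def rollout_cost(x, y, heading, controls, waypoint, dt=0.1):
    """Simulate a simple unicycle model (steering input treated as a yaw
    rate) and return the final distance to the waypoint as the cost."""
    for steer, vel in controls:
        heading += steer * dt
        x += vel * math.cos(heading) * dt
        y += vel * math.sin(heading) * dt
    return math.hypot(waypoint[0] - x, waypoint[1] - y)

def mppi_step(state, nominal, waypoint, samples=64, sigma=(0.3, 0.5), lam=1.0):
    """One MPPI update: perturb the nominal control sequence, roll each
    sample forward, then return the cost-weighted average sequence."""
    x, y, heading = state
    perturbed, costs = [], []
    for _ in range(samples):
        seq = [(s + random.gauss(0, sigma[0]), v + random.gauss(0, sigma[1]))
               for s, v in nominal]
        perturbed.append(seq)
        costs.append(rollout_cost(x, y, heading, seq, waypoint))
    beta = min(costs)  # subtract the best cost for numerical stability
    weights = [math.exp(-(c - beta) / lam) for c in costs]
    total = sum(weights)
    return [(sum(w * seq[t][0] for w, seq in zip(weights, perturbed)) / total,
             sum(w * seq[t][1] for w, seq in zip(weights, perturbed)) / total)
            for t in range(len(nominal))]
```

Executing the first control of the averaged sequence and re-planning each cycle yields the receding-horizon behavior used for path correction.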
Radio messages were defined using MAVLink so that a message header could identify the data type on receipt, giving future teams a robust and expandable messaging scheme to iterate upon.
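The header-first decoding idea can be illustrated with a simplified sketch. The real MAVLink protocol defines its own framing, checksums, and message set; the message IDs and payload layouts here are hypothetical.

```python
import struct

# Hypothetical message IDs in the spirit of a MAVLink-style scheme.
MSG_POSITION = 1   # lat, lon, heading
MSG_WAYPOINT = 2   # index, lat, lon

def pack_position(lat, lon, heading):
    # Header: one message-ID byte; payload: three little-endian float32s.
    return struct.pack("<Bfff", MSG_POSITION, lat, lon, heading)

def unpack(frame):
    """Read the header byte first, then decode the payload by type."""
    msg_id = frame[0]
    if msg_id == MSG_POSITION:
        return msg_id, struct.unpack("<fff", frame[1:13])
    if msg_id == MSG_WAYPOINT:
        return msg_id, struct.unpack("<Hff", frame[1:11])
    raise ValueError("unknown message id %d" % msg_id)
```

Because the receiver branches on the header before touching the payload, new message types can be added without changing existing handlers.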
A GNSS module captures the global position of the vehicle, and a gyroscope measures yaw (rotation about the Z axis); together they feed a location model that estimates the position and heading of the AFV.
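A minimal sketch of such a location model, with hypothetical class and method names: GNSS fixes set the position, and the gyro's yaw rate is integrated to track heading between fixes.

```python
class PoseEstimator:
    """Illustrative location model: GNSS provides position, and the
    gyroscope's Z-axis rate is integrated into a heading estimate."""

    def __init__(self, lat=0.0, lon=0.0, heading=0.0):
        self.lat, self.lon, self.heading = lat, lon, heading

    def update_gyro(self, yaw_rate_dps, dt):
        # Integrate the yaw rate (deg/s) into heading, wrapped to [0, 360).
        self.heading = (self.heading + yaw_rate_dps * dt) % 360.0

    def update_gnss(self, lat, lon):
        # A GNSS fix overwrites the position estimate directly.
        self.lat, self.lon = lat, lon
```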
Other technologies present on the AFV include an infrared camera (FLIR) capable of capturing infrared-emitting patches to localize a heat source, along with a turret capable of firing firefighting foam to extinguish a fire.
With path correction and ground-station communication in place, long-range travel can be implemented using geofencing to navigate around infrastructure, and a LIDAR module can add object detection and obstacle avoidance for a more robust navigation solution.
2023 | itch.io
Pico Puppets is an online peer-to-peer multiplayer party game developed in the Unity game engine for Windows and macOS. I began this project to strengthen my network programming skills: it implements server querying, network synchronization, and cloud API integration to let users play online together.
Eight players take turns removing pieces from a block tower, while also competing in mini-games that grant the winner an advantage through special stickers. When the tower falls, the responsible player is eliminated, and the game continues until one player is left standing.
Players can join public lobbies hosted by other users through Unity's Lobby service, with peer-to-peer connections handled by Unity Relay. A code is generated for each lobby instance that other players can use to join the room. A quick-search feature queries all lobbies with open player slots and creates a new lobby if none exists, so a player can enter a game in a single click.
Online interactions are synchronized using Mirror Networking, whose network calls can be added to C# scripts to keep fields consistent across clients. The lobby host acts as both server and client, processing server and client commands on a single system.
August 2020 – May 2021 | NASA HUNCH Design and Prototyping 2021 Finalist | GitHub
This project was designed and submitted to the NASA HUNCH national competition in the Design and Prototyping division, Simulated Gravity (VR/AR) category, and was selected as a finalist for 2021.
As required by the competition, a VR simulation was developed to portray an astronaut's experience in a spacecraft producing artificial gravity through constant rotational velocity. Many elements of life in space were replicated through interactable objects in the virtual environment, along with the differences in gravitational forces generated by the rotating vessel.
To monitor the changes in behavior of previous astronauts and astronaut candidates in artificial gravity, tasks present in daily activities were implemented throughout the spacecraft.
Other tools showcase scenarios that may be encountered, such as halting the rotation of the vessel so that no gravitational force is exerted on the astronauts. In this circumstance, a subject can move by pushing off the walls of the enclosure.
The artificial gravity solution proposed by NASA engineers requires that a spacecraft be maintained at a constant rotational velocity to enact a perceived centrifugal force on its inhabitants, along with all items onboard.
However, another apparent force is present in this environment: the Coriolis force, which deflects moving objects so that they do not behave as they would under constant linear acceleration. Its effects are felt most strongly near the axis of rotation, where the centrifugal force is much weaker.
This force has been applied to each moving object in the simulation to display how movement differs in this environment, and how astronauts may adjust to it.
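The two apparent forces can be compared numerically. This sketch assumes a 2-D cross-section of the rotating habitat; the radii and spin rate are illustrative constants, not values taken from the simulation.

```python
import math

def apparent_accelerations(omega, r, v_radial, v_tangential):
    """Apparent accelerations in a frame rotating at omega (rad/s), at
    radius r (m) from the spin axis, for an object moving with the given
    velocity components (m/s) in the rotating frame."""
    centrifugal = omega ** 2 * r                      # outward; stands in for gravity
    coriolis = 2.0 * omega * math.hypot(v_radial, v_tangential)
    return centrifugal, coriolis
```

At a fixed spin rate the Coriolis term depends only on speed, while the centrifugal term shrinks with radius, which is why the Coriolis deflection is relatively more noticeable near the axis.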
Breadboard Configuration Implementing a PA1010D GPS
October 2024 – January 2025 | GitHub
To reduce the program and dynamic memory overhead of the Adafruit_GPS library on Arduino Uno hardware, I developed a lightweight parsing framework that handles polling of NMEA RMC sentences.
The PA1010D is a GPS module designed for embedded systems; this library provides a class for it, with methods to configure the GPS (transmit and receive pins, polling period, and baud rate), poll it, and store the collected fields.
The library compiles to 7,372 bytes of program memory and requires 437 bytes of dynamic memory when flashed from the Arduino IDE. Because it is written in C++, it can be used on any microcontroller that supports C++ library integration.
To conserve memory and active processor time on the microcontroller, the library converts fields to CSV for transmission to an external processor. The reduced processor overhead and memory allocation allow the microcontroller to prioritize polling multiple peripheral devices while data processing is handled by a master system.
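The parse-and-flatten flow can be sketched as follows. The actual library is written in C++ and validates the NMEA checksum; this Python sketch omits checksum validation and keeps only a few representative fields.

```python
def parse_rmc(sentence):
    """Parse a GPRMC sentence into a dict of raw string fields.
    Returns None for non-RMC sentences or when the fix is not valid."""
    body = sentence.strip().lstrip("$").split("*")[0]  # drop '$' and checksum
    fields = body.split(",")
    if not fields[0].endswith("RMC") or fields[2] != "A":
        return None
    return {
        "time": fields[1],
        "lat": fields[3], "lat_dir": fields[4],
        "lon": fields[5], "lon_dir": fields[6],
        "speed_knots": fields[7],
    }

def to_csv(rmc):
    """Flatten the parsed fields to one CSV line for the host processor."""
    return ",".join([rmc["time"], rmc["lat"], rmc["lat_dir"],
                     rmc["lon"], rmc["lon_dir"], rmc["speed_knots"]])
```

Keeping fields as raw strings and deferring numeric conversion to the host is what keeps the microcontroller's share of the work small.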
Sample Program to Display Fields Captured by the PA1010D
A simple example showing a polling loop using the GPS class object provided by the PA1010D
October 2023 – December 2023 | Github
To apply machine learning to robotics in manufacturing, this demonstration displays a set of techniques that could be implemented to train a multipurpose mechanical arm. This machine is intended to be programmed for various tasks in the assembly of a product or other industrial functions. Rather than manually program behavior for the arm at each stage, this demonstration proposes a pipeline through which actions can be trained using a simulated environment and machine learning.
The simulated environment was created in Unity, and the neural network was constructed through the Unity ML-Agents add-on. C# scripts define the physical movement of the arm as well as the integration of the neural network and the training criteria.
The task for the agent (the robotic arm) to complete was defined as picking up an object and placing it in a container. This was kept simple to reduce the complexity of training the neural network.
Training "episodes" were conducted in batches as seen on the right, where a number of trials would progress independently.
The methods of machine learning utilized for this demonstration are reinforcement learning, imitation learning, and curriculum learning.
Reinforcement learning yielded the best results, with the criteria limited to placing the object in the container. While the task of moving the object was completed, the desired behavior of picking up and placing the item was ignored; instead, the arm would nudge the object into the container.
Such early optimization is expected when reinforcement learning is the sole learning method applied, as this scenario leaves little room for intermediate rewards.
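The difference between the sparse criterion used and a hypothetical shaped alternative can be sketched as reward functions; the weights and state fields below are illustrative assumptions, not values from the trained agent.

```python
def sparse_reward(object_in_container):
    """The criterion actually used: only the end state is scored, so any
    behavior that gets the object into the container is reinforced,
    including nudging it along the floor."""
    return 1.0 if object_in_container else 0.0

def shaped_reward(grasped, lift_height, object_in_container):
    """A hypothetical shaped alternative: intermediate rewards for
    grasping and lifting would make the nudging shortcut less optimal."""
    reward = 0.0
    if grasped:
        reward += 0.2                        # partial credit for grasping
    reward += 0.1 * min(lift_height, 1.0)    # reward lifting, capped at 1 m
    if object_in_container:
        reward += 1.0                        # terminal success bonus
    return reward
```

Under the shaped function, nudging the object in still scores, but the full pick-and-place behavior scores strictly higher.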
For this demonstration, imitation learning combined two algorithms, "behavioral cloning" and "curiosity", where the latter injects randomness into the actions the agent imitates.
Since imitation learning requires human demonstrations to be sampled by the neural network, it falls short in this setting: the joints of the robotic arm do not lend themselves to user-accessible controls, so demonstrations were limited in number.
Curriculum learning defines a series of stages for the neural network, where the task at each stage may vary. Here the curriculum attempted to teach the agent to first grab the object, then lift it, and finally place it in the container. However, early optimization prevented the agent from performing successfully in later stages, causing it to persist in the intermediate behavior.
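The staged progression can be sketched as a list of success criteria that gate advancement; the stage names, state fields, and thresholds here are hypothetical.

```python
# Hypothetical curriculum matching the description: the task graduates
# from grasping, to lifting, to final placement.
CURRICULUM = [
    {"stage": "grasp", "success": lambda s: s["grasped"]},
    {"stage": "lift",  "success": lambda s: s["grasped"] and s["height"] > 0.3},
    {"stage": "place", "success": lambda s: s["in_container"]},
]

def advance(stage_index, episode_state):
    """Move to the next stage once the current stage's criterion is met;
    otherwise keep training at the current stage."""
    if CURRICULUM[stage_index]["success"](episode_state):
        return min(stage_index + 1, len(CURRICULUM) - 1)
    return stage_index
```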
October 2021 – December 2021
This virtual reality program aimed at middle school students interested in STEM fields demonstrates the function of digital gates through a simple schematic editor and introduces formal logic through an interactive medium.
This program was developed in Unity using the XR Interaction Toolkit to handle the virtual reality devices that interface with the schematic editor, and the digital logic backend was created in C#. The curriculum, player interaction, and backend are all my own design and implementation.
A curriculum of levels focuses on each gate individually and then builds on these concepts: the user is prompted to create a system of gates that performs a specified logical operation, which is used to control a light.
When a gate or terminal is placed on the virtual protoboard, the backend simulates the path current will flow through each connection. The user can alter the electronic configuration and see the changes to the logical system in real time.
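The evaluation at the heart of such a backend can be sketched as a recursive walk over the gate connections. The actual backend is written in C#; this Python sketch uses an illustrative gate set and tree representation.

```python
# Each gate maps to a boolean function of its inputs.
GATES = {
    "AND": lambda a, b: a and b,
    "OR":  lambda a, b: a or b,
    "XOR": lambda a, b: a != b,
    "NOT": lambda a: not a,
}

def evaluate(node, inputs):
    """Recursively evaluate a gate tree. Leaves are input terminal names;
    interior nodes are tuples of (gate name, operand nodes)."""
    if isinstance(node, str):
        return inputs[node]          # terminal: look up its current value
    gate, *operands = node
    return GATES[gate](*(evaluate(op, inputs) for op in operands))
```

Re-running the evaluation whenever a terminal is toggled or a connection changes gives the real-time feedback described above.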