
Simultaneous Localization and Mapping (SLAM)

In the field of robotics, Simultaneous Localization and Mapping (SLAM) is the problem of constructing and constantly updating a map of an environment that is unknown to the robot while at the same time keeping track of the robot’s location in that map.

In this project, I developed several algorithms to implement SLAM for a mobile robot in an indoor environment. For this, I used odometry, inertial, and range measurements that had previously been collected with the robot. The sensors used to collect these measurements were wheel encoders, a Light Detection and Ranging sensor (LIDAR, a Hokuyo UTM-30LX), and an Inertial Measurement Unit (IMU).

The project has two main parts. In the first part, I estimated the map and the robot's position using dead reckoning. In the second part, I implemented a Particle Filter (PF) with systematic resampling to perform SLAM and improve on the dead-reckoning results.
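
To give a flavor of the particle filter, here is a minimal sketch of the systematic resampling step, assuming the particles are stored as an (N, 3) NumPy array of [x, y, θ] poses with normalized weights; the function name and array layout are my own, not taken from the project code.

```python
import numpy as np

def systematic_resample(particles, weights):
    """Draw N new particles with probability proportional to their weights,
    using one random offset and evenly spaced sampling positions."""
    n = len(weights)
    positions = (np.random.uniform() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    indices = np.searchsorted(cumulative, positions)
    # Keep copies of the selected particles and reset the weights to uniform
    return particles[indices].copy(), np.full(n, 1.0 / n)
```

In a typical SLAM loop, a step like this is only triggered when the effective number of particles drops below a threshold, to avoid particle depletion.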

For this project I used Python. I avoided using any SLAM libraries, so I can confidently say that I now understand how the algorithms work through and through!

Check out a few videos of the different environments the robot visited, and click the button below to visit the project repository and read the report!

Gesture Recognition

In this project, I trained several Hidden Markov Models (HMMs) to identify different arm motion gestures in real time. The data used to train the models were sensor readings from the accelerometer and the gyroscope of an Inertial Measurement Unit (IMU). These readings corresponded to six different motions: Wave, Infinity, Eight, Circle, Beat3, Beat4.

An HMM was trained for each of these motions. Once trained, given a set of new IMU measurements, the gesture whose model assigns the highest probability to the observation sequence is selected as the prediction. I wrote the code to train and test the HMMs from scratch, without using any HMM libraries – what a journey!
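
As an illustration of the scoring step, the sketch below computes the log-likelihood of an observation sequence with the scaled forward algorithm and picks the gesture whose HMM scores highest. It assumes the IMU readings have already been quantized into discrete symbols (a common choice, but an assumption on my part), and all names are illustrative.

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Scaled forward algorithm for a discrete-observation HMM.
    pi: (N,) initial state probabilities; A: (N, N) transition matrix;
    B: (N, M) emission probabilities; obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    alpha = alpha / scale
    log_likelihood = np.log(scale)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()
        alpha = alpha / scale
        log_likelihood += np.log(scale)
    return log_likelihood

def classify_gesture(models, obs):
    """Pick the gesture whose HMM gives obs the highest log-likelihood.
    models maps a gesture name to its (pi, A, B) parameters."""
    return max(models, key=lambda name: forward_log_likelihood(*models[name], obs))
```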

To learn more about this project click the button below!

Color Segmentation

The aim of this project was to train different machine learning models to detect red barrels within new test images collected by a mobile robot and provide their location coordinates.

For this, I implemented color segmentation by training up to 12 models corresponding to different pixel colors. I used Multivariate Gaussian Probability Density Functions (PDFs) to represent each color distribution, and Bayes' Theorem to calculate the probability of a new pixel belonging to each of the classes. I explored different color spaces and techniques to identify connected components and localize the barrel coordinates.
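
As a rough sketch of the idea (with my own function names, and assuming the training pixels for each color class are stacked into (N, 3) arrays), the classifier below fits one multivariate Gaussian per class and uses Bayes' Theorem to pick the most probable class for each new pixel.

```python
import numpy as np

def fit_gaussian(pixels):
    """Fit a multivariate Gaussian to an (N, 3) array of training pixels."""
    return pixels.mean(axis=0), np.cov(pixels, rowvar=False)

def log_gaussian_pdf(x, mean, cov):
    """Log multivariate Gaussian density evaluated at pixel(s) x."""
    diff = x - mean
    maha = np.einsum('...i,ij,...j->...', diff, np.linalg.inv(cov), diff)
    return -0.5 * (maha + np.log(np.linalg.det(cov)) + len(mean) * np.log(2 * np.pi))

def classify_pixels(x, class_params, priors):
    """Bayes' Theorem: P(class | pixel) is proportional to P(pixel | class) * P(class).
    class_params maps a color name to (mean, cov); priors maps it to P(class)."""
    names = list(class_params)
    log_post = np.stack([log_gaussian_pdf(x, *class_params[k]) + np.log(priors[k])
                         for k in names])
    return np.asarray(names)[np.argmax(log_post, axis=0)]
```

The pixels assigned to the barrel-red class can then be grouped into connected components to localize the barrel.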

Check out the full project report by clicking below!

Reinforcement Learning

In the field of Machine Learning (ML), Reinforcement Learning (RL) is the area that studies how to determine the actions a software agent should take in different environments to maximize the reward it obtains. It explores how to balance the exploration of new areas against the use of the information the agent already has about explored areas to achieve the best outcome.
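
A standard way to manage this trade-off is an ε-greedy policy, sketched below as a generic illustration rather than the exact scheme used in the project.

```python
import numpy as np

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon, explore by picking a random action;
    otherwise exploit the action with the highest current value estimate."""
    if np.random.uniform() < epsilon:
        return np.random.randint(len(q_values))
    return int(np.argmax(q_values))
```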


In this project, I developed and tested algorithms to implement several RL tasks in three different environments: a Maze domain with a discrete state space, and the Acrobot-v1 and MountainCar-v0 environments with continuous state spaces from the RL toolkit OpenAI Gym. The algorithms I implemented are Policy Iteration, Q-Learning, and REINFORCE with Baseline. I evaluated them with different hyperparameters and identified the optimal values.
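
For context, here is a minimal sketch of the tabular Q-Learning update as it would apply to the discrete Maze domain (the continuous Gym environments additionally require discretization or function approximation); variable names and hyperparameter values are illustrative.

```python
import numpy as np

def q_learning_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-Learning step:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q

# Usage sketch: Q = np.zeros((num_states, num_actions)), with actions chosen
# by an exploration policy such as the epsilon-greedy rule sketched above.
```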

If you want to learn more, check out the project repo below!