In no particular chronological order, here are some projects I've worked on, both for fun and professionally.
Robotic Surface Finishing - GrayMatter Robotics
GrayMatter Robotics provides fully automated sanding, polishing, painting, grinding, and more for many applications and verticals in high-mix manufacturing, scanning each part so no fixtures are required. As Lead Robotics Engineer I have both gotten my hands dirty and overseen projects and mentored engineers. I planned and led the migration from ROS 1 to ROS 2, drastically improved the deployment process using Docker and Compose, led development of the on-premise API and web server, and kickstarted projects in painting, gel coat spraying, and sand blasting.
Open-Source Rover v2
The OSR has grown to become one of the largest open-source hardware projects around, and after years of experience and community growth it was time for a major overhaul. The launch made it to the front page of Hackaday.com! I'm the lead maintainer for this effort and its wonderful community.
Build one yourself here
Machining adventures
I consider myself a 'generalist roboticist', which means being able to build robots from scratch: electrical, mechanical, and software. While I spend most of my time writing software, I absolutely love getting my hands dirty and building stuff. Here are some of the mechanical things I've built.
Winch for a stair lift
Designed and machined the part myself from aluminum on a Jet lathe (Maketory)
TIG welding
I've done both TIG and MIG welding.
More turning
Lathes are cool
Water jet for a custom robotic gripper
Water jets are also cool
CNC milling for robotic gripper
Designed, programmed, and fabricated to spec at an MIT machine shop
3D printing
I've done a ton of this for prototyping, before making things out of metal.
Left:
Manually milling an aluminum piece at an angle for a side project, using a tilting vise @Maketory
Right:
A 'girolle', a cheese curler. The cherry wood was milled on a wood CNC; the metal parts are TIG-welded stainless steel. The parts are very small, so I learned some lessons the hard way, but I'm happy with the results.
Enterprise-grade robotic fleet interoperability - Freedom Robotics
At Freedom Robotics my team built mission-critical on-premise interoperability and monitoring tools that were managed through the cloud. This was a first for several of our (S&P 500) customers, and it took lots of learning on our side to get the hang of enterprise sales and relations and to scale quickly and reliably. With a small team we fulfilled multi-million-dollar contracts and SLAs, allowing fleets of robots of different brands to share the same space and work together efficiently. I built out over 20 plugins for major brands of automation equipment, producing consistent data so that these devices could work together (control) and metrics and statistics could be gathered (monitoring) regardless of the brand or model of the device. I dreamed up, built, and iterated on easily configurable yet powerful automation and control tools, such that non-technical people regularly created or updated automation logic to manage their fleets and meet their KPIs.
One of the largest car manufacturers in the world calls us their "only fleet management solution across all factories". Some of their factories had hundreds of robots of different brands working together.
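To give a flavor of the plugin layer: each brand implements one small, uniform interface so the monitoring and control code stays brand-agnostic. This is a minimal illustrative sketch; the class and method names are my own placeholders, not the actual product API.

```python
# Illustrative sketch only -- not the real Freedom Robotics API.
# Each device brand/model gets a plugin implementing this interface,
# so the rest of the stack never sees vendor-specific details.
from abc import ABC, abstractmethod

class DevicePlugin(ABC):
    """Normalizes one brand/model of automation equipment."""

    @abstractmethod
    def poll_metrics(self) -> dict:
        """Return brand-agnostic metrics, e.g. {'battery': 0.87, 'state': 'idle'}."""

    @abstractmethod
    def send_command(self, command: str, **params) -> bool:
        """Translate a generic command ('pause', 'go_to', ...) into the
        vendor's protocol; return whether it was accepted."""
```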
Explaining Cuspidal Manipulators
Through my time at Peanut I've become somewhat of a manipulation and motion planning expert. I was the first to bring a little-known but very important class of robot arms to public attention. View my work on Medium, or the talk I gave for Weekly Robotics:
Open-Source: Developing the Mars Rover and taking it to the next level
As a side project, I built the JPL Open Source Rover, a simplified scale model of the Mars Curiosity rover. This was my first venture into space robotics, and I'm using the platform to develop and test new algorithms for navigation and heterogeneous sensor calibration. Also for following me to the park while carrying my beers :)
I've since become a maintainer of the two repositories, have made the most contributions through overhauling the code stack, and have also ported the code to ROS 2. See some of my blogs here:
Starting with a team of 3 engineers, I built out the software stack for autonomous service robots used for commercial cleaning:
set up and tuned 3D mapping and localization; heterogeneous sensor fusion
wrote a custom motion planner from scratch using a fast analytical inverse kinematics solver (C++) derived for a new type of arm (7-DOF spherical wrist with offsets at the elbow and shoulder, until then an unsolved problem). Set up Descartes (ROS-Industrial) for reactive Cartesian toleranced motion planning and 100x'ed planning speed using proprietary techniques
heterogeneous sensor self-calibration using hand-eye calibration
sensor integration in C++
set up and maintained clustered database infrastructure using CouchDB to contain operational data
used PCL in C++ to compare a pre-made map with live point cloud data using efficient OctoMap operations, highlighting changes in the robot's environment (see the sketch after this list)
grasping using deep learning
ROS
various: full robot control using Xbox teleop, Travis CI integration and a test suite, computer infrastructure, distributed processing on embedded systems, ...
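The map-comparison item above boils down to diffing voxelized occupancy. The real implementation used PCL and OctoMap in C++; this numpy sketch only illustrates the idea, and the voxel size is an assumed parameter.

```python
# Minimal sketch of change detection by comparing voxelized occupancy.
# The production version used PCL + OctoMap in C++; this is the idea only.
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.05) -> set:
    """Quantize an (N, 3) point cloud into a set of occupied voxel indices."""
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

def highlight_changes(map_points, live_points, voxel_size=0.05):
    """Return voxels that appeared (new obstacles) or disappeared."""
    map_voxels = voxelize(map_points, voxel_size)
    live_voxels = voxelize(live_points, voxel_size)
    appeared = live_voxels - map_voxels      # occupied now, free in the map
    disappeared = map_voxels - live_voxels   # in the map, not seen live
    return appeared, disappeared
```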
I was also involved in the hiring process, conducting interviews, as well as most aspects of growing a sustainable early-stage startup.
Intelligent Service Robots - Penn
Development of artificial intelligence and learning algorithms for low-cost service robots for CIS700 at Penn
gitmaster: responsible for organization and management of the codebase for a group of 28 robotics engineers (lots of pull requests!)
member of the manipulation focus group. Implemented a grasp handler pipeline that takes a request from our task server to pick up a specific object (detected through our vision pipeline), generates a set of candidate grasp points (using gpd, built on Caffe), and executes the best one using MoveIt! (see the sketch after this list).
Code development in ROS
stitched together URDFs from scratch and centralized the launching of all nodes developed by each focus group.
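Here's a rough sketch of the execution half of that grasp pipeline, assuming ROS 1 with moveit_commander; the move group name and the way ranked candidates arrive from the gpd detector are simplified placeholders.

```python
# Sketch of grasp execution, assuming ROS 1 + moveit_commander.
# "arm" is a placeholder move group name; candidates are geometry_msgs
# poses already ranked by the gpd-based detector.
import sys
import rospy
import moveit_commander

def execute_best_grasp(ranked_grasp_poses):
    """Try grasp candidates in score order until one plans and executes."""
    moveit_commander.roscpp_initialize(sys.argv)
    arm = moveit_commander.MoveGroupCommander("arm")
    for pose in ranked_grasp_poses:          # best-scored candidate first
        arm.set_pose_target(pose)
        if arm.go(wait=True):                # plan and execute in one call
            return True
        rospy.logwarn("Grasp candidate failed, trying next one")
    return False
```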
Open-source transformations package for ROS using dual quaternions
See https://github.com/Achllle/dual_quaternions_ros
Dual quaternions have been undervalued in the robotics community considering their interesting (and quite beautiful) mathematical properties (see: K. Daniilidis, "Hand-Eye Calibration Using Dual Quaternions"). This is a humble attempt to integrate this representation of transformations into my company's work and to share the effort with the robotics open-source community, in the hope of convincing other roboticists to use the format. This was also my first project to use Continuous Integration (CI).
If you're a robotics engineer or mathematician, give me your feedback - or better - contribute!
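To give a taste of how compact the math is, here's a minimal numpy sketch of building and composing rigid transforms as dual quaternions; the actual package is more complete and these function names are illustrative.

```python
# Minimal sketch: rigid transforms as dual quaternions (w, x, y, z order).
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dq_from_pose(q_rot, translation):
    """Dual quaternion (real, dual) from a rotation quaternion + translation:
    the dual part is 0.5 * t * q_r, with t a pure quaternion."""
    t = np.array([0.0, *translation])
    return q_rot, 0.5 * qmul(t, q_rot)

def dq_mul(dq1, dq2):
    """Compose two transforms: real parts multiply; dual parts distribute."""
    r1, d1 = dq1
    r2, d2 = dq2
    return qmul(r1, r2), qmul(r1, d2) + qmul(d1, r2)
```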
Starting a company: Climbing robots - rigrade
Started a robotics venture creating climbing robots (software-enabled hardware) for vertical access and surveillance. Received three rounds of seed funding from MIT Sandbox and won the Wharton Summer Venture Award ($10k).
full development in ROS
built robot from scratch, machined custom parts myself
business development, business plan creation
Ryan, Kit, and Achille at our office in The Engine, an incubator in Cambridge, MA.
Quadcopters, Penn Advanced Robotics
Wrote Matlab code for flying quadcopters, including:
a linear and a nonlinear geometric controller that could handle aggressive maneuvers
a polynomial trajectory generator for minimum-acceleration (cubic), minimum-jerk (quintic), or minimum-snap (7th-order) polynomials (see the sketch below)
a path planner (Dijkstra and A*)
a pipeline that combined these and implemented this on a CrazyFlie quadcopter
Several trajectories flown on a CrazyFlie quadcopter. The actual path was recorded using a Vicon tracking system. The quad had to fly through each waypoint. The trajectory generator calculated a trajectory using a (here quintic) polynomial for each segment. The controller then executed these trajectories.
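The sketch mentioned in the list above: a minimum-jerk (quintic) segment solved from position, velocity, and acceleration boundary conditions. The original coursework was in Matlab; this numpy version is illustrative.

```python
# Minimal sketch of a minimum-jerk (quintic) trajectory segment on [0, T]:
# p(t) = c0 + c1*t + ... + c5*t^5, solved from 6 boundary conditions.
import numpy as np

def quintic_coeffs(p0, v0, a0, pT, vT, aT, T):
    A = np.array([
        [1, 0, 0,    0,      0,       0],         # p(0)
        [0, 1, 0,    0,      0,       0],         # v(0)
        [0, 0, 2,    0,      0,       0],         # a(0)
        [1, T, T**2, T**3,   T**4,    T**5],      # p(T)
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],    # v(T)
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],   # a(T)
    ])
    return np.linalg.solve(A, np.array([p0, v0, a0, pT, vT, aT]))

# Rest-to-rest segment from 0 to 1 m in 2 s:
coeffs = quintic_coeffs(0, 0, 0, 1, 0, 0, 2.0)
```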
Orientation Estimation using Unscented Kalman Filter
I implemented an Unscented Kalman Filter from scratch, using a quaternion representation, to fuse and filter data from an accelerometer and a gyroscope and estimate the orientation of the Inertial Measurement Unit (IMU). A camera attached to the IMU took sequential pictures during rotation; the filtered orientation was then used to stitch these images together into a panorama of the environment.
The panorama was created in multiple steps. First, I sampled the orientations at the camera time stamps, since there are many more orientation estimates than camera images. I then projected the pixels of each image onto a sphere around the platform, unrolled them for vectorized computation, and rotated them according to the rotation matrix extracted from the quaternion estimate. Next, the rotated pixels were projected onto a cylinder, which was finally unwrapped to create the panorama.
Naive panorama creation based on the filtered orientation of the platform. Note that this projection did not make use of image stitching, which can greatly enhance the quality and even correct the orientation estimation.
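A condensed sketch of the sphere-to-cylinder projection described above; the pinhole-style ray model and field-of-view values are assumptions, not the exact camera parameters used.

```python
# Sketch: map pixels of an image taken at orientation R onto a cylinder.
# Field-of-view values are assumed, not the actual camera intrinsics.
import numpy as np

def image_to_cylinder(h, w, R, fov_x=np.radians(60), fov_y=np.radians(45)):
    """Return (azimuth, height) panorama coordinates for each pixel."""
    az = np.linspace(-fov_x / 2, fov_x / 2, w)
    el = np.linspace(fov_y / 2, -fov_y / 2, h)
    az, el = np.meshgrid(az, el)                  # (h, w) grids
    # Pixel rays on the unit sphere around the platform
    rays = np.stack([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)], axis=-1)
    # Rotate all rays at once by the filtered orientation (vectorized)
    world = rays.reshape(-1, 3) @ R.T
    # Project onto the cylinder: azimuth angle and vertical height
    azimuth = np.arctan2(world[:, 1], world[:, 0])
    height = world[:, 2]
    return azimuth.reshape(h, w), height.reshape(h, w)
```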
Simultaneous Localization And Mapping (SLAM) for a humanoid using a particle filter
A particle filter was implemented from scratch and used to perform Simultaneous Localization And Mapping (SLAM) for a humanoid walking robot equipped with LIDAR and IMU sensors. A 2D map was generated, providing a basis for the robot to navigate while also correcting the estimate of its position and orientation. Stratified resampling was used to update the particles each step.
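Stratified resampling itself is compact enough to sketch:

```python
# Stratified resampling: one uniform draw inside each of n equal-width
# strata of [0, 1), mapped through the cumulative weight distribution.
import numpy as np

def stratified_resample(weights: np.ndarray) -> np.ndarray:
    """Return indices of the particles to keep, given normalized weights."""
    n = len(weights)
    positions = (np.arange(n) + np.random.uniform(size=n)) / n
    return np.searchsorted(np.cumsum(weights), positions)
```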
I used the object-oriented programming paradigm to keep a complex project like this manageable: it not only allows more intuitive reasoning, but also provides a good framework for extending the code.
One of the maps generated by the algorithm; the robot's trajectory is shown in red.
the humanoid used in this project
OOP class diagram for the core code
Barrel detection using Gaussian color models and other computer vision techniques
Implemented a Gaussian color model that was trained on a small data set of images containing one or more red barrels.
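The core of the approach: fit a single Gaussian to labeled barrel pixels, then threshold each pixel's squared Mahalanobis distance to that color. This is a minimal sketch; the threshold and data layout are assumptions.

```python
# Minimal sketch of a single-Gaussian color model for red-barrel pixels.
import numpy as np

def fit_gaussian(pixels):
    """pixels: (N, 3) array of color values sampled from labeled barrels."""
    mean = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    return mean, cov_inv

def barrel_mask(image, mean, cov_inv, thresh=9.0):
    """Threshold squared Mahalanobis distance per pixel (thresh is roughly
    a chi-squared cutoff with 3 degrees of freedom -- an assumed value)."""
    diff = image.reshape(-1, 3) - mean
    d2 = np.einsum('ni,ij,nj->n', diff, cov_inv, diff)
    return (d2 < thresh).reshape(image.shape[:2])
```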
Robockey
3rd place in the annual Robockey competition at the University of Pennsylvania. These very low-cost autonomous robots orient themselves using a constellation of IR LEDs on the ceiling.
Responsible for electronic circuit design and soldering of three robots
Wrote high- and low-level functionality in C.
Robockey was a part of MEAM510 at the University of Pennsylvania. Other projects included a self-balancing wheeled inverted pendulum, building a speaker from scratch, and building a low-cost RC car.
With Prof. Dr. Jonathan Fiene
Machine learning
While taking the class Machine Learning (CIS519) at Penn, I implemented the following algorithms from scratch in Python:
Neural Network for digit recognition from images
Reinforcement learners: Q-learning, value iteration, policy iteration, feature extractors. Implemented and trained approximate Q-learning for the Pac-Man game (see video below; a sketch of the update follows) and 'taught' a crawler how to walk in simulation
general K-Means; image segmentation using K-Means (see image below)
Boosted Decision Tree for classification
Online Naive Bayes
Linear, Polynomial, and Logistic regression
Support Vector Machines using different kernels
Final report on the use of machine learning techniques to predict start-up success.
It is interesting to note that the learner never tried to 'eat' the ghosts after eating the large pill; it learned that it could win simply by avoiding the ghosts and eating all the pills. If a high reward were given for eating a ghost, the learner might learn to do so, provided it is allowed to explore. It might be necessary to shift to deep Q-learning or other more powerful learners to pick up such hierarchical strategies.
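For reference, here's a minimal sketch of the approximate Q-learning update mentioned in the list above; the feature extractor and game interfaces are placeholders, not the actual assignment code.

```python
# Sketch of approximate Q-learning: Q(s, a) is a linear function of
# features(s, a); weights are updated with the temporal-difference error.
import numpy as np

class ApproxQLearner:
    def __init__(self, n_features, alpha=0.1, gamma=0.9):
        self.w = np.zeros(n_features)      # one weight per feature
        self.alpha, self.gamma = alpha, gamma

    def q(self, features):
        return self.w @ features

    def update(self, features, reward, next_action_features):
        """features: phi(s, a) taken; next_action_features: list of
        phi(s', a') for all actions a' in the next state."""
        best_next = max((self.q(f) for f in next_action_features), default=0.0)
        td_error = reward + self.gamma * best_next - self.q(features)
        self.w += self.alpha * td_error * features
```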
Haptic environment
For the class 'Introduction to Robotics' (MEAM520) at Penn, my team created a virtual environment consisting of multiple objects and surfaces. The Phantom robot senses the position of its end-effector and displays it on screen inside a virtual box. By moving the end-effector with their fingertip, the user can interact with the objects on the screen. The simulator feeds the forces generated by these interactions to the motors on the robot, so the user can physically feel the surfaces and objects.
These surfaces and objects are purely mathematical constructs. We created a point of attraction (pulls your hand to a specific location when within its reach), a switch, a surface with different spatial texture, a viscous fluid, and a ball.
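As an example of how one of these objects is rendered, here's a minimal sketch of the spring force for the virtual ball; the stiffness value is illustrative.

```python
# Sketch of haptic rendering for a virtual ball: a spring force pushes the
# fingertip out along the surface normal, proportional to penetration depth.
import numpy as np

def ball_force(p, center, radius, k=500.0):
    """p: end-effector position (3,); k: stiffness in N/m (assumed)."""
    offset = p - center
    dist = np.linalg.norm(offset)
    if dist >= radius or dist == 0.0:
        return np.zeros(3)                       # outside the ball: no force
    penetration = radius - dist
    return k * penetration * offset / dist       # push outward along normal
```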
Path planning and collision avoidance using potential fields
Implementation in Matlab of path planning for a simple articulated robot using potential fields with collision avoidance. The end pose has points of attraction that correspond to points on the robot. Objects have a sphere of influence around them and exert a repulsive force within that sphere which grows rapidly as any point on the robot approaches the object's surface. The forces are calculated in Cartesian space rather than joint space and are then converted to joint space using the Jacobians.
The algorithm does not handle local minima and could get stuck in certain configurations; a solution would be to add random walks. In the demonstration video, you can see that the second random end pose is not completely reached, because the gradient descent step size is not matched to the goal tolerance. The algorithm is slow and not optimal, but very easy to understand.
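A condensed sketch of one descent step; the Jacobians and control points depend on the robot and are assumed inputs here.

```python
# Sketch of one potential-field gradient step. Forces are computed in
# Cartesian space at control points on the robot, then mapped to joint
# space through each point's Jacobian transpose.
import numpy as np

def potential_field_step(points, goals, obstacles, jacobians,
                         k_att=1.0, k_rep=0.1, rho0=0.3, step=0.01):
    """points/goals: lists of (3,) control points and their attractors;
    obstacles: list of (center, radius) spheres; jacobians: list of
    (3, n_joints) linear-velocity Jacobians, one per control point."""
    dq = np.zeros(jacobians[0].shape[1])
    for p, g, J in zip(points, goals, jacobians):
        f = k_att * (g - p)                          # attractive force
        for c, r in obstacles:
            d = np.linalg.norm(p - c) - r            # distance to surface
            if 0 < d < rho0:                         # inside sphere of influence
                direction = (p - c) / np.linalg.norm(p - c)
                f += k_rep * (1/d - 1/rho0) / d**2 * direction
        dq += J.T @ f                                # Cartesian -> joint space
    return step * dq / max(np.linalg.norm(dq), 1e-9)
```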
Light-painting robot
For the class Intro to Robotics, MEAM520 at Penn, my team planned trajectories for a PUMA robot to follow. The robot was mounted with an LED. A long-exposure photograph captured the robot moving so that the end result was a 'light painting' by a robot.
With Prof. Dr. Katherine Kuchenbecker
Long-exposure photograph of the painting. The robot can be vaguely seen in the background.
Edelmanlab, MIT - multi-modal stent testing
Design and development of a high-throughput multi-modal stent testing device.
Responsible for designing and building a machine that twists, bends, and extends stents while running fluids through them, in order to simulate and research biodegradable stent degradation ex vivo over extended periods of time.
Built a full model in SolidWorks. Machined several custom parts at the MIT Edgerton machine shop.
prototype
working product completed by Boston Scientific
Edelmanlab, MIT - computer vision
Co-led a project predicting outcomes of Transcatheter Aortic Valve Replacement (TAVR) from pre-operative CT scans. Developed object-oriented Python computer vision algorithms for segmenting images and automatically extracting features from large datasets. The algorithm improved upon the state of the art in automatic calcium detection on very noisy data. Responsible for maintaining a large code base.
With Elazer Edelman, MD PhD
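To illustrate the core idea behind calcium detection (not our exact pipeline), here's a minimal sketch of threshold-based candidate extraction; 130 HU is the conventional Agatston cutoff, and the minimum-area filter is an assumed guard against noise.

```python
# Sketch of calcium candidate detection in a CT slice (Hounsfield units).
# The real algorithm did considerably more to cope with very noisy scans.
import numpy as np
from scipy import ndimage

def calcium_candidates(ct_slice_hu, threshold=130, min_area=4):
    """Label connected bright regions and keep those above a size cutoff."""
    mask = ct_slice_hu > threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_area]
    return np.isin(labels, keep)
```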