
Neural Network-based Approach for Self-Driving Cars

Code

To understand the basics of self-driving car technology, I built this project around neural networks and data augmentation. Here's an overview of its key parts:

1. Leveraging Data: I used road images and steering-angle data from Udacity's self-driving car simulator. The dataset consists of road images captured from three camera angles, along with parameters such as steering angle, throttle, and speed. The road images and steering-angle values were used to train a basic but robust self-driving model.

2. Data Augmentation and Diversity: To make the model reliable and versatile, I used the OpenCV and imgaug libraries to augment and diversify the dataset (see the augmentation sketch after this list). The augmented data expands the training set and improves the model's performance and ability to generalize.

3. Real-time Navigation: The goal was to achieve good navigational accuracy in realistic driving scenarios. To do this, I designed and implemented a deep neural network inspired by NVIDIA's DAVE2 model (a sketch follows this list). The network maps road images to steering-angle values in real time, enabling the simulated car to navigate effectively and complete the route.
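To make the augmentation step concrete, here is a minimal sketch of the kind of OpenCV/imgaug pipeline described above. The specific augmenters, parameter ranges, and flip probability are illustrative assumptions rather than the project's exact settings; the key idea shown is that a horizontal flip must also negate the steering label.

import random

import cv2
import imgaug.augmenters as iaa
import numpy as np


def augment(image: np.ndarray, steering: float):
    """Randomly augment one road image and its steering label.

    Hypothetical pipeline for illustration; the project's exact
    augmentations and parameter ranges may differ.
    """
    # Photometric / geometric jitter that does not change the label.
    jitter = iaa.Sequential([
        iaa.Multiply((0.7, 1.3)),                             # brightness
        iaa.GaussianBlur(sigma=(0.0, 1.0)),                   # slight blur
        iaa.Affine(translate_percent={"x": (-0.05, 0.05)}),   # small pan
    ])
    image = jitter.augment_image(image)

    # Horizontal flip: mirror the road and negate the steering angle.
    if random.random() < 0.5:
        image = cv2.flip(image, 1)
        steering = -steering

    return image, steering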
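And here is a rough sketch of a DAVE2-style network, written in PyTorch for illustration; the project's actual framework, input resolution, and activation functions may differ. It follows the published DAVE2 layout of five convolutional layers followed by fully connected layers that regress a single steering value, with the 66x200 input size and ELU activations taken as assumptions.

import torch
import torch.nn as nn


class SteeringNet(nn.Module):
    """DAVE2-style CNN: maps a 3x66x200 road image to one steering value."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ELU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ELU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ELU(),
            nn.Conv2d(48, 64, 3), nn.ELU(),
            nn.Conv2d(64, 64, 3), nn.ELU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ELU(),   # 64x1x18 for a 66x200 input
            nn.Linear(100, 50), nn.ELU(),
            nn.Linear(50, 10), nn.ELU(),
            nn.Linear(10, 1),                        # steering angle
        )

    def forward(self, x):
        return self.regressor(self.features(x))


# Example: a batch of 8 images, channels-first, 66x200 pixels.
angles = SteeringNet()(torch.randn(8, 3, 66, 200))   # shape (8, 1)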

This project showcases deep learning, data augmentation, and the practical application of neural networks to a complex real-world challenge, and it lays the foundation for further exploration in this exciting field.


Done at Columbia University, New York

Deep Learning-based Computer Vision for Robotic Grasping

Code

In the dynamic realm of robotics, achieving precision in object recognition and manipulation is paramount. This project employs deep learning techniques to enhance computer vision for robotic grasping. Here are the key highlights:

1. Enhanced Object Recognition Accuracy: To enable precise robotic grasping, I built a segmentation model based on a simplified U-Net, implemented in PyTorch (a minimal sketch follows this list). The model predicts masks from RGB images, significantly improving object recognition accuracy and laying the foundation for more reliable interaction with the environment.

2. 3D World Understanding: To give the robot a fuller perception of its surroundings, I generated object point clouds from RGBD images. This involved producing segmentation masks and depth masks, then back-projecting the masked pixels into world-coordinate point clouds. Perceiving objects in three dimensions lets the robot make more informed decisions while grasping.

3. Precise Object Positioning and Orientation: Successful grasping hinges on accurate object positioning and orientation. Here, I applied the Iterative Closest Point (ICP) algorithm to align the original and segmented point clouds (the back-projection and ICP steps are sketched after this list). This alignment both confirms accurate object detection and lets the robot determine the best position and orientation for a successful grasp.
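As an illustration of the segmentation model, below is a minimal, simplified U-Net in PyTorch with a single encoder/decoder level and one skip connection. The channel widths, depth, and number of classes are placeholders; the project's actual architecture and class count may differ.

import torch
import torch.nn as nn


def block(in_ch, out_ch):
    """Two 3x3 conv + ReLU layers, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class MiniUNet(nn.Module):
    """Simplified U-Net: one down/up level with a skip connection."""

    def __init__(self, n_classes):
        super().__init__()
        self.enc = block(3, 32)
        self.down = nn.MaxPool2d(2)
        self.mid = block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)          # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)                            # full-resolution features
        m = self.mid(self.down(e))                 # bottleneck at half resolution
        d = self.up(m)                             # back to full resolution
        d = self.dec(torch.cat([d, e], dim=1))     # skip connection
        return self.head(d)                        # per-pixel class logits


# Example: RGB batch -> per-pixel logits; argmax gives the predicted mask.
logits = MiniUNet(n_classes=4)(torch.randn(2, 3, 128, 128))
masks = logits.argmax(dim=1)                       # shape (2, 128, 128)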
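The two geometric steps, back-projecting masked depth pixels into a world-coordinate point cloud and then aligning clouds with ICP, can be sketched as below. The pinhole intrinsics, the cam_to_world extrinsic matrix, and the use of Open3D's ICP are assumptions for illustration; the project's own point-cloud and ICP code may be implemented differently.

import numpy as np
import open3d as o3d


def masked_point_cloud(depth, mask, fx, fy, cx, cy, cam_to_world):
    """Back-project masked depth pixels into world-coordinate 3D points.

    depth: HxW depth image in meters; mask: HxW boolean segmentation mask;
    fx, fy, cx, cy: pinhole camera intrinsics; cam_to_world: 4x4 extrinsics.
    """
    v, u = np.nonzero(mask)                    # pixel coordinates of the object
    z = depth[v, u]
    x = (u - cx) * z / fx                      # pinhole back-projection
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)   # homogeneous coords
    pts_world = (cam_to_world @ pts_cam.T).T[:, :3]
    return pts_world


def align_icp(source_pts, target_pts, threshold=0.01):
    """Align two point clouds with point-to-point ICP (Open3D)."""
    src_np = np.asarray(source_pts, dtype=np.float64)
    tgt_np = np.asarray(target_pts, dtype=np.float64)
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_np))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(tgt_np))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation               # 4x4 rigid transform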

This project is an amalgam of computer vision, deep learning, and robotics. It represents a step forward in enhancing robotic capabilities for object manipulation, with a focus on accuracy and efficiency.

Done at Columbia University, New York

Learning-based Model Predictive Control (MPC) for Robot Arm Dynamics

Code

In robotics, achieving precision and control in arm dynamics is pivotal for real-world applications. In this project I combined Model Predictive Control (MPC) with deep learning to enable a robot arm to emulate precise ground-truth movements. Here's a glimpse into the project's core achievements:

1. Precision through MPC: I developed a Model Predictive Control (MPC) system tailored to a robotic arm, designed to imitate the positions of a ground-truth model as exactly as possible. This controller formed the backbone of the project, keeping the arm's movements closely aligned with the desired trajectories.

2. Learning Forward Dynamics: To support the controller, I trained a deep learning model of the arm's forward dynamics, which predicts the next joint positions and velocities from the current state and the commanded action. Once trained, this model became the key component for predicting and optimizing the arm's movements.

3. Integration of Control and Deep Learning: The crux of the project was integrating MPC with the learned forward model: the controller plans over the model's predictions so that the arm's executed movements match the ground-truth motion (a minimal sketch follows this list). This fusion of control and learning bridges theory and practice in robotic applications.
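As a sketch of how a learned forward model can sit inside an MPC loop, the snippet below uses a small PyTorch MLP for the dynamics and a random-shooting planner that scores sampled action sequences against a target state. The state/action dimensions, network size, horizon, and the random-shooting strategy itself are illustrative assumptions, not the project's exact controller.

import torch
import torch.nn as nn


class ForwardModel(nn.Module):
    """Learned forward dynamics: (joint state, action) -> next joint state."""

    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


@torch.no_grad()
def mpc_action(model, state, target, action_dim, horizon=10, samples=256):
    """Random-shooting MPC: sample action sequences, roll them out through
    the learned model, and return the first action of the cheapest rollout."""
    actions = torch.randn(samples, horizon, action_dim)        # candidate plans
    states = state.expand(samples, -1)
    cost = torch.zeros(samples)
    for t in range(horizon):
        states = model(states, actions[:, t])                  # predicted next state
        cost += ((states - target) ** 2).sum(dim=-1)           # track the target pose
    best = cost.argmin()
    return actions[best, 0]                                    # execute only the first step


# Example with made-up dimensions: a 7-joint arm (positions + velocities).
model = ForwardModel(state_dim=14, action_dim=7)
state = torch.zeros(1, 14)
target = torch.ones(1, 14)
next_action = mpc_action(model, state, target, action_dim=7)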

This project underscores control-system design, deep learning integration, and the practical application of both in robotics. It is a step toward precise, controlled movement of robotic arms in applications where accuracy and reliability are paramount, and it paves the way for more versatile and efficient robotic systems.

Done at Columbia University, New York

Intrigued?
Click here to watch more

SHERLOCK

A quadruped robot designed and developed from scratch!

Watch here
I undertook a challenging and rewarding project focused on the design, fabrication, and optimization of a robotic quadruped. This multifaceted endeavor combined mechanical engineering, computer-aided design, 3D printing, and artificial intelligence to create a functional and agile quadruped robot.

CAD Modeling and 3D Printing: Using Fusion 360, I designed a CAD model of the quadruped robot, considering every aspect of its mechanical structure. The CAD model served as the blueprint for the physical robot, which I 3D printed with precision and attention to detail, ensuring structural integrity and lightweight construction.

WiFi Communication and Remote Control: I configured the hardware for seamless WiFi communication between the robot and an on-board computer, enabling remote control and real-time monitoring. This feature allowed for convenient and flexible operation of the robot, enhancing its usability and versatility.

Gait Pattern Development: One of the project's highlights was developing two distinct automated gait patterns for the quadruped. Using optimization techniques and evolutionary algorithms to tune the gaits (a sketch of this kind of search is shown below), the robot reached a top speed of 11 cm/s while maintaining stability and control.
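For illustration, the sketch below shows the general shape of an evolutionary search over gait parameters. The gait encoding, population settings, and the measure_speed() fitness function are hypothetical placeholders; in practice the fitness would be measured on the robot or in a simulator, and the project's actual algorithm may differ.

import random

# Each gait is a flat list of parameters, e.g. (amplitude, phase) per leg,
# driving a shared oscillator: angle_i(t) = amplitude_i * sin(2*pi*f*t + phase_i).
N_PARAMS = 8          # hypothetical: 4 legs x (amplitude, phase)
POP, GENERATIONS = 20, 50


def measure_speed(gait):
    """Hypothetical fitness function: run the gait on the robot (or in a
    simulator) for a fixed time and return forward speed in cm/s."""
    raise NotImplementedError


def mutate(gait, sigma=0.1):
    """Add small Gaussian noise to every gait parameter."""
    return [g + random.gauss(0.0, sigma) for g in gait]


def evolve_gait():
    """Simple evolutionary search: keep the fastest gaits, mutate them."""
    population = [[random.uniform(-1.0, 1.0) for _ in range(N_PARAMS)]
                  for _ in range(POP)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=measure_speed, reverse=True)
        parents = scored[:POP // 4]                     # keep the fastest gaits
        population = parents + [mutate(random.choice(parents))
                                for _ in range(POP - len(parents))]
    return max(population, key=measure_speed)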

This project showcases the successful integration of mechanical design, software development, and optimization techniques to create a functional and agile robotic system. The quadruped's capabilities have far-reaching potential, from robotics research and education to applications in surveillance, exploration, and more. In the future, I plan to further refine the gait patterns, implement additional sensors for enhanced autonomy, and explore collaborative behaviors among multiple robots.




Done at Columbia University, New York


Creating a Simulator to Teach a Custom Robot to Move


I created a simple simulator in Python that models basic physics properties such as gravity, friction, coefficient of restitution, and damping.

Using these properties, I made a cubical wireframe structure bounce and "breathe" to verify that the physics works correctly (a minimal sketch of this kind of update step is shown below).
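As a minimal sketch of the kind of update step such a simulator performs, the snippet below advances a single point mass bouncing on the ground plane. The constants and the explicit-Euler integration are illustrative assumptions; the actual simulator also handles the wireframe springs and inter-cube forces described next.

GRAVITY = -9.81       # m/s^2
DT = 0.001            # time step, s
RESTITUTION = 0.7     # fraction of vertical speed kept after a bounce
FRICTION = 0.8        # fraction of horizontal speed kept on ground contact
DAMPING = 0.999       # global velocity damping per step


def step(pos, vel):
    """Advance one point mass by one explicit-Euler time step.

    pos, vel are [x, y, z] lists with z up; the ground is the plane z = 0.
    """
    vel = [v * DAMPING for v in vel]          # damping
    vel[2] += GRAVITY * DT                    # gravity
    pos = [p + v * DT for p, v in zip(pos, vel)]

    if pos[2] < 0.0:                          # ground contact
        pos[2] = 0.0
        vel[2] = -vel[2] * RESTITUTION        # bounce with energy loss
        vel[0] *= FRICTION                    # crude sliding friction
        vel[1] *= FRICTION
    return pos, vel


# A dropped mass point bounces lower and lower, as expected.
pos, vel = [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]
for _ in range(5000):
    pos, vel = step(pos, vel)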

In addition, I combined a series of wireframe cubes into a custom robot and checked that the physics still holds for multiple cubes, taking into account the forces acting between them.

This simulator can now be used to teach the custom soft robot to walk using Evolutionary Algorithms (EAs). The main advantage of building your own simulator is that it is much easier to integrate with your EAs, and it won't crash on the extreme or invalid parameter values these algorithms sometimes generate.

Watch the video below for more


Done at Columbia University, New York


The Traveling Salesman Problem


The Traveling Salesman Problem (TSP) is the problem of finding, given a list of cities and the distances between them, the shortest path that joins all the cities while visiting each of them only once.

Evolutionary Algorithms are best suited to problems that are hard to solve but easy to test. The TSP is one such problem: the objective, the total length of the path, is cheap to compute. This means that on every iteration the candidate solution can be improved by calculating the current path length and comparing it with the previous one.

In this case, four different methods were used to tackle the problem, each with a different efficiency (a sketch of the hill climber is shown below):
    Random Search
    Random Mutation Hill Climber
    Beam Search
    Genetic Algorithms
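Of the four methods, the Random Mutation Hill Climber is the easiest to sketch. The snippet below assumes a precomputed distance matrix dist and uses a simple two-city swap as the mutation; beam search and the genetic algorithm follow the same evaluate-and-compare idea, just keeping more candidates per iteration.

import random


def tour_length(tour, dist):
    """Total length of a path visiting the cities in the given order."""
    return sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))


def hill_climb(dist, iterations=100_000):
    """Random Mutation Hill Climber: swap two cities, keep the change
    only if the path gets shorter."""
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    best = tour_length(tour, dist)
    for _ in range(iterations):
        i, j = random.sample(range(n), 2)        # random mutation: swap two cities
        tour[i], tour[j] = tour[j], tour[i]
        length = tour_length(tour, dist)
        if length < best:
            best = length                        # keep the improvement
        else:
            tour[i], tour[j] = tour[j], tour[i]  # revert the swap
    return tour, best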

Below is an example video showing the performance of beam search on a dataset of 100 points over 1000 iterations.

Watch the video below | GitHub


Done at Columbia University, New York

Soft Robotics


 Abstract

Variable stiffness grippers can adapt to objects with different shapes and gripping forces. This paper presents a novel variable stiffness gripper (VSG) based on the Fin Ray effect that can adjust stiffness discretely. The main structure of the gripper includes the compliant frame, rotatable ribs, and the position limit components attached to the compliant frame. The stiffness of the gripper can be adjusted by rotating the specific ribs in the frame. Four configurations for the gripper were developed in this research: a) all ribs OFF (Flex) mode; b) upper ribs ON and lower ribs OFF (Hold) mode; c) upper ribs OFF and lower ribs ON (Pinch) mode; d) all ribs ON (Clamp) mode. Different configurations provide various stiffness for the gripper's finger to adapt to objects with different shapes and weights. To optimize the design, the stiffness analysis under various configurations and force conditions was implemented by finite element analysis (FEA). 3-D printed prototypes were constructed to verify the features and performance of the design concept of the VSG against the FEA results. The design of the VSG provides a novel idea for industrial robots and collaborative robots for adaptive grasping.

Click here to access the manuscript

In collaboration with Purdue University

Rehabilitation Robotics


 Abstract

A control design is presented for a cable-driven parallel manipulator for performing controlled motion assistance of a human ankle. Requirements are discussed for a portable, comfortable, and lightweight wearable device with a low-cost overall design and user-oriented operation. The control system utilizes various operational and monitoring sensors to drive the system and to obtain continuous feedback during motion, ensuring an effective recovery. This control system for the CABLEankle device is designed for both active and passive rehabilitation, to facilitate improvement in both joint mobility and surrounding muscle strength.

Click here to access the manuscript

In collaboration with the University of Rome Tor Vergata

Industrial Robotics


 Abstract

Delta robots are a subset of a class of robots called parallel robots. They are widely used in pick-and-place and assembly tasks requiring high motion accuracy. The basic idea of the design is a set of parallelogram structures attached to the output platform: three such parallelogram mechanisms ensure that the moving platform is restricted to purely translational motion relative to the base. This paper aims to improve the robot's motion accuracy by studying the parallelism error and verifying its influence on accuracy. Previous studies on error analysis in the motion of Delta robots mainly consider dimensional tolerance, joint clearance, and the driving error of each component, focusing on the position error of the moving platform. However, analysis of the parallelogram arm is equally important to minimize posture error and thereby reduce the pose error (orientation + position) of the mobile platform; any posture error in the mechanism results in a posture error of the moving platform. This article seeks to reduce this posture error by considering the main influencing factors: length error in the connecting rods and joint errors in the parallelogram mechanism. The work is divided into two sections. The first deals with the kinematic analysis and estimation of optimum dimensions. The second deals with error analysis using the kinematic approach and relates the pose and structural error types. Due to its analytical nature, this error model can also be used for sensitivity analysis to further enhance manipulator accuracy.

Click here to access the manuscript

In collaboration with BITS Pilani

prathyushivs@gmail.com
vi2158@columbia.edu
