Vision Assisted Pick & Place Autonomous Mobile Manipulator
Main Reasons
A smart, smoothly evolving environment is one of the strongest aspirations of modern life. The world is changing fast, and we increasingly want our daily routines supported by ideas and objects that can perform their tasks on their own with the least possible human intervention.
Our project is meant to design and implement one such system. This "Vision Assisted Pick & Place Autonomous Mobile Manipulator" will bring automation to everyday tasks. The idea is an autonomous robot that detects a particular object, moves towards it on its own, picks it up, and places it at the desired location. This makes the process of picking objects and placing them at their destinations fully automated, with no effort required from humans, which offers substantial benefits.
Theme of the Project
The theme of the project is to use image-processing algorithms to detect and recognise objects with the help of a camera, plan a motion pathway to reach the object, pick it up, and then perform real-time motion planning along an optimal path from the initial position to the final destination. In short, the aim is an autonomous pick-and-place mobile robot capable of performing these tasks on its own within a prescribed environment.
Need of Design
Our goal is to develop a real-time, intelligent, autonomous pick-and-place robot that can sense and interpret information collected from its environment in order to determine its position, the direction to the destination, the manipulator motion required for grasping, smooth navigation, and delivery of the object to its final destination as programmed. In both structured and unstructured environments, the robot will be able to perform the task safely without human intervention, making the process automated in settings such as manufacturing, transportation, and retail warehouses.
Objectives
Following are going to be some of the main objectives of the project:
Design of Mobile Manipulator
The design and implementation of a working mobile manipulator using ROS (Robot Operating System).
Object Detection & Image Processing
The proposed robot will have a camera installed on its body. One of the core objectives is to detect objects captured by the camera and convert them into signals the system can act on. Image-processing algorithms will make the system aware of the object, after which it can decide to pick the object up and place it at the desired location.
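As a minimal sketch of the detection idea, the snippet below finds the pixel centroid of a colour-thresholded region in an RGB frame using only NumPy on a synthetic image. The function name, colour bounds, and frame are illustrative assumptions, not the project's actual code; a real pipeline would work on live camera frames (e.g. via OpenCV).

```python
import numpy as np

def detect_object(frame, lower, upper):
    """Return the pixel centroid (x, y) of the region whose RGB values
    fall between `lower` and `upper`, or None if nothing matches.
    `frame` is an H x W x 3 uint8 array."""
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (int(xs.mean()), int(ys.mean()))

# Synthetic 100x100 frame with a red 10x10 "object" at rows 40..49, cols 60..69
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:50, 60:70] = (200, 30, 30)
centroid = detect_object(frame, lower=(150, 0, 0), upper=(255, 80, 80))
# centroid == (64, 44): x is the column index, y the row index
```

The centroid is the "signal" later stages consume: it tells the motion planner where, in image coordinates, the target lies.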
Automation
Automation is the biggest objective of this whole project. Industry is moving rapidly towards automation because it yields great results. This mobile manipulator will be an autonomous robot, performing pick-and-place tasks without any human help.
Use of Robotics Operating System (ROS)
ROS sits at the heart of robotics learning, development, and implementation. Using ROS is an objective in itself: it keeps the robot up to state-of-the-art standards and open to future development worldwide.
Project Implementation
The project implementation consists of the following steps:
System Calibration
Calibration is the initial programming of the robot: it defines the workspace as seen through the camera so that the manipulator can reach any visible location in that workspace. This is an important step whenever a camera is used to perceive the environment.
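One simple way to realise this calibration, sketched below under the assumption of a flat workspace, is to fit an affine map from camera pixels to manipulator coordinates using a few known reference points. The function names and the example correspondences are hypothetical; a full calibration would also account for camera intrinsics and lens distortion.

```python
import numpy as np

def fit_pixel_to_world(pixels, world):
    """Fit an affine map (x, y)_pixel -> (X, Y)_world from calibration
    point pairs by least squares. Returns a 3x2 matrix A such that
    [x, y, 1] @ A approximates [X, Y]."""
    P = np.hstack([np.asarray(pixels, float), np.ones((len(pixels), 1))])
    A, *_ = np.linalg.lstsq(P, np.asarray(world, float), rcond=None)
    return A

def pixel_to_world(A, px):
    """Map a single pixel coordinate into workspace coordinates."""
    return np.array([px[0], px[1], 1.0]) @ A

# Example: pixels of four workspace corners and their known positions
# (in metres) in the manipulator's base frame.
pixels = [(0, 0), (640, 0), (0, 480), (640, 480)]
world  = [(0.00, 0.00), (0.64, 0.00), (0.00, 0.48), (0.64, 0.48)]
A = fit_pixel_to_world(pixels, world)
pixel_to_world(A, (320, 240))  # ≈ [0.32, 0.24], the workspace centre
```

After this step, any object centroid reported by the vision module can be converted into a reachable target position for the arm.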
Object Recognition
Object recognition is the second part of the implementation. Once an object has been found in the camera view, the system compares it against the features of the target object stored in memory, using standard image-recognition techniques.
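The comparison step can be illustrated with a colour-histogram match, sketched below: a candidate region is scored against a stored target histogram by histogram intersection. The helper names, bin count, and threshold are assumptions for illustration; the project could equally use keypoint features or a learned detector.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Normalised per-channel colour histogram of an H x W x 3 image."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def matches_target(candidate, target_hist, threshold=0.9):
    """Score a candidate region against the stored target histogram by
    histogram intersection; scores near 1 mean very similar."""
    score = np.minimum(color_histogram(candidate), target_hist).sum()
    return bool(score >= threshold)

# Stored target: a uniformly red patch; candidates of the same and a
# different colour.
target = np.full((20, 20, 3), (200, 30, 30), dtype=np.uint8)
target_hist = color_histogram(target)
same  = np.full((15, 15, 3), (200, 30, 30), dtype=np.uint8)
other = np.full((15, 15, 3), (30, 30, 200), dtype=np.uint8)
matches_target(same, target_hist)   # True
matches_target(other, target_hist)  # False
```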
Motion Planning
After an object has been recognised, motion planning is needed to reach the object, pick it up, map the pathway towards the destination, and place the object there. For all of these steps we use the MoveIt package inside the Robot Operating System (ROS), which provides the motion-planning algorithms.
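To make the "map the pathway" step concrete, here is a minimal breadth-first search over a 2-D occupancy grid, which finds a shortest obstacle-free route between two cells. This is an illustrative toy, not what MoveIt does internally (MoveIt uses sampling-based planners such as those in OMPL); the grid and function name are assumptions.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2-D occupancy grid (0 = free,
    1 = obstacle). Returns a shortest 4-connected path from start to
    goal as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:   # walk back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
# Routes around the obstacle row: (0,0)→(0,1)→(0,2)→(1,2)→(2,2)→(2,1)→(2,0)
```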
Grasping Action
Grasping involves two steps: finding the right pose for the target object, then making the actual movement that brings the gripper into physical contact with the object. We will derive a graspable affordance for the object directly from the RGB-D point cloud acquired by the onboard camera.
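A very simple version of the first step, sketched below under the assumption of a roughly box-shaped object on a flat surface, computes a top-down grasp from the point cloud: the grasp position is the cloud centroid, the approach is straight down, and the gripper opening is sized from the object's narrower horizontal extent. All names and the clearance value are illustrative.

```python
import numpy as np

def grasp_pose_from_cloud(points):
    """Estimate a top-down grasp from an N x 3 point cloud (metres):
    position = centroid, approach = straight down, gripper opening =
    the object's narrower horizontal extent plus 1 cm clearance."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    extent = pts[:, :2].max(axis=0) - pts[:, :2].min(axis=0)
    opening = float(extent.min()) + 0.01   # grasp across the narrow side
    approach = np.array([0.0, 0.0, -1.0])
    return centroid, approach, opening

# Toy cloud for a 4 cm x 8 cm x 5 cm box centred on the origin
rng = np.random.default_rng(0)
cloud = rng.uniform([-0.02, -0.04, 0.0], [0.02, 0.04, 0.05], size=(500, 3))
centroid, approach, opening = grasp_pose_from_cloud(cloud)
```

A real affordance pipeline would additionally segment the object from the supporting plane and check the candidate grasp for collisions before execution.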
Project Benefits
Following are going to be some of the prominent benefits of the project:
Automation
A self-driven pick-and-place robot brings automation to a task that would otherwise have to be performed by hand.
Speed and efficiency
A robot is faster and more efficient than a human at repetitive handling tasks, and ours is no exception.
Consistent and saves time
Robots do not get bored performing repetitive tasks the way humans do, so the robot brings consistency and saves a great deal of time by doing the task efficiently.
Well-being of workers
With this project, industries will no longer need to assign workers the repetitive task of carrying items from one place to another. Sparing workers that fatigue is a positive step towards their health and well-being.
The project is implemented using the Robot Operating System (ROS) framework. The operation of the complete system is divided into several modules, each performing a specific task. Each module is made available as a node, the basic computation unit in a ROS environment. These nodes communicate with each other using topics, services, and the parameter server.
Topics are unidirectional streaming communication channels: data is continuously published by the generating node, and other nodes access it by subscribing to the topic. When a node needs a response from another node, it uses a service instead. All the modules are controlled by a central node named “XYZ controller”. The simulation environment and the RViz visualizer are also part of this system and are made available as independent nodes.
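The two communication patterns above can be contrasted with a small plain-Python model. This is not the actual ROS API (real nodes would use `rospy.Publisher`, `rospy.Subscriber`, and service proxies); the classes and topic names here are illustrative stand-ins for the semantics only.

```python
class TopicBus:
    """Stand-in for ROS topic semantics: publishers push messages and
    every subscriber callback registered on that topic receives them."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        for callback in self.subscribers.get(topic, []):
            callback(msg)

class ServiceServer:
    """Stand-in for ROS service semantics: one handler answers each
    request, and the caller waits for the response."""
    def __init__(self, handler):
        self.handler = handler

    def call(self, request):
        return self.handler(request)

bus = TopicBus()
detections = []
bus.subscribe("/object_position", detections.append)  # e.g. the controller node
bus.publish("/object_position", (0.32, 0.24))         # e.g. the vision node
# detections == [(0.32, 0.24)]

plan_service = ServiceServer(lambda goal: f"plan to {goal}")
plan_service.call((0.32, 0.24))  # returns "plan to (0.32, 0.24)"
```

The distinction matters for the design: streaming data such as camera detections fits topics, while one-shot operations such as requesting a motion plan fit services.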
| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| Raspberry Pi 4 | Equipment | 1 | 16000 | 16000 |
| Kinect Camera | Equipment | 1 | 9500 | 9500 |
| Mechanical Arm Gripper Clamp Kit with 3 Servo | Equipment | 1 | 6500 | 6500 |
| Robot Car Chassis | Equipment | 1 | 5000 | 5000 |
| Motor Drive Circuit | Equipment | 1 | 900 | 900 |
| DC Battery | Equipment | 1 | 4000 | 4000 |
| DC Motor | Equipment | 4 | 2500 | 10000 |
| Miscellaneous | Miscellaneous | 1 | 5000 | 5000 |
| Raspberry Pi Case | Equipment | 1 | 1000 | 1000 |
| DC Battery Charger | Equipment | 1 | 1000 | 1000 |
| Report Printing | Miscellaneous | 1 | 2000 | 2000 |
| Total (in Rs) | | | | 60900 |