
“Gesture Based Robot Control”
Major Project Report
Submitted in Partial Fulfillment of the Requirements for the Degree of
Bachelor of Technology
In
Instrumentation and Control Engineering
By
Piyush Parmar (14BIC032)
Happy Patel (14BIC036)

Department of Instrumentation and Control Engineering
Institute of Technology
NIRMA UNIVERSITY
Ahmedabad 382 481
May 2018
CERTIFICATE


This is to certify that the Major Project Report entitled “Gesture Based Robot Control” submitted by Mr. Piyush Parmar (14BIC032) and Mr. Happy Patel (14BIC036) towards the partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in the field of IC Engineering of Nirma University is the record of work carried out by them under our supervision and guidance. The work submitted has, in our opinion, reached a level required for being accepted for examination. To the best of our knowledge, the results embodied in this major project work have not been submitted to any other university or institution for the award of any degree or diploma.

Date:
Guide
Prof. Vidita R. Tilva
Assistant Professor
Nirma University
Dr. Dipak M Adhyaru
Head of Department
Department of IC Engineering
Institute of Technology
Nirma University
Ahmedabad

Dr. Alka Mahajan
Director
Institute of Technology
Nirma University
Ahmedabad
Acknowledgement

This project report is not the product of a solo effort; a great deal of time and work, by many people involved directly or indirectly, has gone into preparing it. Without them this project might not have seen the light of day, and it is our sincere desire to express our heartfelt thanks for their guidance and support.
We would like to thank our guide, Prof. Vidita Tilva, for her efforts and guidance throughout the semester; her vibrant pedagogy has truly helped us broaden the horizons of our knowledge.

We would also like to thank MathWorks for providing such amazing software as MATLAB.

Abstract
This report explains how a gesture-controlled robot is built using various techniques and methods, and what problems are faced during the process of building it.

This report presents the simple, effective and most innovative methods and processes in the field of gesture recognition and control. We have developed a new method of gesture recognition and named it the RAINBOW METHOD; it is far simpler and less error-prone than existing image-processing techniques. This report thoroughly explains how a gesture is recognized when it is required.

List of figures

Fig 1.1
Fig 3.1
Fig 3.2
Fig 3.3
Fig 3.4
Fig 3.5
Fig 3.6
Fig 3.7
Fig 3.8
Fig 3.9
Fig 3.10
Fig 3.11
Fig 3.12
Fig 3.13
Fig 4.1

List of tables

Table 3.1
Table 3.2
Table 3.3
CONTENTS
Acknowledgement
Abstract
List of Figures/Captions
List of Tables
Contents
Chapter 1: Introduction
Chapter 2: Literature Review
2.1 Different methods available for gesture detection
2.1.1 Accelerometer based gesture recognition
2.1.2 MEMS sensor based gesture recognition
2.1.3 Flex sensor based gesture recognition
2.1.4 Kinect 3D depth camera based recognition
2.1.5 Image processing method of gesture recognition
Chapter 3: Gesture Based Robot Control
3.1 Overview
3.2 Working principle
3.3 Controlling unit: Software processing
3.3.1 Image acquisition through smart phone camera
3.3.2 Color identification
3.4 Gesture
3.5 Proposed algorithm for gesture detection: Rainbow
3.6 Command executing unit: Hardware
3.6.1 HC-05 Bluetooth Module
3.6.2 Arduino Uno Controller
3.6.3 Motor driver L293D
Chapter 4: Results
Chapter 5: Applications
5.1 Boon for physically challenged
5.2 Medical applications
5.3 Military applications
5.4 Rescue missions
Appendix

Undertaking for Originality of the Work
We, Piyush Parmar (Roll No. 14BIC032) and Happy Patel (Roll No. 14BIC036), give the undertaking that the Major Project entitled “Gesture Based Robot Control”, submitted by us towards the partial fulfillment of the requirements for the degree of Bachelor of Technology in Instrumentation and Control Engineering of Nirma University, Ahmedabad, is original work carried out by us, and we give assurance that no attempt at plagiarism has been made. We understand that in the event of any similarity subsequently found with any other published work or any project report elsewhere, it will result in severe disciplinary action.
__________________ ___________________
Piyush Parmar Happy Patel
Date: _______________
Place: _______________
Endorsed by:
(Signature of Guide)
Prof. Vidita Tilva,
Asst. Professor,
Institute of Technology, Nirma University.

Chapter 1: Introduction
Nowadays there is fierce competition among technology companies to bring innovative products to the market. Technology is changing even faster than it was a few years back. This change focuses on the luxury, efficiency and needs of people of all categories, classes and cultures, and hardly anyone has been devoid of exposure to technology in the past couple of years; these technological innovations have had a huge impact on people's lives. First we controlled robots and machinery with buttons on control panels; then those buttons were replaced by touch screens; the future is gestures and telepathically controlled devices. Seeing the pace of innovation in the market, it is clear that in the next few years we will be using gestures, and perhaps telepathy, in our day-to-day lives.
Robotics is becoming one of the most advanced fields in technology, with applications mainly in the medical sector, automobiles, defense, research, etc.
However, as the applications of a particular robot increase, we need to add more functions to it to make sure that everything is under control. As the functions increase, we need more buttons on the hand-held electromechanical device called a remote.

It then becomes difficult to memorize all the buttons and their control actions. If we could replace the conventional remote with something we use daily, we would not need to memorize anything; this problem can be solved by using gestures instead of buttons to control robots.

The future of gesture-based technology is very bright. Nowadays almost any smart device involves some gesture-based control. Take the example of a laptop: there are sensors available in the market that can turn a laptop into a 3D touch device, which is more comfortable than moving the pointer on the screen with a mouse.

Gesture literally means “an expressive movement of a part of the body”. Gestures are extensively employed in human non-verbal communication. They allow us to express orders (e.g. “stop”, “come”, “don't do that”), mood states (e.g. the “victory” gesture), or to transmit some basic cardinal information (e.g. “one”, “two”).

What's more, in some circumstances they can be the main method of communication, as in the case of deaf individuals (sign language).

The main purpose of gesture recognition research is to identify a particular human gesture and convey the information pertaining to that gesture to the user.

In the past, robots were controlled by human operators using hand-held electromechanical control devices. Such devices limit the speed and naturalness of human interaction with machines.

Moreover, using a hand-held electromechanical device requires some basic knowledge or experience, while with gesture-based control an ordinary person with a general knowledge of gestures can also control machines or robots. Some special gestures can be used to control full humanoids for search operations in space or in places humans cannot reach. Likewise, in remote locations without proper medical facilities, where an expert doctor cannot reach the operation theater in time, this method is nothing less than a blessing: we can arrange a communication link to an operation robot and perform surgery or implantation, using gestures given by the expert, without the doctor traveling to the actual location.

Our focus is on gestures. Human civilization has been using gestures for communication since before the existence of languages. Gesture is the mode of communication in which we use signs and symbols to express our feelings and thoughts. The same gesture may have different meanings for different people: for example, fig 1.1 shows a gesture which in different places may signify good luck or OK, while it is considered a curse in Australia.

Fig 1.1
Some of the latest mobile phones have started using gestures to control features of the phone: for example, shaking for the flashlight, blinking or smiling for capturing photos, a triple touch for a screenshot, etc.
There are various types of gestures, for example hand gestures, body gestures, face gestures, motion gestures, touch gestures, etc.

Of all the types of gestures, hand-gesture-based human-computer interaction applications are currently being developed most vigorously. The advantage of these applications is that users can control devices without touching anything such as a panel, keyboard or mouse; users just face the camera and raise their hands. Among the various types of gestures, hand gestures are easy to use and more convenient for communication.

The various techniques developed in human-computer interaction (HCI) can be extended to other areas, such as surveillance, robot control, and teleconferencing. Among these, sign language recognition has become an active topic of research, as it provides an opportunity for the hearing impaired to communicate with others without the need for an interpreter.

The detection and understanding of hand and body gestures is becoming an important and challenging task in computer vision.

A novel recognition scheme has been developed which exploits shape-based static features as well as motion-based properties of the gesture. The set of gestures considered can be divided broadly into two classes:
1. Gestures for which shape of the hand remains unchanged over a motion sequence
2. Gestures for which shape of the hand changes from frame to frame.

We have used both classes to control our robot.

Thus, seeing that the future belongs to this technology, we have tried our best to improve on existing gesture technology and have introduced an innovative new method of gesture-based robot control.

Chapter 2: Literature Review
There is a plethora of methods for gesture recognition and control available in the literature. The prevailing method involves hand tracking and a template matching algorithm for the detection of gestures.

2.1 Different methods available for gesture detection
Different methods available for gesture detection are as follows:
1) Accelerometer based gesture recognition
2) MEMS sensor based gesture recognition
3) Flex sensor based gesture recognition
4) Microsoft Kinect 3D depth camera based recognition
5) Image processing
2.1.1 Accelerometer based gesture recognition
This technique can only recognize motion-based gestures, such as tilting the hand upward, downward, left or right. Its limitation is that it cannot recognize the actual hand gesture, and the number of detectable gestures is limited.

2.1.2 MEMS sensor based gesture recognition
This is essentially the same as accelerometer-based gesture recognition; in fact, the accelerometer is one class of MEMS sensor, so it has the same limitations.

2.1.3 Flex sensor based gesture recognition
A flex sensor works by measuring the amount of deflection caused by bending the sensor, so it can be used where we need to detect a physical change in the gesture, like stretching the arms or fingers. For controlling a robotic hand, it can be used to control every link of the robotic arm via the physical human hand. However, this type of gesture control requires many electrical connections to the human body, which can be dangerous or annoying at times.

2.1.4 Microsoft Kinect 3D depth camera based recognition
This method is used to recognize full human body gestures and to control robotic applications. It is a good approach, but it requires additional software for data acquisition and processing, and the calculations and algorithms for gesture recognition are complex.

Another technique is motion tracking of the gesture, where the background and the gesture must be compared frame by frame to find the motion direction and recognize the pattern.

None of the existing methods provides a robust solution for gesture recognition, as they involve a predefined template to which the real-time gesture is compared. The methods described above either are limited in the number of gestures detected or involve complex calculation and processing, which consumes much of the processing device's memory.

Thus, we have used the method that is currently hot among researchers: the image-processing-based method.

2.1.5 Image processing method of gesture recognition
There are also some pre-existing methods available for image-processing-based gesture recognition:

Fuzzy clustering algorithm based gesture recognition
Histogram based gesture detection
And many more…

But all these methods are neither perfect nor universal; thus we have introduced a new method using a simple algorithm, which is very user-friendly and accurate: the Rainbow method.

Chapter 3: Gesture Based Robot Control
Here we discuss the methods we have used to control the gesture-based robot.

3.1 Overview
First, we give the gesture input by hand, wearing a glove whose fingers are colored. Then we capture the gesture using the camera of a mobile phone and transmit it wirelessly to a PC, where image-processing software recognizes the gesture. After identifying the gesture, we transmit the gesture number to the controller by wireless communication, and the robot then takes the desired action.

3.2 Working principle
The basic working principle of almost all types of gesture-based robots can be explained by the following block diagram:

Fig 3.1 Block diagram
First, the gesture is given. Then the gesture is captured by the gesture-capturing device and transmitted wirelessly to the PC, where it is recognized; after identification, the identified gesture is transmitted to the controller by wireless communication, and the command is given to the robot.

Thus this method is a great combination of manual input, software processing and hardware output.

3.3 Controlling unit: Software processing
In software processing we use the innovative Rainbow method of gesture detection, which is based on color segmentation: we detect colors and from them identify gestures. In total we detect four colors for identifying gestures. In order to detect colors we first need a photo or video; in our project we use smartphone-based image acquisition.

3.3.1 Image acquisition through android smart phone camera

Today everybody uses Android smartphones in day-to-day life, and these smartphones have very good cameras attached to them. So we have used an Android smartphone camera to acquire data for image processing and, ultimately, for gesture detection.

An application is available with whose help we can transmit live camera data to the image-processing software. Nowadays nearly every device is equipped with Wi-Fi and Bluetooth modules. The application, named IP Webcam, can turn a smartphone camera into a wireless IP camera that communicates with other devices via the Wi-Fi wireless protocol. It exposes many parameters apart from the camera, e.g. the other sensors of the phone, but for our project we have used only the camera. The parameter settings are described as follows:

Fig 3.2
As per fig 3.2, we first need to set the video preferences for which our algorithm is defined, for correct static gesture tracking. Therefore we set the video resolution to 320×240 pixels in the video resolution setting. In addition, you can adjust other parameters like quality or the FPS limit to get a clear view of the gesture.

Fig 3.3
For a different resolution setting, we would need to recalibrate the static gesture recognition parameters in fig 3.3, because the whole image is divided into regions by pixel resolution.

Further, we can set a login ID and password to protect the stream from unauthenticated access, so that no one can interfere with the communication between the image-processing device (i.e. the laptop) and the smartphone. In the connection settings, you can also define your own static IP address and port for communication; otherwise the device IP address is taken as the default for transmitting the camera signal (fig 3.4).

Fig 3.4
After doing all the necessary settings, start the server for transmitting the camera signal. The IP address displayed on the screen can then be used to receive the camera data on the desired device.
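As a concrete illustration, the stream served by IP Webcam can be addressed as follows. This is a minimal Python sketch: port 8080 and the /video and /shot.jpg endpoint names are the app's documented defaults, and the IP address in the example is hypothetical.

```python
def ipwebcam_urls(ip, port=8080):
    """Build the URLs under which the IP Webcam app serves its camera signal.

    /video is the live MJPEG stream and /shot.jpg returns a single frame;
    8080 is the app's default port. Any image-processing tool that accepts
    a network stream (e.g. MATLAB or OpenCV) can read from the stream URL.
    """
    base = "http://{}:{}".format(ip, port)
    return {"stream": base + "/video", "snapshot": base + "/shot.jpg"}


# Example with a hypothetical address shown by the app after "Start server":
urls = ipwebcam_urls("192.168.0.101")
```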

Now that we have acquired the image, we have to detect gestures, and for that we use the RAINBOW METHOD.

3.3.2 Color identification
We have identified the colors using two methods:
Segmentation based detection and
Color thresholding
Segmentation based color detection
It can be explained using the following flowchart.

Flow diagram
Image acquisition → Convert to grayscale → Extract red components from image → Median filter to filter out noise → Threshold image to binary → Remove small detected areas → Extract properties of detected red color

Fig. 3.5
This flow diagram is explained in table 3.1:
1. Capture the image from the image acquisition device.
2. Convert the RGB image to grayscale and extract the red components by subtracting the grayscale image from the red layer of the original image.
3. Use a median filter to filter out noise from the image.
4. Convert the image to a binary image to remove extra detected areas, and remove small areas to find the exact location of the red color.
5. Extract properties of the detected red object, such as centroid, area, etc.

Table 3.1
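The steps of table 3.1 can be sketched in code. The project itself uses MATLAB, so the following Python version is only an illustration of the idea; the threshold value of 40 is an assumption rather than the project's calibrated value, and the median-filter step is omitted for brevity.

```python
def detect_red_mask(pixels, thresh=40):
    """Segmentation-based red detection on a nested list of (R, G, B) pixels.

    Mirrors table 3.1: convert each pixel to grayscale, subtract the
    grayscale value from the red channel to isolate "redness", then
    threshold to a binary mask. (thresh=40 is an assumed value.)
    """
    mask = []
    for row in pixels:
        mask_row = []
        for r, g, b in row:
            gray = (r + g + b) // 3        # grayscale conversion
            red_component = r - gray       # red layer minus grayscale
            mask_row.append(1 if red_component > thresh else 0)
        mask.append(mask_row)
    return mask


def centroid(mask):
    """Centroid (row, col) of the detected region, as in the 'extract
    properties' step; returns None when nothing was detected."""
    coords = [(i, j) for i, row in enumerate(mask)
              for j, v in enumerate(row) if v]
    if not coords:
        return None
    return (sum(i for i, _ in coords) / len(coords),
            sum(j for _, j in coords) / len(coords))
```

White or gray pixels cancel out in the subtraction, so only strongly red pixels survive the threshold.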
Color thresholding
It can be explained using the following flow diagram.

Flow diagram
Image acquisition → Extract red components from image → Extract green components from image → Extract blue components from image → Threshold image to find the specific color using its pixel values → Median filter to filter out noise → Remove small detected areas → Extract properties of the detected specific color
Fig. 3.6
This flow diagram is explained in table 3.2:
1. Acquire the image using the camera.
2. Extract the red components from the image.
3. Extract the green components from the image.
4. Extract the blue components from the image.
5. Threshold the image between the pixel values of yellow to find the exact location of the yellow color, and use a median filter to remove noise.
6. Extract properties of the detected yellow object, such as centroid, area, etc.

Table 3.2
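The thresholding step of table 3.2 can be sketched as follows. This is again a Python illustration of the MATLAB pipeline; the channel limits of 150 and 100 are assumed values for "yellow", not the project's calibrated ones, and the median-filter and small-area-removal steps are omitted.

```python
def detect_yellow_mask(pixels, lo=150, hi_blue=100):
    """Color-thresholding detection of yellow, per table 3.2.

    A yellow pixel has high red AND high green components but a low blue
    component; the limits lo=150 and hi_blue=100 are illustrative
    assumptions, to be calibrated for real lighting conditions.
    """
    return [[1 if (r >= lo and g >= lo and b <= hi_blue) else 0
             for r, g, b in row]
            for row in pixels]
```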
Similarly, we can detect the green and blue colors. After detecting the colors, we calibrate them in terms of gestures; for example, if only the green color is detected, one particular gesture is detected. The actual gestures we have used are listed in the next section.

3.4 Gesture
The list of gestures we have used, and the functions carried out by the robot for each, is as follows:
We have used both types of gestures, i.e. static and dynamic. In a static hand gesture the hand shape does not change, but its position changes and control actions take place accordingly; in a dynamic gesture the hand shape itself changes. For the static gesture we have used the red color: when only red is detected, the joystick gesture is activated, which means that depending on the position of the red color on the screen the mo-bot is moved forward, in reverse, left, right, or stopped. The exact pixel regions are shown in fig 3.7.

Fig 3.7
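The joystick logic can be sketched as a function of the red marker's centroid. This is an illustrative Python version: the 320×240 frame size matches the video setting above, but the region boundaries (a ±15% dead zone around the center) are assumptions; the exact pixel boundaries are those of fig 3.7.

```python
def joystick_action(cx, cy, width=320, height=240, dead=0.15):
    """Resolve the static joystick gesture from the red marker's centroid
    (cx, cy) in a width x height frame. The central dead zone of +/-15%
    is an assumed boundary, not the project's exact regions."""
    dx = (cx - width / 2) / width     # signed horizontal offset, -0.5..0.5
    dy = (cy - height / 2) / height   # signed vertical offset, -0.5..0.5
    if abs(dx) < dead and abs(dy) < dead:
        return "Stop"                 # centroid near the center: stop
    if abs(dy) >= abs(dx):            # vertical offset dominates
        return "Forward" if dy < 0 else "Backward"
    return "Left" if dx < 0 else "Right"
```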
Thus, the total list of all gestures and their function is as shown as follows:
Colors detected: Robot action taken
Red: Forward / Backward / Left / Right, based on the location of the center of the red color in the respective region shown in fig 3.7
Green: Fast speed
Blue: Left circle
Yellow: Right circle
Red + Green: Headlight on
Red + Blue: Horn on
Red + Yellow: Medium speed
Green + Blue: Low speed
Green + Yellow: Headlight off
Blue + Yellow: Horn off
Red + Green + Blue: Robotic arm up
Red + Green + Yellow: Robotic arm down
Red + Blue + Yellow: Robotic arm adjustment up
Green + Blue + Yellow: Robotic arm adjustment down
Red + Green + Blue + Yellow: User defined
No color: Stop
Table 3.3
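Table 3.3 is naturally expressed as a lookup from the set of detected colors to a robot action. The following Python sketch mirrors the table; the function name and string labels are ours, not part of the project code.

```python
# Gesture lookup per Table 3.3: set of detected colors -> robot action.
GESTURE_ACTIONS = {
    frozenset(): "Stop",
    frozenset({"green"}): "Fast speed",
    frozenset({"blue"}): "Left circle",
    frozenset({"yellow"}): "Right circle",
    frozenset({"red", "green"}): "Headlight on",
    frozenset({"red", "blue"}): "Horn on",
    frozenset({"red", "yellow"}): "Medium speed",
    frozenset({"green", "blue"}): "Low speed",
    frozenset({"green", "yellow"}): "Headlight off",
    frozenset({"blue", "yellow"}): "Horn off",
    frozenset({"red", "green", "blue"}): "Robotic arm up",
    frozenset({"red", "green", "yellow"}): "Robotic arm down",
    frozenset({"red", "blue", "yellow"}): "Robotic arm adjustment up",
    frozenset({"green", "blue", "yellow"}): "Robotic arm adjustment down",
    frozenset({"red", "green", "blue", "yellow"}): "User defined",
}


def action_for(colors):
    """Map a set of detected colors to a robot action. A lone red color
    is the joystick gesture, resolved separately by position (fig 3.7)."""
    colors = frozenset(colors)
    if colors == frozenset({"red"}):
        return "Joystick (by position)"
    return GESTURE_ACTIONS.get(colors, "Stop")
```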
These gesture assignments can also be changed by changing the positions of the colors on the fingers; the actions above are for the arrangement of colors on the fingers shown in fig 3.8.

Fig 3.8
3.5 Proposed algorithm for gesture detection: Rainbow
This is an innovative method that we are using for the first time in gesture detection; we have named it the Rainbow method. We call it the Rainbow method because we detect colors and then identify gestures from them; it is an indirect method of gesture detection. Previously existing methods use binary (black-and-white) images to detect gestures, but we detect colors, which is another reason for the name. The major part of this method is explained below:
We have utilized a color-segmentation-based algorithm for detecting the red color, and building on it we have introduced a new algorithm called RAINBOW for detecting different gestures using four different colors: red, green, blue and yellow.

Fig 3.9 explains the algorithm for extracting the gesture from the image using image processing. When only the single red color is detected, we move to static gesture recognition, where the gesture shape remains constant but the motion and location of the color in the image determine the joystick gesture. For the different combinations of colors detected, we find the derived gesture, which is used to control other movements of the robot.

The flow is: RGB image acquisition → color segmentation and thresholding for detecting the desired colors from the image → if no color is detected, Stop → if only the red color is detected, resolve the joystick gesture from the region the red color is in (forward region → Forward; backward region → Backward; left region → Left; right region → Right; at center → Stop) → if a combination of colors is detected, determine the gesture as per Table 3.3 → gesture determined.
Fig 3.9

3.6 Command executing unit: Hardware
After the software part, let us move on to the hardware part:
Fig 3.10 shows the two sections: on the software side, the mobile camera streams images to the processing device over Wi-Fi; on the hardware side, the device's Bluetooth sends commands to the HC-05 module on the application robot.
Fig 3.10

Of the two sections of gesture-based robot control, we have completed the controlling unit, where image processing and gesture detection are done. For communication between the actual robot and the image-processing software, we have used the HC-05 Bluetooth module to receive data and control the movement and direction of the robot. The following hardware components are required to control the movement of the mobot:

HC-05 Bluetooth module
Arduino Uno (ATmega328P) controller
Motor driver L293D
DC motors
Servo motors
Chassis
12 V DC battery
3.6.1 HC-05 Bluetooth Module
Bluetooth is an open wireless standard for connectivity, originating largely in the PC and cell phone industries. Nowadays, for short-range communication, the Bluetooth protocol is very useful. It is better than a plain RF link because it does not require line of sight between the sender and receiver devices. Bluetooth also provides significant advantages over other data transfer technologies: plain RF device communication speed is limited and lower than Bluetooth's, while Bluetooth, although short-range, is capable of real-time data transfer at high data rates, so it is preferred for quick command execution at the robot end of the application. It is also a low-power (3.6 to 4.5 V) and low-cost device.

The HC-05 module is capable of serial data transfer and is mainly made for interfacing with an Arduino board, or other such controller boards, for serial communication.

The HC-05 module has two modes of operation: it can work as either a master or a slave communication device, but in our project we have used it only as a slave.

Fig 3.11
In the above figure there are six pins in total for communication, but only four of the six are needed in slave mode.
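The report does not specify the byte format sent over the HC-05 link, so the following Python sketch shows one plausible PC-side framing, with single hypothetical ASCII command characters per action; the actual characters used by the project are not stated.

```python
# Hypothetical one-byte command protocol between the PC and the Arduino
# over the HC-05 serial link; these command characters are assumptions
# for illustration only.
COMMANDS = {
    "Forward": b"F", "Backward": b"B", "Left": b"L",
    "Right": b"R", "Stop": b"S",
}


def encode_command(action):
    """Encode a recognized gesture action as the single byte to write to
    the Bluetooth serial port (e.g. with a serial library on the PC)."""
    return COMMANDS.get(action, b"S")   # unknown actions fail safe to Stop
```

On the Arduino side, the sketch would read one byte from the serial port and switch on it to drive the motor pins.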

3.6.2 Arduino Uno Controller
The controller is the main part of any robotic application. Here we have used the Arduino Uno controller board for controlling the movement of the robot.

The Arduino Uno is a microcontroller board based on the ATmega328P.

It has 14 digital input/output pins, of which six can be used as PWM outputs.

It has six analog inputs and a 16 MHz quartz crystal for running code. It can be powered through the power jack, by connecting to a computer with a USB cable, or with an AC-to-DC adapter or battery. The Arduino board has an open-source IDE for writing code, and virtually all sensors and actuators can be interfaced with the board; libraries for such sensors and actuators are easily available.

Fig.3.12
3.6.3 Motor driver L293d
The L293D is a motor driver IC that allows us to control the direction of a DC motor.

It works on the concept of the H-bridge, a circuit which allows current to flow in either direction. The motor driver IC has four inputs for PWM, to control the direction of two motors.

A separate 12 V DC battery is connected to the motor driver. The maximum voltage that can be supplied is 36 V, at 600 mA per channel.

Fig 3.13
The motor driver moves the motor using the following commands:
Command 1: High, Command 2: High → STOP
Command 1: Low, Command 2: Low → STOP
Command 1: High, Command 2: Low → FORWARD
Command 1: Low, Command 2: High → BACKWARD
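The truth table above maps directly to a small lookup, shown here as a Python sketch for illustration (on the actual robot this logic runs on the Arduino).

```python
# Truth table of the L293D direction inputs for one motor, as in the
# table above: the pair (input1, input2) selects the motion.
L293D_MOTION = {
    (1, 1): "STOP",
    (0, 0): "STOP",
    (1, 0): "FORWARD",
    (0, 1): "BACKWARD",
}


def motion(in1, in2):
    """Return the motor motion for the logic levels on the two inputs."""
    return L293D_MOTION[(int(bool(in1)), int(bool(in2)))]
```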
Chapter 4: Results
For the convenience of the user, and to make the system user-friendly, we have created the GUI shown in fig 4.1. In this GUI we have provided a camera slot which shows the gesture footage; the colors detected are shown in the small boxes at the top, and we also display which gesture is detected and which actions take place.

Fig 4.1
When the start switch is pressed, the Bluetooth is connected, the camera footage starts showing up in its reserved slot, and gesture detection begins with the help of the Rainbow method. Whatever colors are detected by this method are shown in the color boxes. Our self-assessment of some of the parameters is shown in table 4.1.

Parameter: Comments
User friendliness: Highly user-friendly and easy to use.
Delay: Less delay compared to other methods.
Accuracy: Very high compared to other methods.
Processing speed: Fast.
Adaptability: Very high; it can be changed depending upon the user.

Table 4.1

Chapter 5: Applications
There are bountiful illustrations of the applications of gesture-based robots; they can be used anywhere we want to take a control action. For example, gestures can be used to control the motion of a robotic arm or to move a mobile robot; they can also be used in home automation applications, such as turning lights and fans on and off by giving gestures. Such applications can also reduce hardware: for instance, gestures could replace the mouse of our personal computers and laptops.

5.1 Boon for physically challenged
The major advantage of this technology goes to our physically challenged friends. For a person who is not able to use his legs, we can design a wheelchair that can be moved just by giving hand gestures. For a person with weak arms, we can design a robotic hand to which he can give gestures, with the rest done by the robotic arm.

The best example for understanding the application and advantages of gesture-based robots for the physically challenged is the wheelchair used by the late scientist Stephen Hawking; it was not just a chair but a full robot, designed and built by Intel. As we all know, Stephen Hawking was able to move only one finger; by giving gestures with that one finger he performed multiple tasks, such as moving the wheelchair, controlling its speed, and communicating. Because he used a limited vocabulary of gestures, this method was successful for him. This shows how this method is a boon for the physically challenged.

5.2 Medical applications

In medical applications it can serve as a virtual doctor, for which the doctor need not be present at the operation location, which may be thousands of miles away. Using a gesture-based robot, the doctor's actions are copied by the robot and the operation takes place.

It can also be useful for operations in parts of the body that are difficult for the doctor to reach or operate on.

5.3 Military applications

In military applications, we can use these robots for surveillance and spying on enemy troops.

They can also take the place of soldiers in combat situations.

5.4 Rescue missions
We can use these kinds of robots in situations where someone must be rescued from danger without putting another person's life at risk.
