Saturday, October 30, 2010

Robotics - What is Robotics?



Roboticists develop man-made mechanical devices that can move by themselves, whose motion must be modelled, planned, sensed, actuated and controlled, and whose motion behaviour can be influenced by “programming”. Robots are called “intelligent” if they succeed in moving in safe interaction with an unstructured environment, while autonomously achieving their specified tasks.

This definition implies that a device can only be called a “robot” if it contains a movable mechanism, influenced by sensing, planning, actuation and control components. It does not imply that a minimum number of these components must be implemented in software, or be changeable by the “consumer” who uses the device; for example, the motion behaviour can have been hard-wired into the device by the manufacturer.

So, the presented definition, as well as the rest of the material in this part of the WEBook, covers not just “pure” robotics or only “intelligent” robots, but rather the somewhat broader domain of robotics and automation. This includes “dumb” robots such as: metal and woodworking machines, “intelligent” washing machines, dish washers and pool cleaning robots, etc. These examples all have sensing, planning and control, but often not in individually separated components. For example, the sensing and planning behaviour of the pool cleaning robot have been integrated into the mechanical design of the device, by the intelligence of the human developer.

Robotics is, to a very large extent, all about system integration: achieving a task with an actuated mechanical device through an "intelligent" integration of components, many of which it shares with other domains, such as systems and control, computer science, character animation, machine design, computer vision, artificial intelligence, cognitive science, biomechanics, etc. In addition, the boundaries of robotics cannot be clearly defined, since its "core" ideas, concepts and algorithms are being applied in an ever-increasing number of "external" applications, and, vice versa, core technology from other domains (vision, biology, cognitive science or biomechanics, for example) is becoming a crucial component of more and more modern robotic systems.

This part of the WEBook makes an effort to define exactly what that above-mentioned core material of the robotics domain is, and to describe it in a consistent and motivated structure. Nevertheless, the chosen structure is only one of the many possible "views" that one can take of the robotics domain.

In the same vein, the above-mentioned “definition” of robotics is not meant to be definitive or final, and it is only used as a rough framework to structure the various chapters of the WEBook. (A later phase in the WEBook development will allow different “semantic views” on the WEBook material.)

Components of robotic systems



This figure depicts the components that are part of all robotic systems. The purpose of this Section is to describe the semantics of the terminology used to classify the chapters in the WEBook: “sensing”, “planning”, “modelling”, “control”, etc.

The real robot is some mechanical device (“mechanism”) that moves around in the environment, and, in doing so, physically interacts with this environment. This interaction involves the exchange of physical energy, in some form or another. Both the robot mechanism and the environment can be the “cause” of the physical interaction through “Actuation”, or experience the “effect” of the interaction, which can be measured through “Sensing”. 

Motion Control [Build A Robot]




Do you want to build a robot? A good place to start is with the servo control systems - the robot's muscles!

"What is servo control?" Imagine a simple motor. If you connect it to a battery, it will start spinning. If you connect two batteries, it will spin faster. Now imagine you tell the motor to turn precisely 180 degrees (1/2 revolution) and stay there no matter how many batteries there are. That's servo control.

Central to the task of servo control is the concept of negative feedback. As an example of negative feedback, consider what happens when you are hungry. Hopefully, you will be able to get something to eat. As you eat, you become less and less hungry until you eventually stop eating. This is the idea of negative feedback. Imagine if the opposite were true and eating made you hungrier. You would eat until you exploded - out of control! Please don't build a robot that acts like that.

Negative feedback in a servo control system proceeds in a similar fashion. There is a desired position for the motor and a feedback sensor that tells the motor it is not at the correct position. In effect, it is "hungry" to get to that position and begins turning towards it. As the motor gets closer to the desired position, the feedback device tells the motor that it is becoming "less hungry" and the motor responds by turning more slowly. In a perfect control system, the motor will get to exactly the right position and then will turn no more until it is commanded to a new desired position.
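The "hungriness" analogy maps directly onto proportional control: the commanded motion is proportional to the remaining position error, so the motor slows as it approaches the target. A minimal sketch (the gain and the number of steps are arbitrary illustrative choices, not values from any real servo):

```python
def servo_step(position, target, kp=0.5):
    """One update of a proportional servo controller."""
    error = target - position     # how "hungry" the motor still is
    return position + kp * error  # move toward the target, slower as error shrinks

# Command the motor from 0 to 180 degrees (half a revolution).
position = 0.0
for _ in range(30):
    position = servo_step(position, 180.0)
print(round(position, 3))  # prints 180.0
```

Because the correction shrinks with the error, the motor settles at the target instead of overshooting forever; flipping the sign of the gain would give the runaway positive-feedback behavior described above.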

Motion Control includes much more than servo control. Machine setup, for example, involves a very important set of motion control operations: setting feed rates, setting offsets, setting limits and writing files. The corresponding queries (querying for feed rates, offsets and limits, and reading files) are equally important motion control operations, as are sending motion start and stop commands and querying for machine state.
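The setup, query and command operations above can be collected behind one interface. The class below is a hypothetical sketch of such an interface; the method names are illustrative and do not come from any real controller's API:

```python
class MotionController:
    """Hypothetical motion-control interface: machine setup, queries,
    and start/stop commands, as described in the text."""

    def __init__(self):
        self.feed_rate = 0.0
        self.offsets = {}
        self.limits = {}
        self.moving = False

    # -- machine setup operations --
    def set_feed_rate(self, rate):
        self.feed_rate = rate

    def set_offset(self, axis, value):
        self.offsets[axis] = value

    def set_limit(self, axis, low, high):
        self.limits[axis] = (low, high)

    # -- query operations --
    def query_feed_rate(self):
        return self.feed_rate

    def query_state(self):
        return "moving" if self.moving else "idle"

    # -- motion commands --
    def start_motion(self):
        self.moving = True

    def stop_motion(self):
        self.moving = False
```

A typical session would set the feed rate and limits, start motion, and poll `query_state()` until the move completes.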

Robots In Space

Airborne Robots



The little device at the left is a mock-up of an ambitious project at UC Berkeley to develop an artificial fly. If you ask me, they don't have a chance of succeeding. The challenges are just too great. They need to get the tiny wings flapping at 150 times per second, there needs to be some means of keeping the system stable in the air and somehow it has to navigate. And all this on something the size of a dime. They have gotten one wing to flap fast enough that, if they mount it on a little wire boom, it will generate some thrust. In other words they are nowhere close after years of work. This may be the type of system that can only be developed via evolution.

Robots in the Military



Pretty much by definition, the military is a dangerous place for humans. This makes it a logical application for robotics, but I definitely have mixed feelings about that. I can live with robots assisting soldiers, but automated killing is taking it too far. At left we see the Smart Crane Ammunition Transfer System being developed by the Robotics Research Corporation. The goal is for one soldier to be able to unload the entire truck without ever leaving the cab. The system includes cameras, video screens, force sensors and special grippers.

Industrial Robots

Modern industrial robots are true marvels of engineering. A robot the size of a person can easily carry a load of over one hundred pounds and move it very quickly, with a repeatability of ±0.006 inches. Furthermore, these robots can do that 24 hours a day for years on end with no failures whatsoever. Though they are reprogrammable, in many applications (particularly those in the auto industry) they are programmed once and then repeat that exact same task for years.

A six-axis robot like the yellow one below costs about $60,000. What I find interesting is that deploying the robot costs another $200,000. Thus, the cost of the robot itself is just a fraction of the cost of the total system. The tools the robot uses combined with the cost of programming the robot form the major percentage of the cost. That's why robots in the auto industry are rarely reprogrammed. If they are going to go to the expense of deploying a robot for another task, then they may as well use a new robot.





Robot Forward Kinematics

The figure above is a schematic of a simple robot lying in the X-Y plane. The robot has three links, of lengths l1, l2 and l3, connected by three joints (the little circles) with joint angles θ1, θ2 and θ3. The forward kinematics problem is stated as follows: given the angles at each of the robot's joints, where is the robot's hand (Xhand, Yhand, θhand)?
For this simple planar robot, the solution to the forward kinematics problem is trivial:


Xhand = l1 cos(θ1) + l2 cos(θ1 + θ2) + l3 cos(θ1 + θ2 + θ3)

Yhand = l1 sin(θ1) + l2 sin(θ1 + θ2) + l3 sin(θ1 + θ2 + θ3)

θhand = θ1 + θ2 + θ3
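The planar equations above translate directly into code, since each link simply accumulates the joint angles before it. The link lengths and joint angles below are made-up example values:

```python
from math import cos, sin, radians

def forward_kinematics(lengths, angles_deg):
    """Planar forward kinematics: joint angles accumulate link by link."""
    x = y = 0.0
    total = 0.0  # running sum of joint angles, in radians
    for l, a in zip(lengths, angles_deg):
        total += radians(a)
        x += l * cos(total)
        y += l * sin(total)
    return x, y, total  # hand position and hand orientation (radians)

# Example: three unit-length links, each joint bent 30 degrees.
x, y, phi = forward_kinematics([1.0, 1.0, 1.0], [30.0, 30.0, 30.0])
```

With all three joints at 30 degrees the hand orientation is 90 degrees, matching the sum θ1 + θ2 + θ3 in the last equation.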

For the general spatial case, the solution is not so trivial. This is because the joint angles do not simply add as they do in the planar case.
Denavit and Hartenberg used screw theory in the 1950s to show that the most compact representation of a general transformation between two robot joints requires four parameters. These are now known as the Denavit-Hartenberg parameters (D-H parameters), and they are the de facto standard for describing a robot's geometry. Here is a description of the four D-H parameters:
a - the perpendicular distance between the two joint axes, measured along their mutual perpendicular. The mutual perpendicular is designated the x-axis.
α - the relative twist between the two joint axes, measured about the mutual perpendicular.
d - the distance between the two mutual perpendiculars, measured along the joint axis.
θ - the joint angle about the z-axis, measured between the two perpendiculars.
Learning the proper procedure for assigning the D-H parameters is a typical exercise in an upper-level undergraduate or first graduate course in robotics.
Once the parameters have been assigned we can solve the forward kinematics problem by moving from the base of the robot out to the hand using the following transformations at each joint:


Transformation = Screwx(a, α) Screwz(d, θ) = Transx(a) Rotx(α) Transz(d) Rotz(θ)

Though the D-H parameters are the most compact general representation of the robot's geometry, they are seldom the most computationally efficient. In practice more specialized and computationally efficient equations are developed for each particular robot.
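Under the convention stated above, each joint's transform can be composed from the four elementary translations and rotations. A pure-Python sketch (no particular robot's parameters are assumed; angles are in radians):

```python
from math import cos, sin

def matmul(A, B):
    """Multiply two 4x4 matrices represented as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def trans_x(a):
    return [[1, 0, 0, a], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def rot_x(alpha):
    c, s = cos(alpha), sin(alpha)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def trans_z(d):
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, d], [0, 0, 0, 1]]

def rot_z(theta):
    c, s = cos(theta), sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def dh_transform(a, alpha, d, theta):
    """Transx(a) * Rotx(alpha) * Transz(d) * Rotz(theta) for one D-H row."""
    T = trans_x(a)
    for M in (rot_x(alpha), trans_z(d), rot_z(theta)):
        T = matmul(T, M)
    return T
```

Chaining one `dh_transform` per joint, base to hand, yields the full forward kinematics; in practice the product is usually expanded symbolically and simplified for the specific robot, for exactly the efficiency reasons mentioned above.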

Robots in Radioactive Environments


The robot at right was developed for the decontamination and dismantlement of nuclear weapons facilities. It has two six-degree-of-freedom Schilling arms mounted on a five-degree-of-freedom base. As the facilities used to develop our country's nuclear weapons enter their 50th year and beyond, we now have to dismantle them and safely store the waste. The radioactive fields make this activity too hazardous for human workers, so the use of robotics makes sense. The idea is that the robot can hold a part in one hand and use a cutting tool with the other, basically stripping the reactor apart layer by layer (something like peeling an onion). As the robot works, it too will become contaminated and radioactive, and will ultimately need to be stored as radioactive waste.

Telerobotics



This is a 1970s-era manual controller developed for controlling robots operating in radioactive environments. The controller is roughly the size of a human arm. At the back of the controller you can see a number of black disks; these are electric motors that provide the energy to feed forces back to the operator. These forces are proportional to the currents in the robot's motors, which in turn are proportional to the forces being experienced by the robot. The placement of the motors at the back of the controller provides perfect counterbalancing.

The motors drive the robot joints with almost no friction via metal tape. Though it was developed 40 years ago without any computer control, I believe this controller works as well as or better than any human-arm-scale controller available today.
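The force-reflection chain described above is just two proportionalities composed: robot force to motor current, and motor current to operator force. A tiny sketch with made-up constants (the real controller implemented this relationship electrically, not in software):

```python
def reflected_force(robot_current_amps, torque_constant=0.1, feedback_gain=2.0):
    """Force fed back to the operator, proportional to the slave motor current.
    torque_constant and feedback_gain are illustrative values only."""
    robot_force = torque_constant * robot_current_amps  # current -> force at the robot
    return feedback_gain * robot_force                  # scaled force at the operator
```

The larger the load the robot pushes against, the higher its motor current, and so the harder the controller's motors push back on the operator's hand.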

Robot Vision



If you're in an industrial setting using machine vision, you will probably find an Adept robot at work. The company has spent many years interfacing its robots with vision-based tools to allow for the identification and assembly of parts. One of the most basic problems in industrial assembly is a process known as parts feeding. In this scenario, the objects/parts required for product assembly are contained in a large bin, and the assembly process requires a single part to be isolated from the bin. Adept has pioneered the use of vision in solving this part-picking problem.

One of the most fundamental tasks that vision is very useful for is the recognition of objects (be they machine parts, light bulbs, DVDs, or your next door neighbor!). Evolution Robotics achieved a significant milestone in the near-realtime recognition of objects based on SIFT points. The software identifies points in an image that look the same even if the object is moved, rotated or slightly scaled. Matching these points to previously seen image points allows the software to 'understand' what it is looking at, even if it does not see exactly the same image.
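The matching step commonly uses Lowe's ratio test: a feature point is matched only if its nearest stored descriptor is clearly closer than the second-nearest, which rejects ambiguous points. A sketch with toy two-dimensional descriptors (real SIFT descriptors are 128-dimensional):

```python
def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Lowe-style ratio test: keep a match only if the best candidate in
    desc_b is clearly closer than the runner-up."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

    matches = []
    for i, da in enumerate(desc_a):
        # Distances from this descriptor to every stored descriptor, sorted.
        dists = sorted((dist(da, db), j) for j, db in enumerate(desc_b))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))  # (index in a, index in b)
    return matches

# Toy data: the first descriptor has one clear match; the second is
# ambiguous (two stored descriptors are almost equally close) and is dropped.
a = [[1.0, 0.0], [5.0, 5.0]]
b = [[1.1, 0.0], [9.0, 9.0], [5.1, 5.0], [5.0, 5.1]]
print(match_descriptors(a, b))  # prints [(0, 0)]
```

Discarding ambiguous points is what lets the recognition remain reliable when the object is seen from a somewhat different view.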

Research Robots

The Robotics Research Corporation of America produced this robot in 1988 for NASA to study the possibility of using robots to perform maintenance on the International Space Station. Each of the robot's arms has 7 joints, and the arms are mounted on a torso with 3 joints, giving the robot a total of 17 degrees of freedom. The arms are called redundant arms because they have seven joints; the extra joints enable the robot to perform many tasks in an infinite number of different ways, just like human arms. For example, it can reach around obstacles. The single serial chain (the torso) branching into two separate chains (the arms) makes this robot an excellent test bed for the development of kinematic optimization algorithms.

Robotics Engineer



I'm always getting email asking how one goes about becoming a robotics engineer. Robotics may be the most inter-disciplinary of engineering endeavors. A mechanical engineer will design the robot's structure, its joint mechanisms, bearings, heat transfer characteristics, etc. Electrical engineers design the robot's control electronics, power amplifiers, signal conditioning, etc. Electro-mechanical engineers may work on the robot's sensors. Computer engineers will design the robot's computing hardware. Robot kinematics is a great example of mathematics applied to robotics engineering. An undergraduate college degree in any of these fields is an excellent way to get started as a robotics engineer.

So you want to be a robotics engineer? Software engineering is probably the Achilles heel of robotics. The mechanical, electrical and computer engineers have built awesome machines, but those machines are still extremely difficult to put into production because they are so difficult to teach: an expert technician has to program the robot's every motion down to the tiniest detail. In my opinion, the biggest contributions yet to be made in robotics will come from software engineers. Companies are hiring robotics engineers to develop everything from automated vacuum cleaners to robot dogs. On the industrial side, robot sales topped $1.6 billion last year, up 60 percent from 1998.

Here's how I became a robotics engineer. It started with trying to build a robotic hand as a teenager in my parents' garage, after I first learned how servo systems worked. I barely got the servo part working, but it was a start. Later I went to college for an undergraduate degree in electrical engineering, and then worked as an electrical engineer for three years. I designed several automatic control systems that were very interesting; one of them controlled a motor with an armature as big as a phone booth! Then I went back to school, this time as a mechanical engineer, and completed the undergraduate mechanical engineering curriculum. After that came a Master's degree in biomedical engineering and a PhD in which I focused on robotics.

Ancient Robots & Automation in Mesopotamia


There is no definitive way to conclude whether Al-Jazari actually invented all of the devices in his book, but the descriptive documentation of the automated devices used at that time is valuable, regardless of the source. Robots, or devices using automation to accomplish tasks, were the basis for a variety of mechanical wonders, including clocks, water fountains, musical devices and water-raising machines.


One of the most remarkable devices described, and possibly invented by Al-Jazari, was the first programmable human-like robotic device. This device consisted of a boat with four robotic musicians. The boat floated on a lake, to provide amusement for royal guests at parties.

Robotics Technology

A robotic tuna? Dr. Jamie Anderson at MIT designed a "vorticity control unmanned undersea vehicle," or VCUUV -- and it looks like a yellowfin tuna, right down to the yellow fin. It even moves its tail side to side as it swims. Why? Dr. Anderson and others discovered that this flexible movement enables the robot to make sharper, quicker turns than robots with rigid shapes.


Unlike a tuna, though, the VCUUV is stuffed with equipment, including a computer and sensing instruments. Its uses include undersea surveillance (spying) and search-and-rescue missions.