Robot Forward Kinematics

The figure above is a schematic of a simple robot lying in the X-Y plane. The robot has three links of lengths l1, l2, and l3, connected by three joints (the little circles), with joint angles θ1, θ2, and θ3. The forward kinematics problem is stated as follows: given the angles at each of the robot's joints, where is the robot's hand (Xhand, Yhand, θhand)? For this simple planar robot, the solution to the forward kinematics problem is trivial:

Xhand = l1·cos(θ1) + l2·cos(θ1 + θ2) + l3·cos(θ1 + θ2 + θ3)
Yhand = l1·sin(θ1) + l2·sin(θ1 + θ2) + l3·sin(θ1 + θ2 + θ3)
θhand = θ1 + θ2 + θ3
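A minimal sketch of this planar solution in Python (the link lengths and joint angles below are example values only, not taken from the figure):

```python
import math

def planar_fk(l1, l2, l3, t1, t2, t3):
    """Forward kinematics for a planar 3-link arm (angles in radians)."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2) + l3 * math.cos(t1 + t2 + t3)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2) + l3 * math.sin(t1 + t2 + t3)
    theta = t1 + t2 + t3  # hand orientation is just the sum of the joint angles
    return x, y, theta

# Example: all links 1 m long, each joint at 30 degrees
print(planar_fk(1.0, 1.0, 1.0, math.radians(30), math.radians(30), math.radians(30)))
```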
For the general spatial case, the solution is not so trivial, because the joint angles do not simply add as they do in the planar case. Denavit and Hartenberg used screw theory in the 1950s to show that the most compact representation of a general transformation between two robot joints requires four parameters. These are now known as the Denavit-Hartenberg parameters (D-H parameters), and they are the de facto standard for describing a robot's geometry. The four D-H parameters are:

a - the perpendicular distance between the two joint axes, measured along their mutual perpendicular (the mutual perpendicular is designated the x-axis)
α - the relative twist between the two joint axes, measured about the mutual perpendicular
d - the distance between the two mutual perpendiculars, measured along the joint axis
θ - the joint angle about the z-axis, measured between the two mutual perpendiculars

Learning the proper procedure for assigning the D-H parameters is a typical exercise in an upper-level undergraduate or first graduate course in robotics. Once the parameters have been assigned, we can solve the forward kinematics problem by moving from the base of the robot out to the hand, applying at each joint the transformation

T = Rot_z(θ) · Trans_z(d) · Trans_x(a) · Rot_x(α)

and multiplying the per-joint transformations together.
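As a rough sketch of how that chain of transformations can be evaluated in code (the three-row parameter table below is made up for illustration, not taken from any particular robot):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive joints, classic D-H convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Multiply the per-joint transforms from the base out to the hand."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # base-to-hand pose: rotation in T[:3, :3], position in T[:3, 3]

# Hypothetical 3-joint parameter table: (theta, d, a, alpha) for each joint
params = [(np.radians(30), 0.0, 0.5, 0.0),
          (np.radians(45), 0.0, 0.4, np.radians(90)),
          (np.radians(-20), 0.1, 0.3, 0.0)]
print(forward_kinematics(params))
```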
Though the D-H parameters are the most compact general representation of the robot's geometry, they are seldom the most computationally efficient. In practice, more specialized and computationally efficient equations are developed for each particular robot.
Robots in Radioactive Environments
The robot at right was developed for the decontamination and dismantlement of nuclear weapons facilities. It has two six-degree-of-freedom Schilling arms mounted on a five-degree-of-freedom base. As the facilities used to develop our country's nuclear weapons enter their 50th year and beyond, we now have to dismantle them and safely store the waste. The radioactive fields make this activity too hazardous for human workers, so the use of robotics makes sense. The idea is that the robot can hold a part in one hand and use a cutting tool with the other, stripping apart the reactor layer by layer (something like peeling an onion). As the robot works, it too will become contaminated and radioactive, and it will ultimately need to be stored as radioactive waste.
Telerobotics

This is a 1970s-era manual controller developed for controlling robots operating in radioactive environments. The controller is roughly the size of a human arm. At the back of the controller you can see a number of black disks; these are electric motors, and they provide the energy to feed forces back to the operator. The reflected forces are proportional to the currents in the robot's motors, which in turn are proportional to the forces being experienced by the robot. Placing the motors at the back of the controller provides perfect counterbalancing.
The motors drive the robot's joints via metal tapes with almost no friction. Developed 40 years ago, without any computer control, I believe this controller works as well as or better than any human-arm-scale controller available today.
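As a toy illustration of that current-based force reflection (the torque constant and reflection gain below are invented placeholders, not values from this controller):

```python
def reflected_torque(slave_motor_current, torque_constant=0.1, reflection_gain=1.0):
    """Estimate the torque at the remote robot's joint from its motor current
    (torque ~ Kt * i) and scale it for the operator's hand controller."""
    slave_torque = torque_constant * slave_motor_current
    return reflection_gain * slave_torque

# Example: 2 A of motor current at the robot reflected back to the controller
print(reflected_torque(2.0))  # -> 0.2 N·m commanded at the controller joint
```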
Robot Vision
If you're in an industrial setting using machine vision, you will probably find an Adept robot at work. The company has spent many years interfacing its robots with vision-based tools that allow parts to be identified and assembled. One of the most basic problems in industrial assembly is a process known as parts feeding: the objects or parts required for product assembly arrive in a large bin, and a single part must be isolated from the bin before it can be assembled. Adept has pioneered the use of vision in solving this part-picking problem.
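Adept's vision software is proprietary, but the basic "find one part to pick" step can be sketched with OpenCV; the file name, thresholding choice, and area cutoff below are placeholders:

```python
import cv2

def locate_part(image_path, min_area=500):
    """Rough sketch: threshold an overhead image of the parts bin and return
    the centroid and orientation of the largest blob as a pick candidate."""
    image = cv2.imread(image_path)
    if image is None:
        return None
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > min_area]
    if not blobs:
        return None
    (cx, cy), _, angle = cv2.minAreaRect(max(blobs, key=cv2.contourArea))
    return cx, cy, angle  # pixel coordinates and grasp angle in degrees

print(locate_part("bin_image.png"))  # "bin_image.png" is a placeholder file name
```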

Research Robots
The Robotics Research Corporation of America produced this robot in 1988 for NASA to study the possibility of using robots to perform maintenance on the International Space Station. Each of the robot's arms has seven joints, and the arms are mounted on a torso with three joints, giving the robot a total of 17 degrees of freedom. The arms are called redundant because they have seven joints; the extra joint lets the robot perform many tasks in an infinite number of different ways, just like a human arm. For example, it can reach around obstacles. A single serial chain (the torso) branching into two separate chains (the arms) makes this robot an excellent test bed for developing kinematic optimization algorithms.
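To make the "infinite number of ways" concrete, here is a toy planar analogue rather than this robot's actual 17-degree-of-freedom kinematics: a three-link planar arm asked only to place its hand at a point has one spare degree of freedom, so the hand orientation can be chosen freely and every choice yields a different valid posture. The link lengths and target below are arbitrary examples:

```python
import math

def ik_family(x, y, phi, l1=1.0, l2=1.0, l3=0.5):
    """One member of the infinite solution family for a planar 3-link arm:
    pick the hand orientation phi freely, then solve the remaining 2-link IK."""
    wx, wy = x - l3 * math.cos(phi), y - l3 * math.sin(phi)   # wrist point
    c2 = (wx**2 + wy**2 - l1**2 - l2**2) / (2 * l1 * l2)
    t2 = math.acos(max(-1.0, min(1.0, c2)))                   # elbow-down branch
    t1 = math.atan2(wy, wx) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2, phi - t1 - t2

# The same hand position (1.2, 0.8) reached with three different postures
for phi in (0.0, 0.5, 1.0):
    print(ik_family(1.2, 0.8, phi))
```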
Robotics Engineer
I'm always getting email asking how one goes about becoming a robotics engineer. Robotics may be the most interdisciplinary of engineering endeavors. A mechanical engineer will design the robot's structure, its joint mechanisms, bearings, heat-transfer characteristics, and so on. Electrical engineers design the robot's control electronics, power amplifiers, and signal conditioning. Electro-mechanical engineers may work on the robot's sensors. Computer engineers design the robot's computing hardware. Robot kinematics is a great application of mathematics to robotics engineering. An undergraduate degree in any of these fields is an excellent way to get started as a robotics engineer.
So you want to be a robotics engineer? Software engineering is probably the Achilles' heel of robotics. The mechanical, electrical, and computer engineers have built awesome machines, but those machines are still extremely difficult to put into production because they are so difficult to teach: an expert technician has to program the robot's every motion down to the tiniest detail. In my opinion, the biggest contributions yet to be made in robotics will come from software engineers. Companies are hiring robotics engineers to develop everything from automated vacuum cleaners to robot dogs. On the industrial side, robot sales topped $1.6 billion last year, up 60 percent from 1998.
Here's how I became a robotics engineer. It started with trying to build a robotic hand as a teenager in my parents' garage, after I first learned how servo systems worked. I barely got the servo part working, but it was a start. Later I went to college for an undergraduate degree in electrical engineering, then worked as an electrical engineer for three years designing several very interesting automatic control systems; one of them controlled a motor with an armature as big as a phone booth! Then I went back to school, this time in mechanical engineering, and completed the undergraduate mechanical engineering curriculum. After that came a Master's degree in biomedical engineering and a PhD focused on robotics.
Ancient Robots & Automation in Mesopotamia
There is no definitive way to conclude whether Al-Jazari actually invented all of the devices in his book, but the descriptive documentation of the automated devices used at that time is valuable, regardless of the source. Robots, or devices using automation to accomplish tasks, were the basis for a variety of mechanical wonders, including clocks, water fountains, musical devices and water-raising machines.
One of the most remarkable devices described, and possibly invented, by Al-Jazari was the first programmable human-like robotic device: a boat carrying four robotic musicians, which floated on a lake to provide amusement for royal guests at parties.