Robotics

ROBOTICS: Control, Sensing, Vision, and Intelligence

McGraw-Hill Series in CAD/CAM, Robotics, and Computer Vision
Consulting Editor: Herbert Freeman, Rutgers University

Fu, Gonzalez, and Lee: Robotics: Control, Sensing, Vision, and Intelligence
Groover, Weiss, Nagel, and Odrey: Industrial Robotics: Technology, Programming, and Applications
Levine: Vision in Man and Machine
Parsons: Voice and Speech Processing

ROBOTICS: Control, Sensing, Vision, and Intelligence

K. S. Fu, School of Electrical Engineering, Purdue University
R. C. Gonzalez, Department of Electrical Engineering, University of Tennessee, and Perceptics Corporation, Knoxville, Tennessee
C. S. G. Lee, School of Electrical Engineering, Purdue University

McGraw-Hill Book Company
New York St. Louis San Francisco Auckland Bogota Hamburg London Madrid Mexico Milan Montreal New Delhi Panama Paris Sao Paulo Singapore Sydney Tokyo Toronto

To Viola, Connie, and Pei-Ling

Library of Congress Cataloging-in-Publication Data
Fu, K. S. (King Sun)
Robotics: control, sensing, vision, and intelligence.
(McGraw-Hill series in CAD/CAM robotics and computer vision)
Bibliography: p.
Includes index.
1. Robotics. I. Gonzalez, Rafael C. II. Lee, C. S. G. (C. S. George). III. Title.
TJ211.F82 1987 629.8'92 86-7156
ISBN 0-07-022625-3
ISBN 0-07-022626-1 (solutions manual)

This book was set in Times Roman by House of Equations Inc. The editor was Sanjeev Rao; the production supervisor was Diane Renda; the cover was designed by Laura Stover. Project supervision was done by Lynn Contrucci. R. R. Donnelley & Sons Company was printer and binder.

ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE
Copyright © 1987 by McGraw-Hill, Inc. All rights reserved. Printed in the United States of America. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a data base or retrieval system, without the prior written permission of the publisher.
ISBN 0-07-022625-3

ABOUT THE AUTHORS

K. S. Fu was the W. M. Goss Distinguished Professor of Electrical Engineering at Purdue University. He received his bachelor, master, and Ph.D. degrees from the National Taiwan University, the University of Toronto, and the University of Illinois, respectively. Professor Fu was internationally recognized in the engineering disciplines of pattern recognition, image processing, and artificial intelligence. He made milestone contributions in both basic and applied research. Often termed the "father of automatic pattern recognition," Dr. Fu authored four books and more than 400 scholarly papers. He taught and inspired 75 Ph.D.s. Among his many honors, he was elected a member of the National Academy of Engineering in 1976, received the Senior Research Award of the American Society for Engineering Education in 1981, and was awarded the IEEE's Education Medal in 1982. He was a Fellow of the IEEE, a 1971 Guggenheim Fellow, and a member of the Sigma Xi, Eta Kappa Nu, and Tau Beta Pi honorary societies. He was the founding president of the International Association for Pattern Recognition, the founding editor in chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence, and the editor in chief or editor for seven leading scholarly journals. Professor Fu died of a heart attack on April 29, 1985, in Washington, D.C.

R. C.
Gonzalez is IBM Professor of Electrical Engineering at the University of Tennessee, Knoxville, and founder and president of Perceptics Corporation, a hightechnology firm that specializes in image processing, pattern recognition, computer vision, and machine intelligence. He received his B.S. degree from the University of Miami, and his M.E. and Ph.D. degrees from the University of Florida, Gaines- ville, all in electrical engineering. Dr. Gonzalez is internationally known in his field, having authored or coauthored over 100 articles and 4 books dealing with image processing, pattern recognition, and computer vision. He received the 1978 UTK Chancellor's Research Scholar Award, the 1980 Magnavox Engineering Professor Award, and the 1980 M.E. Brooks Distinguished Professor Award for his work in these fields. In 1984 he was named Alumni Distinguished Service Professor v .°, 0j) 1j? one .., `'" VI ABOUT THE AUTHORS at the University of Tennessee. In 1985 he was named a distinguished alumnus by the University of Miami. Dr. Gonzalez is a frequent consultant to industry and government and is a member of numerous engineering professional and honorary societies, including Tau Beta Pi, Phi Kappa Phi, Eta Kappa Nu, and Sigma Xi. He is a Fellow of the IEEE. gin' C. S. G. Lee is an associate professor of Electrical Engineering at Purdue Univer- C17 sity. He received his B.S.E.E. and M.S.E.E. degrees from Washington State University, and a Ph.D. degree from Purdue in 1978. From 1978 to 1985, he was a faculty member at Purdue and the University of Michigan, Ann Arbor. Dr. Lee has authored or coauthored more than 40 technical papers and taught robotics short courses at various conferences. His current interests include robotics and automation, and computer-integrated manufacturing systems. Dr. Lee has been doing extensive consulting work for automotive and aerospace industries in robotics. He is a Distinguished Visitor of the IEEE Computer Society's Distinguished Visitor Program since 1983, a technical area editor of the IEEE Journal of Robotics and Automation, and a member of technical committees for various robotics conferences. He is a coeditor of Tutorial on Robotics, 2nd edition, published by the IEEE Computer Society Press and a member of Sigma Xi, Tau Beta Pi, the IEEE, and the SME/RI. p., C]. ._D .°A cad i.. `-- s"' CONTENTS Preface xi 1. - Introduction 1.1. Background 1.2. Historical Development 1.3. Robot Arm Kinematics and Dynamics 1.4. Manipulator Trajectory Planning 1.8. References 2. Robot Arm Kinematics 2.1. Introduction 2.2. The Direct Kinematics Problem 2.3. The Inverse Kinematics Solution 2.4. Concluding Remarks References Problems 3. Robot Arm Dynamics 3.1. Introduction 3.2. Lagrange-Euler Formulation 3.3. Newton-Euler Formation 3.4. Generalized D'Alembert Equations of Motion 3.5. Concluding Remarks References Problems 103 124 142 142 144 .-000 88,0 9 10 and Motion Control 1.5. Robot Sensing 1.6. Robot Programming Languages 1.7. Machine Intelligence 7 8 10 12 12 13 52 75 76 76 82 82 84 Vii Viii CONTENTS 4. Planning of Manipulator Trajectories 4.1. Introduction 4.2. General Considerations on Trajectory Planning 4.3. Joint-interpolated Trajectories 4.4. Planning of Manipulator Cartesian Path Trajectories 4.5. Concluding Remarks References Problems 5. Control of Robot Manipulators 5.1. Introduction 5.2. Control of the Puma Robot Arm 5.3. Computed Torque Technique 5.4. Near-Minimum-Time Control 5.5. Variable Structure Control 5.6. Nonlinear Decoupled Feedback Control 5.7. 
Resolved Motion Control 5.8. Adaptive Control 5.9. Concluding Remarks References Problems 6. Sensing 6.1. Introduction 6.2. Range Sensing 6.3. Proximity Sensing 6.4. Touch Sensors 6.5. Force and Torque Sensing 6.6. Concluding Remarks References Problems 7. Low-Level Vision 7.1. Introduction 7.2. Image Acquisition 7.3. Illumination Techniques 7.4. Imaging Geometry 7.5. Some Basic Relationships Between Pixels 7.6. Preprocessing 7.7. Concluding Remarks References Problems .AA CONTENTS iX 8. Higher-Level Vision 8.1. Introduction 8.2. Segmentation 8.3. Description 8.4. Segmentation and Description of Three-Dimensional Structures 8.5. Recognition 8.6. Interpretation 362 362 363 395 416 424 439 445 C1' 8.7. Concluding Remarks References 9. Robot Programming Languages 9.1. Introduction 9.2. Characteristics of RobotLevel Languages 9.3. Characteristics of TaskLevel Languages 9.4. Concluding Remarks References !-' 10. Robot Intelligence and Task Planning 0000000000 10.1. C/1 10.2. 10.3. 10.5. 10.6. 10.4. 'r1 10.7. 10.8. Robot Task Planning (IQ °a' Basic Problems in Task Planning 10.10. Expert Systems and Knowledge Engineering 10.11. Concluding Remarks 10.9. boo Appendix A Vectors and Matrices B Manipulator Jacobian Bibliography Index ricer 445 Problems 447 450 450 451 462 470 472 473 Problems .-a 474 474 474 484 489 493 0000000000 Introduction rig State Space Search Problem Reduction Use of Predicate Logic' Means-Ends Analysis Problem-Solving Robot Learning .L' 497 504 506 509 516 519 (N] References 520 522 522 544 '.D 556 571 PREFACE This textbook was written to provide engineers, scientists, and students involved in robotics and automation with a comprehensive, well-organized, and up-to- date account of the basic principles underlying the design, analysis, and synthesis of robotic systems. The study and development of robot mechanisms can be traced to the mid-1940s when master-slave manipulators were designed and fabricated at the Oak Ridge and Argonne National Laboratories for handling radioactive materials. The first commercial computer-controlled robot was introduced in the late 1950s by Unimation, Inc., and a number of industrial and experimental devices followed suit during the next 15 years. In spite of the availability of this technology, however, widespread interest in robotics as a formal discipline of study and research is rather recent, being motivated by a significant lag in productivity in most nations of the industrial world. Robotics is an interdisciplinary field that ranges in scope from the design of mechanical and electrical components to sensor technology, computer systems, and artificial intelligence. The bulk of material dealing with robot theory, design, and applications has been widely scattered in numerous technical journals, conference proceedings, research monographs, and some textbooks that either focus attention on some specialized area of robotics or give a "broadbrush" look of this field. Consequently, it is a rather difficult task, particularly for a newcomer, to learn the range of principles underlying this subject matter. This text attempts to put between the covers of one book the basic analytical techniques and fundamental principles of robotics, and to organize them in a unified and coherent manner. Thus, the present volume is intended to be of use both as a textbook and as a reference work. To the student, it presents in a logical manner a discussion of basic theoretical concepts and important techniques. 
For the practicing engineer or scientist, it provides a ready source of reference in systematic form. xi c+, Alvertos. We are indebted to a number of individuals who.. H. Abidi. R. Ms. A. G. W. For the instructor. H. emphasis is placed on the development of fundamental results from basic concepts. G. E. Others serve as supplements and extensions of the material in the book. The following individuals have worked with us in the course of their advanced undergraduate or graduate programs: J. and the University of Michigan. M. Jungclas. Bejczy. Day. Ms. Spivey Douglass. J. and mathematical analysis. Martin Marietta Energy Systems.T. computer programming. P. The Oak Ridge National Labora- may" tory. we wish to extend our appreciation to Professors W. Herrera. but also the topics covered in this book. C. Eason. Kelley. Dr. E. CAD The material has been tested extensively in the classroom as well as through numerous short courses presented by all three authors over a 5-year period. a complete solutions manual is available from the publisher. the Army Research Office. Safabakhsh. Fu R. In addition. Snyder. L. our students over the past few years have influenced not only our thinking. A. Chen. R. J. Tsai. Westinghouse. C. assisted in the preparation of the text. N. Burdette. H. machine intelligence. C. Lee. Ms. D. Luh. Hou. C. K. R. B. and Dr. and to Dr. Loh. Dr. H. As is true with most projects carried out in a university environment. and exercises of various types and complexity are included at the end of each chapter. K. Perez. Y. probability. A.-r "i7 . computer vision. Cate. Mary Ann Pruder for typing numerous versions of the manuscript. F. R. Meyer. R. Rinehart. P. Huang. E. crop C. Union Carbide. R. R. N. A. directly or indirectly. D. and the University of Tennessee Measurement and Control Center for their sponsorship of our research activities in robotics. A. K. G. King. Denise Smiddy. S. Brzakovic. K. Chang. Huarg. S. Woods. In presenting the material. the Air Force Office of Scientific Research. Mary Bearden. Lee gas O. Lockheed Missiles and Space Co. the Office of Naval Research. Martin Marietta Aerospace.. and D. Numerous examples are worked out in the text to illustrate the discussion. S. we express our appreciation to the National Science Foundation. Gonzalez CD' . Susan Merrell. R. and related areas. Saridis.Xii PREFACE The mathematical level in all chapters is well within the grasp of seniors and first-year graduate students in a technical discipline such as engineering and computer science. G. Dr. which require introductory preparation in matrix theory. T. . Thanks are also due to Ms. In particular. Dr. N. the University of Tennessee. Hayden. S. Lee. and Ms. Some of these problems allow the reader to gain further insight into the points discussed in the text through practice in problem solution. O. B. Green. L-W. Chung. The suggestions and criticisms of students in these courses had a significant influence in the way the material is presented in this book."3 ac' -on M. Frances Bourdas. L. This book is the outgrowth of lecture notes for courses taught by the authors at Purdue University. . D.PREFACE Xiii Professor King-Sun Fu died of a heart attack on April 29. S. C.C. G.. 1985. . L. in Washington. R. G. shortly after completing his contributions to this book. C. He will be missed by all those who were fortunate to know him and to work with him during a productive and distinguished career. . parts. 
while the other end is free and equipped with a tool to manipulate objects or perform assembly tasks.. Elbert Hubbard 1. [n' CIO . computer-controlled manipulator con- sisting of several rigid links connected in series by revolute or prismatic joints. At the present time. but no machine can do the work of one extraordinary man. Webster's dictionary defines robot as "an automatic device that performs functions ordinarily ascribed to human beings. A definition used by the Robot Institute of America gives a more precise description of industrial robots: "A robot is a reprogrammable multifunctional manipulator designed to move materials.CHAPTER ONE INTRODUCTION One machine can do the work of a hundred ordinary men.1 BACKGROUND With a pressing need for increased productivity and the delivery of end products of uniform quality. which is normally due to computer UQ" °'° algorithms associated with its control and sensing systems. a robot must possess intelligence. or specialOs. often called hard automation systems.t Z". The combination of the movements positions the `". a robot is a reprogrammable general-purpose manipulator with external sensors that can perform various assembly tasks. a robot is composed of an arm (or mainframe) and a wrist subassembly plus a tool. 0C7 The word robot originated from the Czech word robota. An industrial robot is a general-purpose. The arm subassembly generally can move with three degrees of freedom. industry is turning more and more toward computer-based automation. It is designed to reach a workpiece located within its work volume.U+ 1 . through variable programmed motions for the performance of a variety of tasks. ized devices. meaning work. With this definition. have led to a broad-based interest in the use of robots capable of performing a variety of manufacturing functions in a more flexible working environment and at lower production costs. One end of the chain is attached to a supporting base." With this definition. The inflexibility and generally high cost of these machines. washing machines may be considered robots. The motion of the joints results in relative motion of the links. tools." In short. most automated manufacturing tasks are carried out by special-purpose machines designed to perform predetermined functions in a manufacturing process. The work volume is the sphere of influence of a robot whose arm can deliver the wrist subassembly unit to any point within the sphere. Mechanically. `i7 . These concepts are illustrated by the Cincinnati Milacron T3 robot and the Unimation PUMA robot arm shown in Fig.2 ROBOTICS: CONTROL. ''d 'CS Waist rotation 320° Figure 1. AND INTELLIGENCE wrist unit at the workpiece. These last three motions are often called pitch. VISION. The combination of these motions orients the tool according to the configuration of the object for ease in pickup. and roll.1. (b) PUMA 560 series robot arm.1 (a) Cincinnati Milacron T3 robot arm. The wrist subassembly unit usually consists of three rotary motions. yaw. 0_o . SENSING. while the wrist subassembly is the orientation mechanism. the arm subassembly is the positioning mechanism. Hence. for a six jointed robot. 1. ° Figure 1.g. i. are'basically simple positional machines. such as material handling.. loading and unloading numerically controlled machines.) Revolute or articulated coordinates (three rotary axes) (e.g.g.2): Cartesian coordinates (three linear axes) (e.. though controlled by mini. paint spraying. They execute a given task by v. 
IBM's RS-1 robot and the Sigma robot from Olivetti) Cylindrical coordinates (two linear and one rotary axes) (e.and microcomputers.2 Various robot arm categories. and in handling hazardous materials.. 1.g. C1. Cartesian or xyz Cylindrical Spherical Revolute . T3 from Cincinnati Milacron and PUMA from Unimation Inc.. spot/arc welding. These robots fall into one of the four basic motion-defining categories (Fig. Unimate 2000B from Unimation Inc. parts assembly.INTRODUCTION 3 Many commercially available industrial robots are widely used in manufacturing and assembly tasks. prosthetic arm research.. space and undersea exploration. Versatran 600 robot from Prab) Spherical coordinates (one linear and two rotary axes) (e.) CD' Most of today's industrial robots. This image was reinforced by the 1926 German robot film Metropolis.4 ROBOTICS: CONTROL. the robots turned against their creators. Capek's play is largely responsible for some of the views popularly held about robots to this day. The key to this device was the use of a computer in conjunction with a manipula. During the late 1940s research programs were started at the Oak Ridge and Argonne National Laboratories to develop remotely controlled mechanical manipulators for handling radioactive materials. Later. More research effort is being directed toward improving the overall performance of the manipulator systems. Devol developed a device he called a "programmed articulated transfer device.R. Engelberger led to the first industrial robot. repetitive tasks. In the mid-1950s the mechanical coupling was replaced by electric and hydraulic power in manipulators such as General Electric's Handyman and the Minotaur I built by General Mills. displayed in 1939 at the New York World's Fair. In the mid-1950s George C. designed to reproduce faithfully hand and arm motions made by a human operator. by the walking robot Electro and his dog Sparko. and one way is through the study of the various important areas covered in this book. (Rossum's Universal Robots). but work tirelessly. Initially. SENSING. "-h 7. the robots were manufactured for profit to replace human workers but. in 1959. annihilating the entire human race. while the slave manipulator (3.U. Crop Early work leading to today's industrial robots can be traced to the period immediately following World War II. and more recently by the robot C3PO featured in the 1977 film Star Wars. toward the end. As a result. including the perception of robots as humanlike machines endowed with intelligence and individual personalities. AND INTELLIGENCE playing back prerecorded or preprogrammed sequences of motions that have been previously guided or taught by a user with a hand-held control-teach box. force feedback was added by mechanically coupling the motion of the master and slave units so that the operator could feel the forces as they developed between the slave manipulator and its environment. The work on master-slave manipulators was quickly followed by more sophisticated systems capable of autonomous." 0 duplicated the master unit as closely as possible.2 HISTORICAL DEVELOPMENT The word robot was introduced into the English language in 1921 by the playwright Karel Capek in his satirical drama. 1. These systems were of the "master-slave" type. introduced by Unimation Inc. repetitive operations. In this work.11 C1. "CD "y. Modern industrial robots certainly appear primitive when compared with the expectations created by the communications media during the past six decades. VISION." 
a manipulator whose operation could be programmed (and thus changed) and which could follow a sequence of motion steps determined by the instructions 'T1 "C7 H°+ in the program. robots are used mainly in relatively simple. The master manipulator was guided by the user through a sequence of motions. Further development of this concept by Devol and Joseph F. robots are machines that resemble people. these robots are equipped with little or no external sensors for obtaining the information vital to its working environment. CD" . Moreover. R. -. when an experimental walking truck was developed by the General Electric Company for the U." or 73. and manipulated them in accordance with instructions. manipulators. . Tomovic and Boni [1962] developed a prototype hand equipped with a pressure sensor which sensed the object and supplied an input feedback signal to a motor to initiate one of two grasp patterns. In 1974. While programmed robots offered a novel and powerful manufacturing tool. One of the more unusual developments in robots occurred in 1969. One experiment with the Stanford arm consisted of automatically stacking blocks according to various strategies. Called "The Tomorrow Tool. the Boston arm was developed. the Japanese company Kawasaki Heavy Industries negotiated a license with Unimation for its robots. information proportional to object size and weight was sent to a computer by these pressuresensitive elements. As early as 1968.-Y 0'0 o-2 'Ti 5-n o-2 -°. it could lift over 100 lb as well as track moving objects on an assembly line. McCarthy [1968] and his colleagues at the Stanford Artificial Intelligence Laboratory reported development of a computer with hands.c? . The manipulative system consisted of an ANL Model-8 manipulator with 6 degrees of freedom controlled by a `"T s. During this period. and ears (i.. it r-. '"a) . A.C enhanced significantly by the use of sensory feedback. and a television camera was added to the manipulator to begin machine perception research. various arm designs for manipulators were developed. 0Q4 . Unlike hard automation machines.0^>U ors: 0 vii C. could "feel" blocks and use this information to control the hand so that it stacked the blocks without operator assistance. This device. Meanwhile. In 1963. "saw" blocks scattered on a table. eyes. In the same year. Pieper [1968] studied the kinematic problem of a computer-controlled manipulator while Kahn and Roth [1971] analyzed the dynamics and control of a restricted arm using bang-bang (near minimum time) control.I' ''. and in the following year the Stanford arm was developed.INTRODUCTION 5 for to produce a machine that could be "taught" to carry out a variety of tasks automatically. Ernst [1962] reported the development of a computer-controlled mechanical hand with tactile sensors.. This was very sophisticated work for an automated robot at that time. such as the Roehampton arm and the Edinburgh arm. which was equipped with a camera and computer controller.. TV cameras. called the MH-1...3 In the late 1960s. H.e. This work is one of the first examples of a robot capable of adaptive behavior in a reasonably unstructured environment. "J' 5<. '_' 2V" . these robots could be reprogrammed and retooled at relative low cost to perform other jobs as manufacturing requirements changed.S. "vim . other countries (Japan in particular) began to see the potential of industrial robots. . They demon- strated a system that recognized spoken messages. 
the American Machine and Foundry Company (AMF) introduced the VERSATRAN commercial robot. Once the hand was in contact with the object. During the same period. Some of the most serious work in robotics began as these arms were used as robot manipulators. Army.. Early in that decade. Starting in this same year. This research program later evolved as part of project MAC. Cincinnati Milacron introduced its first computer-controlled industrial robot. and microphones). --t became evident in the 1960s that the flexibility of these machines could be TX-O computer through an interfacing device. Since the independent variables in a robot arm are the joint variables. In general.0 .6 ROBOTICS: CONTROL. implemented a computer-based torque-control technique on his extended Stanford arm for space exploration projects. SENSING. Since then. . various control methods have been . (74 Today. [1974] investigated sensing techniques based on compliance. sensing. introduced briefly in the following sections.. dynamics. Will and Grossman [1975] at IBM developed a computercontrolled manipulator with touch and force sensors to perform mechanical assem- bly of a 20-part typewriter.a! Cep 'C3 landfall navigation search technique was used to perform initial positioning in a precise assembly task. A . demonstrated a computer-controlled Stanford arm connected to a PDP-10 computer for assembling automotive water pumps.. At Stanford. s. and a task is usually stated in terms of the reference coordinate frame. 1. This work developed into the instrumentation of a passive compliance device called remote center compliance (RCC) which was attached to the mounting plate of the last joint of the manipulator for close parts-mating assembly.. At about the same time. Inoue [1974] at the MIT Artificial Intelligence Laboratory worked on the artificial intelligence aspects of force feedback. the .. At the Draper Laboratory Nevins et al. AND INTELLIGENCE During the 1970s a great deal of research work focused on the use of external sensors to facilitate manipulative operations.3 ROBOT ARM KINEMATICS AND DYNAMICS Robot arm kinematics deals with the analytical study of the geometry of motion of a robot arm with respect to a fixed reference coordinate system without regard to the forces/moments that cause the motion. constitute the core of the material in this book. control. programming languages. while the second problem is the inverse kinematics (or arm solution) problem. twig proposed for servoing mechanical manipulators. the inverse kinematics problem is used more frequently. using both visual and force feedback. These homogeneous transformation matrices are also useful in deriving the dynamic equations of motion of a robot arm. VISION. in particular the relations between the joint-variable space and the position and orientation of the end-effector of a robot arm. Thus. planning systems. kinematics deals with the analytical description of the spatial displacement of the robot as a function of time. Denavit and Hartenberg [1955] proposed a systematic and generalized approach of utilizing matrix algebra to describe and represent the ::r CIO CAD spatial geometry of the links of a robot arm with respect to a fixed reference frame. The first problem is usually referred to as the direct (or forward) kinematics problem.y -ti . 0. including kinematics. at the Jet Propulsion Laboratory. There are two fundamental problems in robot arm kinematics.-. 
dealing with research and development in a number of interdisciplinary areas.L3 direct kinematics problem to finding an equivalent 4 x 4 homogeneous transformation matrix that relates the spatial displacement of the hand coordinate frame to the reference coordinate frame. Bolles and Paul [1973].t CAD 'it tea) `"' t3. Bejczy [1974]. and machine intelligence. This method uses a 4 x 4 homogeneous transformation matrix to describe the spatial relationship between two adjacent rigid mechanical links and reduces the °. we view robotics as a much broader field of work than we did just a few years ago. These topics. 5 concentrates on the second part of the control problem. the motion control problem consists of (1) obtaining dynamic models of the manipulator. the movement of a robot arm is usually performed in two distinct control CAD . 3. The most commonly used methods are the matrix algebraic. on the other hand. -°. Detailed treatments of direct kinematics and inverse kinematics problems are given in Chap. 3. This leads to the development of dynamic equations of motion for the various articulated joints of the manipulator in terms of specified geometric and inertial parameters of the links. as well as the formalism for describing desired manipulator motion in terms of sequences of points in space through which the manipulator must pass and the space curve that it traverses.INTRODUCTION 7 inverse kinematics problem can be solved by several techniques. deals with the mathematical formulations of the equations of robot arm motion. +-' CJ. The actual dynamic model of an arm can be obtained from known physical laws such as the laws of newtonian and lagrangian mechanics. The control problem of a manipulator can be conveniently divided into two coherent subproblems-the motion (or trajectory) planning subproblem and the motion control subproblem. Detailed discussions of robot arm dynamics are presented in Chap. The space curve that the manipulator hand moves along from an initial location (position and orientation) to the final location is called the path. it is of interest to know whether there are any obstacles present in the path that the robot arm has to traverse (obstacle constraint) and whether the manipulator hand needs to traverse along a specified path (path constraint). 2. or geometric approach. In general. 1. Robot arm dynamics. Since the first part of the control problem is discussed extensively in Chap. The dynamic equations of motion of a manipulator are a set of mathematical equations describing the dynamic behavior of the manipulator. Such equations of motion are useful for computer simulation of the robot arm motion. Chapter 4 discusses the various trajectory planning schemes for obstacle-free motion.4 MANIPULATOR TRAJECTORY PLANNING AND MOTION CONTROL With the knowledge of kinematics and dynamics of a serial link manipulator. The trajectory planning (or trajectory planner) interpolates and/or approximates the desired path by a class of polynomial functions and generates a sequence of time-based "control set points" for the control of the manipulator from the initial location to the destination location. one would like to servo the manipulator's joint actuators to accomplish a desired task by controlling the manipulator to fol v a desired path. the design of suitable control equations for a robot arm. iterative. dOC a_. From the control analysis point of view. and the evaluation of the kinematic design and structure of a robot arm. 
Conventional approaches like the Lagrange-Euler and the Newton-Euler formulations can then be applied systematically to develop the actual robot arm motion equations. Before moving the robot arm. and (2) using these models to determine control laws or strategies to achieve the desired system response and performance. Chap. which are used for robot control. the use of sensing technology to endow machines with a greater degree of intelligence in dealing with their environment is indeed an active topic of research and development in the robotics field. sophisticated control approaches. and force-torque sensing. also commonly referred to as machine or computer vision. (4) description. touch. Vision sensors and techniques are discussed in detail in Chaps. This process.8 ROBOTICS: CONTROL. characterizing. . The second is the fine notion control in which the end-effector of the arm dynamically interacts with the object using sensory feedback information from the sensors to complete the task.O 0)" i. vision is recognized as the most powerful of robot sensory capabilities. This is in contrast to preprogrammed operations in which a robot is "taught" to perform repetitive tasks via a set of programmed functions.'L a+. proximity. and (6) interpretation. The servomechanism approach models the varying dynamics of a manipulator inadequately because it neglects the motion and configuration of the whole arm mechanism. touch. Although proximity.. (3) segmentation. 6 is on range. The result is reduced servo response speed and damping. The first is the gross motion control in which the arm moves from an initial position/orientation to the vicinity of the desired target position/orientation along a planned trajectory.W. Manipulators controlled in this manner move at slow speeds with unnecessary vibrations.. . Chapter 5 focuses on deriving gross motion control laws and strategies which utilize the dynamic models discussed in Chap. VISION. may be subdivided into six principal areas: (1) sensing. The focus of Chap. proximity. the topic of Chaps. and touch. is used for robot guidance.. as well as for object identification and handling. limiting the precision and speed of the end-effector and making it appropriate only for limited-precision tasks. Although the latter is by far the most predominant form of operation of present industrial robots. 6 through 8. Current industrial approaches to robot arm control treat each joint of the robot arm as a simple joint servomechanism. a. External state sensors. on the other hand. '-. These changes in the parameters of the controlled system sometimes are significant enough to render conventional feedback control strategies ineffective.-. Any significant performance gain in this and other areas of robot arm control require the consideration of more efficient dynamic models.5 ROBOT SENSING The use of external sensing mechanisms allows a robot to interact with its environment in a flexible manner. and interpreting information from images of a three-dimensional world. and force sensing play a significant role in the improvement of robot performance. s0. .. and the use of dedicated computer architectures and parallel processing techniques. 1. Internal state sensors deal with the detection of variables such as arm joint position. (2) preprocessing. Robot vision may be defined as the process of extracting. deal with the detection of variables such as range. . SENSING. External sensing. 3 to efficiently control a manipulator. "a) y. AND INTELLIGENCE phases.. 7 and 8. (5) recognition. 
The function of robot sensors may be divided into two principal categories: internal state and external state. In our discussion. and c0.YOB. it requires a large memory space to store speech data. This is usually accomplished in the following steps: (1) leading the robot in slow motion using manual control through the entire assembly task. 1. A more general approach to solve the human-robot communication problem is the use of high-level programming.. spot welding. High-level vision refers to processes that attempt to emulate cognition. Moreover.t CAD . and it usually requires a training period to build up speech templates . The method of teach and playback involves teaching the robot by leading it through the motions to be performed. Robots are commonly used in areas such as arc welding. . It can recognize a set of discrete words from a limited voca- bulary and usually requires the user to pause between words. and paint spraying. characterize. and the three major approaches to achieve it are discrete word recognition. medium-.. then the robot is run at an appropriate speed in a repetitive motion. .-C O>1 . and recognition of individual objects as medium-level vision functions. the usefulness of discrete word recognition to describe a task is limited. While there are no clearcut boundaries between these subdivisions. In terms of our six subdivisions.o .W0 ^'t A. and finally to the extraction of primitive image features such as intensity discontinuities. G]. .Q. Topics in higher-level vision are discussed in Chap..y :-1 O.t . We will associate with medium-level vision those processes that extract. 't3 '. These tasks require no interaction . and label components in an image resulting from low-level vision. '.0 system so that the user can direct the manipulator to accomplish a given task. and (3) if the taught motion is correct.-t «. preprocessing. they do provide a useful framework for categorizing the various processes that are inherent components of a machine vision system. This method is also known as guiding and is the most commonly used approach in present-day industrial robots.6 ROBOT PROGRAMMING LANGUAGES One major obstacle in using manipulators as general-purpose assembly machines is the lack of suitable and efficient communication between the user and the robotic There are several ways to communicate with a robot.. we shall treat sensing and preprocessing as low-level vision functions.fl for recognition. We consider three levels of processing: low-. "CS :Z. high-level programming languages. teach and playback. description. with the joint angles of the robot at appropriate locations being recorded in order to replay the motion. Current state-of-the-art speech recognition is quite primitive and generally speaker-dependent. The material in Chap.INTRODUCTION 9 It is convenient to group these various areas of vision according to the sophistication involved in their implementation. 8.. v~. and with concepts and techniques required to implement low-level vision functions. (2) editing and playing back the taught motion. and high-level vision. This will take us from the image formation process itself to compensations such as noise reduction. Although it is now possible to recognize words in real time due to faster computer components and efficient processing algorithms. we will treat segmentation. 7 deals with sensing. However." for example. Robot actions change one state. The discussion emphasizes the problem-solving or planning aspect of a robot.'' . A7' CS' C<.. . 
given some initial situation. of the world into another. A robot planner attempts to find a path from our initial robot world to a final robot world. planning means deciding on a course of action before acting.8 REFERENCES The general references cited below are representative of publications dealing with topics of interest in robotics and related fields. This effort is warranted because the manipulator is usually controlled by a computer. is still a very active area of research.. In the "blocks world. VISION.7 ROBOT INTELLIGENCE A basic problem in robotics is planning motions to solve some prespecified task. we introduce several basic methods for problem solving and their applications to robot planning. thus. SENSING.. In some situations.. A. The path consists of a sequence of operations that are considered primitive to the system. In a typical formulation of a robot problem we have a robot that is equipped with sensors and a set of primitive actions that it can perform in some easy-to-understand world. References given at the end of . we imagine a world of several labeled blocks resting on a table or on each other and a robot consisting of a TV camera and a movable arm and hand that is able to pick up and move blocks.. i. the use of robots to perform assembly tasks generally requires high-level programming techniques. the robot is a mobile vehicle with a TV camera that performs tasks such as pushing objects from place to place in an environment containing other objects. . In Chap. This increases the flexibility and versatility of the robot. s1) Cry 1. Research on robot problem solving has led to many ideas about problemsolving systems in artificial intelligence. AND INTELLIGENCE between the robot and the environment and can be easily programmed by guiding. Chapter 9 discusses the use of high-level programming techniques for achieving effective communication with a robotic system. 10. A solution to a problem could be the basis of a corresponding sequence of physical actions in the physical world.10 ROBOTICS: CONTROL. using programs to describe assembly tasks allows a robot to perform different jobs by simply executing the appropriate program. 1. Roh.-.. Here. and the most effective way for humans to communicate with computers is through a high-level programing language. we still need powerful and efficient planning algorithms that will be executed by high-speed special-purpose computer systems. Furthermore. A plan is. This action synthesis part of the robot problem can be solved by a problem-solving system that will achieve some stated goal. of planning. and then controlling the robot as it executes the commands necessary to achieve those actions.o 'CS -Cr 7C' 'U9 s. Clog CS' `C7 °'o .. which provides the intelligence and problem-solving capability to a robot system.. a representation of a course of action for achieving a stated goal. For real-time robot applications. or configuration. Proceedings of the International Symposium on Industrial Robots. and Automation in Design. Transmissions. Proceedings of IEEE International Conference on Robotics and Automation.. and ASME Journal of Mechanisms. and Fu [1986]. Complementary reading for the material in this book may be found in the books by Dodd and Rossol [1979]. Journal of Robotic Systems. The bibliography at the end of the book is organized in alphabetical order by author. and Craig [1986]. gyp . Robotica. Proceedings of the International Joint Conference on Artificial Intelligence. Dorf [1983]. Artificial Intelligence. 
ASME Journal of Dynamic Systems. Some of the major journals and conference proceedings that routinely contain articles on various aspects of robotics include: IEEE Journal of Robotics and Automation. Man and Cybernetics. IEEE Transactions on Systems. International Journal of Robotics Research. s. Computer Graphics. IEEE Transactions on Automatic Control. Measurement and Control. Vision. Proceedings of the Society of Photo-Optical and Instrumentation ^°h N Z3- fop Engineers. IEEE Transactions on Pattern Analysis and Machine Intelligence.INTRODUCTION 11 later chapters are keyed to specific topics discussed in the text. and Image Processing. Paul [1981]. Tou [1985]. Gonzalez. and it contains all the pertinent information for each reference cited in the text. Mechanism and Machine Theory. Engelberger [1980]. Snyder [1985]. Lee. ASME Journal of Mechanical Design. ASME Journal of Applied Mechanics. (t).I` °"' «. 12 .C q(t) = (q. Thus. Robot arm kinematics deals with the analytical study of the geometry of motion of a robot arm with respect to a fixed reference coordinate system as a function of time without regard to the forces/moments that cause the motion. Given a desired position and orientation of the end-effector of the manipulator and the geometric link parameters with respect to a reference coordinate system. she seems to feel the thrill of life! Henry Wadsworth Longfellow 2. she moves.. One end of the chain is attached to a supporting base while the other end is free and attached with a tool (the end-effector) to manipulate objects or perform assembly tasks.1 INTRODUCTION A mechanical manipulator can be modeled as an open-loop articulated chain with several rigid bodies (links) connected in series by either revolute or prismatic joints driven by actuators. 1."T' C`. given the joint angle vector . This chapter addresses two fundamental questions of both theoretical and practical interest in robot arm kinematics: fl" CS' °. one is interested in the spatial description of the end-effector of the manipulator with respect to a fixed reference coordinate system. she starts.. The relative motion of the joints results in the motion of the links that positions the hand in a desired orientation. in particular the relations between the joint-variable space and the position and orientation of the end-effector of a robot arm. while the second question is the inverse kinematics (or arm solution) problem. it deals with the analytical description of the spatial displacement of the robot as a function of time.- The first question is usually referred to as the direct (or forward) kinematics problem.+ q (t) ) T and the geometric link parameters. where n is the number of degrees of freedom. . For a given manipulator. In most robotic applications. how many different manipulator configurations will satisfy the same condition? q2 (t)( t ) . what is the position and orientation of the end-effector of the manipulator with respect to a reference coordinate system? 2. can the manipulator reach the desired prescribed manipulator hand position and orientation? And if it can.CHAPTER TWO ROBOT ARM KINEMATICS And see! she stirs. ROBOT ARM KINEMATICS 13 Link parameters Joint angle, III Direct kinematics - orientation of the end-cIIector Position and Link parameters 11! Joint angles Inverse kinematics Figure 2.1 The direct and inverse kinematics problems. placement of the "hand coordinate frame" to the reference coordinate frame. These homogeneous transformation matrices are also useful in deriving the _>_ niques. 
In general, the inverse kinematics problem can be solved by several techMost commonly used methods are the matrix algebraic, iterative, or .0.. ate) geometric approaches. A geometric approach based on the lifxk coordinatd'systems O5"CD dynamic equations of motion of a robot arm. ... wow Since the independent variables in a robot arm are the joint variables and a task is usually stated in terms of the reference coordinate frame, the inverse kinematics problem is used more frequently. A simple block diagram indicating the relationship between these two problems is shown in Fig. 2.1. Since the links of a robot arm may rotate and/or translate with respect to a reference coordinate frame, the total spatial displacement of the end-effector is due to the angular rotations and linear translations of the links. Denavit and Hartenberg [1955] proposed a systematic and generalized approach of utilizing matrix algebra to describe and represent the spatial geometry of the links of a robot arm with respect to a fixed reference frame. This method uses a 4 x 4 homogeneous transformation matrix to describe the spatial relationship between two adjacent rigid mechanical links and reduces the direct kinematics problem to finding an equivalent 4 x 4 homogeneous transformation matrix that relates the spatial dis.14 GCS and the manipulator configuration will be presented in obtaining a closed form joint solution for simple manipulators with rotary joints. Then a more general approach using 4 x 4 homogeneous matrices will be explored in obtaining a joint 't7 'L7 ,0, solution for simple manipulators. 'L3 2.2 THE DIRECT KINEMATICS PROBLEM Vector and matrix algebra' are utilized to develop a systematic and generalized approach to describe and represent the location of the links of a robot arm with I Vectors are represented in lowercase bold letters; matrices are in uppercase bold. .U) 0 C($ 0 0 14 ROBOTICS- CONTROL, SENSING. VISION, AND INTELLIGENCE respect to a fixed reference frame. Since the links of a robot arm may rotate and/ or translate with respect to a reference coordinate frame, a body-attached coordinate frame will be established along the joint axis for each link. The direct kinematics problem is reduced to finding a transformation matrix that relates the body-attached coordinate frame to the reference coordinate frame. A 3 x 3 rotation matrix is used to describe the rotational operations of the body-attached frame with respect to the reference frame. The homogeneous coordinates are then used to represent position vectors in a three-dimensional space, and the rotation matrices will be expanded to 4 x 4 homogeneous transformation matrices to include the translational operations of the body-attached coordinate frames. This matrix representation of a rigid mechanical link to describe the spatial geometry of a robot-arm was first used by Denavit and Hartenberg [1955]. The advantage of using the Denavit-Hartenberg representation of linkages is its algorithmic universality in deriving the kinematic equation of a robot arm. `n. ^_. 2.2.1 Rotation Matrices A 3 x 3 rotation matrix can be defined as a transformation matrix which operates on a position vector in a three-dimensional euclidean space and maps its coordinates expressed in a rotated coordinate system OUVW (body-attached frame) to a reference coordinate system OXYZ. In Fig. 
2.2, we are given two right-hand rectangular coordinate systems, namely, the OXYZ coordinate system with OX, OY, and OZ as its coordinate axes and the OUVW coordinate system with OU, OV, and OW as its coordinate axes. Both coordinate systems have their origins coincident at point O. The OXYZ coordinate system is fixed in the three-dimensional space and is considered to be the reference frame. The OUVW coordinate frame is rotating with respect to the reference frame OXYZ. Physically, one can consider the OUVW coordinate system to be a body-attached coordinate frame. That is, it is permanently and conveniently attached to the rigid body (e.g., an aircraft or a link of a robot arm) and moves together with it.

Figure 2.2 Reference and body-attached coordinate systems.

Let (i_x, j_y, k_z) and (i_u, j_v, k_w) be the unit vectors along the coordinate axes of the OXYZ and OUVW systems, respectively. A point p in the space can be represented by its coordinates with respect to both coordinate systems. For ease of discussion, we shall assume that p is at rest and fixed with respect to the OUVW coordinate frame. Then the point p can be represented by its coordinates with respect to the OUVW and OXYZ coordinate systems, respectively, as

$$\mathbf{p}_{uvw} = (p_u, p_v, p_w)^T \quad \text{and} \quad \mathbf{p}_{xyz} = (p_x, p_y, p_z)^T \qquad (2.2\text{-}1)$$

where p_uvw and p_xyz represent the same point p in the space with reference to different coordinate systems, and the superscript T on vectors and matrices denotes the transpose operation.

We would like to find a 3 × 3 transformation matrix R that will transform the coordinates of p_uvw to the coordinates expressed with respect to the OXYZ coordinate system, after the OUVW coordinate system has been rotated. That is,

$$\mathbf{p}_{xyz} = \mathbf{R}\,\mathbf{p}_{uvw} \qquad (2.2\text{-}2)$$

Note that physically the point p_uvw has been rotated together with the OUVW coordinate system.

Recalling the definition of the components of a vector, we have

$$\mathbf{p}_{uvw} = p_u \mathbf{i}_u + p_v \mathbf{j}_v + p_w \mathbf{k}_w \qquad (2.2\text{-}3)$$

where p_x, p_y, and p_z represent the components of p along the OX, OY, and OZ axes, respectively, or the projections of p onto the respective axes. Thus, using the definition of a scalar product and Eq. (2.2-3),

$$p_x = \mathbf{i}_x \cdot \mathbf{p} = \mathbf{i}_x \cdot \mathbf{i}_u\, p_u + \mathbf{i}_x \cdot \mathbf{j}_v\, p_v + \mathbf{i}_x \cdot \mathbf{k}_w\, p_w$$
$$p_y = \mathbf{j}_y \cdot \mathbf{p} = \mathbf{j}_y \cdot \mathbf{i}_u\, p_u + \mathbf{j}_y \cdot \mathbf{j}_v\, p_v + \mathbf{j}_y \cdot \mathbf{k}_w\, p_w \qquad (2.2\text{-}4)$$
$$p_z = \mathbf{k}_z \cdot \mathbf{p} = \mathbf{k}_z \cdot \mathbf{i}_u\, p_u + \mathbf{k}_z \cdot \mathbf{j}_v\, p_v + \mathbf{k}_z \cdot \mathbf{k}_w\, p_w$$

or expressed in matrix form,

$$\begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix} =
\begin{bmatrix}
\mathbf{i}_x \cdot \mathbf{i}_u & \mathbf{i}_x \cdot \mathbf{j}_v & \mathbf{i}_x \cdot \mathbf{k}_w \\
\mathbf{j}_y \cdot \mathbf{i}_u & \mathbf{j}_y \cdot \mathbf{j}_v & \mathbf{j}_y \cdot \mathbf{k}_w \\
\mathbf{k}_z \cdot \mathbf{i}_u & \mathbf{k}_z \cdot \mathbf{j}_v & \mathbf{k}_z \cdot \mathbf{k}_w
\end{bmatrix}
\begin{bmatrix} p_u \\ p_v \\ p_w \end{bmatrix} \qquad (2.2\text{-}5)$$

Using this notation, the matrix R in Eq. (2.2-2) is given by

$$\mathbf{R} =
\begin{bmatrix}
\mathbf{i}_x \cdot \mathbf{i}_u & \mathbf{i}_x \cdot \mathbf{j}_v & \mathbf{i}_x \cdot \mathbf{k}_w \\
\mathbf{j}_y \cdot \mathbf{i}_u & \mathbf{j}_y \cdot \mathbf{j}_v & \mathbf{j}_y \cdot \mathbf{k}_w \\
\mathbf{k}_z \cdot \mathbf{i}_u & \mathbf{k}_z \cdot \mathbf{j}_v & \mathbf{k}_z \cdot \mathbf{k}_w
\end{bmatrix} \qquad (2.2\text{-}6)$$

Similarly, one can obtain the coordinates of p_uvw from the coordinates of p_xyz:

$$\mathbf{p}_{uvw} = \mathbf{Q}\,\mathbf{p}_{xyz} \qquad (2.2\text{-}7)$$

$$\begin{bmatrix} p_u \\ p_v \\ p_w \end{bmatrix} =
\begin{bmatrix}
\mathbf{i}_u \cdot \mathbf{i}_x & \mathbf{i}_u \cdot \mathbf{j}_y & \mathbf{i}_u \cdot \mathbf{k}_z \\
\mathbf{j}_v \cdot \mathbf{i}_x & \mathbf{j}_v \cdot \mathbf{j}_y & \mathbf{j}_v \cdot \mathbf{k}_z \\
\mathbf{k}_w \cdot \mathbf{i}_x & \mathbf{k}_w \cdot \mathbf{j}_y & \mathbf{k}_w \cdot \mathbf{k}_z
\end{bmatrix}
\begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix} \qquad (2.2\text{-}8)$$

Since dot products are commutative, one can see from Eqs. (2.2-6) to (2.2-8) that

$$\mathbf{Q} = \mathbf{R}^{-1} = \mathbf{R}^T \qquad (2.2\text{-}9)$$

$$\mathbf{Q}\mathbf{R} = \mathbf{R}^T\mathbf{R} = \mathbf{R}^{-1}\mathbf{R} = \mathbf{I}_3 \qquad (2.2\text{-}10)$$

where I_3 is a 3 × 3 identity matrix. The transformation given in Eq. (2.2-2) or (2.2-7) is called an orthogonal transformation and, since the vectors in the dot products are all unit vectors, it is also called an orthonormal transformation.

The primary interest in developing the above transformation matrix is to find the rotation matrices that represent rotations of the OUVW coordinate system about each of the three principal axes of the reference coordinate system OXYZ.
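To make the construction in Eqs. (2.2-5) to (2.2-10) concrete, here is a small numerical sketch (an illustration, not code from the text) in Python/NumPy. It builds R from the dot products of the basis unit vectors for an OUVW frame chosen, for the sake of the example, as the OXYZ frame turned 30° about OZ, and then checks the orthonormality relations; the frame choice and variable names are assumptions of this sketch only.

```python
# Illustrative sketch (not from the text): build the rotation matrix of
# Eq. (2.2-6) from dot products of basis unit vectors, then verify the
# orthonormality properties of Eqs. (2.2-9) and (2.2-10).
import numpy as np

# Reference-frame unit vectors i_x, j_y, k_z (rows of the identity matrix).
ix, jy, kz = np.eye(3)

# Body-attached OUVW frame: here chosen as OXYZ rotated 30 degrees about OZ.
a = np.deg2rad(30.0)
iu = np.array([np.cos(a),  np.sin(a), 0.0])   # OU axis expressed in OXYZ
jv = np.array([-np.sin(a), np.cos(a), 0.0])   # OV axis expressed in OXYZ
kw = np.array([0.0,        0.0,       1.0])   # OW axis expressed in OXYZ

# R[i, j] = (reference unit vector i) . (rotated unit vector j), Eq. (2.2-6).
R = np.array([[np.dot(ref, rot) for rot in (iu, jv, kw)] for ref in (ix, jy, kz)])

p_uvw = np.array([1.0, 2.0, 3.0])      # a point fixed in the OUVW frame
p_xyz = R @ p_uvw                      # Eq. (2.2-2)

print(np.allclose(R.T @ R, np.eye(3)))     # Eq. (2.2-10): R^T R = I_3
print(np.allclose(R.T @ p_xyz, p_uvw))     # Eq. (2.2-7) with Q = R^T
```

Note that the columns of R are exactly the rotated axes i_u, j_v, k_w expressed in OXYZ coordinates, a point the chapter returns to in Sec. 2.2.5.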
If the OUVW coordinate system is rotated an α angle about the OX axis to arrive at a new location in the space, then the point p_uvw, having coordinates (p_u, p_v, p_w)^T with respect to the OUVW system, will have different coordinates (p_x, p_y, p_z)^T with respect to the reference system OXYZ. The necessary transformation matrix R_x,α is called the rotation matrix about the OX axis with α angle. R_x,α can be derived from the above transformation matrix concept, that is,

$$\mathbf{p}_{xyz} = \mathbf{R}_{x,\alpha}\,\mathbf{p}_{uvw} \qquad (2.2\text{-}11)$$

with i_x ≡ i_u, and

$$\mathbf{R}_{x,\alpha} =
\begin{bmatrix}
\mathbf{i}_x \cdot \mathbf{i}_u & \mathbf{i}_x \cdot \mathbf{j}_v & \mathbf{i}_x \cdot \mathbf{k}_w \\
\mathbf{j}_y \cdot \mathbf{i}_u & \mathbf{j}_y \cdot \mathbf{j}_v & \mathbf{j}_y \cdot \mathbf{k}_w \\
\mathbf{k}_z \cdot \mathbf{i}_u & \mathbf{k}_z \cdot \mathbf{j}_v & \mathbf{k}_z \cdot \mathbf{k}_w
\end{bmatrix} =
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix} \qquad (2.2\text{-}12)$$

Similarly, the 3 × 3 rotation matrices for rotation about the OY axis with φ angle and about the OZ axis with θ angle are, respectively (see Fig. 2.3),

Figure 2.3 Rotating coordinate systems.

$$\mathbf{R}_{y,\phi} = \begin{bmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{bmatrix} \qquad
\mathbf{R}_{z,\theta} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (2.2\text{-}13)$$

The matrices R_x,α, R_y,φ, and R_z,θ are called the basic rotation matrices. Other finite rotation matrices can be obtained from these matrices.

Example: Given two points a_uvw = (4, 3, 2)^T and b_uvw = (6, 2, 4)^T with respect to the rotated OUVW coordinate system, determine the corresponding points a_xyz, b_xyz with respect to the reference coordinate system if it has been rotated 60° about the OZ axis.

SOLUTION: a_xyz = R_z,60° a_uvw and b_xyz = R_z,60° b_uvw

$$\mathbf{a}_{xyz} = \begin{bmatrix} 0.500 & -0.866 & 0 \\ 0.866 & 0.500 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 4 \\ 3 \\ 2 \end{bmatrix} =
\begin{bmatrix} 4(0.5) + 3(-0.866) + 2(0) \\ 4(0.866) + 3(0.5) + 2(0) \\ 4(0) + 3(0) + 2(1) \end{bmatrix} =
\begin{bmatrix} -0.598 \\ 4.964 \\ 2.0 \end{bmatrix}$$

$$\mathbf{b}_{xyz} = \begin{bmatrix} 0.500 & -0.866 & 0 \\ 0.866 & 0.500 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 6 \\ 2 \\ 4 \end{bmatrix} =
\begin{bmatrix} 1.268 \\ 6.196 \\ 4.0 \end{bmatrix}$$

Thus, a_xyz and b_xyz are equal to (-0.598, 4.964, 2.0)^T and (1.268, 6.196, 4.0)^T, respectively, when expressed in terms of the reference coordinate system.

Example: If a_xyz = (4, 3, 2)^T and b_xyz = (6, 2, 4)^T are the coordinates with respect to the reference coordinate system, determine the corresponding points a_uvw, b_uvw with respect to the rotated OUVW coordinate system if it has been rotated 60° about the OZ axis.

SOLUTION: a_uvw = (R_z,60°)^T a_xyz and b_uvw = (R_z,60°)^T b_xyz

$$\mathbf{a}_{uvw} = \begin{bmatrix} 0.500 & 0.866 & 0 \\ -0.866 & 0.500 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 4 \\ 3 \\ 2 \end{bmatrix} =
\begin{bmatrix} 4(0.5) + 3(0.866) + 2(0) \\ 4(-0.866) + 3(0.5) + 2(0) \\ 4(0) + 3(0) + 2(1) \end{bmatrix} =
\begin{bmatrix} 4.598 \\ -1.964 \\ 2.0 \end{bmatrix}$$

$$\mathbf{b}_{uvw} = \begin{bmatrix} 0.500 & 0.866 & 0 \\ -0.866 & 0.500 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 6 \\ 2 \\ 4 \end{bmatrix} =
\begin{bmatrix} 4.732 \\ -4.196 \\ 4.0 \end{bmatrix}$$
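The basic rotation matrices and the 60° examples above translate directly into a few lines of code. The following sketch (again an illustration, not code from the book) defines R_x,α, R_y,φ, and R_z,θ as in Eqs. (2.2-12) and (2.2-13) and reproduces the numbers of the two examples; the helper names rot_x, rot_y, rot_z are arbitrary choices of this sketch.

```python
# Illustrative sketch (not from the text): the basic rotation matrices of
# Eqs. (2.2-12) and (2.2-13), used to reproduce the 60-degree examples above.
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

Rz60 = rot_z(np.deg2rad(60.0))
a_uvw = np.array([4.0, 3.0, 2.0])
b_uvw = np.array([6.0, 2.0, 4.0])

print(Rz60 @ a_uvw)              # approximately (-0.598, 4.964, 2.0)
print(Rz60 @ b_uvw)              # approximately ( 1.268, 6.196, 4.0)
print(Rz60.T @ (Rz60 @ a_uvw))   # transpose undoes the rotation: (4, 3, 2)
```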
2.2.2 Composite Rotation Matrix

Basic rotation matrices can be multiplied together to represent a sequence of finite rotations about the principal axes of the OXYZ coordinate system. Since matrix multiplications do not commute, the order or sequence of performing rotations is important. For example, to develop a rotation matrix representing a rotation of α angle about the OX axis followed by a rotation of θ angle about the OZ axis followed by a rotation of φ angle about the OY axis, the resultant rotation matrix representing these rotations is

$$\mathbf{R} = \mathbf{R}_{y,\phi}\,\mathbf{R}_{z,\theta}\,\mathbf{R}_{x,\alpha} =
\begin{bmatrix} C\phi & 0 & S\phi \\ 0 & 1 & 0 \\ -S\phi & 0 & C\phi \end{bmatrix}
\begin{bmatrix} C\theta & -S\theta & 0 \\ S\theta & C\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & C\alpha & -S\alpha \\ 0 & S\alpha & C\alpha \end{bmatrix}$$

$$= \begin{bmatrix}
C\phi C\theta & S\phi S\alpha - C\phi S\theta C\alpha & C\phi S\theta S\alpha + S\phi C\alpha \\
S\theta & C\theta C\alpha & -C\theta S\alpha \\
-S\phi C\theta & S\phi S\theta C\alpha + C\phi S\alpha & C\phi C\alpha - S\phi S\theta S\alpha
\end{bmatrix} \qquad (2.2\text{-}14)$$

where Cφ ≡ cos φ, Sφ ≡ sin φ, Cθ ≡ cos θ, Sθ ≡ sin θ, Cα ≡ cos α, Sα ≡ sin α. That is different from the rotation matrix which represents a rotation of φ angle about the OY axis followed by a rotation of θ angle about the OZ axis followed by a rotation of α angle about the OX axis. The resultant rotation matrix is:

$$\mathbf{R} = \mathbf{R}_{x,\alpha}\,\mathbf{R}_{z,\theta}\,\mathbf{R}_{y,\phi} =
\begin{bmatrix} 1 & 0 & 0 \\ 0 & C\alpha & -S\alpha \\ 0 & S\alpha & C\alpha \end{bmatrix}
\begin{bmatrix} C\theta & -S\theta & 0 \\ S\theta & C\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} C\phi & 0 & S\phi \\ 0 & 1 & 0 \\ -S\phi & 0 & C\phi \end{bmatrix}$$

$$= \begin{bmatrix}
C\theta C\phi & -S\theta & C\theta S\phi \\
C\alpha S\theta C\phi + S\alpha S\phi & C\alpha C\theta & C\alpha S\theta S\phi - S\alpha C\phi \\
S\alpha S\theta C\phi - C\alpha S\phi & S\alpha C\theta & S\alpha S\theta S\phi + C\alpha C\phi
\end{bmatrix} \qquad (2.2\text{-}15)$$

In addition to rotating about the principal axes of the reference frame OXYZ, the rotating coordinate system OUVW can also rotate about its own principal axes. In this case, the resultant or composite rotation matrix may be obtained from the following simple rules:

1. Initially both coordinate systems are coincident, hence the rotation matrix is a 3 × 3 identity matrix, I_3.
2. If the rotating coordinate system OUVW is rotating about one of the principal axes of the OXYZ frame, then premultiply the previous (resultant) rotation matrix with an appropriate basic rotation matrix.
3. If the rotating coordinate system OUVW is rotating about its own principal axes, then postmultiply the previous (resultant) rotation matrix with an appropriate basic rotation matrix.

Example: Find the resultant rotation matrix that represents a rotation of φ angle about the OY axis followed by a rotation of θ angle about the OW axis followed by a rotation of α angle about the OU axis.

SOLUTION:

$$\mathbf{R} = \mathbf{R}_{y,\phi}\,\mathbf{I}_3\,\mathbf{R}_{w,\theta}\,\mathbf{R}_{u,\alpha} = \mathbf{R}_{y,\phi}\,\mathbf{R}_{z,\theta}\,\mathbf{R}_{x,\alpha}$$

$$= \begin{bmatrix}
C\phi C\theta & S\phi S\alpha - C\phi S\theta C\alpha & C\phi S\theta S\alpha + S\phi C\alpha \\
S\theta & C\theta C\alpha & -C\theta S\alpha \\
-S\phi C\theta & S\phi S\theta C\alpha + C\phi S\alpha & C\phi C\alpha - S\phi S\theta S\alpha
\end{bmatrix}$$

Note that this example is chosen so that the resultant matrix is the same as Eq. (2.2-14), but the sequence of rotations is different from the one that generates Eq. (2.2-14).

2.2.3 Rotation Matrix About an Arbitrary Axis

Sometimes the rotating coordinate system OUVW may rotate φ angle about an arbitrary axis r which is a unit vector having components r_x, r_y, and r_z and passing through the origin O. The advantage is that for certain angular motions the OUVW frame can make one rotation about the axis r instead of several rotations about the principal axes of the OUVW and/or OXYZ coordinate frames. To derive this rotation matrix R_r,φ, we can first make some rotations about the principal axes of the OXYZ frame to align the axis r with the OZ axis. Then make the rotation about the r axis with φ angle and rotate about the principal axes of the OXYZ frame again to return the r axis back to its original location. With reference to Fig. 2.4, aligning the OZ axis with the r axis can be done by rotating about the OX axis with α angle (the axis r is then in the XZ plane), followed by a rotation of -β angle about the OY axis (the axis r now aligns with the OZ axis). After the rotation of φ angle about the OZ or r axis, reverse the above sequence of rotations with their respective opposite angles. The resultant rotation matrix is

$$\mathbf{R}_{r,\phi} = \mathbf{R}_{x,-\alpha}\,\mathbf{R}_{y,\beta}\,\mathbf{R}_{z,\phi}\,\mathbf{R}_{y,-\beta}\,\mathbf{R}_{x,\alpha}$$

$$= \begin{bmatrix} 1 & 0 & 0 \\ 0 & C\alpha & S\alpha \\ 0 & -S\alpha & C\alpha \end{bmatrix}
\begin{bmatrix} C\beta & 0 & S\beta \\ 0 & 1 & 0 \\ -S\beta & 0 & C\beta \end{bmatrix}
\begin{bmatrix} C\phi & -S\phi & 0 \\ S\phi & C\phi & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} C\beta & 0 & -S\beta \\ 0 & 1 & 0 \\ S\beta & 0 & C\beta \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & C\alpha & -S\alpha \\ 0 & S\alpha & C\alpha \end{bmatrix}$$

Figure 2.4 Rotation about an arbitrary axis (rotation sequence: 1. R_x,α  2. R_y,-β  3. R_z,φ  4. R_y,β  5. R_x,-α).

From Fig. 2.4, we easily find that

$$\sin\alpha = \frac{r_y}{\sqrt{r_y^2 + r_z^2}} \qquad \cos\alpha = \frac{r_z}{\sqrt{r_y^2 + r_z^2}} \qquad \sin\beta = r_x \qquad \cos\beta = \sqrt{r_y^2 + r_z^2}$$

Substituting into the above equation,

$$\mathbf{R}_{r,\phi} = \begin{bmatrix}
r_x^2 V\phi + C\phi & r_x r_y V\phi - r_z S\phi & r_x r_z V\phi + r_y S\phi \\
r_x r_y V\phi + r_z S\phi & r_y^2 V\phi + C\phi & r_y r_z V\phi - r_x S\phi \\
r_x r_z V\phi - r_y S\phi & r_y r_z V\phi + r_x S\phi & r_z^2 V\phi + C\phi
\end{bmatrix} \qquad (2.2\text{-}16)$$

where Vφ ≡ vers φ = 1 - cos φ. This is a very useful rotation matrix.
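As a numerical cross-check of Eq. (2.2-16), the sketch below (illustrative, not from the text) builds R_r,φ from the closed form and compares it with the five-rotation composition R_x,-α R_y,β R_z,φ R_y,-β R_x,α used in the derivation. The arctan2-based computation of α and β is an assumption consistent with Fig. 2.4, not a formula quoted from the chapter.

```python
# Illustrative sketch (not from the text): Eq. (2.2-16) checked against the
# five-rotation composition of Fig. 2.4, and against the special case r = OZ.
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_axis(r, phi):
    """Eq. (2.2-16): rotation of phi about the unit axis r = (rx, ry, rz)."""
    rx, ry, rz = r / np.linalg.norm(r)
    V, C, S = 1.0 - np.cos(phi), np.cos(phi), np.sin(phi)
    return np.array([
        [rx*rx*V + C,    rx*ry*V - rz*S, rx*rz*V + ry*S],
        [rx*ry*V + rz*S, ry*ry*V + C,    ry*rz*V - rx*S],
        [rx*rz*V - ry*S, ry*rz*V + rx*S, rz*rz*V + C]])

r = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
phi = np.deg2rad(40.0)

# Alignment angles of Fig. 2.4: alpha tips r into the XZ plane, -beta onto OZ.
alpha = np.arctan2(r[1], r[2])
beta = np.arctan2(r[0], np.hypot(r[1], r[2]))
R_composed = rot_x(-alpha) @ rot_y(beta) @ rot_z(phi) @ rot_y(-beta) @ rot_x(alpha)

print(np.allclose(rot_axis(r, phi), R_composed))                        # True
print(np.allclose(rot_axis(np.array([0.0, 0.0, 1.0]), phi), rot_z(phi)))  # True
```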
Example: Find the rotation matrix Rr,, that represents the rotation of 0 angle about the vector r = (1, 1, 1) T. Since the vector r is not a unit vector, we need to normalize it and find its components along the principal axes of the OXYZ frame. Therefore, SOLUTION: rY r2+ry+rZ 2 3 2 jl 3 Substituting into Eq. (2.2-16), we obtain the Rr.O matrix: 113 Vc V3 1/3 V(b + 1/3 V(b + V3 Vq + Cq So 13 V(b + -So 3 Rr, 0 = 1/3 VCb + I 73 So So 1/3 Vv - 3- i 3S o 'I3 Vq - 73 So 1/3 V0 + Co 2.2.4 Rotation Matrix with Euler Angles Representation The matrix representation for rotation of a rigid body simplifies many operations, but it needs nine elements to completely describe the orientation of a rotating rigid body. It does not lead directly to a complete set of generalized coordinates. Such a set of generalized coordinates can describe the orientation of a rotating rigid body with respect to a reference coordinate frame. They can be provided by three angles called Euler angles 0, 0, and >G. Although Euler angles describe the orienCAD U.- tation of a rigid body with respect to a fixed reference frame, there are many different types of Euler angle representations. The three most widely used Euler angles representations are tabulated in Table 2.1. The first Euler angle representation in Table 2.1 is usually associated with gyroscopic motion. This representation is usually called the eulerian angles, and corresponds to the following sequence of rotations (see Fig. 2.5): F-' -°= Table 2.1 Three types of Euler angle representations Eulerian angles system I Sequence r_' 4'~ of rotations 0 about OZ axis 0 about OU axis ik about OW axis a.. +-' Euler angles system II Roll, pitch, and yaw system III ><i about OX axis 0 about OZ axis 0 about OV axis ¢ about OW axis 0 about OY axis 4 about OZ axis ROBOT ARM KINEMATICS 23 Z, W U," Figure 2.5 Eulerian angles system I. 1. A rotation of 0 angle about the OZ axis (R=,, ) 2. A rotation of 8 angle about the rotated OU axis (R,,, 0) 3. Finally a rotation of 1 angle about the rotated OW axis (R,,,, The resultant eulerian rotation matrix is Rm, 0, 0 = RZ, , R11, 0 RIV, >G The above eulerian angle rotation matrix R¢, 0, >G can also be specified in terms of the rotations about the principal axes of the reference coordinate system: a rotation of 0 angle about the OZ axis followed by a rotation of 0 angle about the OX axis and finally a rotation of 0 angle about the OZ axis. With reference to Fig. 2.6, another set of Euler angles 0, 0, and ' representation corresponds to the following sequence of rotations: 1. A rotation of 0 angle about the OZ axis (Rz. 4,) 2. A rotation of 0 angle about the rotated OV axis (R,,,0) 3. Finally a rotation of 0 angle about the rotated OW axis (R,,.,0) ¢¢w Co - So So 0 0 0 1 1 0 CO 0 - SO CO Ci1i S>1' Co 0 0 0 - Si Ci 0 0 0 1 Se 0 Corn - SOCOSO S4CO + WOS1i - CCS,i - socec,t -Soso + CcCOCJ SOCV1 Sose -CcSO Co (2.2-17) ses a.+ -e- ¢¢w This is mainly used in aeronautical engineering in the analysis of space vehicles.soso sgCeco + COW .v. A rotation of 0 about the OX axis (R..0 can also be specified in terms of the rotations about the principal axes of the reference coordinate system: a rotation of 0 angle about the OZ axis followed by a rotation of 0 angle about the OY axis and finally a rotation of 0 angle about the OZ axis. Another set of Euler angles representation for rotation is called roll. and yaw (RPY). 1. 0)-roll The resultant rotation matrix is 000 . They correspond to the following rotations in sequence: ate. g)-pitch 3. 
co The above Euler angle rotation matrix R0. AND INTELLIGENCE Z. Co so 0 -Ski Co 0 0 0 1 ce 0 0 1 se 0 co so 0 -so co 0 0 0 1 -so 0 co mil cocec . U Figure 2.0)-yaw 2.0R. A rotation of 0 about the OZ axis (R_. e. pitch.seal/ 4-. = RZ. aR. The resultant rotation matrix is R0.B.soco -so COW + Coco seso COO Sose (2. SENSING..6 Eulerian angles system II. V. W X.coCeW . VISION.2-18) . A rotation of 0 about the OY axis (Ry.24 ROBOTICS: CONTROL. the column vectors of the rotation matrix represent the principal axes of the OUVW coordinate system with respect to the reference frame and one can draw the location of all the principal axes of the OUVW coordinate frame with respect to the reference frame.t O. pitch. Let us choose a point p fixed in the OUVW coordinate system to be (1.I (2. 1.v = mil co .ROBOT ARM KINEMATICS 25 Z Y Figure 2. R0. i.0RX. In other words. S>G -S co -so o co Coco Soco COW ." 0'm coordinate system with respect to the reference coordinate system. 0) T and (0. Similarly. 2. pitch and yaw.0 = Rz.0Ry. one can identify that the second. and yaw can be specified in terms of the rotations about the principal axes of the reference coordinate system and the rotating coordinate system: a rotation of 0 angle about the OZ axis followed by a rotation of o angle about the rotated OV axis and finally a rotation of 0 angle about the rotated OU axis (see Fig.CcbS.. 0. L.& + cock cgsoc> + SOW sosoc . respectively.. Thus. 0 for roll.scbc>G ScSOS. The above rotation matrix R0 .c CBC.7).2. i...e. Lo. 0. 1)T. .So S0 co 0 0 0 0 1 Co 0 0 1 So 1 0 0 0 0 0 C. that is. given a reference frame OXYZ and a rotation matrix. a rotation matrix geometrically represents the principal axes of the rotated L.7 Roll.. 2.5 Geometric Interpretation of Rotation Matrices It is worthwhile to interpret the basic rotation matrices geometrically. choosing p to be (0. .2-19) -So cos.. of the OUVW coordinate system with respect to the OXYZ coordinate system.and third-column elements of a rotation matrix represent the OV and OW axes. 0)T. Then the first column of the rotation matrix represents the coordinates of this point with respect to the OXYZ coordinate system. . Since each row and column is a unit vector representation. the inner product (dot product) of each row with each other row equals zero. jv = (0. 1. AND INTELLIGENCE Since the inverse of a rotation matrix is equivalent to its transpose. VISION. Similarly.1 for a left-hand coordinate a-.. and k. Since each row is a vector representation of orthonormal vectors. Each column vector of the rotation matrix is a representation of the rotated axis unit vector expressed in terms of the axis unit vectors of the reference frame. 0)T jy = Oi.-. and each row vector is a representation of the axis unit vector of the reference frame expressed in terms of the rotated axis unit vectors of the OUVW frame. The inverse of a rotation matrix is the transpose of the rotation matrix... + cosaj. The original unit vectors are then ix = 1i1f + 0j + Ok. .v = (0. s'" R-1 = RT where 13 is a 3 x 3 identity matrix.sinak. . 2.. = (0. 0)T. This geometric interpretation of the rotation matrices is an important concept that provides insight into many robot arm kinematics problems. = (1. 0. 1)T since they are expressed in terms of SOLUTION: :bin themselves. This is a direct property of orthonormal coordinate systems. 0)T. and RR T = 13 Properties 3 and 4 are especially useful in checking the results of rotation matrix multiplications.a matrix can be reconstructed as . 
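These orthonormality statements are easy to confirm numerically. The sketch below (Python with NumPy; the test angles are arbitrary values of my choosing, not taken from the text) composes basic rotations in the roll, pitch, and yaw order and verifies that the transpose is the inverse and that the determinant is +1.

    import numpy as np

    def rot_x(t):
        return np.array([[1.0, 0.0, 0.0],
                         [0.0, np.cos(t), -np.sin(t)],
                         [0.0, np.sin(t),  np.cos(t)]])

    def rot_y(t):
        return np.array([[ np.cos(t), 0.0, np.sin(t)],
                         [0.0, 1.0, 0.0],
                         [-np.sin(t), 0.0, np.cos(t)]])

    def rot_z(t):
        return np.array([[np.cos(t), -np.sin(t), 0.0],
                         [np.sin(t),  np.cos(t), 0.0],
                         [0.0, 0.0, 1.0]])

    # A composite rotation in the roll-pitch-yaw order R = Rz(phi) Ry(theta) Rx(psi).
    R = rot_z(0.3) @ rot_y(-0.8) @ rot_x(1.1)

    assert np.allclose(R.T @ R, np.eye(3))          # rows and columns are orthonormal
    assert np.allclose(np.linalg.inv(R), R.T)       # the inverse equals the transpose
    assert np.isclose(np.linalg.det(R), 1.0)        # right-handed system: determinant is +1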
the determinant of a rotation matrix is + 1 for a right-hand coordinate system and . 3. and in determining an erroneous row or column vector. 0. cosa. and OW coordinate axes were rotated with a angle about the OX axis. OV.v = (0.26 ROBOTICS: CONTROL. Example: If the OU. Several useful properties of rotation matrices are listed as follows: 1. the magnitude of each row and column should be equal to 1. 4. the row vectors of the rotation matrix represent the principal axes of the reference system OXYZ with respect to the rotated coordinate system OUVW. cos a)T Applying property 1 and considering these as rows of the rotation matrix. Furthermore. system. . SENSING. since. the Rx. the inner product of each column with each other column equals zero. 0..sina)T kZ = Oi + sin aj + cos ak. what would the representation of the coordinate axes of the reference frame be in terms of the rotated coordinate system OUVW? ^T' The new coordinate axis unit vectors become i = (1. Thus. If this coordinate is unity (w = 1). The concept of a homogeneous-coordinate representation of points in a three-dimensional euclidean space is useful in developing matrix transformations that include rotation. A homogeneous transformation matrix can be considered to consist of four submatrices: cut Sao CAD . p) to indicate the representation of a cartesian vector in homogeneous coordinates. py. a fourth coordinate or component is introduced to a position vector P = (px.e. px. this scale factor will always be equal to 1.sin a which is the same as the transpose of Eq. and the physical Ndimensional vector is obtained by dividing the homogeneous coordinates by the (N + 1)th coordinate. W) T. Py. these "hats" will be lifted. px. The homogeneous transformation matrix is a 4 x 4 matrix which maps a position vector expressed in homogeneous coordinates from one coordinate system to another coordinate system.ROBOT ARM KINEMATICS 27 0 0 0 0 cos a sin a COS a . Later. translation.2-12).: 6°0 ova 0. w. pz)T. wpz.6 Homogeneous Coordinates and Transformation Matrix Since a 3 x 3 rotation matrix does not give us any provision for translation and scaling. w2 py. scaling. we use a "hat" (i. the transformation of an N-dimensional vector is performed in the (N + 1)-dimensional space.2. For example. the representation of an N-component position vector by an (N+ 1)-component vector is called homogeneous coordinate representation. although it is commonly used in computer graphics as a universal scale factor taking on any positive values. In this section.. w. We say that the position vector p is expressed in homogeneous coordinates. (2. WI py. as a scale factor. wt )T and 02 = (w. wpy. w)T in the homogeneous coordinate representation. In general. wpy. one can view the the fourth component of the homogeneous coordinates. The physical coordinates are O. In a homogeneous coordinate representation. and perspective transformation. pz)T in a three-dimensional space which makes it p = (wpx. then the transformed homogeneous coordinates of a position vector are the same as the physical coordinates of the vector. In robotics applications. Pz)T is represented by an augmented vector (wpx. a position vector p = (pX. 2. wt pz. W2 pz. wpz. in a three-dimensional space. py.o related to the homogeneous coordinates as follows: Px = x'Px w WPy WPz Py = w Pz = w There is no unique homogeneous coordinates representation for a position vector in the three-dimensional space. Thus. if no confusion exists. p1 = (w. 
w2 )T are all homogeneous coordinates representing the same position vector p = (px. the lower left 1 x 3 submatrix represents perspective transformation. SENSING.2-13). The homogeneous transformation matrix can be used to explain the geometric relationship between the body attached frame OUVW and the reference coordinate system OXYZ. 0 0 0 cos cp 0 0 1 sin 0 0 0 0 0 TX. is called the basic homogeneous translation .0 = 0 0 (2.2-21) 0 These 4 x 4 rotation matrices are called the basic homogeneous rotation matrices. Eqs. and the fourth diagonal element is the global scaling factor.2-20) scaling The upper left 3 x 3 submatrix represents the rotation matrix.28 ROBOTICS: CONTROL.2-12) and (2.. 1)T]. expressed as homogeneous rotation matrices. VISION. then using the transformation matrix concept.e.c. a 3 x 3 rotation matrix can be extended to a 4 x 4 homogeneous transformation matrix Trot for pure rotation operations. (2. dy. become 1 yam. cos a -sina cosa 0 0 0 1 0 0 sina 0 TY'a = -sin o 0 0 0 cos 0 0 0 1 cos 0 -sinB cos 0 0 0 0 0 1 0 0 0 1 sinB T. The upper right 3 x 1 submatrix of the homogeneous transformation matrix has the effect of translating the OUVW coordinate system which has axes parallel to the reference coordinate system OXYZ but whose origin is at (dx. dz) of the reference coordinate system: 1 0 1 0 0 1 dx Ttran = 0 0 0 dy dz 1 0 0 (2. If a position vector p in a three-dimensional space is expressed in homogeneous coordinates [i. pti pz. Thus. AND INTELLIGENCE P3x1 ' rotation matrix T= lxi perspective transformation position vector (2.. the upper right 3 x 1 submatrix represents the position vector of the origin of the rotated coordinate system with respect to the reference system. p = (pX.2-22) 0 This 4 x 4 transformation matrix matrix. 1' s 'LS ('' =1 (2. respectively. cz 1 0 0 1 Thus. Two do not produce any local scaling effect. and c. sz ay az T = Py pz f-+ 1 n 0 s a 0 p (2. the elements of this submatrix are set to zero to indicate null perspective transfor(DD mation. Note that the basic rotation matrices. as discussed in Chap. py s 11Z s w= "°. b. a 4 X 4 homogeneous transformation matrix maps a vector expressed in homogeneous coordinates with respect to the OUVW coordinate system to the reference coordinate system OXYZ. the fourth diagonal element in the homogeneous transformation matrix has the effect of globally reducing the coordinates if s > 1 and of enlarging the coordinates if 0 < s < 1. CAD pxyz = Tpuv1>> and nx sx (2.ROBOT ARM KINEMATICS 29 The lower left 1 x 3 submatrix of the homogeneous transformation matrix represents perspective transformation. In summary. which is useful for computer vision and the calibration of camera models.2-26a) ax px nY sy. In the present discussion. The physical cartesian coordinates of the vector are x px S `--1. as in a 0 0 0 0 b 0 0 0 c 0 x y z 1 ax 0 0 1-r by (2 2-23) . with w = 1. the coordinate values are stretched by the scalars a.2-25) Therefore. The first three diagonal elements produce local stretching or scaling. The principal diagonal elements of a homogeneous transformation matrix produce local and global scaling.2-24) z S 0 0 0 0 0 0 where s > 0. The fourth diagonal element produces global scaling as in 1 0 1 0 0 1 0 0 0 s CC' x y z 1 x y (2.. 7.2-26b) 1 nz 0 0 0 0 . That is. the inverse of a homogeneous transformation matrix is not equivalent to its transpose. Furthermore. In other words. 
Then the upper right 3 x 1 submatrix indicates the position of the origin of the OUVW frame with respect to the OXYZ reference coordinate frame. The position of the origin of the reference coordinate system with respect to the OUVW coordinate system can only be found after the inverse of the homogeneous transformation matrix is determined. 1. one can identify that the second-column (or s vector) and third-column (or a vector) elements of the homogeneous transformation matrix represent the OV and OW axes. VISION.2-27) -STp ax ay 0 az -aTP 1 ten. a homogeneous transformation matrix for a three-dimensional space can be represented as in Eq. This has the effect of making the elements in the upper right 3 x 1 submatrix a null vector. given a reference frame OXYZ and a homogeneous transformation matrix T. (2. 1. choosing p to be (0. 1) T.. the inverse of a homogeneous transformation matrix can be found to be .. the row vectors of a rotation submatrix represent the principal axes of the reference coordinate system with respect to the rotated coordinate system OUVW. Then the first column (or n vector) of the homogeneous transformation matrix represents the coordinates of the OU axis of OUVW with respect to the OXYZ coordinate system. (2. 0.. the column vectors of the rotation submatrix represent the principal axes of the OUVW BCD coo coordinate system with respect to the reference coordinate frame....p s-.' P (2. SENSING.2-26b). respectively.30 ROBOTICS: CONTROL. of the OUVW coordinate system with respect to the reference coordinate system. Next. In general. 0. is the origin of the OUVW coordinate system. 0 1)T.. 1)T and (0. the column vectors of the inverse of a homogeneous transformation matrix represent the principal axes of the reference system with respect to the rotated coordinate system OUVW. The fourth-column vector of the homogeneous transformation matrix represents the position of the origin of the OUVW coordinate system with respect to the reference system. that is i. 0. 1)T. Sy. p.7 Geometric Interpretation of Homogeneous Transformation Matrices In general.-+ nx ny. Similarly.2-27). However. and one can draw the orientation of all the principal axes of the OUVW coordinate frame with respect to the reference coordinate frame. . Since the inverse of a rotation submatrix is equivalent to its transpose. AND INTELLIGENCE 2. nz Sz -nTP -STp 83x3 0 0 0 T-' = Sx 0COC _ T 1 coordinate system (position and orientation) with respect to a reference coordinate Coo `3' °. that is. 0. we assume that the origins of both coordinate systems coincide at a point 0. -aTP 0 0 Thus.ti. and the upper right 3 x 1 subma'a. Let us choose a point p fixed in the OUVW coordinate system and expressed in homogeneous coordinates as (0. from Eq. 0. system. CDO '-' III '"' -. Thus. let us choose the point p to be (1. a homogeneous transformation matrix geometrically represents the location of a rotated '-h s.2. .-+ 1 4 0 The translated points are axyZ = (9. Using the appropriate homogeneous transformation matrix. 3. However. If the rotating coordinate system OUVW is rotating/translating about the principal axes of the OXYZ frame. This geometric interpretation of the homogeneous transformation matrices is an important concept used frequently throughout this book. 2. -1)T and bXy. and b.3 key. If the rotating coordinate system OUVW is rotating/translating about its own principal axes.ROBOT ARM KINEMATICS 31 trix represents the position of the origin of the reference frame with respect to the OUVW system..y. 
then postmultiply the previous (resultant) homogeneous transformation matrix with an appropriate basic homogeneous rotation/translation matrix.. 2. then premultiply the previous (resultant) homogeneous transformation matrix with an appropriate basic homogeneous rotation/ translation matrix. SOLUTION: 000 0 1 may 1 0 1 5 4 . Example: A T matrix is to be determined that represents a rotation of a angle about the OX axis. 4)T are to be translated a distance + 5 units along the OX axis and . The following rules are useful for finding a composite homogeneous transformation matrix: 1. = (11. .. 2. careful attention must be paid to the order in which these matrices are multiplied.2. 2)T and (6. '^w .8 Composite Homogeneous Transformation Matrix The homogeneous rotation and translation matrices can be multiplied together to obtain a composite homogeneous transformation matrix (we shall call it the T matrix). Example: Two points a. 3.-.. 1)T. = 0 0 0 0 0 0 -3 ..3 units along the OZ axis.--' 1 2 1 0 1 0 1 0 0 1 5 6 2 0 0 0 110 0 0 -3 . determine the new points a. since matrix multiplication is not commutative.. Initially both coordinate systems are coincident. hence the homogeneous transformation matrix is a 4 x 4 identity matrix. 3. 142. t3. followed by a translation of b units along the rotated OV axis.v = (4... 32 ROBOTICS. = cos ajy + sin akz. L=. a translation along the rotated OV axis of b units is b j _ bcosajy + bsin cakZ.... . of the reference system) j. followed by a translation of a units along the OX axis..CONTROL..e... i. Two approaches will be utilized.since 0 0 1 0 0 0 sina 0 cosa 0 0 0 1 0 0 0 0 cos a . then translation along the OV axis will accomplish the same goal. that is. followed by a rotation of 0 angle about the OZ axis. d T. followed by a translation of d units along the OZ axis. k. and the orthodox approach. column 2 of Eq. following the rules as stated earlier. a Tv.. 0 1 bc osa b s in a 1 0 0 0 cos a .sina cosa 0 sina 0 b cos a b sin a 1 In the orthodox approach. Thus. a = 0 0 0 R. which is simpler.. one should realize that since the T.. b = 0 0 0 cos a . SENSING. (2.. 0 0 0 0 1 1 1 0 0 1 0 T = Tx. a . jy.... SOLUTION: T = T<.. b T x. AND INTELLIGENCE This problem can be tricky but illustrates some of the fundamental components of the T matrix.. an unorthodox approach which is illustrative.. VISION. a Tx.2-21). matrix will rotate the OY axis to the OV axis. So the T matrix is 1 ''Y 0 1 0 0 >e' 1 0 0 0 T = Tv. After the rotation T. the rotated OV axis is (in terms of the unit vectors SOLUTION: i.since sina cosa 0 0 0 0 0 0 1 0 0 b 0 1 0 0 0 0 1 0 0 0 0 cos a -sina cosa 0 sina 0 b cos a b sin a 1 Example: Find a homogeneous transformation matrix T that represents a rotation of a angle about the OX axis. o TZ. since 0 cosa 0 0 1 0 0 0 cos 0 sin 0 -cosa sin 0 cos a cos 0. zi. 1)T representing a point in the link i coordinate system expressed in homogeneous coordinates pi.-.8). a 4 x 4 homogeneous transformation matrix is used. To describe the spatial displacement relationship between these two coordinate systems.R OBOT ARM KI NEMATICS 33 cos 0 -sin O cos 0 0 0 1 0 0 1 0 1 0 0 0 1 0 1 0 0 1 a 1 0 0 0 sin 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 0 d 1 0 0 0 0 0 cosa sina 0 .1 coordinate 2. >~' . the fixed reference coordinate frame OXYZ and the moving (translating and rotating) coordinate frame OUVW. If these two coordinate systems are assigned to each link of a robot arm. yi.. 
there are N joint-link pairs with link 0 (not considered part of the robot) attached to a supporting base where an inertial coordinate frame is usually established for this dynamic system. translation. Homogeneous transformation matrices have the combined effect of rotation. and global scaling when operating on position vectors expressed in homogeneous coordinates.1 and link i. Hence. say link i .1 (or OXYZ) coordinate system as Pi-I = TPi where T = 4 x 4 homogeneous transformation matrix relating the two coordinate systems pi = 4 x 1 augmented position vector (xi. and the last link is attached with a tool.2.9 Links. Using the T matrix. when joint i is activated. called links. (2. and Their Parameters A mechanical manipulator consists of a sequence of rigid bodies. z_1. The joints and links . then the link i . Each joint-link pair constitutes 1 degree of freedom. con- "C3 s. 2.1 coordinate system is the reference coordinate system and the link i coordinate system is the moving coordinate system.2-28) 1)T nected by either revolute or prismatic joints (see Fig.sin a cos 0 a cos 0 a sin 0 0 0 sina 0 cosa 0 d 1 We have identified two coordinate systems. Joints. perspective. sina sin 0 . respectively. system representing the same point pi in terms of the link i . we can specify a point pi at rest in link i and expressed in the link i (or OUVW) coordinate system in terms of the link i .I = is the 4 x 1 augmented position vector (xi_ I. yi_ I. for an N degree of freedom manipulator. 8 A PUMA robot arm illustrating joints and links. link i . A link i (i = 1. spherical.. at most. The significance of links.. respectively.9).1 and link i) is given by di which is the distance measured along the joint axis between the normals. screw. SENSING. They determine the relative position of neighboring CAD cam'.10)._. In general. joint 1 is the point of connection between link 1 and the supporting base. and planar (see Fig. Each link is connected to. cylindrical. two other links (e. 6 ) is connected to. 2. The joint angle Oi between the normals is measured in a plane normal to the joint axis. at most.. one for each of the links. AND INTELLIGENCE Figure 2. . thus. links. two joint axes are established at both ends of the connection. The relative position of two such connected links (link i . A joint axis (for joint i) is established at the connection of two links (see Fig. VISION. This joint axis will have two normals connected to it. Hence. di and Oi may be called the distance and the angle between the adjacent links. prismatic (sliding). . 2.g.34 ROBOTICS: CONTROL.1 and link i + 1). thus. Of these.. is that they maintain a fixed configuration between their joints which can be characterized by two CAD 5. . 7c' 1r- . two links are connected by a lower pair joint which has two surfaces sliding over one another while remaining in contact.. two others so that no closed loops are formed. Only six different lower-pair joints are possible: revolute (rotary). only rotary and prismatic joints are common in manipulators. from a kinematic perspective. are numbered outwardly from the base. Thus. Figure 2. is the angle between the joint axes measured in a plane perpendicular to ai. respectively.e. the z. ai and ai may be called the length and the twist angle of the link i.9 The lower pair.ROBOT ARM KINEMATICS 35 Revolute Planar Cylindrical Prismatic Spherical Screw Figure 2.10 Link coordinate system and its parameters... _ 1 and zi axes for joint i and joint i + 1. respectively). 
The parameter ai is the shortest distance measured along the common normal between the joint axes (i. . parameters: ai and a. They determine the structure of link i. and a. as long as the x. respectively. (x6. Note that these four parameters come in pairs: the link parameters (ai. Oi) which determine the relative position of neighboring links. are associated with each link of a manipulator. SENSING. 2..1. The zi_I axis lies along the axis of motion of the ith joint. The Denavit-Hartenberg (D-H) representation results in a 4 x 4 homogeneous transformation matrix representing each link's coordinate system at the joint with respect to the previous link's coordinate system. Thus.36 ROBOTICS: CONTROL.' C]. and pointing away from it.. it moves together with the link i. y6. the end-effector expressed in the "hand coordinates" can be transformed and expressed in the "base coordinates" which make up the inertial frame of this dynamic system. e-. axis is normal to the z. When the joint actuator activates joint i. four parameters. one is free to choose the location of coordinate frame 0 anywhere in the supporting base. The last coordinate frame (nth frame) can be placed anywhere in the hand. where i = 1 . yi. YI.) actually represent the unit vectors along the principal axes of the coordinate frame i. C". Since a rotary joint has only 1 degree of freedom. An orthonormal cartesian coordinate system (xi. Yo. but are used here to denote the coordinate frame i.o s...10 The Denavit-Hartenberg Representation To describe the translational and rotational relationships between adjacent links. 2.. y. as long as the zo axis lies along the axis of motion of the first joint. DC' 'b4 'F. for a o-' tea: . 4-. di. then these parameters constitute a sufficient set to completely determine the kinematic configuration of each link of a robot arm. the nth coordinate frame moves with the hand (link n). Thus. z. ZO). By these rules. yi. AND INTELLIGENCE In summary. Since the ith coordinate system is fixed in link i. 2. >. link i will move with respect to link i . (x1. ai. . each (xi.n (n = number of degrees of freedom) plus the base coordinate frame. `C1 "CD t (x. -I axis. chi r. through sequential transformations. yo. zi) coordinate frame of a robot arm corresponds to joint i + 1 and is fixed in link i. Every coordinate frame is determined and established on the basis of three 1. 3. .7" C'< '"r six-axis PUMA-like robot arm. If a sign convention for each of these parameters has been established. Denavit and Hartenberg [1955] proposed a matrix method of systematically establishing a coordinate system (body-attached frame) to each link of an articulated chain.2. zo) which is also the inertial coordinate frame of the robot arm. namely. The base coordinates are defined as the 0th coordinate frame (xo. '«i am. Z6). . and Oi. The D-H representation of a rigid link depends on four geometric parameters associated with each link. . . we have seven coordinate frames. VISION.. The xi axis is normal to the zi_I axis. v>' C]. rules: . Zl). (x0. .. These four parameters completely describe any revolute p. a`7 Coo Q^. cri) which determine the structure of the link and the joint parameters (di. ai. The yi axis completes the right-handed coordinate system as required.. Thus. zi)t can be established for each link at its joint axis. ai is the offset angle from the zi -I axis to the zi axis about the xi axis (using the right-hand rule). ai 0 d. 
Referring to Fig. 2.10, these four parameters are defined as follows:

theta_i is the joint angle from the x_{i-1} axis to the x_i axis about the z_{i-1} axis (using the right-hand rule).

d_i is the distance from the origin of the (i-1)th coordinate frame to the intersection of the z_{i-1} axis with the x_i axis along the z_{i-1} axis.

a_i is the offset distance from the intersection of the z_{i-1} axis with the x_i axis to the origin of the ith frame along the x_i axis (or the shortest distance between the z_{i-1} and z_i axes).

alpha_i is the offset angle from the z_{i-1} axis to the z_i axis about the x_i axis (using the right-hand rule).

PUMA robot arm link coordinate parameters

Joint i   theta_i   alpha_i   a_i         d_i         Joint range
1            90       -90     0           0           -160 to +160
2             0         0     431.8 mm    149.09 mm   -225 to 45
3            90        90     -20.32 mm   0           -45 to 225
4             0       -90     0           433.07 mm   -110 to 170
5             0        90     0           0           -100 to 100
6             0         0     0           56.25 mm    -266 to 266

Figure 2.11 Establishing link coordinate systems for a PUMA robot.
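If these constants are to be used in software, the table can be carried as a small data structure. The sketch below (plain Python; the list name and layout are my own, and the numbers are simply transcribed from the table above, so they should be checked against the arm at hand) lists one row of constants per joint, with angles in degrees and lengths in millimetres.

    # (theta_i, alpha_i, a_i, d_i) for each joint of the PUMA arm, transcribed from the
    # table above. theta_i is the joint variable for this all-revolute arm, so the values
    # given here are only the zero-position offsets implied by the frame assignment of Fig. 2.11.
    PUMA_DH = [
        (90.0, -90.0,    0.00,    0.00),   # joint 1
        ( 0.0,   0.0,  431.80,  149.09),   # joint 2
        (90.0,  90.0,  -20.32,    0.00),   # joint 3
        ( 0.0, -90.0,    0.00,  433.07),   # joint 4
        ( 0.0,  90.0,    0.00,    0.00),   # joint 5
        ( 0.0,   0.0,    0.00,   56.25),   # joint 6
    ]

    for i, (theta, alpha, a, d) in enumerate(PUMA_DH, start=1):
        print(f"joint {i}: theta0={theta:6.1f} deg  alpha={alpha:6.1f} deg  "
              f"a={a:7.2f} mm  d={d:7.2f} mm")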
For a rotary joint, d_i, a_i, and alpha_i are the joint parameters and remain constant for a robot, while theta_i is the joint variable that changes when link i moves (or rotates) with respect to link i-1. For a prismatic joint, theta_i, a_i, and alpha_i are the joint parameters and remain constant, while d_i is the joint variable. For the remainder of this book, joint variable refers to theta_i (or d_i), that is, the varying quantity, and joint parameters refer to the remaining three geometric constant values: (d_i, a_i, alpha_i) for a rotary joint, or (theta_i, a_i, alpha_i) for a prismatic joint.

With the above three basic rules for establishing an orthonormal coordinate system for each link and the geometric interpretation of the joint and link parameters, a procedure for establishing consistent orthonormal coordinate systems for a robot is outlined in Algorithm 2.1. Examples of applying this algorithm to a six-axis PUMA-like robot arm and a Stanford arm are given in Figs. 2.11 and 2.12, respectively.

Algorithm 2.1: Link Coordinate System Assignment. Given an n degree of freedom robot arm, this algorithm assigns an orthonormal coordinate system to each link of the robot arm according to arm configurations similar to those of human arm geometry. The labeling of the coordinate systems begins from the supporting base to the end-effector of the robot arm. (Note that the assignment of coordinate systems is not unique.)

D1. Establish the base coordinate system. Establish a right-handed orthonormal coordinate system (x0, y0, z0) at the supporting base with the z0 axis lying along the axis of motion of joint 1 and pointing toward the shoulder of the robot arm. The x0 and y0 axes can be conveniently established and are normal to the z0 axis.

D2. Initialize and loop. For each i, i = 1, ..., n-1, perform steps D3 to D6.

D3. Establish joint axis. Align the z_i axis with the axis of motion (rotary or sliding) of joint i+1. For robots having left-right arm configurations, the z1 and z2 axes are pointing away from the shoulder and the "trunk" of the robot arm. (The significance of this assignment is that it will aid the development of a consistent procedure for deriving the joint solution as discussed in the later sections.)

D4. Establish the origin of the ith coordinate system. Locate the origin of the ith coordinate system at the intersection of the z_i and z_{i-1} axes, or at the intersection of the common normal between the z_i and z_{i-1} axes and the z_i axis.

D5. Establish x_i axis. Establish x_i = +-(z_{i-1} x z_i)/||z_{i-1} x z_i||, or along the common normal between the z_{i-1} and z_i axes when they are parallel.

D6. Establish y_i axis. Assign y_i = +(z_i x x_i)/||z_i x x_i|| to complete the right-handed coordinate system. (Extend the z_i and x_i axes if necessary for steps D9 to D12.)

D7. Establish the hand coordinate system. Usually the nth joint is a rotary joint. Establish z_n along the direction of the z_{n-1} axis and pointing away from the robot. Establish x_n such that it is normal to both the z_{n-1} and z_n axes. Assign y_n to complete the right-handed coordinate system.

D8. Find joint and link parameters. For each i, i = 1, ..., n, perform steps D9 to D12.

D9. Find d_i. d_i is the distance from the origin of the (i-1)th coordinate system to the intersection of the z_{i-1} axis and the x_i axis along the z_{i-1} axis. It is the joint variable if joint i is prismatic.

D10. Find a_i. a_i is the distance from the intersection of the z_{i-1} axis and the x_i axis to the origin of the ith coordinate system along the x_i axis.

D11. Find theta_i. theta_i is the angle of rotation from the x_{i-1} axis to the x_i axis about the z_{i-1} axis. It is the joint variable if joint i is rotary.

D12. Find alpha_i. alpha_i is the angle of rotation from the z_{i-1} axis to the z_i axis about the x_i axis.

Stanford robot arm link coordinate parameters

Joint i   theta_i        alpha_i   a_i   d_i
1         theta_1 = -90    -90      0    d1
2         theta_2 = -90     90      0    d2
3         theta_3 = -90      0      0    d3
4         theta_4 = 0      -90      0    0
5         theta_5 = 0       90      0    0
6         theta_6 = 0        0      0    d6

Figure 2.12 Establishing link coordinate systems for a Stanford robot.
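The distinction drawn above between joint variables and joint parameters maps naturally onto code. The short sketch below (plain Python; the table values are hypothetical, chosen only for illustration, and the function name is my own) substitutes a vector of joint values into a list of constant D-H rows, writing into theta_i for a revolute joint and into d_i for a prismatic one.

    # Each row holds (joint_type, theta_i, d_i, a_i, alpha_i); angles in radians, lengths in metres.
    # For a revolute joint theta_i is the variable; for a prismatic joint d_i is the variable.
    DH_ROWS = [
        ("revolute",   0.0,    0.40, 0.0, -1.5708),   # hypothetical values, not the tables above
        ("revolute",   0.0,    0.15, 0.0,  1.5708),
        ("prismatic", -1.5708, 0.0,  0.0,  0.0),      # d_3 varies, theta_3 stays fixed
    ]

    def apply_joint_values(rows, q):
        """Merge joint values q into the constant D-H rows, returning (theta, d, a, alpha) per link."""
        merged = []
        for (kind, theta, d, a, alpha), qi in zip(rows, q):
            if kind == "revolute":
                merged.append((qi, d, a, alpha))      # theta_i is the joint variable
            else:
                merged.append((theta, qi, a, alpha))  # d_i is the joint variable
        return merged

    print(apply_joint_values(DH_ROWS, [0.3, -0.5, 0.25]))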
Translate along the xi axis a distance of ai to bring the two origins as well as the x axis into coincidence.sin 0 cos O '°_"i 0 0 1 0 0 0 1 1 0 1 0 0 1 ai 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 cos ai sin ai .1)th coordinate system as ri_ by performing the following successive transformations: I 1.I and xi axes into . VISION. pi = position vector which points from the origin of the base coordinate system to the origin of the ith coordinate system. 2. 0 (2.C . 2 . ljj-IAA for i = 1.2-32) Using the `-IAi matrix. These `.cos ai sin 0.1 by .11. M.d T.I . The six i-IAi transformation matrices for the six-axis PUMA robot arm have been found on the basis of the coordinate systems established in Fig. one can relate a point pi at rest in link i.2-33) where pi -I = (xi-I.cy= 0 0 0 0 and its inverse is cos 0i sin 0i 0 0 i.ate Pi-I = `-IAi pi S3. '>~ f~/) (2. cos ai cos 0i sin ai sin a.sin ai cos Oi 0 sin ai cos ai 0 cos ai sin 0i 0 . 2.13.ROBOT ARM KINEMATICS 41 cos 0i . yi-I. sin 01 0 0 (2. n °Pi 1 1=' xi Yi Zi Pi 1 °R.iA_] . .2-31) di 1 '-IAi sin 0i -sin ai cos 0i cos ai =TZ. zi. yi.IAi matrices are listed in Fig. 1) T.11 Kinematic Equations for Manipulators The homogeneous matrix °Ti which specifies the location of the ith coordinate frame with respect to the base coordinate system is the chain product of successive coordinate transformation matrices of i-IAi. 0 0 0 where [xi. and is expressed as OTi = OA A2 i-iAi = II. .. zi ] = orientation matrix of the ith coordinate system established at link i with respect to the base coordinate system.. It is the upper right 3 X 1 partitioned matrix of °Ti.di sin ai .di cos ai 1 (2.o TZ.2.cos ai sin 0i i-I - cos ai cos 0i . to the coordinate system i .2-34) i d^. 1)T and pi = (xi.1 established at link i . yi. It is the upper left 3 x 3 partitioned matrix of °Ti. zi-I. 2. and expressed in homogeneous coordinates with respect to coordinate system i.. cos a.S1 0 0 .C4 C5 S6 . SENSING.. sin 8. 0 0 C.S4 a3 C3 a3 S3 0 2A3 = S3 .S5 C6 0 U+" C5 d6 C5 + d4 1 0 0 where C.S4 C6 . cos 8.S2 N(' C 0 0 1 a2 C2 a2 S2 C1 C2 °Al = 0 0 C3 -1 0 0 0 1 0 0 S3 0 1 1A2 = 0 0 0 0 d2 1 L 0 C4 S4 0 . III Specifically. AND INTELLIGENCE F cos 8.S6 0 0 0 0 1 0 0 0 1 0 0 0 1 C6 S6 0 0 d6 1 a . sin 8. 0 C2 S2 0 0 . T = °A6. Cii = cos (8.42 ROBOTICS: CONTROL.S23 0 0 C23 . S11 = sin (8.S4 C5 S6 + C4 C6 S5 S6 C4 S5 C/] d6 C4 S5 d6 S4 S5 T2 = 3 A4 4 A5 5A6 S4 C5 C6 + C4 S6 S4 S5 = . Si = sin 8. = cos 8. sin 8." Consider the T matrix to be of the form . we obtain the T matrix. cos a. . VISION.d2 S1 a2 SI C2 + a3 S1 C23 + d2 C1 S. for i = 6. 1A2 2A3 = . cos 8. a.C5 0 0 C1 C23 C6 A5 = 0 SA6 = 0 0 0 0 0 0 0 -S1 C1 Cl S23 S1 S23 a2 CI C2 + a3 Cl C23 . Sl cos a. a. = °A. 0 sin a.a2 S2 . C23 T. + 8j). sin 8. Figure 2. 1 . which specifies the position and orientation of the endpoint of the manipulator with respect to the base coordinate system. This T matrix is used so frequently in robot arm kinematics that it is called the "arm matrix.13 PUMA link coordinate transformation matrices. sin a. d.r+ III .sin a1 cos 8.S4 S6 ..a3 S23 1 0 0 C4 C5 C6 .C3 0 0 S5 0 C4 0 d4 1 0 0 C5 S5 0 1 3A4 = 0 -1 0 . + 8j). a. It is pointing in the direction normal to the palm of the hand (i... normal to the tool mounting plate of the arm). and B = refA0.14) 0 0 n = normal vector of the hand. it is orthogonal to the fingers of the robot arm.ROBOT ARM KINEMATICS 43 Figure 2. s. a]. It is pointing in the direction of the finger motion as the gripper opens and closes.. 
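The chain product of link transformation matrices described above is straightforward to evaluate numerically. The sketch below (Python with NumPy; the function names and the two-link test case are my own, not the book's) builds each link matrix from one row of D-H parameters as the product of the four elementary operations just listed, multiplies the chain together, and checks the result on a hypothetical planar two-link arm whose hand position is known in closed form.

    import numpy as np

    def dh_transform(theta, d, a, alpha):
        """One 4 x 4 link transform, i.e. Rz(theta) Tz(d) Tx(a) Rx(alpha)."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([[ct, -ca * st,  sa * st, a * ct],
                         [st,  ca * ct, -sa * ct, a * st],
                         [0.0,      sa,       ca,      d],
                         [0.0,     0.0,      0.0,    1.0]])

    def forward_kinematics(rows):
        """Chain product of the link transforms over a list of (theta, d, a, alpha) rows."""
        T = np.eye(4)
        for row in rows:
            T = T @ dh_transform(*row)
        return T

    # Sanity check on a hypothetical planar two-link arm (a1 = a2 = 1, all d and alpha zero):
    # the hand must end up at (c1 + c12, s1 + s12, 0).
    t1, t2 = 0.4, 0.9
    T = forward_kinematics([(t1, 0.0, 1.0, 0.0), (t2, 0.0, 1.0, 0.0)])
    assert np.allclose(T[:3, 3], [np.cos(t1) + np.cos(t1 + t2),
                                  np.sin(t1) + np.sin(t1 + t2), 0.0])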
then the endpoint of the tool can be related to the reference coordinate frame by multiplying the matrices B. p = position vector of the hand. s = sliding vector of the hand. If the manipulator is related to a reference coordinate frame by a transformation B and has a tool attached to its last joint's mounting plate described by H. (2.2-35) 0 where (see Fig.2-36) .+ refTcool = B °T6 H (1U Note that H = 6At0. and H together as s.14 Hand coordinate system and [n.e. a = approach vector of the hand. 0r. which is usually located at the center point of the fully closed fingers. °T6. It points from the origin of the base coordinate system to the origin of the hand coordinate system.. arc T = X6 Y6 Z6 P6 1 °R6 0 °P6 1 n s a 0 p 1 0 nx I 0 sx 0 ax 10 0 Px ny nz sy s2 ay az Py Pz 1 (2. 2. Assuming a parallel jaw hand. . we express the elements of T.S4 S6 S4 C5 C6 + C4 S6 . + Of).S23 . SENSING. The most efficient method is by multiplying all six '. Az 2AS A3 I CI C23 SI C23 . which is a fairly straightforward task. Note that the direct kinematics solution yields a unique T matrix for a given q = (q.S4 C5 S6 + C4 C6 S5 S6 C4 S5 S4 S5 d6 C4 S5 d6 S4 S5 . = 8. S23 C23 .- i. for a robot arm. 2. enough).'A. is found from Fig. T. for a rotary joint and q. Having obtained all the coordinate transformation matrices '-1A.S arm matrix T = T.. and T2 out in a computer program explicitly and let the computer multiply them together to form the resultant '?» ... a2 C. . 2.2-38) where C.j = cos (O + O) and S. consists mostly of zero elements.1A. matrices and evaluating each element in the T matrix. . . matrices together to form T. simply a matter of calculating T = °A6 by chain multiplying the six '-IA. 2.. The disadvantages of this method are (1) it is laborious to multiply all six '-1A. f3.CONTROL.11. for a prismatic a.C4 C5 S6 .S4 C6 . T2.a3 S23 1 1 . C23 + d2 C. q2 . C2 + a3 C.. where q.SI C. joint. and (2) the arm matrix is applicable only to a particular robot for a specific set of coordinate systems (it is not flexible '_» CAD aCD f3. a2 S. the next task is to find an efficient method to compute T on a general purpose digital computer.'A. (2 2-37) . The table in Fig. The only constraints are the physical bounds of B. matrices and let the computer do the multiplication. matrices together to form T2 = 3A4 4A5 5A6. q6 )T and a given set of coordinate systems.a2 S2 . Then. A method that has both fast computation and flexibility is to "hand" multiply the first three '. AND INTELLIGENCE The direct kinematics solution of a six-link manipulator is. 0 0 0 and the T2 matrix is found to be T2 = 3A6 = 3A44A55A6 C4 C5 C6 . one can input all six '. = d. . .11 lists the joint constraints of a PUMA 560 series robot based on the coordinate system assigned in Fig. VISION. C23 . This method is very flexible but at the expense of computation time as the fourth row of '.d2 S.IA. 'A2 2A3 and also the last three '. 0 CI S23 S. for each joint of the robot arm. therefore.44 ROBOTICS.j = sin (B.t r-. C2 + a3 S.'AI matrices together manually and evaluating the elements of T matrix out explicitly on a computer program. matrices together. = °A.13 to be TI = OA3 = A. On the other extreme. For a PUMA 560 series robot.. coo p..S5 C6 L C5 d6 C5 + d4 1 0 0 0 (2. ROBOT ARM KINEMATICS 45 The arm matrix T for the PUMA robot arm shown in Fig.Sl (S4 C5 C6 + C4 S6 ) n y = SI [C23 (C4 C5 C6 .Sl (. if we combine d6 with the tool .C23 (C4 C5 S6 + S4 C6) + S23 S5 S6 1 . the arm matrix T requires 12 transcendental function calls. 
(2.11 is found to be Px I Py T = T1 T2 = °A1 1A22A33A44A55A6 = Pz 1 (2.12 T = 0 -1 0 0 0 20.2-40) through (2.11.2-39) where nx = Cl [C23 (C4 C5 C6 . 2.S4 S6) .a2 S2 (2.2-43) As a check. 2.05 =0'.S23 C4 S5) + C23 d4 .09 921.SI S4 S5 a y = S1 (C23 C4 S5 + S23 C5) + C1 S4 S5 (2.03 =90'.a3 S23 .S1 (d6 S4 S5 + d2 ) Py = SI [ d6 (C23 C4 S5 + S23 C5) + S23 d4 + a3 C23 + a2 C2 ] + C 1 (d6 S4 S5 + d2 ) Pz = d6 (C23 C5 .S23 [ C4 C5 C6 .2-43).2-40) Sx = Cl [ .S23 C4 S5 + C23 C5 P.2-41) ax = Cl (C23 C4 S5 + S23 C5) .32 1 0 which agrees with the coordinate systems established in Fig.S23 S5 C6 ] + Cl (S4 C5 C6 + C4 S6) nz = .04 = 00. = C1 [ d6 (C23 C4 S5 + S23 C5) + S23 d4 + a3 C23 + a2 C2 I .62 =0'. 40 multiplications and 20 additions if we only compute the upper right 3 x 3 submatrix of T and the normal vector n is found from the cross-product of (n = s x a).C23 (C4 C5 S6 + S4 C6) + S23 S5 S6 ] + CI ( -S4 C5 S6 + C4 C6 ) Sz = S23 (C4 C5 S6 + S4 C6) + C23 S5 S6 (2.S4 S6) .S4 S6 I .C23 S5 C6 (2. if 01 =90'.S4 C5 S6 + C4 C6 ) S Y = S1 [ .06 =0'.S23 S5 C6 I .2-42) az = . then the T matrix is 0 -1 0 0 0 1 -149. From Eqs. Furthermore. If the origin of the base coordinate system as seen by the camera can also be expressed by a homogeneous transformation matrix T. and 16 additions. The camera can see the origin of the base coordinate system where a six joint robot is attached.r. If a local coordinate system has been established at the center of the cube. AND INTELLIGENCE length of the terminal device. This reduces the computation to 12 transcendental function calls. this object as seen by the camera can be represented by a homogeneous transformation matrix T1.. 35 multiplications. VISION. Example: A robot work station has been set up with a TV camera (see the figure). s.O 0 0 1 1 0 0 -10 20 10 1 0 10 0 -1 0 0 0 -1 0 T2 = 9 1 0 0 -1 0 . It can also see the center of an object (assumed to be a cube) to be manipulated by the robot. 0 1 1 T1 = 0 0 0 0 (a) What is the position of the center of the cube with respect to the base coordinate system? (b) Assume that the cube is within the arm's reach. then d6 = 0 and the new tool length will be increased by d6 unit. a] if you want the gripper (or finger) of the hand to be aligned with the y axis of the object and at the same time pick up the object from the top? CAD . What is the orientation matrix [n. SENSING.46 ROBOTICS: CONTROL. and . we make use of n 0 s a 0 p 1 0 where p = (11.e. x. y. we want the approach vector a to align with the negative direction of the OZ axis of the base coordinate system [i. Its x.y. 10. 10. -1)T]. we obtain the resultant transformation matrix: 1 0 0 10 baser cube - - 0 1 1 0 0 1 0 0 0 0 -1 0 0 20 10 1 0 0 10 9 1 -1 0 0 0 1 0 -1 0 0 1 0 0 11 -1 0 0 10 1 1 0 0 0 0 .e. s. a]. and z axes are parallel to the . (2. 0.-v Therefore. 1)7'from the above solution. s = (± 1. To find [ n.ROBOT ARM KINEMATICS 47 SOLUTION: 0 cameral cube 1 0 1 =T= I 1 0 0 0 10 9 1 0 -1 0 0 and 1 0 0 0 0 -10 20 10 1 camera lbase = T2 = - 0 0 -1 0 0 -1 0 0 To find basercube. and z axes of the base coordinate system.2-27) to invert the T2 matrix. 1)T from the base coordinate system.. the cube is at location (11. From the above figure. 0)T]. we use the "chain product" rule: basercube = basercamera camerarcube = (T2)-' TI Using Eq. a = (0. 0 .. 0. the s vector can be aligned in either direction of the y axis of base Tcabe [i. respectively. 2-17).12 Other Specifications of the Location of the End-Effector In previous sections. 
and Yaw Representation for Orientation.s.a] _ +1 0 0 0 -1 0 0 0 0 -1 -1 2. and yaw (RPY).COW .1' + Cq CBC.48 ROBOTICS. one can construct the arm matrix °T6 by Eq.4 . This rotation submatrix is equivalent to °R6.CcSB CO Py PZ 1 ISBC. this matrix representation for rotation of a rigid body simplifies many operations. Pitch. As indicated in Sec. Euler Angle Representation for Orientation.b SoSB PX °T6 - Sg5C. From this vector. we analyzed the translations and rotations of rigid bodies (or links) and introduced the homogeneous transformation matrix for describing the position and orientation of a link coordinate frame. Using the rotation matrix with eulerian angle representation as in Eq. Of particular interest is the arm matrix °T6 which describes the position and orientation of the hand with respect to the base coordinate frame.4.2-44) 0 Another advantage of using Euler angle representation for the orientation is that the storage for the position and orientation of an object is reduced to a six-element vector XYZZO '.2-44). but it does not lead directly to a complete set of generalized coordinates.1' . the orientation matrix [n. The upper left 3 x 3 submatrix of °T6 describes the orientation of the hand. There are other specifications which can be used to describe the location of the end-effector. and ). Roll. AND INTELLIGENCE and the n vector can be obtained from the cross product of s and a: i j 0 0 k 0 0 n = t1 0 t1 0 -1 Therefore. 2. the arm matrix °T6 can be expressed as: CcC.S4S. Another set of Euler angle representation for rotation is roll.ScpCBS>li a-1 . VISION. s.SgCOC.2. SENSING. using Eq. Such a set of generalized coordinates can be provided by three Euler angles (0. i + COCBSO . (2. (2. . pitch. B.2. a] is found to be 0 1 01 0 or [n. Again. CONTROL.' 0 0 (2. 1. the position of the end-effector can be specified by the following translations/rotations (see Fig. pz )T in other coordinates such as cylindrical or spherical.2-46) 0 0 0 1 where °R6 = rotation matrix expressed in either Euler angles or [n. Cylindrical Coordinates for Positioning Subassembly. and yaw. there are different types of robot arms according to their joint motion (XYZ.. 2. p y. The resultant arm transformation matrix can be obtained by 1 0 1 0 0 1 PX °R6 0 0 0 °T6 0 0 0 0 Py Pz 1 0 0 (2. pitch. one can specify the position of the hand (pX. A translation of r unit along the OX axis (TX. pitch.2-19). a] or roll. cylindrical. s.15): 1.2-45) As discussed in Chap.scC>G °T6 = sgCO -so 0 SOSM + c/c> cost 0 Ccbsoco + SOW sc/sOC.. A rotation of a angle about the OZ axis (T. and yaw can be used to obtain the arm matrix °T6 as: Coco WOW .r) 2. a Figure 2. Thus. and articulated arm)..COO PX Py Pz 1 Coc I 0 (2. spherical.) 3. .15 Cylindrical coordinate system representation. A translation of d unit along the OZ axis (Tz d) 4.ROBOT ARM KINEMATICS 49 (2..c . Iri a cylindrical coordinate representation. the rotation matrix representing roll. This involves the following translations/rotations (see Fig. VISION.CO N y Figure 2. A translation of r unit along the OZ axis (Ti. We can also utilize the spherical coordinate system for specifying the position of the end-effector. c..50 ROBOTICS: CONTROL. AND INTELLIGENCE The homogeneous transformation matrix that represents the above operations can be expressed as: 1 0 1 0 0 1 0 0 Ca . x 0 0 0 0 1 (2. Spherical Coordinates for Positioning Subassembly. pz = d. A rotation of a angle about the OZ axis (TZ. A rotation of 0 angle about the OY axis (T y. r) 2. 
d Tz.2-47) 0 0 Since we are only interested in the position vectors (i.2-46).Sa Sa 0 0 0 0 r-+ 0 0 0 1 Ca 0 0 rCa rSa d 1 'ZS '--. . 2.16): --h a20 III 1. the arm matrix OT6 can be obtained utilizing Eq.e. py = rSa. p) 3. 0 °T6 0 °R6 0 0 (2. (2.16 Spherical coordinate system representation.Sa Sa 0 0 0 0 0 0 1 0 Ca 0 0 0 1 Tcylindrical = Tz. Tx. r= 0 0 0 0 d 1 0 0 1 0 1 0 0 1 r Ca . the fourth column of Tcylindrical). SENSING.2-48) 0 0 1 and px = rCa. «) . r = . pitch. s.X. pZ )T.Tposition Trot . our interest is the position vector with respect to the base coordinate system. Pz = rC/3. 0. a Ry. a ] or R06 . p. therefore.rSaS(3. The result of the above discussion is tabulated in Table 2. a] or Euler angles or roll. the position vector can be expressed in cartesian (ps..2 Various positioning/orientation representations Positioning Orientation III . and yaw). 0 Tz.. rCf3)T Cartesian [n.Sa Ca 0 0 CaS(3 rCaSf3 rSaS(3 x 0 0 SaC(3 SaS0 0 0 -so 0 'C3 co 0 rC0 1 (2. rC0 )T terms. a] Euler angles (0. pitch. (rCa. 1G).2-49) 0 0 1 Again. = rCaSf3. we have cartesian [n. s. In summary. For positioning. rSaSf. Ca Sa 0 0 . or spherical (rCaS/3. cylindrical -'. pZ)T Cylindrical (rCa..2-50) 0 0 0 where p. pitch. rsa. rSaS(3. Tposition - yr 0 0 1'. Table 2.".Sa Ca 0 0 Cu CQ 0 0 0 0 1 Co 0 0 1 S3 0 0 0 1 0 0 -so 0 co 0 0 0 1 0 1 0 1 0 0 1 0 0 r '-' . III Tsph = Tz. Euler angles (0. and yaw 0 [n. the arm matrix °T6 whose position vector is expressed in spherical coordinates and the orientation matrix is expressed in [n. d)T Spherical (rCaS/3.ROBOT ARM KINEMATICS 51 The transformation matrix for the above operations is r-.a_ Cartesian (pr. there are several methods (or coordinate systems) that one can III choose to describe the position and orientation of the end-effector. and yaw can be obtained by: 1 0 1 0 0 1 rCaS(3 rSaS(3 rC(3 1 °T6 = 0 0 0 III °R6 (2. p y. d)T. rSa. a]. For describing the orientation of the end-effector with respect to the base coordinate system. and (roll. Trot = 0 0 0 'T6 . Roll. py . s. s. 0.2. The 0°\o 'ti . Paul et al. two others so that closed loops are not formed. with the first link connected to a supporting base and the last link containing the terminal device (or tool).3 THE INVERSE KINEMATICS PROBLEM This section addresses the second problem of robot arm kinematics: the inverse kinematics or arm solution for a six-joint manipulator. it suffers from the fact that the solution does not give a clear indication on how to select an appropriate solution from the several possible solutions for a particular arm configuration. [1964]). the PUMA robot arm may be classified as 6R and the Stanford arm as 2R-P-3R. Hence. q4. we would like to find the corresponding joint angles q = (qI.13 Classification of Manipulators A manipulator consists of a group of rigid bodies or links. and closed form solution for the remaining unknowns. q5. the inverse kinematics solution is more important. A revolute joint only permits rotation about an axis. desired..2. >. [1981]). cue T. a manipulator. at most. AND INTELLIGENCE 2. where R is a revolute joint and P is a prismatic joint.. mot.. In order to control the position 44- and orientation of the end-effector of a robot to reach its object. while the prismatic joint allows sliding along an axis with no rotation (sliding with rotation is called a screw joint). The solution can be expressed as a fourth-degree polynomial in one unknown. dual matrices (Denavit [1956]). CAD In general. Although the resulting solution is correct. q3. 
dual quaternian (Yang and Freudenstein [1964]). q2. VISION. screw algebra (Kohli and Soni [1975]). With this convention. With this restriction." may be classified by the type of joints and their order (from the base to the hand). with the first link connected to ground and the last link containing the "hand. iterative (Uicker et al. given the position and orientation of the end-effector of a six-axis robot arm as °T6 and its joint and link parameters. such as inverse transform (Paul et al. each link is connected to. These links are connected and powered in such a way that they are forced to move relative to one another in order to position the end-effector (a hand or tool) in a particular position and orientation. two types of joints are of interest: revolute (or rotary) and prismatic. the inverse kinematics problem can be solved by various methods. Computer-based robots are usually servoed in the joint-variable space. We made the assumption that the connection between links (the joints) have only 1 degree of freedom. In other words. q6 )T of the robot so that the end-effector can be positioned as . [1981] presented an inverse transform technique using the 4 x 4 homogeneous transformation matrices in solving the kinematics solution for the same class of simple manipulators as discussed by Pieper..' 2. In addition. SENSING. Pieper [1968] presented the kinematics solution for any 6 degree of freedom manipulator which has revolute or prismatic pairs for the first three joints and the joint axes of the last three joints intersect at a point.3 . considered to be a combination of links and joints. whereas objects to be manipulated are usually expressed in the world coordinate system. and geometric approaches (Lee and Ziegler [1984]).52 ROBOTICS: CONTROL. o ^-t °. For example. and given nx sx ax ny sy ay nz sz az = Rz. We shall discuss Pieper's approach in solving the inverse solution for Euler angles. Since we have more equations than unknowns. for a PUMA robot arm.ROBOT ARM KINEMATICS 53 user often needs to rely on his or her intuition to pick the right answer.1 Inverse Transform Technique for Euler Angles Solution In this section. Three adjacent joint axes parallel to one another Both PUMA and Stanford robot arms satisfy the first condition while ASEA and MINIMOVER robot arms satisfy the second condition for finding the closed-form 4'b solution.r L:. which can also be used to find the joint solution of a PUMA-like robot arm. 2. and a geometric approach which provides more insight into solving simple manipulators with rotary joints. especially in the singular and degenerate cases.2-40) to (2.t tunately. Since the 3 x 3 rotation matrix can be expressed in terms of the Euler angles (gyp. we have the arm transformation matrix given as r T6 = nr s.. From Eq. ) as in Eq.2-17). 06. (2. .. We shall explore two methods for finding the inverse solution: inverse transform technique for finding Euler angles solution..2-39). equating the elements of the matrix equations as in Eqs. Furthermore. we shall show the basic concept of the inverse transform technique by applying it to solve for the Euler angles. we have twelve equations with six unknowns (joint angles) and these equations involve complex trigonometric functions. ny nz sY ay az sz Pz 1 = °AI'A22A33A44A55A6 (2.r ax Px py.3. 01 . as with the inverse transform technique. most of the commercial robots have either one of the following sufficient conditions which make the closed-form arm solution possible: 1. 
one can immediately conclude that multiple solutions exist for a PUMA-like robot arm. (2. there ova 0C' is no indication on how to choose the correct solution for a particular arm configuration.2-43). 0.3-1) 0 0 0 The above equation indicates that the arm matrix T is a function of sine and cosine of 0 1 . Three adjacent joint axes intersecting 2. [1964] and Milenkovic and Huang [19831 presented iterative solutions for most industrial robots. 4 R11. . It is desirable to find a closed-form arm solution for manipulators. (2. Uicker et al. . For- . The iterative solution often requires more computation and it does not guarantee convergence to the correct solution. C4SO co Sos. 0. therefore. 0 = 0° or 0 r t 180°. SENSING.3-3c) (2. AND INTELLIGENCE CoC. We must.54 ROBOTICS: CONTROL. (2.3-3d) (2.3-3e) sx = .3-3g) ay = -CgSO (2.SgCOCo Sy = -SSA + CoCBCb sz = SOC>G (2.3-3h) az = co is: .0). The arc cosine function does not behave well as its accuracy in determining the angle is dependent on the angle. + COCM O . When sin (0) approaches zero.b .3-3h).o . cos (0) = cos (. and (2. (2.3-4) -11 r >/i = cos' sz (2.SCOC.i -SOW + CgCOC%1i SBC>/i Sso . Equating the elements of (2.3-3a) ny = SCE + WOW nZ = Sos> (2. 2. the above matrix equation.3-3f) aX = SOSO (2.3-6) S oy The above solution is inconsistent and ill-conditioned because: 1. a solution to the above nine equations 0 = cos-I (az) (2. r-..3-3i) Using Eqs. (2.SOCOSIi ScbC>1. (2. we have: nX = CcOCi.ScpCOS> > . find a more consistent approach to determining the Euler angle solution and a more consistent arc trigonometric function in evaluating the s. That is.3-3b) (2.3-6) give inaccurate solutions or are undefined.fl (2.3-5) N So (2. VISION. .3-5) and (2. Eqs. that is.3-3f).3-3i).COOS.CcSIP .3-2) we would like to find the corresponding value of c1. . n. ar ay aZ 1 0 0 . In order to evaluate 0 for .. then move the next unknown to the LHS. and repeat the process until all the unknowns are solved. atan2 (y. Premultiplying the above matrix equation by RZj1. -so COco 0 -SOs.SO COD C>G . [1981]. we have: C4a.y for +x and . + S0a y = 0 which gives (2.se co (2.ROBOT ARM KINEMATICS 55 angle solution.G 0 0 1 . >G.90 ° < 0 < .C + Ccay Co cOS.3-9) 0 = tan- aX = atan2(a. >') on the RHS of the matrix equation. + Ccbny nZ C4sr + Scs>.S>G C. Cca.0 ° for +x and + y for -x and + y 0 0 for -x and . thus we have Co so 0 0 n..3-8). (2. an arc tangent function. which returns tan-'(y/x) adjusted to the proper quadrant will be used. .3-10) .3-7) Using the arc tangent function (atan2) with two arguments. co 0 0 COnr + Siny -SOn. + Cosy sZ . 3) elements of both matrices in Eq. we move one unknown (by its inverse transform) from the RHS of the matrix equation to the LHS and solve for the unknown.. x) _ 0 18090-180 ° 0 -90' . -Scba.. we have one unknown on the LHS and two unknowns (0. we shall take a look at a general solution proposed by Paul et al. (2. s. That is.li SOS 1.So 0 or Co 0 1 n}. From the matrix equation in Eq.C ss.y (2. 0. the elements of the matrix on the left hand side (LHS) of the matrix equation are given.7r < 0 < 7r. Paul et al. while the elements of the three matrices on the right-hand side (RHS) are unknown and they are dependent on 0. [1981] suggest premultiplying the above matrix equation by its unknown inverse transforms successively and from the elements of the resultant matrix equation determine the unknown angle. sy 0 0 co S0 Si. + Scba.3-8) al sOCi Equating the (1.ay) (2. x). It is defined as: 0° 0 < 90 ° 0 = atan2 (y.3-2). S>G Cq s. SENSING. 
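Before working through the full joint solution, it helps to see the angle-extraction step in code. The sketch below (Python with NumPy; the convention, tolerance, and function name are my own choices and follow the Z-X-Z eulerian angle system of Eq. (2.2-17), not the book's complete PUMA derivation) recovers (phi, theta, psi) from a rotation matrix using the two-argument arc tangent, which keeps each angle in the proper quadrant, and makes the two-fold multiplicity explicit by selecting the branch with sin(theta) >= 0.

    import numpy as np

    def rot_x(t):
        return np.array([[1.0, 0.0, 0.0],
                         [0.0, np.cos(t), -np.sin(t)],
                         [0.0, np.sin(t),  np.cos(t)]])

    def rot_z(t):
        return np.array([[np.cos(t), -np.sin(t), 0.0],
                         [np.sin(t),  np.cos(t), 0.0],
                         [0.0, 0.0, 1.0]])

    def euler_zxz_from_matrix(R):
        """Recover (phi, theta, psi) with R = Rz(phi) Rx(theta) Rz(psi), using atan2 throughout.
        For this convention R[0,2] = S(phi)S(theta), R[1,2] = -C(phi)S(theta), R[2,2] = C(theta),
        R[2,0] = S(theta)S(psi) and R[2,1] = S(theta)C(psi)."""
        s_theta = np.hypot(R[0, 2], R[1, 2])            # choose the sin(theta) >= 0 branch
        theta = np.arctan2(s_theta, R[2, 2])
        if s_theta < 1e-9:                              # degenerate case: phi and psi share one axis
            return np.arctan2(-R[0, 1], R[0, 0]), theta, 0.0
        phi = np.arctan2(R[0, 2], -R[1, 2])
        psi = np.arctan2(R[2, 0], R[2, 1])
        return phi, theta, psi

    # Round trip: build a matrix from known angles and recover them.
    angles = (0.7, 1.2, -2.1)
    R = rot_z(angles[0]) @ rot_x(angles[1]) @ rot_z(angles[2])
    assert np.allclose(euler_zxz_from_matrix(R), angles)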
+ Sony J = atan2 ( . az) J Q4- (2. 1) elements of both matrices in the above matrix equation.3-1 la) W which lead to the solution for >fi.S%1i n.. 3) elements of the both matrices.Ccba.S% + s. 0. 3) and (3.Ccba y..C0 ay az .S1i = 0 (2. COn.CCa y.3-14) Since the concept of inverse transform technique is to move one unknown to the LHS of the matrix equation at a time and solve for the unknown. we have: SO = Sg ax .s. 1 by postmultiplying the above matrix equation by its inverse transform R.3-13) which gives us the solution for 0. 1) and (1.S0 n.SOsy. CO = az (2.S6 co Multiplying the matrices out.) Equating the (2. we can try to solve the above matrix equation for cp. + Son.. AND INTELLIGENCE Equating the (1. . . . we have nzCVi .Scbsy 1 Con. 2) elements of the both matrices. we have: vac C1 = Cain.Cps..3-16) .. we have. VISION.56 ROBOTICS: CONTROL..s. + Scan. .So co 0 1 so 0 0 1 0 0 co S6 .3-15) Again equating the (3.3-12) (2.3-11b) CJ = tan-I r L . nxCi1i .Ci/i .syS'1G n. 1 Ci 0 CD' so 0 0 Co . 6=tan-1 8 = tan - Scbax .Cq s_.s.S1 + syC nzS' + s.CoS6 co (2. Co So 0 -SoC6 CoC6 S6 SoS6 nyc> .C0 a.v. (2.SOS" (2..So Co 0 0 1 0 0 .. a = atan2 (SOa. 18.17): O (orientation) is the angle formed from the yo axis to the projection of the tool a axis on the XY plane about the zo axis. Initially the tool coordinate system (or the hand coordinate system) is aligned with the base coordinate system of the robot as shown in Fig. PUMA robots use the symbols 0. when O = A = T = 0 °.3-17) Equating the (3. T to indicate the Euler angles and their definitions are given as follows (with reference to Fig. 2. T (tool) is the angle formed from the XY plane to the tool s axis about the a axis of the tool. 2) and (3. Let us apply this inverse transform technique to solve the Euler angles for a PUMA robot arm (OAT solution of a PUMA robot). A (altitude) is the angle formed from the XY plane to the tool a axis about the s axis of the tool. az) (2. 2. 0=tan nZS>/i + sZCt/i 1 ' = atan2 (nzSt' + szCtk. A.3-19) I. 3) elements of both matrices. we have: SO = nZSI/i + szCt/i (2.C1 - nxC/i - (2.3-18a) 00 CO = aZ (2.3-20b) which gives 0 = tan - I nyCo-syS nxCi/i .ROBOT ARM KINEMATICS 57 which gives 0 = tan-' = atan2 (nZ. sz) sz (2.sxSi/i J atan2 (n_. . az Equating the (1. 1) and (2.sxS>/i SO = n yC>/i .3-20a) (2. That is. we have CO = nxC 1. 1) elements of both matrices. the hand points in the negative yo axis with the fingers in a horizontal plane. and the s axis is pointing to the positive x0 axis. The necessary .3-21) Whether one should premuitiply or postmultiply a given matrix equation is up to the user's discretion and it depends on the intuition of the user.3-18b) which leads us to the solution for 0.s yS>G (2. SENSING.) transform that describes the orientation of the hand coordinate system (n. AND INTELLIGENCE 0. s. and T. zo) is given by 0 0 1 0 . (Taken from PUMA robot manual 398H. A.17 Definition of Euler angles 0. yo. VISION.3-22) 0 -1 0 0 . a measurement of the angle formed between the WORLD Y axis and a projection of the TOOL Z on the WORLD XY plane TOOL Z A.1 (2. a measurement of the angle formed between the TOOL Z and a plane parallel to the WORLD XY plane T. a measurement of the angle formed between the TOOL Y and a plane parallel to the WORLD XY plane Figure 2. a) with respect to the base coordinate system (xo.58 ROBOTICS.CONTROL. SO CO 0 0 0 0 1 0 t`' . T. CT ST 0 CO SO .ST CT 0 0 0 1 0 0 -1 0 0 1 0 -1 SA CA 0 1 x 0 0 CA . we have: nZST + sZCT = 0 (2.ST 0 ST 0 0 0 -1 0 CA CT 0 0 1 0 -1 0 . 
(2.sxST nXST + sXCT ax .syST nZCT .SA 0 Postmultiplying the above matrix equation by the inverse transform of Ra.COCA -CA -SA (2.322)]. the relationship between the hand transform and the OAT angle is given by nx sX ax 7 0 RZ.ROBOT ARM KINEMATICS 59 zo ai Yo xo Figure 2.sZST nyST + syCT nZST + s. T -1 0 0 1 0 r-.SA 0 and multiplying the matrices out.CT ay aZ SO 0 .18 Initial alignment of tool coordinate system. o 1 0 ny nZ sy sZ ay aZ j 0 0 -1 0 RS. 2) elements of the above matrix equation. we have: nXCT .3-23) Equating the (3. From the definition of the OAT angles and the initial alignment matrix [Eq.3-24) . 0 CO SO .SO CO 0 0 0 1 CA 0 0 1 SA CT .SOSA COSA CO SOCA nyCT . A Ra. 2. Based on the link coordinate systems and human arm N=- C]. 2. -nt) (2.CT + sZST then the above equations give A = tan . Details about the PUMA robot arm joint solution can be found in Paul et al. nXST + ssCT) (2. 1) and (3.3-26b) CA = -n.2. [1981]. we have: SA = -as and (2. This approach is presented in Sec. AND INTELLIGENCE which gives the solution of T. -nZCT + sZST) (2. -Q.ST + sECT = atan2 (nyST + syCT.3-27) Equating the (1. VISION.3-28a) (2.3.az -nzCT + sZST J = atan2 ( -as.3.3-29) The above premultiplying or postmultiplying of the unknown inverse transforms can also be applied to find the joint solution of a PUMA robot. The discussion focuses on a PUMA-like manipulator.2-39). n>. it does not give a clear indication on how to select an appropriate solution from the several possible solutions for a particular arm configuration. 3) elements of the both matrices.nZ = atan2 (s.I . .60 ROBOTICS: CONTROL.ST + syCT (2. a geometric approach is more useful in deriving a consistent joint-angle solution given the arm matrix as in Eq. SENSING. we have: CO = nXST + sCT SO = nyST + syCT which give the solution of 0. Thus. 2) and (2. This has to rely on the user's geometric intuition.2 A Geometric Approach This section presents a geometric approach to solving the inverse kinematics prob- lem of six-link manipulators with rotary joints.3-26a) (2.3-25) Equating the (3. and it provides a means for the user to select a unique solution for a particular arm configuration. T = tan-' S . Although the inverse transform technique provides a general approach in determining the joint solution of a manipulator.3-28b) n. 2) elements of the both matrices. (2. nx sx s>. and the joint-angle solution can be applied to °T6 as desired. the third indicator selects a solution from the possible two solutions for the last three joints. First. nZ ay az py (2. With appropriate modification and adjustment. The first two configuration indicators allow one to determine one solution from the possible four solutions for the first three joints. the orientation submatrices of °Ti and iii . 2. f3.I Ai (i = 4. plane. various arm configurations of a PUMA-like robot (Fig.. a position vector pointing from the shoulder to the wrist is derived. From the geometry. If we are given refT. and the projection of the link coordinate frames onto the xi_. and WRIST)-two associated with the solution of the first three joints and the other with the last three joints.ROBOT ARM KINEMATICS 61 geometry. ABOVE ARM (elbow above wrist): Position of the wrist of the RIGHT LEFT arm with respect to the shoulder coordinate system has 00. 6).. sistently. Similarly.. plane. respectively.11 (and other rotary robot arms). 
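The OAT derivation above reduces to three atan2 evaluations once the hand vectors n, s, a are known. The following fragment is a minimal sketch of that computation; the function name and the sample vectors are placeholders, and the sign conventions follow Eqs. (2.3-25), (2.3-28), and (2.3-29) as they are reconstructed above.

```python
import math

def oat_from_hand(n, s, a):
    """Sketch of the OAT solution: O, A, T (in degrees) from the hand
    orientation vectors n, s, a of the arm matrix T6."""
    # Eq. (2.3-25): T = atan2(sz, -nz)
    T = math.atan2(s[2], -n[2])
    sT, cT = math.sin(T), math.cos(T)
    # Eq. (2.3-28): A = atan2(-az, -nz*cos T + sz*sin T)
    A = math.atan2(-a[2], -n[2] * cT + s[2] * sT)
    # Eq. (2.3-29): O = atan2(ny*sin T + sy*cos T, nx*sin T + sx*cos T)
    O = math.atan2(n[1] * sT + s[1] * cT, n[0] * sT + s[0] * cT)
    return tuple(math.degrees(x) for x in (O, A, T))

# Illustrative orientation only: a right-handed (n, s, a) frame with a pointing down.
print(oat_from_hand(n=(0.0, 1.0, 0.0), s=(1.0, 0.0, 0.0), a=(0.0, 0.0, -1.0)))
```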
one can easily find the arm solution con- mss' 0._h As a verification of the joint solution.19) RIGHT (shoulder) ARM: Positive 02 moves the wrist in the positive z° direction while joint 3 is not activated. _°o 0R° y0. The arm configuration indicators are prespecified by a user for finding the inverse solution. then we can find °T6 by premultiplying and postmultiplying rerT.% px °T6 = T = B-. LEFT (shoulder) ARM: Positive 02 moves the wrist in the negative z° direction while joint 3 is not activated. 2. refTtoo1 H-.3-30) PZ 1 sz 0 0 0 Definition of Various Arm Configurations. . The solution is calculated in two stages.N .1 as (Fig.1.oo. The last three joints are solved using the calculated joint solution from the first three joints. there are four possible solutions to the first three joints and for each of these four solutions there are two possible solutions to the last three joints. 3) for the O<7 first three joints by looking at the projection of the position vector onto the xi _ I yi _. 0 v-. a. For a six-axis PUMA-like robot arm.oo. 5. ELBOW. this approach can be generalized to solve the inverse kinematics problem of most present day industrial robots with rotary joints. the arm configuration indicators can be determined from the corresponding decision equations which are functions of the joint angles. For the PUMA robot arm shown in Fig. by B-1 and H. = ny. ms`s-' row `.11) can be identified with the assistance of three configuration indicators (ARM. yi_. 2. This is used to derive the solution of each joint i (i = 1. various arm configurations are defined according to human arm geometry and the link coordinate systems which are established using Algorithm 2. 2. SENSING. y5. the user can define a "FLIP" toggle as: FLIP = Flip the wrist orientation Do not flip the wrist orientation BCD . WRIST DOWN: The s unit vector of the hand coordinate system and the y5 unit vector of the (x5. the third indicator (WRIST) gives one of the two possible joint solutions for the last three joints.3-31) -1 +1 LEFT arm ABOVE arm BELOW arm WRIST DOWN (2.19) defined by these two indicators. AND INTELLIGENCE negative positive coordinate value along the Y2 axis. WRIST UP: The s unit vector of the hand coordinate system and the y5 unit vector of the (x5. 2. These two indicators are combined to give one solution out of the possible four joint solutions for the first three joints. BELOW ARM (elbow below wrist): Position of the wrist of the I RIGHT LEFT positive negative arm with respect to the shoulder coordinate system has coordinate value along the Y2 axis.`. These three indicators can be defined as: ARM = ELBOW = +1 RIGHT arm (2. z5) coordinate system have a positive dot product. (Note that the definition of the arm configurations with respect to the link coordinate systems may have to be slightly modified if one uses different link coordinate systems.3-34) -1 The signed values of these indicators and the toggle are prespecified by a user for finding the inverse kinematics solution. z5) coordinate system have a negative dot product.) With respect to the above definition of various arm configurations. VISION. For each of the four arm configurations (Fig.3-33) (2 3-32) . y5. We shall later give the decision equations that determine these indicator bow .62 ROBOTICS: CONTROL. (2. -1 +1 WRIST = -1 +1 WRIST UP In addition to these indicators. These indicators can also be set from the knowledge of the joint angles of the robot arm using the corresponding decision equations. 
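The relation quoted above for referring the arm to an external coordinate frame, °T6 = B⁻¹ · refT6tool · H⁻¹, is two changes of coordinates and therefore a one-line computation. The sketch below assumes B (base frame expressed in the reference frame) and H (tool frame expressed in the hand frame) are known 4 × 4 homogeneous transforms; the numerical values are placeholders, not data from the text.

```python
import numpy as np

def arm_matrix_from_reference(ref_T_tool, B, H):
    """Strip the base offset B and the tool offset H from a tool pose expressed
    in the reference frame, leaving the arm matrix 0T6 (base to hand)."""
    return np.linalg.inv(B) @ ref_T_tool @ np.linalg.inv(H)

B = np.eye(4); B[0, 3] = 1.0          # base 1 m along the reference x axis (placeholder)
H = np.eye(4); H[2, 3] = 0.1          # tool 0.1 m along the hand z axis (placeholder)
ref_T_tool = np.eye(4); ref_T_tool[:3, 3] = [1.5, 0.2, 0.3]

print(arm_matrix_from_reference(ref_T_tool, B, H))
```

For homogeneous transforms, the closed-form inverse (transpose the rotation block, then rotate and negate the translation) can replace the general matrix inversion used in this sketch.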
two arm configuration indicators (ARM and ELBOW) are defined for each arm configuration. ROBOT ARM KINEMATICS 63 Right and below arm Figure 2.19 Definition of various arm configurations. values. The decision equations can be used as a verification of the inverse kinematics solution. Arm Solution for the First Three Joints. From the kinematics diagram of the PUMA robot arm in Fig. 2.11, we define a position vector p which points from the origin of the shoulder coordinate system (x°, y0, z°) to the point where the last three joint axes intersect as (see Fig. 2.14): P = P6 - d6a = (PX, py, Pz)T which corresponds to the position vector of °T4: PX (2.3-35) Cl (a2 C2 + a3 C23 + d4S23) - d2Sl py Pz = Sl(a2C2 +a3C23 +d4S23) +d2Cl d4 C23 - a3 S23 - a2 S2 (2.3-36) 64 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE Joint 1 solution. If we project the position vector p onto the x0 yo plane as in Fig. 2.20, we obtain the following equations for solving 01: of = IX 0 R = 7r + 0+ IX R (2.3-37) (2.3-38) (2.3-39) r= +p?-d; sin IX = R x0Y0 plane Yo d2 cosa - Rr (2.3-40) Inner cylinder with radius d2 OA = d2 AB=r= PX+Py-d2 OB = Px+PZ xii Figure 2.20 Solution for joint 1. ROBOT ARM KINEMATICS 65 where the superscripts L and R on joint angles indicate the LEFT/RIGHT arm configurations. From Eqs. (2.3-37) to (2.3-40), we obtain the sine and cosine functions of 0I for LEFT/RIGHT arm configurations: sin I = sin ( - a) = sin 0 cos a - cos 0 sin a = cos 0i = cos ( - a) = cos 0 cos a + sin 0 sin a = sin OR = sin (7r + d + a) _ -PyrR2 pd2 I pyd2 cos OR = cos (ir + 0 + a) _ -pxr + R2 p}r - pXdz R2 (2.3-41) pxr + pyd2 R2 (2.3-42) (2.3-43) (2.3-44) Combining Eqs. (2.3-41) to (2.3-44) and using the ARM indicator to indicate the LEFT/RIGHT arm configuration, we obtain the sine and cosine functions of 01, respectively: sin01 = - ARM pyVPX + Pv - d2 - Pxd2 Px + P2 - ARM px P? + p,2 - d22 + Pyd2 (2.3-45) cos 01 = (2.3-46) Px + P Y where the positive square root is taken in these equations and ARM is defined as in Eq. (2.3-31). In order to evaluate 01 for -7r s 01 5 ir, an arc tangent function as defined in Eq. (2.3-7) will be used. From Eqs. (2.3-45) and (2.3-46), and using Eq. (2.3-7), 01 is found to be: r sin 01 1 COS01 01 = tan-1 r = tan-1 - ARM Py px + p? - d2 - Pxd2 - ARM Px px2 + py - d2 + -7rz 01 <7r (2.3-47) Pyd2 Joint 2 solution. To find joint 2, we project the position vector p onto the x1 yt plane as shown in Fig. 2.21. From Fig. 2.21, we have four different arm configurations. Each arm configuration corresponds to different values of joint 2 as shown in Table 2.3, where 0 ° 5 a 5 360 ° and 0 ° < 0 5 90 °. 66 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE Table 2.3 Various arm configurations for joint 2 Arm configurations 02 ARM ELBOW +1 ARM ELBOW LEFT and ABOVE arm LEFT and BELOW arm RIGHT and ABOVE arm RIGHT and BELOW arm a-0 a+0 a + (3 -1 -1 +1 +1 -1 +1 +1 -1 +1 a- -1 -1 From the above table, 02 can be expressed in one equation for different arm and elbow configurations using the ARM and ELBOW indicators as: 02 = a + (ARM ELBOW)(3 = a + K 0 (2.3-48) where the combined arm configuration indicator K = ARM ELBOW will give an appropriate signed value and the "dot" represents a multiplication operation on the indicators. From the arm geometry in Fig. 2.21, we obtain: R=J sin a = +p2 +P2 -d2 _ PZ r= PZ p +py - d, (2.3-49) R px +plz+p -d2 r ARM (2.3-50) cosa = ARM R px + Pv - d, p2 dz 2 .N.. 
(2.3-51) pX +Py 2a2R cos 0 = a2 + R2 - (d4 + a3 ) (2.3-52) pX +py+pZ +a2-d2-(d2+a3) 2a2 VPX2 + P2 + p - d2 (2.3-53) sin (3 = COS2 From Eqs. (2.3-48) to (2.3-53), we can find the sine and cosine functions of 02: sin 02 = sin (a + K $) = sin a Cos (K 0) + cosa sin (K = sin a cos 0 + (ARM ELBOW) cosa sin l3 (2.3-54) cos0z = cos(a + cos a cos 0 - (ARM ELBOW) sin a sin (3 (2.3-55) C11 ROBOT ARM KINEMATICS 67 OA=d, EF=P, AB=a, EG=P, BC = a3 DE = P, CD = d4 AD=R= P2+P2+P2AE=r= C0 Figure 2.21 Solution for joint 2. From Eqs. (2.3-54) and (2.3-55), we obtain the solution for 02: 02 = tan-I L cOS 02 J (2.3-56) Joint 3 solution. For joint 3, we project the position vector p onto the x2y2 plane as shown in Fig. 2.22. From Fig. 2.22, we again have four different arm configurations. Each arm configuration corresponds to different values of joint 3 4-y as shown in Table 2.4, where (2p4)y is the y component of the position vector from the origin of (x2, y2, z2) to the point where the last three joint axes intersect. From the arm geometry in Fig. 2.22, we obtain the following equations for finding the solution for 03: R i+1 cos 0 _ sin PX +P2 +P2 -d2 a2 + (d4 + a3) - R2 2a2 (2.3-57) (2.3-58) d4 + a3 = ARM ELBOW d4 sin a = cos (3 = I a3 I (2.3-59) d4 + a3 d4 + a3 C/) F sin 02 1 ."3 - 7 < 02 < a 0.> 0 68 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE x3 03=4>-0 Left and below arm Left and below arm Left and above arm Figure 2.22 Solution for joint 3. From Table 2.4, we can express configurations: 03 in one equation for different arm 03 = 0 - a (2.3-60) From Eq. (2.3-60), the sine and cosine functions of 03 are, respectively, sin 03 = sin ( - i3) = sin ca cos 0 - cos ¢ sin Q (2.3-61) (2.3-62) cos 03 = cos (/ - (3) = cos 0 cos a + sin solution for 03: C17 sin R From Eqs. (2.3-61) and (2.3-62), and using Eqs. (2.3-57) to (2.3-59), we find the ROBOT ARM KINEMATICS 69 Table 2.4 Various arm configurations for joint 3 'LS Arm configurations (ZP4)y 03 ARM ELBOW ARM ELBOW LEFT and ABOVE arm LEFT and BELOW arm RIGHT and ABOVE arm RIGHT and BELOW arm a0 0 0-Q 0-a -/3 -e- -1 -1 +1 +1 +1 -1 +1 +1 03 = tan - I Arm Solution for the Last Three Joints. Knowing the first three joint angles, we can evaluate the °T3 matrix which is used extensively to find the solution of the last three joints. The solution of the last three joints of a PUMA robot arm can be found by setting these joints to meet the following criteria: 1. Set joint 4 such that a rotation about joint 5 will align the axis of motion of joint 6 with the given approach vector (a of T). 2. Set joint 5 to align the axis of motion of joint 6 with the approach vector. 3. Set joint 6 to align the given orientation vector (or sliding vector or Y6) and normal vector. Mathematically the above criteria respectively mean: Aviv/n\ -1 +1 0 0 -co 0-(3 -1 -1 r sin 03 1 COS 03 - 7r < 03 < it (2.3-63) t (z3 X a) Z4 = given a = (a,, ay, aZ)T 11 Z3 X a 1) given a = (a,, ay, aZ)T given s = (s, sy, sZ)T and n = (ni, ny, n,)T (2.3-64) a = z5 s = y6 (2.3-65) (2.3-66) In Eq. (2.3-64), the vector cross product may be taken to be positive or negative. As a result, there are two possible solutions for 04. If the vector cross product is zero (i.e., z3 is parallel to a), it indicates the degenerate case. This happens when the axes of rotation for joint 4 and joint 6 are parallel. It indicates that at this particular arm configuration, a five-axis robot arm rather than a six-axis one ..d would suffice. Joint 4 solution. 
Both orientations of the wrist (UP and DOWN) are defined by looking at the orientation of the hand coordinate frame (n, s, a) with respect to the (x5, y5, z5) coordinate frame. The sign of the vector cross product in Eq. (2.3-64) cannot be determined without referring to the orientation of either the n or s unit vector with respect to the x5 or y5 unit vector, respectively, which have a fixed relation with respect to the z4 unit vector from the assignment of the link coordinate frames. (From Fig. 2.11, we have the z4 unit vector pointing in the same direction as the y5 unit vector.)

We shall start with the assumption that the vector cross product in Eq. (2.3-64) has a positive sign. This can be indicated by an orientation indicator Ω which is defined as:

Ω = s · y5   if s · y5 ≠ 0
    n · y5   if s · y5 = 0
    0        if in the degenerate case                                        (2.3-67)

Since y5 = z4, and using Eq. (2.3-64), the orientation indicator Ω can be rewritten as:

Ω = s · (z3 × a)/‖z3 × a‖   if s · (z3 × a) ≠ 0
    n · (z3 × a)/‖z3 × a‖   if s · (z3 × a) = 0
    0                       if in the degenerate case                         (2.3-68)

If our assumption of the sign of the vector cross product in Eq. (2.3-64) is not correct, it will be corrected later using the combination of the WRIST indicator and the orientation indicator Ω. The Ω is used to indicate the initial orientation of the z4 unit vector (positive direction) from the link coordinate systems assignment, while the WRIST indicator specifies the user's preference of the orientation of the wrist subsystem according to the definition given in Eq. (2.3-33). If both the orientation indicator Ω and the WRIST indicator have the same sign, then the assumption of the sign of the vector cross product in Eq. (2.3-64) is correct. Various wrist orientations resulting from the combination of the various values of the WRIST and orientation indicators are tabulated in Table 2.5.

Table 2.5 Various orientations for the wrist

Wrist orientation    Ω = s · y5 or n · y5    WRIST    M = WRIST · sign(Ω)
DOWN                 ≥ 0                     +1       +1
DOWN                 < 0                     +1       -1
UP                   ≥ 0                     -1       -1
UP                   < 0                     -1       +1

Again looking at the projection of the coordinate frame (x4, y4, z4) on the x3 y3 plane and from Table 2.5 and Fig. 2.23, it can be shown that the following are true (see Fig. 2.23):

sin θ4 = -M(z4 · x3)        cos θ4 = M(z4 · y3)                               (2.3-69)

where x3 and y3 are the x and y column vectors of °T3, respectively, M = WRIST · sign(Ω), and the sign function is defined as:

sign(x) = +1   if x ≥ 0
          -1   if x < 0                                                       (2.3-70)

Thus, the solution for θ4 with the orientation and WRIST indicators is:

θ4 = tan⁻¹ [sin θ4 / cos θ4]
   = tan⁻¹ [M(C1 ay - S1 ax) / M(C1 C23 ax + S1 C23 ay - S23 az)]        -π < θ4 < π        (2.3-71)

If the degenerate case occurs, any convenient value may be chosen for θ4 as long as the orientation of the wrist (UP/DOWN) is satisfied. This can always be ensured by setting θ4 equal to its current value. In addition to this, the user can turn on the FLIP toggle to obtain the other solution of θ4, that is, θ4 = θ4 + 180°.

Figure 2.23 Solution for joint 4 (sin θ4 = -(z4 · x3), cos θ4 = z4 · y3).
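The bookkeeping of Table 2.5 collapses into the single signed factor M. The fragment below is a minimal sketch of Eqs. (2.3-67) and (2.3-70) as written above; the function names are ours, and the caller supplies the desired WRIST indicator together with the current s, n, and y5 unit vectors.

```python
import numpy as np

def sign(x):
    """Eq. (2.3-70): +1 for x >= 0, -1 otherwise."""
    return 1.0 if x >= 0.0 else -1.0

def wrist_factor(WRIST, s, n, y5):
    """M = WRIST * sign(Omega), with Omega taken from Eq. (2.3-67):
    s . y5 when it is nonzero, n . y5 otherwise."""
    omega = float(np.dot(s, y5))
    if omega == 0.0:
        omega = float(np.dot(n, y5))
    return WRIST * sign(omega)

# Wrist requested DOWN (+1), with s nearly aligned with y5 (illustrative numbers).
print(wrist_factor(WRIST=+1, s=(0.0, 0.9, 0.1), n=(1.0, 0.0, 0.0), y5=(0.0, 1.0, 0.0)))
```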
Joint 5 solution. To find θ5, we use the criterion that aligns the axis of rotation of joint 6 with the approach vector (or a = z5). Looking at the projection of the coordinate frame (x5, y5, z5) on the x4 y4 plane, it can be shown that the following are true (see Fig. 2.24):

sin θ5 = a · x4        cos θ5 = -(a · y4)                                     (2.3-72)

where x4 and y4 are the x and y column vectors of °T4, respectively, and a is the approach vector. Thus, the solution for θ5 is:

θ5 = tan⁻¹ [sin θ5 / cos θ5]
   = tan⁻¹ [((C1 C23 C4 - S1 S4) ax + (S1 C23 C4 + C1 S4) ay - C4 S23 az) / (C1 S23 ax + S1 S23 ay + C23 az)]        -π < θ5 < π        (2.3-73)

If θ5 = 0, then the degenerate case occurs.

Figure 2.24 Solution for joint 5 (sin θ5 = a · x4, cos θ5 = -(a · y4)).

Joint 6 solution. Up to now, we have aligned the axis of joint 6 with the approach vector. Next, we need to align the orientation of the gripper to ease picking up the object. The criterion for doing this is to set s = y6. Looking at the projection of the hand coordinate frame (n, s, a) on the x5 y5 plane, it can be shown that the following are true (see Fig. 2.25):

sin θ6 = n · y5        cos θ6 = s · y5                                        (2.3-74)

where y5 is the y column vector of °T5 and n and s are the normal and sliding vectors of °T6, respectively. Thus, the solution for θ6 is:

θ6 = tan⁻¹ [sin θ6 / cos θ6]
   = tan⁻¹ [((-S1 C4 - C1 C23 S4) nx + (C1 C4 - S1 C23 S4) ny + (S4 S23) nz) / ((-S1 C4 - C1 C23 S4) sx + (C1 C4 - S1 C23 S4) sy + (S4 S23) sz)]        -π < θ6 < π        (2.3-75)

Figure 2.25 Solution for joint 6 (sin θ6 = n · y5, cos θ6 = s · y5).

The above derivation of the inverse kinematics solution of a PUMA robot arm is based on the geometric interpretation of the position of the endpoint of link 3 and the hand (or tool) orientation requirement. There is one pitfall in the above derivation for θ4, θ5, and θ6. The criterion of setting the axis of motion of joint 5 equal to the cross product of z3 and a may not be valid when sin θ5 = 0, which means that θ5 = 0. In this case, the manipulator becomes degenerate, with the axes of motion of joints 4 and 6 aligned. In this state, only the sum of θ4 and θ6 is significant. If the degenerate case occurs, then we are free to choose any value for θ4 (usually its current value is used), and we would like to have θ4 + θ6 equal to the total angle required to align the sliding vector s and the normal vector n. If the FLIP toggle is on (i.e., FLIP = 1), then θ4 = θ4 + π, θ5 = -θ5, and θ6 = θ6 + π.

In summary, there are eight solutions to the inverse kinematics problem of a six-joint PUMA-like robot arm. The first three-joint solution (θ1, θ2, θ3) positions the arm, while the last three-joint solution (θ4, θ5, θ6) provides the appropriate orientation for the hand. There are four solutions for the first three joints: two for the right shoulder arm configuration and two for the left shoulder arm configuration. For each arm configuration, Eqs. (2.3-47), (2.3-56), (2.3-63), (2.3-71), (2.3-73), and (2.3-75) give one set of solutions (θ1, θ2, θ3, θ4, θ5, θ6), and (θ1, θ2, θ3, θ4 + π, -θ5, θ6 + π) (with the FLIP toggle on) gives another set of solutions.

Decision Equations for the Arm Configuration Indicators. The solution for the PUMA-like robot arm derived in the previous section is not unique and depends on the arm configuration indicators specified by the user. These arm configuration indicators (ARM, ELBOW, and WRIST) can also be determined from the joint angles. In this section, we derive the respective decision equation for each arm configuration indicator. The signed value of the decision equation (positive, zero, or negative) provides an indication of the arm configuration as defined in Eqs. (2.3-31) to (2.3-33).
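Collecting Eqs. (2.3-69) through (2.3-75), the wrist angles follow from a few dot and cross products once the column vectors of °T3, °T4, and °T5 are available. The sketch below is an illustrative transcription of those equations; in a full solver x4, y4, and y5 would be recomputed from the joint-4 and joint-5 results rather than passed in, and the sample vectors at the end are placeholders only.

```python
import math
import numpy as np

def last_three_joints(x3, y3, z3, a, x4, y4, y5, n, s, M):
    """Sketch of Eqs. (2.3-69)/(2.3-71), (2.3-72)/(2.3-73), (2.3-74)/(2.3-75):
    theta4, theta5, theta6 from the frame axes and the hand vectors n, s, a.
    M = WRIST * sign(Omega) as in Table 2.5; all vectors are in base coordinates."""
    vs = [np.asarray(v, dtype=float) for v in (x3, y3, z3, a, x4, y4, y5, n, s)]
    x3, y3, z3, a, x4, y4, y5, n, s = vs

    z4 = np.cross(z3, a)                       # Eq. (2.3-64), positive sign assumed
    norm = np.linalg.norm(z4)
    if norm > 1e-9:
        z4 = z4 / norm                         # norm ~ 0 is the degenerate case

    theta4 = math.atan2(-M * float(z4 @ x3), M * float(z4 @ y3))
    theta5 = math.atan2(float(a @ x4), -float(a @ y4))
    theta6 = math.atan2(float(n @ y5), float(s @ y5))
    return theta4, theta5, theta6

# Placeholder geometry, purely to exercise the formulas:
print(last_three_joints(x3=(1, 0, 0), y3=(0, 1, 0), z3=(0, 0, 1),
                        a=(1, 0, 0), x4=(0, 0, 1), y4=(-1, 0, 0), y5=(0, 1, 0),
                        n=(0, 0, 1), s=(0, 1, 0), M=+1))
```

With the FLIP toggle on, the second wrist solution follows from these values as (θ4 + π, -θ5, θ6 + π).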
For the ARM indicator, following the definition of the RIGHT/LEFT arm, a decision equation for the ARM indicator can be found to be:

g(θ, p) = z0 · (z1 × p') / ‖z1 × p'‖ = (-px cos θ1 - py sin θ1) / ‖z1 × p'‖        (2.3-76)

where p' = (px, py, 0)T is the projection of the position vector p [Eq. (2.3-36)] onto the x0 y0 plane, z1 = (-sin θ1, cos θ1, 0)T from the third column vector of °T1, and z0 = (0, 0, 1)T. We have the following possibilities:

1. If g(θ, p) > 0, then the arm is in the RIGHT arm configuration.
2. If g(θ, p) < 0, then the arm is in the LEFT arm configuration.
3. If g(θ, p) = 0, then the criterion for finding the LEFT/RIGHT arm configuration cannot be uniquely determined. The arm is within the inner cylinder of radius d2 in the workspace (see Fig. 2.19). In this case, it defaults to the RIGHT arm (ARM = +1).

Since the denominator of the above decision equation is always positive, the determination of the LEFT/RIGHT arm configuration is reduced to checking the sign of the numerator of g(θ, p):

ARM = sign[g(θ, p)] = sign(-px cos θ1 - py sin θ1)                                 (2.3-77)

where the sign function is defined in Eq. (2.3-70). Substituting the x and y components of p from Eq. (2.3-36), Eq. (2.3-77) becomes:

ARM = sign[g(θ, p)] = sign[g(θ)] = sign(-d4 S23 - a3 C23 - a2 C2)                  (2.3-78)

Hence, from the decision equation in Eq. (2.3-78), one can relate its signed value to the ARM indicator for the RIGHT/LEFT arm configuration as:

ARM = sign(-d4 S23 - a3 C23 - a2 C2) = +1   RIGHT arm
                                       -1   LEFT arm                               (2.3-79)

For the ELBOW indicator, we follow the definition of ABOVE/BELOW arm to formulate the corresponding decision equation. Using (2p4)y and the ARM indicator in Table 2.4, the decision equation for the ELBOW indicator is based on the sign of the y component of the position vector of 2A3 3A4 and the ARM indicator, as given by Eq. (2.3-80) below.
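Read in the reverse direction, the decision equations recover the configuration indicators from a known joint solution, which is how the consistency check described in the following paragraphs is closed. The sketch below evaluates ARM from Eq. (2.3-79) together with the ELBOW and WRIST equations that follow (Eqs. (2.3-80) and (2.3-83)); the function name, the link parameters a2, a3, d4, and the sample values are placeholders assumed to come from the D-H table.

```python
import math
import numpy as np

def sign(x):
    return 1.0 if x >= 0.0 else -1.0

def configuration_indicators(t2, t3, s, n, z4, a2, a3, d4):
    """ARM, ELBOW, WRIST from the joint angles theta2, theta3, the hand vectors
    s, n, and the z4 axis, per Eqs. (2.3-79), (2.3-80), and (2.3-83)."""
    ARM = sign(-d4 * math.sin(t2 + t3) - a3 * math.cos(t2 + t3) - a2 * math.cos(t2))
    ELBOW = ARM * sign(d4 * math.cos(t3) - a3 * math.sin(t3))
    s_dot_z4 = float(np.dot(s, z4))
    WRIST = sign(s_dot_z4) if s_dot_z4 != 0.0 else sign(float(np.dot(n, z4)))
    return ARM, ELBOW, WRIST

# Placeholder PUMA-like link lengths (meters) and an arbitrary joint/hand state.
print(configuration_indicators(t2=-0.5, t3=0.3, s=(0, 1, 0), n=(1, 0, 0),
                               z4=(0, 0.8, 0.6), a2=0.432, a3=0.02, d4=0.433))
```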
The forward kinematic equations for a six-axis PUMAlike robot arm are derived. can be generalized to other simple industrial robots with rotary joints. [1960]. and Fu [1986]. 2. The geometric approach. and WRIST).. The validity of the forward and inverse kinematics solution can be verified by computer simulation.76 ROBOTICS: CONTROL. There are eight solutions to a six joint PUMA-like robot CDN arm-four solutions for the first three joints and for each arm configuration. ((DD CAD (IQ ono 7r0 -CD Q'5 O-. Further reading about homogeneous coordinates can be found in Duda and Hart [1973] and Newman and Sproull [1979]. The inverse solution is determined with the assistance of three arm configuration indicators (ARM. This is discussed in a paper by Chase [1963]. with appropriate modification and adjustment. . it does not provide geometric insight to the problem. Other robotics books that discuss the kinematics problem are Paul [1981]. The inverse kinematics problem is introduced and the inverse transform technique is used to determine the Euler angle solution. Although matrix representation of linkages presents a systematic approach to solving the forward kinematics problem. ELBOW. two more solutions for the last three joints. Frazer et al. a geometric approach is introduced to find the inverse solution of a six joint robot arm with rotary joints. The geometric approach to solving the inverse kinematics for a six-link manipula- -7. REFERENCES Further reading on matrices can be found in Bellman [1970]. 3 for deriving the equations of tin motion that describe the dynamic behavior of a robot arm.ti 'C) .4 CONCLUDING REMARKS We have discussed both direct and inverse kinematics in this chapter. This technique can also be used to find the inverse solution of simple robots. More discussion in kinematics can be found in Hartenberg and Denavit [1964] and Suh and Radcliffe [1978]. However. The discussion of the inverse transform technique in finding the arm solution was based on the paper by Paul et al. A computer simulation block diagram is shown in Fig. and Gantmacher [1959].C . Pieper [1968] in his doctoral dissertation utilized an algebraic approach to solve the inverse kinematics problem. 2. the vector approach to the kinematics problem presents a more concise representation of linkages. AND INTELLIGENCE solution which should agree to the joint angles fed into the direct kinematics routine previously. SENSING.26. The kinematics concepts covered in this chapter will be used extensively in Chap. The discussion on kinematics is an extension of a paper by Lee [1982]. and Snyder [1985]. VISION. The parameters of robot arm links and joints are defined and a 4 x 4 homogeneous transfor:-t mation matrix is introduced to describe the location of a link with respect to a fixed coordinate frame. Utilization of matrices to describe the location of a rigid mechanical link can be found in the paper by Denavit and Hartenberg [1955] and in their book (Hartenberg and Denavit [1964]). Lee. Gonzalez. [1981]. Thus. Duffy and Rooney [1975]. Kohli and Soni [1975]. Yang [1969]. followed by a rotation of B angle about the OY axis? 2. but which results in the same rotation matrix. !'o 2. 3. Yang and Freudenstein [1964]. 2. and °A. Gonzalez. a0-+ PROBLEMS 2.3 Find another sequence of rotations that is different from Prob.6 For the figure shown below.4 Derive the formula for sin (0 + 0) and cos (0 + 0) by expanding symbolically two rotations of 0 and B using the rotation matrix concepts discussed in this chapter. Uicker et al. 
and Fu [1986] contains numerous recent papers on robotics. 2. Finally. followed by a rotation of 90° about the OY axis? 2. The arm solution of a Stanford robot arm can be found in a report by Lewis [1974].1 What is the rotation matrix for a rotation of 30° about the OZ axis. followed by a rotation of ¢ angle about the OV axis. for i = 1. the tutorial book edited by Lee.2. Other techniques in solving the inverse kinematics can be found in articles by Denavit [1956]. followed by a rotation of >li angle about the OW axis.ROBOT ARM KINEMATICS 77 for with rotary joints was based on the paper by Lee and Ziegler [1984]. followed by a rotation of 60° about the OX axis. 2. [1964]. 2. 4. followed by a translation of b unit of distance along the OZ axis. find the 4 x 4 homogeneous transformation matrices '-'Ai ivy m-1 '0- .2 What is the rotation matrix for a rotation of 0 angle about the OX axis.5 Determine a T matrix that represents a rotation of a angle about the OX axis. 5. Yuan and Freudenstein [1971]. The camera can see the origin of the base coordinate system where a six-link robot arm is attached. (a) Unfortunately.8 A robot workstation has been set up with a TV camera.7 For the figure shown below. 2. as '-' seen by the camera. where . after the equipment has been set up and these coordinate systems have been taken. VISION.78 ROBOTICS: CONTROL. 4. 4 in 2. SENSING. the same person rotated the object 90° about the x axis of the object and translated it 4 units of distance along the 'pp >.11. 2. Also.J. AND INTELLIGENCE 2. and also the center of a cube to be manipulated by the robot. 0 0 -1 0 9 T2 = 0 0 -1 0 . What is the position/orientation of the camera with respect to the robots base coordinate system? (b) After you have calculated the answer for question (a).^y .3 0 1 1 0 0 1 1 0 0 -10 20 10 1 0 0 0 10 0 and -1 0 0 0 T = . and °A. If a local coordinate system has been established at the center of the cube. find the 4 x 4 homogeneous transformation matrices A. someone rotates the camera 90° about the z axis o the camera. the origin of the base coordinate system as seen by the camera can be expressed by a homogeneous transformation matrix T2. 3. as shown in the example in Sec. then this object. can be represented by a homogeneous transformation matrix T1.2. for i = 1. z. a. di 4 5 6 . . 6 for the PUMA 260 robot arm shown in the figure below and complete the table. Waist rotation 330° ?D+ L1. 2. 2.) for i = 1. . y.10 Establish orthonormal link coordinate systems (xi. Find the computational requirements of the joint solution in terms of multiplication and addition operations and the number of transcendental calls (if the same term appears twice. Shoulder rotation 310° Flange rotation 360° Wrist rotation PUMA robot arm link coordinate parameters Joint i 0. a. QC-. the computation should be counted once only). What is the position/orientation of the object with respect to the robot's base coordinate system? To the rotated camera coordinate system? 2..9 We have discussed a geometric approach for finding the inverse kinematic solution of a PUMA robot arm.ROBOT ARM KINEMATICS 79 rotated y axis... 6) of the PUMA robot arm in Fig. .14 Repeat Prob. 22 cni. 2.12 A Stanford robot arm has moved to the position shown in the figure below. 2. 2.13 for the Stanford arm shown in Fig. 90°)T. .11 Establish orthonormal link coordinate systems (xi.. Establish the orthonormal link coordinate systems (xi.. yi. 2. . A first-order approximation solution is adequate.. 2. . 
MINIMOVER robot arm link coordinate parameters Joint i Bi cxi ABC ai di 5 yo '2. SENSING.80 ROBOTICS: CONTROL.12.13 Using the six "Ai matrices ( i = 1 .13. 2. 0°. . 5 for the MINIMOVER robot arm shown in the figure below and complete the table. OB2. . Stanford arm link coordinate parameters Joint i Oi ai ai di 2. zi) for i = 1. The joint variables at this position are: q = (90°. yi. VISION..6. zi) for i = 1.. for this arm and complete the table. 2. MB3). find its position error at the end of link 3 due to the measurement error of the first three joint angles (MB1. AND INTELLIGENCE 2. 70°. . -120°.. Find . .. and 'A2.17 For the Stanford robot arm shown in Fig.z.12.15 A two degree-of-freedom manipulator is shown in the figure below. Given that the length of each link is 1 m. 2.16 for the Stanford arm shown in Fig.373). establish its link coordinate frames and find °A.3-75). i = 1. 02. Use the inverse transformation technique to find the solution for the last three joint angles (04. . You may use any method that you feel comfortable with.11. 2. 2..ROBOT ARM KINEMATICS 81 2.16 For the PUMA robot arm shown in Fig. '2.-. derive the solution of the first three .18 Repeat Prob. 03) correctly and that we are given '-'A. joint angles. (2. 2. . 2. assume that we have found the first three joint solution (0. (2. 06). 2... 05.12. and (2. 6 and °T6. C7. 2.3-71). bow .fl the inverse kinematics solution for this manipulator. Compare your solution with the one given in Eqs. .7" C)' pt- '. the dynamic performance of a manipulator directly depends on the efficiency of the control algorithms and the dynamic model of the manipulator. However. modeling and evaluating the dynamical properties and behavior of computer-controlled robots.y CAD . . The dynamic equations of motion of a manipulator are a set of mathematical equations describing the dynamic behavior of the manipulator.. and properties of the dynamic equations of motion that are suitable for control purposes. Bejczy [1974]). r3. characteristics.. This leads to the development of the dynamic equations of motion for the various arti:t7 . At' f7' Y'. In this chapter. Luh's Newton-Euler equations (Luh et al.'3 acs . Oliver Wendell Holmes 3. This chapter deals mainly with the former part of the manipulator control problem.CHAPTER THREE ROBOT ARM DYNAMICS The inevitable comes to pass by effort. the design of suitable control equations for a robot arm. we shall concentrate on the formulation. p.3'' .. the structure of these equations E3. These motion equations are "equivalent" to each other in the sense that they describe the dynamic behavior of the same physical robot manipulator. Hollerbach's Recursive-Lagrange (R-L) equations (Hollerbach [1980]).°. `/d Obi .. that is.o 'L7 °°' culated joints of the manipulator in terms of specified geometric and inertial parameters of the links. [1983]). Conventional approaches like the Lagrange-Euler (L-E) and Newton-Euler (N-E) formulations could then be applied systematically to develop the actual robot arm motion equations. such as Uicker's Lagrange-Euler equations (Uicker [1965]. [1980a]). In general. Such equations of motion are useful for computer simulation of the robot arm motion... Various forms of robot arm motion equations describing the rigid-body robot arm dynamics are obtained from these two formulations.1 INTRODUCTION Robot arm dynamics deals with the mathematical formulations of the equations of robot arm motion. `CD °t= 82 . 
The actual dynamic model of a robot arm can be obtained from known physical laws such as the laws of newtonian mechanics and lagrangian mechanics. The purpose of manipulator control is to maintain the dynamic response of a computer-based manipulator in accordance with some prespecified system performance and desired goals. The control problem consists of obtaining dynamic models of the physical robot arm system and then specifying corresponding control laws or strategies to achieve the desired system response and performance. . and Lee's generalized d'Alembert (G-D) equations (Lee et al. and the evaluation of the kinematic design and structure of a robot arm. and involves vector crossproduct terms. the generalized forces/torques are computed. The most significant result of this formulation is that the computation time of the generalized forces/torques is found linearly proportional to the number of joints of the robot arm and independent of the robot arm configuration.Q . Orin et al. and (3.. C3' pr' .. these torques/forces depend on the manipulator's physical parameters. As an alternative to deriving more efficient equations of motion.D. others are obtained to facilitate control analysis and syn- thesis. and c. (3. defined in Eqs. excluding the dynamics of electronic control devices.> . respectively. The resulting dynamic equations. the resulting equations of motion. _00 . Assuming rigid body motion. but messy. C's .2-33). Thus.3 CAD "3' . The derivation is simple.'G. difficult to utilize for real-time control purposes unless they are simplified. and linear accelerations at the center of mass of each link-from the inertial coordinate frame to the `J' °s' .-r . excluding the dynamics of the S]. or for the inverse dynamics problem. h. This set of recursive equations can be applied to the robot links sequentially.. Furthermore..'. The backward recursion propagates the forces and c"4" moments exerted on each link from the end-effector of the manipulator to the base reference frame. Some are obtained to achieve fast computation time in evaluating the nominal joint torques in servoing a manipulator. attention was turned to develop efficient algorithms for computing the generalized forces/torques based on the N-E equations of motion (Armstrong [1979]. angular accelerations..k. that is. and gear friction. joint velocity and acceleration.. and gear friction.. backlash. [1979]. s0. The forward. ue.ROBOT ARM DYNAMICS 83 may differ as they are obtained for various reasons and purposes. 4°) Q.fl CD. angular velocities. are a set of second-order coupled nonlinear differential equations. coupling reaction forces between joints (Coriolis and centrifugal). given the desired torques/forces. and the load it is carrying. '=r hand coordinate frame.. and gravity loading effects. it may be required to compute the dynamic coefficients Dik. The derivation of the dynamic model of a manipulator based on the L-E formulation is simple and systematic. Luh et al. '.. In both cases. has shown that the dynamic motion equations for a six joint Stanford robot arm are highly nonlinear and consist of inertia loading. using the 4 x 4 homogeneous transformation matrix representation of the kinematic chain and the lagrangian formulation.-t '. o"> ice. the dynamic equations are used to solve for the joint accelerations which are then integrated to solve for the generalized coordinates and their velocities. are a set of forward and backward recursive equations. a'- . 
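The two directions in which such a model is used, computing the torques that realize a planned motion and computing the motion that results from applied torques, can be seen on a deliberately small example. The single revolute link below is an illustrative toy of our own (mass concentrated at the tip), not the example worked out later in the chapter; it only shows the τ = D(q)q̈ + h(q, q̇) + c(q) structure that Sec. 3.2 derives for the general n-link case.

```python
import math

# One revolute link, mass m at distance l from the joint, moving in a vertical plane.
m, l, g = 1.0, 0.5, 9.8062            # placeholder values
D = m * l * l                         # inertia term; h = 0 for a single joint

def gravity(q):
    return m * g * l * math.cos(q)    # gravity loading term c(q)

def inverse_dynamics(q, qdd):
    """Torque required for a desired acceleration (open-loop torque computation)."""
    return D * qdd + gravity(q)

def forward_dynamics(q, tau):
    """Acceleration produced by an applied torque (used when simulating the arm)."""
    return (tau - gravity(q)) / D

tau = inverse_dynamics(q=0.3, qdd=1.0)
print(tau, forward_dynamics(q=0.3, tau=tau))   # second value recovers qdd = 1.0
```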
the computation of these coefficients requires a fair amount of arithmetic operations.N. and still others are obtained to improve computer simulation of robot motion. With this algorithm. backlash. they are being used to solve for the forward dynamics problem. (74 ice. one can implement simple real-time control of a robot arm in the joint-variable space. given the desired generalized coordinates and their first two time derivatives. [1980a]). recursion propagates kinematics information-such as linear velocities.s6 O(" b1) (CS . that is. To a lesser extent.d.2-31). '-' [17 (1) .t .^.fl control device. instantaneous joint configuration. the L-E equations are very Q.+ -. Bejczy [1974].2-34). Unfortunately. (3. The L-E equations of motion provide explicit state equations for robot dynamics and can be utilized to analyze and design advanced joint-variable space control strategies.fl r+' . In addition. BCD CAD CAD CND s. In addition to allowing faster computation of the dynamic coefficients than the L-E equations of motion. where n is the number of degrees of freedom of the robot arm. To further improve the computation time of the lagrangian formulation. Hollerbach [1980] has exploited the recursive nature of the lagrangian formulation. --] ^'. s-. and the motion equations of a two-link manipulator are worked out to illustrate the use of these equations.2 LAGRANGE-EULER FORMULATION The general motion equations of a manipulator can conveniently be expressed through the direct application of the Lagrange-Euler formulation to nonconservative systems.. Cam' °-h TV' '-s D~. For state-space control analysis. the interaction and coupling reaction forces in the equations should be easily identified so that an appropriate controller can be designed to compensate for their effects (Huston and Kelly [1982]). The computation of the applied forces/torques from the generalized d'Alembert equations of motion is of order 0(n3). 3. together with the DenavitHartenberg link coordinate representation. while the L-E equations are of order 0(n4) [or of order 0(n3) if optimized] and the N-E equations are of order 0(n). the L-E. the mathematical operations and their computational issues for these motion equations are tabulated. The algorithm is expressed by matrix operations and facilitates both analysis and computer implementation. However..S CAD analysis and computer simulation. The computational efficiency is achieved from a compact formulation using Euler transformation matrices (or rotation matrices) and relative position vectors between joints. The direct application of the lagrangian dynamics formulation. one . Many investigators utilize the Denavit-Hartenberg matrix representation to describe the spatial displacement between the neighboring link coordinate frames to obtain the link kinematic information. VISION. SENSING. C13 '-' o'. In this chapter. AND INTELLIGENCE The inefficiency of the L-E equations of motion arises partly from the 4 x 4 homogeneous matrices describing the kinematic chain. one would like to obtain an explicit set of closed-form differential equations (state equations) that describe the dynamic behavior of a manipulator. while the efficiency of the N-E formulation is based on the vector formulation and its recursive nature. results in a convenient and compact algorithmic description of the manipulator equations of motion. and they employ the lagrangian dynamics technique to derive the dynamic equations d a manipulator.84 ROBOTICS. 
Since the computation of the dynamic coefficients of the equations of motion is important both in control . N-E. the recursive equations destroy the "structure" of the dynamic model which is quite useful in providing insight for designing the controller in state space. and G-D equations of robot arm motion are derived and discussed. The evaluation of the dynamic and control equations in functionally Cam. Such information is useful for designing a controller in state space. Another approach for obtaining an efficient set of explicit equations of motion is based on the generalized d'Alembert principle to derive the equations of motion which are expressed explicitly in vector-matrix form suitable for control analysis. CONTROL. the G-D equations of motion explicitly identify the contributions of the translational and rotational effects of the links. = d. various sets of generalized coordinates are available to describe the manipulator..2. qi = Bi. The 4 x 4 homogeneous coordinate transformation matrix. 2. one is required to properly choose a set of generalized coordinates to describe the system. dt L aqr J = T. the joint angle span of the joint. which in turn requires knowledge of the velocity of each joint. . The following derivation of the equations of motion of an n degrees of freedom manipulator is based on the homogeneous coordinate transformation matrices developed in Chap._. K = total kinetic energy of the robot arm P = total potential energy of the robot arm qi = generalized coordinates of the robot arm q. Generalized coordinates are used as a convenient set of coordinates which completely describe the location (position and orientation) of a system with respect to a reference coordinate frame.ROBOT ARM DYNAMICS 85 explicit terms will be based on the compact matrix algorithm derived in this section.potential energy P From the above Lagrange-Euler equation. a0. in effect..1 Joint Velocities of a Robot Manipulator The Lagrange-Euler formulation requires knowledge of the kinetic energy of the physical system. 3. since the angular positions of the joints are readily available because they can be measured by potentiometers or encoders or other sensing devices. the distance traveled by the joint. q. '-'A. . It relates a point fixed in link i expressed in homogeneous coordinates with respect to the ith coordinate system to the (i -1)th coordinate system. they provide a natural correspondence with the generalized coordinates. 2. For a simple manipulator with rotary-prismatic joints. q. in the case of a rotary joint. In I0. Ti = generalized force (or torque) applied to the system at joint i to drive link i . p-.2-1) where L = lagrangian function = kinetic energy K .. _ first time derivative of the generalized coordinate. The Lagrange-Euler equation d r aL 1 aL aq. 2. This. . n (3. which describes the spatial relationship between the ith and the (i -1)th link coordinate frames. corresponds to the generalized coordinates with the joint variable defined in each of the 4 x 4 link coordinate transformation matrices. whereas for a prismatic joint. i = 1. Thus.. However. The derivation of the dynamic equations of an n degrees of freedom manipulator is based on the understanding of: 1. SENSING.2-3) °Ai = °A1'A2 . AND INTELLIGENCE this section. let 'r. 1)T (3. then °ri is related to the point 'ri by °ri = °Ai where (3. Yi. With reference to Fig. 3. zi.2-2) Let °ri be the same point 'r1 with respect to the base coordinate frame.1 A point 'r.. VISION. 
the velocity of a point fixed in link i will be derived and the effects of the motion of other joints on all the points in this link will be explored.1. in link i. -'Ai the homogeneous coordinate transformation matrix which relates the spatial displacement of the ith link coordinate frame to the (i -1)th link coordinate frame.. the coordinate transformation matrix which relates the ith coordinate frame to the base coordinate frame. be a point fixed and at rest in a link i and expressed in homogeneous coordinates with respect to the ith link coordinate 'C3 frame.2-4) Figure 3. xi Yi zi 1 = (xi. and °A.'-1Ai (3. .86 ROBOTICS: CONTROL. (3. In order to derive the equations of motion that are applicable to both revolute and prismatic joints. ai. other points as well as the point 'ri fixed in the link i and expressed with respect to the vii ith coordinate frame will have zero velocity with respect to the ith coordinate frame (which is not an inertial frame).2-7) J The above compact form is obtained because 'ii = 0.2-31). ir. for a revolute joint. from Eq. is defined as 0 . we shall use the variable qi to represent the generalized coordinate of joint i which is either 0i (for a rotary joint) or di (for a prismatic joint).... and assuming rigid body motion. '_' 1Ai'ri + °A. The velocity of 'ri expressed in the base coordinate frame (which is an inertial frame) can be expressed as `"' °O.. it follows from Eq.cos ai sin 0i cos ai cos Oi sin ai sin 0i 0 (3. (2.'-'Ai'ri + a °Ai 'ri + °A..2-8a) Qi = 1 0 0 0 0 '-s 4-" C/) .sin ai cos 0i 0 cos ai di 0 1 In general. is given by cos Oj '-'A... i. the general form of . . "A2 = .2-5) cos ai 0 di 1 or. 0i).2-29) that the general form of '-'A. 02. °v' = vi = d (°ri) = ddt(°A' 'r') dt III _ °A1 'A2 . . The partial derivative of `. Since the point 'ri is at rest in link i. (2. di are known parameters from the kinematic structure of the arm and 0i or di is the joint variable of joint i. if joint i is prismatic.C -1 0 0 0 0 0 0 0 0 0 (3. °Ai with respect to qj can be easily calculated with the help of a matrix Qi which. + °A. = sin 0i 0 0 .'Ai is cos 0i '-'A_ . and ai. ...ROBOT ARM DYNAMICS 87 If joint i is revolute.cos ai sin 0i cos ai cos 0i sin ai 0 sin ai sin 0i ai cos 0i . aq.IAi'r. all the nonzero elements in the matrix °Ai are a function of (0.sin ai cos 0i ai sin 0i (3..2-6) _ sin 0i 0 0 sin ai 0 .. sin 0i a'-'A1 . °AI'A2 . . (3. "'. qi = 0i.cos ai sin 0i 0 0 sin ai cos 0i sin ai sin 0i 0 .cos ai cos 0i .. VISION. .2-10) can be written as follows for i = 1.' Ai for j < i forj>i I i (3.ai sin 0i a i cos 0i 0 0 1 cos 0i 0 0 a0i 0 0 1 -1 0 0 0 0 0 0 0 0 cos Oi .88 ROBOTICS: CONTROL.2-11) Using this notation. . AND INTELLIGENCE and. vi can be expressed as vi = j=I E Ui4. (3.2-10) can be interpreted as the effect of the motion of joint j on all the points on link i.cos ai sin 0i 0 sin aisin 0i ai cos 0i ai sin 0i di 1 sin 0i 0 cos ai cos0i sin ai .. . 2-9) For example. n.'Ai r (3 . In order to simplify notations.2-12) .sin ai cos 0i cos ai 0 0 Qi'-'Ai 0 0 0 0 0 0 Hence. for a robot arm with all rotary joints. for a prismatic joint.25).2-10) Eq. then Eq.. (3. and using Eq. 2. let us define Uij ° a °Ai/aqj. n. Qi = 0 0 0 1 0 0 0 It then follows that a'-'Ai aq1 = Q '... j-2Aj_IQjJ-'Aj 0 . ..'-'Ai for j <i forj>i . 2. Uij = 0 J-I Qj j . (3. for i = 1. SENSING. . as 0 0 0 0 0 0 0 0 0 (3 2-8b) . `ri (3. and let dKi be the kinetic energy of a particle with differential mass dm in link i. (3.v7T) dm a) ate) (3. negating all the elements of the first row. 
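Because the partial derivative ∂°Ai/∂qj reduces to the matrix products of Eq. (3.2-11), the joint-velocity expression of Eq. (3.2-12) maps directly into code. The sketch below builds the revolute Q matrix of Eq. (3.2-8a), forms Uij for a chain of standard D-H transforms, and evaluates the velocity of a point fixed in link i; the two-link parameters and joint rates at the bottom are placeholders for illustration.

```python
import numpy as np

Q_REV = np.array([[0., -1., 0., 0.],   # Eq. (3.2-8a): d(A)/d(theta) = Q_REV @ A
                  [1.,  0., 0., 0.],
                  [0.,  0., 0., 0.],
                  [0.,  0., 0., 0.]])

def dh(theta, d, a, alpha):
    """Standard D-H link transform (i-1)A(i), cf. Eq. (3.2-5)."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -ca * st,  sa * st, a * ct],
                     [st,  ca * ct, -sa * ct, a * st],
                     [0.,       sa,       ca,      d],
                     [0.,       0.,       0.,     1.]])

def chain(mats):
    out = np.eye(4)
    for M in mats:
        out = out @ M
    return out

def U(A, i, j):
    """U_ij = 0A_(j-1) Q_j (j-1)A_i for j <= i (revolute joints), zero otherwise;
    Eq. (3.2-11). A is the list of link transforms [0A1, 1A2, ...]."""
    if j > i:
        return np.zeros((4, 4))
    return chain(A[:j - 1]) @ Q_REV @ chain(A[j - 1:i])

# Two placeholder revolute links of length 1 m, joint rates 0.1 and -0.2 rad/s.
q, qd = [0.3, 0.5], [0.1, -0.2]
A = [dh(q[0], 0., 1., 0.), dh(q[1], 0., 1., 0.)]
r2 = np.array([0., 0., 0., 1.])                           # a point at the origin of frame 2
v2 = sum(qd[j - 1] * (U(A, 2, j) @ r2) for j in (1, 2))   # Eq. (3.2-12)
print(v2[:3])
```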
we need to find the interaction effects between joints as s. (3. auir aqk °Ay-.) = QIQI°Ai Eq.ROBOT ARM DYNAMICS 89 It is worth pointing out that the partial derivative of-'Ai with respect to qi results in a matrix that does not retain the structure of a homogeneous coordinate transformation matrix. the kinetic energy of the differential mass is t Tr A a.2-12)..QJ'-'Ak-. = 81. Let Ki be the kinetic energy of link i. For a rotary joint.2. . i = j = k = 1 and q. for a robot arm with all rotary joints. Next. .2-13) i < j or i < k For example.Qkk-'Ai o °Ak-1Qkk-'Aj_iQj'-'Ai 0 ikj j i k (3. as expressed in the base coordinate system.l = To (Q1°A. n.2 Kinetic Energy of a Robot Manipulator After obtaining the joint velocity of each link. The advantage of using the Q matrices is that we can still use the '-'Ai matrices and apply the above operations to '-'Ai when premultiplying it with the Q. .. and zeroing out all the elements of the third and fourth rows. i = 1. so that a B.2-14) where a trace operatort instead of a vector dot product is used in the above equa- tion to form the tensor from which the link inertia matrix (or pseudo-inertia matrix) Ji can be obtained. we need to find the kinetic energy of link i. .z. 3. row of '-'Ai and zeroing out the elements in the other rows. 2. For a prismatic joint.. the effect is to replace the elements of the third row with the fourth.2-13) can be interpreted as the interaction effects of the motion of joint j and joint k on all the points on link i. then a'0 . d K i = ' 2 (.? + yi2 + zit) dm = 'h trace (vivT) dm = '/z Tr (v. the effect of premultiplying '-'Ai by Qi is equivalent to interchanging the elements of the first two rows of-'Ai. Substituting vi from Eq. jxi2 dm 1xi yi dm yi2 $ $xizi dm $xi dm Ji = S'ri'r[T dm = 1xi yi dm dm Jyizidm $z? dm $zi dm $yidm $zi dm J where 'r1 = (x.2-18) I. VISION.2-17) Sxizi din Sxi dm $yizi dm $yidm dm -'a+Iyy+Izz 2 Ixy ". hence.2-16) The integral term inside the bracket is the inertia of all the points on link i. Also c.Izz 2 miff mi inixi miYi mini . 1)T as defined before. + Iyy . i i Ki = J dKi = 'hTr p=1r=1 Uip(f 'ri `riTdm) Uirlfpgr (3. AND INTELLIGENCE Ti yr.90 ROBOTICS: CONTROL.l which is defined as r bit i- h. are independent of the mass distribution of link i. It is constant for all points on link i and independent of the mass distribution of the link i. If we use inertia tensor Ii. then Ji can be expressed in inertia tensor as mixi l I._ (3. so summing all the kinetic energies of all links and putting the integral inside the bracket. f in.Iyy + Izz 2 Iyz Iyz mi Yi Ji = Irz (3. y1. z1.. . SENSING. k indicate principal axes of the ith coordinate frame and bii is the so-called Kronecker delta. j.2-15) The matrix Uii is the rate of change of the points ('ri) on link i relative to the base coordinate frame as qi changes. E Uipgp P=1 F. Uirgr Lr=1 din J J r i i Uip 1ri `rfTUirgpgr dm L p = I r=I i i p=1r=1 E E Uip('ri dm'rT) Ur4pcr 'L3 (3.Y v_~ 3-0 xk xixi dm L Lk J where the indices i. 1 1/2 i=1p=1r=1 [Tr (Uip Ji Ui1')4p 4r I (3.^ 2 z 2 .33 2 x.y. 3. i = 1. 0) is a gravity row vector expressed in the base coordinate system.2-19) where ki23 is the radius of gyration of link i about the yz axes and 'fi = (xi. yi. .4222 + ki33 kiz23 2 2 Yi 23 Ji = mi z 2 2 z 2 ki13 k i23 Y. .2..3 Potential Energy of a Robot Manipulator Let the total potential energy of a robot arm be P and let each of its link's potential energy be Pi: Pi = . .1 1 i=1 p=lr=l F. in the (x. 1 (3. g = (0. 
gZ.ROBOT ARM DYNAMICS 91 or using the radius of gyration of the rigid body m. zi) coordinate system. Hence.33 2 ki12 2 2 ki13 xi 2 k. Ji can be expressed as N. For a level system.2-20) which is a scalar quantity.22 + k. Note that the Ji are dependent on the mass distribution of link i and not their position or rate of motion and are expressed with respect to the ith coordinate frame. Z.k. 1)T is the center of mass vector of link i from the ith link coordinate frame and expressed in the ith link coordinate frame.. kill + ki22 . Yi> zi. 0. mig (°Ai'fi ) (3. j UipJ1Ur9p 4. . n (3.kilI + k.. 2. the Ji need be computed only once for evaluating the kinetic energy of a robot arm.8062 m/sec2). P= P.2-21) and the total potential energy of the robot arm can be obtained by summing all the potential energies in each link.g I. the total kinetic energy K of a robot arm is n n K = E Ki = '/z E Tr i=1 .12 z ki 1 i . gy.) . 0) and g is the gravitational constant (g = 9.2-22) where g = (gg.mig°fi = -mig(°Ai'f.J M_. Hence. 2-20) and (3.. .f' [Tr (Uij Ji UT )4j 4k] + mig(°Ai'ri) i=I (3....2. a. expressed as (3. .4"(t))T (3. The above equation can be expressed in a much simpler matrix notation form as Ti = E Dikgk + E E hikgk4.n (3. AND INTELLIGENCE 3. ..2-22).2-23) s. SENSING. (3.. 2..n + Ci k=l k=I m=I i = 1. q2(t). . Jj U ) 4k4n. the lagrangian function L = K . n.2-27) q(t) = an n x 1 vector of the joint variables of the robot arm and can be q(t) = ( q 1 ( t ) .2-30) . 4(t)) + c(q(t)) where 'i7 (3. - r. 2.92 ROBOTICS: CONTROL.P is given by n i i L=l n i=1 j=1 k=I . .+ T(t) = (T1(t). T2(t).2-28) q(t) = an n x 1 vector of the joint velocity of the robot arm and can be 4(t) = (41(t).. 2. .. n. 42(t). .2-24) for i = 1. . .2-29) ij(t) = an n x 1 vector of the acceleration of the joint variables q(t) and can be 9(t) = (41(t)..n=I j=i mjgUji'rj (3. . that is.Tn(t))T (3.4 Motion Equations of a Manipulator From Eqs. (3. .2-25) or in a matrix form as T(t) = D(q(t)) ij(t) + h(q(t). Ti = d dt aL a 4i J aL aqi -I- j _ j=i k=I Tr (Ujk Jj Uji) qk + T n j j Tr (Ujk.. . VISION. Applying the Lagrange-Euler formulation to the lagrangian function of the robot arm [Eq.. . . n j=i k=I . 42(t)... ..2-23)] yields the necessary generalized torque Ti for joint i actuator to drive the ith link of the manipulator.2-26) T(t) = n x 1 generalized torque vector applied at joints i = 1.(t))T expressed as (3.q. . expressed as . .2-35) D(O) = D13 D14 D35 V'1 D24 D25 D34 D35 D45 D55 D15 D16 D26 D36 D46 D56 D66 where D11 = Tr (UIIJ1UIi) + Tr (U21J2U21) + Tr (U31J3U3i) + Tr (U41J4Ua1) + Tr (U51 J5U5) + Tr (U61 J6U61 ) . . we have D11 D12 D13 D23 D33 D14 D24 D34 D44 D45 D15 D25 V'1 D16 D12 D22 D23 D26 D36 D46 D56 (3.) i. n (3.ROBOT ARM DYNAMICS 93 D(q) = an n x n inertial acceleration-related symmetric matrix whose elements are n Dik = j=max(i. 2.2-32) where hi = E E hikgkgn. n (3. . m = 1.2-26) to (3. 2. .2-31) h(q. c2.cn)T n where ci = E (.5 Motion Equations of a Robot Arm with Rotary Joints If the equations given by Eqs.. . (3... k=1 m=1 n i = 1. k = 1.2-31)..mj g Uji Jfj) j=i i = 1.2-34) are expanded for a six-axis robot arm with rotary joints.k) Tr (Ujk Jj U.. 4) = ( h 1 n n . m) Tr (UjkJj U) (3.. q) = an n x 1 nonlinear Coriolis and centrifugal force vector whose elements are h(q.... n and hik. . h2. . D(O). .2. k. . 2.2-34) 3. From Eq..2-33) c(q) = an n x 1 gravity loading force vector whose elements are c(q) = (c1. hn)T (3. k..n i. 2...n = j=max(i. . (3. 
3.2.5 Motion Equations of a Robot Arm with Rotary Joints

If the equations given by Eqs. (3.2-26) to (3.2-34) are expanded for a six-axis robot arm with rotary joints, then the following terms that form the dynamic motion equations are obtained.

The Acceleration-Related Symmetric Matrix, D(θ). From Eq. (3.2-31) we obtain the symmetric matrix

D(θ) =
  [ D_11  D_12  D_13  D_14  D_15  D_16
    D_12  D_22  D_23  D_24  D_25  D_26
    D_13  D_23  D_33  D_34  D_35  D_36
    D_14  D_24  D_34  D_44  D_45  D_46
    D_15  D_25  D_35  D_45  D_55  D_56
    D_16  D_26  D_36  D_46  D_56  D_66 ]        (3.2-35)

whose elements are, for example,

D_11 = Tr(U_11 J_1 U_11ᵀ) + Tr(U_21 J_2 U_21ᵀ) + Tr(U_31 J_3 U_31ᵀ) + Tr(U_41 J_4 U_41ᵀ) + Tr(U_51 J_5 U_51ᵀ) + Tr(U_61 J_6 U_61ᵀ)
D_12 = D_21 = Tr(U_22 J_2 U_21ᵀ) + Tr(U_32 J_3 U_31ᵀ) + Tr(U_42 J_4 U_41ᵀ) + Tr(U_52 J_5 U_51ᵀ) + Tr(U_62 J_6 U_61ᵀ)
D_13 = D_31 = Tr(U_33 J_3 U_31ᵀ) + Tr(U_43 J_4 U_41ᵀ) + Tr(U_53 J_5 U_51ᵀ) + Tr(U_63 J_6 U_61ᵀ)
D_14 = D_41 = Tr(U_44 J_4 U_41ᵀ) + Tr(U_54 J_5 U_51ᵀ) + Tr(U_64 J_6 U_61ᵀ)
D_15 = D_51 = Tr(U_55 J_5 U_51ᵀ) + Tr(U_65 J_6 U_61ᵀ)
D_16 = D_61 = Tr(U_66 J_6 U_61ᵀ)

and so on; every remaining element, down to D_66 = Tr(U_66 J_6 U_66ᵀ), is obtained in exactly the same manner from D_ij = D_ji = Σ_{k=max(i,j)}^{6} Tr(U_kj J_k U_kiᵀ).

The Coriolis and Centrifugal Terms, h(θ, θ̇). The velocity-related coefficients in the Coriolis and centrifugal terms in Eqs. (3.2-32) and (3.2-33) can be expressed separately by a 6 × 6 symmetric matrix, denoted by H_{i,v} for joint i and defined in the following way: the (k, m) entry of H_{i,v} is the coefficient h_ikm of Eq. (3.2-33),

H_{i,v} = [ h_ikm ],  k, m = 1, 2, ..., 6        i = 1, 2, ..., 6        (3.2-36)

Let the velocities of the six joint variables be expressed by a six-dimensional column vector

θ̇(t) = ( θ̇_1(t), θ̇_2(t), ..., θ̇_6(t) )ᵀ        (3.2-37)

Then Eq. (3.2-32) can be expressed in the compact matrix-vector product form

h_i = θ̇ᵀ H_{i,v} θ̇        i = 1, 2, ..., 6        (3.2-38)

where the subscript i refers to the joint at which the velocity-induced torques or forces are "felt." The expression given by Eq. (3.2-38) is a component of the six-dimensional column vector

h(θ, θ̇) = ( θ̇ᵀH_{1,v}θ̇, θ̇ᵀH_{2,v}θ̇, θ̇ᵀH_{3,v}θ̇, θ̇ᵀH_{4,v}θ̇, θ̇ᵀH_{5,v}θ̇, θ̇ᵀH_{6,v}θ̇ )ᵀ        (3.2-39)

The Gravity Terms, c(θ). From Eq. (3.2-34) we have

c(θ) = ( c_1, c_2, c_3, c_4, c_5, c_6 )ᵀ        (3.2-40)

where

c_1 = −( m_1 g U_11 ^1r̄_1 + m_2 g U_21 ^2r̄_2 + m_3 g U_31 ^3r̄_3 + m_4 g U_41 ^4r̄_4 + m_5 g U_51 ^5r̄_5 + m_6 g U_61 ^6r̄_6 )
c_2 = −( m_2 g U_22 ^2r̄_2 + m_3 g U_32 ^3r̄_3 + m_4 g U_42 ^4r̄_4 + m_5 g U_52 ^5r̄_5 + m_6 g U_62 ^6r̄_6 )
c_3 = −( m_3 g U_33 ^3r̄_3 + m_4 g U_43 ^4r̄_4 + m_5 g U_53 ^5r̄_5 + m_6 g U_63 ^6r̄_6 )
c_4 = −( m_4 g U_44 ^4r̄_4 + m_5 g U_54 ^5r̄_5 + m_6 g U_64 ^6r̄_6 )
c_5 = −( m_5 g U_55 ^5r̄_5 + m_6 g U_65 ^6r̄_6 )
c_6 = −m_6 g U_66 ^6r̄_6
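Once the H_{i,v} matrices of Eq. (3.2-36) are available, evaluating the Coriolis and centrifugal vector of Eq. (3.2-39) is a single quadratic form per joint. A minimal sketch (the names are illustrative):

```python
import numpy as np

def coriolis_vector(H, qdot):
    """h_i = qdot^T H_{i,v} qdot for each joint i, Eqs. (3.2-38)-(3.2-39).
    H is a list of the n symmetric n x n matrices H_{i,v}; qdot is the
    joint-velocity vector."""
    qdot = np.asarray(qdot)
    return np.array([qdot @ Hi @ qdot for Hi in H])
```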
The coefficients c_i, D_ik, and h_ikm in Eqs. (3.2-31) to (3.2-34) are functions of both the joint variables and the inertial parameters of the manipulator, and they are sometimes called the dynamic coefficients of the manipulator. The physical meaning of these dynamic coefficients can easily be seen from the Lagrange-Euler equations of motion given by Eqs. (3.2-26) to (3.2-34):

1. The coefficient D_ik is related to the acceleration of the joint variables and is defined by Eq. (3.2-31). In particular, for i = k, D_ii is related to the acceleration of joint i where the driving torque τ_i acts, while, for i ≠ k, D_ik is related to the reaction torque (or force) induced by the acceleration of joint k and acting at joint i, or vice versa. Since the inertia matrix is symmetric and Tr(A) = Tr(Aᵀ), it can be shown that D_ik = D_ki.

2. The coefficient h_ikm is related to the velocities of the joint variables and is defined by Eqs. (3.2-32) and (3.2-33). The last two indices, k and m, are related to the velocities of joints k and m, whose dynamic interplay induces a reaction torque (or force) at joint i; the first index i is always related to the joint where the velocity-induced reaction torques (or forces) are "felt." In particular, for k = m, h_ikk is related to the centrifugal force generated by the angular velocity of joint k and "felt" at joint i, while, for k ≠ m, h_ikm is related to the Coriolis force generated by the velocities of joints k and m and "felt" at joint i. Note that, for a given i, h_ikm = h_imk, which is apparent by physical reasoning.

3. The coefficient c_i represents the gravity loading terms due to the links and is defined by Eq. (3.2-34).

In evaluating these coefficients, it is worth noting that some of the coefficients may be zero for the following reasons:

1. The particular kinematic design of a manipulator can eliminate some dynamic coupling (D_ij and h_ikm coefficients) between joint motions.
2. Some of the velocity-related dynamic coefficients have only a dummy existence in Eqs. (3.2-32) and (3.2-33); that is, they are physically nonexistent. For instance, the centrifugal force will not interact with the motion of the joint which generates it, that is, h_iii = 0 always; however, it can interact with motions at the other joints in the chain, and we can have h_jii ≠ 0.
3. Due to particular variations in the link configuration during motion, some dynamic coefficients may become zero at particular instants of time.

It is of interest to evaluate the computational complexities inherent in obtaining the coefficients in Eqs. (3.2-31) to (3.2-34). Table 3.1 summarizes the computational complexities of the L-E equations of motion in terms of the mathematical operations (multiplications and additions) required to compute Eq. (3.2-26) for every set point in the trajectory. The table breaks the computation into the evaluation of the ^{i-1}A_i and U_ij matrices, the trace terms Tr[U_kj J_k U_kiᵀ] and Tr[U_kjm J_k U_kiᵀ], and the gravity terms Σ_j m_j g U_ji ^j r̄_j; the totals for computing D(q)q̈ + h(q, q̇) + c(q) are

multiplications:  (128/3) n⁴ + (512/3) n³ + (844/3) n² + (76/3) n
additions:        (98/3) n⁴ + (781/6) n³ + (637/3) n² + (107/6) n

where n is the number of degrees of freedom of the robot arm.

The motion equations of a manipulator as given by Eqs. (3.2-26) to (3.2-34) are coupled, nonlinear, second-order ordinary differential equations. These equations are in symbolic differential equation form and they include all inertial, centrifugal and Coriolis, and gravitational effects of the links. For a given set of applied torques τ_i (i = 1, 2, ..., n) as a function of time, Eq. (3.2-26) should be integrated simultaneously to obtain the actual motion of the manipulator in terms of the time history of the joint variables q(t). Then the time history of the joint variables can be transformed to obtain the time history of the hand motion (hand trajectory) by using the appropriate homogeneous transformation matrices. Conversely, if the time history of the joint variables, the joint velocities, and the joint accelerations is known ahead of time from a trajectory planning program, then Eqs. (3.2-26) to (3.2-34) can be utilized to compute the applied torques τ(t) as a function of time which is required to produce the particular planned manipulator motion. This is known as open-loop control. However, closed-loop control is more desirable for an autonomous robotic system, and, quite often in designing a feedback controller for a manipulator, the dynamic coefficients are used to minimize the nonlinear effects of the reaction forces (Markiewicz [1973]).

Because of its matrix structure, the L-E equations of motion are appealing from the closed-loop control viewpoint in that they give a set of state equations as in Eq. (3.2-26). This form allows the design of a control law that easily compensates for all the nonlinear effects. Computationally, however, these equations of motion are extremely inefficient as compared with other formulations. In the next section, we shall develop the motion equations of a robot arm which will prove to be more efficient in computing the nominal torques.

3.2.6 A Two-Link Manipulator Example

To show how to use the L-E equations of motion in Eqs. (3.2-26) to (3.2-34), an example is worked out in this section for the two-link manipulator with revolute joints shown in Fig. 3.2. All the rotation axes at the joints are along the z axis normal to the paper surface. The physical dimensions, such as the location of the center of mass and the mass of each link, and the coordinate systems are shown in Fig. 3.2. We assume the following: joint variables = θ_1, θ_2; mass of the links = m_1, m_2; link parameters α_1 = α_2 = 0, d_1 = d_2 = 0, and a_1 = a_2 = l. Then, from Fig. 3.2, Eq. (3.2-11), and the discussion in the previous section, the homogeneous coordinate transformation matrices ^{i-1}A_i (i = 1, 2) are obtained as

^0A_1 = [ C_1  −S_1  0  lC_1 ;  S_1  C_1  0  lS_1 ;  0  0  1  0 ;  0  0  0  1 ]
^1A_2 = [ C_2  −S_2  0  lC_2 ;  S_2  C_2  0  lS_2 ;  0  0  1  0 ;  0  0  0  1 ]
^0A_2 = ^0A_1 ^1A_2 = [ C_12  −S_12  0  l(C_12 + C_1) ;  S_12  C_12  0  l(S_12 + S_1) ;  0  0  1  0 ;  0  0  0  1 ]

where C_i = cos θ_i, S_i = sin θ_i, C_ij = cos(θ_i + θ_j), and S_ij = sin(θ_i + θ_j). From the definition of the Q matrix, for a rotary joint we have

Q_1 = Q_2 = [ 0  −1  0  0 ;  1  0  0  0 ;  0  0  0  0 ;  0  0  0  0 ]
Figure 3.2 A two-link manipulator.

Then the U_ij matrices follow from Eq. (3.2-12):

U_11 = ∂^0A_1/∂θ_1 = Q_1 ^0A_1 = [ −S_1  −C_1  0  −lS_1 ;  C_1  −S_1  0  lC_1 ;  0 0 0 0 ;  0 0 0 0 ]
U_21 = ∂^0A_2/∂θ_1 = Q_1 ^0A_2 = [ −S_12  −C_12  0  −l(S_12 + S_1) ;  C_12  −S_12  0  l(C_12 + C_1) ;  0 0 0 0 ;  0 0 0 0 ]
U_22 = ∂^0A_2/∂θ_2 = ^0A_1 Q_2 ^1A_2 = [ −S_12  −C_12  0  −lS_12 ;  C_12  −S_12  0  lC_12 ;  0 0 0 0 ;  0 0 0 0 ]

Assuming all the products of inertia are zero (each link is treated as a uniform rod of length l, so that ∫x_i² dm = m_i l²/3 about the distal link frame and x̄_i = −l/2), we can derive the pseudo-inertia matrices J_i from Eq. (3.2-18):

J_1 = [ 1/3 m_1 l²  0  0  −1/2 m_1 l ;  0 0 0 0 ;  0 0 0 0 ;  −1/2 m_1 l  0  0  m_1 ]
J_2 = [ 1/3 m_2 l²  0  0  −1/2 m_2 l ;  0 0 0 0 ;  0 0 0 0 ;  −1/2 m_2 l  0  0  m_2 ]

Then, using Eq. (3.2-31) and carrying out the trace operations, the elements of the D matrix are

D_11 = Tr(U_11 J_1 U_11ᵀ) + Tr(U_21 J_2 U_21ᵀ) = 1/3 m_1 l² + 4/3 m_2 l² + m_2 C_2 l²
D_12 = D_21 = Tr(U_22 J_2 U_21ᵀ) = 1/3 m_2 l² + 1/2 m_2 l² C_2
D_22 = Tr(U_22 J_2 U_22ᵀ) = 1/3 m_2 l²

To derive the Coriolis and centrifugal terms we use Eq. (3.2-32). For i = 1,

h_1 = Σ_{k=1}^{2} Σ_{m=1}^{2} h_1km θ̇_k θ̇_m = h_111 θ̇_1² + h_112 θ̇_1 θ̇_2 + h_121 θ̇_2 θ̇_1 + h_122 θ̇_2²

and evaluating the coefficients with Eq. (3.2-33) gives

h_1 = −1/2 m_2 S_2 l² θ̇_2² − m_2 S_2 l² θ̇_1 θ̇_2

Similarly, for i = 2,

h_2 = h_211 θ̇_1² + h_212 θ̇_1 θ̇_2 + h_221 θ̇_2 θ̇_1 + h_222 θ̇_2² = 1/2 m_2 S_2 l² θ̇_1²

Next, we derive the gravity-related terms, c = (c_1, c_2)ᵀ, using Eq. (3.2-34) with the gravity row vector g = (0, −g, 0, 0) and g = 9.8062 m/s²:

c_1 = −( m_1 g U_11 ^1r̄_1 + m_2 g U_21 ^2r̄_2 ) = 1/2 m_1 g l C_1 + 1/2 m_2 g l C_12 + m_2 g l C_1
c_2 = −m_2 g U_22 ^2r̄_2 = 1/2 m_2 g l C_12

Finally, the Lagrange-Euler equations of motion for the two-link manipulator are found to be τ(t) = D(θ) θ̈(t) + h(θ, θ̇) + c(θ), that is,

[ τ_1 ]   [ 1/3 m_1 l² + 4/3 m_2 l² + m_2 C_2 l²    1/3 m_2 l² + 1/2 m_2 l² C_2 ] [ θ̈_1 ]
[ τ_2 ] = [ 1/3 m_2 l² + 1/2 m_2 l² C_2             1/3 m_2 l²                  ] [ θ̈_2 ]

          [ −1/2 m_2 S_2 l² θ̇_2² − m_2 S_2 l² θ̇_1 θ̇_2 ]   [ 1/2 m_1 g l C_1 + 1/2 m_2 g l C_12 + m_2 g l C_1 ]
        + [ 1/2 m_2 S_2 l² θ̇_1²                        ] + [ 1/2 m_2 g l C_12                                 ]
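The closed-form model just derived is simple enough to evaluate directly. The sketch below, assuming the same uniform two-link arm and the gravity direction used above, returns the joint torques for a given state; the function name is illustrative.

```python
import numpy as np

def two_link_torques(theta, dtheta, ddtheta, m1, m2, l, g=9.8062):
    """Joint torques of the planar two-link arm of Fig. 3.2, using the
    closed-form D, h, c derived from the Lagrange-Euler formulation."""
    t1, t2 = theta
    C1, C2, C12 = np.cos(t1), np.cos(t2), np.cos(t1 + t2)
    S2 = np.sin(t2)
    D = np.array([[m1*l**2/3 + 4*m2*l**2/3 + m2*l**2*C2,
                   m2*l**2/3 + 0.5*m2*l**2*C2],
                  [m2*l**2/3 + 0.5*m2*l**2*C2,
                   m2*l**2/3]])
    h = np.array([-0.5*m2*l**2*S2*dtheta[1]**2 - m2*l**2*S2*dtheta[0]*dtheta[1],
                   0.5*m2*l**2*S2*dtheta[0]**2])
    c = np.array([0.5*m1*g*l*C1 + 0.5*m2*g*l*C12 + m2*g*l*C1,
                  0.5*m2*g*l*C12])
    return D @ np.asarray(ddtheta) + h + c

# Example: gravity-only torques with the arm at rest in the horizontal position.
print(two_link_torques([0.0, 0.0], [0.0, 0.0], [0.0, 0.0], m1=1.0, m2=1.0, l=1.0))
```

A useful check is that the same numbers must be reproduced by the Newton-Euler recursion of the next section, since both formulations describe the same dynamics.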
3.3 NEWTON-EULER FORMULATION

In the previous sections, we have derived a set of nonlinear second-order differential equations from the Lagrange-Euler formulation that describe the dynamic behavior of a robot arm. The use of these equations to compute the nominal joint torques from the given joint positions, velocities, and accelerations for each trajectory set point in real time has been a computational bottleneck in open-loop control. The problem is due mainly to the inefficiency of the Lagrange-Euler equations of motion, which use the 4 × 4 homogeneous transformation matrices. In order to perform real-time control, a simplified robot arm dynamic model has been proposed which ignores the Coriolis and centrifugal forces. This reduces the computation time for the joint torques to an affordable limit (e.g., less than 10 ms for each trajectory point using a PDP 11/45 computer). However, the Coriolis and centrifugal forces are significant in the joint torques when the arm is moving at fast speeds. Thus, the simplified robot arm dynamics restricts robot arm motion to slow speeds, which are not desirable in the typical manufacturing environment. Furthermore, the errors in the joint torques resulting from ignoring the Coriolis and centrifugal forces cannot be corrected with feedback control when the arm is moving at fast speeds because of excessive requirements on the corrective torques.

As an alternative to deriving more efficient equations of motion, several investigators turned to Newton's second law and developed various forms of Newton-Euler equations of motion for an open kinematic chain (Armstrong [1979], Orin et al. [1979], Luh et al. [1980a], Walker and Orin [1982]). This formulation, when applied to a robot arm, results in a set of forward and backward recursive equations with "messy" vector cross-product terms. The most significant aspect of this formulation is that the computation time of the applied torques can be reduced significantly to allow real-time control. The derivation is based on the d'Alembert principle and a set of mathematical equations that describe the kinematic relation of the moving links of a robot arm with respect to the base coordinate system.

In order to understand the Newton-Euler formulation, we need to review some concepts in moving and rotating coordinate systems.

3.3.1 Rotating Coordinate Systems

In this section, we shall develop the necessary mathematical relation between a rotating coordinate system and a fixed inertial coordinate frame, and then extend the concept to include a discussion of the relationship between a moving coordinate system (rotating and translating) and an inertial frame. From Fig. 3.3, consider two right-handed coordinate systems, an unstarred coordinate system OXYZ (inertial frame) and a starred coordinate system OX*Y*Z* (rotating frame), whose origins are coincident at a point O, and whose axes OX*, OY*, OZ* are rotating relative to the axes OX, OY, OZ, respectively. Let (i, j, k) and (i*, j*, k*) be their respective unit vectors along the principal axes. A point r fixed and at rest in the starred coordinate system can be expressed in terms of its components on either set of axes:
r = x i + y j + z k        (3.3-1)

or

r = x* i* + y* j* + z* k*        (3.3-2)

Figure 3.3 The rotating coordinate system.

We would like to evaluate the time derivative of the point r, and, because the coordinate systems are rotating with respect to each other, the time derivative of r(t) can be taken with respect to two different coordinate systems. Let us distinguish these two time derivatives by the following notation:

d( )/dt  = time derivative with respect to the fixed reference coordinate system = time derivative of r(t)        (3.3-3)
d*( )/dt = time derivative with respect to the starred coordinate system, which is rotating = starred derivative of r(t)        (3.3-4)

Then, using Eq. (3.3-1), the time derivative of r(t) can be expressed as

dr/dt = ẋ i + ẏ j + ż k + x di/dt + y dj/dt + z dk/dt = ẋ i + ẏ j + ż k        (3.3-5)

and, using Eq. (3.3-2), the starred derivative of r(t) is

d*r/dt = ẋ* i* + ẏ* j* + ż* k* + x* d*i*/dt + y* d*j*/dt + z* d*k*/dt = ẋ* i* + ẏ* j* + ż* k*        (3.3-6)

Using Eqs. (3.3-2) and (3.3-6), the time derivative of r(t) can also be expressed as

dr/dt = ẋ* i* + ẏ* j* + ż* k* + x* di*/dt + y* dj*/dt + z* dk*/dt = d*r/dt + x* di*/dt + y* dj*/dt + z* dk*/dt        (3.3-7)

In evaluating this derivative, we encounter the difficulty of finding di*/dt, dj*/dt, and dk*/dt, because the unit vectors i*, j*, and k* are rotating with respect to the unit vectors i, j, and k. In order to find a relationship between the starred and unstarred derivatives, let us suppose that the starred coordinate system is rotating about some axis OQ passing through the origin O, with angular velocity ω (see Fig. 3.4); then the angular velocity ω is defined as a vector of magnitude ω directed along the axis OQ in the direction of a right-hand rotation with the starred coordinate system. Consider a vector s at rest in the starred coordinate system. Its starred derivative is zero, and we would like to show that its unstarred derivative is

ds/dt = ω × s        (3.3-8)

Since the time derivative of a vector can be expressed as

ds/dt = lim_{Δt→0} [ s(t + Δt) − s(t) ] / Δt        (3.3-9)

we can verify the correctness of Eq. (3.3-8) by showing that

ds/dt = ω × s = lim_{Δt→0} [ s(t + Δt) − s(t) ] / Δt        (3.3-10)

both in direction and magnitude. With reference to Fig. 3.4, the magnitude of ds/dt is

| ds/dt | = | ω × s | = ω s sin θ        (3.3-11)

The above equation is correct because, if Δt is small, then

| Δs | = ( s sin θ )( ω Δt )        (3.3-12)

which is obvious in Fig. 3.4. The direction of ω × s can be found from the definition of the vector cross product to be perpendicular to s and in the plane of the circle, as shown in Fig. 3.4.
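The relation ds/dt = ω × s, and the rotating-frame derivative rule built on it, can be checked numerically. The following minimal sketch, assuming a frame rotating about OZ at a constant rate, compares a finite-difference derivative of a frame-fixed point with ω × r; the variable names and the specific numbers are illustrative.

```python
import numpy as np

def rot_z(a):
    """Rotation of the starred frame about OZ by angle a."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

# A point at rest in the starred frame, which rotates about OZ at w rad/s.
w, r_star, t, dt = 0.7, np.array([1.0, 2.0, 0.5]), 1.3, 1e-6
omega = np.array([0.0, 0.0, w])

r_now  = rot_z(w * t) @ r_star
r_next = rot_z(w * (t + dt)) @ r_star
numeric  = (r_next - r_now) / dt        # dr/dt by finite differences
analytic = np.cross(omega, r_now)       # Eq. (3.3-8): d*r/dt = 0, so dr/dt = w x r
print(np.allclose(numeric, analytic, atol=1e-5))    # expected: True
```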
If Eq. (3.3-8) is applied to the unit vectors (i*, j*, k*), then Eq. (3.3-7) becomes

dr/dt = d*r/dt + x*(ω × i*) + y*(ω × j*) + z*(ω × k*) = d*r/dt + ω × r        (3.3-13)

Figure 3.4 Time derivative of a rotating coordinate system.

This is the fundamental equation establishing the relationship between time derivatives for rotating coordinate systems. Taking the derivative of the right- and left-hand sides of Eq. (3.3-13), and applying Eq. (3.3-8) again to r and d*r/dt, we obtain the second time derivative of the vector r(t):

d²r/dt² = d/dt [ d*r/dt + ω × r ] = d*²r/dt² + 2ω × d*r/dt + ω × (ω × r) + dω/dt × r        (3.3-14)

Equation (3.3-14) is called the Coriolis theorem. The first term on the right-hand side is the acceleration relative to the starred coordinate system. The second term is called the Coriolis acceleration. The third term is called the centripetal (toward the center) acceleration of a point in rotation about an axis; one can verify that ω × (ω × r) points directly toward and perpendicular to the axis of rotation. The last term vanishes for a constant angular velocity of rotation about a fixed axis.

3.3.2 Moving Coordinate Systems

Let us extend the above rotating coordinate systems concept further to include the translational motion of the starred coordinate system with respect to the unstarred coordinate system. In Fig. 3.5, the starred coordinate system O*X*Y*Z* is rotating and translating with respect to the unstarred coordinate system OXYZ, which is an inertial frame. A particle p with mass m is located by vectors r* and r with respect to the origins of the coordinate frames O*X*Y*Z* and OXYZ, respectively. Origin O* is located by a vector h with respect to the origin O. The relation between the position vectors r and r* is given by (Fig. 3.5)

r = r* + h        (3.3-15)

Figure 3.5 Moving coordinate system.

If the starred coordinate system O*X*Y*Z* is moving (rotating and translating) with respect to the unstarred coordinate system OXYZ, then

v(t) ≡ dr/dt = dr*/dt + dh/dt        (3.3-16)

where v and v_h ≡ dh/dt are, respectively, the velocity of the particle p and the velocity of the starred coordinate system, both relative to the unstarred coordinate system OXYZ. Using Eq. (3.3-13), Eq. (3.3-16) can be expressed as

v = d*r*/dt + ω × r* + v_h = v* + ω × r* + v_h        (3.3-17)

where v* ≡ d*r*/dt is the velocity of the moving particle p relative to the starred coordinate frame. Similarly, the acceleration of the particle p with respect to the unstarred coordinate system is

a(t) ≡ dv/dt = d²r*/dt² + d²h/dt²        (3.3-18)

and, using Eq. (3.3-14), Eq. (3.3-18) can be expressed as

a(t) = d*²r*/dt² + 2ω × d*r*/dt + ω × (ω × r*) + dω/dt × r* + d²h/dt²        (3.3-19)

where a* ≡ d*²r*/dt² and a_h ≡ d²h/dt² are, respectively, the acceleration of the particle relative to the starred coordinate frame and the acceleration of the starred coordinate system O*X*Y*Z* relative to the unstarred coordinate system OXYZ.

3.3.3 Kinematics of the Links

The objective of this section is to derive a set of mathematical equations that, based on the moving coordinate systems described in Sec. 3.3.2, describe the kinematic relationship of the moving-rotating links of a robot arm with respect to the base coordinate system. With this introduction to moving coordinate systems, we would like to apply the concept to the link coordinate systems that we established for a robot arm, and then apply the d'Alembert principle to these translating and/or rotating coordinate systems to derive the motion equations of the robot arm.

With reference to Fig. 3.6, recall that an orthonormal coordinate system (x_{i-1}, y_{i-1}, z_{i-1}) is established at joint i. Coordinate system (x_0, y_0, z_0) is the base coordinate system, while the coordinate systems (x_{i-1}, y_{i-1}, z_{i-1}) and (x_i, y_i, z_i) are attached to link i − 1 with origin O* and to link i with origin O', respectively. Origin O' is located by a position vector p_i with respect to the origin O of the base frame and by a position vector p_i* from the origin O*, both expressed with respect to the base coordinate system. Origin O* is located by a position vector p_{i-1} from the origin O with respect to the base coordinate system.

Figure 3.6 Relationship between the O, O*, and O' frames.

Let v_{i-1} and ω_{i-1} be the linear and angular velocities of the coordinate system (x_{i-1}, y_{i-1}, z_{i-1}) with respect to the base coordinate system (x_0, y_0, z_0), respectively. Let ω_i and ω_i* be the angular velocities of O' with respect to (x_0, y_0, z_0) and to (x_{i-1}, y_{i-1}, z_{i-1}), respectively. Then the linear velocity v_i and the angular velocity ω_i of the coordinate system (x_i, y_i, z_i) with respect to the base coordinate system are [from Eq. (3.3-17)]

v_i = d*p_i*/dt + ω_{i-1} × p_i* + v_{i-1}        (3.3-20)

and

ω_i = ω_{i-1} + ω_i*        (3.3-21)

respectively, where d*( )/dt denotes the time derivative with respect to the moving coordinate system (x_{i-1}, y_{i-1}, z_{i-1}). The linear acceleration v̇_i and the angular acceleration ω̇_i of the coordinate system (x_i, y_i, z_i) with respect to the base coordinate system are [from Eq. (3.3-19)]

v̇_i = d*²p_i*/dt² + 2ω_{i-1} × (d*p_i*/dt) + ω̇_{i-1} × p_i* + ω_{i-1} × (ω_{i-1} × p_i*) + v̇_{i-1}        (3.3-22)

and

ω̇_i = ω̇_{i-1} + d*ω_i*/dt + ω_{i-1} × ω_i*        (3.3-23)

respectively. From Eq. (3.3-13), the angular acceleration of the coordinate system (x_i, y_i, z_i) with respect to (x_{i-1}, y_{i-1}, z_{i-1}) is

dω_i*/dt = d*ω_i*/dt + ω_{i-1} × ω_i*        (3.3-24)

therefore Eq. (3.3-23) can be expressed as

ω̇_i = ω̇_{i-1} + dω_i*/dt        (3.3-25)

Recalling from the definition of the link-joint parameters and the procedure for establishing link coordinate systems for a robot arm, if link i is translational in the coordinate system (x_{i-1}, y_{i-1}, z_{i-1}), it travels in the direction of z_{i-1} with a joint velocity q̇_i relative to link i − 1; if it is rotational in the coordinate system (x_{i-1}, y_{i-1}, z_{i-1}), its angular motion is about the z_{i-1} axis. Therefore,

ω_i* = z_{i-1} q̇_i  if link i is rotational,   0  if link i is translational        (3.3-26)

where q̇_i is the magnitude of the angular velocity of link i with respect to the coordinate system (x_{i-1}, y_{i-1}, z_{i-1}). Similarly,

d*ω_i*/dt = z_{i-1} q̈_i  if link i is rotational,   0  if link i is translational        (3.3-27)

Using Eqs. (3.3-26) and (3.3-27), Eqs. (3.3-21) and (3.3-25) can be expressed, respectively, as

ω_i = ω_{i-1} + z_{i-1} q̇_i  if link i is rotational,   ω_{i-1}  if link i is translational        (3.3-28)

ω̇_i = ω̇_{i-1} + z_{i-1} q̈_i + ω_{i-1} × (z_{i-1} q̇_i)  if link i is rotational,   ω̇_{i-1}  if link i is translational        (3.3-29)

Similarly, the starred derivatives of p_i* depend on the joint type,

d*p_i*/dt = 0  if link i is rotational,   z_{i-1} q̇_i  if link i is translational        (3.3-30)
d*²p_i*/dt² = 0  if link i is rotational,   z_{i-1} q̈_i  if link i is translational        (3.3-31)

Therefore, the linear velocity of link i with respect to the reference frame is [from Eq. (3.3-20)]

v_i = ω_i × p_i* + v_{i-1}  if link i is rotational,   z_{i-1} q̇_i + ω_i × p_i* + v_{i-1}  if link i is translational        (3.3-32)

Using the vector cross-product identities (3.3-33) and (3.3-34), among them

a × (b × c) = b (a · c) − c (a · b)

and Eqs. (3.3-28) to (3.3-31), the linear acceleration of link i with respect to the reference frame is [from Eq. (3.3-22)]

v̇_i = ω̇_i × p_i* + ω_i × (ω_i × p_i*) + v̇_{i-1}  if link i is rotational
v̇_i = z_{i-1} q̈_i + ω̇_i × p_i* + ω_i × (ω_i × p_i*) + 2ω_i × (z_{i-1} q̇_i) + v̇_{i-1}  if link i is translational        (3.3-35)

Note that ω_i = ω_{i-1} if link i is translational. Equations (3.3-28), (3.3-29), (3.3-32), and (3.3-35) describe the kinematics information of link i that is useful in deriving the motion equations of a robot arm.

3.3.4 Recursive Equations of Motion for Manipulators

From the above kinematic information of each link, we would like to describe the motion of the robot arm links by applying d'Alembert's principle to each link. d'Alembert's principle applies the conditions of static equilibrium to problems in dynamics by considering both the externally applied driving forces and the reaction forces of mechanical elements which resist motion, and it applies for all instants of time. It is actually a slightly modified form of Newton's second law of motion,
and can be stated as: for any body, the algebraic sum of externally applied forces and the forces resisting motion in any given direction is zero.

Consider a link i as shown in Fig. 3.7, and let the origin O' be situated at its center of mass. Then, by corresponding the variables defined in Fig. 3.6 with those in Fig. 3.7, the remaining undefined variables, expressed with respect to the base reference system (x_0, y_0, z_0), are:

m_i  = total mass of link i
r_i  = position of the center of mass of link i from the origin of the base reference frame
s̄_i  = position of the center of mass of link i from the origin of the coordinate system (x_i, y_i, z_i)
p_i* = origin of the ith coordinate frame with respect to the (i − 1)th coordinate frame
v̄_i  = dr_i/dt, linear velocity of the center of mass of link i
ā_i  = dv̄_i/dt, linear acceleration of the center of mass of link i
F_i  = total external force exerted on link i at the center of mass
N_i  = total external moment exerted on link i at the center of mass
I_i  = inertia matrix of link i about its center of mass with reference to the coordinate system (x_0, y_0, z_0)
f_i  = force exerted on link i by link i − 1 at the coordinate frame (x_{i-1}, y_{i-1}, z_{i-1}) to support link i and the links above it
n_i  = moment exerted on link i by link i − 1 at the coordinate frame (x_{i-1}, y_{i-1}, z_{i-1})

Figure 3.7 Forces and moments on link i.

Then, omitting the viscous damping effects of all the joints and applying the d'Alembert principle to each link, we have

F_i = d(m_i v̄_i)/dt = m_i ā_i        (3.3-36)
N_i = d(I_i ω_i)/dt = I_i ω̇_i + ω_i × (I_i ω_i)        (3.3-37)

where, using Eqs. (3.3-32) and (3.3-35), the linear velocity and acceleration of the center of mass of link i are, respectively,†

v̄_i = ω_i × s̄_i + v_i        (3.3-38)
ā_i = ω̇_i × s̄_i + ω_i × (ω_i × s̄_i) + v̇_i        (3.3-39)

† Here (x_i, y_i, z_i) is the moving-rotating coordinate frame.

Then, from Fig. 3.7, looking at all the forces and moments acting on link i, the total external force F_i and moment N_i are those exerted on link i by gravity and the neighboring links, link i − 1 and link i + 1. That is,

F_i = f_i − f_{i+1}        (3.3-40)
N_i = n_i − n_{i+1} + (p_{i-1} − r_i) × f_i − (p_i − r_i) × f_{i+1}        (3.3-41)

Then, using the fact that

r_i − p_{i-1} = p_i* + s̄_i        (3.3-42)

the above equations can be rewritten into recursive equations:

f_i = F_i + f_{i+1} = m_i ā_i + f_{i+1}        (3.3-43)
n_i = n_{i+1} + p_i* × f_{i+1} + (p_i* + s̄_i) × F_i + N_i        (3.3-44)

The above equations are recursive and can be used to derive the forces and moments (f_i, n_i) at the links for i = 1, 2, ..., n for an n-link manipulator, where f_{n+1} and n_{n+1} are, respectively, the forces and moments exerted by the manipulator hand upon an external object.

From Chap. 2, the kinematic relationship between the neighboring links and the establishment of coordinate systems show that if joint i is rotational, then link i actually rotates q_i radians in the coordinate system (x_{i-1}, y_{i-1}, z_{i-1}) about the z_{i-1} axis. Thus, the input torque at joint i is the sum of the projection of n_i onto the z_{i-1} axis and the viscous damping moment in that coordinate system. If joint i is translational, then it translates q_i units relative to the coordinate system (x_{i-1}, y_{i-1}, z_{i-1}) along the z_{i-1} axis, and the input force τ_i at that joint is the sum of the projection of f_i onto the z_{i-1} axis and the viscous damping force in that coordinate system. Hence, the input torque/force for joint i is

τ_i = n_iᵀ z_{i-1} + b_i q̇_i  if link i is rotational,   f_iᵀ z_{i-1} + b_i q̇_i  if link i is translational        (3.3-45)

where b_i is the viscous damping coefficient for joint i. If the supporting base is bolted on the platform and link 0 is stationary, then ω_0 = ω̇_0 = 0 and v_0 = 0, and, to include gravity,

v̇_0 = g = (g_x, g_y, g_z)ᵀ,   where |g| = 9.8062 m/s²        (3.3-46)

In summary, the Newton-Euler equations of motion consist of a set of forward and backward recursive equations, namely Eqs. (3.3-28), (3.3-29), (3.3-35), and (3.3-39) for the forward recursion and Eqs. (3.3-43) to (3.3-45) for the backward recursion; they are listed in Table 3.2. For the forward recursive equations, the linear velocity and acceleration and the angular velocity and acceleration of each individual link are propagated from the base reference system to the end-effector. For the backward recursive equations, the torques and forces exerted on each link are computed recursively from the end-effector to the base reference system. Hence, the forward equations propagate the kinematics information of each link from the base reference frame to the hand, while the backward equations compute the necessary torques/forces for each joint from the hand to the base reference system.

Table 3.2 Recursive Newton-Euler equations of motion

Forward equations, for i = 1, 2, ..., n:
  ω_i  = ω_{i-1} + z_{i-1} q̇_i                                  (rotational);  ω_{i-1}  (translational)
  ω̇_i  = ω̇_{i-1} + z_{i-1} q̈_i + ω_{i-1} × (z_{i-1} q̇_i)        (rotational);  ω̇_{i-1}  (translational)
  v̇_i  = ω̇_i × p_i* + ω_i × (ω_i × p_i*) + v̇_{i-1}              (rotational)
       = z_{i-1} q̈_i + ω̇_i × p_i* + ω_i × (ω_i × p_i*) + 2ω_i × (z_{i-1} q̇_i) + v̇_{i-1}  (translational)
  ā_i  = ω̇_i × s̄_i + ω_i × (ω_i × s̄_i) + v̇_i

Backward equations, for i = n, n − 1, ..., 1:
  F_i  = m_i ā_i
  N_i  = I_i ω̇_i + ω_i × (I_i ω_i)
  f_i  = F_i + f_{i+1}
  n_i  = n_{i+1} + p_i* × f_{i+1} + (p_i* + s̄_i) × F_i + N_i
  τ_i  = n_iᵀ z_{i-1} + b_i q̇_i  (rotational);  f_iᵀ z_{i-1} + b_i q̇_i  (translational)

where b_i is the viscous damping coefficient for joint i. The "usual" initial conditions are ω_0 = ω̇_0 = v_0 = 0 and v̇_0 = (g_x, g_y, g_z)ᵀ with |g| = 9.8062 m/s² (to include gravity).
3.3.5 Recursive Equations of Motion of a Link About Its Own Coordinate Frame

The above equations of motion of a robot arm indicate that the resulting N-E dynamic equations, excluding gear friction, are a set of compact forward and backward recursive equations. This set of recursive equations can be applied to the robot links sequentially. The forward recursion propagates kinematics information, such as angular velocities, angular accelerations, and linear accelerations, from the base reference frame (inertial frame) to the end-effector. The backward recursion propagates the forces exerted on each link from the end-effector of the manipulator to the base reference frame, and the applied joint torques are computed from these forces. One obvious drawback of the above recursive equations of motion is that all the inertia matrices I_i and the physical geometric parameters (r_i, s̄_i, p_i*) are referenced to the base coordinate system; as a result, they change as the robot arm is moving.

Luh et al. [1980a] improved the above N-E equations of motion by referencing all velocities, accelerations, inertia matrices, locations of the center of mass of each link, and forces/moments to their own link coordinate systems. Because of the nature of the formulation and the method of systematically computing the joint torques, computations are much simpler. The most important consequence of this modification is that the computation time of the applied torques is found to be linearly proportional to the number of joints of the robot arm and independent of the robot arm configuration. This enables the implementation of a simple real-time control algorithm for a robot arm in the joint-variable space.

Let ^{i-1}R_i be a 3 × 3 rotation matrix which transforms any vector with reference to coordinate frame (x_i, y_i, z_i) to the coordinate system (x_{i-1}, y_{i-1}, z_{i-1}). It is the upper left 3 × 3 submatrix of ^{i-1}A_i:

^{i-1}R_i = [ cos θ_i   −cos α_i sin θ_i   sin α_i sin θ_i
              sin θ_i    cos α_i cos θ_i  −sin α_i cos θ_i
              0          sin α_i           cos α_i          ]        (3.3-48)

It has been shown before that

( ^{i-1}R_i )⁻¹ = ^iR_{i-1} = ( ^{i-1}R_i )ᵀ        (3.3-49)

Instead of computing ω_i, ω̇_i, v̇_i, ā_i, p_i*, s̄_i, F_i, N_i, f_i, n_i, and τ_i, which are referenced to the base coordinate system, we compute ^iR_0 ω_i, ^iR_0 ω̇_i, ^iR_0 v̇_i, ^iR_0 ā_i, ^iR_0 p_i*, ^iR_0 s̄_i, ^iR_0 F_i, ^iR_0 N_i, ^iR_0 f_i, ^iR_0 n_i, and τ_i, which are referenced to each link's own coordinate system (x_i, y_i, z_i). Hence, Eqs. (3.3-28), (3.3-29), (3.3-35), (3.3-39), (3.3-36), (3.3-37), (3.3-43), (3.3-44), and (3.3-45), respectively, become:

^iR_0 ω_i  = ^iR_{i-1} ( ^{i-1}R_0 ω_{i-1} + z_0 q̇_i )  if link i is rotational
           = ^iR_{i-1} ( ^{i-1}R_0 ω_{i-1} )  if link i is translational        (3.3-50)

^iR_0 ω̇_i  = ^iR_{i-1} [ ^{i-1}R_0 ω̇_{i-1} + z_0 q̈_i + ( ^{i-1}R_0 ω_{i-1} ) × (z_0 q̇_i) ]  if rotational
           = ^iR_{i-1} ( ^{i-1}R_0 ω̇_{i-1} )  if translational        (3.3-51)

^iR_0 v̇_i  = ( ^iR_0 ω̇_i ) × ( ^iR_0 p_i* ) + ( ^iR_0 ω_i ) × [ ( ^iR_0 ω_i ) × ( ^iR_0 p_i* ) ] + ^iR_{i-1} ( ^{i-1}R_0 v̇_{i-1} )  if rotational
           = ^iR_{i-1} ( z_0 q̈_i + ^{i-1}R_0 v̇_{i-1} ) + ( ^iR_0 ω̇_i ) × ( ^iR_0 p_i* ) + 2 ( ^iR_0 ω_i ) × ( ^iR_{i-1} z_0 q̇_i ) + ( ^iR_0 ω_i ) × [ ( ^iR_0 ω_i ) × ( ^iR_0 p_i* ) ]  if translational        (3.3-52)

^iR_0 ā_i  = ( ^iR_0 ω̇_i ) × ( ^iR_0 s̄_i ) + ( ^iR_0 ω_i ) × [ ( ^iR_0 ω_i ) × ( ^iR_0 s̄_i ) ] + ^iR_0 v̇_i        (3.3-53)

^iR_0 F_i  = m_i ^iR_0 ā_i        (3.3-54)

^iR_0 N_i  = ( ^iR_0 I_i ^0R_i ) ( ^iR_0 ω̇_i ) + ( ^iR_0 ω_i ) × [ ( ^iR_0 I_i ^0R_i ) ( ^iR_0 ω_i ) ]        (3.3-55)

^iR_0 f_i  = ^iR_{i+1} ( ^{i+1}R_0 f_{i+1} ) + ^iR_0 F_i        (3.3-56)

^iR_0 n_i  = ^iR_{i+1} [ ^{i+1}R_0 n_{i+1} + ( ^{i+1}R_0 p_i* ) × ( ^{i+1}R_0 f_{i+1} ) ] + ( ^iR_0 p_i* + ^iR_0 s̄_i ) × ( ^iR_0 F_i ) + ^iR_0 N_i        (3.3-57)

τ_i = ( ^iR_0 n_i )ᵀ ( ^iR_{i-1} z_0 ) + b_i q̇_i  if link i is rotational
    = ( ^iR_0 f_i )ᵀ ( ^iR_{i-1} z_0 ) + b_i q̇_i  if link i is translational        (3.3-58)

where z_0 = (0, 0, 1)ᵀ, ^iR_0 s̄_i is the center of mass of link i referred to its own link coordinate system (x_i, y_i, z_i), and ( ^iR_0 I_i ^0R_i ) is the inertia matrix of link i about its center of mass referred to its own link coordinate system. ^iR_0 p_i* is the location of the (x_i, y_i, z_i) frame from the origin of (x_{i-1}, y_{i-1}, z_{i-1}) with respect to the ith coordinate frame, and is found to be

^iR_0 p_i* = ( a_i, d_i sin α_i, d_i cos α_i )ᵀ        (3.3-59)

Hence, in summary, the efficient Newton-Euler equations of motion are a set of forward and backward recursive equations with the dynamics and kinematics of each link referenced to its own coordinate system. Table 3.3 (efficient recursive Newton-Euler equations of motion) collects Eqs. (3.3-50) to (3.3-58), applied forward for i = 1, ..., n and backward for i = n, ..., 1, together with the usual initial conditions ω_0 = ω̇_0 = v_0 = 0 and v̇_0 = (g_x, g_y, g_z)ᵀ with |g| = 9.8062 m/s² (to include gravity).

3.3.6 Computational Algorithm

The Newton-Euler equations of motion represent the most efficient set of computational equations running on a uniprocessor computer at the present time. The computational complexity of the Newton-Euler equations of motion is tabulated in Table 3.4; the total number of mathematical operations (multiplications and additions) is proportional to n, the number of degrees of freedom of the robot arm.

Table 3.4 Breakdown of mathematical operations of the Newton-Euler equations of motion for a PUMA robot arm

Newton-Euler term        Multiplications   Additions
^iR_0 ω_i                9n                7n
^iR_0 ω̇_i                9n                9n
^iR_0 v̇_i                27n               22n
^iR_0 ā_i                15n               14n
^iR_0 F_i                3n                0
^iR_0 N_i                24n               18n
^iR_0 f_i                9(n − 1)          9n − 6
^iR_0 n_i, τ_i           21n − 15          24n − 15
Total                    117n − 24         103n − 21

Since the equations of motion obtained are recursive in nature, it is advisable to state an algorithmic approach for computing the input joint torque/force for each joint actuator. Such an algorithm is given below.

Algorithm 3.1: Newton-Euler approach. Given an n-link manipulator, this computational procedure generates the nominal joint torque/force for all the joint actuators. Computations are based on the equations in Table 3.3.

Initial conditions: n = number of links (n joints); ω_0 = ω̇_0 = v_0 = 0; v̇_0 = g = (g_x, g_y, g_z)ᵀ, where |g| = 9.8062 m/s². Joint variables are q_i, q̇_i, q̈_i for i = 1, 2, ..., n; link variables are ^iR_0 ω_i, ^iR_0 ω̇_i, ^iR_0 v̇_i, ^iR_0 ā_i, ^iR_0 F_i, ^iR_0 N_i, ^iR_0 f_i, ^iR_0 n_i, and τ_i.

Forward iterations:
N1. [Set counter for iteration] Set i ← 1.
N2. [Forward iteration for kinematics information] Compute ^iR_0 ω_i, ^iR_0 ω̇_i, ^iR_0 v̇_i, and ^iR_0 ā_i using the equations in Table 3.3.
N3. [Check i = n?] If i = n, go to step N4; otherwise set i ← i + 1 and return to step N2.

Backward iterations:
N4. [Set f_{n+1} and n_{n+1}] Set f_{n+1} and n_{n+1} to the required force and moment, respectively, to carry the load. If there is no load, they are set to zero.
N5. [Compute joint force/torque] With f_{n+1} and n_{n+1} given, compute ^iR_0 F_i, ^iR_0 N_i, ^iR_0 f_i, ^iR_0 n_i, and τ_i.
N6. [Backward iteration] If i = 1, stop; otherwise set i ← i − 1 and go to step N5.

3.3.7 A Two-Link Manipulator Example

In order to illustrate the use of the N-E equations of motion, the same two-link manipulator with revolute joints as shown in Fig. 3.2 is worked out in this section. All the rotation axes at the joints are along the z axis perpendicular to the paper surface. The physical dimensions, center of mass, and mass of each link, and the coordinate systems, are given in Sec. 3.2.6.

First, we obtain the rotation matrices from Fig. 3.2, using Eqs. (3.3-48) and (3.3-49):

^0R_1 = [ C_1  −S_1  0 ;  S_1  C_1  0 ;  0  0  1 ]        ^1R_2 = [ C_2  −S_2  0 ;  S_2  C_2  0 ;  0  0  1 ]
^0R_2 = ^0R_1 ^1R_2 = [ C_12  −S_12  0 ;  S_12  C_12  0 ;  0  0  1 ]
^1R_0 = ( ^0R_1 )ᵀ,   ^2R_0 = ( ^0R_2 )ᵀ,   ^2R_1 = ( ^1R_2 )ᵀ

From the equations in Table 3.3, we assume the initial conditions ω_0 = ω̇_0 = v_0 = 0 and v̇_0 = (0, g, 0)ᵀ with g = 9.8062 m/s².

Forward equations for i = 1, 2. Using Eq. (3.3-50) with ω_0 = 0, the angular velocities of the links are

^1R_0 ω_1 = (0, 0, θ̇_1)ᵀ        ^2R_0 ω_2 = (0, 0, θ̇_1 + θ̇_2)ᵀ

and, using Eq. (3.3-51) with ω̇_0 = 0, the angular accelerations are

^1R_0 ω̇_1 = (0, 0, θ̈_1)ᵀ        ^2R_0 ω̇_2 = (0, 0, θ̈_1 + θ̈_2)ᵀ

Using Eq. (3.3-52) with v̇_0 = (0, g, 0)ᵀ and ^iR_0 p_i* = (l, 0, 0)ᵀ, the linear accelerations of the link frames are

^1R_0 v̇_1 = ( −l θ̇_1² + g S_1,   l θ̈_1 + g C_1,   0 )ᵀ
^2R_0 v̇_2 = ( l S_2 θ̈_1 − l C_2 θ̇_1² − l (θ̇_1 + θ̇_2)² + g S_12,   l C_2 θ̈_1 + l S_2 θ̇_1² + l (θ̈_1 + θ̈_2) + g C_12,   0 )ᵀ

and, using Eq. (3.3-53) with ^iR_0 s̄_i = (−l/2, 0, 0)ᵀ, the linear accelerations of the centers of mass are

^1R_0 ā_1 = ( −(l/2) θ̇_1² + g S_1,   (l/2) θ̈_1 + g C_1,   0 )ᵀ
^2R_0 ā_2 = ( l S_2 θ̈_1 − l C_2 θ̇_1² − (l/2)(θ̇_1 + θ̇_2)² + g S_12,   l C_2 θ̈_1 + l S_2 θ̇_1² + (l/2)(θ̈_1 + θ̈_2) + g C_12,   0 )ᵀ
Backward equations for i = 2, 1. Assuming no-load conditions, f_3 = n_3 = 0. Using Eq. (3.3-56) for i = 2 and then i = 1,

^2R_0 f_2 = ^2R_0 F_2 = m_2 ^2R_0 ā_2
^1R_0 f_1 = ^1R_2 ( ^2R_0 f_2 ) + m_1 ^1R_0 ā_1

Using Eq. (3.3-57) with ^2R_0 p_2* = (l, 0, 0)ᵀ, ^2R_0 s̄_2 = (−l/2, 0, 0)ᵀ, and the link inertia matrices about the centers of mass ^iR_0 I_i ^0R_i = diag(0, 1/12 m_i l², 1/12 m_i l²), the moments are

^2R_0 n_2 = ( ^2R_0 p_2* + ^2R_0 s̄_2 ) × ( ^2R_0 F_2 ) + ^2R_0 N_2
^1R_0 n_1 = ^1R_2 [ ^2R_0 n_2 + ( ^2R_0 p_1* ) × ( ^2R_0 f_2 ) ] + ( ^1R_0 p_1* + ^1R_0 s̄_1 ) × ( ^1R_0 F_1 ) + ^1R_0 N_1

Finally, using Eq. (3.3-58) with b_1 = b_2 = 0, we obtain the joint torques applied to each of the joint actuators for both links:

τ_2 = ( ^2R_0 n_2 )ᵀ ( ^2R_1 z_0 )
    = 1/3 m_2 l² θ̈_1 + 1/3 m_2 l² θ̈_2 + 1/2 m_2 l² C_2 θ̈_1 + 1/2 m_2 l² S_2 θ̇_1² + 1/2 m_2 g l C_12

τ_1 = ( ^1R_0 n_1 )ᵀ ( ^1R_0 z_0 )
    = 1/3 m_1 l² θ̈_1 + 4/3 m_2 l² θ̈_1 + 1/3 m_2 l² θ̈_2 + m_2 C_2 l² θ̈_1 + 1/2 m_2 l² C_2 θ̈_2
      − m_2 S_2 l² θ̇_1 θ̇_2 − 1/2 m_2 S_2 l² θ̇_2² + 1/2 m_1 g l C_1 + 1/2 m_2 g l C_12 + m_2 g l C_1

The above equations of motion agree with those obtained from the Lagrange-Euler formulation in Sec. 3.2.6.
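For reference, the recursion of Table 3.2 is short enough to sketch directly in code. The sketch below is the base-coordinate form for an all-revolute arm (Table 3.2, with viscous damping and hand loads omitted), not the link-frame form of Table 3.3; it assumes the caller has already resolved the configuration-dependent vectors z_{i-1}, p_i*, s̄_i and the inertia matrices I_i in base coordinates, and all names are illustrative.

```python
import numpy as np

def rnea_base_frame(z, pstar, s, m, I, qd, qdd, g=np.array([0.0, 0.0, 9.8062])):
    """Joint torques by the recursive Newton-Euler equations of Table 3.2.

    For link i = 1..n (Python index i-1), all in base coordinates:
      z[i-1]     joint axis z_{i-1} (unit vector)
      pstar[i-1] p_i*, origin O_{i-1} to origin O_i
      s[i-1]     s_i, origin O_i to the center of mass of link i
      m[i-1]     link mass;  I[i-1] 3x3 inertia about the center of mass
    qd, qdd are joint rates and accelerations; g is set opposite to gravity
    (pointing up) so that gravity loading is included, as in Eq. (3.3-46).
    """
    n = len(m)
    w, wd, vd = np.zeros(3), np.zeros(3), g.astype(float).copy()
    W, Wd, A = [], [], []
    for i in range(n):                                   # forward recursion
        w_prev = w
        w  = w_prev + z[i] * qd[i]                                        # (3.3-28)
        wd = wd + z[i] * qdd[i] + np.cross(w_prev, z[i] * qd[i])          # (3.3-29)
        vd = np.cross(wd, pstar[i]) + np.cross(w, np.cross(w, pstar[i])) + vd   # (3.3-35)
        a  = np.cross(wd, s[i]) + np.cross(w, np.cross(w, s[i])) + vd     # (3.3-39)
        W.append(w); Wd.append(wd); A.append(a)
    f, nmom, tau = np.zeros(3), np.zeros(3), np.zeros(n)
    for i in reversed(range(n)):                         # backward recursion
        F = m[i] * A[i]                                                   # (3.3-36)
        N = I[i] @ Wd[i] + np.cross(W[i], I[i] @ W[i])                    # (3.3-37)
        nmom = nmom + np.cross(pstar[i], f) + np.cross(pstar[i] + s[i], F) + N  # (3.3-44)
        f = F + f                                                         # (3.3-43)
        tau[i] = nmom @ z[i]                                              # (3.3-45), b_i = 0
    return tau
```

Applied to the two-link arm above (with z_{i-1} = (0, 0, 1)ᵀ and the p_i*, s̄_i vectors expressed in base coordinates for the current configuration), this recursion reproduces the torques τ_1 and τ_2 just derived.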
(sRoZi-I)TIS.n I-1 aei S (j=I E Bj sRozj- 'j J (3. B''Rozj-1 j=1 T I + r.0 i1ej Rozj-I S T Lj=1 J + (5Rozi-1)"1.128 ROBOTICS: CONTROL. VISION.4-16) y.. .D 'C3 `m. .. J E BjsRozj-1 j=1 S j=1 E Bj sRozj-1 dt dd sRozi-I X EOjsRozj-1J Is j=i + (sRozi-I)TIs + (sRozi-I)TI5 Next. (3.. T = [e. ..SRoz. AND INTELLIGENCE then the time derivative of Eq.. 2.t with respect to the generalized coordinate Oi (s >.4-16) and summing all the links from i to n gives us the reaction torques due to the rotational effects of all the links.1 L k=I T Is .s F.4-13) is d dt d 'rROZ_1Is 0. (3. '+" J J [sRozi_Ij s Nom j=1 r r s 1 "W° S rRozj =I O1 1 k=j+I XE Ok SRoZk I (3.4-17) j=I 4-. i).4-14).I1 (3.s=i r» aei (sRozi-1)TIs s=i L j=I S EBjsRozj. using Eq.I1 j=1 x sRozi-I Is [e.!] d clt a(K.4-18) . )rot aei a(K. ] i = 1. .E.E. we can find the partial derivative of (Ks)r.I Bk cRozk- + (sRozi-I)TI5 sRoZi-I X L Bj sRozj. = E g ms [zi-1 X ( rS .)rot ae.E. )T and g = 9.4-18).g ' ms rS = .4-10). g.-I X (rs-Pi-I)] j=1 . a(K. i).-I)Tls IE1W Rozj-lJ j= S k r rs-I +E S=i ms Fi k=1 HI BgZq-I X J [Oqz_1 rd 1q=1 X pk* J J L.pi-1)l (3.E.ROBOT ARM DYNAMICS 129 The potential energy of the robot arm equals to the sum of the potential energies of each link.8062 m/s2.I + p1* + .pi-1)] (3.i q= I X O P Zp-I [zi-IX(rs-pi-1)] J J ..) aei a(P. (3.E.) aei =)1 " a(PS) i aei IL.) aei s-I k=I 11 k Eej Zj j=1 X pk* + r.g mS ( pi . gy. = E PS S=1 (3.a(K.pi-1) ae = g'mS [zi-1 X (rs . + cs) aei = g'mS acts .E.E.E. P. [ezii1 Xes )'[z.4-20) where g = (gX. b-3 + d dt S a(K. .)rot aei + a(P..)tran ae. Summing all the links from i to n gives the reaction torques due to the gravity effects of all the links..+E1 it (SRo2. + cS ) (3. Applying the Lagrange-Euler formulation to the potential energy of link s with respect to the generalized coordinate ei (s >.E.. (3. and (3.4-22) is equal to the generalized applied torque exerted at joint i to drive link i.4-21) where pi-1 is not a function of O.4-22) The summation of Eqs.4-19) where PS is the potential energy of link s given by Ps = . we have d dt a(PS) aei a(PS) aei a(PP) aei =g' ms a(pi-1 + pi* + . d di' a(P. s SRozi-I X F.. 2. VISION. .P.I.4-24) where. . The above equation can be rewritten (for i = 1. n): /I D. for i = 1. 6) + ci = Ti(t) j=I (3. [e_I J X cs 11 1 F Bqz _ 1 g=1 X BpZp_ I . .1 X pk* + cs L k=j [Zi-I x RS . .4-25) . n. . r» Dij = D. . 2.n. AND INTELLIGENCE i [Opz_ 11 X +S ' [ms P . . a more "structured" form as .pi-1)] s=j [zi-1 X (rs .2 .4-23) i = 1.jr` + D. cup .i [(SRozi-I)TIs (sRoZj-1)] s-1 S=j 11 + s=j E _ n [ms Zj .Jan = G.. .pi-1)] i < j [(SRozi-I)TIS (SRozj-1)] s=j r + E j ms[zj-I x (rs .130 ROBOTICS: CONTROL._ r j=1 r k=i+l Bk'Rozk-I . SENSING. BvsRozn-I P=1 I. r.jej(t) + hvan(B. 9) + hip°t(6.-1)] oil i<j (3.[Zi-1 X (rs-Pi-1)1 J r urS ('Rozi-) TIS M.s E BgSRoZq_ q=1 J J J _g for ZiIX [Emi(fiPi_i) Its j=i j in a-+ (3. while if i # j. k ins htran(e 0) S=i X t J Fd q=1 BgZq- X Pk* I rIi'egzq-1 X BPZP J J X pk* J J [zi-1 X (rs . These coefficients have the following physical interpretation: 1.ROBOT ARM DYNAMICS 131 also.4-25) indicates the inertial effects of moving link j on joint i due to the rotational motion of link j. and vice versa.pi-1)] (3. we have ci = -g 11 zi-I x Em1(rj .pi-I) J=i (3. The first term of Eq. Equation (3. it is the pseudoproducts of inertia of link j felt at joint i due to the rotational motion of link j. The elements of the Di. (3.4-25) reveals the acceleration effects of joint j acting on joint i where the driving torque Ti acts. If i = j. 
II =w5 ms [f[ I I S'6PZP-1 J BgZq-I X X Cs J 9=1 S X BPZP-1 J X ie' E [zi-1 X (1's .4-28) The dynamic coefficients Did and ci are functions of both the joint variables and inertial parameters of the manipulator. it is the effective inertias felt at joint i due to the rotational motion of link i. while the hits" and h[ot are functions of the joint variables. The second term has the same physical meaning except that it is due to the translational motion of link j acting on joint i. . the joint velocities and inertial parameters of the manipulator.Pi-1)] q_' J +r.4-27) SRozi-1 X E BPSRozP-1 P=i J IS eq SRozq-1 Lq=1 Finally.j matrix are related to the inertia of the links in the manipulator.4-26) P=2 Lq=' and 11 hirot(8 0) = EJ (SRozi-1)TIS S=i T 'I" J (3. However. As an indication of their computational complexities. most of the cross-product terms can be computed very fast. If q.132 ROBOTICS: CONTROL. The first and third terms of Eq. respectively. SENSING. +-' . then it indicates the Coriolis forces acting on joint i. Eqs. due to the translational motion of the links.9. (3. the Coriolis reaction forces contributed from the links below link s and link s itself.tran (0. 3. Table 3. 0) term is related to the velocities of the joint variables..a) ^L7 boo `gyp p4. s. the centrifugal and Coriolis reaction forces from all the links below link s and link s itself. (3. The coefficient c= represents the gravity effects acting on joint i from the links above joint i. . Eq.4-25) to (3. (3. 8). then it indicates the ..4-27) indicates purely the Coriolis reaction forces of joints p and q acting on joint i due to the rotational motion of the links. At first sight.4 Rotation matrices and position vectors Recursive equations Kinematics representation 4 x 4 Homogeneous matrices Equations of motion Closed-form differential equations 4/3n3+44n2 + 146/3n+45 Rotation matrices and position vectors Closed-form differential equations t n = number of degrees of freedom of the robot arm. but if p # q. If p = q. Similar to h. Table 3. c°'. respectively. a block diagram explicitly showing the procedure in calculating these coefficients for every set point in the trajectory in terms of multiplication and addition operations is shown in Fig. The second term is the combined centrifugal and Coriolis reaction forces acting on joint i. ti. due to the translational motion of the links.yan (0. 4. (3. `CS O'5 . then it represents the Coriolis forces acting on joint i due to the rotational motion of the links. The h. 0) term is also related to the velocities of the joint variables.5 Comparison of robot arm dynamics computational complexitiest Approach Lagrange-Euler 128/3 n4 + 512/3 n3 °-n Newton-Euler Generalized d'Alembert 13/6 n3 + 105/2 n2 Multiplications Additions + 739/3 n2 + 160/3 n 98/3 n4 + 781/6n3 132n + 268/3 n + 69 + 559/3 n2 + 245/6 n llln . The first term of Eq. The second and p fourth terms of Eq.:. If p = q. No effort is spent here to optimize the computation.4-27) reveals the combined centrifugal and Coriolis reaction torques felt at joint i due to the velocities of joints p and q resulting from the rotational motion of links p and q. centrifugal reaction forces felt at joint i. and G-D equations of motion in terms of required mathematical operations per trajectory set point. AND INTELLIGENCE 2. The K0'(8. (3. 3.4-26) represents the combined centrifugal and Coriolis reaction torques felt at joint i due to the velocities of joints p and q resulting from the translational motion of links p and q. VISION. 
3. Similar to h_i^tran(θ, θ̇), the h_i^rot(θ, θ̇) term is also related to the velocities of the joint variables. The first term of Eq. (3.4-27) reveals the combined centrifugal and Coriolis reaction torques felt at joint i due to the velocities of joints p and q resulting from the rotational motion of links p and q. If p = q, then it represents the centrifugal reaction forces felt at joint i; but if p ≠ q, then it indicates the Coriolis forces acting on joint i due to the rotational motion of the links. The second term of Eq. (3.4-27) indicates purely the Coriolis reaction forces of joints p and q acting on joint i due to the rotational motion of the links.

4. The coefficient c_i represents the gravity effects acting on joint i from the links above joint i.

At first sight, Eqs. (3.4-25) to (3.4-28) would seem to require a large amount of computation. However, most of the cross-product terms can be computed very fast. As an indication of their computational complexities, a block diagram explicitly showing the procedure for calculating these coefficients for every set point in the trajectory, in terms of multiplication and addition operations, is shown in Fig. 3.9. No effort is spent here to optimize the computation.

Figure 3.9 Computational procedure for D_ij, h_i^tran, h_i^rot, and c_i (M indicates a multiplication operation, A an addition operation, and S a memory storage requirement).

Table 3.5 summarizes the computational complexities of the L-E, N-E, and G-D equations of motion in terms of the required mathematical operations per trajectory set point.

Table 3.5 Comparison of robot arm dynamics computational complexities†

Approach                  Multiplications                                     Additions                                          Kinematics representation              Equations of motion
Lagrange-Euler            128/3 n⁴ + 512/3 n³ + 739/3 n² + 160/3 n            98/3 n⁴ + 781/6 n³ + 559/3 n² + 245/6 n            4 × 4 homogeneous matrices              Closed-form differential equations
Newton-Euler              132 n                                               111 n − 4                                          Rotation matrices and position vectors  Recursive equations
Generalized d'Alembert    13/6 n³ + 105/2 n² + 268/3 n + 69                   4/3 n³ + 44 n² + 146/3 n + 45                      Rotation matrices and position vectors  Closed-form differential equations

† n = number of degrees of freedom of the robot arm.
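The polynomial operation counts of Table 3.5 can be evaluated directly for a particular arm. The short sketch below does this for a six-joint manipulator (n = 6); it simply evaluates the expressions in the table, so the printed numbers are only as exact as the counts themselves.

```python
# Operation counts of Table 3.5 evaluated for a six-joint arm (n = 6).
n = 6
counts = {
    "Lagrange-Euler": (128/3*n**4 + 512/3*n**3 + 739/3*n**2 + 160/3*n,
                       98/3*n**4 + 781/6*n**3 + 559/3*n**2 + 245/6*n),
    "Newton-Euler": (132*n, 111*n - 4),
    "Generalized d'Alembert": (13/6*n**3 + 105/2*n**2 + 268/3*n + 69,
                               4/3*n**3 + 44*n**2 + 146/3*n + 45),
}
for name, (mult, add) in counts.items():
    print(f"{name:24s} {mult:9.0f} multiplications {add:9.0f} additions")
```

The comparison makes the trade-off explicit: the N-E recursion is by far the cheapest, while the G-D formulation buys a closed-form, structured model at a cost that is still far below that of the L-E formulation.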
coo The less dominant terms or elements can be neglected in calculating the dynamic equations of motion of the manipulator. AND INTELLIGENCE dominance from the translational and rotational effects can be computed for each set point along the trajectory. D23. D55. we consider a PUMA 560 robot and its dynamic equations of motion along a preplanned trajectory. (3) Both translational and rotational effects are dominant for the remaining elements of the D matrix. D45.11 shows the Coriolis and centrifugal elements hr`ra" and hTo' These figures show the separate and combined effects from the translational and rotational terms.7 9781 0 (8%) _ 1 I L_ 1800 ova _ 2400 3000 06(5) 1200 Time (ms) Acceleration effects on joint 3 (D33) Time (ms) Acceleration effects on joint 4 (D34) 0133 elements Magnitude of elements c c 3933 Magnitude o 0 00161 I I . The total number of trajectory set points used is 31. and D66 elements. D46. D33. Figure 3. and h. D22. 3. As an example of obtaining a simplified dynamic model for a specific trajectory. SENSING. In Fig. the elements tran and D45" show a staircase shape which is due primarily to the round-off _-.10 shows the acceleration-related elements D.) . and D56 elements. (2) The rotational effect is dominant for the D44. 3.11.11..10 (Continued. 3. 9245 Magnitude of elements d O V 0 7 a) 6246 Q) Magnitude of elements 10-3) N 3247 -5319 Cp c W 0247 0 000 ---- 0600 0 1200 200 1800 . The D.1 053 I 0 000 2 -2 5(X) 24110 3(X%) 0600 120)) 1800 0 000 060() 1_100 180() 2400 3000 Time (ms) Time (ms) Acceleration effects on joint 5 (D35) Acceleration effects on joint 6 (D36) Figure 3.t an and DT`.10.ro` elements along the trajectory are computed and plotted in Figs. a" DIJ ht`ra".136 ROBOTICS: CONTROL. This greatly aids the construction of a simplified dynamic model for control purpose. VISION. From Fig. ) x) 1 010 5 444 2 778 0100 0 000 0600 1200 11106 __t'__t 0 000 0600 1200 1800 2400 3000 T---_i 1800 2400 3000 Time (ms) Acceleration effects on joint 6 (D46) Time (ms) Acceleration effects on joint 5 (DS5) (9-01 0000 7 o_ 0 X 3 010 X 2 010 O v -2 667 1 010 -c en 4 o(K) 0 000 0600 1200 0100 0 000 -I 0600 - i _.4000 Magnitude of elements 4 853 0 2.10 (Continued.ROBOT ARM DYNAMICS 137 error generated by the VAX-11/780 computer used in the simulation. 7 275 O_ . (3) Both translational and rotational effects are dominant for the h5 and h6 elements.431 O .1 333 Magnitude of elements Figure 3.8000 00 0090. we can approximate the elements of the h vector as follows: (1) The translational effect is dominant for the hl . These elements are very small in magnitude when compared with the rotational elements. 2400 3000 1800 2400 3000 1200 Time (ms) Acceleration effects on joint 6 (D56) Time (ms) Acceleration effects on joint 6 (D66) . and h3 elements.-I LW 2400 0 000 0600 1200 1800 Time (ms) Acceleration effects on joint 4 (D44) Time (ms) Acceleration effects on joint 5 (D45) 3 010 8111 41 x) 2010 Magnitude of elements Magnitude of elements 10-6) Magnitude of elements . 0 000 _ 0600 1200 1800 .(88X) Magnitude of elements (x u d X x) . (2) The rotational effect is dominant for the h4 element. I 1800 1 -. Similarly. h2. h6 . VISION. The resulting simplified model retains most of the major interaction and coupling a-t reaction forces/torques at a reduced computation time.5667 5033 b () (XX) (M) -q 5(1() (X8) 12)X) ( 80X) 24(8) 1(8X) 1((0) 12(8) (8(8) 2_4(X) l(Xx) Time (ms) Coriolis and centrifugal effects on joint 5.11 The Coriolis and centrifugal terms h. 
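Returning to the simplification procedure of Sec. 3.4.1, the sketch below shows one way the dominance classification could be mechanized: given the translational and rotational parts of the acceleration-related matrix evaluated at every set point of a trajectory, each element is labeled as translationally dominant, rotationally dominant, or mixed. The input arrays, the function name, and the dominance ratio are all assumptions of this sketch; the text itself only compares magnitudes by inspection of the plotted elements.

```python
import numpy as np

# D_tran[k] and D_rot[k] are assumed to hold the translational and rotational
# parts of the 6 x 6 acceleration-related matrix D at trajectory set point k,
# as produced by a generalized d'Alembert dynamics routine (not shown here).
def classify_elements(D_tran, D_rot, ratio=10.0):
    """Label each D[i][j] as 'tran', 'rot', or 'both' over a trajectory.
    The ratio threshold is an assumption of this sketch, not a value from the text."""
    tran_mag = np.max(np.abs(np.asarray(D_tran)), axis=0)   # worst case over set points
    rot_mag  = np.max(np.abs(np.asarray(D_rot)),  axis=0)
    labels = np.full(tran_mag.shape, "both", dtype=object)
    labels[tran_mag > ratio * rot_mag] = "tran"
    labels[rot_mag  > ratio * tran_mag] = "rot"
    return labels
```

Elements labeled with a single dominant effect could then keep only that contribution in the simplified model, which is the essence of the approximation described above.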
r-U 6209 for translational effect for rotational effect for combined effect 0000 Magnitude of elements 7- Magnitude of elements 2 0000 0 000 0600 1200 1800 2400 3000 12771 0 (XX) i i r91 1800 I I 060X) 1200 2400 M5%) Time (ms) Time (ms) h1 Coriolis and centrifugal effects on joint I. SENSING. h5 Time (ms) Coriolis and centrifugal effects on joint 6. x) E .0027 O (XX) Magnitude of elements 10-3) OM81 12(X) )81X1 24)X) 7(XX) (( ON) 0600 1200 1800 2400 3000 Time (ms) Time (ms) Coriolis and centrifugal effects on joint 3.138 ROBOTICS: CONTROL. AND INTELLIGENCE The above simplification depends on the specific trajectory being considered. h2 1170 Magnitude of elements 0770 0371 . which greatly aids the design of an appropriate law for controlling the robot arm. h4 elements (x I 113 E u v u Magnitude of elements E Magnitude Figure 3.. Coriolis and centrifugal effects on joint 2. h3 Coriolis and centrifugal effects on joint 4. 3. and S.2.12 0 0 0 0 The rotation matrices are. 0 0 1 C2 . yields the following expressions for the link inertia tensors: 10 0 '/12m112 0 0 0 0 '112 m212 0 0 'h2 m212 I1 = 0 0 112m.. The physical parameters of the manipulator such as p. and assuming that each link is 1 units long.2 A Two-Link Manipulator Example Consider the two-link manipulator shown in Fig. We would like to derive the generalized d'Alembert equations of motion for it.i = cos(01 + O ).. Cl _S1 C.4. are: 1C' i(C' + C12) Pz = P'* =P'= is.j = sin (O + Of). and p. 0 P2 = 1(Sl + S12) 0 cI 1 C12 1 [ 1C1 + 1 C12 1 r2 = cl = r1 = 2 Si c2 = 1 1 2 l S12 l 2 1S' + 2 S12 1 + 0 0 1 0 . )T 2R0 = (OR2)T where Ci = cos8 Si = sin 0. r. and m2 represent the link masses. = S1 'R2 = S2 C2 0 0 0 0 C'2 OR2=oR -S12 C12 01 0 1 R2 = S'2 0 and 0 'R0 = (OR.*. c. Letting m.ROBOT ARM DYNAMICS 139 3. respectively. C .S2 0 0 1 OR. AND INTELLIGENCE Using Eq. r'/3m112 + 4/3m212 + m212C2 '/3m212 + 'hm2C212 '/3m212 [Di1 ] = D21 D22 '/3m212 + '/2m2C212 To derive the hf' (0. we obtain the elements of the D matrix.PI) I Thus.0.4-27) in our example because the other terms are zero. 0) components. we need to consider only the following terms in Eqs.1)I2 NIA 0 1 r0 0 fCI x 1S 2 ' + m1 0 1 x 0 + m. (3. (3.l2 + 'hm2C212 D22 = (2R0z1)TI2(2Roz1) + m2(z1 X e2) = '/12m212 + '/4m212 = '/3m212 . as follows: DII = ('Rozo)T11('Rozo) + (2 Rozo)TI2(2Rozo) + m1 (zo x CI) (zo x 'fl) + m2[zo X (PI* + c2)] (zo X r2) 0 =(0.140 ROBOTICS: CONTROL.[zl X (r2 .4-26) and (3. . SENSING.0.1)I1 FBI 0 +(0. 0 1 x 0 0 1_ x 0 = '/3m112 + 4/3m212 + C2m212 D12 = D21 = (2 Rozo)TI2(2Roz1) + m2(zI X CZ) (z0 X r2) '/3m. VISION.4-25). 0) and h/01(0. m212S2010.n2[01 z0 x (01 zo x Pl*)] [z1 x (r2 . (3.P1)] = 'hm2g1C12 . 52120102 '/zm2S2120 `+l h = h2 To derive the elements of the c vector. hl = h2ran = hoot = -'hm212S202 -m212S20162 Similarly. we use Eq.'hm212S202 . we can find . h2 = htran + hz t = '/2 m212S2O Therefore. 5.1/2m. Thus.1262 .P1)] x [(01 zo + 02z1) X c2] + ( 01z0 x 0221 ) X c2} 'h m212S201 h2t = (2Roz1)TI2(012Rozo x 022Roz1) + [2Roz1 x (012Rozo + 022Roz1)]TI2(012Rozo + 022Roz1) = 0 We note that h.O1 = h2 ` = 0 which simplifies the design of feedback control law.'hm212S20 . r h1 1 .4-28): C1 = -g [zo x (mlrl + M2-f2)] = ('/2m1 + m2)g1C1 + 'hm2g1C12 C2 = -g [ zl x M2(-f2 . 
hI°t = (1Rozo x 01'Rozo)TI1(01'Rozo) + (2Rozo)TI2(012Rozo x 622Roz1) + [2Rozo x (6] 2Rozo + 622Roz1)]TI2(012Rozo + 622Roz1) = 0 Thus.m.ROBOT ARM DYNAMICS 141 ran m2[01 zo X (01 zo x P1*)] (zo X r2) + m1 [01 zo X (01 zo x C1)] (zo x -fl) + m2[(01zo + 02z1) x [(01zo + 02z1) x c2) + (01zo x 02z1) X c2] (zo x r2) 'hm212S20 .POI + m2{(01 zo + 0221) [zl x (r2 . -g..142 ROBOTICS: CONTROL. the G-D equations of motion explicitly indicate the contributions of the translational and rotational effects of the links. . Thus. a user is able to choose between a formulation which is highly structured but computationally a. . The N-E formulation results in a very efficient set of 7-+ recursive equations.. AND INTELLIGENCE where g = (0. Based on the above results. The L-E equations of motion can be expressed in a well structured form. The G-D equations of motion give fairly well "structured" equations at the expense of higher computational cost. a formulation which has efficient computations at the expense of the "structure" of the equations of motion (N-E). SENSING. the gravity loading vector c becomes ('/zm1 + m2 )glC1 + 'hm2g1C12 1 c = C2 C. To briefly summarize the results. but they are computationally difficult to utilize for real-time control purposes unless they are simplified. In addition to having faster computation time than the L-E equations of motion.5 CONCLUDING REMARKS Three different formulations for robot arm dynamics have been presented and discussed. 0)T. Such information is useful for control analysis in obtaining an appropriate approximate model of a manipulator. I 'hm2 g1CI2 where g = 9. and a formulation which retains the "structure" of the problem with only a moderate computational penalty (G-D). the G-D equations of motion can be 't7 s. [1968]).+ mss REFERENCES Further reading on general concepts on dynamics can be found in several excellent mechanics books (Symon [1971] and Crandall et al. The derivation of 'C3 inefficient (L-E). but they are difficult to use for deriving advanced control laws. VISION.M2 S2120162 '/2m2 S2 l28 ('/zm1 + m2 )glC1 + '/zm2glC12 '/2m2 g1C12 3. it follows that the equations of motion of the two-link robot arm by the generalized d'Alembert method are: TI(t) T2(t) C '/3m112 + 4/3m212 + m2C212 '/3m212 + '/2m2C212 '/3m212 + '/zm2C212 '/3m212 01(t) 02(t) m2S2l2B2 .fl used in manipulator design.8062 m/s2. c. Furthermore. accelerations. based on the generalized d'Alembert principle. Luh and Lin [1981b] utilized the N-E equations of motion and compared their terms in a computer to eliminate various terms and then rearranged the remaining terms to form the equations of motion in a symbolic form. As an alternative to deriving more efficient equations of motion is to develop efficient algorithms for computing the generalized forces/torques based on the N-E equations of motion. °rn >a' -°o 4. and forces/ moments.. [1980a] improved the computations by referencing all velocities.k. to their own link coordinate frames.. [1979] were among the first to exploit the recursive nature of the Newton-Euler equations of motion. Simplification of L-E equations of motion can be achieved via a differential transformation (Paul [1981]). Bejczy and Lee [1983] developed the model reduction method which is based on the homo.. rya Nay Kelly [1982] developed an algorithmic approach for deriving the equations of motion suitable for computer implementation.Y. '-t o. a model reduction method (Bejczy and Lee [1983])... Lee et al.. Luh et al. location of the center of mass of each link. 
Exploiting the recursive nature of the lagrangian formulation. Though the structure of the L-E and the N-E equations of motion are different. Walker and Orin [1982] extended the N-E formulation to computing the joint accelerations for computer simulation of robot motion. However. [1980] explicitly verified that one can obtain the L-E motion equations from the N-E equations. the Coriolis and centrifugal term.LS gyp. and Orin et al. which contains the second-order partial derivative was not simplified by Paul [1981].b' 0 ate) °)w s.0 Neuman and Tourassis [1983] and Murray and Neuman [1984] developed computer software for obtaining the equations of motion of manipulators in symbolic form. inertial matrices.. The differential transformation technique converts the partial derivative of the homogeneous transformation matrices into a matrix product of the transformation and a differential matrix.ROBOT ARM DYNAMICS 143 Lagrange-Euler equations of motion using the 4 x 4 homogeneous transformation matrix was first carried out by Uicker [1965]. [1983]. Huston and bon t-+ "C3 . thus reducing the acceleration-related matrix DIk to a much simpler form. fl. Neuman and Tourassis [1985] developed a discrete dynamic model of a manipulator.. Hollerbach [1980] further improved the computation time of the generalized torques based on the lagrangian formulation. An excellent report written by Bejczy [1974] reviews the details of the dynamics and control of an extended Stanford robot arm (the JPL arm). The report by Lewis [1974] contains a more detailed derivation of Lagrange-Euler equations of motion for a sixjoint manipulator. >.z . z-. . Turney et al. o'- . derived equations of motion which are expressed explicitly in vector-matrix form suitable for control analysis. and an equivalent-composite approach (Luh and Lin [1981b]). h. The report also discusses a scheme for obtaining simplified equations of motion. . Armstrong [1979]. while Silver [1982] investigated the equivalence of the L-E and the N-E equations of motion through tensor analysis.fl geneous transformation and on the lagrangian dynamics and utilized matrix numeric analysis technique to simplify the Coriolis and centrifugal term. respectively.. a particle at rest in the starred coordinate sys- tem is located by a vector r(t) = 3ti + 2tj + 4k with respect to the unstarred coordinate system (reference frame). 1) T. yo.1 (a) What is the meaning of the generalized coordinates for a robot arm? (b) Give two different sets of generalized coordinates for the robot arm shown in the figure below.5 With references to the cube of mass M and side 2a shown in the figure below.1) is another body-attached coordinate frame at the center of mass .1 and 3. w) is the body-attached coordinate frame.144 ROBOTICS: CONTROL. k) are unit vectors along the principal axes of the reference frame. and (xc. 1.2. j. y'. x1) is located at (-1. (u.3-17) when (a) h = 0 and ((DD (b) dhldt = 0 (that is. a particle fixed in an intermediate coordinate frame (xi. 3.-' . find the Coriolis and centripetal accelerations. zo) is the reference coordinate frame. where (i. and zo axes. (3. h is a constant vector). Ycm. v.3..3. VISION. If the starred coordinate frame is only rotating with respect to the reference frame with w = (0.2 As shown in the figure below. Draw two separate figures of the arm indicating the generalized coordinates that you chose.. zc. j. xo) where i.3-13) and Eq.72 3. 3. yo.. 3. (3.3 With reference to Secs. 
Find the acceleration of the particle with respect to the reference frame. 0.4 Discuss the differences between Eq. (xo. AND INTELLIGENCE PROBLEMS 3. yo. The intermediate coordinate frame is moving translationally with a velocity of 3ti + 2tj + 4k with respect to the reference frame (xo. 2) in that coordinate frame. zp xi Yo ---------y xo 3. and k are unit vectors along the x0. ((D . SENSING. (b) Find the inertia tensor at the center of mass in the (xcm. zcm) coordinate system.ROBOT ARM DYNAMICS 145 of the cube. and 2c: zo yn to . 3. (a) Find the inertia tensor in the (x0. zo) coordinate system. 2b. yo.6 Repeat Prob. ycm.5 for this rectangular block of mass M and sides 2a. zo Ycm 3. ). z°) coordinate system. 3. SENSING. .t . However. the h(q"(tI )).. i7' Eat --i .7 for the rectangular block in Prob. 7i' 3. ). one should be able to find the D(gd(t1 )). and n needed to find all the elements in the D(q) matrix in the L-E equations of motion. Determine the inertia tensor in the (x°.8 Repeat Prob.) is the inertial tensor of link i about the ith coordinate frame. namely. °'i -a= . gd(ti )). .3-22) is a row vector of the form (0.01 tions and additions in terms of N. -3. (q°(t. Assume that N multiplications and M additions are required to compute the torques applied to the joint motors for a particular robot. then its Coriolis and centrifugal forces/torques can be omitted from the equations of motion formulated by the Lagrange-Euler approach.. 3. where there is a negative sign for a level system.h CAD '°h robot? 3. 3.11. g I ) ' for a level system. and there is no negative sign. 3. most researchers still use the Lagrange-Euler formulation. .9 We learned that the Newton-Euler formulation of the dynamic model of a manipulator is computationally more efficient than the Lagrange-Euler formulation.13 In the Lagrange-Euler derivation of equations of motion. the gravity vector g given in Eq.146 ROBOTICS: CONTROL. can you state a procedure indicating how you can obtain the above matrices from the N-E equations of motion using the same set point from the trajectory? 3. gd(ti )).IgI. (3. their equations of motion should be "equivalent. VISION.V.11 We discussed two formulations for robot arm dynamics in this chapter. M. the gravity effect as given in Table 3.fl chi -O" 's..5 is being rotated through an angle of a about the z° axis and then rotated through an angle of 0 about the u axis. Why is this so? (Give two reasons.6.2 is (0. 3.14 In the recursive Newton-Euler equations of motion referred to its own link coordinate frame. 3. In the Newton-Euler formulation. 3. 0.10 A robotics researcher argues that if a robot arm is always moving at a very slow speed. 0)." Given a set point on a preplanned trajectory at time t. the matrix ('1° Ii °R. and the c(gd(t1 )) matrices from the L-E equations of motion. Instead of finding them from the L-E equations of motion. the Lagrange-Euler formulation and the Newton-Euler formulation. What is the smallest number of multiplicar'+ was CND pN. of the '-' Lagrange-Euler equations of motion. . . 3. where n is the number of degrees of freedom of the .. AND INTELLIGENCE 3. Since they describe the same physical system.12 The dynamic coefficients of the equations of motion of a manipulator can be obtained from the N-E equations of motion using the technique of probing as discussed in Prob.7 Assume that the cube in Prob. Will these "approximate" equations of motion be computationally more efficient than the Newton-Euler equations of motion? Explain and justify your answer. 
Derive the relationship between this matrix and the pseudo-inertia matrix J. y0.15 Compare the differences between the representation of angular velocity and kinetic energy of the Lagrange-Euler and Newton-Euler equations of motion in the following table (fill in the blanks): Lagrange-Euler Angular velocity Kinetic energy Newton-Euler "'+ °+. gd(t1 ). Explain the discrepancy. 0. °R. d1. '-+ Cry C/ 3. z0) is the reference frame. y0. 1 have zero reaction force/torque. i = 1. CAD . the mass of each link is lumped at the end of the link. h(O. (c) Derive the Lagrange-Euler equations of motion by first finding the elements in the D(O).16 The two-link robot arm shown in the figure below is attached to the ceiling and under the influence of the gravitational acceleration g = 9. 01.16.17 Given the same two-link robot arm as in Prob.. (d) Derive the Newton-Euler equations of motion for this robot arm. for each link. ~^. such as 'Rosi and 'Ro p. for each link. 3. (a) What are the initial conditions for the recursive Newton-Euler equations of motion? (b) Find the inertia tensor 'R01. and m1 . do the following steps to derive the Newton-Euler equations of motion and then compare them with the Lagrange-Euler equations of motion.*. (c) Find the other constants that will be needed for the recursive Newton-Euler equations of motion. 2. (b) Find the pseudo-inertia matrix J. (x0. 02 are the generalized coordinates.8062 m/sec'-. and c(0) matrices. m2 are the respective masses. d2 are the lengths of the links. assuming that 1 and ~O.ROBOT ARM DYNAMICS 147 3. (a) Find the link transformation matrices '-'A. Under the assumption of lumped equivalent masses. 8). y4? . zo) is the reference frame.18 Use the Lagrange-Euler formulation to derive the equations of motion for the two-link B-d robot arm shown below.CONTROL. yo. and in.. are the link masses.148 ROBOTICS. B and d are the generalized coordinates. AND INTELLIGENCE 3. of link 1 is assumed to be located at a constant distance r. Mass in. and mass m2 of link 2 is assumed to be located at the end point of link 2. in. SENSING. VISION. from the axis of rotation of joint 1. where (xo. ) C]. `O0 A. These two constraints combined give rise to four possible control modes. However. 4-. a. It also deals with the formalism of describing the desired manipulator motion as sequences of points in space (position and orientation. there exists a number of possible trajectories between the two given endpoints. . If joint coordinates are desired at these locations. This chapter focuses attention on various trajectory planning schemes for obstacle-free motion. of the manipulator) through which the manipulator must pass. joint coordinates are not suitable as a working coordinate system because the joint axes of most manipulators are not orthogonal and they do not separate position from orientation. as well as the space curve that it traverses. it is of considerable interest to know whether there are any obstacles present in its path (obstacle constraint) and whether the manipulator hand must traverse a specified path (path constraint). For example. Alexander Pope 4. Path endpoints can be specified either in joint coordinates or in Cartesian coordinates. The space curve that the manipulator hand moves along from the initial location (position and orientation) to the final location is called the path. From this table. 
Trajectory planning schemes generally "interpolate" or "approximate" the desired path by a class of polynomial functions and generate a sequence of time-based "control set points" for the control of the manipulator from the initial location to its destination. Before moving a robot arm, it is of considerable interest to know whether there are any obstacles present in its path (obstacle constraint) and whether the manipulator hand must traverse a specified path (path constraint). These two constraints combined give rise to four possible control modes, as tabulated in Table 4.1. From this table, it is noted that the control problem of a manipulator can be conveniently divided into two coherent subproblems: motion (or trajectory) planning and motion control. This chapter focuses attention on various trajectory planning schemes for obstacle-free motion. It also deals with the formalism of describing the desired manipulator motion as sequences of points in space (position and orientation of the manipulator) through which the manipulator must pass, as well as the space curve that it traverses. The space curve that the manipulator hand moves along from the initial location (position and orientation) to the final location is called the path. We are interested in developing suitable formalisms for defining and describing the desired motions of the manipulator hand between the path endpoints.

Path endpoints can be specified either in joint coordinates or in cartesian coordinates. However, they are usually specified in cartesian coordinates because it is easier to visualize the correct end-effector configurations in cartesian coordinates than in joint coordinates. Furthermore, joint coordinates are not suitable as a working coordinate system because the joint axes of most manipulators are not orthogonal and they do not separate position from orientation. If joint coordinates are desired at these locations, then the inverse kinematics solution routine can be called upon to make the necessary conversion. Quite frequently, there exists a number of possible trajectories between the two given endpoints. For example, one may want to move the manipulator along a straight-line path that connects the endpoints (straight-line trajectory), or along a smooth, polynomial trajectory that satisfies the position and orientation constraints at both endpoints (joint-interpolated trajectory).
We shall first discuss simple trajectory planning that satisfies path constraints and then extend the concept to include manipulator dynamics constraints. in the time interval [to.. Two common approaches are used to plan manipulator trajectories. and acceleration). The trajectory planner accepts input variables which indicate the constraints of the path and outputs a sequence of `r1 time-based intermediate configurations of the manipulator hand (position and orien- tation. 4. 4.PLANNING OF MANIPULATOR TRAJECTORIES 151 Path constraints Path specifications Trajectory planner i---- . joint-interpolated trajectory in Sec. For cartesian space planning. smooth.3. the path constraints are specified in cartesian coordinates. the sequences of the time-based joint-variable space vectors {q(t). Thus. the manipulator hand may hit obstacles with no prior warning. the manipulator hand traverses. the time history of the i. 4. one must convert the Cartesian path constraints to joint path constraints by some functional approximations and then find a parameterized trajectory that satisfies the joint path constraints. v(t). the time history of all joint variables and their first two time derivatives are planned to describe the desired motion of the manipulator.4.3. fi(t). and accurate with a fast computation time (near real time) for generating the sequence of control set points along the desired trajectory of the manipulator.3 (D' . 4. In the second approach. Hence.4. This chapter begins with a discussion of general issues that arise in trajectory planning in Sec. 4(t). Section 4. Hence. 4. 4.2 GENERAL CONSIDERATIONS ON TRAJECTORY PLANNING Trajectory planning can be conducted either in the joint-variable space or in the cartesian space. to find a trajectory that approximates the desired path closely.1 Trajectory planner block diagram.4. However. We shall discuss this problem in Sec.2. and a cubic polynomial trajectory along a straight-line path in joint coordinates with manipulator dynamics taken into consideration in Sec.3. The above two approaches for planning manipulator trajectories should result in simple trajectories that are meant to be efficient. q(t)} {p(t). large tracking errors may result in the servo control of the manipulator.5 summarizes the results. For joint-variable space planning. 4(t). 4. Sl(t)} Manipulator's dynamics constraints Figure 4. {q(t). q(t)} are generated without taking the dynamics of the manipulator into consideration. straight-line trajectory planning in Sec. and the joint actuators are servoed in joint coordinates. . then 3(p + 1) coefficients are required to specify initial and terminal conditions (joint position. If t = t f. VISION. loop: Wait for next control interval. If the joint trajectory for a given joint (say joint i) uses p polynomials. two intermediate positions may be specified: one near the initial position for departure and the other near the final position for arrival which will guarantee safe departure and approach directions.. and accelerations are derived from the hand information. Finally. . and the corresponding joint positions. velocity. go to loop. velocity. we see that the computation consists of a trajectory function (or trajectory planner) h(t) which must be updated in every control interval. Second.. four constraints are imposed on the planned trajectory. the continuity of the joint position and its first two time derivatives must be guaranteed so that the planned joint trajectory is smooth. 
and acceleration) and guarantee continuity of these variables at the polynomial boundaries.. in addition to a better controlled motion.. velocities. or five cubic (3-3-3-3-3) trajectory segments. the tra- jectory set points must be readily calculable in a noniterative manner. Planning in the joint-variable space has three advantages: (1) the trajectory is planned directly in terms of the controlled variables during motion. CONTROL. Third. the basic algorithm for generating joint trajectory set points is quite simple: t=to. C.a: . extraneous . . AND INTELLIGENCE manipulator hand's position. then exit. In general.fl C. as would two quartic and one cubic (4-3-4) trajectory segments. (2) the trajectory planning can be done in near real time. The associated disadvantage is the difficulty in determining the locations of the various links and the hand during motion. The above four constraints on the planned trajectory will be satisfied if the time histories of the joint variables can be specified by polynomial sequences.p P') ." must be minimized. This will be discussed further in the next section. intermediate positions must be determined and specified deterministically. SENSING.CD Cat . h (t) = where the manipulator joint position should be at time t.-' 'CJ 'T1 C. motions. If an additional intermediate condition such as position is specified. and (3) the joint trajectories are easier to plan. a task that is usually required to guarantee obstacle avoidance along the trajectory. then an additional coefficient is required for each intermediate condition. From the above algorithm. t = t + At. In general. Thus.y r93 0-1 'C7 'L3 . such as "wandering. where At is the control sampling period for the manipulator. Thus. and acceleration are planned. First. one seventh-degree polynomial for each joint variable connecting the initial and final positions would suffice.152 ROBOTICS. p'. two cubics and one quintic (3-5-3) trajectory segments. ono .. $ The error actuating signal to the joint actuators is computed based on the error between the target joint position and the actual joint position of the manipulator hand.c-4 rte.-. W-. then exit. (2) The joint space-oriented method in which a low-degree polynomial function in the joint-variable space is used to approximate the path segment bounded by two adjacent knot points on the straight-line path and the resultant control is done at the joint level. Generally. Q [ H (t) ] = joint solution corresponding to H (t) . H (t) = where the manipulator hand should be at time t.. the criteria chosen are quite often dictated by the following control algorithms to ensure the desired path tracking. 4. [1983]) all used low-degree polynomials in the joint-variable space to approximate the straight-line path. and Luh and Lin [1981] all reported methods for using a straight line to link adjacent cartesian knot points. ward concept. . CDW. If t = t f. we need to convert the Cartesian positions into their corresponding joint solutions. Q[H(t)]. t=t+At.PLANNING OF MANIPULATOR TRAJECTORIES 153 For Cartesian path control.4. and a certain degree of accuracy is assured along the desired straight-line path. at this time. as discussed in Sec. t The servo sample points on the desired straight-line path are selected at a fixed servo interval and are converted into their corresponding joint solutions in real time while controlling the manipulator. `G' °`0 o. 
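The following is a minimal sketch of the joint set-point loop described above, for a single joint and a single trajectory segment whose polynomial is expressed in normalized time. The coefficient values, the segment duration, and the sampling period are placeholders of our own choosing; only the rescaling of the derivatives by the segment time follows the relations v(t) = h'(t)/t_i and a(t) = h''(t)/t_i^2 used in the text.

```python
import numpy as np

# One joint, one trajectory segment, expressed in normalized time t in [0, 1]:
# h(t) = c0 + c1*t + c2*t**2 + c3*t**3.  The numbers are placeholders only.
coeffs = [0.0, 0.0, 1.2, -0.8]   # c0, c1, c2, c3  (rad)
T_seg  = 2.0                     # real time allotted to the segment (s)
dt     = 0.01                    # control sampling period (s)

h = np.poly1d(coeffs[::-1])      # poly1d expects the highest power first

def set_point(tau):
    """Joint position, velocity, and acceleration at real time tau into the segment.
    Derivatives taken in normalized time are rescaled by 1/T_seg and 1/T_seg**2."""
    t = tau / T_seg
    return h(t), h.deriv()(t) / T_seg, h.deriv(2)(t) / T_seg**2

# Basic set-point generation loop: one target per control interval.
tau = 0.0
while tau <= T_seg:
    q, qd, qdd = set_point(tau)
    # ... hand (q, qd, qdd) to the joint servo here ...
    tau += dt
```

The loop does nothing more than evaluate the planner function at each control interval, which is exactly why the trajectory set points must be calculable in a noniterative manner.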
There are two major approaches for achieving it: (1) The Cartesian spaceoriented method in which most of the computation and optimization is performed in Cartesian coordinates and the subsequent control is performed at the hand level. since all the available control algorithms are invariably based on joint coordinates because. Here. The matrix function H(t) indicates the desired location of the manipulator hand at time t and can be easily realized by a 4 x 4 transformation matrix. loop: Wait for next control interval. 0 s. For the latter step. in addition to the computation of the manipulator hand trajectory function H(t) at every control interval. r'3 . Taylor's bounded deviation joint path (Taylor [1979]) and Lin's cubic polynomial trajectory method (Lin et al. However. go to loop.-. Taylor [1979]. Cartesian path planning can be realized in two coherent steps: (1) generating or selecting a set of knot points or interpolation points in Cartesian coordinates according to some rules along the cartesian path and then (2) specifying a class of functions to link these knot points (or to approximate these path segments according to some criteria. there are no sensors capable j' The error actuating signal to the joint actuators is computed based on the error between the target cartesian position and the actual cartesian position of the manipulator hand. "-w . Paul [1979]. the above algorithm can be modified to: t=to. The resultant trajectory is a piecewise straight line. /The cartesian space-oriented method has the advantage of being a straightforCAD 'CJ 'C! p'.. $ The resultant cartesian path is a nonpiecewise straight line. velocity. In addition. AND INTELLIGENCE of measuring the manipulator hand in Cartesian coordinates.. 0. Paul [1972] showed that the following considerations are of interest: 1. 4. it is required that its robot arm's configuration at both the initial and final locations must be specified before the motion trajectory is planned. we have four positions for each arm motion: initial.e. (c) Set-down position: same as lift-off position. Because of the various disadvantages mentioned above. SENSING. 1U+ . the motion of the hand must be directed away from an object. the resulting optimization problem will have mixed constraints in two different coordinate systems. we could then control the speed at which the object is to be lifted. In planning a joint-interpolated motion trajectory for a robot arm..4. When picking up an object.. if manipulator dynamics are included in the trajectory planning stage. and final (see Fig. such as torque and force. we must move to a normal point out from the surface and then slow down to the final position) so that the correct approach direction can be obtained and controlled. are bounded in joint coordinates.e. Position constraints (a) Initial position: velocity and acceleration are given (normally zero). smooth polynomials. the joint space-oriented method.2). 4. However. If we specify a departure position (lift-off point) along the normal vector to the surface out from the initial position and if we require the hand (i. VISION. cartesian space path planning requires transformations between the cartesian and joint coordinates in real time-a task that is computationally intensive and quite often leads to longer control intervals' Furthermore. which converts the cartesian knot points into their corresponding joint coordinates and uses low-degree polynomials to interpolate these joint knot points. is widely used.+ '-n a. 

(d) Final position: velocity and acceleration are given (normally . 4. Thus.ze o).. set-down.. it loses accuracy along the cartesian path when the sampling points fall on the fitted. and acceleration limits of each joint motor. '-' r-. If we further specify the time required to reach this position. then path constraints are specified in cartesian coordinates while physical constraints.154 ROBOTICS: CONTROL. lift-off. 3. the origin of the hand coordinate frame) to pass through this position. we then have an admissible departure motion. (b) Lift-off position: continuous motion for intermediate points.3 JOINT-INTERPOLATED TRAJECTORIES To servo a manipulator. otherwise the hand may crash into the supporting surface of the object. We shall examine several planning schemes in these approaches in Sec. The same set of lift-off requirements for the arm motion is also true for the set-down point of the final position motion (i. a-" C13 '-' U-+ x"1 4. . This approach has the advantages of being computationally faster and makes it easier to deal with the manipulator dynamics constraints. 2. From the above. 5. the transformation from cartesian coordinates to joint coordinates is ill-defined because it is not a one-to-one mapping. 7. The constraints of a typical joint trajectory are listed in Table 4. and final positions) are satisfied. and the maximum of these times is used (i. and acceleration at these knot points (initial.2. An alternative approach is to split the entire joint trajectory into several trajectory segments so that different interpolating polynomials of a lower degree can be used to interpolate in each trajectory . the use of such a high-degree polynomial to interpolate the given knot points may not be satisfactory.PLANNING OF MANIPULATOR TRAJECTORIES 155 Joint i 0(tj) 9(t2) Final Time Figure 4. One approach is to specify a seventh-degree polynomial for each joint i. It is difficult to find its extrema and it tends to have extraneous motion. Based on these constraints. tion). velocity.. In addition to these constraints.`3 (4. lift-off. tf ]. (b) Intermediate points or midtrajectory segment: time is based on maximum velocity and acceleration of the joints. velocity.CD qi(t) = a7 t7 + a6 t6 + a5t5 + a4 t4 + a3t3 + a2 t2 + alt + ao . 6. we are concerned with selecting a class of polynomial functions of degree n or less such that the required joint position.e.-.3-1) where the unknown coefficients aj can be determined from the known positions and continuity conditions.2 Position conditions for a joint trajectory. However. and the joint position. `CD Sao CS. the maximum time of the slowest joint is used for normaliza. set-down. . Time considerations (a) Initial and final trajectory segments: time is based on the rate of approach of the hand to and from the surface and is some fixed constant based on the characteristics of the joint motors. and acceleration are continuous on the entire time interval [to. the extrema of all the joint trajectories must be within the physical and geometric limits of each joint. Set-down position (continuous with next trajectory segment) 10. normally zero) 3. that is. Set-down position (given) 9. Same as 4-3-4 trajectory. Velocity (continuous with next trajectory segment) 11. Acceleration (continuous with previous trajectory segment) 8. but uses polynomials of different degrees for each segment: a third-degree polynomial for the first segment. Velocity (given.2 Constraints for planning joint-interpolated trajectory Initial position: 1. 
There are different ways a joint trajectory can be split, and each method possesses different properties. The most common methods are the following:

4-3-4 Trajectory. Each joint has the following three trajectory segments: the first segment is a fourth-degree polynomial specifying the trajectory from the initial position to the lift-off position. The second trajectory segment (or midtrajectory segment) is a third-degree polynomial specifying the trajectory from the lift-off position to the set-down position. The last trajectory segment is a fourth-degree polynomial specifying the trajectory from the set-down position to the final position.

3-5-3 Trajectory. Same as the 4-3-4 trajectory, but uses polynomials of different degrees for each segment: a third-degree polynomial for the first segment, a fifth-degree polynomial for the second segment, and a third-degree polynomial for the last segment.

5-Cubic Trajectory. Cubic spline functions of third-degree polynomials for five trajectory segments are used.

Note that the foregoing discussion is valid for each joint trajectory; that is, each joint trajectory is split into either a three-segment or a five-segment trajectory. A 4-3-4 trajectory of an N-joint manipulator therefore consists of N joint trajectories, or N x 3 = 3N trajectory segments, with 7N polynomial coefficients to evaluate plus the extrema of the 3N trajectory segments. The constraints that each joint trajectory must satisfy are summarized in Table 4.2. (A minimal numerical sketch of a single interpolation segment is given after the table.)

Table 4.2 Constraints for planning joint-interpolated trajectory

Initial position:
1. Position (given)
2. Velocity (given, normally zero)
3. Acceleration (given, normally zero)

Intermediate positions:
4. Lift-off position (given)
5. Lift-off position (continuous with previous trajectory segment)
6. Velocity (continuous with previous trajectory segment)
7. Acceleration (continuous with previous trajectory segment)
8. Set-down position (given)
9. Set-down position (continuous with next trajectory segment)
10. Velocity (continuous with next trajectory segment)
11. Acceleration (continuous with next trajectory segment)

Final position:
12. Position (given)
13. Velocity (given, normally zero)
14. Acceleration (given, normally zero)
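Before working through the full 4-3-4 derivation, the sketch below shows the simplest instance of the idea: a single cubic segment fitted between two joint knot points with prescribed boundary velocities. It is not the 4-3-4 or 3-5-3 solution derived in the following section; the function name and the example numbers are ours.

```python
def cubic_segment(q0, q1, v0, v1, T):
    """Coefficients of h(t) = a0 + a1*t + a2*t**2 + a3*t**3 on normalized
    time t in [0, 1], for a segment of duration T seconds that starts at
    position q0 with velocity v0 and ends at q1 with velocity v1.
    (Standard cubic boundary-value fit, shown only to illustrate
    low-degree segment interpolation.)"""
    a0 = q0
    a1 = v0 * T                      # d/dtau = (1/T) d/dt, so h'(0) = v0*T
    a2 = 3.0 * (q1 - q0) - (2.0 * v0 + v1) * T
    a3 = -2.0 * (q1 - q0) + (v0 + v1) * T
    return a0, a1, a2, a3

# Example: a joint moves from 0.0 rad to 1.0 rad in 2 s, starting and ending at rest.
a = cubic_segment(0.0, 1.0, 0.0, 0.0, 2.0)
h  = lambda t: a[0] + a[1]*t + a[2]*t**2 + a[3]*t**3          # position (rad)
hd = lambda t: (a[1] + 2*a[2]*t + 3*a[3]*t**2) / 2.0          # velocity (rad/s)
```

A cubic can match position and velocity at both ends of a segment; matching acceleration as well at the interior knot points is what forces the higher-degree first and last segments of the 4-3-4 trajectory.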
The boundary conditions that this set of joint trajectory segment polynomials must satisfy are: 1. (t) = an4t4 + an3t3 + aii2t2 + anlt + ari0 a-+ The subscript of each polynomial equation indicates the segment number. v(tF ) = v(tI )] 7. Continuity in velocity at t2 [that is. a-+ .Ti Ti . The polynomial equations for each joint variable in each trajectory segment expressed in normalized time are: h1(t) = a14t4 + a13t3 + a12t2 + allt + a10 (1st segment) (4.. 3-6) For the first trajectory segment.Ti_I i = 1. SENSING. Magnitude of final velocity = of (normally zero) 14. 2. AND INTELLIGENCE 13. the governing polynomial equation is of the fourth degree: h1 (t) = a14t4 + a13t3 + a12t2 + allt + alo t e [0.158 ROBOTICS: CONTROL. The first and second derivatives of these polynomial equations with respect to real time T can be written as: dhi(t) dhi(t) dt _ 1 dhi(t) Vi(t) = 1 dr dhi(t) dt dT Ti .3-7) IN.3 Boundary conditions for a 4-3-4 joint trajectory. n (4. 1] (4. VISION.. n dt (4.3. 2. .3-5) ti dt -hi (t) ti and ai(t) d2hi(t) dT2 _ 1 d2hi(t) dt2 (Ti . 4. Joint i 0(r) = 0(7'+) 0(72) = B(T2) e(72) = 6(7-2) 0(r. Magnitude of final acceleration = a f (normally zero) The boundary conditions for the 4-3-4 joint trajectory are shown in Fig.) 0(72) B(7n) = Of of of TO T1 72 T Real time Figure 4.Ti_ 1 )2 1 d2hi(t) _ 1 tit dt2 -hi (t) tit i = 1. 3-8) tl hl (t) 12a14t2 + 6a13t + 2aI2 al (t) = z tl = ti z (4. (4. )t + 00 t c.O . we relax the requirement that the interpolating polynomial must pass through the position exactly. Satisfying the boundary conditions at this position leads to a10 = hl (0) hI (0) v0 r 4a14t3 + 3a13t2 + 2a12t + all t=0 0'q 00 (given) (4. Eq.PLANNING OF MANIPULATOR TRAJECTORIES 159 From Eqs.3-9) 1. We only require that the velocity and acceleration at this position 'r7 1. (4.3-12) ti J t=0 t2 1 which yields a0ti a12 2 With these unknowns determined. 1] (4.[0.3-13) 004 2. For t = 1 (at the final position of this trajectory segment).3-10) all tI (4.3-5) and (4.3-7) can be rewritten as: h1 (t) = a14t4 + a13t3 + alt 2 1 i I t2 + (v0t. At this position.3-6).3-11) h1(0) a0 = 2 ti = r 12a14t2 + 6a13t + 2a12 1 2a12 (4. For t = 0 (at the initial position of this trajectory segment). its first two time derivatives with respect to real time are vI (t) and hl (t) tI 4a14t3 + 3a13t2 + 2a12t + all (4. 1.3-20) t2 2 t1 . the governing polynomial equation is of the third degree: h2(t) = a23t3 + a22t2 + a21t + a20 'S. SENSING. respectively.160 ROBOTICS. respectively. 1] (4.3-5) and (4.3-15) t e [0. (4.3-16) V1 = _ r 3a23 t2 + 2a22 t + a21 1 t2 a21 (4. For t = 0 (at the lift-off position). we have h2(0) t2 i7.3-6).3-18) J t=0 t2 which gives a21 = v1 t2 and a1 = h2 (0) 2 t2 = 6a23 t + 2a22 1 2 2a22 t2 t=0 t2 2 (4. _ h1(1) tl and h2(0) = h1(1) (4. at the beginning of the next trajectory segment. AND INTELLIGENCE have to be continuous with the velocity and acceleration. Using Eqs.3-19) which yields a1 t2 a22 2 Since the velocity and acceleration at this position must be continuous with the velocity and acceleration at the end of the previous trajectory segment respectively.'. h2(0) = a20 = 02(0) h2(0) t2 (4. . the velocity and acceleration at this position are.3-14) al(l) o ai = For the second trajectory segment.CONTROL. tI vote (4.3-17) all h1(1) _ 12a14 + 6a13 tI2 tI2 + anti (4.. The velocity and acceleration at this position are: V1 (1) = h1(1) VI tl 4a14 + 3a13 + anti + N. VISION. respectively.3-23) .3-22) and 6a23 t + 2a22 1 t2 2 1 12a14t2 + 6a13t + 2a12 (4.3-27) For the last trajectory segment. 
leads to 3a23 t2 + 2a22t + a21 1 t2 I 4a14t3 + 3a13t2 + 2a12t + all 1 tl L t=0 J t=1 (4. 1] (4. t1 2 J t=0 J t=1 or -2a22 2 t2 + 12a14 2 ti + 6a13 2 ti + aotl 2 tl =0 (4... The velocity and acceleration at this position are obtained. the governing polynomial equation is of the fourth degree: w. For t = 1 (at the set-down position).3-21) or -a21 t2 + 4a 14 + 3a13 + aotl2 tl tl tl + vats tl =0 (4.3-28) .3-26) t2 J 1=1 3a23 + 2a22 + a21 t2 and (1) a2 h2 (1) 6a23 t + 2a22 Il 6a23 + 2a22 J t=1 2 t2 = t2 obi t2 (4.. respectively. as: h2 (1) = a23 + a22 + a21 + a20 (4. hn(t) = an4t4 + an3t3 + an2t2 + anlt + an0 t E [0.PLANNING OF MANIPULATOR TRAJECTORIES 161 which. Again the velocity and acceleration at this position must be continuous with the velocity and acceleration at the beginning of the next trajectory segment.3-24) 2.3-25) v2(1) = h2(1) t2 = r 3a23 t2 + 2a22 t + a21 1 (4. 3-30) and hn(t) 12ai4t2 + 6an3t + 2an2 t2 n an(t) t2 n (4. (4.an3 + aftnz . Satisfying the boundary conditions at this final position of the trajectory.3-6).3-33) which gives and = Vftn and of which yields _ hn (0) t2 n _ 2an2 (4.Vftn + Of = 02(1) (4. we have shifted the [0. AND INTELLIGENCE If we substitute t = t . SENSING.[ . For t = -1 (at the starting position of this trajectory segment).3-29) Using Eqs.1 normalized time t from t hn(t) = an4t4 into t in the above equation.3-32) and tn _ - (4.162 ROBOTICS: CONTROL.3-31) 1. its first and second derivatives with respect to real time are h (t) Vn(t) tn 4an4t 3 + 3ai3t 2 + 2an2t + and tn (4. 1 ] to t e [ -1.3-34) t2 n a f to ant 2 2. at the set-down position. For t = 0 (at the final position of this segment). we have. 0].3-5) and (4.3-35) . 0] + an3t3 + an2t2 + anlt + (4. Then Eq. ai4 .1. (4.3-28) becomes t c. Satisfying the boundary conditions at this position. we have hn(0) = an0 = Of 4n(0) Vf = tn (4. VISION. .3-41).6an3 + a 2 tn (4.. = 0 f .3-41) Iv' and All the unknown coefficients of the trajectory polynomial equations can be determined by simultaneously solving Eqs.v ft + to 3a23 +. .01 = h2 (1) .PLANNING OF MANIPULATOR TRAJECTORIES 163 and hn(-1) to - 4an4t 3 + 3an3t 2 + 2an2t + ant tn J t= -1 (4. (4.. (4.(0) = a14 + a13 + S2 = 02 ..3-43) "`" .hn (-1) ai4 + an3 - a 2 2 n + V ftn (4.3-22).(1) -h. (4.h2 (0) = a23 + a22 + a21 `0N +v0t. t2 + 2a22 t2 + a21 t2 =0 (4.3-42) S. (4.a ft.3-36) .fl 2 61 = 01 -00=h..3-24).3 + a ftn .3-37) The velocity and acceleration continuity conditions at this set-down point are h2(1) t2 h.02 = hn (0) . (4.3-39) and 6a23 MIN -12an4 + t6an3 .4 .3a. .3-42)..2 + V ftn tn and hn(-1) t n2 12an4t 2 + 6an3t + 2an2 tn 2 J `'1 12an4 .a ftn + 2 n + 2a22 =0 The difference of joint angles between successive trajectory segments can be found to be (4.4an4 + 3an3 ..3-38) or LSD 4a.(-1) to and h2(1) t2 2 - hn(-1) to 2 Y-+ (4. 1] to [ -1.3-40). S2.3-44): 7 Yi = E cijxj j=1 (4. 2. Sn + 1 .3-39).3-43).v0t. . .? 0 -1 (4. (4.3-49) The structure of matrix C makes it easy to compute the unknown coefficients and the inverse of C always exists if the time intervals ti.3-48) or x = C -ly (4.'af. we obtain all the coefficients for the polynomial Since we made a change in normalized time to run from [ 0.3-44) y= a0ti2 61 2 . i = 1. 6/t. 0 0 0 12/t2 0 0 0 0 0 1 -2/t2 1 C = 0 (4. equations for the joint trajectory segments for joint j.164 ROBOTICS. Solving Eq. aftn 2 T (4. CONTROL.3-47) and x = (a13 ..vo . VISION. after obtaining the coefficients ani from the above A'+ 0 t. -aot1 . we have y = Cx where (4. 0 ] for the last trajectory segment. 
n are positive values.3-46) 0 0 0 11t2 2/t2 3/t2 -31t. a23 . AND INTELLIGENCE (4. a14 .a0 . Rewriting them in matrix vector notation.aftn + Vf. Cs' . SENSING. a22 . a21 . an3 ) an4)T Then the planning of the joint trajectory (for each joint) reduces to solving the matrix vector equation in Eq.3-49). (4. and (4.3-45) .2 1 0 2/t2 2 0 6/t2 2 0 -12/t.VftnJ 0 0 0 1 1 0 0 0 0 0 0 0 0 4/tn 31t1 61t 2 4/t1 -1/t2 r-. (4. 3 + The resulting polynomial equations for the 4-3-4 trajectory. 1] (4. In using five-cubic polynomial interpolation. two extra interpolation points must be selected to provide enough boundary conditions for solving the unknown coefficients in the polynomial sequences.2an2 + anI)t anI + ano) t e [0. 4.3-29). with continuity of derivative of order k .3. set-down. namely. Thus. This can be accomplished by substituting t = t + 1 into t in Eq.PLANNING OF MANIPULATOR TRAJECTORIES 165 matrix equation. the first derivative represents continuity in the velocity and the second derivative represents continuity in the acceleration.2 Cubic Spline Trajectory (Five Cubics) The interpolation of a given function by a set of cubic polynomials.3-50) +( + (an4 . preserving continuity in the first and second derivatives at the interpolation points is known as cubic spline functions. Second.1. Similarly. However. are listed in Table 4. It is not necessary to know these two locations exactly.a.3 . We can select these two extra knot points between the lift-off and setdown positions. The polynomial equations for a 3-5-3 joint trajectory are listed in Table 4. lift-off. require that the time intervals be known and that continuity of velocity and BCD _O_ I--. set-down. 3. In the case of cubic splines. initial. (t) = ai4t4 + ( -4an4 + an3)t3 + (6an4 . obtained by solving the above matrix equation. Thus we obtain h. 1]. n 'L3 '-' C-. and final positions and (2) continuity of velocity and acceleration at all the interpolation points. The boundary conditions o2.4. we need to have five trajectory segments and six interpolation points. we need to reconvert the normalized time back to [0. The unknown coefficient aj1 indicates the ith coefficient for joint j trajectory segment and n indicates the last trajectory segment. we only have four positions for interpolation. lift-off. First.3a. from our previous discussion. and final positions. The degree of approximation and smoothness that can be achieved is relatively good.3. Thus.. the boundary conditions that this set of joint trajectory segment polynomials must satisfy are (1) position constraints at the initial. 1 ] .. In general.3-51) . low-degree polynomials reduce the effort of computations and the possibility of numerical instabilities. This is left as an exercise to the reader.. 4. ax) with r j -I < z < r 1 and t e [ 0. Cubic splines offer several advantages.. The general equation of five-cubic polynomials for each joint trajectory segment is: hi(t) = aj3t3 + aj2t2 + at + ado 'CS j = 1. it is the lowest degree polynomial function that allows continuity in velocity and acceleration. a spline curve is a polynomial of degree k '"' t". at the interpolation points. we only acceleration be satisfied at these two locations. we can apply this technique to compute a 3-5-3 joint trajectory.3 + an2)t2 3a. -fl (4. 2. (4. .apflt +..)t + 02 J where a = fig and f = 261 4 + 2t + 2t. + 3t2 t2 t1 6211 + C3 t. VISION..7 2 .3 Polynomial equations for 4-3-4 joint trajectory First trajectory segment: 2 h1(t) _ 61 .a0t1 12v0 a tI t1 h. t2 t. + L `'0 2 a2 t. t1 ti . + aft. 
5t2 2t.42 t.ttA z 2 1t4 -86 + 5vft - a2" + 3v2t J a2t772 t3 + 2 t2 + (V2t. SENSING. t 3t2 tI .a J 461 t1 t4 + at3 + rapt? 1 2 t2 + (V0t1 )t + 80 h1(1) V1 3v0 .+ 2+ t2 t.vot.Vfti . t2 t2 J 5 3 t 2t.2v1 6v1 a1t2 2 h2(1) 662 a2 = t2 2 2a1 t2 t2 t2 Last trajectory segment: 96 ..: t. ..a2 .(1) a1 tl _ 125. AND INTELLIGENCE Table 4.Vpti 6+ 6t2 t1 + 4t t1 + 3t t2 t + 2t g = .. tz 1 2 J t3 + C2 2 a1 t2 + (V1t2)t + 81 h2(1) . Second trajectory segment: h2(t) _ V2 = t2 52 V. t2 362 t2 a.166 ROBOTICS: CONTROL.7 1 + t. c+7 MIA .5vft + NCI 2 E.Sap 6a t.. v. 3v2t2 .2vo 6vo aot1 2 SIN _ a1 h1(1) ti _ 661 ti . to Last trajectory segment: hn(t) = 6n ..PLANNING OF MANIPULATOR TRAJECTORIES 167 Table 4.a2 t21 t4 NIA L 2 1 NIA + 1062 .4 Polynomial equations for a 3-5-3 joint trajectory First trajectory segment: h1(t) = 61 .aftn )t2 aftn2 t + 82 36 . 32 tz 2 + a2 2 J al t2 t3 + 2J t2 + (v1t2)t + B1 h2(1) V2 = t2 36 to -2vf+ aft 2 6vf to h2(1) a2 = t2 -66.2ao tl Second trajectory segment: h2(t) = C!! 2 z z 662-3v1t2 .1562 + 8v1 t2 + 7v2 t2 + 2 .N r h1(1) V1 tl 2 2 t2 + (vot1 )t + Bo J .a2 + a2z J 3a1 t22 + .6v1 t2 .Vfto + r anz t3 + (-36 + 3vft .2v t' + f n 2 J .vote 361 tl aot 2 l _.4v2 t2 o°. 's 'C3 where tj is the real time required to travel through the jth trajectory segment.. we have h1 (0) = alo = Bo vo (given) .2 2 j = 1.3-54) fin At t = 0.168 ROBOTICS: CONTROL. 3 . 4.4. polynomial equations are calculated. and h4 (t) can be determined using the position constraints and continuity conditions. Once these. 2. satisfying the boundary conditions at this position. h3 (t). ti hJ(t) and af(t) = 2 = 6aj3t + 2a. 4. . The first and second derivatives of the polynomials with respect to real time are: hi(t) v3(t) 3aj3t2 + 2ai2t + ail ti j = 1 2 3 4 n . VISION. h2(t).4 Boundary conditions for a 5-cubic joint trajectory. 3. n (4. and accelerations at the initial and final positions. the polynomial equations for the initial and final trajectory segments [h1 (t) and h. (4 3-52) . velocities. AND INTELLIGENCE q(t) h 1(t) h2(t) h3(t) I.. the governing polynomial equation is h1(t) = a13t3 + a12t2 + alit + alo (4. where the underlined variables represent the known values before calculating the five-cubic polynomials.3-55) (4 .-11 to t2 t3 4 t ---. for a five-cubic joint trajectory are shown in Fig.L1 (4. Time If Figure 4. SENSING. .56 ) o hi (0) tl = all tI . Given the positions. (t) ] are completely determined. . For the first trajectory segment.3-53) tJ tl . the polynomial equation is hn(t) = an3t3 + an2t2 + an1t + ano (4.3-63) . Thus.. the first trajectory segment polynomial is completely determined: h1(t) = 61 -vote - (-l With this polynomial equation. .3-62) . we have z from which a13 is found to be where S...3-60) h1(1) tI o .3-58) a13 =61 -vote - aotl (4. the velocity and acceleration at t = 1 are found to be III and acceleration at the beginning of the next trajectory segment.2aot2 .3-61) The velocity and acceleration must be continuous with the velocity and For the last trajectory segment. . 'r7 t2 2 h1(1) = a13 + a21 + vote + 00 = 01 (4.VI 1 361 ..3-57) all which yields aotlz a12 At t = 1.(aoti)/2 .0.6vot1 0 = a1 = 2 all ti _ t . satisfying the position constraint at this position.PLANNING OF MANIPULATOR TRAJECTORIES 169 from which all = vote and ao p - h1 (0) t2 = 2a12 (4.2vot1 = t1 361 tI 661 . = 0.2ao tI (4.y h1 (1) t2 661 . _ 1..2v o 6vo aotl 2 all (4.3-59) 2 a21 I t3 + I a21 I tz + (vo tl )t + 00 (4. VISION. (1) to (4.of 3an3 + 2an2 + an tn 1 (4.3-70) vl = t2 _ _ h1(1) tl (4. 
AND INTELLIGENCE At t = 0 and t = 1.2v ftn + + h2 (0) = a20 = 01 h2(0) (given) a21 t2 (4. we have tie 3Sn . = Of .3-66) hn (1) and n tn 6ai3 + 2an2 t2 n (4. we obtain hn(t) = S"-vft"+ 1 '-.2 t + 04 2 (4.04.3-67) Solving the above three equations for the unknown coefficients ai3.3-72) which gives alt2 a22 = 2 . we have hn(0) = an0 = 04 (given) (4. satisfying the boundary conditions..3-69) At t = 0.3-68) where S. an1..170 ROBOTICS: CONTROL. an2. the equation is h2 (t) = a23 t3 + a22 t2 + a21 t + a20 (4. satisfying the position constraint and the continuity of velocity and acceleration with the previous trajectory segment..3-71) so that a21 = VI t2 and al __ h2(0) _ 2a22 t2 __ h1(1) t2 tl (4. For the second trajectory segment.aftn2)t2 aft.3-65) =vf= . aft" 2 t3 + (-36. + 3vftn . SENSING.3-64) hn (1) = an3 + ant + and + 04 = Of h. we obtain the velocity and acceleration which must be continuous with the velocity and acceleration at the beginning of the next trajectory segment.3-73) where v1 = t . With this polynomial equation.3-77) At t = 0.3-75) and = a2 = a1 + 2 (4.3-79) so that a31 = V2 t3 and a2 = A h3 (0) t3 2a32 h2 (1) 2 = t3 2 = 2 t2 (4. satisfying the continuity of velocity and acceleration with the previous trajectory segment. we have a22 2 h3(0) = a30 = 02 = a23 + + V1 t2 + 01 (4. For the third trajectory segment. and a2 all depend on the value of a23 .t2 h2(t) = a23t3 + L 381 2 J t2 + (vl t2 )t + 01 (4.2vo - a0ti 2 a1 = 661 t2 1 - 6v0 t .3-80) which yields a2 t3 a32 = 2 .2a0 1 and a23 remains to be found.3-74) = V2 = 3a23 + a1 t2 + v1 t2 t2 6a23 + a1t2 2 t2 = v1 + a1 t2 + 6a23 t2 3a23 t2 (4.3-78) A h3(0) V2 = III t3 _-_ a31 t3 h2(1) t2 (4. v2.O + V1 t2 + 01 (4. at t = 1. the polynomial equation becomes 2 a.PLANNING OF MANIPULATOR TRAJECTORIES 171 With these unknowns determined. t2 2 h2 (1) = 02 = a23 + h2(1) t2 '. the equation is h3 (t) = a33 t3 + a32 t2 + a31 t + a30 (4. a .3-76) Note that 02. SENSING. we obtain the velocity and acceleration which are continuous with the velocity and acceleration at the beginning of the next trajectory segment. and a3 all depend on a33 and implicitly depend on a23 . For the fourth trajectory segment.172 ROBOTICS: CONTROL.ti h4(t) = a43t3 + a42t2 + a41t + a40 At t = 0. AND INTELLIGENCE With these undetermined unknowns. v3.3-82) h3 (1) t3 3a33 + a2 t3 + V2 t3 t3 = V2 + a2 t3 + - 3a33 (4.3-87) t3 _ a3 - h4(0) t4 2 _ 2a42 t4 2 _ h3(1) t3 (4.3-83) t3 ``N and = a3 = 6a33 + a2 3 = a2 + t2 3 6a33 t3 3 (4. we have 2 h4 (0) = a40 = 63 = 02 + V2 t3 + a V3 = which gives a41 = v3 4 and .3-81) At t = 1. the polynomial equation can be written as h3 (t) = a33 t3 + a2 t3 L2J 2 a23 t2 + V2 t3 t + 82 (4.3-88) which yields a3 t4 2 a42 2 . VISION.3-84) Note that B3. the equation is . ''C 'en . satisfying the position constraint and the continuity of velocity and acceleration with the previous trajectory segment.. h3 (1) =83=02 V3 = + v2 t3 + .3-85) t3 2 + a33 (4.3-86) _-_ a4I t4 .m + a33 (4.~ h4(0) t4 (4.. h3(1) (4. --O h1(t) = S1 . aot2 1 i t 3 + l aotI 1 2 66. (4. a33.. In order to completely determine the polynomial equations for the three middle trajectory segments.PLANNING OF MANIPULATOR TRAJECTORIES 173 With these unknowns determined.3-95) 82 = a23 + 2 2 + VI t2 + 81 3a23 t2 (4.3-84). and (4. a33.3-99) . and a43.3-82). v3.' (4.2a0 (4.3-93) V. respectively. and a23. Solving for a23. a33. and a3 are given in Eqs. 
For the third trajectory segment, the equation is

h_3(t) = a_{33} t^3 + a_{32} t^2 + a_{31} t + a_{30}        (4.3-77)

At t = 0, satisfying the position constraint and the continuity of velocity and acceleration with the previous trajectory segment, we have

h_3(0) = a_{30} = \theta_2 = a_{23} + \frac{a_1 t_2^2}{2} + v_1 t_2 + \theta_1        (4.3-78)

\frac{\dot{h}_3(0)}{t_3} = \frac{a_{31}}{t_3} = \frac{\dot{h}_2(1)}{t_2} = v_2        so that        a_{31} = v_2 t_3        (4.3-79)

\frac{\ddot{h}_3(0)}{t_3^2} = \frac{2a_{32}}{t_3^2} = \frac{\ddot{h}_2(1)}{t_2^2} = a_2        which yields        a_{32} = \frac{a_2 t_3^2}{2}        (4.3-80)

With these coefficients, the polynomial equation becomes

h_3(t) = a_{33} t^3 + \frac{a_2 t_3^2}{2} t^2 + (v_2 t_3) t + \theta_2        (4.3-81)

At t = 1, we obtain the position, velocity, and acceleration, which are continuous with those at the beginning of the next trajectory segment:

h_3(1) = \theta_3 = a_{33} + \frac{a_2 t_3^2}{2} + v_2 t_3 + \theta_2        (4.3-82)

\frac{\dot{h}_3(1)}{t_3} = v_3 = v_2 + a_2 t_3 + \frac{3a_{33}}{t_3}        (4.3-83)

\frac{\ddot{h}_3(1)}{t_3^2} = a_3 = a_2 + \frac{6a_{33}}{t_3^2}        (4.3-84)

Note that \theta_3, v_3, and a_3 all depend on a_{33} and, implicitly, on a_{23}.

For the fourth trajectory segment, the equation is

h_4(t) = a_{43} t^3 + a_{42} t^2 + a_{41} t + a_{40}        (4.3-85)

At t = 0, satisfying the position constraint and the continuity of velocity and acceleration with the previous trajectory segment, we have

h_4(0) = a_{40} = \theta_3 = \theta_2 + v_2 t_3 + \frac{a_2 t_3^2}{2} + a_{33}        (4.3-86)

\frac{\dot{h}_4(0)}{t_4} = \frac{a_{41}}{t_4} = \frac{\dot{h}_3(1)}{t_3} = v_3        which gives        a_{41} = v_3 t_4        (4.3-87)

\frac{\ddot{h}_4(0)}{t_4^2} = \frac{2a_{42}}{t_4^2} = \frac{\ddot{h}_3(1)}{t_3^2} = a_3        which yields        a_{42} = \frac{a_3 t_4^2}{2}        (4.3-88)

With these coefficients, the polynomial equation becomes

h_4(t) = a_{43} t^3 + \frac{a_3 t_4^2}{2} t^2 + (v_3 t_4) t + \theta_3        (4.3-89)

where \theta_3, v_3, and a_3 are given in Eqs. (4.3-82) to (4.3-84). In order to completely determine the polynomial equations for the three middle trajectory segments, we need to determine the coefficients a_{23}, a_{33}, and a_{43}. This can be done by matching the conditions at the endpoint of trajectory h_4(t) with the initial point of h_5(t) = h_n(t):

h_4(1) = a_{43} + \frac{a_3 t_4^2}{2} + v_3 t_4 + \theta_3 = \theta_4        (4.3-90)

\frac{\dot{h}_4(1)}{t_4} = \frac{3a_{43} + a_3 t_4^2 + v_3 t_4}{t_4} = v_4 = \frac{\dot{h}_n(0)}{t_n} = \frac{3\delta_n}{t_n} - 2v_f + \frac{a_f t_n}{2}        (4.3-91)

\frac{\ddot{h}_4(1)}{t_4^2} = \frac{6a_{43} + a_3 t_4^2}{t_4^2} = a_4 = \frac{\ddot{h}_n(0)}{t_n^2} = -\frac{6\delta_n}{t_n^2} + \frac{6v_f}{t_n} - 2a_f        (4.3-92)

These three equations can be solved to determine the unknown coefficients a_{23}, a_{33}, and a_{43}. Substituting the expressions for \theta_2, v_2, a_2, \theta_3, v_3, and a_3 and collecting terms shows that the system is linear in a_{23}, a_{33}, and a_{43}; its solution can be written in the form

a_{23} = \frac{t_2^2 x_1}{D}        a_{33} = \frac{t_3^2 x_2}{D}        a_{43} = \frac{t_4^2 x_3}{D}        (4.3-93)

where x_1, x_2, x_3, and D are algebraic combinations of the traversal times t_2, t_3, t_4 and of the quantities

u = t_2 + t_3 + t_4        k_1 = \theta_4 - \theta_1 - v_1 u - \frac{a_1 u^2}{2}        k_2 = v_4 - v_1 - a_1 u        k_3 = \frac{a_4 - a_1}{6}

c = 3u^2 - 3u t_2 + t_2^2        d = 3t_4^2 + 3t_3 t_4 + t_3^2        (4.3-94)

Solving for a_{23}, a_{33}, and a_{43}, the five-cubic polynomial equations are completely determined, and they are listed below:

h_1(t) = \left[ \delta_1 - v_0 t_1 - \frac{a_0 t_1^2}{2} \right] t^3 + \frac{a_0 t_1^2}{2} t^2 + (v_0 t_1) t + \theta_0        (4.3-95)

h_2(t) = a_{23} t^3 + \frac{a_1 t_2^2}{2} t^2 + (v_1 t_2) t + \theta_1        (4.3-96)

with        \theta_2 = \theta_1 + v_1 t_2 + \frac{a_1 t_2^2}{2} + a_{23}        v_2 = v_1 + a_1 t_2 + \frac{3a_{23}}{t_2}        a_2 = a_1 + \frac{6a_{23}}{t_2^2}        (4.3-97)

h_3(t) = a_{33} t^3 + \frac{a_2 t_3^2}{2} t^2 + (v_2 t_3) t + \theta_2        (4.3-98)

with        \theta_3 = \theta_2 + v_2 t_3 + \frac{a_2 t_3^2}{2} + a_{33}        v_3 = v_2 + a_2 t_3 + \frac{3a_{33}}{t_3}        a_3 = a_2 + \frac{6a_{33}}{t_3^2}        (4.3-99)

h_4(t) = a_{43} t^3 + \frac{a_3 t_4^2}{2} t^2 + (v_3 t_4) t + \theta_3        (4.3-100)

h_n(t) = \left[ \delta_n - v_f t_n + \frac{a_f t_n^2}{2} \right] t^3 + \left( -3\delta_n + 3v_f t_n - a_f t_n^2 \right) t^2 + \left( 3\delta_n - 2v_f t_n + \frac{a_f t_n^2}{2} \right) t + \theta_4        (4.3-101)

So it has been demonstrated that, given the initial, the lift-off, the set-down, and the final positions, as well as the time to travel each trajectory segment (t_j), the five-cubic polynomial equations can be uniquely determined to satisfy all the position constraints and continuity conditions.
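Rather than carrying the closed-form expressions for x_1, x_2, x_3, and D, the three matching conditions can also be solved numerically: the position, velocity, and acceleration at the end of the fourth segment are affine functions of (a_{23}, a_{33}, a_{43}), so the 3 x 3 linear system can be extracted column by column and solved directly. The sketch below (Python/NumPy; illustrative names, not from the original text) assumes that v_1, a_1 and v_4, a_4 have already been computed from the first and last segments as derived above.

import numpy as np

def forward_chain(a23, a33, a43, t2, t3, t4, theta1, v1, a1):
    """Propagate position, velocity, and acceleration through segments 2-4.

    Returns the state at the end of the fourth segment as a function of the
    three free coefficients, using the continuity relations of the text.
    """
    th, v, a = theta1, v1, a1
    for coeff, u in ((a23, t2), (a33, t3), (a43, t4)):
        th = th + v * u + a * u**2 / 2 + coeff                  # h_i(1)
        v, a = v + a * u + 3 * coeff / u, a + 6 * coeff / u**2  # dh_i(1), d2h_i(1)
    return np.array([th, v, a])

def solve_free_coefficients(t2, t3, t4, theta1, v1, a1, theta4, v4, a4):
    """Solve the three matching conditions for a23, a33, a43."""
    base = forward_chain(0.0, 0.0, 0.0, t2, t3, t4, theta1, v1, a1)
    cols = [forward_chain(*e, t2, t3, t4, theta1, v1, a1) - base
            for e in np.eye(3)]
    A = np.column_stack(cols)
    b = np.array([theta4, v4, a4]) - base
    return np.linalg.solve(A, b)

# Illustrative numbers only (angles in degrees, times in seconds).
a23, a33, a43 = solve_free_coefficients(1.0, 2.0, 1.0, 10.0, 5.0, 0.0,
                                        60.0, 4.0, 0.0)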
What we have just discussed is using a five-cubic polynomial to spline a joint trajectory with six interpolation points. A more general approach to finding cubic polynomials for n interpolation points will be discussed in Sec. 4.4.3.

4.4 PLANNING OF CARTESIAN PATH TRAJECTORIES

In the last section, we described low-degree polynomial functions for generating joint-interpolated trajectory set points for the control of a manipulator. Although the manipulator joint coordinates fully specify the position and orientation of the manipulator hand, they are not suitable for specifying a goal task because most of the manipulator joint coordinates are not orthogonal and they do not separate position from orientation. Thus, in describing the motions of the manipulator in a task, we are more concerned with the formalism of describing the target positions to which the manipulator hand has to move, as well as the space curve (or path) that it traverses.

Paul [1979] describes the design of manipulator cartesian paths made up of straight-line segments for the hand motion. Taylor [1979] extended and refined Paul's method by using the dual-number quaternion representation to describe the location of the hand. Because of the properties of quaternions, transitions between the hand locations due to rotational operations require less computation, while the translational operations yield no advantage. We shall examine their approaches in designing straight-line cartesian paths in the next two sections.

4.4.1 Homogeneous Transformation Matrix Approach

In a programmable robotic system, programming languages are developed for controlling a manipulator to accomplish a task. In such systems, a task is usually specified as sequences of cartesian knot points through which the manipulator hand or end-effector must pass, each of which can be described in terms of homogeneous transformations relating the manipulator hand coordinate system to the workspace coordinate system. The corresponding joint coordinates at these cartesian knot points can be computed from the inverse kinematics solution routine, and a quadratic polynomial can be used to smooth the two consecutive joint knot points in joint coordinates for control purposes. For a more sophisticated robot system, the desired motion can be specified as sequences of cartesian knot points, and the manipulator hand is controlled to move along a straight line connected by these knot points. The velocity and acceleration of the hand between these segments are controlled by converting them into the joint coordinates and smoothed by a quadratic interpolation routine. This technique has the advantage of enabling us to control the manipulator hand to track moving objects. Although the target positions are described by transforms, they do not specify how the manipulator hand is to be moved from one transform to another. Paul [1979] used a straight-line translation and two rotations to achieve the motion between two consecutive cartesian knot points. The first rotation is about a unit vector k and serves to align the tool or end-effector along the desired approach angle, and the second rotation aligns the orientation of the tool about the tool axis.
In general, the manipulator target positions can be expressed in the following fundamental matrix equation:

{}^0T_6 \, {}^6T_{tool} = {}^0C_{base}(t) \, {}^{base}P_{obj}        (4.4-1)

where

{}^0T_6 = 4 x 4 homogeneous transformation matrix describing the manipulator hand position and orientation with respect to the base coordinate frame.

{}^6T_{tool} = 4 x 4 homogeneous transformation matrix describing the tool position and orientation with respect to the hand coordinate frame. It describes the tool endpoint whose motion is to be controlled.

{}^0C_{base}(t) = 4 x 4 homogeneous transformation matrix function of time describing the working coordinate frame of the object with respect to the base coordinate frame.

{}^{base}P_{obj} = 4 x 4 homogeneous transformation matrix describing the desired gripping position and orientation of the object for the end-effector with respect to the working coordinate frame.

If the working coordinate system is the same as the base coordinate system of the manipulator, then {}^0C_{base}(t) is a 4 x 4 identity matrix at all times. If {}^6T_{tool} is combined with {}^0T_6 to form the arm matrix, then {}^6T_{tool} is a 4 x 4 identity matrix and can be omitted. Looking at Eq. (4.4-1), one can see that the left-hand-side matrices describe the gripping position and orientation of the manipulator, while the right-hand-side matrices describe the position and orientation of the feature of the object where we would like the manipulator's tool to grasp. Utilizing Eq. (4.4-1), we can solve for {}^0T_6, which describes the configuration of the manipulator for grasping the object in a correct and desired manner:

{}^0T_6 = {}^0C_{base}(t) \, {}^{base}P_{obj} \, ({}^6T_{tool})^{-1}        (4.4-2)

If {}^0T_6 were evaluated at a sufficiently high rate and converted into corresponding joint angles, the manipulator could be servoed to follow the trajectory.

Thus, a sequence of N target positions defining a task can be expressed as

{}^0T_6 ({}^6T_{tool})_1 = [{}^0C_{base}(t)]_1 ({}^{base}P_{obj})_1
{}^0T_6 ({}^6T_{tool})_2 = [{}^0C_{base}(t)]_2 ({}^{base}P_{obj})_2        (4.4-3)
  . . .
{}^0T_6 ({}^6T_{tool})_N = [{}^0C_{base}(t)]_N ({}^{base}P_{obj})_N

Simplifying the notation of superscript and subscript in the above equation, we have

T_6 \, {}^{tool}T_1 = C_1(t) P_1
T_6 \, {}^{tool}T_2 = C_2(t) P_2        (4.4-4)
  . . .
T_6 \, {}^{tool}T_N = C_N(t) P_N

From the positions defined by C_i(t) P_i we can obtain the distance between consecutive points, and, if we are further given linear and angular velocities, we can obtain the time requested T to move from position i to position i + 1. Since tools and moving coordinate systems are specified at positions with respect to the base coordinate system, moving from one position to the next is best done by specifying both positions and tools with respect to the destination position. This has the advantage that the tool appears to be at rest from the moving coordinate system. In order to do this, we need to redefine the present position and tools with respect to the subsequent coordinate system. This can easily be done by redefining the P_i transform using a two-subscript notation P_{ij}, which indicates the position P_i expressed with respect to the jth coordinate system. Thus, if the manipulator needs to be controlled from position 1 to position 2, then at position 1 we have

T_6 \, {}^{tool}T_1 = C_1(t) P_{11}        (4.4-5)

and, moving to position 2, we have

T_6 \, {}^{tool}T_2 = C_2(t) P_{12}        (4.4-6)

We can now obtain P_{12} from these equations. The purpose is to find P_{12} given P_{11}, that is, to take the position P_1, expressed with respect to its own coordinate system, and express it with respect to the position 2 coordinate system:

P_{12} = C_2^{-1}(t) C_1(t) P_{11} ({}^{tool}T_1)^{-1} \, {}^{tool}T_2        (4.4-7)

In general, the motion between any two consecutive positions i and i + 1 can be stated as a motion from

T_6 = C_{i+1}(t) P_{i,i+1} ({}^{tool}T_{i+1})^{-1}        (4.4-8)

to

T_6 = C_{i+1}(t) P_{i+1,i+1} ({}^{tool}T_{i+1})^{-1}        (4.4-9)

where P_{i,i+1} and P_{i+1,i+1} represent the positions P_i and P_{i+1} expressed with respect to the (i + 1)th coordinate system. Paul [1979] used a simple way to control the manipulator hand moving from one transform to the other. The scheme involves a translation and a rotation about a fixed axis in space, coupled with a second rotation about the tool axis, to produce controlled linear and angular velocity motion of the manipulator hand. The first rotation serves to align the tool in the required approach direction, and the second rotation serves to align the orientation vector of the tool about the tool axis.
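The algebra in Eqs. (4.4-2) and (4.4-7) is ordinary 4 x 4 homogeneous-transform arithmetic. The following sketch (Python/NumPy; the numerical poses are made up purely for illustration, and the conversion of {}^0T_6 into joint angles through the inverse kinematics routine is omitted) shows the computation for one knot point.

import numpy as np

def inv_homog(T):
    """Invert a 4 x 4 homogeneous transform using R^T and -R^T p."""
    R, p = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ p
    return Ti

# Illustrative poses: object grip pose in the working frame, working frame in
# the base frame, and the tool offset from the hand (link 6) frame.
P_obj = np.eye(4); P_obj[:3, 3] = [0.40, 0.10, 0.05]
C_base = np.eye(4); C_base[:3, 3] = [0.00, 0.50, 0.00]
T_tool = np.eye(4); T_tool[2, 3] = 0.12     # tool tip offset along the hand z axis

# Eq. (4.4-2): hand configuration that places the tool on the object feature.
T6 = C_base @ P_obj @ inv_homog(T_tool)

# Eq. (4.4-7): re-express the position transform with respect to the next
# coordinate frame.  With the same tool at both positions, the trailing
# (toolT1)^-1 toolT2 product reduces to the identity.
C1, C2, P11 = C_base, C_base, P_obj
P12 = inv_homog(C2) @ C1 @ P11 @ inv_homog(T_tool) @ T_tool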
AND INTELLIGENCE The motion from position i to position i + 1 can be expressed in terms of a "drive" transform.i+I D(1) which gives (4. we have A nx sx (4.i+I and multiply with Pi+I.4-10) t = real time since the beginning of the motion T = total time for the traversal of this segment At position i.PA) 1 (4.4-11) D(1) = (Pi. D(0) is a 4 x 4 identity matrix. D(X). (2.i+I)-I Pi+I. and Pi+I.4-13) 0 0 0 0 B 0 B sx 0 ax B B nx Px and Pi+I. then both the translation and the rotations will be directly proportional to X. +I about the tool axis.cos (A0) (4.4-19) 0 0 where V(XO) = Versine(XO) = 1 . to the approach vector at P.to rotate the approach vector from P. which is rotated an angle of 0 about the approach vector. The translational motion can be represented by a homogeneous transformation matrix L(X) and the motion will be along the straight line joining P. 1 ] . and Pi+1. + . (4.liS(X0) 0 0 0 1 C(XO) 0 (4.. RB(X) represents a rotation of 0 about the approach vector of the tool at P. Thus. into the orientation vector at P.4-20) C(X0) = cos (a0) S(XO) = sin (X0) C(X) = COs(X4) S(X ) = sin (Xc) 4-+ and X e [0.PLANNING OF MANIPULATOR TRAJECTORIES 179 D(X) will correspond to a constant linear velocity and two angular velocities.4-17) 0 0 0 -WCIV(X0) C2>GV(X0) + C(X0) -S>GS(X0) 0 CbS(XO) Si. The first rotational motion can be represented by a homogeneous transformation matrix RA (X) and itserves. The rotation matrix RA(X) indicates a rotation of an angle 0 about the orientation vector of P. . + I .4-18) C(Xq) -S(Xq) RB(X) = S(Xg5) 0 0 0 1 0 0 0 1 C(X0) 0 0 (4. The second rotational motion represented by RB (X) serves to rotate the orientation vector from P. the drive function can be represented as D(X) = L(X) RA(X) RB(X) where 1 CAD L(X) = RA(X) = S2p(A0) + C(X0) -S%W(X0) -CiS(X0) 0 C].4-16) 0 1 0 0 1 Xx 0 0 0 Xy Xz 1 (4. 0=tan-' 0 = tan-1 [(nA aA . (4.PA) By postmultiplying both sides of Eq. z by postmultiplying Eq. SENSING.//V(X0)(nA SB) + [C20V(X0) + C(X0)](sA .4-22) z = aA (PB .PA) (4.4-19) together.4-24) To find 0.4-16).GS(X0)(aA nB) and (4.4-16) by RB' (X) and then premultiplying by L -' (X ). we can solve for 0 and >G by equating the elements of the third column with the elements of the third column from Eq.4-26) . y. AND INTELLIGENCE Multiplying the matrices in Eqs.4-16) by L(X) and then RA 1(X ) and equate the elements to obtain S(p = -SI.&V(X0)(nA nB) + [C20V(X0) + C(XO)](sA nB) -S.4-25) SB) Co _ -SOC. VISION. (4. we have D(X) = [dn do da dp1 (4. (4.-: Xx da = C(XO) dp = Xy Xz and do = do x da Using the inverse transform technique on Eq.bC. we may solve for x.4-16).4-23) aB)2 + (SA ' aB)2] aB 0<0<7 J (4. x=nA(PB-PA) Y = SA (PB .4-17) to (4. (4. (4. we premultiply both sides of Eq.4-16) by RB ' (X) RA 1(X) and equating the elements of the position vector.180 ROBOTICS: CONTROL.it < ' < ir L nA ' aB (4. (4.4-21) 0 0 0 1 where -S(Xq)[S20V(X0) + C(X0)] + C(Xc)[-S0C1GV(X0)] do = -S(Xq)[-S>GC0V(X0)] + C(x0)[00V(x0) + C(X0)] -S(Xq)[-C0(X0)] + C(X0)[-SIS(X0)] C>GS(X8) .S>GS(X0)(aA SB) (4. If the acceleration for 004 004 each variable is maintained at a constant value from time -T to T.PLANNING OF MANIPULATOR TRAJECTORIES 181 then 0 = tan-' So . a manipulator has to move on connected straight-line segments to satisfy a task motion specification or to avoid obstacles.Sir < 0 < 7 (4. we must accelerate or decelerate the motion from one segment to another segment.5). Quite often. . 4.4-27) LCOJ Transition Between Two Path Segments. 
In order to avoid discontinuity of velocity at the endpoint of each segment. then the acceleration necessary to change both the position and velocity is 0)) 9(t) = (_. respectively.4-28) where -T< t< T and XBC YBC ZBC XBA YBA AB = ZBA BBC BBA OBA OBC where AC and AB are vectors whose elements are cartesian distances and angles from points B to C and from points B to A. 2T2 rAC T + AB1 qua (4. This can be done by initiating a change in velocity T unit of time before the manipulator reaches an endpoint and maintaining the acceleration constant until T unit of time into the new motion segment (see Fig.5 Straight line transition between two segments. Path AB A 7 Timc Figure 4. 4-28). as in Eq. (4.4-23). In summary.T < t < T as Y' = (OBC . X represents normalized time in the range [0. quadratic polynominal functions can be used to interpolate between the points obtained from the inverse kinematics routine. Example: A robot is commanded to move in straight line motion to place a bolt into one the holes in the bracket shown in Fig. (4. °0w . If necessary. as a linear interpolation between the motions for . different for different time intervals. we define a . (4.4-33) where 'JAB and OBC are defined for the motion from A to B and from B to C. however.4-29) T r q(t) = where I ACT + AB X-2AB X + AB (4. VISION. For the motion from A to B and to C. Write down all the necessary matrix equations relationships as discussed above so that the robot can move along the dotted lines and complete the task.4AB) X + OAB respectively.182 ROBOTICS: CONTROL. >G will change from Y'AB to V'BC (4. (4. 4.4-32) It is noted that. The reader should bear in mind. Thus. the velocity and position for -T < t < T are given by ACT + AB AB (4.4-27).^1. the drive function D(X) is computed using Eqs.6. then T6(X) can be evaluated by Eq.4-31) 2T 2 T For T < t < T.4-30) r. to move from a position P. as before. SENSING. that the normalization factors are usually . the motion is described by q = ACX where all t 4=0 T (4.4-10) and the corresponding joint values can be calculated from the inverse kinematics routine. You may use symbols to indicate intermediate positions along the straight line paths. to a position P1+1. 1]. all q(t) = 7- (4. AND INTELLIGENCE From Eq.4-16) to (4. [T6] is expressed is (4. 2. and IN are 4 x 4 coordinate matrices.4-36) (4. [BASE]. At P4. [BASE]. (4. 1. [Pa] . [PI]. [PN]. and [P4] and IN are expressed with respect to [BR]. [E] is expressed with respect to [T6]. At P5: ''d expressed with respect to [INIT ] . 4.. 3.4-38) [BASE] [T6] [E] _ [BR] IN (4.4-37) [BASE] [T6] [E] = [B0] [P3] [BASE] [T6] [E] = [BR] [P4] (4. [BO]. [T6] = [BASE]-' [INIT] [Poo] [E]-' !ti v0] traverse (i = 0.4-35) (4.4-34) At Pi: At P2. [P4]. From Eq.4-34). [BR].R. [E].4-34) with respect to P1 coordinate frame. [INIT].4-40) . To move from location P0 to location P1. (4. with respect to [BASE]. we have `-+ 8". [INIT]. SOLUTION: Let Pi be the Cartesian knot points that the manipulator hand must At P0. [BO].4-39) where [WORLD]. [P3].6 Figure for the example. [P2]. At P3: a. 5). and [ P3 ] are expressed with respect to [BO]. we use the double subscript to describe Eq. (4. [P2]. [P1]. [T6].PLANNING OF MANIPULATOR TRAJECTORIES 183 Figure 4. and [BR] are expressed with respect to [WORLD]. Then the governing matrix equations are: [BASE] [T6] [E] _ [INIT] IN [BASE] [T6] [E] _ [BO] [P1] [BASE] [T6] [E] = [BO] [P2] (4. The quaternion concept has been successfully applied to the analysis of spatial mechanisms for the last several decades.4-42) Thus. 
Taylor [1979] noted that using a quaternion to represent rotation will make the motion more uniform and efficient.' . 2.184 ROBOTICS: CONTROL.4-43a) (4. This representation is easy to understand and use. requires a motion planning phase which selects enough knot points so that the manipulator can be controlled by linear interpolation of joint values without allowing the manis.4-40) and (4. .2 Planning Straight-Line Trajectories Using Quaternions Paul's straight-line motion trajectory scheme uses the homogeneous transformation matrix approach to represent target position. and this may lead to numerical inconsistencies. called Cartesian path control. we have (4.. 4.. 4.}. AND INTELLIGENCE and expressing it with respect to PI.. in a straight-line motion means that the manipulator hand must change configuration from [T6] = [BASE]-' [BO] [Pol] [E]-' to (4.4-43b) [T6] _ [BASE]-' [BO] [P.. moving from location P0 to location P. technique but using a quaternion representation for rotations. The first approach. 3. called bounded deviation joint path.d s0.. 'a" . Furthermore. pulator hand to deviate more than a prespecified amount from the straight-line path. The second approach.CD ti' . SENSING.4-41). matrix representation for rotations is highly redundant. i = 1. it requires considerable real time computation and is vulnerable to degenerate manipulator configurations. (4. is a refinement of Paul's (CD .I] [E]-' Moving from locations Pi to Pi. can be solved in the same manner.. We shall use quaternions to facilitate the representation of the orientation of a manipulator hand for planning a straight-line trajectory. However.4.. A quaternion is a quadruple of ordered real c0. This approach greatly reduces the amount of computation that must be done at every sample interval. This method is simple and provides more uniform rotational motion. Quaternion Representation.. He proposed two approaches to the problem of planning straight-line motion between knot points. .4-41) [Poll = [B0]-' [INIT] [Po] (4. However. VISION. the matrices are moderately expensive to store and com- putations on them require more operations than for some other representations. we have [T6] = [BASE]-' [BO] [Poll [E]-' Equating Eqs. having cyclical permutation: i2=j2=k2= -1 ij = k ji= -k vector part v: jk=i kj -i ki=j ik= -j The units i.4-44) The following properties of quaternion algebra are basic: Scalar part of Q: Vector part of Q: Conjugate of Q: Norm of Q: Reciprocal of Q: Unit quaternion: S ai + bj + ck s . the complex numbers (s. Q1 = [0 + vil _ (0. 0. where s2+a2+b2+c2 = 1 It is important to note that quaternions include the real numbers (s. b. That is. k. associated. b. j. a1.(ai + bj + ck) s2 + a2 + b2 + C2 s . s. a2. 0) with two units 1 and i.V1 V2 + V1 X V2 With the aid of quaternion algebra. j.PLANNING OF MANIPULATOR TRAJECTORIES 185 numbers. 0. 0. Q1 Q2 = . S = sin 0 and C = cos CJ' . b2.w0.(ai+bj+ck) s2 + a2 + b2 + c2 s + ai + bj + ck.4-45). bl. The addition (subtraction) of two quaternions equals adding (subtracting) corresponding elements in the quadruples. with four units: the real number + 1. respectively. c) in a three-dimensional space. and three other units i. c. If we use the notation . The multiplication of two quaternions can be written as Q1 Q2 = (s1 + a1 i + b1 j + c1 k)(s2 + a2i + b2 j + c2 k) _ (s] s2 . is not a vector but a quaternion. finite rotations in space may be dealt with in a simple and efficient manner. Thus. a. (4.v1 v2 + s2 V1 + S1 V2 + v1 x v2) (4. c) (4. c2) and from Eq. In general. 
k of a quaternion may be interpreted as the three basis vectors of a Cartesian set of axes. 0) with a single unit 1. c1) and Q2 = [0 + v2] _ (0.4-45) and is obtained by distributing the terms on the right as in ordinary algebra. a. the product of two vec- tors in three-dimensional space. a. b. and the vectors (0. a quaternion Q may be written as a scalar part s and a Q = [s + v] = s + ai + bj + ck = (s. except that the order of the units must be preserved. expressed as quaternions. a. 0) of angle 0 about an axis n by a quaternion. using quaternion and matrix representations.5 lists the computational requirements of some common rotation operations.r32 cos 60 ° + sin 60 ° i+j+k1 = Rot .Rot (n. SENSING. 10 multiplies. 22 multiplies 4 multiplies. 24 multiplies 6 adds.i (Im For the remainder of this section. one can change the representation for rotations from quaternion to matrix or vice versa. 0) = [cos (0/2) + sin (0/2)n] for a rotation of angle 0 about an axis n.. finite rotations will be represented in quaternion as Rot (n. 120° The resultant rotation is a rotation of 120 ° about an axis equally inclined to the i. k axes. 0) 9 adds. 16 multiplies 12 adds.4-46) Example: A rotation of 90 ° about k followed by a rotation of 90 ° about j is represented by the quaternion product (cos45° + j sin45°)(cos45° + ksin45°) = (1/2 + j1/2 + k' + i'h) '/2 + i+j+k -. j. i+j+k `J. Note that we could represent the rotations about the j and k axes using the rotation matrices discussed in Chap. r I- 1 Rot(n. AND INTELLIGENCE then we can represent a rotation Rot (n.186 ROBOTICS: CONTROL. 15 adds. Table 4.5 Computational requirements using quaternions and matrices Operation R1R2 Quaternion representation Matrix representation Rv R . Table 4. 1 arctangent 1 arctangent 'L7 . VISION. 2 square roots.. However. Thus. 2. the quaternion gives a much simpler representation. 0) = L cos `2J + sin \2 n J J (4. 1 square root. a"+ 0. 9 multiplies 8 adds. the transition must start T time before the manipulator reaches the intersection of the two segments and complete the transition to the new segment at time T after the intersection with the new segment has been reached. t The motion along the path consists of translation of the tool frame's origin from po to pi coupled with rotation of the tool frame orientation part from Ro to R1. (4.po.4-47) where T is the total time needed to traverse the segment and t is time starting from the beginning of the segment traversal. It is required to move the manipulator's hand (4. If the manipulator hand is required to move from one segment to another while maintaining constant acceleration. . This can be accomplished by the pursuit formulation as described by Taylor [1979]. if the destination point is changing. From this requirement. -0 X (t)] where Rot(n.T) = P1 - T API (4. On the other hand. .X (t)(P1 . In this case. Then for uniform motion. 0) is a rotation by 0 about an axis n to reorient Ro into R 1. F1 = fi(t) = T ham. by °.Po) R(t) = R1 Rot [n. we have T (4.. It is worth noting that p.4-48) (4. then it must accelerate or decelerate from one segment to the next. (4. the boundary conditions for the segment transition are . III Cartesian Path Control Scheme.4-49) need to be evaluated only once per segment if the frame F1 is fixed. 0) = Ro 1 R 1 (4.4-51) T 1 . p. The tool frame's position and orientation at time t are given.po in Eq. 
Let X(t) be the remaining fraction of the motion still to be traversed at time t.PLANNING OF MANIPULATOR TRAJECTORIES 187 coordinate frame along a straight-line path between two knot points specified by F0 and F1 in time T.4-50) where Rot (n. and 0 should be evaluated per step.o P(t) = P1 .C P(TI . n.4-48) and n and 0 in Eq. then F1 will be changing too. In order to accomplish this.4-49) Rot (n. 0) represents the resultant rotation of Ro 1R1 in quaternion form. respectively. where each coordinate frame is represented by a homogeneous transformation matrix. . (4.PI. 01) = Ro 1 RI . P( t') = P1 - (T . (T + t')2 4TT2 02 (4. The above equations for the position and orientation of the tool frame along the straight-line path produce a smooth transition between the two segments. AP2 = P2 .P2.t')2 API + (T + t')2 4TT1 4TT2 OP2 (4. One . The cartesian path control scheme described above requires a considerable amount of computation time. Several possible ways are available to deal with this problem.4-53) d OP2 P(t)11=T+1 = dt T2 (4.a) and Rot (n2. and T1 and T2 are the traversal times for the two segments.4-54) where Opi = PI .4-56) where t' = T1 .4-55) then integrating the above equation twice and applying the boundary conditions gives the position equation of the tool frame. Bounded Deviation Joint Path.188 ROBOTICS. It is worth pointing out that the angular acceleration will not be constant unless the axes n1 and n2 are parallel or unless one of the spin rates or 02 = z 02 2 T2 is zero.t is the time from the intersection of two segments.4-52) API 1 (4. AND INTELLIGENCE P(T1 + T) = Pl + d TOP2 T2 (4. Similarly. the orientation equation of the tool frame is obtained as n2. 02) = Ri 1 R2 The last two terms represent the respective rotation matrix in quaternion form.CONTROL. If we apply a constant acceleration to the transition. and it is difficult to deal with the constraints on the joint-variable space behavior of the manipulator in real time. VISION. SENSING. d2 dtz p(t) = ap 1a.4-57) where Rot (n1. Then motion execution would be trivial as the servo set points could be read readily from memory.PLANNING OF MANIPULATOR TRAJECTORIES 189 could precompute and store the joint solution by simulating the real time algorithm before the execution of the motion. Taylor [1979] proposed a joint variable space motion strategy called bounded deviation joint path. on the desired cartesian straight-line path. we have CAD q(t) = q1 - T1 . for transition between q0 to q. which selects enough intermediate points during the preplanning 'C7 cob `C1 phase to guarantee that the manipulator hand's deviation from the cartesian The scheme starts with a precomputation of all the joint solution vectors q. and q.4-61) .° and. and t' have the same meaning as discussed before. T2. r. The above equations achieve uniform velocity between the joint knot points and make smooth transitions with constant acceleration between segments. coo Q.q1. to q2. and T1. The difficulty of this method is that the number of intermediate points required to keep the manipulator hand acceptably close to the cartesian straight-line path depends on the particular motion being made.4-60) SR = I angle part of Rot (n.o. That is. Defining the displacement and rotation deviations respectively as 5p = I Pi(t) . which corresponds to the the manip ilator hand frame 'CD at the cartesian knot point Fl(t).t')2 4TT1 q (T _ t')2 + 47-T2 Oq2 (4. where Oq. 
are then used as knot points for a joint-variable space interpolation strategy analogous to that used for the position equation of the cartesian control path. corresponding to the knot points F. for motion from the knot point q0 to q1. . `'. and Fd(t). we have a. c°). The deviation error can be seen from the difference between the Fi(t).0 coo tea) 043H Q.q2.t Ti .Pd(t)1 (4. However. In view of this.D.) Oqi (4.4-58) q(t')=qi- (T . Oq2 = q2 .. Any predetermined interval small enough to guarantee small deviations will require a wasteful amount of precomputation time and memory storage.4-59) (4. i) = Rd '(t) Rj(t) = 101 'l7 . Another possible way is to precompute the joint solution for every nth sample interval and then perform joint interpolation using low-degree polynominals to fit through these intermediate points to generate the servo set points. The joint-space vectors q. which corresponds to the manipulator hand frame at the joint knot point qj(t). = q. the tool frame may deviate substantially from the desired straight-line path. straight-line path on each motion segment stays within prespecified error bounds. VISION. His algorithm is as follows. this algorithm selects enough joint knot points such that the manipulator hand frame will not deviate more than the prespecified error bounds along the desired straight-line path. (4.4-62) is satisfied... = RI Rot ni. Sp ax and SR ax for the displacement and orientation parts. Check error bounds. The maximum deviation error is usually reduced by approximately a factor of 4 for each recursive iteration. Err Si. we need to select enough intermediate points between two consecutive joint knot points such that Eq.2 1 where Rot (n. Find joint space midpoint. Compute the deviation error between Fand F. then stop.4-62) is satisfied. Compute the joint solution vectors q0 and qI corresponding to F0 and F1. Find the deviation errors. If Sp 5 Sp ax and S R S gR ax... SENSING. and the cartesian knot points Fi along the desired straight-line path. S2. (4. corresponding to the joint values q.. The algorithm converges quite rapidly to produce a good set of intermediate points. AND INTELLIGENCE and specifying the maximum deviations.. respectively. Compute joint solution. Algorithm BDJP: Given the maximum deviation error bounds SP ax and SR 'ax for the position and orientation of the tool frame. Sp = I An . and use q. Otherwise. Find cartesian space midpoint. though they are not a minimal set.PC SR = I angle part of Rot (n.q0. Compute the corresponding cartesian path midpoint Fc: Po + Pi PC = and 2 R. S4. and apply steps S2 to S5 recursively for the two subsegments by replacing FI with Fc and Fc with F0.. . Convergence of the above algorithm is quite rapid. .4-62) With this deviation error bounds. S3. compute the joint solution vector qc corresponding to the cartesian space midpoint FC.. we would like to bound the deviation errors as bp Smax p and bR G smax R (4. respectively. 1/z Aqi to compute the hand frame F. 0) = RC I R. Taylor [1979] presented a bounded deviation joint path which is basically a recursive bisector method for finding the intermediate points such that Eq. respectively. I = ICI S5.. 0) = Ro I R1.190 ROBOTICS: CONTROL. Compute the joint-variable space midpoint q»z =qiwhere Oq1 = qI . `?. Although it involves numerous nonlinear transformations between the cartesian and joint coordinates. This suggests that the control of the manipulator should be considered in two coherent phases of execution: off-line optimum trajectory planning. 
and each path segment specified by two adjacent knot points can then be interpolated by N joint polynomial functions.. Thus. 4.. to approximate a desired cartesian path in the joint-variable space. the actuator of each joint is subject to saturation and cannot furnish an unlimited amount of torque and force. the bounded deviation joint path scheme relies on a preplanning phase to interpolate enough intermediate points in the joint-variable space so that the manipulator may be driven in the joint-variable space without deviating more than a prespecified error from the desired straight-line path. [1983] adopted the idea of using cubic spline polynomials to fit the segment between two adjacent knots.problem with mixed constraints (path and torque constraints) in two different coordinate systems. i00 C/1 CS. [1983] proposed a set of joint spline functions to fit the segments among the selected knot points along the given cartesian path. torque and force constraints must be considered in the planning of straight-line trajectory. it becomes an optimization.. the joint level (Lee and Chung [1984]). Since cubic polynomial trajectories are smooth and have small overshoot of angular displacement between two adjacent knot points. These functions must pass through the selected knot points.. Lin et al. one for each joint. One must either convert the cartesian path into joint paths by some low-degree polynomial function approximation and optimize the joint paths and control the robot at . Lin et al. one function for each joint trajectory.3 Cubic Polynomial Joint Trajectories with Torque Constraint Taylor's straight-line trajectory planning schemes generate the joint-space vectors {q(t). '. However. q(t)} along the desired cartesian path without taking the dynamics of the manipulator into consideration._ 00p -W. In planning a cartesian straight-line trajectory. . one can select enough knot points along the path. '=t (~=p CAD CI- '-... This approach involves the conversion of the desired cartesian path into its functional representation of N joint trajectories.PLANNING OF MANIPULATOR TRAJECTORIES 191 Taylor [1979] investigated the rate of convergence of the above algorithm for a cylindrical robot (two prismatic joints coupled with a rotary joint) and found that it ranges from a factor of 2 to a factor of 4 depending on the positions of the manipulator hand. Q. .w. In summary. the path is constrainted in cartesian coordinates while the actuator torques and forces at each joint is bounded in joint coordinates.On.d CQt :D_ . 4(t). . it is easier to approach the trajectory planning problem in the joint-variable space...-. c-.2 .4. ti' vii A.. Hence.a? . . 04x. Joint displacements for the n . Thus. Since no transformation is known to map the straight-line path into its equivalent representation in the joint-variable space. or convert the joint torque and force bounds into their corresponding cartesian bounds and optimize the cartesian path and control the robot at the hand level (Lee and Lee [1984]). followed by on-line path tracking control. the curve fitting methods must be used to approximate the Cartesian path. . . SENSING.1 cubic polynomials. .)]. ti I ] . [H(t1 ) .. the objective is to find a cubic polynomial trajectory for each joint j which fits the joint positions [qj1(t1 ) . 2... Let H(t) be the hand coordinate system expressed by a 4 X 4 homogeneous transformation matrix.. H(t.. these ':V '-t '°G ments q j k at t = to for k = 3. velocity. all and qj. joint displacewhere t1 < t2 < .... 
two extra knot points with unspecified joint displacements must be added to provide enough degrees of freedom for solving the cubic polynomials under continuity conditions.2 equations need to be solved.11(ti) Ui Q11(t) = + (t .. aj. vj. . AND INTELLIGENCE selected knot points are interpolated by piecewise cubic polynomials. The hand is required to pass through a sequence of n cartesian knot points. as qj1.ti satisfying is the time spent in traveling segment i. . vj1. . is an ordered time sequence indicating when the hand should pass through these joint knot points. Using the continuity conditions.qNI). . ..- . VISION... . its second-time derivative QJi (t) must be a linear function of time t. time intervals must be adjusted subject to joint constraints within each of the n . defined on the time interval [ti. n . and acceleration are specified. 4. < t.^y Let Qui(t) be the piecewise cubic polynomial function for joint j between the knot points Hi and Hi+ 1. .4-63) where ui = ti+I twice and ... After solving the matrix equation.1 s. i = 1.qN2). .. . q21.. H(t2 ) .. Thus. The corresponding joint position vectors.. where q11 is the angular displacement of joint j at the ith knot point corresponding to Hi(t). the total number of knot points becomes n and each joint trajectory consists of n . However. qN ).. In addition. q2 and I are not specified. ( q 1 2 . the two extra knot points are then expressed as a combination of unknown variables and known constants.. velocity. n-1. n -1 ti+I . these are the two extra knot points required to provide the freedom for solving the cubic polynomials. .. `-' . a. Thus.. . Thus. Then. . In order to satisfy the continuity conditions for the joint displacement. N (4.. . for i = 1.ti) Ui Qji(ti+1) j = 1. To minimize the total traversal time along the path. velocity.1 CD' '=t 'L7 Q. at these n cartesian knot points can be solved using the inverse kinematics routine.192 ROBOTICS: CONTROL..ti Q. ( q 1 I . (q111. At the initial time t = t1 and the final time t = t. 0. the problem reduces to an optimization of `C7 minimizing the total traveling time by adjusting the time intervals..s: r-3 a-° piecewise cubic polynomials... q22. the joint displacement. the resulting spline functions are expressed in terms of time intervals between adjacent knots. q2I. together such that the required displacement. . ...2 are also specified for the joint trajectory to pass through. and acceleration are satisfied and are continuous on the entire time interval [ti.. Since the polynomial Qji(t) is cubic... only n .. qj2(t2). and accelera- tion on the entire trajectory for the Cartesian path.°r in. respectively. Integrating Qui(t) the boundary conditions of Q1 (ti) = q11 and '-' . The resultant matrix equation has a banded structure which facilitates computation. Then the problem is to spline Qui(t).. . ... This leads to a system of n .1.PLANNING OF MANIPULATOR TRAJECTORIES 193 Qji(t..4-64) j = 1. . AQ = b where Qj2(t2 ) Qj3(t3 ) 400 (4.l+I Ui uiQji(ti+I) 6 1 (t . ... Qji(t) is determined if Qji(ti) and Qji(ti+l) are known. .t) 3 + Qji(ti+I) (t . 2...i+I leads to the following interpolating functions: Qji(t) = Qji(ti) (ti+I .. +2u2 + u2 U2-i 2 I U2 U2 0 0 0 0 0 0 2(u2+U3) U3 U2 u3 0 2(u3+Ua) U4 0 2(U4+Un-3) Un-3 0 2(Un-3+Un-2) 0 0 Un-2 0 0 0 0 Ui_2 . for i = 1. ui uiQji(ti) 6 (ti+I .4-65) Q = Qj.ti)3 6U 6u . n-1 (4. n . . .1. + qi.+I) = qj. N Thus.1 and knowns ui for i = 1.2 linear equations with unknowns Qji(ti) for i = 2. n ..t) i = 1. 2.ti) q1.n-1(tn-I) 3u. .. n .. . 
qj3 U3 b= J _ 6 Ui-2 u. SENSING.. the total time spent on traveling the specified path approximated by the cubic polynomials is constrainted by the max. the cubic polynomial joint trajectory always has a unique solution.»-2 + 6qj» 6 U»-3 qj.n-3 -6 r L I U»-i + 111 J L qj»-vjnan+ 3 + + 6q»-z -U»-lajn Un-hilt-2 The banded structure of the matrix A makes it easy to solve for Q which is substituted into Eq. velocities. (4. 0-0 t-{ o-3 T = i=I ui (4. N i = 1.. The above banded matrix A of Eq....4-66) subject to the following constraints: Velocity constraint: I Qj. The problem can then be stated as: Minimize the objective function n-I ox.q4 . and jerk which is the rate of change of acceleration.. and accelerations. the total traveling time for the manipulator must be minimized. acceleration. o-'1 `. 11 qj. + u. 1 6 ( u»2 I + Ui-2 U»-3 a1. AND INTELLIGENCE and 6 qj3+qjs 1 U' U2 r -6 I I +ul vjl + 3 aj. and torque constraints. (4. The resulting solution Qji(t) is given in terms of time intervals ui and the given values of joint displacements. Since the actuator of each joint motor is subject to saturation and cannot furnish an unlimited amount of torque and force.3 imum values of each joint velocity.z -i 3 ap.qj3 U3 L 6 J r qjs ... . jerk..4-63) to obtain the resulting solution Qji(t). ..qj4 U4 q4 . Thus.J -ulaj! 1.qj4 U4 . qjl z Ut 1 o:) `U1 + U2 J r 2 6 U2 qjl +ul vjl +u"ajl 3 +6qj4 -6 U3 U2 U3 I qj3 r 6 qjs . acceleration.n-1 .. VISION.. In order to maximize the speed of traversing the path.194 ROBOTICS: CONTROL. (t) -per Vj j = 1.4-64) is always nonsingular if the time intervals ui are positive. This can be achieved by adjusting the time intervals ui between two adjacent knot points subject to the velocity. - uiwj.wj. respectively. . where T is the total traveling time...N Acceleration constraint: I Qji(t) I < Aj i = 1.4-64). Vj. - F qj. (4. n ..+1 . - t E [t. and torque limits of joint j. IQji(ti)I ] < Vj I i = 1. Differentiating Eq.i+l ...qji u.i+1)ui 2 u` + wj. leads to the following expressions for Qji(t) and Qji(t): G. Ui (t ..) ji Qji(t) = 2ui (t 1 .. and rj are. . ti+1. t.... 2. J1..i+I 1 qji Ui and Qji(t) = Li1 Ui (t . j = 1.N where I Qji(ti) I = wji qj. _ The maximum absolute value of velocity exists at ti.i+ 1 + 6 IQji(ti+1)I = 2 u + qj.. (t 2u.... or ti. the velocity.N i = 1. where ti e [ti. Velocity Constraints. t1] and satisfies Qji (ti) = 0... (Wji .. . Aj.1 (4... respectively.. IQji(ti+1)I.i+1.qji + (wji . 2. jerk. i + I .PLANNING OF MANIPULATOR TRAJECTORIES 195 i= 1. The above constraints can be expressed in explicit forms as follows. . N 3 Jerk constraint: dt3 Qji(t) < Jj I rj(t) I Torque constraint: < 1'j r-. max IQjil = max [IQji(ti)I. acceleration. The velocity constraints then become mar.wj..ti+1) where w ji is the acceleration at Hi and equal to Qji (ti) if the time instant at which Qji(t) passes through Hi is ti..t)2 + `+ wj. and replacing Qji(ti) and Qji(ti+1) by wji and wj. .ti) °14 i-.n-1 = 1..i+1)ui ui 6 .i+1 .4-69) j = 1. . Hollerbach [1984]). Thus the constraints are represented by Torque Constraints.QN. CAD r-. . 2.196 ROBOTICS: CONTROL. (3. . accelerations..i+1 .(Oj i+l)ui + qj.q Jerk Constraints. the objective is to find an appropriate optimization algorithm that will minimize the total traveling time subject to the velocity. The jerk is the rate of change of acceleration. 2..i+l . .N Qi(t) = (Q11(t). N (4..2. jerks.wj. VISION.. Results using this optimization technique can be found in Lin et al. 
then dynamic time scaling of the trajectory must be performed to ensure the satisfaction of the torque constraints (Lin and Chang [1985]. L1..i+I or ti 0 [ti.. [1983] utilized Nelder and Mead's flexible polyhedron search to obtain an iterative algorithm which minimizes the total traveling time subject to the constraints on joint velocities. ti+1] if wji = wj. .1 If the torque constraints are not satisfied.i+l and ti E [ti. .. -"+ .wji Ui j = 1. .. . ti+l] Acceleration Constraints. AND INTELLIGENCE and wjiwj.4-72) where j = 1. With this formulation. the Iwj.-.N Jj i = 1..2-25)] 7_j (t) N k=I C17 N N = EDjk(Qi(t))Qji(t) + E E hjk. [1983]. acceleration.4-70) wj. and Lin et al.qji 6 Ui 2(wji . . and torque constraints. The torque r(t) can be computed from the dynamic equations of motion [Eq.. Q2i(t)..n . i+ I I } . Thus.(Qi(t))Qki(t)Qmi(t) + Cj(Qi(t)) k=tin=1 (4. 2. SENSING.1 (4. .n . Thus. jerk... The acceleration is a linear function of time between two adjacent knot points. . the maximum absolute value of acceleration occurs at either ti or ti+ I and equals the maximum of { I wji I acceleration constraints become max {Iaw11I. There are several optimization algorithms available.(t))T i = 1.i+1) I Qji(ti) I = 0 if wji $ wj.. and torques.2...I} < Aj j = 1 . I wj.4-71) Cab .i+IUi + (wji .... acceleration.t. the most common approach is to plan the straight-line path in the joint-variable space using low-degree polynomials to approximate the path. In order to yield faster computation and less extraneous motion. Lin et al. REFERENCES Further reading on joint-interpolated trajectories can be found in Paul [1972].5 CONCLUDING REMARKS Two major approaches for trajectory planning have been discussed: the jointinterpolated approach and the cartesian space approach. They focused on the requirement that the joint trajectories must be smooth and continuous by specifying velocity and acceleration bounds along the trajectory. this decomposes the control of robot manipulators into off-line motion planning followed by on-line tracking control. [1983] used cubic joint polynomials to spline n interpolation points selected by the user on the desired straight-line path. a topic that is discussed in detail in Chap. 4-3-4 and five-cubic polynomial sequences have been discussed. Taylor [1979] improved the technique by using a quaternion approach to represent the rotational operation. These techniques represent a shift away from the real-time planning objective to an off-line planning phase. Brady et al. Lewis [1973. and torque constraints. using the homogeneous transformation matrix to represent target positions for the manipulator hand to traverse. 5. He also developed a bounded deviation joint control scheme which involved selecting more intermediate interpolation points when the joint polynomial approximation deviated too much from the desired straight-line path. Paul [1979] used a translation and two rotations to accomplish the straight-line motion a'° of the manipulator hand. the total traveling time along the knot points was minimized subject to joint velocity. Then.PLANNING OF MANIPULATOR TRAJECTORIES 197 4. CAD cartesian space is discussed by Paul [1979]. [1986]. In particular. 'CD ti. 1974]. and Lee et al. jerk. Movement between two consecutive target positions is accomplished by two sequential operations: a translation and a rotation to align the approach vector of the manipulator hand and a final rotation about the tool axis to align the gripper orientation. 'u. CC' '-' ate) . 
Several methods have been discussed in the cartesian space planning. Most of these joint-interpolated trajectories seldom include the physical manipulator dynamics and actuator torque limit into the planning schemes. lower-degree polynomial sequences are preferred. The joint-interpolated approach plans polynomial sequences that yield smooth joint trajectory. [1982]. In addition to the continuity coo constraints. Because servoing is done in the joint-variable space while a path is specified in cartesian coordinates. The joint trajectory is split into several trajectory seg0)) Coop C`5 ments and each trajectory segment is splined by a low-degree polynomial. Hollerbach [1984] developed a time-scaling scheme to determine whether a planned trajectory is realizable within the dynamics and torque limits which depend on instantaneous joint position and velocity. The design of a manipulator path made up of straight line segments in the . In essence. f3. AND INTELLIGENCE A quadratic polynomial interpolation routine in the joint-variable space is then used to guarantee smooth transition between two connected path segments.198 ROBOTICS: CONTROL. using the quaternion representation. by relaxing the normalized time to the servo time. Lee [1985] developed a discrete time trajectory planning scheme to determine the trajectory set points exactly on a given straight-line path which satisfies both the smoothness and torque constraints.50) to a final position (xf.. r.75). Due to the discrete time approximations of joint velocity.2 With reference to Prob. Thus. You may split the joint trajectory into several trajectory segments. one usually assumes that the maximum allowable torque is constant at every position and velocity. You may split the joint trajectory into several trajectory segments.1 A single-link rotary robot is required to move from 0(0) = 30° to 0(2) = 100° in 2 s.6. to reduce the computational cost. `o' '+. instead of using varying r. Other existing cartesian planning schemes are designed to satisfy the continuity and the torque constraints simultaneously. acceleration. The initial and final velocity and acceleration are "fit °a° n?. (a) determine the coefficients of a cubic polynomial that accomplishes the motion. The trajectory planning problem is formu- °Q.1. (b) determine the coefficients of a quartic polynomial that accomplishes the motion. extended Paul's method for a better and uniform motion. the optimization is realized by iterative search algorithms. To include the torque constraint in the trajectory planning stage. zero. solved the inverse kinematics.00. and found appropriate smooth.r 'C3 PROBLEMS 4. SENSING. and assume that each link is 1 m long. '-j "'t o. 4. and jerk.2. lower-degree polynomial functions which guaranteed the continuity conditions to fit through these knot points in the joint-variable space.o s. acceleration.0 9.' CD. For example. Then. COD ((DD -?'r Chi) 0. and jerk bounds which are assumed constant for each joint. yo) = (1. Taylor [1979].3 Consider the two-link robot arm discussed in Sec. [1983] and Luh and Lin [1984] used the velocity. Lin et al.. the optimization solution involves intensive computations which prevent useful applications. . -`=' dynamic constraint with the constant torque bound assumption was included along the trajectory. '. yf) _ (1. 0.96. VISION. the °°? both approaches neglect the physical manipulator torque constraint. The robot arm is required to move from an initial position (xo. 4.. 
and (c) determine the coefficients of a quintic polynomial that accomplishes the motion. 0. 'LS lated as a maximization of the distance between two consecutive cartesian set points on a given straight-line path subject to the smoothness and torque constraints. The joint velocity and acceleration are both zero at the initial and final positions. (a) What is the highest degree polynomial that can be used to accomplish the motion? i-+ `D_ (b) What is the lowest degree polynomial that can be used to accomplish the motion? 4. but rather on the joint-interpolated polynomial functions. They selected several knot points on the desired cartesian path. 3. Due to the joint-interpolated functions. In order to achieve real-time trajectory planning objective. Determine the coefficients of a cubic polynomial for each joint to accomplish the motion. the location of the manipulator hand at each servo instant may not be exactly on the desired path. torque constraint. 051 -0.4 In planning a 4-3-4 trajectory one needs to solve a matrix equation.ff and Tset-down)? 4.0 0 -1 -50.N.612 0 . -184.250 -34.. Ti. you are asked to design a 4-3-4 trajectory for the following conditions: The initial position of the robot arm is expressed by the homogeneous transformation matrix Tinitial -1 0 1 0 0 T initial - 0 0 0 600.0.660 Tinitial - -0.982 0. Does the matrix inversion of Eq.355 -0..933 Tfinal -0.0 0 -1 0 -100. 2.ft-off) if the hand is rotated 60° about the s axis at the initial point to arrive at the lift-off point? (b) What is the homogeneous transformation matrix at the final position (that is.789 -0. C1. Tfinal) if the hand is rotated -60' about the s axis at the set-down point to arrive at the final position? . What are the homogeneous transformation matrices at the lift-off and set-down positions (that is.876 596.25 mm). as in Eq.ft-t.0 1 0 0 The set-down position of the robot arm is expressed by the homogeneous transformation matrix Tset-down 0 1 1 0 0 100. you are asked to design a 4-3-4 trajectory for the following conditions: The initial position of the robot arm is expressed by the homogeneous transformation -°.599 1 -0.122 -0..25 mm) plus any required rotations.064 0.3-46).3-46) always exist? Justify your answer.612 0. CND matrix Tinitial: .750 0. 4.5 Given a PUMA 560 series robot arm whose joint coordinate frames have been established as in Fig.339 0 -0.6 Given a PUMA 560 series robot arm whose joint coordinate frames have been established as in Fig.0 0 1 0 0 (a) The lift-off and set-down positions of the robot arm are obtained from a rule of thumb by taking 25 percent of d6 ( the value of d6 is 56. 2.PLANNING OF MANIPULATOR TRAJECTORIES 199 4.047 0 0 The final position of the robot arm is expressed by the homogeneous transformation matrix Tfinal -0.436 0.924 -545. r7' '-n 000 c`" .099 892.500 -0. Tl. (4.145 0 CAA 412.11. What is the homogeneous transformation matrix at the lift-off (that is.0 Tset-down - 0 0 400..433 0.179 0 -0. 0 .869 1 The lift-off and set-down positions of the robot arm are obtained from a rule of thumb by taking 25 percent of d6 (the value of d6 is 56.11. (4. 10 Give a quaternion representation for the following rotations: a rotation of 60 ° about j followed by a rotation of 120 ° about i. z for the drive transform. 4. 4. Find the resultant rotation in quaternion representation. 4.4.9 Express the rotation results of Prob. . Determine 0. SENSING. VISION.200 ROBOTICS: CONTROL. 
The points A and B are given by a 4 x 4 homogeneous transformation matrices as -1 0 0 1 0 0 10 10 0 0 -1 0 0 0 1 10 30 10 1 A = 0 0 0 -1 0 10 1 B = -1 0 0 0 0 0 .11 Show that the inverse of the banded structure matrix A in Eq. 4. (4.7 A manipulator is required to move along a straight line from point A to point B.8 in quaternion form. AND INTELLIGENCE 4. as described in Sec. >G. y.4-65) always exists.8 A manipulator is required to move along a straight line from point A to point B rotating at constant angular velocity about a vector k and at an angle 0. 1 0 0 0 0 -1 0 B = -1 0 0 0 0 The motion from A to B consists of a translation and two rotations.-. where A and B are respectively described by -1 0 1 0 5 0 0 and -1 0 0 0 1 20 30 5 1 A= 0 0 10 15 .4' way Find the vector k and the angle 0. 0 and x. Aslo find three intermediate transforms between A and B. 4. 4.1. Also find three intermediate transforms between A and B. The second is the fine motion control in which the end-effector of the arm dynamically interacts with the object using sensory feedback information to complete the task.AC.G. and gravity loading on the links. 201 . the movement of a robot arm is usually accomplished in two distinct control phases.1 INTRODUCTION Given the dynamic equations of motion of a manipulator. The result is reduced servo response speed and damping. In general.fl s.CHAPTER FIVE CONTROL OF ROBOT MANIPULATORS Let us realize that what happens around us is largely outside our control. the purpose of robot arm control is to maintain the dynamic response of the manipulator in accordance with some prespecified performance criterion. From the control analysis point of view. but that the way we choose to react to it is inside our control. the control .. sophisticated control techniques. The first is the gross motion control in which the arm moves from an initial position/orientation to the vicinity of the desired target position/orientation along a planned trajectory. Any significant performance gain in this and other areas of robot arm control require the consideration of more efficient dynamic models. The first part of the control problem has been discussed extensively in Chap. Quoted by J. and the use of computer architectures. Current industrial approaches to robot arm control system design treat each joint of the robot arm as a simple joint servomechanism. manipula- tors controlled this way move at slow speeds with unnecessary vibrations. its solution is complicated by inertial forces. Although the control problem can be stated in such a simple manner. Petty in "Apples of Gold" may 5." . and (2) using these models to determine control laws or strategies to achieve the desired system response and performance. BCD problem consists of (1) obtaining dynamic models of the manipulator. These changes in the parameters of the controlled system are significant enough to render conventional feedback control strategies ineffective. This chapter concentrates on the latter part of pas the control problem. The servomechanism approach models the varying dynamics of a manipulator inadequately because it neglects the motion and configuration of the whole arm mechanism. This chapter focuses on deriving fro CDR . limiting the precision and speed of the end-effector and making it appropriate only for limited-precision tasks. coupling reaction forces. As a result. 3." CAD .O' .Oh row L. 
Resolved motion controls (cartesian space control) Resolved motion rate control Resolved motion acceleration control Resolved motion force control 3. VISION. 5. --- 0 4'. AND INTELLIGENCE strategies which utilize the dynamic models discussed in Chap. Considering the robot arm control as a path-trajectory tracking problem (see Fig. SENSING. C17 Trajectory planning Controller H (}Interface Disturbances Manipulator Sensors and estimators Figure 5. Each of the above control methods will be described in the following secs.. -fl tions.202 ROBOTICS: CONTROL. motion control can be classified into three major categories for the purpose of discussion: 1. we assume that the desired motion is specified by a time-based path/trajectory of the manipulator either in joint or cartesian coordinates. For these control methods.1). Adaptive controls Model-referenced adaptive control Self-tuning adaptive control Adaptive perturbation control with feedforward compensation Resolved motion adaptive control 'CS O0. ''' . Joint motion controls Joint servomechanism (PUMA robot arm control scheme) Computed torque technique Minimum-time control Variable structure control Nonlinear decoupled control 2.1 Basic control block diagram for robot manipulators. 3 to efficiently control a manipulator. 5°`0 . and decoding the VAL commands. At the lower level are the six 6503 microprocessors-one for each degree of freedom (see Fig.U+ 1. 2. At the top of the system hierarchy is the LSI-11/02 microcomputer which serves as a supervisory computer.2). which reside in the EPROM memory of the LSI-11/02 computer.CONTROL OF ROBOT MANIPULATORS 203 5.g.2). in addition to reporting appropriate error messages to the user.s. It communicates with the LSI-11/02 computer through an interface board which functions as a demultiplexer that routes trajectory set points information to each joint controller. c'°) 0"o 0Th "O' t VAL is a software package from Unimation Inc. and a current amplifier. The microprocessor computes the joint error signal and sends it to the analog servo board which has a current feedback designed for each joint motor. These functions.. a digital-to-analog converter (DAC). 5. each of which consists of a digital servo board. The 6503 microprocessor is an integral part of the joint controller which directly controls each axis of motion. 5. from world to joint coordinates or vice versa). Joint-interpolated trajectory planning. Coordinate systems transformations (e. various internal routines are called to perform scheduling and coordination functions.2 CONTROL OF THE PUMA ROBOT ARM Current industrial practice treats each joint of the robot arm as a simple servomechanism. and a power amplifier for each joint. 5. and (2) subtask coordination with the six 6503 microprocessors to carry out the command. an analog servo board. Once a VAL command has been decoded. There are two servo loops for each joint control (see Fig. The LSI-11/02 computer performs two major functions: (1) on-line user interaction and subtask scheduling from the user's VALt commands. interpreting. C's At the lower level in the system hierarchy are the joint controllers. Looking ahead two instructions to perform continuous path interpolation if the robot is in a continuous path mode. The outer loop provides position error information and is updated by the 6503 microprocessor about every 0. the controller consists of a DEC LSI-11/02 computer and six Rockwell 6503 microprocessors. The on-line interaction with the user includes parsing.2). 4. include: . 
1. Coordinate systems transformations (e.g., from world to joint coordinates or vice versa).
2. Joint-interpolated trajectory planning; this involves sending incremental location updates corresponding to each set point to each joint every 28 ms.
3. Acknowledging from the 6503 microprocessors that each axis of motion has completed its required incremental motion.
4. Looking ahead two instructions to perform continuous path interpolation if the robot is in a continuous path mode.

At the lower level in the system hierarchy are the joint controllers, each of which consists of a digital servo board, an analog servo board, and a power amplifier for each joint. The 6503 microprocessor is an integral part of the joint controller which directly controls each axis of motion. Each microprocessor resides on a digital servo board with its EPROM and DAC. It communicates with the LSI-11/02 computer through an interface board which functions as a demultiplexer that routes trajectory set-point information to each joint controller. The interface board is in turn connected to a 16-bit DEC parallel interface board (DRV-11) which transmits the data to and from the Q-bus of the LSI-11/02 (see Fig. 5.2). The microprocessor computes the joint error signal and sends it to the analog servo board, which has a current feedback designed for each joint motor.

There are two servo loops for each joint control (see Fig. 5.2). The outer loop provides position error information and is updated by the 6503 microprocessor about every 0.875 ms. The inner loop consists of analog devices and a compensator with derivative feedback to dampen the velocity variable. Both servo loop gains are constant and tuned to perform as a "critically damped joint system" at a speed determined by the VAL program. The main functions of the microprocessors include:

1. Every 28 ms, receive and acknowledge trajectory set points from the LSI-11/02 computer and perform interpolation between the current joint value and the desired joint value.
2. Every 0.875 ms, read the register value which stores the incremental values from the encoder mounted at each axis of rotation.
3. Update the error actuating signals derived from the joint-interpolated set points and the values from the axis encoders.
4. Convert the error actuating signals to current using the DACs, and send the current to the analog servo board which moves the joint.

Figure 5.2 PUMA robot arm servo control architecture.
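To make the two update rates concrete, the following sketch mimics the structure just described: trajectory set points arriving every 28 ms, a joint servo loop running every 0.875 ms, and a joint-interpolated reference in between. The gains, the double-integrator joint model, and the set-point values are assumptions made only for illustration; they are not taken from the PUMA controller.

```python
# Sketch of the two-rate servo structure: a new joint set point every 28 ms,
# servo updates every 0.875 ms, joint-interpolated references in between.
# All numerical values and the toy joint model are assumed for illustration.

SET_POINT_PERIOD = 0.028          # s, trajectory set-point rate
SERVO_PERIOD = 0.000875           # s, joint servo update rate
TICKS_PER_SET_POINT = round(SET_POINT_PERIOD / SERVO_PERIOD)   # 32 servo ticks

def servo_joint(set_points, kp=2500.0, kv=100.0):
    """Track joint-interpolated references with a position/velocity error law
    (a crude stand-in for the digital outer loop plus analog inner loop)."""
    q, qdot = set_points[0], 0.0
    for q_prev, q_next in zip(set_points[:-1], set_points[1:]):
        for k in range(TICKS_PER_SET_POINT):
            alpha = (k + 1) / TICKS_PER_SET_POINT
            q_ref = q_prev + alpha * (q_next - q_prev)   # interpolated set point
            error = q_ref - q                            # outer-loop position error
            qddot = kp * error - kv * qdot               # inner loop adds velocity damping
            qdot += qddot * SERVO_PERIOD                 # toy double-integrator joint
            q += qdot * SERVO_PERIOD
    return q

points = [0.0, 0.1, 0.2, 0.3, 0.4, 0.4, 0.4, 0.4]        # hypothetical set points (rad)
final_angle = servo_joint(points)
print(f"final joint angle: {final_angle:.3f} rad (last set point {points[-1]} rad)")
```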
It can be seen that the PUMA robot control scheme is basically a proportional plus integral plus derivative control method (PID controller). One of the main disadvantages of this control scheme is that the feedback gains are constant and prespecified. It does not have the capability of updating the feedback gains under varying payloads. Since an industrial robot is a highly nonlinear system, the inertial loading, the coupling between joints, and the gravity effects are all either position-dependent or position- and velocity-dependent terms. Furthermore, at high speeds the inertial loading term can change drastically. Thus, the above control scheme using constant feedback gains to control a nonlinear system does not perform well under varying speeds and payloads. In fact, the PUMA arm moves with noticeable vibration at reduced speeds. One solution to the problem is the use of digital control in which the applied torques to the robot arm are obtained by a computer based on an appropriate dynamic model of the arm. A version of this method is discussed in Sec. 5.3.

5.3 COMPUTED TORQUE TECHNIQUE

Given the Lagrange-Euler or Newton-Euler equations of motion of a manipulator, the control problem is to find appropriate torques/forces to servo all the joints of the manipulator in real time in order to track a desired time-based trajectory as closely as possible. The drive motor torque required to servo the manipulator is based on a dynamic model of the manipulator (L-E or N-E formulations). The motor-voltage (or motor-current) characteristics are also modeled in the computation scheme, and the computed torque is converted to the applied motor voltage (or current). This applied voltage is computed at such a high rate that sampling effects generally can be ignored in the analysis. Because of modeling errors and parameter variations in the model, position and derivative feedback signals will be used to compute the correction torques which, when added to the torques computed based on the manipulator model, provide the corrective drive signal for the joint motors.

5.3.1 Transfer Function of a Single Joint

This section deals with the derivation of the transfer function of a single joint robot from which a proportional plus derivative controller (PD controller) will be obtained. This will be followed by a discussion of controller design for multijoint manipulators based on the Lagrange-Euler and/or Newton-Euler equations of motion.

Most industrial robots are either electrically, hydraulically, or pneumatically actuated. Electrically driven manipulators are constructed with a dc permanent magnet torque motor for each joint. Basically, the dc torque motor is a permanent magnet, armature excited, continuous rotation motor incorporating such features as high torque-power ratios, smooth low-speed operation, linear torque-speed characteristics, and short time constants. Use of a permanent magnet field and dc power provide maximum torque with minimum input power and minimum weight. These features also reduce the motor inductance and hence the electrical time constant. The analysis here treats the "single joint" robot arm as a continuous time system, and the Laplace transform technique is used to simplify the analysis.

In Fig. 5.3, an equivalent circuit of an armature-controlled dc permanent magnet torque motor for a joint is shown based on the following variables:

Va   armature voltage, volts
Vf   field voltage, volts
La   armature inductance, henry
Lf   field inductance, henry
Ra   armature resistance, ohms
Rf   field resistance, ohms
ia   armature current, amperes
if   field current, amperes
eb   back electromotive force (emf), volts
T    torque delivered by the motor, oz-in
θm   angular displacement of the motor shaft, radians
θL   angular displacement of the load shaft, radians
Jm   moment of inertia of the motor referred to the motor shaft, oz-in-s²/rad
fm   viscous-friction coefficient of the motor referred to the motor shaft, oz-in-s/rad
JL   moment of inertia of the load referred to the load shaft, oz-in-s²/rad
fL   viscous-friction coefficient of the load referred to the load shaft, oz-in-s/rad
Nm   number of teeth of the input gear (motor gear)
NL   number of teeth of the output gear (load gear)

Figure 5.3 Equivalent circuit of an armature-controlled dc motor.

The motor shaft is coupled to a gear train to the load of the link. With reference to the gear train shown in Fig. 5.4, the total linear distance traveled on each gear is the same. That is,

d_m = d_L    and    r_m θ_m = r_L θ_L        (5.3-1)

where r_m and r_L are, respectively, the radii of the input gear and the output gear. Since the radius of the gear is proportional to the number of teeth it has, then

N_m θ_m = N_L θ_L        (5.3-2)

or

N_m / N_L = θ_L / θ_m = n < 1        (5.3-3)
where n is the gear ratio and it relates BL to 8by OL(t) = nO.3-5) BL(t) = n8.4 Analysis of a gear train.(t) + TL*(t) (5. I the motor shaft T(t) = T..3-6) If a load is attached to the output gear.. then the torque developed at the motor shaft is equal to the sum of the torques dissipated by the motor and its load. Bm(t) + n2fL)em(t) (5. eb(t) = KbO.n(t) (5. + n2fL is the effective viscous friction coefficient of the combined motor and load referred to the motor shaft... independent of speed and angular position. the torque developed at the motor shaft [Eq.3-8)] is ^v..3-12).3-13) where Jeff = J.3-11) Using Eqs. we have TL(t) = n2[JLBm(t) + fLBm(t)] (5.3-12) Using Eqs. TL 8. VISION. (5.3-16) . (5. (5. Applying Kirchhoff's voltage law to the armature circuit.(t) (t) _ = fTL(t) (5. T(t) = T. we have VQ(t) = Ruia(t) + LQ dldtt) + eb(t) (5. Since the torque developed at the motor shaft increases linearly with the armature current.r' (5. be equal to the work done by the load referred to the motor shaft.3-10) Recalling that conservation of work requires that the work done by the load referred to the load shaft.3-14) where KQ is known as the motor-torque proportional constant in oz in/A. we can now derive the transfer function of this single joint manipulator system. Based on the above results.3-10) and (5.3-5). SENSING.3-6).n + n2JL is the effective moment of inertia of the combined motor and load referred to the motor shaft and feff = f.3-9) T.208 ROBOTICS. (5.CONTROL. leads to TL*(t) TL(t)OL(t) = 0.(t) = Jmem(t) + finent(t) (5. TLOL.3-15) where eb is the back electromotive force (emf) which is proportional to the angular velocity of the motor. AND INTELLIGENCE The load torque referred to the load shaft is TL(t) = JLBL(t) + fLOL(t) and the motor torque referred to the motor shaft is (5...(t) + TL (t) = (Jm + n2JL)em(t) + (fin = Jeff e(t) + fell..3-9). and (5. we have 0 T(t) = KQia(t) . 3-14). _____ Va(s) nKa s(SRaJeff + Rafeff + KaKb) (5. (5.322) . we have (5.ns + 1) (5. (5. This allows us to simplify the above equation to r-. __». s(sRaJeff + Rafeff + KaKb) _ K s(T. we have T(s) = s2Jeff®m(s) + sfeff®m(s) (5.3-18) and (5. and substituting Ia(s) from Eq.(s) = K Ra +sLa (5.3-17). we have Va(s) .3-20) 5 [5 Jeff La + (Lafeff + RaJeff)s + Rafeff + KaKb] Since the electrical time constant of the motor is much smaller than the mechanical time constant.3-17) Taking the Laplace transform of Eq.sKb®.3-13). (5. Rafeff + KaKb RaJeff motor gain constant T. + sLa (5.3-19) Equating Eqs. (s). (5.3-18) Taking the Laplace transform of Eq. (5.3-4) and its Laplace transformed equivalence. using Eq. we obtain the transfer function from the armature voltage to the angular displacement of the motor shaft. ®m(s) Va(s) _ K.sKb®m(s) I (s) = R.3-19) and rearranging the terms.CONTROL OF ROBOT MANIPULATORS 209 and Kb is a proportionality constant in V s/rad.. (s) Va(s) - K.n(s) IL T(s) = K-L.3-21) where K and K. we can relate the angular position of the joint OL (s) to the armature voltage V. Taking the Laplace transform of the above equations and solving for Ia(s). Va(s) . = Rafeff + KaKb motor time constant Since the output of the control system is the angular displacement of the joint [®L(s)]. we can neglect the armature inductance effect. La. + C!1 Va(s) = KP[Di(s) . The block diagram of the system is shown in Fig. In other words.. Equation (5. e(t) = Bi(t) . 40000 and substituting Va(s) into Eq. 
The technique is based on using the error signal between the desired and actual angular positions of the joint to actuate an appropriate voltage. The actual angular position of the joint can be measured either by an optical encoder or by a potentiometer.AN .2 Positional Controller for a Single Joint The purpose of a positional controller is to servo the motor so that the actual angular displacement of the joint will track a desired angular displacement specified by a preplanned trajectory.3-24) E(s) G(s) = KKP S(SRaJeff + Raffeff + KaKb) (5. Va(t) = KPe(t) where KP is the position feedback gain in volts per radian.3-23) indicates that the actual angular dis+U+ Q-'' placement of the joint is fed back to obtain the error which is amplified by the position feedback gain KP to obtain the applied voltage. we have .6. In reality.210 ROBOTICS: CONTROL.3-22). 4. (5. 5. (5. + R 4fS + . (5. Taking the Laplace transform of Eq.322)] to a closed-loop control system with unity negative feedback.OL(t)] n n (5.fin Figure 5.5..3-22) is the transfer function of the "single joint" manipulator relating the applied voltage to the angular displacement of the joint. and the gear ratio n is included to compute the applied voltage referred to the motor shaft. SENSING. Eq.3-23) . as discussed in Chap. 5.U. VISION.3-23). 5. (5.OL(t) is the system error. This closedloop control system is shown in Fig.®L(S)] = KPE(s) n --1 n (5. yields the open-loop transfer function relating the error actuating signal [E(s) ] to the actual displacement of the joint: OL(S) 4U) con 4-' a.. the applied voltage to the motor is linearly proportional to the error between the desired and actual angular displacement of the joint.3.3-25 ) .t4 ^'4 - KP[Bi(t) .(s) 6(s) _ 1 OL(S) sL.5 Open-loop transfer function of a single joint robot arm. changed the single joint robot system from an open-loop control system [Eq. AND INTELLIGENCE V "(s) I T (s) se. Since. in addition to the positional error feedback. is the error derivative feedback gain. the applied voltage to the joint motor is linearly proportional to the position error and its derivative. Va(t) where K. With this added feed- back term. n +R 1_ stir + .3 Kp[BL(t) . Equation (5.3-26) s2 + [ (Rafeff +KaKb )/RaJeff ] s + KaKplRaJeff .3-27) . tf].6 Feedback control of a single joint manipulator. and the gear ratio n is included to compute the applied voltage referred to the motor shaft.fen n Figure 5. the desired joint trajectory can be described by smooth polynomial functions whose first two time derivatives exist within [to.OL(t)] + Kv[Bi(t) . After some simple algebraic manipulation. In order to increase the system response time and reduce the steady-state error.CONTROL OF ROBOT MANIPULATORS 211 Va(s) r(a) sK. that is. the velocity of the motor is measured or computed and fed back to obtain the velocity error which is multiplied by the velocity feedback gain K.. The angular velocity of the joint can be measured by a tachometer or approximated from the position data between two consecutive sampling periods. 4. we can obtain the closed-loop transfer function relating the actual angular displacement EL(s) to the desired angular displacement ® (s): Equation (5.3-27) indicates that.BL(t)] n Kpe(t) + n (5.3-26) shows that the proportional controller for the single joint robot is a second-order system which is always stable if all the system parameters are positive.. 
one can increase the positional feedback gain Kp and incorporate some damping into the system by adding a derivative of the positional error. as discussed in Chap. the desired velocity can be computed from Coo CIO OL(S) _ G(s) _ KaKplRaJeff KaKp Oi(s) 1 + G(s) s2RaJeff + s(Rafeff + KaKb) + KaKp (5. Taking the Laplace transform of Eq.3-27) and substituting Va(s) into Eq. Thus. The summation of these voltages is then applied to the joint motor.(s) eL(s) -. From Fig. Depending on the location of this zero.3-22) yields the transfer function relating the error actuating signal [E(s) ] to the actual displacement of the joint: can OL(S) E(s) = GPD(s) = Ka(Kp + sKy.6. the load.3-29) reduces to Eq. + n I 1 EI. the system could have a large overshoot and a long settling time.3-29) is a second-order system with a finite zero located at -Kpl K in the left half plane of the s plane...7. we 'G1 notice that the manipulator system is also under the influence of disturbances [D(s)] which are due to gravity loading and centrifugal effects of the link.212 ROBOTICS: CONTROL. Figure 5. SENSING.Jeff + S(Rafeff + KaKb + KaKv) + KaKp 1-+ (5.3-30) "(s) + sK. Eq. AND INTELLIGENCE the polynomial function and utilized to obtain the velocity error for feedback purposes. This closed-loop control system is shown in Fig.3-29) Note that if Kv is equal to zero.n(s) + D(s) D(s) (5. (5. .. the torque generated at the motor shaft has to compensate for the torques dissipated by the motor. (5.3-26). VISION.. Because of this disturbance. (5. (5.. from Eq. 5. (5.3-18). sL + R - etI + ferr n Y&I. Equation (5.7 Feedback control block diagram of a manipulator with disturbances.) s(SRaJeff + Rafeff + KaKb) KaKvS + KaKp S(SRaJeff + Rafeff + KaKb) Some simple algebraic manipulation yields the closed-loop transfer function relating the actual angular displacement [OL(S)] to the desired angular displacement [OL(s)]: OL(S) GPD(s) OL(S) 1 + GpD (s) KaKp Z SR. Cam' CSC T(s) = [SZJeff + sfeff]O. and also the disturbances. 5. 333)..3-35) 2 w. s2 + w.3-34) JeffR.fl !D` OL(s) = - Ka(Kp + sKK)OL(s) .. as indicated in the previous section. we see that where . In order to have good . such as fast rise time. 5. are.. 4U. We shall temporarily ignore 7-1 .3.nR + (5. This is covered in the following section. Assuming for a moment that the disturbances are zero. The reader will recall that the characteristic equation of a second-order system can be expressed in the following standard form: . the damping ratio and the undamped natural frequency of the system.3-31) and using the superposition principle. and fast settling time. (5. respectively.3-29) and (5. we can obtain the actual displacement of the joint from these two inputs. = '"' Rafeff + KaKb + KaKv Jeff R. The performance of the second-order system is dictated by its natural undamped frequency w.3-31) D(s) From Eqs.3-33) 2wand KaKp (5. (5. with particular emphasis on the steady state error of the system due to step and ramp inputs and the bounds of the position and velocity feedback gains. The effect of this finite zero usually causes a second-order system to peak early and to have a larger overshoot (than the second-order system without a finite zero).3-32) the effect of this finite zero and try to determine the values of Kp and Kv to have a critically damped or overdamped system. as follows: We are interested in looking at the performance of the above closed-loop system. (5. the manipulator system cannot have an underdamped response for a step input. = 0 (5.. small or zero steady-state error. 
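A quick way to see the effect of the added derivative feedback is to integrate the closed-loop dynamics implied by Eq. (5.3-29) numerically. The sketch below uses a simple forward-Euler loop with assumed parameter values; Kv is placed near the critical-damping value suggested by Eq. (5.3-38) so that the step response settles without overshoot.

```python
# Forward-Euler step-response sketch of the PD-controlled joint of Eq. (5.3-29):
#   Ra*Jeff*q'' + (Ra*feff + Ka*Kb + Ka*Kv)*q' + Ka*Kp*q = Ka*Kv*qd' + Ka*Kp*qd.
# For a step input qd' = 0, so the feedforward zero term drops out.
# Parameter values are assumed for illustration only.

Ka, Kb, Ra = 0.043, 0.043, 1.0
Jeff, feff = 0.01, 0.001
Kp = 5.0                     # position feedback gain (assumed)
Kv = 2.1                     # velocity feedback gain, near critical damping

qd = 1.0                     # desired joint angle: unit step (rad)
q, qdot = 0.0, 0.0
dt, t_end = 1.0e-4, 2.0

t = 0.0
while t < t_end:
    qddot = (Ka * Kp * (qd - q) - (Ra * feff + Ka * Kb + Ka * Kv) * qdot) / (Ra * Jeff)
    qdot += qddot * dt
    q += qdot * dt
    t += dt

print(f"joint angle after {t_end:.0f} s: {q:.4f} rad (desired {qd} rad)")
```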
For reasons of safety.. (5.3-31) that the system is basically a second-order system with a finite zero.. we see that from Eqs. The transfer function relating the disturbance inputs to the actual joint displacement is given by ®L (s) .CONTROL OF ROBOT MANIPULATORS 213 where D(s) is the Laplace transform equivalent of the disturbances. (5. Relating the closed-loop poles of Eq.3 Performance and Stability Criteria The performance of a closed-loop second-order control system is based on several criteria. We shall first investigate the bounds for the position and velocity feedback gains.t Q'' ('n s.L" and w. and the damping ratio .nRaD(s) S2RaJeff + S(Rafeff + KaKb + KaKv) + KaKp (5.3-29) to Eq.3-29) and (5. .5wr (5. From Eq. w < 0. >0 (5. the position feedback gain is found from the natural frequency of the system: ..3-34). (5.(t) = 0 Taking the Laplace transform. .KaKb (5.3-42) L Jeff J .3-39) (5.3-37)..3-37) where the equality of the above equation gives a critically damped system response and the inequality gives an overdamped system response.3-40) =0 (5. which requires that the system damping ratio be greater than or equal to unity. we find that _ Rafeff + KaKb + KaKv > 1 2 KaKpJeffRa (5. From Eq. (5. (5. the effective moment of inertia will increase which. AND INTELLIGENCE performance (as outlined above). VISION.3-40) is Jeffs2 + kstiff and solving the above characteristic equation gives the structural resonant frequency of the system wr = Although the stiffness of the joint is fixed.. the velocity feedback gain Kv can be found to be K. then the restoring torque kstiffO. the characteristic equation of Eq. (5. if a load is added to the manipulator's end-effector. reduces the structural resonant frequency. (5.Rafeff . R. may be set to no more than one-half of the structural resonant frequency of the joint.214 ROBOTICS: CONTROL. If a structural resonant frequency wo is meas- 'C7 K.. SENSING. where wr is the structural resonant frequency in radians per second.3-35).. z Kp = w" K K. Jeff0n(t) + kstiffO. in effect.3-41) r lI2 kstiff I (5.(t) opposes the inertial torque of the motor. from Eq.. Paul [1981] suggested that the undamped natural frequency w.3-36) Substituting w.3-38) In order not to excite the structural oscillation and resonance of the joint. If the effective stiffness of the joint is kstiff. that is. we would like to have a critically damped or an overdamped system.. K > i 2 KaKpJeffRa . The structural resonant frequency is a property of the material used in constructing the manipulator.3-34) into Eq. 3-39). then the structural resonant frequency at the other moment of inertia Jeff is given by 1 1/2 Jo Wr = WO (5. the velocity feedback gain K can be found from Eq. (5.3-44) 0 < Kp <_ ooOJORa (5. the error in the Laplace transform domain can be expressed as E(s) = ®(s) .' (5. v. such as gravity loading and centrifugal torque due to the velocity of the .) + KaKp C3.OL(s) _ [S2JeffRa + S(Rafeff + KaKb)]Oi(s) + nRaD(s) s2RaJeff + s(Rafeff + KaKb + KaK.Rafeff . and if the disturbance input is unknown.3-43) L Jeff J Using the condition of Eq. Kp from Eq. (5. ess(step) o essP = lim e(t) = sim sE(s) t-CO "F+ = lim s S-0 [(s2JeffRa + s(Rafeff + KaKb)]Als + nRaD(s) s2RaJeff + s(Rafeff + KaKb + KaK.3-47) For a step input of magnitude A..3-48) which is a function of the disturbances. we do know some of the disturbances. that is. > RaWO Jo leff . that is.. The system error is defined as e(t) = Bi(t) .3-36) is bounded by Wr2 0 < K_ < p which. 
Fortunately.CONTROL OF ROBOT MANIPULATORS 215 ured at a known moment of inertia J0. using Eq. Next we investigate the steady-state errors of the above system for step and ramp inputs. (5.3-45) 2 4Ka After finding Kp. provided the limits exist.3-32). Bi(t) = A.3-38): K.3-46) K.KaKb ' (5.) + KaKp nRaD(s) = lim s S-0 s2RaJeff + s(Rafeff + KaKb + KaKv) + KaKp I (5.OL(t). then the steady-state error of the system due to a step input can be found from the final value theorem. Using Eq.3-43). (5. reduces to JeffRa 4Ka (5. (5. Other disturbances that we generally do not know are the frictional torque due to the gears and the system noise. as time approaches infinity.TeOmp(s)] S2Rajeff + S(Ra. is bounded by Eq..3-45).3-47) is modified to 4-" [S2JeffRa E(s) - S(Rafeff + KaKb)]® (S) + nRa[TG(s) + Tc(s) + TeIS .3-50) s To compensate for gravity loading and centrifugal effects.8.3-49) is pr. the steady-state position error of the system is given by S-0 lim s nRa[TG(s) + Tc(s) + TeIS . 5. Thus. This is called feedforward compensation. With this computed torque and using Eq.3-50). and Te are disturbances other than the gravity and centrifugal torques and can be assumed to be a very small constant value. SENSING. VISION. D(s) = TG(s) + Tc(s) + Te (5. AND INTELLIGENCE joint.3-53) Since K. Hence. we can identify each of these torques separately as TD(t) = TG(t) + 7-C(t) + Te s"' (5. (5.3-49) where TG(t) and TC(t) are. The corresponding Laplace transform of Eq.216 ROBOTICS: CONTROL. then the steady-state position error reduces to . 8L(cc) approaches zero.3-52) For the steady-state position error. ®(s) = Als. Let us denote the computed torques as TCOmp(t) whose Laplace transform is Tromp (s).. respectively.Tcomp(s)] (5. the contribution from the disturbances due to the centrifugal effect is zero as time approaches infinity.Kb + KaKv) + KaKp For a step input.feff + K.3-51) s2 RaJeff + S(Ra. The reason for this is that the centrifugal effect is a function of Bi(t) and.3-54) COD fir' 115 Caw . we can precompute these torque values and feed the computed torques forward into the controller to minimize their effects as shown in Fig. its contribution to the steady-state position error is zero.. the error equation of Eq. torques due to gravity and centrifugal effects of the link. (5.feff + KaKb + KaKv) + KaKp (5. (5.d e= P nRa Te K KaKp (5. If the computed torque Tcomp(t) is equivalent to the gravity loading of the link. the above steady-state position error reduces to 4nTe essp = WOJo (5. (5. CONTROL OF ROBOT MANIPULATORS 217 A/D Tachometer Motor back e m.3-55) . (5.f. then ®(s) = Als2.0 s2RJ +sRafeff +KaKb +KaKv) +KKp ( a eff a fRa[TG(s) + Tc(s) + Tels .(s) + TM(s) + Tels . then the steadystate error of the system due to a ramp input is "a' ess(ramP) A essv = lim s [SZJeffRa + S(Rafeff + KaKb) ]A/s2 S.8 Compensation of disturbances.3-50). computation Attitude rate error feedback compensation Planning B.5(r) Ti program Torque computation I I Voltage-torque characteristic D/A H-o Drive motor Gear train Load Compute: Gravity loading Coriolis Centrifugal Inertial effect Attitude position error compensation Figure 5.Tcomp(s)] S2RaJeff + S(Rafeff + KaKb + KaKv) + KaKp + lim s S-0 _ (Rafeff + KaKb )A KaKp nRa[TG.Tcomp(S)] + lim s s-0 S2RaJeff + S(Rafeff + KaKb + KaKv) + KaKp (5. which is small because Te is assumed to be small. The computation of Ti(t) will be discussed later using the dynamic model of the manipulator. If the input to the system is a ramp function. 
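The gain-selection rules of Eqs. (5.3-43) to (5.3-46) amount to a few lines of arithmetic, sketched below. The motor constants, the resonance ω0 measured at inertia J0, and the effective inertia Jeff with the current payload are all assumed example values.

```python
# Sketch of the joint-servo gain bounds in Eqs. (5.3-43) to (5.3-46): keep the
# undamped natural frequency below half the structural resonant frequency, then
# pick Kv for (at least) critical damping.  All numbers are assumed examples.

import math

Ka, Kb, Ra = 0.043, 0.043, 1.0        # motor constants (assumed)
feff = 0.001                          # effective viscous friction (assumed)
J0 = 0.012                            # inertia at which the resonance was measured (assumed)
w0 = 2 * math.pi * 8.0                # measured structural resonance, rad/s (assumed)
Jeff = 0.020                          # effective inertia with the current payload (assumed)

wr = w0 * math.sqrt(J0 / Jeff)                   # Eq. (5.3-43): resonance at the new inertia
Kp_max = (w0**2) * J0 * Ra / (4.0 * Ka)          # Eq. (5.3-44): bound so that wn <= 0.5*wr
Kp = Kp_max                                      # use the largest admissible position gain
# Eq. (5.3-38)/(5.3-46): velocity gain for a critically damped (or overdamped) joint
Kv_min = (2.0 * math.sqrt(Ka * Kp * Jeff * Ra) - Ra * feff - Ka * Kb) / Ka

print(f"resonance with payload: {wr:.1f} rad/s")
print(f"position gain bound: Kp <= {Kp_max:.1f}")
print(f"velocity gain for critical damping: Kv >= {Kv_min:.1f}")
```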
and if we again assume that the disturbances are known as in Eq. aqi J f o r i = 1 .218 ROBOTICS: CONTROL..6 k=1m=1 .. in) E Tr a2 °T.. The computation of TcOmp(t) depends on the dynamic model of the manipulator. the steady-state velocity error reduces to (Raffeff + KaKb )A eSSV = . . the Lagrange-Euler equations of motion of a six joint manipulator. 3. g = (gX. j=max(i.3-57) where Ti(t) is the generalized applied torque for joint i to drive the ith link. k. °Ti is a 4 x 4 homogeneous link transformation matrix which relates the spatial relationship between two coordinate frames (the ith and the base coordinate frames). SENSING.3-60) . . T a°T. and Ji is the pseudo-inertia matrix of link i about the ith 104 coordinate frame and can be written as in Eq. r Jk aqi M0 J I r a 0Tr 1 T Il + r=ij=1k=1Tr EEE r a2 °T. agkagm i.k) 6 aqkj J. the computed torque [Tcomp(t)] needs to be equivalent to the gravity and centrifugal effects. and backlash. can be written as [Eq. . respectively.2-18). (3.3-57) can be expressed in matrix form explicitly as 6 6 k=1 Fi Dikjk(t) + E E hik. and qi is the generalized coordinate of the manipulator and indicates its angular position. (3. 0) is the gravity row vector and gI = 9. 6 (5. .6 (5.. 2. excluding the dynamics of the electronic control device.. in order to reduce the steady-state velocity error. aqi T 1 .3-58) where Dik = 6 Tr 1 a°T.. . Equation (5.. Ka Kp + eSSp (5.-. 2. 2. gy. .8062 m/s2.k.m = 1.--6 (5. J agfagk r. [ Jj aqi i. . gear friction.Emg j=i 6 a °T.4k(t)4n:(t) + ci = Ti(t) i = 1.3-59) hikm = j=max(i.6 (5.0 C). ri is the position of the center of mass of link i with respect to the ith coordinate system. 2.. 6 k v>' Ti(t) = E E Tr k=i j=1 6 r a °Tk a °Tk aq.2-24)] -\ T. AND INTELLIGENCE Again.k = a °T. Is.. 4i(t) and 4i(t) are the angular velocity and angular acceleration of joint i. . as discussed in Chap... gZ. r aqi 4j(04 (t) JII . In general. Thus.3-56) which has a finite steady-state error. 0. VISION. Q. (5.. Dig. . 43(t).. the computed torque for the gravity loading and centrifugal and Coriolis effects for joint i can be found. 46(t)] 41(t) hill hi12 h113 hil4 hil5 hi16 q2(t) 43 (t) 44 (t) hill X hi3l hi22 hi23 hi33 hi24 hi34 hi25 hi35 hi26 hi36 hi32 i = 1.. 6 . 45(t)..3-61) Eq. 45 (t). 44(t). 42(t).6 (5. 42(t). . 2. Di4. as TG(t) = Ci and i = 1. 2.3-57) can be rewritten in a matrix notation as 41(t) 42(t) Ti(t) = [Di1.CONTROL OF ROBOT MANIPULATORS 219 Ci = 6 -mfg l=i a °TJ aqi rj i=1. Di3. Did 43 (t) 44 (t) 45 (t) + 141 M.. respectively. 46(t)] 41(t) hill hi21 hi 12 hi 22 hi13 ^^d 46(t) .. 43(t). ..3-62) hi14 hi 15 hi16 q2 (t) 43 (t) + Ci hi23 hi33 hi24 hi34 hi 25 hi26 X hi3l hi32 hi 35 hi36 44 (t) 45 (t) hi6l hi 62 hi63 hi64 hi 65 hi66 q6 (t) Using the Lagrange-Euler equations of motion as formulated above.6 L hi61 hi62 hi63 hi64 hi65 hi66 45(t) q6(t) (5.2..Q' (5. (5. . Di5. 44(t).3-63) TC(t) = [ql (t).3-64) . . the structure of the control law has the form r. The control components compensate for the interaction forces among all the various joints and the feedback component computes the necessary correction torques to compensate for any deviations from the desired trajectory.3-68) .q(t) and e(t) '. VISION..3-67) where e(t) g qd(t) . h(q.3-65)] is very inefficient.7 A Since D(q) is always nonsingular.. and use a proportional plus derivative control to servo the joint motors. (5. 4)+c(q)=Da(q){qd(t)+Kv[gd(t)-4(t)I+Kp[gd(t)-q(t)l} + h. 4). the structure of the control law has the form of .. Because of this reason.220 ROBOTICS: CONTROL. 
As a result. then the position error vector e(t) approaches zero asymptotically. Thus. can be chosen appropriately so the characteristic roots of Eq.(q)]{9d(t) + K.3-66) If Da(q). '-° + ca (q) :-. C's e-y 8>~s~° .3-65) where K. and c(q).' T(t) = diag[D.4(t).[gd(t) . (5.3-65) into Eq. (q. 5. Paul [1972] concluded that real-time closed-loop digital control is impossible or very difficult.3-66) reduces to (5. (5.q(t)]} +^. (5.O- T(t)=DQ(q){9d(t)+Kv[gd(t)-g(t)I+KP[gd(t)-q(t)+ha(q. (3. SENSING. 4) and the off-diagonal elements of the acceleration-related matrix Da (q) . (3. Basically the computed torque technique is a feedforward control and has feedforward and feedback components. and c(q) in the L-E equations of motion [Eq.4 Controller for Multijoint Robots For a manipulator with multiple joints. and KP are 6 x 6 derivative and position feedback gain matrices. respectively. and the manipulator has 6 degrees of freedom. The computation of the joint torques based on the complete L-E equations of motion [Eq. a='".4(t)] + Kp[qd(t) .1 (5.2-26). In this case. respectively. 4).3.. Substituting r(t) from Eq.. g)+ca(q) (5. we have D(q)q(t)+h(q.3-65) by neglecting the velocity-related coupling term ha (q. it is customary to simplify Eq. (5.. KP and K. This is covered in the next section. AND INTELLIGENCE This compensation leads to what is usually known as the "inverse dynamics problem" or "computed torque" technique. D(q)[e(t) + Ke(t) + Kpe(t)] = 0 IID (5. 4)+ca(q) then Eq. 4). It assumes that one can accurately compute the counterparts of D(q). ha(q. -C' 4d(t) .2-26)] to minimize their nonlinear effects.3-67) have negative real parts. ca(q) are equal to D(q). h(q. one of the basic control schemes is the computed torque technique based on the L-E or the N-E equations of motion. 5. and c(q) in the equations of motion.e. In summary.. uncertainty about the inertia parameters. gear friction. where ff is the sampling frequency (f. velocity is expressed as radians per At rather than radians per second..7 a.3-69) into the N-E recursive equations can be viewed as follows: 1. The physical interpretation of putting Eq. The remaining terms in the N-E equations of motion will generate the correction torque to compensate for small deviations from the desired joint trajectory. This has the effect of scaling the link equivalent inertia up by fs . 5._. and gravity loading of the links. BCD (JO '°A . the feedback gain matrices KP and K (diagonal matrices) can be chosen as discussed in Sec. One of the main drawbacks of this control technique is that the convergence of the position error vector depends °. on the dynamic coefficients of D(q). . and time delay in the servo loop so that deviation from the desired joint trajectory will be inevitable. time is normalized to the sampling period At.qj(t)] + i=I KK[gq(t) . or as in Paul [1981] or Luh [1983b].3. coupling effects.3.qj(t) is the position error for joint j.q1(t)] (5. Based on complete L-E equations of motion.CONTROL OF ROBOT MANIPULATORS 221 A computer simulation study had been conducted to which showed that these terms cannot be neglected when the robot arm is moving at high speeds (Paul [1972]). The control law is computed recursively using the N-E equations of motion.3-69) where K. = 1/At).fl `G° The above recursive control law is a proportional plus derivative control and has the effect of compensating for inertial loading. The analogous control law derived from the N-E equations of motion can be computed in 0(n) time. (5. 2. 
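A minimal sketch of the control-law structure of Eq. (5.3-65) is given below. The dynamic-model terms are passed in as user-supplied functions; the toy two-joint model used in the demonstration (constant diagonal inertia, no velocity coupling, gravity on the first joint only) is an assumption chosen only to keep the example self-contained, not a real manipulator model.

```python
# Minimal sketch of the computed-torque law of Eq. (5.3-65):
#   tau = Da(q) [ qdd_d + Kv (qd_d - qd) + Kp (q_d - q) ] + ha(q, qd) + ca(q)
# The model callables and gains below are assumed example values.

import numpy as np

def computed_torque(q, qdot, q_d, qdot_d, qddot_d, D_fn, h_fn, c_fn, Kp, Kv):
    """Model-based feedforward torque plus PD correction on the tracking error."""
    e = q_d - q
    edot = qdot_d - qdot
    qddot_ref = qddot_d + Kv @ edot + Kp @ e        # servo part of Eq. (5.3-65)
    return D_fn(q) @ qddot_ref + h_fn(q, qdot) + c_fn(q)

# toy two-joint model (assumed values, for illustration only)
D_fn = lambda q: np.diag([0.5, 0.2])                      # inertia matrix
h_fn = lambda q, qdot: np.zeros(2)                        # Coriolis/centrifugal ignored
c_fn = lambda q: np.array([2.0 * np.cos(q[0]), 0.0])      # gravity on the first joint

Kp = np.diag([100.0, 100.0])
Kv = np.diag([20.0, 20.0])        # roughly critical damping of the error dynamics

tau = computed_torque(q=np.array([0.1, -0.2]), qdot=np.zeros(2),
                      q_d=np.array([0.5, 0.0]), qdot_d=np.zeros(2),
                      qddot_d=np.zeros(2),
                      D_fn=D_fn, h_fn=h_fn, c_fn=c_fn, Kp=Kp, Kv=Kv)
print("joint torques:", tau)
```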
The first term will generate the desired torque for each joint if there is no modeling error and the physical system parameters are known. The recursive control law can be obtained by substituting 4i(t) into the N-E equations of motion to obtain the necessary joint torque for each actuator: n n gr(t) = q (t) + J=I 'BCD K1[4a(t) . 4). a. An analogous control law in the joint-variable space can be derived from the N-E equations of motion to servo a robot arm.I and Kn are the derivative and position feedback gains for joint i respectively and ea(t) = qj(t) . i. there are errors due to backlash. the computed torque technique is a feedforward compensation control. In order to achieve a critically damped system for each joint subsystem (which in turn loosely implies that the whole system behaves as a critically damped system).3. However.5 Compensation of Digitally Controlled Systems In a sampled-data control system. the joint torques can be computed in 0(n4) time. h(q.. with a negative velocity. the sampling rate for a continuous time system is more stringent than that. one should be able to recover the signal.9. A typical voltage-torque curve is shown in Fig.9 Voltage-torque conversion curve. AND INTELLIGENCE It is typical to use 60-Hz sampling frequency (16-msec sampling period) because of its general availability and because the mechanical resonant frequency of most manipulators is around 5 to 10 Hz. usually 20 times the cutoff frequency is chosen. SENSING. the actual voltage-torque curves are not linear.. and F0 is the force/torque that the joint will exert at drive level V. To minimize any deterioration of the controller due to sampling. the sampling period must be much. VISION. 5. to minimize the effect of sampling. 3-70) 5.3.6 Voltage-Torque Conversion Torque in an armature-controlled dc motor is theoretically a linear function of the armature voltage. 1 _ off 1 -CD 20 w. a computer conversion of computed torque to required input voltage is usually accomplished via lookup tables or calculation from piecewise linear approximation formulas.. /27r 20f (5 . less than the smallest time constant of the arm). if the sampling rate is at least twice the cutoff frequency of the system. That is. '. The output voltage is usually a constant value and the voltage pulse width varies. due to bearing friction at low torques and saturation characteristics at high torques. However. CAD BCD CAD E-+ . Thus.222 ROBOTICS. Although the Nyquist sampling theorem indicates that. The slopes and slope differences are obtained from the experimental curves.. the rate of sampling must be much greater than the natural frequency of the arm (inversely. where Vo is the motor drive at which the joint will move at constant velocity exerting zero force in the direction of motion.. CONTROL. 'C3 At ce. C>" Gas Figure 5. For these reasons. ... Let us briefly discuss the basics of time-optimal control for a six-link manipulator.4-4) where f2(x) is an n x 1 vector-valued function..(ui )max in' for all t (5.4-3) where is a 2n x 1 continuously differentiable vector-valued function. 41(t). Furthermore. `-' J tfdt to = tf . .x2n(t)] . .to (5. 4T(t)] _ [q1(t). .4-7) . . . f2(x) = -D-1(x1)[h(x1.4n(t)] III III [x 1(t). The objective of minimum-time control is to transfer the end-effector of a manipulator from an initial position to a specified desired position in minimum time. x2) + c(x1)] and it can be shown that b(x1) is equivalent to the matrix D-1(x1).4-1) and an n-dimensional input vector as uT(t) = [T1(t). 
(5.4 NEAR-MINIMUM-TIME CONTROL For most manufacturing tasks. This prompted Kahn and Roth [1971] to investigate the time-optimal control problem for mechanical manipulators. Xz(t)] '. Iui I . the system is assumed to be in the initial state x(to) = x0. it is desirable to move a manipulator at its highest speed to minimize the task cycle time.4-3).Tr(t)] (5. The state space representation of the equations of motion of a six-link robot dimensional state vector of a manipulator as xT(t) _ [qT(t). Since D(q) is always nonsingular. while minimizing the performance index in Eq.q. (5. r2(t). the admissible controls of the system are assumed to be bounded and satisfy the constraints. Let us define a 2n- [x1(t). . .4-5) At the initial time t = to.4-7) and subject to the constraints of Eq.. the above equation can be expressed as X1(t) = X2(t) and %2(t) = f2[x(t)] + b[xl(t)]u(t) (5.fl can be formulated from the L-E equations of motion. x2(t).CONTROL OF ROBOT MANIPULATORS 223 5.4-2) The L-E equations of motion can be expressed in state space representation as i(t) = f[x(t). (5.4-6) Then the time-optimal control problem is to find an admissible control which transfers the system from the initial state x0 to the final state x f. and at the final minimum time t = tf the system is required to be in the desired final state x(tf) = xf..(t).. . (5. u(t)] (5. (5. and the second n%j(t) is the error of the rate of the angular position. as an alternative to the numerical solution. (5.. SENSING.4-4) and a Taylor series expansion is used to linearize the system about the ori.(t) The first nE.(t) = x. u*) p for all t e [to. the optimal adjoint variables p*(t). .224 ROBOTICS: CONTROL.4-11) PM . the equations of motion can be transformed. Furthermore. u*) x and H(x*. i .(tf) and ...aH(x*ap*.4-11).. tf] for all t e [to. the control problem becomes one of moving the system from an initial state (to) to the origin of the space. In terms of the optimal state vector x*(t). a numerical solution is usually the only approach to this problem.4-12) b-0 . v) = pTf(x. (5.n i = n + 1. .4-4)] by a linear system and analytically finding an optimal control O'. However. Kahn and Roth [1971] proposed an approximation to the optimal control which results in a near-minimumtime control. Due to the nonlinearity of the equations of motion. Therefore. .. Obtaining v*(t) from Eqs. A transformation is used to decouple the controls in the linearized system. O. 2. . (5.4-10) (5.. in practice. . the solution is optimal for the special initial and final conditions. VISION. Eq.. the optimal control vector v*(t). AND INTELLIGENCE Using the Pontryagin minimum principle (Kirk [1970]). using the new state variables. Defining a new set of dependent 'C) 0-o 'LS variables. The suboptimal feedback control is obtained by approximating the nonlinear system [Eq. i = 1 .4-8) x*(t) = aH(x*ap*.(t) . u) 'CS and for all admissible controls. In order to obtain the linearized system.4-9) (5. tf] 'yam (5. Because of this change of variables. p. tf] for all t e [to.(t) is the error of the angular position. the numerical solution only computes the control function (open-loop control) and does not accommodate any system disturbances. an optimal control which minimizes the above functional J must minimize the hamiltonian. 2n (5. v) + 1 the necessary conditions for v*(t) to be an optimal control are (5. and the hamiltonian function. 2n.. H(x.. u*) < H(x*.x.(t) = x. 2. the numerical procedures do . p*.4-8) to (5.. ( t ) . 
the computations of the optimal control have to be performed for each manipulator motion.. to .C+ s.d v. Hence. The linear system is obtained by a change of variables followed by linearization of the equations of motion. . i = 1. the optimization problem reduces to a two point boundary value problem with boun- dary conditions on the state x(t) at the initial and final times. 4-'° not provide an acceptable solution for the control of mechanical manipulators.4-12) is substituted into Eq. In addition. for the linear system. p*. (5. However. By properly selecting a set of basis vectors from the linearly independent columns of the controllability matrices of A and B to decouple the control function. and switching hypersurfaces. These regions are separated by curves in two-dimensional space. switching surfaces. In addition. (5. . a solution to the time-optimal control and switching surfaces]' problem can be obtained by the usual procedures. and by hypersurfaces in n-dimensional space. and v(t) is related to u(t) by v(t) = u(t) + c. this control method is usually too complex to be used for manipulators with 4 or more degrees of freedom and it neglects the effect of unknown external loads. 2. Although Eq.4-16)] generally results in response times and trajectories which are reasonably close to the time-optimal solutions. and thus we are interested in regions of the state space over which the control is constant. t Recall that time-optimal controls are piecewise constant functions of time.7w . .4-16) Vi.. These separating surfaces are called switching curves.. From this point on. where the vector c contains the steady-state torques due to gravity at the final state. 2. a new set of equations with no coupling in control variables can be obtained: (t) = controls: Bv(t) (5. we can obtain a three double-integrator system with unsymmetric bounds on J2i-1(0 = vi (5.4-13) is linear. respectively.5 vi < vi+ and Vi F = (Ui)max + Cl i = 1. The linearized and decoupled suboptimal control [Eqs.= -(Ui)max + Ci where ci is the ith element of vector c. the linearized equations of motion are 4(t) = At(t) + Bv(t) where (5. by surfaces in three-dimensional space.4-15) and (5.4-13) z.4-14) Using a three-link manipulator as an example and applying the above equations to it.4-15) 2i(t) = J2i-I where vi.CONTROL OF ROBOT MANIPULATORS 225 gin of the space. all sine and cosine functions of i are replaced by their series representations. 3 (5. the control functions v(t) are coupled.T(t) = ( S I . As a result. chemical. of VSS is that it has the so-called sliding mode on the switching surface. the bounds of the model parameters are sufficient to construct the controller..4-5). P6. . v) + b(el + pd)u(t) (5. VISION. Variable structure systems (VSS) are a class of systems with discontinuous feedback control. . .5-1) = (PT. SENSING. the system remains insensitive to parameter variations and disturbances and its trajectories lie in the switching surface. It is this insensitivity property of VSS that enables us to eliminate the interactions among the joints of a manipulator.) are defined in Eq. (5. AND INTELLIGENCE 5. defining the state vector xT(t) as . . we have changed the tracking problem to a regulator problem..226 ROBOTICS: CONTROL... Furthermore.46) A (PI .4-1). . the system is insensitive to system parameter variations in the sliding mode. 41. (5. the theory of VSS can be used to design a variable structure controller (VSC) which induces the sliding mode and in which lie the robot arm's trajectories.6 (5. 
The error equations of the system become e1(t) = v(t) and CCs v(t) = f2(el + pd.. v) can be constructed as !CI ui* (p. a variable structure control u(p. v6 ) (5.q6. . Variable structure control differs from time-optimal control in the sense that the variable structure controller induces the sliding mode in which the trajectories of the system lie. .5-2) where f2(. vi) > 0 ui(p. From Eq..'T XT = (qi. . VI ..5 VARIABLE STRUCTURE CONTROL In 1978.5-3) .. V) if si(ei. if si(ei. Within the sliding mode. Young [1978] proposed to use the theory of variable structure systems for the control of manipulators. Hence.. For the regulator system problem in Eq.pd and the velocity error vector e2(t) = v(t) (with vd = 0). v) = ui_ (p. .-+ cad . vi) < 0 -`0 . and aerospace industries. Let us consider the variable structure control for a six-link manipulator. For the last 20 years. The sliding phenomena do not depend on the system parameters and have a stable property. Such design of the variable structure controller does not require accurate dynamic modeling of the manipulator.. The main feature. VT) 'C7 and introducing the position error vector e1(t) = p(t) .5-2). (5. the theory of variable structure systems has found numerous applications in control of various processes in the steel.) and b(. V) i = 1. the controller [Eq. (5.5-4) and the synthesis of the control reduces to choosing the feedback controls as in Eq.5-3) so that the sliding mode occurs on the intersection of the switching planes.. 6 (5. A more detailed discussion of designing a multi-input controller for a VSS can be found in Young [1978]. and desired periodic trajectories. 0. When in the sliding mode.. ei = .6 (5.e.5-6) where C = diag [ cl . (5.CONTROL OF ROBOT MANIPULATORS 227 where si(ei. and polynomial) and obtained decoupled subsystems..5-4) as (5. e. C2.. . vi) = ciei + vi ci > 0 i = 1.5-6)] is used to control the manipulator. In summary. postural stability. . the controller produces a discontinuous feedback control signal that change signs rapidly. chattering) should be taken into consideration for any applications to robot arm control. As we can see. the sliding mode is obtained from Eq. . The dynamics of the manipulator in the sliding mode depend only on the design parameters ci.ciei i = 1. 5. .g.+ asymptotic stability of the system in the sliding mode and make a speed adjustment of the motion in sliding mode by varying the parameters ci. By solving the algebraic equations of the switching planes.O '-. vi) are the switching surfaces found to be si(ei. the computed torque technique.. 6 (5. Then. Their approach is different from the method of linear system decoupling a0) 7N. each representing 1 degree of freedom of the manipulator when the system is in the sliding mode. a.C tor (i. However. With the choice of ci > 0. The effects of such control signals on the physical control device of the manipula- C1. . the variable structure control eliminates the nonlinear interactions among the joints by forcing the system into the sliding mode.5-5) a unique control exists and is found to be Ueq = -D(p)(f2(p. ji(ei. Hemami and Camana [1976] applied the nonlinear feedback control technique to a simple locomotion system which has a particular class of nonlinearity (sine. (5.. vi) = 0 i=1.. Most of the existing robot control algorithms emphasize nonlinear compensations of the interactions among the links. v) + Cv) (5. we can obtain the can Vii . 
the controller [Eq.6 NONLINEAR DECOUPLED FEEDBACK CONTROL There is a substantial body of nonlinear control theory which allows one to design a' near-optimal control strategy for mechanical manipulators.5-3)] forces the manipulator into the sliding mode and the interactions among the joints are completely eliminated. cosine.. . .. .5-7) The above equation represents six uncoupled first-order linear systems. ... c6 ]. and Q x) are matrices of compatible order. To achieve such a high quality of control.6-1) results in the following expressions: and i(t) = A(x) + B(x)F(x) + B(x)G(x)w(t) y(t) = C(x) a-) (5. . Let us define a nonlinear operator NA as NAC. It provides an approximate optimal control for a manipulator. let us define the differential order d. Freund [1982]) which will be utilized together with the Newton-Euler equations of motion to compute a nonlinear decoupled controller for robot manipulators. . j = 1.6-1) where x(t) is an n-dimensional vector. Saridis and Lee [1979] proposed an iterative algorithm for sequential improvement of a nonlinear suboptimal control law. In this section.. m where Ci(x) is the ith component of C(x) and NACi(x) = Ci(x). of the nonlinear system as di = min j: !_N_1Cj(x) B(x) # 0. we shall briefly describe the general nonlinear decoupling theory (Falb and Wolovich [1967]. .6-4) into the system equation of Eq. SENSING. and G(x) is an m x m input gain matrix so that the overall system has a decoupled input-output relationship.. F(x) and G(x) are chosen.6-4) where w(t) is an m-dimensional reference input vector... (5. F(x) is an m x 1 feedback vector for decoupling and pole assignment.. VISION.6-3) Then. n) (5. as follows: F(x) = Fi (x) + Fz (x) F7.n (5. (5. .. this method also requires a considerable amount of computational time.6-6) . Substituting u(t) from Eq. Given a general nonlinear system as in OMs *(t) = A(x) + B(x)u(t) and y(t) = C(x) (5. 2. (5. AND INTELLIGENCE where the system to be decoupled must be linear.228 ROBOTICS: CONTROL. the control objective is to find a feedback decoupled controller u(t): u(t) = F(x) + G(x) w(t) (5.6-5) In order to obtain the decoupled input-output relationships in the above system.6-2) i = 1. and A(x).(x) I ax NA 'G(x) A(x) K= 1.. respectively.2. 2. Also. B (x) .. u(t) and y(t) are m-dimensional vectors. That is. .6-10) = C. while F *(x) performs the control part with arbitrary pole assignment. . Yi(di')(t) (5. we obtain (d) y. Then.6-9) and A is a diagonal matrix whose elements are constant values Xi for i = 1.6-4) and Eqs. The input gain of the decoupled part can be chosen by G(x).iyi(t) _ Xiwi(t) (5. (5. .6-7) C*(x) is an m-dimensional vector whose ith component is given by Ci*(x) = NA''Ci(x) M*(x) is an m-dimensional vector whose ith component is given by (5.6-11). # 0 (5. .i Yr (d-I) (t) + .6-8) Mi"(x) _ d. ..m. + ao.6-12) where UK.6-6) to (5. (5. (t) + ad.. (5.. 2. and D*(x) is an m x m matrix whose ith row is given by b-0 B(x) for d.CONTROL OF ROBOT MANIPULATORS 229 where F*(x) = -D*-I(x)C*(x) Fz (x) = -D*-I(x)M*(x) and G(x) = D*-'(x)A F*(x) represents the state feedback that yields decoupling.*(x) + Di*u(t) u-' (5. . and Xi are arbitrary scalars. # 0 (5. the system in Eq.6-1) can be represented in terms of y*(t) = C*(x) + D*(x) u(t) where y*(t) t is an output vector whose ith component is Yj (`O (t).-I aKiNACi(x) K=O for d.I.6-11) Utilizing Eq. the above equation can be rewritten as 6(t) = -D-1(6)[h(6.. 0(t) is the angular positions. .230 ROBOTICS: CONTROL. . ax Ci(x) B(x)F(x) + 8C1(x) ax [B(x)G(x)w(t)] Using Eqs. 
yi (t) = Ci (x) and. 0) '"'3 CI(0)1 uI (t)1 D16 D66 06(t) I h6(0. c(6) is a 6 x 1 gravitational force vector.(x) ax Y10)(t) = yi(t) = x(t) 8C.6-13) which can be rewritten in vector-matrix notation as D(6)6 + h(6. 6 (t) is the angular velocities. Since D(6) is always nonsingular. we have 9C. the Lagrange-Euler equations of motion of a six-link robot can be written as . 3.6-14) where u(t) is a 6 X 1 applied torque vector for joint actuators. AND INTELLIGENCE To show that the ith component of y*(t) has the form of Eq. and D(6) is a 6 x 6 acceleration-related matrix. 0) u6(t) (5.6-15) . D16 h1(0.(x) [A(x) + B(x)F(x) + B(x)G(x)w(t)] ax 3C1(x) = NA+BFCi(x) + Using ax [B(x)G(x)w(t)] the identity. (5. 0 (t) is a 6 x 1 acceleration vector. (5. As discussed in Chap."Ci(x) + [a/3XNA` ICi(x)]B(x)F(x).6-7). it becomes yiWWW(t) = Ci*(x) + D*(x)u(t) Similar comments hold for di =2. o.. .6-11). Thus.. 3. Then. 6) + c(6)] + D-I(6)u(t) (5. by differentiating it successively. let us assume that di = 1.. . NA'+BFCi(x) = N. VISION. h(6. (5.6-11). the resultant system has decoupled input-output relations and becomes a time-invariant second-order system which can be used to model each joint of the robot arm. 6) + c(6) = u(t) (5. to yield Eq.6-4) and (5.. yip 1 W (t) can be written as YiU)(t) = NAC1(x) + ..~3 D11 .. SENSING. 6) is a 6 x 1 Coriolis and centrifugal force vector. (5. = 2.(0.D(0) {-D-'(0)[h(0. D11 . explicitly. d.(t) = h. the above equation can be related to Eq.6-16) 0(t) = D16 D66 h6(0.X6W6 (t) _ (5.CONTROL OF ROBOT MANIPULATORS 231 or.6-17) where C. for joint i. (5. the controller u(t) for the decoupled system [Eq. (5.6-5)] must be u(t) _ -D*-'(x)[C*(x) + M*(x) ..Aw(t)} = h(0. .(21(t) = y. (5. 0) + c(0 )] + = C.(t) .6-11) as C'" Y. Did L a1606(t) + a0606(t) .*(x)u(t) [D-'(0)]iu(t) (5. OT(t)] and D.D(0)[M*(x) .*(x) = [D-'(0)]. hence. 0) + c.Aw(t)] .6-19) and [D-'(0)]i is the ith row of the D-'(0) matrix.XIWI (t) u.[Dil .101(t) + a010. Thus.(t) = -[D-i(0)]i[h(0.6-18) xT(t) = [OT(t).*(x) = -[D-'(0)].r' (5. 0) + c(0) . 0) + c(0)] .(0) .6-20) a. 0) + cl (0) (5.. Treating each joint variable 0i(t) as an output variable. 0) + c(0)] + M*(x) . D16 I h1(0.[h(0.Aw(t)] Explicitly. 0) + c6(0) J D16 D11 ui (t) 1 D [D16 66 J [U6 (t) J The above dynamic model consists of second-order differential equations for each joint variable...*(x) + D.6-21) . X6w6(t) Since D(O) is always nonsingular.6-22) a1606(t) + a0606(t) . (5.6 . In many applications.X1w1 (t) D(O) I I =0 (5.. aoi. which leads to 01(t) + a1101(t) + a0101(t) . 5. (5.-I .71 (5.6-20)] can be computed efficiently based on the manipulator dynamics. An efficient way of computing the controller u(t) is through the use of the Newton-Euler equations of motion. It is interesting to note that the parameters a1i.a11Di(t) .6-23) 06(t) + a1606(t) + a0606(t) . ... Hence. Resolved motion means that the motions of the various joint motors are combined and resolved into separately controllable hand motions along the world coordinate axes.aoi0i(t) in the Newton-Euler equations of motion.. 0i(t) is substituted with Xiwi(t) . decoupled. provided that the stability criterion is maintained. we have D(O) 0(t) + h(0.D(O) 1 I (5. VISION. 000 taneously at different time-varying rates in order to achieve desired coordinated hand motion along any world coordinate axis. Hence.7 RESOLVED MOTION CONTROL In the last section. 
several methods were discussed for controlling a mechanical manipulator in the joint-variable space to follow a joint-interpolated trajectory.6-24) which indicates the final decoupled input-output relationships of the system.6-14). and Xi can be selected arbitrarily. 2. 0) + c(0) ra1101(t) + a0101(t) .X6w6 (t) V.6-20) into Eq. L"" G7.232 ROBOTICS: CONTROL. the manipulator can be considered as six independent.. second-order. to compute the controller ui(t) for joint i. SENSING. This implies that several joint motors must run simulCD.. Substituting u(t) from Eq. time-invariant systems and the controller u(t) [Eq. 0) + c(O) . the above equation becomes Bi(t) + ali0i(t) + aoiOi(t) = XiWi(t) i = 1. 'C7 s.X1w1(t) = h(0. (5. we note that the controller ui(t) for joint i depends only on the current dynamic variables and the input w(t). which commands the manipulator hand to move in a desired cartesian direction in a coordinated position and rate control. . This enables the user to specify the CD-- O. . is more appropriate. resolved motion control. AND INTELLIGENCE From the above equation. .Y(t) baseThand(t) - ny(t) nz(t) sy(t) ay(t) sz(t) py(t) p .(t) 1 n(t) s(t) a(t) p(t) 1 az(t) 0 0 0 0 0 0 (5. The mathematical relationship between these two coordinate systems is important in designing efficient control in the Cartesian space. In general. n.CONTROL OF ROBOT MANIPULATORS 233 direction and speed along any arbitrarily oriented path for the manipulator to follow. the desired motion of a manipulator is specified in terms of a time-based hand trajectory in cartesian coordinates. The problem of finding the location of the hand is reduced to finding the position and orientation of the hand coordinate frame with respect to the inertial frame of the manipulator. This motion control greatly simplifies the specification of the sequence of motions for completing a task because users are usually more adapted to the Cartesian coordinate system than the manipulator's joint angle coordinates. while the servo control system requires that the reference inputs be specified in joint coordinates. as shown in Fig. .(t) sX(t) ax(t) p.10 The hand coordinate system. The location of the manipulator hand with respect to a fixed reference coordinate system can be realized by establishing an orthonormal coordinate frame at the hand (the hand coordinate frame).10. This can be conveniently achieved by a 4 x 4 homogeneous transformation matrix: C.7-1) Sweep Figure 5. 5. We shall briefly describe the basic kinematics theory relating these two coordinate systems for a six-link robot arm that will lead us to understand various important resolved motion control methods.' one _Q. 7-2) -so where sin a = Sa. Instead of using the rotation submatrix [n. and . a are the unit vectors along the principal axes of the coordinate frame describing the orientation of the hand. the instantaneous angular velocities of the hand coordinate frame about the principal Can all 411 [a(t). linear velocity v(t).. vy(t). then a rotation of the 0 angle about the yo axis. pz(t)]T "Z3 fi(t) (2(t) o v(t) N'(0. vv(t)]T [c (t). we can use three Euler angles. s. a(t). (2. and a rotation of the y angle about the z0 axis of the reference frame [Eq. yo.CySa + SySfCa I (Do SyCo (5. and n. py(t). sin y = Sy. Euler angles 4 (t). and z0 of the reference frame.(t) ar(t) ay(t) az(t) base Rhand(t) - cy Sy 0 -Sy 0 Cy 0 co 0 0 1 sot 0 1 0 0 0 1 0 0 Ca Sa . SENSING. sin 0 = So. s. respectively: "C3 Iii p(t) [px(t). yaw a(t). 
a] from the Euler rotation matrix resulting from a rotation of the a angle about the x0 axis. One can obtain the elements of [n. cosy = Cy. and angular velocity Q(t) vectors of the manipulator hand with respect to the reference frame.234 ROBOTICS: CONTROL. s.(t) s .2-19)]. wZ(t)]T (5. cos 0 = C0. y(t)]T all . and roll y(t).7-4) Since the inverse of a direction cosine matrix is equivalent to its transpose. respectively.Sa ca -so 0 co CryCo -S7Ca+CySfSa CyCa + S7SoSa SySa+CySoCa 1 . Thus: nx(t) ny(t) nz(t) s (t) s. pitch 0(t). a] to describe the orientation.7-3) The linear velocity of the hand with respect to the reference frame is equal to the time derivative of the position of the hand: v(t) d dtt) = p(t) (5. Let us define the position p(t). AND INTELLIGENCE where p is the position vector of the hand. wy(t). which are defined as rotations of the hand coordinate frame about the x0.-. VISION.' 111 cos a = Ca. III Cosa coca fem. (q)..Sy S7C/3 -so Its inverse relation can be found easily: . (5.y SyC36 + Cya .7-6) «(t) 4(t) Cy = sec (3 Sy L'2 0 0 CO wx(t) -SyC(3 C7s/3 CyC/3 wy(t) wZ(t) 3 (5.CyC3ci + Sy0 From the above equation.7-2): RdRT dt =- dRRT= dt 0 wz -Wz 0 w.Cy/3 C7Cf3& . N2(q). . 0 -wy 0 .CONTROL OF ROBOT MANIPULATORS 235 axes of the reference frame can be obtained from Eq.co.7-9) . wz(t)]T and [«(t).Sya 0 (5.7-5) so a* . y(t)]T can be found by equating the nonzero elements in the matrices: CyCo .S/3& + 0 . the relation between the [cwx(t). wy(t).ca 0 0 1 Cy 0 (5.7-7) (t) or expressed in matrix-vector form.. N6(q)] 4(t) (5. the linear and angular velocities of the hand can be obtained from the velocities of the lower joints: V(t) L 0(t) [N(q)]4(t) _ N. 0(t). SyS0 fi(t) [s0)] fi(t) (5.S7C/3cx .7-8) Based on the moving coordinate frame concept.x Wy . . and p is the position of the hand with respect to the reference coordinate frame. . Substituting 4(t) from Eq. 4)N-l(q) F V(t) Q(t) (5..7-11) into Eq. VISION.. AND INTELLIGENCE where 4(t) = (41 .. (5. 4)N-'(q) v(t) lator. 4)4(t) + N(q)q(t) :C" (5.N-'(q)N(q. The accelerations of the hand can be obtained by taking the time derivative of the velocity vector in Eq. and N(q) is a 6 x 6 jacobian matrix whose ith column vector Ni(q) can be found to be (Whitney [1972]): where x indicates the vector cross product.1)th coordinate frame with respect to the reference frame.7-11) Given the desired linear and angular velocities of the hand. t!7 t`. SENSING. cq6 )T is the joint velocity vector of the manipulator. . (5.7-13) and the joint accelerations q(t) can be computed from the hand velocities and accelerations as q(t) = N-'(q) rte. .236 ROBOTICS: CONTROL.46(t)]T is the joint acceleration vector of the manipu= N(q. C/) .7-12) where q(t) = [q1(t).7-12) gives v(t) St(t) 0(t) + N(q)q(t) (5.7. (5. 5. . then the joint velocities 4(t) of the manipulator can be computed from the hand velocities using Eq.. p!_1 is the position of the origin of the (i . zi_1 is the unit vector along the axis of motion of joint i. (5. this equation computes the joint velocities and indicates the rates at which the joint motors must be maintained in order to achieve a steady hand motion along the desired cartesian direction. If the inverse jacobian matrix exists at q(t).7-9): 4(t) = N-'(q) 0(t) v(t) c$0 (5.7-9): Q(t) i(t) I = N(q.1 for various resolved motion control methods .7-14) The above kinematic relations between the joint coordinates and the cartesian coordinates will be used in Sec. 5. l < j < m '-. Various methods of computing the inverse jacobian matrix can be used. 
and (5. q2q 2 . as in Eq. m = n. af. that is. (5.7-15). (5. given the desired rate along the world coordinates. and y)T q1z )T q(t) = generalized coordinates = (ql. the relationship is linear. yaw a.7-17) We see that if we work with rate control. For a more general discussion.7-18) From Eq. that is. (5.7-18). If we differentiate Eq.7-16) where N(q) is the jacobian matrix with respect to q(t). 1 < i < n.7-16).. (5. a. and roll y to the joint angle coordinate of a six-link manipulator is inherently nonlinear and can be expressed by a nonlinear vector-valued function as x(t) = f[q(t)] where f(q) is a 6 x 1 vector-valued function. resolved motion rate control block diagram is shown in Fig. pitch 0. When x(t) and q(t) are of the same dimension. (5. The relationship between the linear and angular velocities and the joint velocities of a six-link manipulator is given by Eq.11. as indicated by Eq. then the manipulator is nonredundant and the jacobian matrix can be inverted at a particular nonsingular position q(t): q(t) = N-'(q)x(t) (5.7.1 Resolved Motion Rate Control Resolved motion rate control (RMRC) means that the motions of the various joint motors are combined and run simultaneously at different time-varying rates in order to achieve steady hand motion along any world coordinate axis.7-15) with respect to time.CONTROL OF ROBOT MANIPULATORS 237 and in deriving the resolved motion equations of motion of the manipulator hand in Cartesian coordinates. (5. then the joint angles and the world coordinates are related by a nonlinear function. py.7-15) x(t) = world coordinates = (P. sweep py. reach p. A . if we assume that the manipulator has m degrees of freedom while the world coordinates of interest are of dimension n. 5. No = aq. such as lift p. pZ. The mathematics that relate the world coordinates.7-9). we have dx(t) dt = *(t) = N(q)4(t) (5. one can easily find the combination of joint motor rates to achieve the desired hand motion. 7-20) into Eq.7-23) If the matrix A is an identity matrix. is an n X 6 matrix that relates the orientation of the hand coordinate system to the world coordinate system. and A is an m X m symmetric. we obtain 4(t) = A-INT(q)[N(q)A-1NT(q)1-li(t) (5.7-19) where X is a Lagrange multiplier vector. then the manipulator is redundant and the inverse jacobian matrix does not exist.T[i . In this case.~f C = +4'4 + >. yields X= (5. .11 The resolved motion rate control block diagram.7-20) (5. positive 4(t) = A-'N T(q)X and (5. [ i Arm 6 Joint sensing Figure 5. and solving for X.7-16) with a Lagrange multiplier to a cost criterion. that is. SENSING.7-18). (5.7-24) where °R.7-22) Substituting X into Eq.N(q)4] 'a' definite matrix.10). VISION.238 ROBOTICS: CONTROL. (5. the desired hand rate motion h(t) along the hand coordinate system is related to the world coordinate motion by i(t) = 0Rhh(t) (5. we have C17 (5. AND INTELLIGENCE ill N-1(x) Joint controller 6. (5. it is of interest to command the hand motion along the hand coordinate system rather than the world coordinate system (see Fig. (5.7-21) i(t) = N(q)4(t) [N(q)A-lNT(q)l-li(t) Substituting 4(t) from Eq. if the rank of N(q) is n. In this case. then 4(t) can be found by minimizing an error criterion formed by adjoining Eq.. Quite often. Given the desired hand rate motion h(t) . (5. If m > n. 5. then Eq.7-20). Minimizing the cost criterion C with respect to 4(t) and X.7-23) reduces to Eq.7-21). (5. This reduces the problem to finding the generalized inverse of the jacobian matrix. 7-23) and (5.".7-28) . s. 
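As a small illustration of the resolved motion rate idea, here is a sketch of computing joint rates from a commanded hand rate, q̇ = N⁻¹(q)ẋ, for a planar two-link arm. This is an assumed toy example (the text's case is a six-link arm with a full 6 x 6 Jacobian); the link lengths and configuration are arbitrary, and the chosen configuration is nonsingular so the inverse exists.

```python
# Resolved motion rate control sketch for a planar two-link arm (illustrative;
# the text treats a six-link arm where N(q) is the full 6 x 6 Jacobian that
# must be re-evaluated and inverted at every sampling instant).
import numpy as np

l1, l2 = 0.5, 0.4                      # link lengths (m), assumed values

def jacobian(q1, q2):
    # 2 x 2 Jacobian of the hand position (x, y) w.r.t. the joint angles
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([0.3, 0.8])               # current joint angles (rad), nonsingular
xdot_desired = np.array([0.05, -0.02]) # desired hand velocity (m/s)

# q_dot = N^{-1}(q) x_dot, recomputed at each sampling time t
qdot = np.linalg.solve(jacobian(*q), xdot_desired)
print(qdot)                            # joint rates that realize the hand rate
```

Near a singular configuration the Jacobian becomes ill-conditioned and the computed joint rates blow up, which is the singularity problem the text flags for this control method.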
The actual and desired position and orientation of the hand of a manipulator can be represented by 4 x 4 homogeneous transformation matrices. z of the hand coordinate system. respectively. the angular position q(t) depends on time t.PX(t) en(t) = pd(t) . (11 and it assumes that the desired accelerations of a preplanned hand motion are specified by the user.7-25).7. The orientation submatrix [n.P(t) = py(t) . [1980b]) extends the concept of resolved motion rate control to include acceleration control. the orientation error is defined by the discrepancies between the desired and actual orientation axes of the hand and can be represented by e0(t) = 1/2[n(t) x nd + s(t) x sd + a(t) x ad] `'0 pd(t) . a are the unit vectors along the principal axes x.7-25) In Eqs.pz(t) (5.7-23) to (5. /3. (5.2 Resolved Motion Acceleration Control The resolved motion acceleration control (RMAC) (Luh et al. y. so we need to evaluate N-1(q) at each sampling time t for the calculation of 4(t). a] can be defined in terms of Euler angles of rotation (a.py(t) pd(t) . +-+ Similarly. and p(t) is the position vector of the hand with respect to the base coordinate system. as H(t) _ and n(t) s(t) 0 nd(t) a(t) p(t) 0 1 0 Hd(t) = (5. the joint rate 4(t) can be computed by: 4(t) = A-'NT(q)[N(q)A-INT(q)]-IORh i(t) (5. (5. and using Eqs. All the feedback control is done at the hand level.7-27) (5.CONTROL OF ROBOT MANIPULATORS 239 with respect to the hand coordinate system. The added computation in obtaining the inverse jacobian matrix at each sampling time and the singularity problem associated with the matrix inversion are important issues in using this control method. (5. y) with respect to the base coordinate system as in Eq. respectively. The position error of the hand is defined as the difference between the desired and the actual position of the hand and can be expressed as . 5. s.7-2). It presents an alternative position control which deals directly with the position and orientation of the hand of a manipulator.7-26) sd(t) 0 0 ad(t) 0 pd(t) 1 where n.7-24). . then the time derivative of x(t) is the hand "CU) acceleration Cow where N(q) is a 6 x 6 matrix as given in Eq. the desired velocity vd(t).7729) hand velocities. If this idea is extended further to solve for the joint accelerations from the hand acceleration K(t). to reduce the orientation error of the hand. and the desired acceleration vd(t) of the hand are known with respect to the base coordinate system. we can combine the linear velocities v(t) and the angular velocities W(t) of the hand into a six-dimensional vector as i(t). [Cod(t) . (5. Equation (5. one may apply joint torques and forces to each joint actuator of the manipulator. SENSING. In order to reduce the position error. C3.7-31) can be rewritten as eP(t) + kl6p(t) + kzep(t) = 0 C2.p C/) "_' .p(t)] `t7 (5.7-29) is the basis for resolved motion rate control where joint velocities are solved from the The closed-loop resolved motion acceleration control is based on the idea of reducing the position and orientation errors of the hand to zero.7-32) where ep(t) = pd(t) . VISION. v(t)..7-30) s. Similarly. then the desired position pd(t).7-32) have negative real parts. control of the manipulator is achieved by reducing these errors of the hand to zero. This requires that kI and k2 be chosen such that the characteristic roots of Eq. zoo X(t) = N(q)q(t) + N(q.240 ROBOTICS: CONTROL.p(t).7-10). AND INTELLIGENCE Thus.. 
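For the redundant case (m joints, n < m task coordinates), the weighted generalized-inverse solution of Eq. (5.7-23), q̇ = A⁻¹Nᵀ(q)[N(q)A⁻¹Nᵀ(q)]⁻¹ẋ, can be sketched numerically as below. The Jacobian and weighting matrix here are arbitrary stand-ins; with A equal to the identity the expression reduces to the Moore-Penrose pseudoinverse, i.e., the minimum-norm joint rate that reproduces the commanded hand rate.

```python
# Sketch of the weighted generalized inverse of Eq. (5.7-23) for a redundant
# arm (n task rates, m > n joints).  Matrix values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 5                            # 3 task rates, 5 joints (illustrative)
N = rng.standard_normal((n, m))        # Jacobian, assumed full row rank
A = np.diag([1.0, 2.0, 1.0, 4.0, 1.0]) # symmetric positive definite weights
xdot = np.array([0.02, -0.01, 0.03])   # desired task-space rates

A_inv = np.linalg.inv(A)
qdot = A_inv @ N.T @ np.linalg.solve(N @ A_inv @ N.T, xdot)

print(np.allclose(N @ qdot, xdot))     # True: the commanded hand rate is met
# With A = I this is the Moore-Penrose pseudoinverse solution, the minimum-
# norm joint rate consistent with the desired hand motion.
```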
This essentially makes the actual linear acceleration of the hand.W(t)] + k2ea (5.v(t)] + k2[pd(t) .7-33) Let us group vd and cod into a six-dimensional vector and the position and orientation errors into an error vector: C2. one has to choose the input torques and forces to the manipulator so that the angular acceleration of the c. The input torques and forces must be chosen so as to guarantee the asymptotic convergence of the position error of the hand. If the cartesian path for a manipulator is preplanned.. (5. V(t) W(t) = N(q)4(t) (5. hand satisfies the expression w(t) = cvd(t) + k. tion v(t) = v`t(t) + k. [vd(t) . satisfy the equar-. Equation (5. (5.7-31) where kI and k2 are scalar constants. 4)4(t) (5. Considering a six-link manipulator. 3 Resolved Motion Force Control The basic concept of resolved motion force control (RMFC) is to determine the applied torques to the joint actuators in order to perform the Cartesian position control of the robot arm. 5.7-33)..7-36) is the basis for the closed-loop resolved acceleration control for manipulators. and joint velocity 4(t) are measured from the potentiometers. of the manipulator. A control block diagram of the RMFC is shown in Fig. In order to compute the applied joint torques and forces to each are used.7-35) and solving for q(t) gives q(t) = N-I(q)[Xd(t) + kI (Xd(t) . (5. (5. w.7-31) and (5. (5. and the desired acceleration vd(t) of the hand obtained from a planned trajectory can be CS' CD' joint actuator of the manipulator. Finally the applied joint torques and forces can be computed recursively from the Newton-Euler equations of motion. this control method is characterized by extensive computational requirements.7-30) into Eq. and the need to plan a manipulator hand trajectory with acceleration information. . An advantage of RMFC is that the control is not based on the complicated dynamic equations of motion of the manipulator and still has the ability to compensate for changing arm configurations. the recursive Newton-Euler equations of motion AO) c. (5. N-I.12. The force convergent control determines the necessary joint torques to each actuator so that the end-effector can maintain the desired forces and moments obtained from the position control. and H(t) can be computed from the above equations. we have X(t) = xd(t) + kI [Xd(t) . or optical encoders. The basic con- trol concept of the RMFC is based on the relationship between the resolved CND t3" 'TI coo y. all the control of RMFC is done at the hand level. The position control calculates the desired forces and moments to be applied to the end-effector in order to track a desired cartesian trajectory. can -C3 . gravity loading forces on the links.N(q. singularities associated with the jacobian matrix. The joint position q(t). desired velocity vd(t).7. 4)4(t)] _ -k1 q(t) + N-'(q)[Xd(t) + k1 Xd(t) + k2e(t) .' used to compute the joint acceleration using Eq. As in the case of RMRC. The quantities v.*(t)] + k2e(t) (5.fl 5. A more detailed discussion can be found in Wu and Paul [1982].. The RMFC is based on the relationship between the resolved force vector F obtained from a wrist force sensor and the joint torques at the joint actuators. These values together with the desired position pd(t).7-35) Substituting Eqs. The control technique consists of the cartesian position control and the force convergent control.7-29) and (5. We shall briefly discuss the mathematics that governs this control technique. Similar to RMAC.N(q.7-36) Equation (5. N.*(t)) + k2e(t) . and internal friction. 
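The requirement that k1 and k2 make the characteristic roots of Eq. (5.7-32) have negative real parts can be checked directly. The sketch below is illustrative only: it picks gains from an assumed damping ratio and natural frequency, verifies the roots of s² + k1 s + k2, and integrates the scalar error equation to confirm the position error decays.

```python
# Gain-selection sketch for the error dynamics of Eq. (5.7-32),
#   e_p'' + k1 e_p' + k2 e_p = 0.  The numbers are illustrative.
import numpy as np

zeta, omega_n = 1.0, 10.0          # critically damped, 10 rad/s (assumed)
k1, k2 = 2.0 * zeta * omega_n, omega_n ** 2

roots = np.roots([1.0, k1, k2])    # characteristic roots of s^2 + k1 s + k2
print(roots, np.all(roots.real < 0))   # both real parts negative -> e_p -> 0

# Quick forward-Euler check that the position error actually decays
e, edot, dt = 0.05, 0.0, 1e-3      # 5 cm initial error
for _ in range(2000):              # 2 seconds of simulated time
    eddot = -k1 * edot - k2 * e
    edot += dt * eddot
    e += dt * edot
print(abs(e) < 1e-4)               # True: error driven essentially to zero
```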
4)4(t)] (5.7-36). N.CONTROL OF ROBOT MANIPULATORS 241 Combining Eqs. can be represented .242 ROBOTICS: CONTROL. respectively.12 Resolved motion force control. force vector. °A6(t + At). vy. the velocity (vx. (t) At (5.Wx(t) 1 Vx(t) WI(t) Vy(t) -Wy(t) 0 Wx(t) 0 Vz(t) 1 a [(°A6)-'(t)°A6(t + At)] (5. as in Eq. Wx. (r1.. r= (5. . My. and the joint torques. T(t) = NT(q)F(t) where N is the jacobian matrix.7-10). (5. 7-2.wy(t) Wx(t) 0 0 0 then.fl .-t 00o . The underlying relationship between these quantities is 7-")T F = (F. vy.7-31) is different from the above velo- . vZ)T. w)T can be obtained from the element of the following equation 1 . FY.7-38) . and the angular velocity (wx. FZ )T and (Mx. SENSING. AND INTELLIGENCE Figure 5.t" .o I which are applied to each joint actuator in order to counterbalance the forces felt at the hand.x can be obtained using the above equation.Wz(t) 1 a4) ^^^ Wy(t) . an appropriate time-based position trajectory has to be specified as functions of the arm transformation matrix °A6(t). F. VISION. MZ)T. (5.}' as 1 -WZ(t) 1 Wy(t) .Wx(t) 1 Vx(t) Vy(t) VZ(t) 1 °A6(t + At) = °A6(t) w. MZ ) T are the cartesian forces and moments in the hand coordinate system. where (Fx. The velocity error xy x used in Eq. My.7-39) 0 The cartesian velocity error id . the desired time-varying arm transformation matrix. co)" U) w )T about the hand coordinate system. wy. Mz.7-37) Since the objective of RMFC is to track the cartesian position of the endeffector. That is. the desired cartesian velocity xd(t) = (vx. Fy. vZ. 0 If the error between the measured force vector F0 and the desired cartesian force is greater than a user-designed threshold OF(k) = F"(k) .. I.742) have negative real parts. (5. x(t) will converge to xd(t) asymptotically. the velocity error is obtained simply by differentiating pd(t) . (5.(t) + Kv*e(t) + Kpxe(t) = 0 By choosing the values of K. if the mass and the load approaches the mass of the manipulator. (5. Based on the above control technique. Then.7-45) 0 . the desired cartesian forces Fd can be resolved into the joint torques: T(t) = NT(q)Fd = NT(q) MX(t) (IQ . The force convergent control method is based on the Robbins-Monro stochastic approximation method to determine the actual cartesian force FQ so that the observed cartesian force F0 (measured by a wrist force sensor) at the hand will converge to the desired cartesian force Fd obtained from the above position control w.7-43) where M is the mass matrix with diagonal elements of total mass of the load m and the moments of inertia I. as compared with the mass of the manipulator.x. (5. Similarly.x(t)] or (5. But. technique. This can be done by setting the actual cartesian acceleration as X(t) = Xd(t) + K.7-37)..7-40) At Based on the proportional plus derivative control approach. the desired cartesian forces and moments to correct the position errors can be obtained using Newton's second law: Fd(t) = M5(t) (5. and Kp so that the characteristic roots of Eq. then the actual cartesian force is updated by Fa(k + 1) = Fa(k) + yk AF(k) (5. then we want the actual cartesian acceleration x(t) to track the desired cartesian acceleration as closely as possible. a force convergence control is incorporated as a second part of the RMFC.7-41) (5..7-31).7-42) K .Fo(k). This is due to the fact that some of the joint torques are spent to accelerate the links.*(t)] + KK[xd(t) .p(t). the position of the hand usually does not converge to the desired position. using the Eq. 
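The core RMFC relationship τ(t) = Nᵀ(q)F(t) of Eq. (5.7-37) maps a Cartesian force/moment vector felt at the hand into the joint torques that counterbalance it. The sketch below uses an arbitrary 6 x 6 Jacobian and an assumed payload-weight wrench; it is only meant to show the matrix operation, not a particular arm.

```python
# Sketch of Eq. (5.7-37): joint torques that balance a Cartesian force/moment
# vector F felt at the hand, via the Jacobian transpose.  Values are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
N = rng.standard_normal((6, 6))              # 6 x 6 Jacobian at the current q
F = np.array([0.0, 0.0, -9.81 * 2.0,         # 2 kg payload weight along -z (N)
              0.0, 0.0, 0.0])                # no moment components (N-m)

tau = N.T @ F                                # tau = N^T(q) F
print(tau)
```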
In order to compensate for these loading and acceleration effects.. the desired cartesian acceleration xd(t) can be obtained as: Xd(t) = xd(t + At) - xd(t) (5. In Eq. the above RMFC works well when the mass and the load are negligible. if there is no error in position and velocity of the hand..[Xd(t) . at the principal axes of the load.° "L3 L"..CONTROL OF ROBOT MANIPULATORS 243 city error because the above error equation uses the homogeneous transformation matrix method. lyy.7-44) In general. The manipulator is controlled by adjusting the position and velocity feedback gains to follow the model so that its closed-loop performance characteristics closely match the set of desired performance characteristics in the reference model.244 ROBOTICS: CONTROL. O°'C: BCD coo p. A linear second-order time invariant differential equation is selected as the reference model for each degree of freedom of the robot arm. the payload is taken into consideration by combining it to the final link. N. 5. Dubowsky and DesForges [1979] proposed a simple model-referenced adaptive control for the control of mechanical manipulators. the selected reference model provides an effective and flexible means of specifying desired closed-loop performance of the controlled system. . A general control block diagram of the model-referenced adaptive control system is shown in Fig.8 ADAPTIVE CONTROL Most of the schemes discussed in the previous sections control the arm at the hand or joint level and emphasize nonlinear compensations of the interaction forces between the various joints. The adaptation algorithm is driven by the errors between the reference model outputs and the actual system outputs. 5. the value of N can be chosen based on the force convergence. . Based on a computer simulation study (Wu and Paul [1982]). These changes in the payload of the controlled . VISION. AND INTELLIGENCE where yk = 11(k + 1) for k = 0.. 1. Z5' :'_ '±. ca) .h system often are significant enough to render the above feedback control strategies ineffective.8. the RMFC with force convergent control has the advantage that the control method can be extended to various loading conditions and to a manipulator with any number of degrees of freedom without increasing the computational complexity. As a result.1 Model-Referenced Adaptive Control Among various adaptive control methods. These control algorithms sometimes are inadequate because they require accurate modeling of the arm dynamics and neglect the changes of the load in a task cycle. the value of N . Theoretically. which limits the precision and speed of the end-effector.. must be large. Any significant gain in performance for tracking the desired time-based trajectory as closely as possible over a wide range of manipulator motion and payloads require the consideration of adaptive control techniques. in practice.O. . ate) 5. its Ll. . The concept of model-referenced adaptive control is based on selecting an appropriate reference model and adaptation algorithm which modifies the feedback gains to the actuators of the actual system. and the end-effector dimension is assumed to be small compared with the length of other links.13.. Then. this adaptive control scheme only Op. In their analysis. The result is reduced servo response speed and damping. However. In summary. the model-referenced adaptive control (MRAC) is the most widely used and it is also relatively easy to implement. a value of N = 1 or 2 gives a fairly good convergence of the force vector. SENSING. 
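Before turning to the adaptive schemes, here is a small sketch of the force-convergent iteration of Eq. (5.7-45) described above, F_a(k+1) = F_a(k) + γ_k ΔF(k) with γ_k = 1/(k+1). The "measured" wrist-sensor force below is a toy stand-in (an assumed 80 percent force-delivery factor plus noise), used only to show how the stochastic-approximation correction drives the observed force toward the desired one.

```python
# Sketch of the force-convergent iteration of Eq. (5.7-45):
#   F_a(k+1) = F_a(k) + gamma_k * (F_d - F_o(k)),  gamma_k = 1/(k+1).
# The "measured" force below is a toy stand-in for the wrist-sensor reading.
import numpy as np

rng = np.random.default_rng(2)
F_d = np.array([0.0, 0.0, -10.0])        # desired Cartesian force (N)
F_a = np.zeros(3)                        # commanded force, initial guess

for k in range(200):
    # toy wrist-sensor model: the arm delivers only 80% of the command + noise
    F_o = 0.8 * F_a + 0.05 * rng.standard_normal(3)
    gamma_k = 1.0 / (k + 1)
    F_a = F_a + gamma_k * (F_d - F_o)    # Robbins-Monro correction

print(F_a)        # the command settles where the observed force matches F_d
print(0.8 * F_a)  # approximately F_d
```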
13 A general control block diagram for model-referenced adaptive control.

[Figure 5.13: the reference input r drives both the manipulator, whose state is x = (θᵀ, θ̇ᵀ)ᵀ and whose position and velocity feedback gains are adjustable, and the reference model; an adaptation mechanism adjusts the feedback gains from the error between the model response and the arm response.]

Such a model-referenced adaptive control algorithm does not require a complex mathematical model of the system dynamics nor a priori knowledge of the environment (loads, etc.), and it requires only moderate computations which can be implemented with a low-cost microprocessor. The resulting model-referenced adaptive system is capable of maintaining uniformly good performance over a wide range of motions and payloads.

Defining the vector y(t) to represent the reference model response and the vector x(t) to represent the manipulator response, joint i of the reference model can be described by

$$a_i \ddot{y}_i(t) + b_i \dot{y}_i(t) + y_i(t) = r_i(t) \qquad (5.8\text{-}1)$$

In terms of the natural frequency $\omega_{ni}$ and damping ratio $\zeta_i$ of a second-order linear system, $a_i$ and $b_i$ correspond to

$$a_i = \omega_{ni}^{-2} \quad \text{and} \quad b_i = 2\zeta_i\,\omega_{ni}^{-1} \qquad (5.8\text{-}2)$$

If we assume that the manipulator is controlled by position and velocity feedback gains and that the coupling terms are negligible, then the manipulator dynamic equation for joint i can be written as

$$\alpha_i(t)\,\ddot{x}_i(t) + \beta_i(t)\,\dot{x}_i(t) + x_i(t) = r_i(t) \qquad (5.8\text{-}3)$$

where the system parameters $\alpha_i(t)$ and $\beta_i(t)$ are assumed to vary slowly with time.

Several techniques are available to adjust the feedback gains of the controlled system. Due to its simplicity, a steepest descent method is used to minimize a quadratic function of the system error, which is the difference between the response of the actual system [Eq. (5.8-3)] and the response of the reference model [Eq. (5.8-1)]:

$$J_i(e_i) = \tfrac{1}{2}\bigl(k_2\ddot{e}_i + k_1\dot{e}_i + k_0 e_i\bigr)^2 \qquad i = 1, 2, \ldots, n \qquad (5.8\text{-}4)$$

where $e_i = y_i - x_i$, and $\dot{y}_i(t)$ and $\ddot{y}_i(t)$ are the first two time derivatives of the response of the reference model. Using a steepest descent method, the parameter adjustment mechanism which will minimize the system error is governed by

$$\dot{\alpha}_i(t) = \bigl[k_2\ddot{e}_i(t) + k_1\dot{e}_i(t) + k_0 e_i(t)\bigr]\bigl[k_2\ddot{u}_i(t) + k_1\dot{u}_i(t) + k_0 u_i(t)\bigr] \qquad (5.8\text{-}5)$$

$$\dot{\beta}_i(t) = \bigl[k_2\ddot{e}_i(t) + k_1\dot{e}_i(t) + k_0 e_i(t)\bigr]\bigl[k_2\ddot{w}_i(t) + k_1\dot{w}_i(t) + k_0 w_i(t)\bigr] \qquad (5.8\text{-}6)$$

where $u_i(t)$ and $w_i(t)$ and their derivatives are obtained from the solutions of the following differential equations:

$$a_i \ddot{u}_i(t) + b_i \dot{u}_i(t) + u_i(t) = -\ddot{y}_i(t) \qquad (5.8\text{-}7)$$

$$a_i \ddot{w}_i(t) + b_i \dot{w}_i(t) + w_i(t) = -\dot{y}_i(t) \qquad (5.8\text{-}8)$$

and the values of the weighting factors $k_j$ are selected from stability considerations to obtain stable system behavior. A block diagram of the control system is shown in Fig. 5.14. The closed-loop adaptive system involves solving the reference model equations for a given desired input; the differential equations in Eqs. (5.8-7) and (5.8-8) are then solved to yield $u_i(t)$ and $w_i(t)$ and their derivatives for Eqs. (5.8-5) and (5.8-6); and, finally, solving the differential equations in Eqs. (5.8-5) and (5.8-6) yields $\alpha_i(t)$ and $\beta_i(t)$.

The control algorithm assumes that the interaction forces among the joints are negligible, and Dubowsky and DesForges [1979] carried out an investigation of this adaptive system using a linearized model. A stability analysis of the closed-loop adaptive system is difficult, and the adaptability of the controller can become questionable if the interaction forces among the various joints are severe.

5.8.2 Adaptive Control Using an Autoregressive Model

Koivo and Guo [1983] proposed an adaptive, self-tuning controller using an autoregressive model to fit the input-output data from the manipulator.
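Before developing the autoregressive formulation, the following sketch illustrates the joint-i reference model of Eqs. (5.8-1) and (5.8-2) just described. The natural frequency, damping ratio, sampling step, and step reference are all assumed values; the point is only that the designer-chosen model defines the target response y_i(t) that the adaptation loop asks the arm's response x_i(t) to follow.

```python
# Sketch of the joint-i reference model of Eqs. (5.8-1) and (5.8-2),
#   a_i y'' + b_i y' + y = r,  a_i = 1/omega_n^2,  b_i = 2*zeta/omega_n,
# integrated with forward Euler.  Parameter values are illustrative.
import numpy as np

omega_n, zeta = 8.0, 0.9
a_i, b_i = 1.0 / omega_n**2, 2.0 * zeta / omega_n

dt, T = 1e-3, 2.0
r = 1.0                                   # step reference (desired joint angle)
y, ydot = 0.0, 0.0                        # reference-model state
trace = []
for _ in range(int(T / dt)):
    yddot = (r - y - b_i * ydot) / a_i    # from a_i y'' + b_i y' + y = r
    ydot += dt * yddot
    y += dt * ydot
    trace.append(y)

print(trace[-1])   # ~1.0: the model settles at the reference with the chosen
                   # damping and natural frequency; the adaptation loop then
                   # adjusts the arm's feedback gains so x_i(t) tracks y_i(t).
```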
The fact that this control approach is not dependent on a complex mathemati- cal model is one of its major advantages.8-1)]: Ji(ei) = '/z(k2k.8-8) are solved to yield ui(t) and wi(t) and their derivatives for Eqs. are selected from stability considerations to obtain stable system behavior. a°. .CONTROL OF ROBOT MANIPULATORS 247 Manipulator Figure 5... .yi(k." and b.')T (5.8-13) .8-12) dii(N) = 6ii(N .14 Adaptive control with autoregressive model. b. . ." are determined so as to obtain the best leastsquares fit of the measured input-output data pairs..1) + Pi(N)Oi(N . ...1)0i(N .1).m) + b.8-10) EN(ai) = E N + 1 k=0e?(k) where N is the number of measurements.1)] (5. . can be found as (5. pairs (ui.ui(k-1). (k . be the ith parameter vector: ai = (a°. ei(k) is the modeling error which is assumed to be white gaussian noise with zero mean and independent of ui and yi(k .`O 1 N (5.yi(k-n). Let a.m) ] + a.1)[yi(N) .. a recursive least-squares estimation of a.(k .° + ei(k) (5.m).°.8-11) and let 1'. These parameters can be obtained by minimizing the following criterion: .ny.ti pairs as closely as possible: /7 yi(k) a.8-9) where ai° is a constant forcing term. bi' . The parameters a. b. yi) may be described by an autoregressive model which match these . m > 1. . ail .ui(k-n) ]T Then."ui(k .diT(N .1) be the vector of the input-output pairs: iki(k-1)=[1. 8-17) J where a.ii(k) and yi is a user-defined nonnegative weighting factor. In summary. 19851 proposed an adaptive control strategy which tracks a desired time-based manipulator trajectory as closely rt. to fit the input-output data from the manipulator.1)i/i7(N .8-16) where E[ ] represents an expectation operation conditioned on i.1) µi + &7(N . the model can be represented by: yi(k) = aiTOi(k .ya(k + 2) m=2 NJ= " (5.m) .y.8-17)] to servo the manipulator.248 ROBOTICS: CONTROL.".1)Pi(N .8-15) In order to track the trajectory set points. VISION.8-13) and (5. SENSING. (5.8-9)] The recursive least-squares identification scheme [Eqs. The optimal control that minimizes the above performance criterion is found to be: ui(k + 1) -bil(k) [bi1(k)]2 + 'Yi 17 ai°(k) + a11(k)[6L[T'i(k)] + E d'?'(k)y'i(k + 2 . 5. (5.1) (5. Pi is a (2n + 1) x (2n + 1) symmetric matrix.8-14)] is used to estimate the parameters which are used in the optimal control [Eq. and the hat notation is used to indicate an estimate of the parameters.m) m=2 + E b"(k)ui(k + 2 . (5. a performance criterion for joint i is defined as Jk(u) = E{[yi(k + 2) .1)oi(N . Lee and Chung [1984.8. .1) + ei(k) (5. (5.".3 Adaptive Perturbation Control Based on perturbation theory. this adaptive control uses an autoregressive model [Eq.1)Pi(N . b.8-13) and (5.8-14) where 0 < µi < 1 is a "forgetting" factor which provides an exponential weighting of past data in the estimation algorithm and by which the algorithm allows a slow drift of the parameters.1)>/ii(N .i(k + 2) ]2 + yiui2(k + 1)10i(k)} CID (5. and a" are the estimates of the parameters from Eqs. Using the above equations to compute the estimates of the autoregressive model.8-14). AND INTELLIGENCE with Pi (N) = 1 I'i'i Pi(N . ti . The parameters and the feedback gains of the linearized system are updated and adjusted in each sampling period to obtain the necessary control effort.. g[x(t)]} is asymptotically stable and tracks a desired trajectory as closely as possible over a wide range of payloads for all '.4-4): xn (t) = f[xn(t). (5. 
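The least-squares fit of the autoregressive model of Eq. (5.8-9) can be illustrated with a small batch computation (the text's scheme is recursive, but the batch form of the error criterion makes the idea plain). The toy data below are generated from an assumed second-order process with a constant forcing term; the regressor and parameter ordering are my own choice for the sketch.

```python
# Sketch of fitting the autoregressive model of Eq. (5.8-9) to joint
# input-output data by ordinary least squares (the batch counterpart of the
# criterion E_N).  The "data" come from a toy second-order process.
import numpy as np

rng = np.random.default_rng(3)
n = 2                                        # model order (assumed)
N = 500
u = rng.standard_normal(N)                   # joint input sequence (torque)
y = np.zeros(N)
for k in range(2, N):                        # toy plant generating the data
    y[k] = (0.1 + 1.2 * y[k-1] - 0.5 * y[k-2]
            + 0.8 * u[k-1] + 0.3 * u[k-2]
            + 0.01 * rng.standard_normal())

# regressor psi(k-1) = [1, y(k-1), ..., y(k-n), u(k-1), ..., u(k-n)]
rows, targets = [], []
for k in range(n, N):
    rows.append(np.r_[1.0, y[k-n:k][::-1], u[k-n:k][::-1]])
    targets.append(y[k])
Psi, Y = np.asarray(rows), np.asarray(targets)

theta, *_ = np.linalg.lstsq(Psi, Y, rcond=None)
print(theta)   # ~[0.1, 1.2, -0.5, 0.8, 0.3]: [a0, a1, a2, b1, b2] recovered
```

In the self-tuning controller these estimates would be refreshed recursively at each sampling period and then plugged into the one-step-ahead control law.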
The highly coupled nonlinear dynamic equations of a manipulator are then linearized about the planned manipulator trajectory to obtain the linearized perturbation system. un(t)] (5. and the corresponding nominal torques u.4-4). The feedback component computes the perturbation torques which reduce the position and velocity errors of the manipulator to zero along the nominal trajectory. A one-step optimal control law is designed to control the linearized perturbation system about the nominal trajectory. The nominal trajectory is specified by an interpolated joint trajectory whose angular position. real-time.CONTROL OF ROBOT MANIPULATORS 249 linear system about a nominal trajectory. With this formulation. (5. The adaptive control discussed in this section is based on linearized perturbation equations in the vicinity of a nominal trajectory. We need to derive appropriate linearized perturbation equations suitable for developing the feedback controller which computes perturbation joint torques to reduce position and velocity errors along the nominal trajectory.4-4)] are known from the planned trajectory.y . both and satisfy Eq.+° times. and angular acceleration are known at every sampling instant.8-18) 'L7 as possible for all times over a wide range of manipulator motion and payloads. the control problem is to find a feedback control law u(t) = g[x(t)] such that the closed loop control system i(t) = f{x(t). recursive. (5. The adaptive control is based on the linearized perturbation equations about the referenced trajectory. (t) of the system [Eq. This adaptive control strategy reduces the manipulator control problem from nonlinear control to controlling a vii O'Q r0) . angular velocity. the feedforward component computes the nominal torques which compensate all the interaction forces between the various joints along the nominal trajectory. An efficient.U+ . The total torques applied to the joint actuators then consist of the nominal torques computed from the NewtonEuler equations of motion and the perturbation torques computed from the one-step optimal control law of the linearized system. The L-E equations of motion of an n-link manipulator can be expressed in state space representation as in Eq. Then. leastsquares identification scheme is used to identify the system parameters in the perturbation equations.. Using the Newton-Euler equations of motion as inverse dynamics of the manipulator. Suppose that the nominal states x. Adaptive perturbation control differs from the above adaptive schemes in the sense that it takes all the interactions among the various joints into consideration. The controlled system is characterized by feedforward and feedback components which can be computed separately and simultaneously.(t) are also known from the computations of the joint torques using the N-E equations of motion. . it reduces a nonlinear control problem to a linear control problem about a nominal trajectory.. the computations of the nominal.. the feedforward component computes the corresponding nominal torques u. subtracting Eq. For implementation on a digital computer. A(t) and B(t). . The feedback component computes the corresponding perturbation torques bu(t) which provide control effort to compensate for small deviations from the nominal trajectory.8-18) from it. (5. (5. First. VISION.. u(kT) is an n-dimensional piecewise constant control input vector of u(t) over the time interval between any two consecutive sam- 'O. (5. `O' x[(k + 1)T] = F(kT)x(kT) + G(kT)u(kT) k = 0. 
A control block diagram of the method is shown in Fig. . adaptive control techniques can be easily implemented using present day low-cost microprocessors. vary slowly with time. (5. second.8-19) = A(t) 6x(t) + B(t) 6u(t) COD where V. the manipulator control problem is reduced to determining bu(t). 1. the associated linearized perturbation model for this control system can be expressed as bx (t) = V f1 6X(t) + bu(t) (5. of Eq. . cjd(t). Because of this parallel computational structure. However. The computation of the perturbation torques is based on a one-step optimal control law. and qd(t). parameter identification techniques must be used to identify the unknown elements in A(t) and B(t)..and perturbation torques can be performed separately and simultaneously. u(t)] evaluated at x.8-19) needs to be discretized to obtain an appropriate discrete linear equations for parameter identification: `CS 'LS tai . 6x(t) = x(t) and 6u(t) = u(t) The system parameters. Given the planned trajectory set points qd(t).8-20) where T is the sampling period.. As a result of this formulation. Because of the complexity of the manipulator equations of motion.15.(t) and respectively.250 ROBOTICS: CONTROL.(t) from the N-E equations of motion. which drives 6x(t) to zero at all times along the nominal trajectory. The main advantages of this formulation are twofold. Thus.8-19) be known at all times.8-19) depend on the instantaneous manipulator position and velocity along the nominal trajectory and thus. 5. AND INTELLIGENCE Using the Taylor series expansion on Eq. '+. The overall controlled system is thus characterized by a feedforward component and a feedback component. SENSING.4-4) about the nominal trajectory. it is extremely difficult to find the elements of A(t) and B(t) explicitly. Eq.fl and are the jacobian matrices of f[x(t). (5. (5. and assuming that the higher order terms are negligi- ble. the design of a feedback control law for the perturbation equations requires that the system parameters of Eq. P 000 x(k) F (k). instrumental variable. t)B(t)u(t)dt to (5.8-20) are measurable. kT] and (5. to) is the state-transition matrix of the system.8-22) (5. Due to its simplicity and ease of application. (2) measurement noise is negligible. cross correlation. F(kT) and G(kT) are. we shall drop the sampling period T from the rest of the equations for clarity and simplicity. such as the methods of least squares. R- One-step optimal controller Recursive least square F(O). pling instants for kT < t < (k + 1) T. a recursive real-time least-squares parameter identification scheme is selected here for identifying the system parameters in F(k) and G(k).8-23) G(kT)u(kT) = S(kT+l)Tr [(k + 1)T.8-21) and r(kT. 2n x 2n and 2n x n matrices and are given by F(kT) = P[(k + 1)T.CONTROL OF ROBOT MANIPULATORS 251 Robot link parameters Disturbances I Trajectory planning system Newton-Euler u(k) equations of motion u(k) Robot manipulator Environment i Q. and x(kT) is a 2n-dimensional perturbed state vector which is given by x(kT) = P(kT. P(0). rte-. and stochastic approxiniation. In the parameter identification scheme.15 The adaptive perturbation control. and (3) the state variables x(k) of Eq. (5. a total of 6n2 parameters in the F(kT) and G(kT) matrices need to be identified. respectively. have been applied successfully to the parameter identification problem. gin' "(y 4-i . maximum likelihood. Various identification algorithms. Measurements identification scheme G(0). t]B(t)u(t)dt With this model. G(k) 6x(k) Figure 5. Without confusion. 
to)x(to) + I P(kT. we make the following assumptions: (1) the parameters of the system are slowly time-varying but the variation speed is slower than the adaptation speed. ..8-24) i = 1. SENSING. . x2 (k). p .8-20) can be written as xi(k + 1) = zT(k)01(k) i = 1. 0(k) = f1p(k) .. ..8-20).. Similarly. ..O (k)] (5. gi1(k). gpI(k) .zT(k)Bi(k) i = 1 .. we wish to identify the parameters in each column of 0(k) based on the measurement vector z(k). . 2. often called a residual. u2(k). .8-20): ei(k) = xi(k + 1 ) . (5.fip(k). (5. .8-26) and the states at tht kth instant of time in a 2n-dimensional vector as XT(k) _ [xI(k). fpi(k) 1 g1I(k) . .. uI(k).(k) .... expressed in matrix form. (5. a 2n-dimensional error vector e(k).. 02(k). 2. . .252 ROBOTICS: CONTROL. we have moo BT(k) _ [fr1(k).8-20)] at the kth instant of time in a 3n-dimensional vector as zT(k) = [x1(k).xp(k)] (5. In order to examine the "goodness" of the least-squares estimation algorithm.R.8-25) g1. is included to account for the modeling error and noise in Eq.8-29) Basic least-squares parameter estimation assumes that the unknown parameters are constant values and the solution is based on batch N sets of meas- .. .... fpp(k) _ [01(k). . ... defining the outputs and inputs of the perturbation system [Eq. . as rf. . (5.8-27) we have that the corresponding system equation in Eq. x2(k)..8-28) With this formulation. p (5.. Defining and expressing the ith row of the unknown parameters of the system at the kth instant of time in a 3n-dimensional vector. VISION. 2. we need to rearrange the system equations in a form that is suitable for parameter identification. (5.I (k) .xp(k). gpn(k) where p = 2n. ..gin(k)] (5.. .... AND INTELLIGENCE In order to apply the recursive least-squares identification algorithm to Eq.. (5.. p or.. The above recursive equations indicate that the estimate of the parameters 0.(k) corrected by the term proportional to [x.'' .(k) + y(k)P(k)z(k) [x. . . the hat notation is used to indicate the estimate of the parameters 0. which are weighted equally.(k + 1) = 0. The components of the vector y(k)P(k)z(k) are weighting factors which indicate how the corrections and the previous estimate should be weighted to obtain the new estimate 0. (5.8-32) (5.(k+ 1). least-squares parameter identification algorithm can be found by minimizing an exponentially weighted error criterion which has an effect of placing more weights on the squared errors of the more recent measurements..8-33) and y(k) = [zT(k)P(k)z(k) + p]-I (5.zT(k)0. Unfortunately.8-30) with respect to the unknown parameters vector 0. this algorithm cannot be applied to time-varying parameters.z(k)] is the measurement matrix up to the kth sampling instant.. If p << 1. ei(N)l (5. The term zT(k)0. and P(k) = p[Z(k)ZT(k)]-I is a 3n x 3n symmetric positive definite matrix. a recursive real-time least-squares identification scheme can be obtained for 0. to estimate the unknown parameters. z(2). Furthermore. the solution requires matrix inversion which is computational intensive.(k) are identically distributed and independent with zero mean and variance a2.' .S s. then P(k) can be interpreted as the covariance matrix of the estimate if p is chosen as a2.(k).8-30) e1 (N) = [JN-'ei(l).(k+1) ..y 'f7 . a sequential least-squares identification scheme which updates the unknown parameters at each sampling period based on the new set of measurements at each sampling interval provides an efficient algorithmic solution to the identification problem. 
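Since F(kT) is the state-transition matrix over one sampling period and G(kT) the corresponding integral of the transition matrix against B(t), a standard zero-order-hold discretization gives one way to picture the discrete model of Eq. (5.8-20). The sketch below is illustrative only: the A and B matrices are arbitrary stand-ins for the linearized perturbation model at one instant, and it uses scipy's matrix exponential.

```python
# Zero-order-hold discretization sketch for Eq. (5.8-20): over one sampling
# period T, F = exp(A T) and G = (integral_0^T exp(A t) dt) B.  The A and B
# below are arbitrary stand-ins for the linearized perturbation model.
import numpy as np
from scipy.linalg import expm

T = 1.0 / 60.0                              # 60-Hz sampling period (s)
A = np.array([[0.0, 1.0],
              [-4.0, -1.0]])                # toy 2 x 2 "A(t)" at this instant
B = np.array([[0.0],
              [1.0]])                       # toy "B(t)"

# Van Loan trick: exponentiate the augmented matrix [[A, B], [0, 0]] * T
n, m = A.shape[0], B.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n], M[:n, n:] = A, B
Md = expm(M * T)
F, G = Md[:n, :n], Md[:n, n:]

print(F)
print(G)   # x[(k+1)T] ~= F x[kT] + G u[kT] for piecewise-constant u
```

In the adaptive scheme itself F and G are never formed this way; they are identified on line, which is exactly what motivates the recursive least-squares algorithm that follows.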
.(k + 1) .(k) and the measurement vector z(k). Such a recursive.zT(k)0..8-31) and N > 3n is the number of measurements used to estimate the parameters 01(N). realtime. In order to reduce the number of numerical computations and to track the time- varying parameters ®(k) at each sampling period.(k) is the prediction of the value x. (k) after simple algebraic manipulations: 0.(k + 1) at the (k + 1)th sampling period is equal to the previous estimate 0. that is. and utilizing the matrix inverse lemma.CONTROL OF ROBOT MANIPULATORS 253 urement data.8-34) where 0 < p < 1. JN = E pN-jei2(J) j=I where the error vector is weighted as N (5. Minimizing the error criterion in Eq. where Z(k) _ [ z ( 1 ) . If the errors e..(k)].(k) ] P(k + 1) = P(k) .(k + 1) based on the estimate of the parameters 0. The parameter p is a weighting factor and is commonly used for tracking slowly timevarying parameters by exponentially forgetting the "aged" measurements. a large weighting factor is placed on the more recent sampled data by 00)j v.y(k) P(k)z(k)zT(k)P(k) (5. V-.8-34)] can be started by choosing the initial values of P(0) to be P(0) = a I3n v.8-37) 2 where T is the sampling period.8-35) where a is a large positive scalar and I3 is a 3n x 3n identity matrix.8-38) subject to the constraints of Eq. VISION. (5.8-36) ran [xn(0).90'< p<1. un(0)] J ax T2 2 + [xn(0).8-32) to (5.8-20): (IQ asp J(k) = I/2[xT(k + 1)Qx(k + 1) + uT(k)Ru(k)] t=o where Q is a p x p semipositive definite weighting matrix and R is an n x n positive definite weighting matrix. We can compromise between fast adaptation capabilities and loss of accuracy in parameter identification by adjusting the weighting factor p. In most applications for tracking slowly time-varying parameters. at the same time.8-38) . the above identification scheme [Eqs.. If p = 1. (5.8-20) is well known fin (5. With the determination of the parameters in F(k) and G(k).8-38) indicates that the objective of the optimal control is to drive the position and velocity errors of the manipulator to zero along the nominal trajectory in a coordinated position and rate control per interval step while. The initial estimate of the unknown parameters F(k) and G(k) can be approximated by the following equations: 2 F(0)-I2n+ f[x. The optimal control solution which minimizes the functional in Eq. au [xn(0). un(0)] au [xn(0). (5. Finally. un(0)] (5. un(0)]1T+ f _Lf 8x G(0) = un(0)]1 2 (5. AND INTELLIGENCE rapidly weighing out previous samples. proper control laws can be designed to obtain the required correction torques to reduce the position and velocity errors of the manipulator along a nominal trajectory.z(0). This can be done by finding an optimal control u*(k) which minimizes the performance index J(k) while satisfying the constraints of Eq. s.254 ROBOTICS: CONTROL. (5./ (5.. (5. attaching a cost to the use of control effort. p is usually chosen to be v. accuracy in tracking the time- varying parameters will be lost due to the truncation of the measured data sequences.0. SENSING. un(0)] x sic T+ ax [xn(0). 0. The one-step performance index in Eq. 3-65)].1985]) to evaluate and compare the performance of the adaptive controller with the controller [Eq. Although the weighting factor p can be adjusted for each ith parameter vector O (k) as desired.8-39) p. The computational requirements of the adaptive perturbation control are tabulated in Table 5. (5. If we assume that for each ADDF and MULF instruction. '77 'J' 'T1 Table 5.8-34). [zT(k)P(k)z(k) + p] gives a scalar. (5. 
The study was carried out for various loading conditions along a given trajectory.n2 .8-39) o00 2^C do not phi require complex computations.-.1 8n3 .-. which is basically a proportional plus derivative control (PD controller).G.n + 18 8n3 + 29n2 + 2n + 17 W." X000 . since P(k) is a symmetric positive definite matrix.. such adjustments are not desirable.1 Computations of the adaptive controller Adaptive controller Multiplications 4-.1.8-34)] at the kth sampling instant.8-34) and Eq. In Eq..' . where F(k) and G(k) are the system parameters obtained from the identification algorithm [Eqs.. an ADDF (floating point addition) instruction required 5.8-32) to (5.CONTROL OF ROBOT MANIPULATORS 255 and is found to be (Saridis and Lobbia [1972]) u*(k) = -[R + GT(k)QG(k)] . 'CS .17µs.24 30n2 + 5n + 1 8n3 + 2n2 + 39 8n3 + 32n2 + 5n + 40 103n .IGT(k) Q F(k) x(k) (5.. then the adaptive perturbation control requires approximately 7. Moreover. only the upper diagonal matrix of P(k) needs to be computed.. The identification and control algorithms in Eqs.5 ms to compute the necessary joint torques to servo the first three joints of a PUMA robot arm for a trajectory set point. (5. we need to fetch data from the core memory twice and the memory cycle time is 450 ns. L]. this requires excessive computations in the P(k + 1) matrix. Additions Newton-Euler egtiations of motion Least-squares identification algorithm Control algorithm Total 000 1l7n . A computer simulation study of a three joint PUMA manipulator was conducted (Lee and Chung [1984.8-32) to (5.21 30n2 + 3n . (5.' . P(k + 1) is computed only once at each sampling time using the same weighting factor p. 'a. so its inversion is trivial. Based on the specifications of a DEC PDP 11 / 45 computer. (5. The performances of the PD and adaptive controllers are compared and evaluated for three different loading . For real-time robot arm control. The combined identification and control algorithm can be computed in 0(n3) time.17µs and a MULF (floating point multiply) instruction requires 7. 5.34 0.values.36 0. . The resolved motion adaptive control is performed at the hand level and is based on the linearized perturbation system along a desired time-based hand trajectory.000 0.19 2.328 1.020 0. Additional details of the simulation result can be found in Lee and Chung [1984.CO con .025 0.045 0. In each case. r. error Final position (degrees) (mm) error (degrees) (degrees) error (degrees) (mm) 0.014 0. Plots of angular position errors for the above cases for the adaptive control are shown in Figs..360 0.23 5.256 ROBOTICS.113 0.082 000 0.054 0.16 to 5. AND INTELLIGENCE Table 5.83 1. 5.020 0.71 No-load and 10% error 1 2 In inertia tensor 3 i/2 max.098 0.078 0. error Max.30 0.002 0.121 and 10% error 2 In inertia tensor 3 Max.8.14 0.8. p.069 0.28 0.4 Resolved Motion Adaptive Control The adaptive control strategy of Sec.53 0.3 in the joint variable space can be extended to control the manipulator in cartesian coordinates under various loading conditions by adopting the ideas of resolved motion rate and acceleration controls.077 0.+ a11 r>' .245 0..86 2. load 1 2.066 1. 5. SENSING.480 0. Similar to the previous adaptive control.039 0.11 2.55 1. For all the above cases.185 0.004 0. (2) half of maximum load and 10 percent error in inertia tensor.20 and 10% error 2 0. a 10 percent error in inertia matrices means f 10 percent error about its measured inertial.. 
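The one-step optimal control law of Eq. (5.8-39) is a simple matrix computation once F(k) and G(k) are available from the identifier. The sketch below uses small toy matrices in place of the identified parameters; the weighting matrices Q and R and the perturbed state are assumed values chosen only to show the calculation.

```python
# Sketch of the one-step optimal control law of Eq. (5.8-39),
#   u*(k) = -[R + G^T Q G]^{-1} G^T Q F x(k),
# using small toy matrices in place of the identified F(k), G(k).
import numpy as np

F = np.array([[1.0, 1.0 / 60.0],
              [-0.05, 0.98]])              # identified state matrix (toy)
G = np.array([[0.0],
              [0.01]])                     # identified input matrix (toy)
Q = np.diag([100.0, 1.0])                  # position/velocity error weights
R = np.array([[0.1]])                      # control-effort weight

K = np.linalg.solve(R + G.T @ Q @ G, G.T @ Q @ F)   # one-step feedback gain
dx = np.array([0.02, -0.1])                # perturbed state (position, rate)
du = -K @ dx                               # perturbation torque for this step
print(K, du)
```

Because only a one-step cost is minimized, no Riccati recursion is needed; the gain is recomputed at each sampling period from the freshly identified parameters, which is what keeps the computation within the budget tabulated in Table 5.1.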
The resolved motion adaptive control differs from the resolved motion acceleration control by minimizing the position/orientation and angular and linear velocities of the manipulator hand along the hand coordinate axes instead of position and orientation errors.145 4. and (3) maximum load (5 lb) and 10 percent error in inertia tensor. load 1 0.. error Max.121 0.065 0. error Final position Max.041 0.069 0.019 In inertia tensor 3 conditions and the results are tabulated in Table 5. VISION.N.607 3.050 0. The feedforward component resolves the specified .78 1.2: (1) no-load and 10 percent error in inertia tensor.58 0.18.096 0.023 000 0.147 0.089 0. CAD C'i 'a-. the adaptive controller shows better performance than the PD controller with constant feedback gains both in trajectory tracking and the final position errors.2 Comparisons of the PD and adaptive controllers PD controller Trajectory tracking Adaptive controller Trajectory tracking Various loading conditions Joint Max.CONTROL.032 0.57 0. 1985].22 0. the controlled system is characterized by feedforward and feedback components which can be computed separately and simultaneously. 0198 0.000 1 I I I 0.0409 -0.000 Time (s) Figure 5.17 Joint 2 position error under various loads. Joint 2 Position error (deg) Figure 5.2000 0.4000 06000 0 8000 1.0046 C -0.0257 -0.16 Joint I position error under various loads. .CONTROL OF ROBOT MANIPULATORS 257 Joint I 0.05601 0. 2- CD" CDD .0360 0 0142 -0. 4)N-'(q) v(t) 0(t) + N(q)q(t) (5. -'velocities. (5.7-1) to (5.0075 -0. The acceleration of the manipulator has been obtained previously in Eq. and accelerations of the hand into a set of values of joint positions. A recursive least-squares identification scheme is again used to perform on-line parameter identification of the linearized . --. Since D(q) is always nonsingular. VISION. positions.0796 0. system.8-40) In order to include the dynamics of the manipulator into the above kinematics equation [Eq.0578 Position error (deg) 0. we need to use the L-E equations of motion given in Eq.7-14). velocities.6000 I 0 8000 Time (s) Figure 5.. (5.18 Joint 3 position error under various loads. the equations of motion of the manipulator in cartesian coordinates can be easily obtained.2-26). (5.02931 0 000 1 1 0.^1.258 ROBOTICS: CONTROL. q(t) can be obtained from Eq.2000 0 4000 VI 0. . AND INTELLIGENCE 0. The feedback component computes the perturbation joint torques which reduce the manipulator hand position and velocity errors along the nominal hand trajectory. Using the kinematic relationship between the joint coordinates and the cartesian coordinates derived previously in Eqs. (3.7-13) and is repeated here for convenience: = N(q. and accelerations from which the nominal joint torques are computed using the Newton-Euler equations of motion to compensate for all the `.8-40)].O+ interaction forces among the various joints. (3. SENSING. 8-42b) D-1(q) A III E(q) g r E11(q) L E21(q) E12(q) E22(q) h1(q.8-44a) (5. 4)K12(q)+N22(q.8-41). 4) h2(q. (5.8-43b) (5. and T(t) into 3 x 1 submatrices: N(q) A o N11(q) N21(q) all N12(q) N22(q) (5. 4)K22(q) N21(q. 4) Combining Eqs. N-1(q). and using Eqs.8-40) to obtain the accelerations of the manipulator hand: (t) St(t) v(t) ICI v(t) -N(q. 4)K12(q)+N12(q. 4) K22(q) 0 0 0 x 0 N11(q)E11(q)+N12(q)E21(q) N21(q)E11(q)+N22(q)E21(q) N11(q)E12(q)+N12(q)E22(q) N21(q)E12(q)+N22(q)E22(q) (continued on next page) x . 4). 4 )K11(q)+N22(q.8-41) For convenience.8-44). 9)-c(q)] (5. (5. (5.8-42a) N-' (q) K(q) K11(q) K21(q) K12(q) K22(q) (5. 
4)N-1(q) +N(q)D-1(q)[T(t)-h(q.8-43a) (5. c(q).CONTROL OF ROBOT MANIPULATORS 259 26) and substituted into Eq. q )K11(q)+N12(q.8-44b) [0 0 0 0 13 0 S(4) 0 0 0 N1 1(q. 4)K21(q) N11(q. we can obtain the state equations of the manipulator in cartesian coordinates: p(t) 4(t) v(t) 0(t) all (5. (5. let us partition N(q).7-4).7-8).8-42) to (5. and (5. and D-1(q) into 3 x 3 submatrices and h(q. 4)K21(q) 0 0 N2 1(q. 4)-ci(q)+T1(t) x (5. . . . u) = secx5 (xlo cosx5 sinx6 .`3 +-+ -r7 . Equation (5. VX.8-48) xI (t) = fI (x.8-46) VT fjT)T . x2.x12)T (5. 4)x(t)+bi+6(q)X(q. a. (PX. . W. . AND INTELLIGENCE -hi(q. It is noted that the leftmost and middle vectors are 12 x 1. u) _ -secx5(x10cosx6 + x11 sinx6) x5 (t) = f5 (x. SENSING.260 ROBOTICS: CONTROL. nonlinear vector-valued function. )T (pT and the input torque vector as U(t) all (TI . VISION. 4)-c2(q)+T2(t) where 0 is a 3 x 3 zero matrix. and n = 6 is the number of degrees of freedom of the manipulator. u(t)] (5. Pz. u) = x8(t) x3 (t) = f3 (x. . . Defining the state vector for the manipulator hand as X(t) A CAD (xI. py. a. (5. the right matrix is 12 x 6. u(t) is an n-dimensional vector. y. T6)T (ttI .x11 cosx5 cosx6 ) (5.8-45) can be expressed in state space representation as: is a where x(t) is a 2n-dimensional vector.8-45) -h2(q.8-49) 4(t) = f6 (x. WX. 2n x 1 continuously differentiable. .8-47) X(t) = f[x(t). 4)+bi+6(q)u(t) . u) = -secx5 (x10 sinx5 cosx6 +x11 sinx5 sinx6 +x12 cosx5 ) xi+6(t) =fi+6(x. Equation (5.8-45) represents the state equations of the manipulator and will be used to derive an adaptive control scheme in cartesian coordinates. and the rightmost vector is 6 x 1. Eq. v} vZ. . u) = x7 (t) X2(t) = f2(x. u) = x9 (t) z4(t) = f4(x. . U6 )T (5. u) =gi+6(q. the center left matrix is 12 x 12. Wy.8-48) can be expressed as . Sli(t).8-49) describes the complete manipulator dynamics in cartesian coordi- nates.(t). '-' 0'° . CAD C.I(q) N21(q)Eii(q) + N22(q)E21(q) and Nti(q)E12(q) + N12(q)E'2(q) N21(q)E12(q) + N22(q)E22(q) -hl(q. velocities.C. (5. 4)Ki2(q)+ N22(q.8-39)]. 4)K 21(q) 0 0 N21(q..c2(q) Equation (5. The feedback component computes the perturbation joint torques 6u(t) the same way as in Eq. (5.8-34) and Eq. using the recursive least-squares ..8-32) to (5. The overall resolved motion adaptive control system is again characterized by a feedforward component and a feedback component. and Sli(t) are resolved hand trajectory set points p`t(t). Such a formulation has the advantage of employing parallel schemes in computing these components. 4)K11(q) +N22(q. 5. Again. 4) .U. 4)K11(q) +N12(q. 4) . 4)K22(q) - and bi+6(q) is the (i + 6)th row of the matrix: 0 0 0 0 Ntt(q)E11(q) + N12(q)E. .U+ arc identification scheme in Eqs. A feasibility study of implementing the adaptive controller based on a 60-Hz sampling frequency and using present-day low-cost microprocessors can be conducted by looking at the computational requirements in terms of mathematical mulCAD . 4)K 21(q) N11(q. D) 0 0 0 N11(q. 4) = -h2(q.8-32) to (5..6 and gi+6(q. and the control problem is to find a feedback control law u(t) = g[x(t)] to minimize the manipulator hand error along the desired hand trajectory over a wide range of payloads.8-39).8-34). into a set of values of desired joint positions. (5. . The resolved motion CD- crow adaptive control block diagram is shown in Fig. vd(t). The determination of the feedback control law for the linearized system is identical to the one in the joint coordinates [Eqs. 4) is the (i + 6)th row of the matrix: 00 0 0 13 0 S(.ci(q) X(q..19. (5. 
4)K22(q) N21(q. (2) the desired joint torques along the hand trajectory are computed from the NewtonEuler equations of motion using the computed sets of values of joint positions.CONTROL OF ROBOT MANIPULATORS 261 where i = 1. velocities.. perturbation theory is used and Taylor series expansion is applied to Eq. These computed torques constitute the nominal torque values u. ate) . The feedforward component computes the desired joint torques as follows: (1) The vd(t). (5.D. and accelerations.8-49) to obtain the associated linearized system and to design a feedback control law about the desired hand trajectory. 'fl . and accelerations. 4)K12(q)+ N12(q. . least square identification scheme G ( O ). R One-step optimal contr oller Recursive 'i7 262 I vdrr) SZ"t(t) qd [N-t (q d ) ] Robot link parameters Hand trajectory 4"t(r) qd Newton-Euler planning [N . ( [N(q) ] P p(r)1 p d(t) F( k ).'(q)] v t L2 equat i ons un(k) u(k) (t)i of motion 6u(k) Robot manipulator Kinematics routine + [N(q)Iq(t) [p t(t) Inverse d c:. G ( k) 6x(k) p(t) 1H(t)J IA(t)J x d(t) _ et (r) x(t) = m(1) v(t) Cd(r) n(t)! Figure 5.L0. F(O) .19 The resolved motion adaptive control.1(t) ki nemati cs routine Q. P O). and a memory fetch or store requires 0. We assume that multiprocessors are available for parallel computation of the controller. We anticipate that faster microprocessors. Finally. and a memory fetch or store requires 450 ns. will be available in a few years. and a memory fetch or store requires 9µs. the proposed controller can be computed in about 18 ms which translates to a sampling frequency of approximately 55 Hz..32 As. an integer multiply requires 3.3. Most of the joint motion and resolved motion control methods discussed servo the arm at the hand or the joint level and emphasize nonlinear compensations of the coupling forces among the various joints. an addition requires 17 As. the PDP 11/45 is a uniprocessor machine and the parallel computation assumption is not valid. Assuming that two memory fetches are required for each multiplication and addition operation.6 Its. but suitable reference models are difficult to choose and it is difficult to establish any stability analysis of 0 ^L' . which will be able to compute the proposed resolved motion adaptive controller within 10 ms.24 ms which is still not fast enough for closing the servo loop. The control techniques are discussed in joint motion control. It requires about 3348 multiplications and 3118 additions for a six joint manipulator. Similarly. The model-referenced adaptive control is easy to implement.3 As. an addition requires 300 ns. It requires a total of 1386 multiplications and 988 additions for a six joint manipulator.7 in about 26. """ 5.t CD" speed for adaptive control of a manipulator. an addition requires 0. the resolved motion adaptive control requires a total of 3348 multiplications and 3118 additions in each sampling period. an integer multiply requires 5.5 that a minimum of 16 msec is required if the sampling frequency is 60 Hz). looking at the specification sheet of a Motorola MC68000 microproces- sor.. 5. This exercise should give the reader an idea of the required processing .96 As. an integer multiply requires 19 µs. (Recall from Sec. Since the feedforward and feedback components can be computed in parallel. We have also discussed various adaptive control strategies.9 CONCLUDING REMARKS We have reviewed various "robot manipulator control methods. looking at the specification sheet of a PDP 11/45 computer. 
They vary from a simple servomechanism to advanced control schemes such as adaptive control with an identification algorithm. and adaptive control. the proposed controller can be computed C/1 '-t p. The feedforward component which computes the nominal joint torques along a desired hand trajectory can be computed serially in four separate stages..CONTROL OF ROBOT MANIPULATORS 263 tiplication and addition operations. However. The feedback control component which computes the perturbation joint torques can be conveniently computed serially in three separate stages. the proposed controller can be computed in about 233 ms which is not fast enough for closing the servo loop. Computational requirements in terms of multiplications and additions for the adaptive controller for a n -joint manipulator are tabulated in Table 5. Based on the specification sheet of an INTEL 8087 microprocessor. resolved motion control. .3. Adaptive control using perturbation theory may be more appropriate for various manipulators because it takes all the interaction forces between the joints into consideration.fl F.1'/20 (1233) stage 3 Compute adaptive controller 8113+4n2+n+ 1 (1879) 8n3 -n (1722) Total feedback computations 8n3+38n2+37n+30 (3348) 8113+35112n2+ 17'/v1+7 (3118) Total 8n3+38n2+37n+30 (3348) 8n3+35'/2112+17/2n+7 (3118) mathematical operations i Number inside parentheses indicate computations for n = 6. SENSING. The adaptive perturbation control strategy was found suitable for controlling the manipulator in both the joint coordinates and cartesian coordinates. AND INTELLIGENCE Table 5. Both methods neglect the coupling forces between the joints which may be severe for manipulators with rotary joints.3n (126) stage 4 Compute T 117n . VISION.3 Computations of the resolved motion adaptive control t Adaptive controller stage 1 Number of multiplications (39) Number of additions (32) Compute q' (inverse kinematics) Compute 4`1 stage 2 112+27n+327 (525) n2+ 18n+89 (233) stage 3 Compute qd 4n2 (144) 4n'... the controlled system. .1 a.264 ROBOTICS: CONTROL.24 (678) 103n . Self-tuning adaptive control fits the input-output data of the system with an autoregressive model.21 (597) Total feedforward computations 5n2 + 144n + 342 (1386) 5n22 + 118n + 100 (988) r Compute (p747 0 stage I Compute (vTQT)T (48) (22) 112+27n-21 (177) n'-+ 18n.15 (129) Compute hand errors stage 2 Identification scheme 0 (0) 2n (12) 33nz + 9n + 2 (1244) 34'hn2 . An adaptive perturbation control system is characterized by a feedforward component and a `_" feedback component which can be computed separately and simultaneously in . Koivo and Guo [1983]. If the applied voltage V. Horowitz and Tomizuka [1980]. PROBLEMS 5. REFERENCES Further readings on computed torque control techniques can be found in Paul [1972]. Luh et al. and Lee [1982]. The computations of the adaptive control for a six-link robot arm may be implemented in low-cost microprocessors for controlling in the joint variable space. These adaptive control schemes can be found in Dubowsky and DesForges [1979]. Various researchers have discussed nonlinear decoupled control. Bejczy [1974]. what is the open-loop transfer function OL(s)/E(s) and the closed-loop transfer function Or(s)/OL(s) of the system? 5.(t) is linearly proportional to the position error and to the rate of the output angular position. 1972] who discussed resolved motion rate control. [1984]. 5. both in joint and cartesian coordinates.2. [1980b].1. and Lee and Chang [1986b]. 
while the resolved motion adaptive control cannot be implemented in present-day low-cost microprocessors because they still do not have the required speed to compute the controller parameters for the "standard" 60-Hz sampling frequency.1 Consider the development of a single joint positional controller. Lee and Chung [1984. More general theory in variable structure control can be found in Utkin [1977] and Itkis [1976]. An associated problem relating to control is the investigation of efficient control system architectures for computing the control laws within the required servo time. Orin [1984].3. BCD 'U' mob' Ooh COa) ti. Further readings on resolved motion control can be found in Whitney [1969. Nigam and Lee [1985]. Repeat for a ramp input. and minimum-time 1'3 control with torque constraint is discussed by Bobrow and Dubowsky [1983]. [1984]. are oriented toward this n-b 5'c goal. 5. CAD In order to compensate for the varying parameters of a manipulator and the changing loads that it carries. and Gilbert and Ha [1984]. Freund [1982].. 1985]. Saridis and Lee [1979]. Markiewicz [1973]. The disadvantage of resolved motion control lies in the fact that the inverse jacobian matrix requires intensive computations. . C/) CAD '-C O.. as discussed in Sec. Lee and Lee [1984]. Tarn et al. discuss the steady-state error of the system due to a step input. Minimum-time control can be found in Kahn and Roth [1971]. .CONTROL OF ROBOT MANIPULATORS 265 parallel. [1982]. Horowitz and Tomizuka [1980]. Luh and Lin [1982]. Papers written by Lee et al. Young [1978] discusses the design of a variable structure control for the control of manipulators. [1980b] extended this concept to include resolved acceleration control..-. various adaptive control schemes.2 For the applied voltage used in Prob. have been developed. Hemami and Camana [1976]. Luh et al. including Falb and Wolovich [1967]. and Lee et al. '-h 5.12 Give two main disadvantages of using the adaptive perturbation control. express the equations of motion of this robot arm explicitly in terms of dl's.8 Find the jacobian matrix in the base coordinate frame for the robot in Prob. if the Newton-Euler equations of motion are used to compute the applied joint torques for a 6 degree-of-freedom manipulator with rotary joints. 5. (See Sec.5.7 Design a nonlinear decoupled feedback controller for the robot in Prob. 02)g T1(t) LT2(t) where g is the gravitational constant.. while the actual control on the robot arm is done in discrete time (i. 0°' -a7 I C1 (81.6 can be written in a compact matrix-vector form as: d11(02) d12(82) d12(82) d22 01(t) 82(t) I 612(02)0 +2012(82)8182 . Assuming that D-1(8) exists. what is the required number of multiplications and additions per trajectory set point? 5..) 5.5 The equations of motion of the two-link robot arm in Sec..10 Give two main disadvantages of using the resolved motion acceleration control. 5. (3. 5.e.) 'a+ 5. VISION. s.11 Give two main disadvantages of using the model-referenced adaptive control. Choose an appropriate state variable vector x(t) and a control vector u(t) for this dynamic system.3 In the computed torque control technique. 5. try .2.. Explain the condition under which this practice is valid. 5. arc .. AND INTELLIGENCE 5. 5. 5.5.6. 02)g Lc2(e1. 5. 5.'s in a statespace representation with the chosen state-variable vector and control vector. SENSING.266 ROBOTICS: CONTROL. the analysis is performed in the continuous time.5.012(82)0 t3" .5..6 Design a variable structure controller for the robot in Prob. 3. 
and c.9 Give two main disadvantages of using the resolved motion rate control.4 In the computed torque control technique. (See Appendix B.1's. (See Sec. by a sampleddata system) because we use a digital computer for implementing the controller.) 5. the use of sensing technology to endow machines with a greater degree of intelligence in dealing with their environment is. slip. 267 '-h . the topic of Chaps.1 INTRODUCTION The use of external sensing mechanisms allows a robot to interact with its environ- ment in a flexible manner. Internal state sensors deal with the detection of variables such as arm joint position. The most prominent examples of noncontact sensors measure range. 5. It is r°. 7 and 8. while proximity and touch are associated with the terminal stages of object grasping. on the other hand. Although the latter is by far the most predominant form of operation of present industrial robots. and torque. requires less stringent control mechanisms than preprogrammed machines. Vision sensors and techniques are discussed in detail in Chaps. proximity. External sensing. as discussed in Chap. which are used for robot control. The function of robot sensors may be divided into two principal categories: internal state and external state. trainable system is also adaptable to a much larger variety of tasks. deal with the detection of variables such as range. such as touch. This is in contrast to preprogrammed operation in which a robot is "taught" to perform repetitive tasks via a set of programmed functions. A sensory. 6 to 8. proximity. and force-torque sensing. of interest to note that vision and range sensing generally provide gross guidance information for a manipulator..g. the former class of sensors respond to physical contact. to avoid crushing the object or to prevent it from slipping). and touch. indeed. External state sensors may be further classified as contact or noncontact sensors. proximity. External state sensors. A robot that can "see" and "feel" is easier to train in the performance of complex tasks while. as well as for object identification and handling. Noncontact sensors rely on the response of a detector to variations in acoustic or electromagnetic radiation. an active topic of research and development in the robotics field. and visual properties of an object. thus achieving a degree of universality that ultimately translates into lower production and maintenance costs. Force and torque sensors are used as feedback devices to control manipulation of an object once it has been grasped (e.CHAPTER SIX SENSING Art thou not sensible to feeling as to sight? William Shakespeare 6. The focus of this chapter is on range. at the same time. As their name implies. touch. is used for robot guidance. 1 triangulation One of the simplest methods for measuring range is through triangulation techniques. then it is possible '-' '-r (IQ Figure 6. The above approach yields a point measurement. © IEEE. (Adapted from Jarvis [1983a].2 RANGE SENSING A range sensor measures the distance from a reference point (usually on the sensor itself) to objects in the field of operation of the sensor. If the source-detector arrangement is moved in a fixed plane (up and down and sideways on a plane perpendicular to the paper and containing the baseline in Fig. as discussed in Chap.1 Range sensing by triangulation. its distance D to the illuminated portion of the surface can be calculated from the geometry of Fig. Baseline . while other animals.) .2. 6. 6. 
An object is illuminated by a narrow beam of light which is swept over the surface. AND INTELLIGENCE 6. 6. utilize the "time of flight" concept in which distance estimates are based on the time elapsed between the transmission and return of a sonic pulse. Humans estimate distance by means of stereo visual processing. SENSING.1. Range sensors are used for robot navigation and obstacle avoidance. 7.-+ cps source and detector are known.268 ROBOTICS: CONTROL. to more detailed applications in which the location and general shape characteristics of objects in the work space of a robot are desired. In this section we discuss several range sensing techniques that address these problems.1). 6.1 since the angle of the source with the baseline and the distance B between the . where interest lies in estimating the distance to the closest objects. such as bats..-O . VISION. when the detector sees the light spot. If the detector is focused on a small portion of the surface then.. The sweeping motion is in the plane defined by the line from the object to the detector and the line from the detector to the source. This approach can be easily explained with the aid of Fig. 3b.2 (a) An arrangement of objects scanned by a triangulation ranging device.3b is to position the camera so that every such vertical stripe also appears vertical in the image plane.3. These distances are easily transformed to three-dimensional coordinates by keeping track of the location and orientation of the detector as the objects are scanned. "C3 '-h CAS Cam ''' 'CS . In this way. 6. As illustrated in Fig.2b shows the results in terms of an image whose intensity (darker is closer) is proportional to the range measured from the plane of motion of the source-detector pair. and the sheet of light is perpendicular to the line joining the origin of the light sheet and the center of the camera lens.SENSING 269 Figure 6. 6. the light source and camera are placed at the same height.-. Figure 6. every point along the same column in the image will be known to have the same distance to the reference BCD `u' plane.) to obtain a set of points whose distances from the detector are known.2a shows an arrangement of objects scanned in the manner just explained. Figure 6. For example. 6.2. We call the vertical plane containing this line the reference plane. The stripe pattern is easily analyzed by a computer to obtain range information. An example is shown in Fig. 6.3a. (From Jarvis [1983a]. IEEE. an inflection indicates a change of surface. Specific range values are computed by first calibrating the system. 6. The objective of the arrangement shown in Fig. Clearly. bpi 'C7 .2. One of the most popular light patterns in use today is a sheet of light generated through a cylindrical lens or a narrow slit. the reference plane is perpendicular to the sheet of light.3a) in which every point will have the same perpendicular distance to the reference plane. (b) Corresponding image with intensities proportional to range. One of the simplest arrangements is shown in Fig. which represents a top view of Fig. and a break corresponds to a gap between surfaces. In this arrangement. 6. and any vertical flat surface that intersects the sheet will produce a vertical stripe of light (see Fig.2 Structured Lighting Approach This approach consists of projecting a light pattern onto a set of objects and using the distortion of the pattern to calculate the range. 6. 
the intersection of the sheet with objects in the work space yields a light stripe which is viewed through a television camera displaced a distance B from the light source. 2).1 be the column index of this array. (b) Top view of part (a) showing a specific arrangement which simplifies calibration. and let y = 0.2) con .270 ROBOTICS: CONTROL. 1. VISION. . M . Once these quantities are known. 2. the calibration procedure consists of measuring the distance B between the light source and lens center. 6. As explained below. and then determining the angles a.. Most systems based on the sheet-of-light approach use digital images. .2.2-1) 0=a. Suppose that the image seen by the camera is digitized into an N x M array (see Sec. 7.3b is given by CAD d = XtanO where X is the focal length of the lens and (6. SENSING.3 (a) Range measurement by structured lighting approach. it follows from elementary geometry that d in Fig.-ao (6. and ao. AND INTELLIGENCE (a) Figure 6.. ) The angle GYk made by the projection of an arbitrary stripe is easily obtained by noting that «k = «. (In an image viewed on a monitor.dk (6. k = 0 would correspond to the leftmost column and k = M/2 to the center column.3 (continued) For an M-column digital image.2-3) for 0 <_ k 5 M/2. using Eq.2-5) or. . the distance increment dk between columns is given by dk=kd M/2 = 2kd M (6. ek = tan-1 d(M .2-3).2k) (6.ek (6.2-4) where d tan Bk = . (6.2-6) .SENSING 271 (b) Figure 6. . 6. To determine a.1. from Fig.. so Eqs.. the calibration procedure consists simply of measuring B and determining a.3b that the perpendicular distance Dk between an arbitrary light stripe and the reference plane is given by D (6.2-11) LBJ . VISION.2-9) with k = 0. as indicated above. From the geometry of Fig. we move the surface closer to the reference plane until its light stripe is imaged at y = 0 on the image plane.3b it follows that °. ~^" `'' ac = tan -' CAD '-h 1:3 In order to determine ao.M . and ao. we place a flat vertical surface so that its intersection with the sheet of light is imaged on the center of the image plane (i. Since M and X are fixed parameters. It is important to note that once B. (6. (6. We then physically measure the perpendicular distance D. the column number in the digital image completely determines the distance between the reference plane and all points in the stripe imaged on that column. a.1 and the results are stored in memory.< . 6. We then measure Do and... 1.r ?:. M. For the remaining values of k (i.e.2-8) By comparing Eqs. SENSING. It then follows from Fig.2-7) are identical for the entire range 0 < k < M .1. AND INTELLIGENCE where 0 < k < M12. 6.1). where ak is given either by Eq. the distance associated with every column in the image is computed using Eq. ..2-4) or (6.2-10) . Then.2-7) 8" = tan-' k for M12 < k < (M .. (6. we have ak = ac + where 0k.3b. (6. on the other side of the optical axis).n. The principal advantage of the arrangement just discussed is that it results in a relatively simple range measuring technique. .2-9) for 0 < k < M .2-8) we note that 0k' °k. . and X are known.2- 4) and (6.2-6) and (6. d(2k ..272 ROBOTICS. ao. 2. during normal operation. the distance of any imaged point is obtained simply by deter- V/4 rte-' C== (6. between the surface and the reference plane. CONTROL.e..I ao=tan ' rpo 1 (6.2-7). at y = M12).M) MA (6. This completes the calibration procedure. (6. Once calibration is completed. 
The distance to the surface is given by the simple relationship Q.SENSING 273 mining its column number in the image and addressing the corresponding location in memory. . 7. since light travels at approximately 1 ft/ns.= D = cT/2. The bright areas around the object boundaries represent discontinuity in range determined by postprocessing in a computer. The resulting expressions. (b) Image with intensity proportional to range. Before leaving this section. we point out that it is possible to use the concepts discussed in Sec. 4-.2.4. © IEEE. . Q.e. . would be considerably more complicated and difficult to handle from a computational point of view.25 cm. Figure 6.1 s.4 (a) An arrangement of objects.' "-s (DD CDs -4+ C.3 Time-of-Flight Range Finders In this section we` discuss three methods for determining range based on the timeof-flight concept introduced at the beginning of Sec. Part (a) of this figure shows "C3 'YS p.7r OCR 6. along the same path) from a reflecting surface. where T is the pulse transit time and c is the speed of light. The working range of this device is on the order of 1 to 4 m.--.4 to solve a more general problem in which the light source and camera are placed arbitrarily with respect to each other. 6. 6. 't7 "t7 plished by deflecting the laser light via a rotating mirror. (From Jarvis [1983b]. 6. however..) v.4b is the corresponding sensed array displayed as an image in which the intensity at each point is proportional to the distance between the sensor and the reflecting surface at that point (darker is closer).. while the third is based on ultrasonics.. An example of the output of this system is shown in Fig. Two of the methods utilize a laser. and Fig. A pulsed-laser system described by Jarvis [1983a] produces a two-dimensional array with values proportional to distance. It is of interest to note that.2.. One approach for using a laser to determine range is to measure the time it takes an emitted pulse of light to return coaxially (i. the supporting electronic instrumentation must be capable of 50-ps time resolution in order to achieve a ± 1/4-inch accuracy in range. The two-dimensional scan is accomCAD a collection of three-dimensional objects. with an accuracy of f 0. F>? "L7 gyp. the reflected beam travels a longer path and.. n = 1 . .5. In this case we have that CAD D' = L + (C) CAD 360 (6. and the other travels a distance D out to a reflecting surface. a unique solution can be obtained only if we F Beam splitter Laser Reflecting surface Outgoing beam . a phase shift is introduced between the two beams at the point of measurement. based on measurements of phase shift alone. as illustrated in Fig. Thus. phase shift) between the outgoing and returning beams. If we let D increase.274 ROBOTICS: CONTROL. 6. One of these (called the reference beam) travels a distance L to a phase measuring device. SENSING. (b) Shift between outgoing and returning light waveforms. Suppose that a beam of laser light of wavelength X is split into two beams. VISION.e.2-12) X It is noted that if 0 = 360 ° the two waveforms are again aligned and we cannot differentiate between D' = L and D' = L + nX. Suppose that D = 0.. . Under this condition D' = L and botht the reference and reflected beams arrive simultaneously at the phase measuring device.4.Returning beam 1 Phase measurement (a) x -.5b.. 6.5 (a) Principles of range measurement by phase shift. AND INTELLIGENCE An alternative to pulsed light is to use a continuous-beam laser and measure the delay (i. therefore. .. 2.. 
The total distance traveled by the reflected beam is D' = L + 2D.a k--% N / (b) \\// Figure 6. We illustrate this concept with the aid of Fig. the longer we average. 6. Equation (6. 6.2-12) that D = c3' (6. 632. the smaller the uncertainty will be in the distance estimate.6 Amplitude-modulated waveform. The modulated laser signal is sent out to the target and the returning beam is stripped of the modulating signal. where N is the number of samples averaged. A simple solution to this problem is to modulate the amplitude of the laser light by using a waveform of much higher wavelength. Uncertainties in distance measurements obtained by either technique require averaging the returned signal to reduce the error. The true intensity information obtained with the same device is shown in part (b).5 is impractical for robotic applications. the pulsed-light technique is that the former yields intensity as well as range information (Jarvis [1983a]). and we assume that measurements are statistically independent. If we treat the problem as that of measurement noise being added to a true distance. `=i ""S C'.2-13) still holds. (6.times the standard deviation of the noise.tCD ate) L?G s. . recalling that c = f X.°' . An example of results obtainable with a continuous.SENSING 275 require that 0 < 360 ° or. modulated laser beam scanned by a rotating mirror is shown in Fig. it is difficult to count the c4- '"' Figure 6.) The approach is illustrated in Fig. In other words. An important advantage of the continuous vs. continuous systems require considerably higher power. equivalently.7b. Since the wavelength of laser light is small (e. The basic technique is as before. However. we have by substitution into Eq.8 nm for a helium-neon laser).\rN. that 2D < X. Note that these two images complement each other.ti . then it can be shown that the standard deviation of the average is equal to l/. 6. Since D' = L + 2D. modulating function. Note the much larger wavelength of the . For example. but we are now working in a more practical range of wavelengths. Part (a) of this figure is the range array displayed as an intensity image (brighter is closer).g. which is then compared against the reference to determine phase shift...2-13) which gives distance in terms of phase shift if the wavelength is known. (For example. the method sketched in Fig. a modulating sine wave of frequency f = 10 MHz has a wavelength of 30m. but the reference signal is now the modulating function.6. (From Duda.7a. =D: BCD 6. `". An ultrasonic range finder is another major exponent of the time-of-flight con`T1 cept.) number of objects on top of the desk in Fig. on the other hand. 6. SENSING. 50.Proximity sensors. The construction and operational characteristics of ultrasonic sensors are discussed in further detail in Sec. consisting of 56 pulses at four frequencies. for this reason. and 60 KHz. is transmitted by a transducer 11/z inches in diameter. . while this information is readily available in the range array. 53. In an ultrasonic ranging system manufactured by Polaroid.3. The basic idea is the same as that used with a pulsed laser. it is not possible to determine the distance between the near and far edges of the desk top by examining the intensity image. and Barrett [1979]. VISION. a simple calculation involving the time interval between the outgoing pulse and the return echo yields an estimate of the distance to the reflecting surface. Nitzan. This is a common problem with ultrasonic sensors and. 
they are used primarily for navigation and obstacle avoidance. Techniques for processing this type of information are discussed in Chaps. The beam pattern of this device is around 30 °. a 1- C17 ms chirp.-r CAD i4- . with an accuracy of about 1 inch.3 PROXIMITY SENSING The range sensors discussed in the previous section yield an estimate of the distance between a sensor and a reflecting object. for example. © IEEE. The signal reflected by an object is detected by the same transducer and processed by an amplifier and other circuitry capable of measuring range from approximately 0. a simple task in the intensity image. An ultrasonic chirp is transmitted over a short time period and. since the speed of sound is known for a specified medium. AND INTELLIGENCE Figure 6. which introduces severe limitations in resolution if one wishes to use this device to obtain a range image similar to those discussed earlier in this section. 7 and 8.9 to 35 ft. (b) Intensity image. generally have a binary output which indicates the presence of an object CCs . The mixed frequencies in the chirp are used to reduce signal cancellation. Conversely. 57. 6.7 (a) Range array displayed as an image.276 ROBOTICS: CONTROL. 8 and 6.8a shows a schematic diagram of an inductive sensor which basically consists of a wound coil located next to a permanent magnet packaged in a simple. Typically. 6. as shown in Fig. no current is induced in the coil.3.9. as a ferromagnetic object enters or leaves the field of the magnet. © Society Italiana di Fisica. The principle of operation of these sensors can be explained with the aid of Figs. (Adapted from Canali [1981a]. proximity sensors are used in robotics for near-field work in connection with object grasping or avoidance. (b) Shape of flux lines in the absence of a ferromagnetic body. (a) Coil Maenctic flux (b) lines (C) Figure 6. However. 6. Under static conditions there is no movement of the as) flux lines and.SENSING 277 within a specified distance interval. the resulting change in ate.8b and c. In this section we consider several fundamental approaches to proximity sensing and discuss the basic operational characteristics of these sensors. 6. (c) Shape of flux lines when a ferromagnetic body is brought close to the sensor.1 Inductive Sensors Sensors based on a change of inductance due to the presence of a metallic object are among the most widely used industrial proximity sensors. ors The effect of bringing the sensor in close proximity to a ferromagnetic material causes a change in the position of the flux lines of the permanent magnet.8 (a) An inductive sensor. therefore. rugged housing. Figure 6.) -ti . 9b illustrates the relationship between voltage amplitude and sensor-object distance. boa t'. It is noted from this figure that sensitivity falls off rapidly with increasing distance. © Society Italiana di Fisica.500 Sensor-object distance (mm) (b) Figure 6.y BCD (b) Sensor response as a function of distance.4 I 1 1 0 0. and then switches to high (indicating proximity of an object) when the threshold is exceeded.. Figure 6. The voltage waveform observed at the output of the coil provides an effective means for proximity sensing. The binary output remains low as long as the integral value remains below a specified threshold. and that the sensor is effective only for fractions of a millimeter. SENSING..-.9 (a) Inductive sensor response as a function of speed. (Adapted from Canali [198la]. 
CAD GCD approach for generating a binary signal is to integrate this waveform.278 ROBOTICS: CONTROL. one °'A CAD . .250 0. VISION.9a illustrates how the voltage measured across the coil varies as a function of the speed at which a ferromagnetic material is introduced in the field of the magnet.8 0.750 0. AND INTELLIGENCE +V High velocity Voltage across coil o U 0 U C) Time Cs 0 -V (a) Normalized signal amplitude 0. Figure 6.) the flux lines induces a current pulse whose amplitude and shape are proportional to the rate of change in the flux. . vii CAD ti= Since the sensor requires motion to produce an output waveform. The polarity of the voltage out of the sensor depends on whether the object is entering or leaving the field. This force acts on an axis perpendicular to the plane established by the direction of motion of the charged . This force would act on the electrons.3. and that conventional current flows opposite to electron current.10a). moo '-' oco °0) particle and the direction of the field.. However.. © Society Italiana di Fisica.10 Operation of a Hall-effect sensor in conjunction with a permanent magnet. B is the magnetic field vector. When such a material is brought in close proximity with the device. When used by themselves. Hall-effect sensors can only detect magnetized objects. would be positive at the top.SENSING 279 6.2 Hall-Effect Sensors The reader will recall from elementary physics that the Hall effect relates the voltage between two points in a conducting or semiconducting material to a magnetic field across the material. that a current flows through a doped. (Adapted from Canali [1981a]. the magnetic field weakens at the sensor due to bending of the field lines through the material. When used in this way.. `. in this case. 6..11. 6. for example. . n-type semiconductor which is immersed in a magnetic field. Hall-effect sensors are based on the principle of a Lorentz force which acts on a charged particle traveling through a magnetic field. a Hall-effect device senses a strong magnetic field in the absence of a ferromagnetic metal in the near field (Fig. v is the velocity vector.. and " x " is the vector cross product. they are capable of detecting all ferromagnetic materials.10. Suppose. when used in conjunction with a permanent magnet in a configuration such as the one shown in Fig. 6. 6. 6. Hall-effect sensor (a) (b) Figure 6.11. Recalling that electrons are the majority carriers in n-type materials. as shown in Fig.) v)' cps _"' . as shown in Fig. which would tend to collect at the bottom of the material and thus ''h produce a voltage across it which. the Lorentz force is given by F = q(v x B) where q is the charge. That is. we would have that the force acting on the moving.w. negatively charged particles would have the direction shown in Fig.10b. VISION.. Bringing a ferromagnetic material close to the semiconductor-magnet device would decrease the strength of the magnetic field. f-" TES cue .11 Generation of Hall voltage. In addition.' s. A cavity of dry air is usually placed behind the capacitive element to provide isolation. ultimately. in which case it is normally embedded in a resin to provide sealing and mechanical support. interference.Electron current (q = -) Figure 6. This drop in voltage is the key for sensing proximity with Hall-effect sensors. SENSING. The sensing element is a capacitor composed of a sensitive electrode and a reference electrode. ruggedness. 0. such as silicon.. As their name implies. 
One of the simplest includes the capacitor as part of an 'c7 cud . 6. and immunity to electrical a. thus reducing the Lorentz force and.3 Capacitive Sensors Unlike inductive and Hall-effect sensors which detect only ferromagnetic materials. the use of semiconducting materials allows the construction of electronic circuitry for amplification and detection directly on the sensor itself. The rest of the sensor consists of electronic circuitry which can be included as an integral part of the unit. Binary decisions regarding the presence of an object are made by thresholding the voltage out of the sensor. the voltage across the semiconductor.!- '. AND INTELLIGENCE i -. 6. It is of interest to note that using a semiconductor.12. a metallic disk and ring separated by a dielectric material. {-" C".Convention it cwrent -. These can be. capacitive sensors are potentially capable (with various degrees of sensitivity) of detecting all solid and liquid materials.280 ROBOTICS: CONTROL. The basic components of a capacitive sensor are shown in Fig.3.. for example. these sensors are based on detecting a change in capacitance induced by a surface that is brought near the sensing element. There are a number of electronic approaches for detecting proximity based on a change in capacitance. thus reducing sensor size and cost. has a number of advantages in terms of size. threshold value. -.) (From Canali [1981a].. (From Canali [1981a].12 A capacitive proximity sensor.r 5 l0 1.) 4-. Fisica. r=.13 Response (percent change in capacitance) of a capacitive proximity sensor as a function of distance. Typically. The phase shift is proportional to the change in capacitance and can thus be used as a basic mechanism for proximity detection.5 Distance (nom) Figure 6. © Societa Italiana di Fisica. The start of oscillation is then translated into an output voltage which indicates the presence of an object. A more complicated approach utilizes the capacitive element as part of a circuit which is continuously driven by a reference sinusoidal waveform.SENSING 281 Reference electrode Dry air Container Printed circuit Dialectric G MP /////e-111A Sensitive electrode Sealing resin Figure 6.13 illustrates how capacitance varies as a function of distance for a proximity sensor based on the concepts just discussed. It is of interest to note that sensitivity decreases sharply past a few millimeters. A change in capacitance produces a phase shift between the reference signal and a signal derived from the capacitive element. © Societa Italiana di oscillator circuit designed so that the oscillation starts only when the capacitance of the sensor exceeds a predefined threshold value.. This method provides a binary output whose triggering sensitivity depends on the +. these sensors are dam .. and that the shape of the response curve depends on the material being sensed. . Figure 6. (It is noted that these time intervals are equivalent to specifying distances since the pro=ado Sensor housing Metallic housing Resin Ceramic transducer Acoustic absorber Figure 6. The housing is designed so that it produces a narrow acoustic beam for efficient energy transfer and signal directionality. AND INTELLIGENCE operated in a binary mode so that a change in the capacitance greater than a preset threshold T indicates the presence of an object.14 An ultrasonic proximity sensor. SENSING. The basic element is an electroacoustic transducer. often of the piezoelectric ceramic type. 6. Waveform B shows the output signal as well as the resulting echo signal. 
it also acts as an acoustical impedance matcher. © Elsevier Sequoia. and other environmental factors.282 ROBOTICS: CONTROL. dust. and Ate + At2 the maximum. time interval Atl is the minimum detection time.) . VISION. Since the same transducer is generally used for both transmitting and receiving.3. 6. That is. The pulses shown in C result either upon transmission or reception. 6. Waveform A is the gating signal used to control transmission.4 Ultrasonic Sensors CAD r The response of all the proximity sensors discussed thus far depends strongly on the material being sensed. while changes below the threshold indicate the absence of an object with respect to detection limits established by the value of T.15. whose operation for range detection was introduced briefly at the end of Sec. The resin layer protects the transducer against humidity. fast damping of the acoustic energy is necessary to detect objects at close range. This is accomplished by providing acoustic absorbers. we introduce a time window (waveform D) which in essence establishes the detection capability of the sensor. Figure 6.2. The operation of an ultrasonic proximity sensor is best understood by analyzing the waveforms used for both transmission and detection of the acoustic energy signals. In order to differentiate between pulses corresponding to outgoing and returning energy. and by decoupling the transducer from its housing. In this section we discuss in more detail the construction and operation of these sensors and illustrate their use for proximity sensing.14 shows the structure of a typical ultrasonic transducer used for proximity sensing. (Adapted from Canali [1981b]. This dependence can be reduced considerably by using ultrasonic sensors.3. A typical set of waveforms is shown in Fig. 6. (Adapted from Canali [1981b]. © Elsevier Sequoia.16. In other words. which is reset to low at the end of a transmission pulse in signal A. signal F is set high on the positive edge of a pulse in E and is reset to low when E is low and a pulse occurs in A.3. a surface located anywhere in the volume will produce a reading.5 Optical Proximity Sensors Optical proximity sensors are similar to ultrasonic sensors in the sense that they detect proximity of an object by its influence on a propagating wave as it travels from a transmitter to a receiver. the typical application of the arrangement BCD .15 Waveforms associated with an ultrasonic proximity sensor. F will be high whenever an object is present in the distance interval specified by the parameters of waveform D.16 does not yield a point measurement.SENSING 283 n Driving signal B Echo signal rin C I I J0 I UU_11]r_ I Figure 6. 6.) An echo received while signal D is high produces the signal shown in E. pencillike volume. Although this approach is similar in principle to the triangulation method discussed in Sec. 6. which acts as a transmitter of infrared light. While it is possible to calibrate the intensity of these readings as a function of distance for known object orientations and reflective characteristics. The cones of light formed by focusing the source and detector on the same plane intersect in a long. Finally. That is. This volume defines the field of operation of the sensor since a reflective surface which intersects the volume is illuminated by the source and simultaneously "seen" by the receiver. and a solid-state photodiode which acts as the receiver.) pagation velocity of an acoustic wave is known given the transmission medium.1. 
it is important to note that the detection volume shown in Fig. ors 6.2. In this manner. One of the most common approaches for detecting proximity by optical means is shown in Fig. F is the output of interest in an ultrasonic sensor operating in a binary mode. This sensor consists of a solid-state light-emitting diode (LED). it is also possible to center the hand over the object for grasping and manipulation.16 is in a mode where a binary signal is generated when the received light intensity exceeds a threshold value.284 ROBOTICS: CONTROL.) shown in Fig. as well as to control the force exerted by a manipulator on a given object. AND INTELLIGENCE Light-emitting diode Figure 6. Binary sensors are basically switches which respond to the presence or absence of an object. a switch is placed on the inner surface of each finger of a manipulator hand. binary touch sensors are contact devices. In addition. 'L7 tin 6. This latter use of touch sensing is analogous to what humans do in feeling their way in a totally dark room. 6. In the simplest arrangement.-.+ . they are often mounted on the external surfaces of a manipulator hand to provide control signals useful for guiding the hand throughout the work space. Touch information can be used. such as microswitches.4. By moving the hand over an object and sequentially making contact with its surface.1 Binary Sensors As indicated above. This type of sensing is useful for determining if a part is present between the fingers. for example.4 TOUCH SENSORS Touch sensors are used in robotics to obtain information associated with the contact between a manipulator hand and objects in the workspace. output a signal proportional to a local force. 6. on the other hand. for object location and recognition. (From Rosen and Nitzan [1977]. SENSING.16 Optical proximity sensor. VISION. Touch sensors can be subdivided into two principal categories: binary and analog. These devices are discussed in more detail in the following sections. as illustrated in Fig. Multiple binary touch sensors can be used on the inside surface of each finger to provide further tactile information. Analog sensors. © IEEE. 6. mss 0 .17. '}' During the past few years. The simplest of these devices consists of a spring-loaded rod (Fig.17 A simple robot hand equipped with binary touch sensors. The rotation is then measured continuously using a potentiometer or digitally using a code wheel.4.19. development of tactile sensing arrays capable of yielding touch information over a wider area than that afforded by a single sensor. which shows a robot hand in which the inner surface of each Figure 6. considerable effort has been devoted to the . 6. 6.2 Analog Sensors An analog touch sensor is a compliant device whose output is proportional to a local force. Knowledge of the spring constant yields the force corresponding to a given displacement.SENSING 285 Finger control Figure 6.18 A basic analog touch sensor. 6. The use of these devices is illustrated in Fig.18) which is mechanically linked to a rotating shaft in such a way that the displacement of the rod due to a lateral force results in a proportional rotation of the shaft. The external sensing plates are typically binary devices and have the function described at the end of Sec. ono Cs. Current flows from the common ground to the individual electrodes as a function of compression of the conductive material. Several basic approaches used in the construction of artificial skins are shown in Fig. In the method shown in Fig. 
an object pressing against the surface causes local deformations which are measured as continuous resistance variations. often called artificial skins.20a is based on a "window" concept. finger has been covered with a tactile sensing array. 6. The conductive material is placed above this plane and insulated from the substrate plane. characterized by a conductive material sandwiched between a common ground and an array of electrodes etched on a fiberglass printed-circuit board. The scheme shown in Fig.19 A robot hand equipped with tactile sensing arrays. Although sensing arrays can be formed by using multiple individual sensors.. j14 . SENSING.4.286 ROBOTICS: CONTROL. narrow electrode pairs are placed in the same substrate plane with active electronic circuits using LSI technology. 6. In these devices. one of the most promising approaches to this problem consists of utilizing an array of electrodes in electrical contact with a compliant conductive material (e. graphite-based substances) whose resistance varies as a function of compression. 6. Each electrode consists of a rectangular area (and hence the name window) which defines one touch point. Resistance changes resulting from material compression are measured and interpreted by the active circuits located between the electrode pairs. 6.g.20b long. C1. VISION. The latter are easily transformed into electrical signals whose amplitude is proportional to the force being applied at any given point on the surface of the material. "C3 CAD 4-. CAD °-_ olio 0-6 CCD t-.1. except at the electrodes. AND INTELLIGENCE External sensing plates Figure 6.20. flat electrodes in the base. constitutes one sensing point. 6. Another possible technique is shown in Fig.. Each intersection. The magnitude of the current in each of these elements is proportional to the compression of the material between that element and the element being driven externally.20d requires the use of an anisotropically conductive material. Changes in resistance as a function of material compression are measured by electrically driving the electrodes of one `C3 . Finally.SENSING 287 Conductive material Common ground Sensing electrodes Electrode pair Active circuitry (a) (b) Anisotropically conducting material (arrows show conduction axis) Y electrode X electrode Conductive material (c) (d) Figure 6. the arrangement shown in Fig. The sensor is constructed by using a linear array '-r . 6. The conductive material is placed on top of O-t "C3 array (one at a time) and measuring the current flowing in the elements of the other array. and the conductive material in between. In this approach the conductive material is located between two arrays of thin.20c. Such materials have the property of being electrically conductive in only one direction.s of thin. flexible electrodes that intersect perpendicularly.20 Four approaches for constructing artificial skins (see text). flat. 6.21. consists of a free-moving dimpled ball which deflects a thin rod mounted on the axis of a conductive disk. VISION. illustrated in Fig. 6.288 ROBOTICS: CONTROL. As the force increases so does the contact area. A number of electrical contacts are Object slip Dimpled ball (1' \\ Contacts (16 places) Conductive disk Figure 6. 6. The device. As with the method in Fig. except the one being driven. © AAAS. with the conduction axis perpendicular to the electrodes and separated from them by a mesh so that there is no contact between the material and electrodes in the absence of a force. 
The measurement of tangential motion to determine slip is another important aspect of touch sensing.20c.21 A device for sensing the magnitude and direction of slip. we are basically able to "look" at the contribution of the °-n "'t CAD individual element intersections.) vac . By scanning the receiving array one path at a time. It is noted that touch sensitivity depends on the thickness of the separator. one array is externally driven and the resulting current is measured in the other. SENSING. This often leads to difficulties in interpreting signals resulting from complex touch patterns because of "cross-point" inductions caused by alternate electrical paths. (Adapted from Bejczy [1980]. The methods in Fig. AND INTELLIGENCE this. Before leaving this section. we illus- trate this mode of sensing by describing briefly a method proposed by Bejczy [1980] for sensing both the direction and magnitude of slip. Another method is to ground all paths. resulting in lower resistance. One solution is to place a diode element at each intersection to eliminate current flow through the alternate paths. All the touch sensors discussed thus far deal with measurements of forces normal to the sensor surface. Application of sufficient force results in contact between the material and electrodes.20c and d are based on sequentially driving the elements of one of the arrays. . as discussed in Sec. (0j CAD 0)) C/) .5. In order to reduce hysteresis and increase the accuracy in measurement. and z axes of the force coordinate frame. 6. Since the eight pairs of strain gauges are oriented normal to the x. however. In most applications. As an example. typically aluminum. y. this is only a crude first-order compensation. The differential connection of the strain gauges provides automatic compensation for variations in temperature. 6. the hardware is generally constructed from one solid piece of metal. are mounted between the tip of a robot arm and the end-effector.5 FORCE AND TORQUE SENSING Force and torque sensors are used primarily for measuring the reaction forces developed at the interface between mechanical assemblies. Ball rotation resulting from an object slipping past the ball causes the rod and disk to vibrate at a frequency which is proportional to the speed of the ball. the sensor shown in Fig. This can be done by premultiplying the sensor reading by a sensor calibration matrix. 6. The principal approaches for doing this are joint and wrist sensing. They consist of strain gauges that measure the deflection of the mechanical structure due to external forces. 14. sensing is done simply by measuring the armature current. For a joint driven by a dc motor.5. 6. l A joint sensor measures the cartesian components of force and torque acting on a robot joint and adds them vectorially.G t Another category is pedestal sensing. The analysis of pedestal sensing is quite similar to that used for wrist sensing. which is discussed in detail in this section. light in weight (about 12 oz) and relatively comw2. Wrist sensors. The direction of ball rotation determines which of the contacts touch the disk as it vibrates.SENSING 289 evenly spaced under the disk.2.22 uses eight pairs of semiconductor strain gauges mounted on four deflection barsone gauge on each side of a deflection bar. The characteristics and analysis methodology for this type of sensor are summarized in the following discussion. 
The gauges on the opposite open ends of the deflection bars are wired differentially to a potentiometer circuit whose output voltage is proportional to the force component normal to the plane of the strain gauge.1 Elements of a Wrist Sensor Wrist sensors are small. 0. respectively. sensitive. the principal topic of discussion in this section. the three components of force F and three components of moment M can be determined by properly adding and subtracting the output voltages. pact in design-on the order of 10 cm in total diameter and 3 cm in thickness. with a dynamic range of up to 200 lb. in which strain gauge transducers are installed between the base of a robot and its mounting surface in order to measure the components of force and torque acting on the base. the base is firmly mounted on a solid surface and no provisions are made for pedestal sensing.0 '"A O°° [P' `J' CAD Fit `. However. pulsing the corresponding electrical circuits and thus providing signals that can be analyzed to determine the average direction of the slip. Most wrist force sensors function as transducers for transforming forces and moments exerted at the hand into measurable deflections or displacements at the wrist. The natural frequency of a mechanical device is related to its stiffness. AND INTELLIGENCE Figure 6. minimizing the distance between the hand and the sensor reduces the lever arm for forces applied at the hand.. it is desirable to measure as large a hand force/moment as possible. This ensures that the device will not restrict the movement of the manipulator in a crowded workspace. Furthermore. VISION. Thus. tee' Furthermore. it reduces the magnitude of the deflections of an applied force/moment. It is important that the wrist motions generated by the force sensor do not affect the positioning accuracy of the manipulator.2. it is important to place the sensor as close to the tool as possible to reduce positioning error as a result of the hand rotating through small angles.290 ROBOTICS: CONTROL. In addition. 2. high stiffness ensures that disturbing forces will be quickly ti.5.. High stiffness.3 . Good linearity between the response of force sensing elements and the applied forces/moments permits resolving the forces and moments by simple f74 '"' '-. SENSING. Linearity. damped out to permit accurate readings during short time intervals. the calibration of the force sensor is simplified.. thus. the required performance specifications can be summarized as follows: '~r 1. thus. s. matrix operations.22 Wrist force sensor. 3. With the compact force sensor. 6. Compact design. . It also minimizes collisions between the sensor and the other objects present in the workspace. s-: .. cad G16 CS' r». which may add to the positioning error of the hand. This is discussed in Sec.. ". MZ)T W = raw readings = (WI. 6.5. 6. The wrist force sensor shown in Fig. My. a'+ F = RFW where 0. With reference to Fig. using a simple force-torque balance technique. called the resolved force matrix RF (or sensor calibration matrix). w3 .5-2). .22. .s" In Eq. and that the strain gauges produce readings which vary linearly with respect to changes in their elongation.2 Resolving Forces and Moments Assume that the coupling effects between the gauges are negligible. . FZ. 6.SENSING 291 4. the resolved force vector directed along the force sensor coordinates can be obtained mathematically as t3. 6. FY.5-1) F = (forces. we can obtain the above equation with some of the r. and . If the coupling effects between the gauges are negligible. 
that the wrist force sensor is operating within the elastic range of its material. . Mx. into three orthogonal force and torque components with reference to the force sensor coordinate frame.j 42° (6. It also produces hysteresis effects that do not restore the position measuring devices back to their original readings.22 and summing the forces and moments about the origin of the sensor coordinate frame located at the center of the force sensor. (6. which is postmultiplied by the force measurements to produce the required three orthogonal force and three torque components.22 was designed with these criteria taken into consideration. . r1l r18 RF = r61 . Such a transformation can be realized by specifying a 6 x 8 matrix. moments)T = (Fr.5-2) r68 . w8) T .22 produces eight raw readings which can be resolved by computer software. Internal friction reduces the sensitivity of the force sensing elements because forces have to overcome this friction before a measurable deflection can be produced. the r11 # 0 are the factors required for conversion from the raw reading W (in volts) to force/moment (in newton-meters). then by looking at Fig. . 6. W2. Then the sensor shown in Fig. (6. Low hysteresis and internal friction. Then the calibration matrix RF from Eq.5-4) by (RF)T.5-6) F = [(RF)TRF]-I (RF)TW (6. 6.5. (6. (6. this may produce as much as 5 percent error in the calculation of force resolution.5-7) . based on experimental data. For some force sensors. AND INTELLIGENCE equal to zero. 6. as discussed in Sec. 6.22. (6. (6. The disadvantage of using a wrist force sensor is that it only provides force vectors resolved at the assembly interface for a single contact. SENSING. This "full" matrix is used to calibrate the force sensor.292 ROBOTICS: CONTROL. (6. in practice.3 Sensor Calibration The objective of calibrating the wrist force sensor is to determine all the 48 unknown elements in the resolved force matrix [Eq.5-1) can be found from the pseudoinverse of RF* in Eq.8 (6. The resolved force vector F is used to generate the necessary error actuating control signal for the manipulator. this assumption is not valid and some coupling does exist. it is usually necessary to replace the resolved force matrix RF by a matrix which contains 48 nonzero elements. Premultiplying Eq. (6. VISION. we need to find the full 48 nonzero elements in the RF matrix. Due to the coupling effects. With reference to Fig.3. 0 0 0 0 0 0 Quite often.5-4) RFRF = 18. we have (RF)TW = [(RF)TRFIF Inverting the matrix [ (RF)TRF ] yields .5-2)].5-5) where RF is an 8 x 6 matrix and 18x8 is an 8 x 8 identity matrix. the resolved force matrix in Eq. The calibration of the wrist force sensor is done by finding a pseudoinverse calibration matrix RF which satisfies W = RFF and 0 (6.52) becomes 0 r21 0 0 r32 r13 0 0 r34 0 r25 0 0 r36 r17 0 0 r38 0 0 0 0 0 0 RF = 0 0 r61 0 r52 0 0 r63 r44 0 0 r65 0 r56 0 0 r67 r48 (6 5-3) .5. Thus.5-4) using a least-squares-fit technique.r. PROBLEMS 6. 6. For further reading on the material in Sec. It must be kept in mind. 1983b]. [1981a. and Hackwood et al. however.3 see Spencer [1980]. Marck [1981].. 1981b]. Hillis [1982]. (6. and Canali et al.1 Show the validity of Eq.2 A sheet-of-light range sensor illuminating a work space with two objects produced the following output on a television screen: C/] 000 7-- Ct1 }. and Wise [1982]. Bejczy [1980].SENSING 293 Therefore comparing Eq. that the performance of these sensors is still rather primitive when compared with human capabilities. (6. 
the majority of present industrial robots perform their tasks using preprogrammed techniques and without the aid of sensory feedback. Catros and Espiau [1980]. --d coo 000 . Further reading for the material in Sec. For this reason. [1983]. 6.5-1) and Eq. [1979] and Jarvis [1983a. Further reading on laser range finders may be found in Duda et al. however.4 may be found in Harmon [1982].5) are discussed by Drake [1977]. Galey and Hsia [1980]. McDermott [1980].6 CONCLUDING REMARKS The material presented in this chapter is representative of the state of the art in external robot sensors. 6. ". See also the papers by Beni et al. sensor development is indeed a dynamic field where new techniques and applications are commonplace in the literature. [1983]. Thus. McDermott [1980].5-7). The relatively recent widespread interest in flexible automation. and Merritt [1982]. Additional reading on force-torque sensing may be found in Nevins and Whitney [1978]. (6. has led to increased efforts in the area of sensor-driven robotic systems as a means of increasing the scope of application of these machines. Meindl 0000 and Wise [1979]. Shimano and Roth [1979]. and Raibert and Tanner [1982].3 ass -"' REFERENCES Several survey articles on robotic sensing are Rosen and Nitzan [1977].5-8) The RF matrix can be identified by placing known weights along the axes of the sensor coordinate frame. 6. we have RF = [(RF)TRF]-'(RF)T C/) (6. Details about the experimental procedure for calibrating the resolved force matrix can be found in a paper by Shimano and Roth [1979]. 6. As indicated at the beginning of this chapter.2-8). Pedestal sensors (Sec. the topics included in this chapter were selected primarily for their value as fundamental material which would serve as a foundation for further study of this and related fields. 6. C.`3 ... SENSING. Do = 1 m. how would you compensate the range (IQ ins measurements for this effect? 6. over what approximate range will this sensor detect an object? Assume that an object is detected anywhere in the sensitive volume. A column is read by holding it at ground and measuring the current. and that the lens centers are 6 mm apart. 6.. At time t = 0 the transducer is pulsed for 0..6 With reference to Fig. Given that sound travels at 344 m/s: (a) What range of time should be used as a window? (b) At what time can the device be pulsed again? (c) What is . B = 3 m.1 ms. 6.3b with M = 256. 6.8 An optical proximity sensor (Fig.7 Suppose that an ultrasonic proximity sensor is used to detect the presence of objects within 0. . CAD 6.4 ms for resonances to die out within the transducer and 20 ms for echoes in the environment to die out. D.294 ROBOTICS: CONTROL. 6. give a set of waveforms for an ultrasonic sensor capable of measuring range instead of just yielding a binary output associated with proximity.4 Compute the upper limit on the frequency of a modulating sine wave to achieve a working distance of up to (but not including) 5 m using a continuous-beam laser range 'C7 s. the mean of the noise were 5 cm. How many measurements would have to be averaged to obtain an accuracy of =E0. s. finder. AND INTELLIGENCE 1 Assuming that the ranging system is set up as in Fig.5 (a) Suppose that the accuracy of a laser range finder is corrupted by noise with a gaussian distribution of mean 0 and standard deviation of 100 cm.-' :31 . = 2 m.3 (a) A helium-neon (wavelength 632. 6. ono the minimum detectable distance? 6. 
The cone formed by each beam originates at the lens and has a vertex located 4 cm in front of the center of the opposite lens. Assume that the . instead of being 0.5 cm with a .15. obtain the distance between the objects in the direction of the light sheet. Given that each lens has a diameter of 4 mm..9 A 3 x 3 touch array is scanned by driving the rows (one at a time) with 5 V. VISION. and X = 35 mm.5 m of the device. What is the distance to an object that produces a phase shift of 180°? (b) What is the upper limit on the distance for which this device would produce a unique reading? 6.16) has a sensitive volume formed by the intersection of two identical beams.w. Assume that it takes 0.95 probability? (b) If.8 nm) continuous-beam laser range finder is modu- lated with a 30-MHz sine wave. 2) and (3. and (3.SENSING 295 undriven rows and unread columns are left in high impedance. (3.6 voltage drop) is in series with the resistance at each junc- 6. column): 100 11 at (1. Unfortunately.9 assuming (a) that all undriven rows and all columns are held at tion. 3). '`7 ground. `-' . and (b) that a diode (0.11 A wrist force sensor is mounted on a PUMA robot equipped with a parallel jaw gripper and a sensor calibration procedure has been performed to obtain the calibration matrix RF. taking into account the cross-point problem. after you have performed all the measurements. All other intersections have infinite resistance. and 50 0 at (2. 1).10 Repeat Prob. A given force pattern against the array results in the following resistances at each electrode intersection (row. 3). (1. Compute the current measured at each row-column intersection in the array. 6. r-. 1). someone remounts a different gripper on the robot. 2). 6. Do you need to recalibrate the wrist force sensor? Justify your answer. " .CHAPTER SEVEN LOW-LEVEL VISION Where there is no vision. and (6) interpretation.. may be subdivided into six principal areas: (1) sensing. they do provide a useful framework for categorizing the various processes that are inherent components of a machine vision system. "CO `. and processing hardware associated with machine vision are considerably more complex than those associated with the sensory approaches discussed in Chap. Segmentation is the process that partitions an image into objects of interest. (3) segmentation. 6. be expected. Proverbs 7. shape) suitable for differentiating one type of object from another.. While there are no clearcut boundaries between these subdivisions. engine block).g. Sensing is the process that yields a visual image. concepts.~D 'L3 Ca) s."y 4'" U.. . is motivated by the continuing need to increase the flexibility and scope of applications of robotic systems. The use of vision and other sensing schemes. c`" 'CS 4. This will take us from the image formation process itself to compensations such as noise reduction. characterizing..'3 . and interpreting information from images of a three-dimensional world. the people perish. For instance. and finally to the extraction of ']t' can '-+ A't.. r-. size. the sensors. (5) recognition. 4-i boo . Finally.y 'Z7 . (2) preprocessing. We consider three levels of processing: low-. C/) 010 °. vision capabilities endow a robot with a sophisticated sensing mechanism that allows the machine to respond to its environment in an "intelligent" and flexible manner. (4) description.33 C.t CAD CAD 296 '-h . also commonly referred to as machine or computer vision.. and high-level vision. 
we associate with low-level vision those processes that are primitive in the sense that they may be considered "automatic reactions" requiring no intelligence on the part of the vision system. 6. such as those discussed in Chap. It is convenient to group these various areas according to the sophistication involved in their implementation..g. medium-. vision is recognized as the most powerful of robot sensory capabilities. Recognition is the process that identifies these objects (e. Robot vision may be defined as the process of extracting. Preprocessing deals with techniques such as noise reduction and enhancement of details.. As might "-+ CAD En' C1.. bolt. and force sensing play a significant role in the improvement of robot performance. interpretation assigns meaning to an ensemble of recognized objects."- "{. wrench. touch. This process. . In our discussion. While proximity.. we shall treat sensing and preprocessing as low-level vision functions.° C-) c1. Description deals with the computation of features (e.1 INTRODUCTION As is true in humans. 7. Thus. the subdivision of functions addressed in this discussion may be viewed as a practical approach for implementing state-of-the-art machine vision systems. and label components in an image resulting from low-level vision. characterize. that recognition and interpretation are highly interrelated functions in a human. given our level of understanding and the analytical tools currently avail'. we will treat segmentation. As discussed in Chap. for example.. We know. a)0 coo .LOW-LEVEL VISION 297 primitive image features such as intensity discontinuities. preprocessing. 7. When sampled spatially and quantized in amplitude.3. 7. f3. as discussed in Sec.. a)) (DD 00" O`r "C7 °¢c +-' . These relationships. The mathematics of image formation are discussed in Sec. The principal devices used for robotic vision are television cameras.. It is not implied that these subdivisions represent a model of human vision nor that they are carried out independently of each other.4. consisting either of a tube or solid-state imaging sensor. 7. This range of processes may be compared with the sensing and adaptation process a human goes through in trying to find a seat in a dark theater immediately after walking in during a bright afternoon. such as the structured-lighting approach discussed in Sec. r. We will associate with medium-level vision those processes that extract._ able in this field. Although an in-depth treatment of these devices is beyond the scope of the present discus'CS BCD te' r-' A. and recognition of individual objects as medium-level vision functions. these limitations lead to the formulation of constraints and idealizations intended to reduce the complexity of this task. with depth information being obtained by special imaging techniques. The intelligent process of finding an unoccupied space cannot begin until a suitable image is available. or by the use of stereo imaging. The categories and subdivisions discussed above are suggested to a large extent by the way machine vision systems are generally implemented.4.. and with concepts and techniques required to implement low-level vision functions. The material in this chapter deals with sensing. Although true vision is inherently a three-dimensional activity. In this section we are interested in three main topics: (1) the principal imaging techniques used for robotic vision.2 IMAGE ACQUISITION Visual information is converted to electrical signals by visual sensors. 
and (3) the effects of amplitude quantization on intensity resolution. 8.C (3. and associated electronics. (2) the effects of sampling on spatial resolution. High-level vision refers to processes that attempt to emulate cognition. While algorithms for lowand medium-level vision encompass a reasonably well-defined spectrum of activities. Topics in higher-level vision are discussed in Chap. In terms of our six subdivisions. description. however. most of the work in machine vision is carried out using images of a three-dimensional scene. our knowledge and understanding of high-level vision processes is considerably more vague and speculative. 8. these signals yield a digital image. are not yet understood to the point where they can be modeled analytically. Solid-state imaging devices offer a number of advantages over tube cameras. 7.1 (a) Schematic of a vidicon tube. the vidicon camera tube is a cylindrical glass envelope containing an electron gun at one end. as explained below. However. A thin photosensiCAD Transparent metal coating Beam focusing coil Photosensitive layer I I (a) (b) Figure 7. including lighter weight. . Solid-state imaging sensors will be introduced via a brief discussion of charge-coupled devices (CCDs). The inner surface of the glass faceplate is coated with a transparent metal film which forms an electrode from which an electrical video signal is derived.1a. AND INTELLIGENCE sion. smaller size. (b) Electron beam scanning pattern. The beam is focused and deflected by voltages applied to the coils shown in Fig. a commonly used representative of the tube family of TV cameras.1a.298 ROBOTICS: CONTROL. VISION. The deflection circuit causes the beam to scan the inner sur- face of the target in order to "read" the image. the resolution of certain tubes is still beyond the capabilities of solid-state cameras. As shown schematically in Fig. longer life. and lower power consumption. and a faceplate and target at the other. which are one of the principal exponents of this technology. SENSING. we will consider the principles of operation of the vidicon tube. 7. When discussing CCD devices. This current is proportional to the number of electrons replaced and. this effect produces an image on the target layer that is identical to the light image on the faceplate of the tube.-. Since the amount of electronic charge that flows is proportional to the amount of light in any local area of the target.. contain image data.. a video signal proportional to the intensity of the input image. its resistance is reduced and electrons are allowed to flow and neutralize the positive charge.1. the image would flicker perceptibly.}- SET' CAD . In the absence of light. In normal operation. If the lines were scanned sequentially and the result ::r 7. a positive voltage is applied to the metal coating of the faceplate. This variation in current during the electron beam scanning motion produces.1a). that is.C . thus creating electron-hole pairs. the remaining concentration of electron charge is high in dark areas and lower in light areas.. Other standards exist which yield higher line rates per frame. with the electron beam depositing a layer of electrons on the inner surface of the target surface to balance the positive charge on the metal coating. For example. '"h f3. called the RETMA (Radio-Electronics-Television Manufacturers Associa- tion) scanning convention. therefore. it is convenient to subdivide sensors into two categories: line scan sensors and area sensors. or twice the frame rate. 
a popular scanning approach in computer vision and digital image processing is based on 559 lines. while the second field scans the even lines. of which 512. Behind the photosensitive target there is a positively charged fine wire mesh which decelerates electrons emitted by the gun so that they reach the target surface with essentially zero velocity. the photosensitive layer thus becomes a capacitor with negative charge on the inner surface and positive charge on the other side.. The first field of each frame scans the odd lines (shown dashed in Fig. is the standard used for broadcast television in the United States.LOW-LEVEL VISION 299 tive "target" layer is deposited onto the metal film. . to the light intensity at a particular location of the scanning beam. . `J' i. Image photons pass through a transparent polycrystalline silicon gate structure and are absorbed in the silicon crystal. . `L3 O-.Wt . after conditioning by the camera circuitry. When light strikes the target layer. number of advantages for both hardware and software implementations.' (WD 'J' . the photosensitive material behaves as a dielectric. . but their principle of operation is essentially the same. V'1 00..5 lines and scanned 60 times each second. As the beam again scans the target it replaces the lost charge. thus causing a current to flow in the metal layer and out one of the tube pins. The resulting photoelectrons are collected in the photosites. 'AU . v0. this layer consists of very small resistive globules whose resistance is inversely proportional to light intensity.-. The electron beam scans the entire surface of the target 30 times per second. each complete scan (called a frame) consisting of 525 lines of which 480 contain image information. As the elecp. R. CAD ". each consisting of 262. This scanning scheme. with the amount of charge collected C3.! '-h shown on a TV monitor.1b. This phenomenon is avoided by using a scan mechanism in which a frame is divided into two interlaced fields. Working with integer powers of 2 has a "'_ CAD C17 a.-.-r CAD °C° . 'LS tron beam scans the surface of the target layer. 7. The basic component of a line scan CCD sensor is a row of silicon imaging elements called photosites.y..'3: The principal scanning standard used in the United States is shown in Fig.. -I 4 F. two transfer gates used to clock the contents of the imaging elements into so-called transport registers.300 ROBOTICS. Output C a C O Output i-1 .. As shown in Fig.2a.2 (a) CCD line scan sensor. (b).4 F. Control signals (a) Horizontal transport register Gate Fas . 7.0 Control signals FI Vertical transport register .CONTROL. . F. VISION. a typical line scan sensor is composed of a row of the imaging elements just discussed.0 F Photosites Gate C s LL C7 0-o 0-4 Amplifier Output gate F4 9 F4 r-0 I . AND INTELLIGENCE at each photosite being proportional to the illumination intensity at that location..CCD area sensor. SENSING. w (b) Figure 7. and an output gate used to clock the contents of the transport registers into an amplifier whose output is a voltage signal proportional to the contents of the row of photosites.. °.. noted that f(0. Line scan sensors with resolutions ranging between 256 and 2048 elements are not uncommon.e. :'O "CJ °°° ``'o ue" NON 3UD . Figure 7..2-1) where x and y are now discrete variables: x = 0. f(0. .+ s.. 0) represents the pixel at the origin of the image. N . 2.:t CAD of the spatial coordinates (x. M .. 1) . The terms intensity and gray level will be used interchangeably..2b.. 
as well as the coordinate convention on which all subsequent discussions will be based. . .1. 0) f(N . . The motion of an object in the direction perpendicular to the sensor produces a two-dimensional image..3 illustrates this concept. 1 . These devices are ideally suited for applications in which objects are moving past the sensor (as in conveyor belts). where each sample is also quantized in intensity. and the value of f at any point (x.. M . y) must be digitized both spatially and in amplitude (intensity)... 1) CD' f(0. . Suppose that a continuous image is sampled uniformly into an array of N rows and M columns. The con'C7 Repeating this procedure for the even-numbered lines completes the second field of a TV frame.1. picit is ture element.. The resolutions of area sensors range between.1) (7. or pixel. f(N .1) f(0.1. and so CAD III .3.. M .. y) will be referred to as image sampling. y) to denote the two-dimensional image out of a TV camera or other imaging device.. This array.32 x 32 at the low end to 256 x 256 elements 'L3 w== tent of this register is fed into an amplifier whose output is a line of video.. 0) f(0. . y) is proportional to the brightness (intensity) of the image at that point... f(N .1. 2..... M. Line scan cameras obviously yield only one line of an input image....O V.'3 -16 for a medium resolution sensor. y) = f(l... 1... Throughout this book. 1) the pixel to its right. Higher-resolution devices presently in the market have a resolution on the order of 480 x 380 elements. while amplitude digitization will be called intensity or gray-level quantization. Each element in the array is called an image element. With reference to Fig.. 1) f(1...LOW-LEVEL VISION 301 Charge-coupled area arrays are similar to the line scan sensors. with the exception that the photosites are arranged in a matrix format and there is a gatetransport register combination between columns of photosites.1) .. y = 0. 7. image plane) coordinates.. The latter term is applicable to monochrome images and reflects the fact that these images vary from black to white in shades of gray. as shown in Fig. This "scanning" mechanism is repeated 30 times per second.' f(X. ... In order to be in a form suitable for computer processing.. may be represented as C]. called a digital image. and experimental CCD sensors are capable of achieving a resolution of 1024 x 1024 elements or higher.. an image 'function f(x. Digitization .1. We will often use the variable z to denote intensity variations in an image when the spatial location of these variations is of no interest. 7. The contents of odd-numbered photosites are sequentially gated into the vertical transport registers and then into the horizontal transport register.. where x and y denote spatial (i. we will use f(x. 0) f(1. . It is common practice to let N. however. The 256-. CDA ors "sue- :'. ACC t3. V Figure 7. It is noted that the 256 x 256 image is reasonably close to Fig.302 ROBOTICS: CONTROL. VISION. As a rule.4b to e shows the same image. This produced a checkerboard effect that is particularly visible in the low-resolution images. AND INTELLIGENCE /Origin . but image quality deteriorated rapidly for the other values of N. "C3 'b4 on. i. and 32. . 7. displayed with 16 levels.O L]4 ono . Since the display area used for each image was the same (512 x 512 display points). SENSING. and increases sharply thereafter. This effect is considerably more visible as ridgelike structures (called false contours) in the image a>. pixels with N = 512. 
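The digitization just described is easy to emulate in software. The following short sketch is illustrative only (it is not from the text; the function name, the synthetic scene, and the use of NumPy are assumptions): it samples a continuous brightness function uniformly into an N x M array and quantizes each sample into 2^b gray levels, producing the digital image array of Eq. (7.2-1).

```python
import numpy as np

def digitize(scene, N=256, M=256, b=6):
    """Sample a continuous scene(u, v) -> brightness in [0, 1] on an
    N x M grid and quantize each sample into 2**b gray levels."""
    img = np.empty((N, M))
    for x in range(N):                      # x indexes rows, as in Fig. 7.3
        for y in range(M):
            img[x, y] = scene(x / N, y / M)  # uniform spatial sampling
    levels = 2 ** b
    # map [0, 1] brightness to integer gray levels 0 .. levels - 1
    return np.clip((img * levels).astype(int), 0, levels - 1)

# example: a smooth diagonal ramp imaged with 64 gray levels
f = digitize(lambda u, v: (u + v) / 2.0, N=256, M=256, b=6)
```

Note that both the grid size and the number of levels are integer powers of 2, in keeping with the convention discussed above.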
the intensity of each pixel is quantized into one of 256 discrete levels.5 illustrates the effect produced by reducing the number of intensity levels while keeping the spatial resolution constant at 512 x 512. but with N = 256.3 Coordinate convention for image representation. M. consider Fig. Figure 7. In all cases the number of allowed intensity levels was kept at 256.. In order to gain insight into the effect of sampling and quantization. 128-. s.4. Part (a) of this figure shows an image sampled into an array of N x N The number of samples and intensity levels required to produce a useful (in the machine vision sense) reproduction of an original image depends on the image itself and on the intended application. 7. 64. the requirements to obtain quality comparable to that of monochrome TV pictures are on the order of 512 x 512 pixels with 128 intensity levels.fl s. 128. The 32-level image. and 64-level images are of acceptable quality. . Figure 7. a minimum system for general-purpose vision work should have spatial resolution capabilities on the order of 256 x 256 pixels with 64 levels.4a. °a) jar C3. y) is given by the value (intensity) of f at that point. pixels in the lower resolution images were duplicated in order to fill the entire display field.. and the number of discrete intensity levels of each quantized pixel be integer powers of 2. shows a slight degradation (particularly in areas of nearly constant intensity) as a result of using too few intensity levels to represent each pixel.. As a basis for comparison. The value of any point (x. . (b) 256 x 256.4 Effects of reducing sampling-grid size.LOW-LEVEL VISION 303 Figure 7. (e) 32 x 32. (a) 512 x 512. (c) 128 x 128. (d) 64 x 64. and 2 levels. AND INTELLIGENCE Figure 7. 7.8.3 ILLUMINATION TECHNIQUES Illumination of a scene is an important factor that often affects the complexity of vision algorithms. "CS m. 8. 7. Backlighting. and extraneous details. This technique is ideally suited for applications in which silhouettes of objects are sufficient for recognition or other measurements.5 A 512 x 512 image displayed with 256. VISION. 4.304 ROBOTICS: CONTROL. 64. 7. shadows. A well-designed lighting system illuminates a scene so that the complexity of the resulting image is minimized. The diffuse-lighting approach shown in Fig. as shown in Fig. SENSING.6a can be employed for objects characterized by smooth. produces a black and white (binary) image. Arbitrary lighting of the environment is often not acceptable because it can result in low-contrast images. 32. 16. An example is shown in Fig. 7. while the information required for object detection and extraction is enhanced.7.6 shows four of the principal schemes used for illuminating a robot work space. This lighting scheme is generally employed in applications where surface characteristics are important. 7. 128.6b.[ . regular surfaces. An example is shown in Fig. Figure 7. specular reflections. ' ue. Second.. or grids onto the work surface. by analyzing the way in which the light pattern is distorted.. it is possible to gain insight into the three-dimensional characteristics of the object. two light sources are used to guarantee that the object will break the act 4. This line would be interrupted by an object which breaks both light planes simultaneously. This lighting technique has two important advantages. . and disturbances of this pattern indicate the presence of an object.10b. Two examples of the structured-lighting approach are shown in Fig.9. 
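The effects shown in Figs. 7.4 and 7.5 can be reproduced with a few lines of code. The sketch below is a minimal illustration (the function names and the random test image are mine, not the book's): one function subsamples the image and duplicates pixels back to the display size, which produces the checkerboard effect; the other requantizes the gray scale, which produces false contouring when too few levels are used.

```python
import numpy as np

f = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # stand-in 8-bit image

def reduce_resolution(f, k):
    """Subsample by a factor k, then duplicate pixels so the result is
    displayed at the original size (checkerboard effect for large k)."""
    coarse = f[::k, ::k]
    return np.repeat(np.repeat(coarse, k, axis=0), k, axis=1)

def reduce_gray_levels(f, levels):
    """Requantize an 8-bit image to the given number of gray levels
    (too few levels produces visible false contours)."""
    step = 256 // levels
    return (f // step) * step

coarse_view = reduce_resolution(f, 8)    # a 64 x 64 grid shown at 512 x 512
few_levels  = reduce_gray_levels(f, 16)  # a 16-level rendition of the image
```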
located above the surface and focused on the stripe would see a continuous line of light in the absence of an object.. 7. First.10a.9b consists of two light planes projected from different directions. ue.5 (continued) The structured-lighting approach shown in Fig. thus simplifying the object detection problem. it establishes a known light pattern on the work space.. stripes. 7.LOW-LEVEL VISION 305 Figure 7. as shown in Fig.. As shown in Fig.6c consists of projecting points... . `p' -T" . A line scan camera..0 a-. The example shown in Fig. 7. The first shows a block illuminated by parallel light planes which become light stripes upon intersecting a flat surface. CG= 0. 7...r CAD R. This particular approach is ideally suited for objects moving on a conveyor belt past the camera. 7.. ono r. . but converging on a single stripe on the surface. AND INTELLIGENCE (c) Rough surface (d) Figure 7. For flaw-free surfaces little light is scattered upward to the camera.g. but two-dimensional information can be accumulated as the object moves past the cam. +-+ "c7 "C7 . 7. It is of interest to note that the line scan camera sees only the line on which the two light planes converge.t. 'i. (From Mundy [1977]. can be detected by using a highly directed light beam (e. The directional-lighting approach shown in Fig. 7.6 Four basic illumination schemes.11. such as pits and scratches.306 ROBOTICS: CONTROL. the presence of a flaw generally cps . Defects on the surface. SENSING. © IEEE. increases the amount of light scattered to the camera. VISION..fl era. On the other hand.) light stripe only when it is directly below the camera. An example is shown in Fig.6d is useful primarily for inspection of object surfaces. a laser beam) and measuring the amount of scatter. thus facilitating detection of a defect. 1 Some Basic Transformations The material in this section deals with the development of a unified representation for problems such as image rotation.7 Example of diffuse lighting. 2 in connection with robot arm kinematics. Here.4. and treat the stereo imaging problem in some detail.4 IMAGING GEOMETRY In the following discussion we consider several important transformations used in imaging. and translation. All transformations . Some of the transformations discussed in the following section were already introduced in Chap. but from the point of view of imaging. 7.LOW-LEVEL VISION 307 Figure 7. scaling. 7. we consider a similar problem. derive a camera model. 4. it is often useful to concatenate several transformations to produce a composite result. In cases involving two-dimensional images.308 ROBOTICS. Translation. Z*) are the coordinates of the new point. Suppose that we wish to translate a point with coordinates (X. Z). CONTROL. Y. and then rotation. Zo ). The notational representation of this process is . we will adhere to our previous convention of using the lowercase representation (x.4-1) can be expressed in matrix form by writing: X Y Z 1 (7. Z) to a new location by using displacements (X0.4-2) As indicated later in this section. AND INTELLIGENCE Figure 7. The translation is easily accomplished by using the following equations: -. SENSING. VISION. Z) as the world coordinates of a point. are expressed in a three-dimensional (3D) cartesian coordinate system in which a point has coordinates denoted by (X. Y. followed by scaling.4 X*=X+Xo Z*=Z+Zo Y* = Y + Yo (7. It is common terminology to refer to (X.8 Example of backlighting. Y. y) to denote the coordinates of a pixel. such as translation. Yo. 
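A minimal sketch of how the two-light-plane arrangement of Fig. 7.10 can be used for object detection follows (the threshold values and function name are assumptions, not values from the text): the line scan camera sees a bright stripe when no object is present, so a part passing under the camera is detected by looking for a run of dark samples in the one-dimensional scan.

```python
import numpy as np

def stripe_broken(scan, bright_thresh=200, min_dark_run=5):
    """scan: 1D array of line-scan intensities along the light stripe.
    Returns True if the stripe is interrupted, i.e., if at least
    min_dark_run consecutive samples fall below bright_thresh."""
    dark = scan < bright_thresh
    run = 0
    for d in dark:
        run = run + 1 if d else 0
        if run >= min_dark_run:
            return True
    return False

# an object breaking both light planes produces a dark gap in the scan
example = np.array([230] * 100 + [40] * 20 + [235] * 100)
print(stripe_broken(example))   # True
```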
Y*, Z*) are the coordinates of the new point. Equation (7.4-1) can be expressed in matrix form by writing

\begin{bmatrix} X^* \\ Y^* \\ Z^* \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & X_0 \\ 0 & 1 & 0 & Y_0 \\ 0 & 0 & 1 & Z_0 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}     (7.4-2)

As indicated later in this section, it is often useful to concatenate several transformations to produce a composite result, such as translation, followed by scaling, and then rotation. The notation is simplified considerably by using square matrices. With this in mind, we write Eq. (7.4-2) in the following form:

\begin{bmatrix} X^* \\ Y^* \\ Z^* \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & X_0 \\ 0 & 1 & 0 & Y_0 \\ 0 & 0 & 1 & Z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}     (7.4-3)

In terms of the values of X*, Y*, and Z*, Eqs. (7.4-2) and (7.4-3) are clearly equivalent.

Figure 7.11 Example of directional lighting. (From Mundy [1977]. © IEEE.)

Throughout this section, we will use the unified matrix representation

v* = Av     (7.4-4)

where A is a 4 x 4 transformation matrix, v is a column vector containing the original coordinates,

v = \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}     (7.4-5)

and v* is a column vector whose components are the transformed coordinates:

v* = \begin{bmatrix} X^* \\ Y^* \\ Z^* \\ 1 \end{bmatrix}     (7.4-6)

Using this notation, the matrix used for translation is given by

T = \begin{bmatrix} 1 & 0 & 0 & X_0 \\ 0 & 1 & 0 & Y_0 \\ 0 & 0 & 1 & Z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix}     (7.4-7)

and the translation process is accomplished by using Eq. (7.4-4), so that v* = Tv.

Scaling. Scaling by factors Sx, Sy, and Sz along the X, Y, and Z axes is given by the transformation matrix

S = \begin{bmatrix} S_x & 0 & 0 & 0 \\ 0 & S_y & 0 & 0 \\ 0 & 0 & S_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}     (7.4-8)

Rotation. The transformations used for three-dimensional rotation are inherently more complex than the transformations discussed thus far. The simplest form of these transformations is for rotation of a point about the coordinate axes. To rotate a given point about an arbitrary point in space requires three transformations: the first translates the arbitrary point to the origin, the second performs the rotation, and the third translates the point back to its original position.

Figure 7.12 Rotation of a point about each of the coordinate axes. Angles are measured clockwise when looking along the rotation axis toward the origin.

With reference to Fig. 7.12, rotation of a point about the Z coordinate axis by an angle θ is achieved by using the transformation

R_θ = \begin{bmatrix} \cos\theta & \sin\theta & 0 & 0 \\ -\sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}     (7.4-9)

The rotation angle θ is measured clockwise when looking at the origin from a point on the +Z axis. It is noted that this transformation affects only the values of the X and Y coordinates.

Rotation of a point about the X axis by an angle α is performed by using the transformation

R_α = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha & \sin\alpha & 0 \\ 0 & -\sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}     (7.4-10)

Finally, rotation of a point about the Y axis by an angle β is achieved by using the transformation

R_β = \begin{bmatrix} \cos\beta & 0 & -\sin\beta & 0 \\ 0 & 1 & 0 & 0 \\ \sin\beta & 0 & \cos\beta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}     (7.4-11)

Concatenation and Inverse Transformations. The application of several transformations can be represented by a single 4 x 4 transformation matrix. For example, translation, scaling, and rotation about the Z axis of a point v is given by

v* = R_θ[S(Tv)] = Av     (7.4-12)

where A is the 4 x 4 matrix A = R_θST. It is important to note that these matrices generally do not commute, and so the order of application is important.

Although our discussion thus far has been limited to transformations of a single point, the same ideas extend to transforming a set of m points simultaneously by using a single transformation. Let v1, v2, . . . , vm represent the coordinates of m points. If we form a 4 x m matrix V whose columns are these column vectors, then the simultaneous transformation of all these points by a 4 x 4 transformation matrix A is given by

V* = AV     (7.4-13)

The resulting matrix V* is 4 x m. Its ith column, v*_i, contains the coordinates of the transformed point corresponding to v_i.

Before leaving this section, we point out that many of the transformations discussed above have inverse matrices that perform the opposite transformation and can be obtained by inspection. For example, the inverse translation matrix is given by

T^{-1} = \begin{bmatrix} 1 & 0 & 0 & -X_0 \\ 0 & 1 & 0 & -Y_0 \\ 0 & 0 & 1 & -Z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix}     (7.4-14)

Similarly, the inverse rotation matrix R_θ^{-1} is given by

R_θ^{-1} = \begin{bmatrix} \cos(-\theta) & \sin(-\theta) & 0 & 0 \\ -\sin(-\theta) & \cos(-\theta) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}     (7.4-15)

The inverse of more complex transformation matrices is usually obtained by numerical techniques.

7.4.2 Perspective Transformations

A perspective transformation (also called an imaging transformation) projects 3D points onto a plane. Perspective transformations play a central role in image processing because they provide an approximation to the manner in which an image is formed by viewing a three-dimensional world. Although perspective transformations will be expressed later in this section in a 4 x 4 matrix form, these transformations are fundamentally different from those discussed in the previous section because they are nonlinear in the sense that they involve division by coordinate values.

A model of the image formation process is shown in Fig. 7.13. We define the camera coordinate system (x, y, z) as having the image plane coincident with the xy plane, and the optical axis (established by the center of the lens) along the z axis.
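These transformation matrices are convenient to manipulate directly as 4 x 4 arrays. The sketch below (a minimal example consistent with Eqs. (7.4-7) to (7.4-9) and (7.4-12); the helper names are mine, not the text's) builds T, S, and R_theta and applies the concatenated matrix A = R_theta S T to a point expressed in homogeneous coordinates.

```python
import numpy as np

def translation(x0, y0, z0):
    T = np.eye(4)
    T[:3, 3] = [x0, y0, z0]                 # Eq. (7.4-7)
    return T

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])       # Eq. (7.4-8)

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[:2, :2] = [[c, s], [-s, c]]           # Eq. (7.4-9), clockwise convention
    return R

v = np.array([1.0, 2.0, 3.0, 1.0])          # homogeneous coordinates of (1, 2, 3)
A = rotation_z(np.pi / 4) @ scaling(2, 2, 2) @ translation(1, 0, 0)
v_star = A @ v                              # v* = R[S(Tv)], Eq. (7.4-12)
```

Because the matrices do not commute, reordering the product (for instance, translating after rotating) gives a different v*, which is the point made above about order of application.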
The homogeneous coordinates of a point with cartesian coordinates (X.kZ x + kj The elements of Ch are the camera coordinates in homogeneous form. translation.4-21) If we define the perspective transformation matrix 0 1 1 0 0 1 1 0 0 0 1 0 P = 0 0 0 (7. This can be accomplished easily by using homogeneous coordinates.= kX kY kZ kX kY kZ 0 0 0 0 (7. k). kY.4-20) and its homogeneous counterpart is given by kX kY Wh = kZ k (7. Clearly. Z) are defined as (kX..LOW-LEVEL VISION 315 It is important to note that these equations are nonlinear because they involve division by the variable Z. it is often convenient to express these equations in matrix form as we did in the previous section for rotation. where k is an arbitrary. A point in the cartesian world coordinate system may be expressed in vector form as OCR (7.4-23) k j . Y. Although we could use them directly as shown above. conversion of homogeneous coordinates back to cartesian coordinates is accomplished by dividing the first three homogeneous coordinates by the fourth. yields a vector which we shall denote by Ch: 1 0 1 0 0 1 1 0 0 0 1 0 Ch =PW1. 4-18) and (7. SENSING. Thus. the cartesian coordinates of any point in the camera coordinate system are given in vector form by x c = y z (7. Y. as shown earlier in Eqs.316 ROBOTICS: CONTROL. (7. (7.13.4-25) where P-' is easily found to be 1 0 1 0 0 1 1 0 0 0 1 P-' = L 0 0 0 0 0 (7 4-26) .4-24) The first two components of c are the (x.4-25) then yields the homogeneous world coordinate vector kxo Wh = kyo (7. w1.4-27) k Application of Eq. these coordinates can be converted to cartesian form by dividing each of the first three components of c1. This point can be expressed in homogeneous vector form as kx0 Ch = kyo 0 (7. Z). this component acts as a free variable in the inverse perspective transformation. (7. AND INTELLIGENCE cated above.4-19). VISION. X J Suppose that a given image point has coordinates (x0. 7. from Eq.4-28) 0 k . The inverse perspective transformation maps an image point back into 3D. As will be seen below. where the 0 in the z location simply indicates the fact that the image plane is located at z = 0. y) coordinates in the image plane of a projected 3D point (X. The third component is of no interest to us in terms of the model in Fig. Thus. = P-Ich (7. by the fourth. 0). yo.4-23). 4-25) that Wh __ (7.4-31) These equations show that. X).4-18) and (7. 0) and (0. unless we know something about the 3D point which generated a given image point (for example. This observation.4-30) and Y= (X .4-33) . 0.. (7.4-19).. Thus. letting . can be used as a way to formulate the inverse perspective transformation simply by using the z component of Ch as a free variable instead of 0. (7. `'' (7. we cannot completely recover the 3D point from its image. yo. X w = Y xo A 0 (7.4-29) Z This is obviously not what one would expect since it gives Z = 0 for any 3D point. The equations of this line in the world coordinate system are obtained from Eqs. The problem here is caused by the fact that mapping a 3D scene onto the image plane is a many-to-one transformation.4-32) k we now have from Eq. which is certainly not unexpected.Z) (7. its Z coordinate). kxo Ch = kyo kz (7. in Cartesian coordinates.-. yo) corresponds to the set of colinear 3D points which lie on the line that passes through (xo.LOW-LEVEL VISION 317 or. that is. The image point (xo. . However. 
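The perspective transformation can be sketched numerically as follows (an illustrative example assuming the aligned camera and world coordinate systems of Fig. 7.13; the focal length value and function names are arbitrary choices of mine):

```python
import numpy as np

def perspective_matrix(lam):
    """4 x 4 perspective transformation matrix P of Eq. (7.4-22)."""
    P = np.eye(4)
    P[3, 2] = -1.0 / lam
    return P

def project(w, lam):
    """Project world point w = (X, Y, Z) onto the image plane, returning the
    cartesian image coordinates (x, y) of Eqs. (7.4-18) and (7.4-19)."""
    wh = np.array([w[0], w[1], w[2], 1.0])   # homogeneous form with k = 1
    ch = perspective_matrix(lam) @ wh        # ch = P wh, Eq. (7.4-23)
    return ch[0] / ch[3], ch[1] / ch[3]      # divide by the fourth component

x, y = project((0.5, 0.3, 2.0), lam=0.035)
# identical to x = lam*X/(lam - Z) and y = lam*Y/(lam - Z)
```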
The situation is depicted in Fig.4-35) z Z = X+z Solving for z in terms of Z in the last equation and substituting in the first two expressions yields X °(X -Z) (7. upon conversion to cartesian coordinates. Y.14. which shows a world coordinate system (X. yields X x0 (7. This problem will be addressed again in Sec.4.4. AND INTELLIGENCE which.4-36) Y °(X -Z) (7. 7. This model is based on the assumption that the camera and world coordinate systems are coincident. These two equations thus constitute a basic mathematical model of an imaging camera. the basic objective of obtaining the imageplane coordinates of any given world point remains the same.3 Camera Model Equations (7.318 ROBOTICS: CONTROL. SENSING. 7.4-23) and (7. 7. treating z as a free variable yields the equations X x0 X = a+z X+ Xz Y (7.4-34) In other words.4-37) which agrees with the above observation that recovering a 3D point from its image by means of the inverse perspective transformation requires knowledge of at least one of the world coordinates of the point.4-24) characterize the formation of an image via the projection of 3D points onto an image plane.5. Z) used to locate both the camera and 3D points (denoted by w). VISION. This . In this section we consider a more general problem in which the two coordinate systems are allowed to be separate. CC.14 Imaging geometry with two coordinate systems. In this discussion.. (x. y. The concepts developed in the last two sections provide all the necessary tools to derive a camera model based on the geometrical arrangement of Fig. r3) . The offset of the center of the gimbal from the origin of the world coordinate system is denoted by vector w0.LOW-LEVEL VISION 319 W Figure 7. ono C3' C1.4-22) to obtain the imageplane coordinates of any given world point. The approach is to bring the camera and world coordinate systems into alignment by applying a set of transformations. It is assumed that the camera is mounted on a gimbal which allows pan through an angle 0 and tilt through an angle a. we first reduce the . with . (7. figure also shows the camera coordinate system i. Nit CAD "C3 r. and the offset of the center of the imaging plane with respect to the gimbal center is denoted by a vector r. After this has been accomplished. we simply apply the perspective transformation given in Eq.. and tilt as the angle between the z and Z axes. z) and image points `C3 (denoted by c). con .14. CAD components (r1 .. r2. In other words.... 7. pan is defined as the angle between the x and X axes.. y. 7. which implies a counterclockwise rotation of the camera about the z axis.+ °o. 7. Since tilt is the angle between these two axes.. CAD . and (4) displacement of the image plane with respect to the gimbal center. As above.O .13 before applying the perspective transformation. The sequence of mechanical steps just discussed obviously does not affect the world points since the set of points seen by the camera after it was moved from normal position is quite different. a counterclockwise rotation of the camera implies positive angles. r-y co. and z.. Suppose that. these two axes are aligned. the camera was in normal position. we tilt the camera an angle a by rotating the z axis by a.14 can be achieved in a number of ways.4-38) L0 In 0 other words. The rotation is with respect to the x axis and is accomplished by applying the transformation matrix Ra given in Eq.T' CAD .12. a homogeneous world point Wh that was at coordinates (X0. s. 
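The observation embodied in Eqs. (7.4-36) and (7.4-37) — that an image point determines a unique world point only if one world coordinate, say Z, is known — translates directly into code (a sketch; the function name is mine):

```python
def back_project(x0, y0, Z, lam):
    """Recover the world point (X, Y, Z) that produced image point (x0, y0),
    given its (known) Z coordinate -- Eqs. (7.4-36) and (7.4-37)."""
    X = (x0 / lam) * (lam - Z)
    Y = (y0 / lam) * (lam - Z)
    return X, Y, Z

# consistency check against the forward projection of the previous sketch:
# back_project(*project((0.5, 0.3, 2.0), 0.035), Z=2.0, lam=0.035)
# returns (0.5, 0.3, 2.0) to within rounding
```

Without the known Z, the best one can do is the one-parameter family of points given by Eqs. (7.4-36) and (7.4-37) as Z varies, i.e., the line through the image point and the lens center.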
t A useful way to visualize these transformations is to construct an axis system (e. and all axes were aligned...4-9). our problem is thus reduced to applying to every world point a set of transformations which correspond to the steps given above.4-9). c°o }. 7. (7. one axis at a time. (3) tilt of the z axis.4-10) to all points (including the point RBGwh). At this point in the development the z and Z axes are still aligned. . However. and perform the rotations manually.^3 gimbal center is accomplished by using the following transformation matrix: 1 0 0 1 0 0 1 G = 0 0 0 0 -X0 -Yo -Z0 1 U-. in the sense that the gimbal center and origin of the image plane were at the origin of the world coordi- Starting from normal position." '"S' (7. application of this matrix to all points (including the point Gwh ) effectively rotates the x axis to the desired location. the pan angle is measured between the x and X axes. and the 0° mark is where the z and Z axes are aligned. it is important to keep clearly in mind the convention established in Fig. As indicated earlier.t r0. VISION. (2) pan of the x axis. (7. SENSING. Translation of the origin of the world coordinate system to the location of the nate system. In normal position. The unrotated (0°) position corresponds to the case when the x and X axes are aligned.t t3O a.13 for application of the perspective transformation. AND INTELLIGENCE problem to the geometrical arrangement shown in Fig.g. That is. angles are considered positive when points are rotated clockwise. 7. We assume the following sequence of steps: (1) displacement of the gimbal center from the origin. When using Eq. Z0) is at the origin of the new coordinate system after the transformation COD Gw.320 ROBOTICS: CONTROL. we simply rotate it by 0. (7. we can achieve normal position again simply by applying exactly the same sequence of steps to all world points. label the axes x. In order to pan the x axis through the desired angle. with pipe cleaners). initially. the geometrical arrangement of Fig. The rotation is with respect to the z axis and is accomplished by using the transformation matrix R0 given in Eq. Y0. Since a camera in normal position satisfies the arrangement of Fig. In other words. (7."' which are the image coordinates of a point w whose world coordinates are (X. displacement of the origin of the image plane by vector r is achieved by the transformation matrix 1 0 1 0 .4-42) (7.-. The image-plane coordinates of a point wh are finally obtained by using Eq. y) of the imaged point by dividing the first and second components of Ch by the fourth.14 has the following homogeneous representation in the camera coordinate system: cl. and a = 0 = 0 °. 7.4-41) and converting to cartesian coordinates yields _ x-X and (X-X0) cosO+(Y-Yo) sin0-r1 -(X-Xo)sinOsina+(Y-Yo)cosOsina-(Z-Z))cosa+r3+X -(X-X0)sinUcos a+(Y-Y0)cosOcosa+(Z-Z0)sina-r2 y-X -(X-Xo)sin0sina+(Y-Yo)cos0sina-(Z-ZO)cosa+r3+X .LOW-LEVEL VISION 321 According to the discussion in Sec.4-41) (7.4. a homogeneous world point which is being viewed by a camera satisfying the geometrical arrangement shown in Fig. Expanding Eq. 7. Off' (7.4-40) 0 0 . by applying to Wh the series of transformations CRGwh we have brought the world and camera coordinate systems into coincidence. S. (7.rI C = 0 0 0 0 1 -r2 (7. we obtain the cartesian coordinates (x.4-19) when X0 = Yo = Zo = 0.4-39) Finally.o . the two rotation matrices can be concatenated into a single matrix.4-9) and (7.4-18) and (7.2. It then follows from Eqs. R = R«Ro. (7.4-43) 0. 
Y.sin 0 cos a sin 0 sin a 0 cos 0 cos a .4-10) that cos 8 sin 8 0 0 0 0 1 R = .4.. (7.r3 1 0 Thus.4-22). Z). 7. It is noted that these equations reduce to Eqs.cos 0 sin a 0 sin a cos a 0 (7. rI = r2 = r3 = 0. As indicated in Sec.4. In other words. suppose that we wish to find the image coordinates of the corner of the block shown in Fig. = PCRGwh This equation represents a perspective transformation involving two coordinate systems. Example: As an illustration of the concepts just discussed. We will follow the convention established above that transformation angles are positive when the camera rotates in a counterclockwise manner when viewing the origin along the axis of rotation.16d shows a view after pan.4-43).035 m The corner in question is at coordinates (X. Let us examine in detail the steps required to move the camera from nor- mal position to the geometry shown in Fig. along the x axis of the camera to establish tilt.03 -1.02 m A = 35 mm = 0.322 ROBOTICS: CONTROL. Figure 7.53 + A and y-X .53 + x CDO s. after this step. The world coordinate axes are shown dashed in the latter two figures to emphasize the fact that their only use is to establish the zero reference for the pan and tilt angles. Z) = (1.4-42) and (7. The camera is offset from the origin and is viewing the scene with a pan of 135 ° and a tilt of 135 °. all rotations take place about the new (camera) axes. which makes a a positive angle. .42 -1. Figure 7.15.. VISION. 7. 0. after displacement of the worldcoordinate origin. 7. It is important to note that. (7. The rotation about this axis is counterclockwise. and displaced from the origin in Fig. which makes 0 a positive angle. 1. That is.2). that is.16c shows a view along the z axis of the camera to establish pan. Y.fl Xa = O m own Yo = O m Z o = 1 m a = 135° 0 = 135 ° ri = 0.0. we simply substitute the above parameter values into Eqs. AND INTELLIGENCE 7. To compute the image coordinates of the block corner. the world coordinate axes are used only to establish angle references. The camera is shown in normal position in Fig. We do not show in this figure the final step of displacing the image plane from the center of the gimbal.03 m r2 = r3 = 0. SENSING.16a. -0.16b.15. L]. The following parameter values apply to the problem: . In this case the rotation of the camera about the z axis is counterclockwise so world points are rotated about this axis in the opposite direction. 7. .3 we obtained explicit equations for the image coordinates (x. for example. (7.4-42) and (7. A change of coordinates would be required to use the convention established earlier.g. it would have been outside the effective field of view of the camera). camera offsets. Substituting X = 0.LOW-LEVEL VISION 323 Z X Y V Figure 7. when the camera moves frequently) to determine one or more of the . 7. we had used a lens with a 200-mm focal length..025 x 0.025 m) imaging plane. y) of a world point w. and angles of pan and tilt. (7. 7.4. Finally. implementation of these equations requires knowledge of the focal length.035 yields the image coordinates x = 0.15 Camera viewing a 3D scene.009 m It is of interest to note that these coordinates are well within a 1 x 1 inch (0.0007 m and y = 0.e.4-43). it is often more convenient (e. it is easily verified from the above results that the corner of the block would have been imaged outside the boundary of a plane with these dimensions (i.4. As shown in Eqs. in which the origin of an image is at its top left corner. If.4-43) are with respect to the center of the image plane. 
While these parameters could be measured directly. we point out that all coordinates obtained via the use of Eqs.4 Camera Calibration In Sec.4-42) and (7. 4-41) that Ch = Awh.4-44) a33 a43 Z 1 CIA a41 a42 a44 . This requires a set of image points whose world coordinates are known. let A = PCRG.4-41). we may write ci:1 Ch2 Ch3 all a21 a31 a12 a22 a32 a13 a23 a14 a24 a34 X Y (7. (7.16 (a) Camera in normal position.XY plane (c) (d) Figure 7. VISION. and the computational procedure used to obtain the camera parameters using these known points is often referred to as camera calibration. AND INTELLIGENCE Z k l' (a) (b) . (c) Observer view of rotation about z axis to determine pan angle.324 ROBOTICS: CONTROL. SENSING. (7. Letting k = 1 in the homogeneous representation. The elements of A contain all the camera parameters. and we know from Eq. With reference to Eq. (b) Gimbal center displaced from origin. (d) Observer view of rotation about x axis for tilt. parameters by using the camera itself as a measuring device. l = xch4 and Ci12 = ych4 in Eq.a44 y + a24 = 0 (CD (7. (7.4-46) Substituting ci.m (there are two equations involving the coordinates of these points. for example.4-49) O\- The calibration procedure then consists of (1) obtaining m > 6 world points with known coordinates (Xi.4. Noble [1969]).. yi ). That is. 2. (7. Substitution of CIA in the first two equations of (7. m. Y2)It is assumed that the cameras are identical and that the coordinate systems of both CAD COD N . It was noted in Sec. an image point does not uniquely determine o1= -p- the location of a corresponding world point.4. Z.LOW-LEVEL VISION 325 From the discussion in the previous two sections we know that the camera coordinates in cartesian form are given by Ch1 X ch4 (7. 7..17.). There are many numerical techniques for finding an optimal solution to a linear system of equations such as (7. It is shown in this section that the missing depth information can be obtained by using stereoscopic (stereo for short) imaging techniques.4-47) yields two equations with twelve unknown coefficients: a11X+a12Y+a13Z-a41xX-a42xY-a43xZ-a44x+a14 =0 a21 X+ a22 Y+ a23 Z . As shown in Fig. Y.4-49) to solve for the unknown coefficients.4-47) where expansion of C1i3 has been ignored because it is related to z. yl) and (x2. so at least six points are needed). . .a41 yX . Yi.g.a42 y Y. i = 1.4-45) an d Ch2 y = Ch4 (7. a world point w).4-48) and (7.6-44) and expanding the matrix product yields xCh4 = a11X + a12Y + a13Z + a14 yCh4 = a21 X + a22 Y + a23 Z + a24 Ch4 = a41 X + a42 Y + a43 Z + a44 N (7. . and the objective is to find the coordinates (X..448) and (7. (2) imaging these points with the camera in a given position to obtain the corresponding image points (xi.5 Stereo Imaging C1.. stereo imaging involves obtaining two separate image views of an object of interest (e. . 7. and (3) using these results in Eqs.4-49) (see. Z) of a point w given its image points (x1.4-48) (7. 7..2 that mapping a 3D scene onto an image plane is a many-to-one transformation. i = 1.a43 yZ . The distance between the centers of the two lenses is called the baseline. 2. 18.4-51) However. the Z coordinate of w is exactly the same for both camera coordinate systems.-.4-31). AND INTELLIGENCE Figure 7.4-52) Z2 = Z1 = Z (7. under the above assumption. If. = XI . Recall our convention that. Then. 7. differing only in the location of their origins. SENSING.17. it follows that X2 = XI + B and (7. then we would have that w lies on the line with (partial) coordinates X2 = (X . 
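In practice it is usually easier to evaluate the camera model as the matrix product ch = PCRGwh of Eq. (7.4-41) than to code the expanded scalar Eqs. (7.4-42) and (7.4-43). The following sketch is one such implementation (illustrative only; the function and argument names are mine). Because the two forms are algebraically equivalent, substituting the parameter values of the example above should give the same image coordinates as the scalar equations.

```python
import numpy as np

def camera_image_coords(w, w0, pan, tilt, r, lam):
    """Image coordinates (x, y) of world point w for a camera whose gimbal
    center is at w0, panned by `pan` and tilted by `tilt` (radians), with
    image-plane offset r = (r1, r2, r3) and focal length lam."""
    wh = np.array([w[0], w[1], w[2], 1.0])

    G = np.eye(4); G[:3, 3] = [-w0[0], -w0[1], -w0[2]]   # gimbal center to origin
    ct, st = np.cos(pan), np.sin(pan)                    # pan about z, Eq. (7.4-9)
    Rz = np.array([[ct, st, 0, 0], [-st, ct, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    ca, sa = np.cos(tilt), np.sin(tilt)                  # tilt about x, Eq. (7.4-10)
    Rx = np.array([[1, 0, 0, 0], [0, ca, sa, 0], [0, -sa, ca, 0], [0, 0, 0, 1]])
    C = np.eye(4); C[:3, 3] = [-r[0], -r[1], -r[2]]      # image plane off gimbal center
    P = np.eye(4); P[3, 2] = -1.0 / lam                  # perspective, Eq. (7.4-22)

    ch = P @ C @ Rx @ Rz @ G @ wh                        # ch = PCRG wh, Eq. (7.4-41)
    return ch[0] / ch[3], ch[1] / ch[3]
```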
as shown in Fig. B is the baseline distance. 7. instead. cameras are perfectly aligned. due to the separation between cameras and the fact that the Z coordinate of w is the same for both camera coordinate systems. with the second camera and w following. . as indicated above. after the camera and world coordinate systems have been brought into coincidence. but keeping the relative arrangement shown in Fig.(X . from Eq. c~b . Suppose that we bring the first camera into coincidence with the world coordi- nate system. (7. VISION. a condition usually met in practice.4-53) where.4-50) where the subscripts on X and Z indicate that the first camera was moved to the origin of the world coordinate system. w lies on the line with (partial) coordinates 'LS X.Z2) (7.Z1) (7. the xy plane of the image is aligned with the XY plane of the world coordinate system.17 Model of the stereo imaging process. the second camera had been brought to the origin of the world coordinate system. Then.326 ROBOTICS: CONTROL. (7.4-51) results in the following equations: XI + B = and . as discussed in coo -c7 .4-54) (7. (7.4-56) to obtain Z is to actually find two corresponding points in different images of the same scene. (7.4-50) and (7. The X and Y world coordinates then follow directly from Eqs.4-55) from (7.4-52) and (7.4-56) which indicates that if the difference between the corresponding image coordinates x2 and xI can be determined. calcu- lating the Z coordinate of w is a simple matter. a frequently used approach is to select a point within a small region in one of the image views and then attempt to find the best matching region in the other view by using correlation techniques.4-31) using either (xI.17 with the first camera brought into coincidence with the world coordinate system.(X . yI ) or (X2. Substitution of Eqs.4-55) Subtracting Eq.18 Top view of Fig. (7. Y2 ) The most difficult task in using Eq.LOW-LEVEL VISION 327 Figure 7. Since these points are generally in the same vicinity. 7.4-53) into Eqs.4-54) and solving for Z yields the expression Z=X- XB (7.Z) (7. and the baseline and focal length are known. (7.4-30) and (7. y) has four horizontal and vertical neighbors whose coordinates are given by (x + 1. together with the 4-neighbors defined above.1) and will be denoted ND (p) . 2.5 SOME BASIC RELATIONSHIPS BETWEEN PIXELS In this section we consider several primitive. y + 1) (x. called the 4-neighbors of p. When the scene contains distinct features. then V = {59. y). y) is on the border of the image. Two pixels p and q with values from V are m-connected if . we point out that the calibration procedure developed in the previous section is directly applicable to stereo imaging by simply treating the cameras independently. "c3 7. m-connectivity (mixed connectivity). such as prominent corners. denoted N8(p).5. VISION. an image will be denoted by f(x. AND INTELLIGENCE Chap. if only connectivity of pixels with intensities of 59. We consider three types of connectivity: 1. but important relationships between pixels in a digital image. we will use lower-case letters. These points. 7. y) is on the border to the image. y) will be denoted by S. such as p and q. y) (x.y+ 1) (x+ 1. As before.5. a feature-matching approach will generally yield a faster solution for establishing correspondence. for example. y) -CD (x . are called the 8-neighbors of p.1) (x.1. some of the points in ND(p) and N8 (p) will be outside the image if (x. As in the previous sections.y. 
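Equations (7.4-48) and (7.4-49) form a homogeneous linear system in the twelve unknown coefficients, and one standard way to obtain an optimal solution — not necessarily the procedure intended by the text — is to take the singular vector of the coefficient matrix associated with its smallest singular value. The sketch below follows that approach (the function name and the use of NumPy's SVD are my choices; the solution is determined only up to an arbitrary scale factor).

```python
import numpy as np

def calibrate(world_pts, image_pts):
    """world_pts: list of (X, Y, Z); image_pts: list of (x, y); m >= 6 pairs.
    Returns a 4 x 4 camera matrix A (up to scale), following the structure of
    Eqs. (7.4-48) and (7.4-49); the z-related third row is left at zero."""
    rows = []
    for (X, Y, Z), (x, y) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z, -x])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z, -y])
    M = np.array(rows, dtype=float)
    # unknowns ordered a11..a14, a21..a24, a41..a44; the least-squares solution
    # is the right singular vector belonging to the smallest singular value
    _, _, Vt = np.linalg.svd(M)
    a = Vt[-1]
    A = np.zeros((4, 4))
    A[0], A[1], A[3] = a[0:4], a[4:8], a[8:12]
    return A
```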
Two pixels p and q with values from V are 8-connected if q is in the set N8 (p). y+ 1) (x. SENSING. Before leaving this discussion.328 ROBOTICS: CONTROL. The four diagonal neighbors of p have coordinates (x+ 1. 61).1.1) This set of pixels. y) and also that some of the neighbors of p will be outside the digital image if (x. will be denoted by N4 (p).y. 8. 7. and 61 is desired. It is noted that each of these pixels is a unit distance from (x. When referring to a particular pixel.1. 60. 4-connectivity. 8-connectivity.2 Connectivity Let V be the set of intensity values of pixels which are allowed to be connected. 3. Two pixels p and q with values from V are 4-connected if q is in the set N4(p). A subset of pixels of f(x. y . 60.1 Neighbors of a Pixel A pixel p at coordinates (x. . Assuming V = {1.3 Distance Measures Given pixels p. 8-. (b) 8-neighbors of the pixel labeled "2. D(p. consider the pixel arrangement shown in Fig. . 8-. (This is the set of pixels that are 4-neighbors of both p and q and whose values are from V.19c. D(p. Y") where (x0. yo) = (x. If p and q are pixels of an image subset S. depending on the type of adjacency used. or m-adjacency. A pixel p is adjacent to a pixel q if they are connected. 1 °O0 or m-paths.." (c) mneighbors of the same pixel. For example. 7. and (u. y) to pixel q with coordinates (s. (x1 . q) >. YO). yi_ 1). (s. This ambiguity is removed by using mconnectivity. . We may define 4-. depending on the type of connectivity specified.19a. q) = D(q. (xi. p) 1. yi) is adjacent to i 5 n. or (b) q is in ND(p) and the set N4 (p) fl N4 (q) is empty. (s. and z.19 (a) Arrangement of pixels. respectively. 0 3. (xn. and n is the length of the path. the set of pixels in S that are connected to p is called a connected component of S. as shown in Fig. (a) q is in N4(p). q. Two image subsets S1 and S2 are adjacent if some pixel in St is adjacent to some pixel in S2. q) = 0 if p = q] 2. y) and (x.19b. t). then p is connected to q in S if there is a path from p to q consisting entirely of pixels in S. We may define 4-. 7. A path from pixel p with coordinates (x.. 7 7. It is important to note the ambiguity that results from multiple connections to this pixel. z) .LOW-LEVEL VISION 329 0 1 I 0 1----1 0 0 2 0 0 20 0 (b) 1 0 2 0 0 0 (a) 1 0 0 0 (c) I Figure 7. For any pixel p in S. the 8-neighbors of the pixel with value 2 are shown by dashed lines in Fig. with coordinates (x.Y1 ). z) s D(p. (xi_ 1. q) + D(q. y). 2}. It then follows that any two pixels of a connected component are connected to each other. v). 7. D(p.) Mixed connectivity is a modification of 8-connectivity and is introduced to eliminate the multiple connections which often cause difficulty when 8-connectivity is used. and that distinct connected components are disjoint. t). t) is a sequence of distinct pixels with coordinates (X0. we call D a distance function or metric if [D(p.5.. q) = Ix .sj. since the definition of these distances involve only the coordinates of these points.tj) (7. y). VISION.330 ROBOTICS: CONTROL. however. For example. (x. (7. y) (the center point) form the following contours of constant distance: 2 2 2 1 1 acs It is noted that the pixels with D4 = 1 are the 4-neighbors of (x. I Y .'O "'0 o'0 . When dealing with m-connectivity. 'p. y) are the points contained in a disk of radius r centered at . SENSING. Similar comments apply to the D8 distance. In fact. the value of the distance (length of the path) between two pixels depends on the values of the pixels 'A. 
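Equation (7.4-56) converts a measured disparity directly into depth. A small numerical sketch follows (the baseline, focal length, and disparity values are invented purely for illustration):

```python
def stereo_depth(x1, x2, B, lam):
    """Z coordinate of a world point from its image coordinates x1 and x2 in
    the first and second cameras, baseline B, and focal length lam
    -- Eq. (7.4-56)."""
    return lam - lam * B / (x2 - x1)

# e.g., a 6 cm baseline, 35 mm lens, and a disparity x2 - x1 of -2.1 mm:
Z = stereo_depth(x1=0.0031, x2=0.0010, B=0.06, lam=0.035)
# 0.035 - (0.035 * 0.06)/(-0.0021) = 0.035 + 1.0 = 1.035 m
```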
either (a) q is in N4(p), or (b) q is in ND(p) and the set N4(p) ∩ N4(q) is empty. (This is the set of pixels that are 4-neighbors of both p and q and whose values are from V.) Mixed connectivity is a modification of 8-connectivity and is introduced to eliminate the multiple connections which often cause difficulty when 8-connectivity is used. For example, consider the pixel arrangement shown in Fig. 7.19a. Assuming V = {1, 2}, the 8-neighbors of the pixel with value 2 are shown by dashed lines in Fig. 7.19b. It is important to note the ambiguity that results from multiple connections to this pixel. This ambiguity is removed by using m-connectivity, as shown in Fig. 7.19c.

Figure 7.19 (a) Arrangement of pixels. (b) 8-neighbors of the pixel labeled "2." (c) m-neighbors of the same pixel.

A pixel p is adjacent to a pixel q if they are connected. We may define 4-, 8-, or m-adjacency, depending on the type of connectivity specified. Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2.

A path from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), . . . , (xn, yn)

where (x0, y0) = (x, y) and (xn, yn) = (s, t), (xi, yi) is adjacent to (xi-1, yi-1) for 1 ≤ i ≤ n, and n is the length of the path. We may define 4-, 8-, or m-paths, depending on the type of adjacency used.

If p and q are pixels of an image subset S, then p is connected to q in S if there is a path from p to q consisting entirely of pixels in S. For any pixel p in S, the set of pixels in S that are connected to p is called a connected component of S. It then follows that any two pixels of a connected component are connected to each other, and that distinct connected components are disjoint.

7.5.3 Distance Measures

Given pixels p, q, and z, with coordinates (x, y), (s, t), and (u, v), respectively, we call D a distance function or metric if

1. D(p, q) ≥ 0 [D(p, q) = 0 if p = q]
2. D(p, q) = D(q, p)
3. D(p, z) ≤ D(p, q) + D(q, z)

The euclidean distance between p and q is defined as

De(p, q) = [(x - s)² + (y - t)²]^(1/2)     (7.5-1)

For this distance measure, the pixels having a distance less than or equal to some value r from (x, y) are the points contained in a disk of radius r centered at (x, y).

The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x - s| + |y - t|     (7.5-2)

In this case the pixels having a D4 distance less than or equal to some value r from (x, y) form a diamond centered at (x, y). For example, the pixels with D4 distance ≤ 2 from (x, y) (the center point) form the following contours of constant distance:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

It is noted that the pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (also called chessboard distance) between p and q is defined as

D8(p, q) = max(|x - s|, |y - t|)     (7.5-3)

In this case the pixels with D8 distance less than or equal to some value r form a square centered at (x, y). For example, the pixels with D8 distance ≤ 2 from (x, y) (the center point) form the following contours of constant distance:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

It is of interest to note that the D4 distance between two points p and q is equal to the length of the shortest 4-path between these two points, since the definition of these distances involves only the coordinates of these points. Similar comments apply to the D8 distance. In fact, we can consider both the D4 and D8 distances between p and q regardless of whether or not a connected path exists between them. When dealing with m-connectivity, however, the value of the distance (length of the path) between two pixels depends on the values of the pixels along the path as well as their neighbors.
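The neighborhood and distance definitions above reduce to one-line functions. The following sketch (the helper names are mine) may help fix the ideas:

```python
def n4(p):
    """4-neighbors of pixel p = (x, y)."""
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(p):
    """Diagonal neighbors of p."""
    x, y = p
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(p):
    """8-neighbors of p: the 4-neighbors plus the diagonal neighbors."""
    return n4(p) + nd(p)

def d4(p, q):
    """City-block distance, Eq. (7.5-2)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard distance, Eq. (7.5-3)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

# points with d4 <= 2 from the origin form the diamond shown above,
# while points with d8 <= 2 form the square
```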
such as performing the pixelby-pixel sum of K images for noise reduction. At each pixel position in the image. VISION. the center of the mask is located at one of the isolated points. If. In this case h becomes an intensity mapping or transformation T of the form s = T(r) (7. 7. y) in an image.20 A 3 x 3 neighborhood about a point (x. that is. the sum will be different 7r' C)° . we multiply every pixel that is contained within the mask area by the in' . y) and g(x.6. The simplest form of h is when the neighborhood is 1 x 1 and. 3 x 3) two-dimensional array. while its 8-neighbors are multiplied by . The results of these nine multiplications are then summed.(DD corresponding mask coefficient.g. such as the one shown in Fig. As an introduction to this concept. we have used s and r as variables denoting. the pixel in the center of the mask is multiplied by 8. SENSING.CONTROL. respectively. on the other hand. the sum will be zero. as indicated above. If all the pixels within the mask area have the same value (constant background). suppose that we have an image of constant intensity which contains widely isolated pixels whose intensities are different from the background.1. 7. a mask is a small (e. Basically.. therefore. for simplicity. This type of transformation is discussed in more detail in Sec. AND INTELLIGENCE Figure 7.6-2) where. the intensity of f(x. The procedure is as follows: The center of the mask (labeled 8) is moved around the image. y) at any point (x. y). g depends only on the value of f at (x. or filters). whose coefficients are chosen to detect a given property in an image. These points can be detected by using the mask shown in Fig. 7.21.3. y). windows. One of the spatial-domain techniques used most frequently is based on the use of so-called convolution masks (also referred to as templates.332 ROBOTICS.20. 22 A general 3 x 3 mask showing coefficients and corresponding image pixel locations..I) (C + I.1. w2w 2 . y)] wlf(x. y). and consider the 8-neighbors of (x. y) + w6f(x. .1)+w2f(x. 1 . y). w9 represent mask coefficients As shown in Fig.22. from zero. we may generalize the preceding discussion as that of performing the following operation: '"' h[f(x.LOW-LEVEL VISION 333 8 Figure 7.1.1) + w5f(x. y + 1) + w7f(x + 1.1. y . .1) + w8 f(x + 1. GL. 7.21 A mask for detecting isolated points different from a constant background.y+ 1) + waf(x. \ + I) Figure 7. These weaker responses can be eliminated by comparing the sum against a threshold. (7.. y . the sum will also be different from zero. but the magnitude of the response will be weaker. if we let wI. y. y) + + w9f(x + 1.6-3) It 1 (- I. +I) 11'4 Ith + I) 11'7 It's 111 (x + I. y + 1) on a 3 x 3 neighborhood of (x. (1 + I. If the isolated point is in an off-center position. y)+w3f(x. N . N x=0 y=0 1 N-I N-I vy)IN (7. 1. . . many spatial techniques for COD A. f(x). AND INTELLIGENCE Before leaving this section. is easily verified by substituting Eq. 1. The concept of "frequency" is often used in interpreting the Fourier transform and arises from the fact that this particular transform is composed of complex sinusoids. We begin the discussion by considering discrete functions of one variable. The forward Fourier transform of f(x) [l.1. N .1. (7. or vice versa. Use of a fast Fourier transform (FFT) algorithm significantly reduces this number to N loge N.-. . where N is assumed to be an integer power of 2. . 1. . and to obtain the skeleton of an object. to compute measures of texture. (7. . 
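For the 1 x 1 neighborhood case, h reduces to an intensity mapping s = T(r) of Eq. (7.6-2) applied independently at each pixel. A minimal sketch follows (the particular transformation, a simple clip-and-stretch, is my choice for illustration and is not one prescribed by the text):

```python
import numpy as np

def apply_point_transform(f, T):
    """Apply an intensity mapping s = T(r), Eq. (7.6-2), to every pixel of f
    (f is assumed to be an 8-bit integer image array)."""
    lut = np.array([T(r) for r in range(256)], dtype=np.uint8)  # lookup table
    return lut[f]

# example T: stretch the range [50, 200] to the full [0, 255] scale
stretch = lambda r: int(np.clip((r - 50) * 255 / 150, 0, 255))

f = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
g = apply_point_transform(f, stretch)
```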
A more extensive treatment of the Fourier transform and its properties may be found in Gonzalez and Wintz [1977]. The validity of these expressions.I E f(x)e- (7. we will use neighborhood operations in subsequent discussions for noise reduction.". . 2. The inverse Fourier transform of F(u) yields f(x) back.6-6) --l 71~ .1. . 2. 1. A number of FFT algorithms are readily available in a variety of computer languages. 2. to obtain variable image thresholds.E E f(x. Frequency-Domain Methods. The frequency domain refers to an aggregate. However. . N . . VISION. . In this equation j = and u is the so-called frequency variable.6-4) for u = 0. The material in this section will serve as an introduction to these concepts. v) = . In addition. . is defined as F(u) - 1 N. A direct implementation of Eq..N .6-4) =0 for u = 0. In either case we would get an identity.6-5) for x = 0. . we point out that the concept of neighborhood pro- cessing is not limited to 3 x 3 areas nor to the cases treated thus far. N-I (7. of complex pixels resulting from taking the Fourier transform of an image. . The two-dimensional Fourier transform pair of an N X N image is defined as F(u. 2. 1. N . `i7 CD. .6-5) U = 0 for x = 0. .-ti CAD . SENSING.' Fourier transform does play an important role in areas such as the analysis of object motion and object description. . the A. Due to extensive processing requirements.. (7. enhancement and restoration are founded on concepts whose origins can be traced to a Fourier transform formulation.334 ROBOTICS: CONTROL. Similar comments apply to Eq. For `CAD instance. . called the Fourier transform pair. 2. . x = 0..1. and is defined as f(x) =F. frequency-domain methods are not nearly as widely used in robotic vision as are spatial-domain techniques. (7.6-5). .1 would require on the order of N2 additions and multiplications.6-4) for F(u) in Eq. but the interested reader is referred to the book by Goodman [1968] for an excellent introduction to Fourier optics. Applications of the discrete two-dimensional Fourier transform in image reconstruction.. the usefulness of this approach in industrial machine vision is still quite restricted due to the extensive computational requirements needed to implement this transform.. We point out before leaving this section. thus producing a two-dimensional array of intermediate results. y) whose intensity at every point (x. . N . This leads to a straightforward procedure for computing the two-dimensional Fourier transform using only a onedimensional FFT algorithm: We first compute and save the transform of each row of f(x.1.`y CAD CAD o.6. is used in industrial environments for tasks such as the inspection of finished metal surfaces.. by treating the boundary of an object as a one-dimensional array of points and computing their Fourier transform.. These results are multiplied by N and the one-dimensional transform of each column is computed.1.6-4).. as mentioned earlier. transmission. as will be shown in Chap. CAD dimensional Fourier transform has also been used as a powerful tool for detecting object motion. selected values of F(u) can be used as descriptors of boundary shape. enhancement. The onel]. vy)/N (7. F(u. Further treatment of this topic is outside the scope of our present discussion. and f(x. The final result is F(u. This approach. . which requires the use of precisely °J" aligned optical equipment. 1 . light) by optical means. y) is obtained by averaging the intensity values of the pixels of f contained in a BCD '. that the two-dimensional.. 
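The mask operation of Eq. (7.6-3) is simply a sum of products evaluated at every pixel position. The sketch below (illustrative only; the test image and the threshold value are arbitrary) applies the isolated-point mask of Fig. 7.21 and flags locations whose response magnitude exceeds a threshold:

```python
import numpy as np

def apply_mask(f, w):
    """Response of Eq. (7.6-3) for a 3 x 3 mask w at every interior pixel of f."""
    rows, cols = f.shape
    out = np.zeros((rows, cols))
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            out[x, y] = np.sum(w * f[x - 1:x + 2, y - 1:y + 2])  # sum of products
    return out

# isolated-point mask of Fig. 7.21: 8 at the center, -1 at the 8-neighbors
w_point = np.array([[-1., -1., -1.],
                    [-1.,  8., -1.],
                    [-1., -1., -1.]])

f = np.zeros((64, 64))            # constant background ...
f[32, 32] = 255.0                 # ... with one isolated bright point
response = apply_mask(f, w_point)
isolated = np.abs(response) > 300.0   # threshold chosen for illustration
print(np.argwhere(isolated))          # -> [[32 32]]
```

On the constant background the response is zero, and off-center positions of the mask produce the weaker responses mentioned above, which the threshold eliminates.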
y) = N - 1 N-I N-I r.S . 1 . A. 2. . Given an image f(x. The order of computation from a row-column approach "i7 off "Q' . N . Similar comments apply for computing f(x. v). the procedure is to generate a smoothed image g(x. y)..2 Smoothing Smoothing operations are used for reducing noise and other spurious effects that may be present in an image as a result of sampling. 8. It is possible to show through some manipulation that each of these equations can be expressed as separate one-dimensional summations of the form shown in Eq. y).'y 2U2 can be reversed to a column-row format without affecting the final result. For example. .. 7.e . C/] Neighborhood Averaging. . v = 0. The Fourier transform can be used in a number of ways by a vision system.LOW-LEVEL VISION 335 for u.-y can cad . quantization.6-7) U = 0 v = 0 'C7 for x. y) given F(u. v). 2.. Neighborhood averaging is a straightforward spatialdomain technique for image smoothing. or disturbances in the environment during image acquisition. . continuous Fourier transform can be computed (at the speed of 'T1 CD. y = 0. In this section we consider several fast smoothing methods that are suitable for implementation in the vision system of a robot. however. and restoration are abundant although. (7. 68) and (7. the filter mask._.24b shows the same image but with approximately 20 percent of the pixels corrupted by "impulse noise. called median filters. and Fig. '. in a 5 x 5 neighborhood the thirteenth largest value. and P is the total number of points in the neighborhood. Figure 7. in which we replace the intensity of each pixel by the median of the intensities in a predefined neighborhood of that pixel. and Fig. P (n. in a 3 x 3 neighborhood the median is the fifth largest value. . 25.o tems. 20.y CD. (7. Median Filtering. 20.. the smoothed value of each pixel is determined before any of the other pixels have been changed. we are not limited to square neighborhoods in Eq. 20. SENSING. As is true with most mask processors. C3. This blurring can often be reduced significantly by the use of so. Of course. 7. VISION. respectively. (7. 15.6-8) for all x and y in f(x. including (x..6-3) that the former equation is a special case of the latter with w. . instead of by the average. = 1/9. Example: Figure 7. Recall that the median M of a set of values is such that half the values in the set are less than M and half the values are greater than M. A little thought will reveal that the principal function of median filtering is to force points with very distinct intensities to be more like their neighbors. 7. 20. 20. ''' C1. m) (7. If a 3 x 3 neighborhood is used.23c through f are the results of using neighborhoods of sizes 3 x 3.C C:' 7.1. thus actually eliminating intensity spikes that appear isolated in the area of `w' C3.23a shows an image corrupted by noise.24a shows an original image.. 20. determine the median. y). 20. 3 x 3 neighborhood has values (10.6-8) but. . 20. and assign this value to the pixel. 100). as mentioned in Sec. 20. 5 x 5.23 illustrates the smoothing effect produced by neighborhood averaging..23b is the result of averaging every pixel with its 4-neighbors. 20. 100). in)eS . One of the principal difficulties of neighborhood averaging is that it blurs edges and other sharp details. we note by comparing Eqs. In order to perform median filtering in a neighborhood of a pixel. Similarly." The result of neighborhood averaging over a 5 x 5 area is BCD gyp" o-°. For example. . 
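A direct sketch of neighborhood averaging, Eq. (7.6-8), is given below (my own code, NumPy assumed; the border is handled here by edge replication, which the text does not specify). Increasing the window size reproduces the progressively stronger smoothing, and blurring, seen in Fig. 7.23.

```python
import numpy as np

def neighborhood_average(f, k=1):
    """Replace each pixel by the mean of the (2k+1) x (2k+1) window around it,
    i.e., (1/P) times the sum of f over the neighborhood S of Eq. (7.6-8)."""
    padded = np.pad(f.astype(float), k, mode='edge')
    g = np.zeros_like(f, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            window = padded[x:x + 2 * k + 1, y:y + 2 * k + 1]
            g[x, y] = window.mean()
    return g

rng = np.random.default_rng(0)
noisy = np.full((16, 16), 100.0) + rng.normal(0.0, 10.0, (16, 16))
smoothed = neighborhood_average(noisy, k=2)        # 5 x 5 neighborhood
print(noisy.std(), smoothed.std())                 # the average has much lower variation
```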
S is the set of coordinates of points in the neighborhood of (x.d CIO b-0 C1. the smoothed image is g(x. t4- . 7. These values are sorted as (10. F. we group all equal values as follows: Suppose that a -(D< ((DD CD. When several values in a neighborhood are the same. -. which results in a median of 20. It is noted that the degree of Figs. smoothing is strongly proportional to the size of the neighborhood used.. y).336 ROBOTICS: CONTROL.a' .. Example: Figure 7. y) itself. 25.. AND INTELLIGENCE predefined neighborhood of (x. we first sort the values of the pixel and its neighbors. y).f(n. obtained by using the relation In other words.15. and so on. Y) = 1 4y.6. these are by far the most predominant in robot vision sys.r'. and 11 x 11. 7 x 7. 5 x 5. and 11 x 11.LOW-LEVEL VISION 337 Figure 7. (b) Result of averaging each pixel along with its 4-neighbors.23 (a) Noisy image. respectively. 7 x 7. . (c) through (f) are the results of using neighborhood sizes of 3 x 3. 24d resulted from a large concentration of noise at those points.338 ROBOTICS: CONTROL. CA. thus biasing the median calculation. Inc. y) to an uncorrupted image f(x.24 (a) Original image.24d. y). (Courtesy of Martin Connor. y) '-r .. C]. y) which if formed by the addition of noise n(x.) CD' . that is. ". Consider a noisy image g(x.24c and the result of a 5 x 5 median filter is shown in Fig. 7.s g(x. Texas Instruments. [. VISION.) 9'0 ''C3 points.. (c) Result of 5 x 5 neighborhood averaging.r' shown in Fig. (b) Image corrupted by impulse noise. 7. AND INTELLIGENCE Figure 7. Texas. (d) Result of 5 x 5 median filtering. SENSING. The three bright dots remaining in Fig. y) = f(x. y) + n(x.. Image Averaging.6-9) . The superiority of the median filter over neighborhood averaging needs no explanation. (7. Two or more passes with a median filter would eliminate those (Z. 7. Lewisville. . it is a simple problem to show (Papouli''s [1965]) that if an image g(x. during which no motion can take place.. . 16. y). this means that all object in the work space must be at rest with respect to the camera during the averaging process. as discussed in Sec. Y) i=I (7. say. . The objective of the following procedure is to obtain a smoothed result by adding a given set of noisy images. and ag(x. images. . It is important to note that the technique just discussed implicitly assumes that all noisy images are registered spatially. Y) an(x.6-12) and (7.6-10) then it follows that E{g(x. all at coordinates (x.6-13) indicate that. y)} = f(x. We will use the convention of labeling dark points with a 1 and light points with a 0.25b to f show the results of averaging 4. K = 32.6-11) (7. y) is formed by averaging K different noisy 8(x. y). Smoothing Binary Images.LOW-LEVEL VISION 339 where it is assumed that the noise is uncorrelated and has zero average value. 8.5. gi(x. or from processes such as edge detection or thresholding. t]. with only the pixel intensities varying. In terms of robotic vision. . i = 1.fl Example: An an illustration of the averaging method.6-13) Equations (7. as K increases. one-thirtieth of a second). 7. Y) (7. consider the images shown in Fig.25. K. y). Binary images result from using backlighting or structured lighting. y) = K o (x. 7. y) will approach the uncorrupted image f(x. Part (a) of this figure shows a sample noisy image and Fig. the variability of the pixel values decreases. . y) = K E gi(x. y) app where E{-(x.. 7.4 and 7. 16 images will take on the order of 'h s. Since E{g(x. y)} is the expected value of g. as discussed in Secs.. 
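The median filtering procedure described above can be sketched the same way (my own code, NumPy assumed). The small example repeats the neighborhood worked in the text, where the sorted values (10, 15, 20, ..., 100) give a median of 20, so the impulse value 100 is suppressed rather than smeared into its neighbors.

```python
import numpy as np

def median_filter(f, k=1):
    """Replace each pixel by the median of its (2k+1) x (2k+1) neighborhood."""
    padded = np.pad(f, k, mode='edge')
    g = np.empty_like(f, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            window = padded[x:x + 2 * k + 1, y:y + 2 * k + 1]
            g[x, y] = np.median(window)    # fifth largest value in a 3 x 3 window
    return g

# The 3 x 3 neighborhood used as an example in the text:
patch = np.array([[10, 20, 20],
                  [20, 15, 20],
                  [20, 25, 100]])
print(np.median(patch))                    # 20.0, the intensity spike is rejected
```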
y) are the variances of g and n.e.6. Thus. y)} = f(x. 7. If the noise satisfies the constraints just stated. this means that g(x. 32. 2. Many vision systems have the capability of performing an entire image addition in one frame time interval (i. y) as the number of noisy images used in the averaging process increases. and 64 such images.6. y) and (7.3. Thus. since binary . the addition of. respectively.-.6-12) a8 (x. The standard deviation at any point in the average image is given by ag(x. It is of interest to note that the results are quite acceptable for . y) and an(x. (b) through (f) are the results of averaging 4.25 (a) Sample noisy image.CONTROL. SENSING. . 16 32. and 64 such images. VISION. 8. AND INTELLIGENCE Figure 7.340 ROBOTICS. and to assign to p a 1 or 0. . in the sense that the next value of each pixel location is determined before any of the other pixels have been changed. otherwise this pixel is assigned a 0. Following the convention established above. (4) eliminates small bumps along straightedge segments. we assign a 1 to p. h 1 t h Figure 7. a dark pixel contained in the mask area is assigned a logical 1 and a light pixel a logical 0. depending on the spatial arrangement and binary values of its neigh- bors. (2) fills in small notches in straightedge segments. missing corners. small holes. Then. With reference to Fig. Dark pixels are denoted by 1 and light pixels by 0. noise in this case produces effects such as irregular boun- daries. 7.6-15) simultaneously for all pixels. we let p = 1 if B2 = 1 and zero otherwise. As above.26. if BI = 1. The smoothing approach (1) fills in small (one pixel) holes in otherwise dark areas.14) is applied to all pixels simultaneously. Due to limitations in available processing time for industrial vision tasks. and isolated points. The basic idea underlying the methods discussed in this section is to specify a boolean function evaluated on a neighborhood centered at a pixel p. 7.LOW-LEVEL VISION 341 images are two-valued. and (5) replaces missing corner points. (3) eliminates isolated l's. which leads us to the 3 x 3 mask shown in Fig.6. the first two smoothing processes just mentioned are accomplished by using the Boolean expression act B1 = (7. Steps 3 and 4 in the smoothing process are similarly accomplished by evaluating the boolean expression B2 = (b+c+e) (d+f+g)] (7.26 Neighbors of p used for smoothing binary images. Equation (7. the analysis is typically limited to the 8-neighbors of p.26.6-14) where " " and "+" denote the logical AND and OR. respectively. Figure 7. right corner points are filled in by means of the expression B3 = p (a+b+c+e+h) +p b (7. SENSING.27c.27d shows the result of applying B3 through B6 to the image in Fig.27b. 7. The reader is reminded that enhancement is a major area in digital image processing and scene analysis. 7.27b shows the result of applying BI. Similarly. Finally.6.fl En" may" ..27. Let the variable r represent the intensity of pixels in an image to be enhanced. top left. It will be assumed initially that r is a normalized. the bumps along the boundary of the dark area and all isolated points were removed (the image was implicitly extended with 0's for points on the image border).6-20) . In this subsection we consider several enhancement techniques which address these and similar problems. The discrete case is considered later in this section. .6-17) (7. 1]. "suit`J' able" implies having fast computational characteristics and modest hardware requirements. 
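The effect predicted by Eqs. (7.6-12) and (7.6-13) is easy to verify numerically. In the sketch below (my own, NumPy assumed), K registered copies of a noise-free scene are corrupted with zero-mean noise and averaged; the standard deviation of the residual falls off approximately as 1/sqrt(K), in line with the example of Fig. 7.25.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.zeros((64, 64))
f[16:48, 16:48] = 100.0                          # uncorrupted "scene"

for K in (1, 4, 16, 64):
    noisy = [f + rng.normal(0.0, 20.0, f.shape) for _ in range(K)]   # zero-mean, uncorrelated noise
    g_bar = np.mean(noisy, axis=0)                                   # Eq. (7.6-10)
    print(K, np.std(g_bar - f))                  # approximately 20 / sqrt(K)
```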
and that our discussion of this topic is limited to sample techniques that are suitable for robot vision systems.6-18) (7. The capability to compensate for effects such as shadows and "hot-spot" reflectances quite often plays a central role in determining the success of subsequent processing algorithms. Histogram Equalization.27c shows the result of applying B2 to the image in Fig. Fig. coo this particular case.6-19) (a+b+c+d+ f) + p B6 =p(bce) (a+d+f+g+h)+p '-r dam" These last four expressions implement step 5 of the smoothing procedure. and Fig. lower right.0. `w't °x' (3. Note that the notches along the boundary and the hole in the dark 'r1 C14 area were filled in. attention will be focused on transformations of the form s = T(r) (7. 7. Figure 7.342 ROBOTICS: CONTROL.27a shows a noisy binary image. Only B4 had an effect in L2. 7. 7.6-16) where overbar denotes the logical complement. AND INTELLIGENCE Missing top. Example: The concepts just discussed are illustrated in Fig. continuous variable lying in the range 0 < r < 1. VISION.3 Enhancement One of the principal difficulties in many low-level vision tasks is to be able to automatically adapt to changes in illumination. 7. and lower left missing corner points are filled in by using the expressions B4 = p (a B5 = p and d) (c + e + f + g + h) + p (7. For any r in the interval [0. In this context. As expected. Condition 1 preserves the order from black to white in the intensity scale. 0 < T(r) < 1 for 0 < r < 1. A transformation function satisfying these conditions is illustrated in Fig. 7. 2. (d) Final result after application of B3 through B6.LOW-LEVEL VISION 343 Figure 7. (c) Result of applying B2. It is assumed that the transformation function T satisfies the conditions: 1.28. and condition 2 guarantees a mapping that is consistent with the allowed 0 to 1 range of pixel values.6-21) . (b) Result of applying BI. which produce an intensity value s for every pixel value r in the input image. (7. The inverse transformation function from s back to r is denoted by r = T-1(s) where it is assumed that T-1(s) satisfies the two conditions given above.27 (a) Original image. T(r) is single-valued and monotonically increasing in the interval 0 < T(r) 1. 5. SENSING. an image whose pixels have the PDF shown in Fig.28 An intensity transformation function. 7.29b would have predominant light tones. can be characterized by their probability density functions (PDFs) p. For example.. 1] and.(r) and pc(s). . VISION. On the other hand.344 ROBOTICS. AND INTELLIGENCE Figure 7. A great deal can be said about the general appearance of an image from its intensity PDF. an image whose pixels have an intensity distribution like the one shown in Fig. as such. The intensity variables r and s are random quantities in the interval [0. X27 0 0 cry . CONTROL.. 7.29 (a) Intensity PDF of a "dark" image and (b) a "light" image.29a would have fairly dark characteristics since the majority of pixel values would be concentrated on the dark end of the intensity scale.' i (a) (h) Figure 7. -.. ".. the concepts developed above must be formulated in discrete form.6-22) yields P.LOW-LEVEL VISION 345 It follows from elementary probability theory that if pr(r) and T(r) are known.. This is important because it is often quite difficult to find T-1(s) analytically.t 0<S<1 '-h (7. As will be seen below. The net effect `r.6-26) k=0.6-23) where w is a dummy variable of integration.L. pr (rk) is an estimate of the pro- bability of intensity rk. 
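Returning to the smoothing of binary images, the Boolean neighborhood operations of Eqs. (7.6-14) through (7.6-16) can be sketched as below. This is my own illustrative code (NumPy assumed): the neighbor labels a through h follow my reading of Fig. 7.26 (top row a, b, c; left d; right e; bottom row f, g, h), and the expression coded as B1 is my reading of Eq. (7.6-14), so the exact expressions should be taken from the text. All pixels are updated from the old values simultaneously, as the procedure requires.

```python
import numpy as np

def smooth_binary(img, rule):
    """Apply a Boolean rule to every interior pixel of a 0/1 image.
    rule(p, nbrs) receives the center value and a dict of the 8 neighbors;
    the next value of each pixel is computed from the old values only."""
    out = img.copy()
    for x in range(1, img.shape[0] - 1):
        for y in range(1, img.shape[1] - 1):
            n = {'a': img[x-1, y-1], 'b': img[x-1, y], 'c': img[x-1, y+1],
                 'd': img[x,   y-1],                    'e': img[x,   y+1],
                 'f': img[x+1, y-1], 'g': img[x+1, y], 'h': img[x+1, y+1]}
            out[x, y] = rule(img[x, y], n)
    return out

def B1(p, n):
    # My reading of Eq. (7.6-14), which fills one-pixel holes and small notches:
    # p OR (b AND g AND (d OR e)) OR (d AND e AND (b OR g)).
    return p | (n['b'] & n['g'] & (n['d'] | n['e'])) | (n['d'] & n['e'] & (n['b'] | n['g']))

img = np.ones((7, 7), dtype=int)
img[3, 3] = 0                            # a one-pixel hole in a dark (1) region
print(smooth_binary(img, B1)[3, 3])      # -> 1, the hole is filled
```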
independent of the shape of pr(r).6-23) yields transformed intensities that always have a flat PDF.o which is a uniform density in the interval of definition of the transformed variable s. In order to be useful for digital processing.. this process can have a rather dramatic effect on the appearance of an image. . (7. (S) = L Pr(r) 1 Pr(r) J r=T'(s) _ [ 1 Ir=T-'(s) =1 .3 >.2.1 where L is the number of discrete intensity levels.6-25) also noted that using the transformation function given in Eq. and T-1(s) satisfies condition 1.. a property that is ideally suited for automatic enhancement. then the PDF of the transformed intensities is given by Ps(s) = Pr(r) (7... It is ::r of this transformation is to balance the distribution of intensities. The derivative of s with respect to r for this particular transformation function is easily found to be ds = Pr(r) (7.6-22) ds r=T '(s) Suppose that we choose a specific transformation function given by s = T(r) = Jo pr(w) dw .6-24) dr Substitution of drids into Eq. (7. It is noted that this result is independent of the inverse transformation function. The rightmost side of this equation is recognized as the cumulative distribution function of pr(r).. which is known to satisfy the two conditions stated earlier. 0<r<1 (7.. 'C1 1 (7. For intensities that assume discrete values we deal with probabilities given by the relation Pr(rk) =n- nk O < rk .1. nk is the number of times this intensity appears in the . let pr(r) and pz(z) be the original and desired intensity PDFs..6-27) j=0 k j=0 E Pr(rj) 22 L . It is noted from this equation that in order to obtain the mapped value Sk corresponding to rk. The discrete form of Eq. Histogram Specification. consider the image shown in Fig. 7. A plot of pr(rk) versus rk is usually called a histogram.6-23) is given by k Sk = T(rk) = E nj n (7. AND INTELLIGENCE image.6-23).1.I (Sk) is not used in histogram equalization. a process that is not applicable when a priori information is available regarding a desired output histogram shape. and the technique used for obtaining a uniform histogram is known as histogram equalization or histogram linearization.6-29) .30c and the corresponding equalized histogram is shown in Fig. However. we simply sum the histogram components from 0 to rk. Histogram equalization is ideally suited for automatic enhancement since it is based on a transformation function that is uniquely determined by the histogram of the input image.." coo C^'" Pte) ova s = T(r) = S pr(W) dw O-. . as discussed below.30a and its histogram shown in Fig.7. (7. it plays a central role in histogram specification. Example: As an illustration of histogram equalization. ova N `. and n is the total number of pixels in the image. (7.6-27) to this image is shown in Fig. the method is limited in the sense that its only function is histogram linearization.'3 a)4- "s~ . 1 . It is noted that the histogram is not perfectly flat. The result of applying Eq. Suppose that a given image is first histogram equalized by using Eq.30b. rk = T- I (Sk) 0 5 Sk 5 1 'C7 (7. a condition generally encountered when applying to discrete values a method derived for continuous quantities. Although T. 7. The inverse discrete transformation is given by f o r 0 5 r k G 1 and k=0. 7. Here we generalize the concept of histogram processing by developing an approach capable of generating an image with a specified intensity histogram.. SENSING. Starting again with continuous quantities. histogram equalization is a special case of this technique. VISION.30d. 
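The discrete mapping of Eq. (7.6-27) amounts to building a lookup table from the cumulative histogram. The sketch below (my own, NumPy assumed) does this for an 8-bit image; as noted in the text, the resulting histogram is not perfectly flat, but the intensities are spread over the full available range.

```python
import numpy as np

def histogram_equalize(f, levels=256):
    """Histogram equalization of an integer-valued image via Eq. (7.6-27)."""
    hist = np.bincount(f.ravel(), minlength=levels)   # n_k for each level r_k
    p_r = hist / f.size                               # p_r(r_k) = n_k / n, Eq. (7.6-26)
    s = np.cumsum(p_r)                                # s_k = sum of p_r(r_j), j = 0..k
    lut = np.round(s * (levels - 1)).astype(np.uint8) # rescale s_k back to 0..255
    return lut[f]

rng = np.random.default_rng(1)
dark = rng.integers(0, 64, size=(128, 128)).astype(np.uint8)   # a "dark" image
eq = histogram_equalize(dark)
print(dark.max(), eq.max())    # the equalized intensities reach the full 0..255 range
```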
As will be seen below. 7.346 ROBOTICS: CONTROL. "'" CD. The improvement of details is evident. that is. (7. (7.6-28) where both T(rk) and T' (Sk) are assumed to satisfy conditions 1 and 2 stated above. pZ (z) .30 (a) Original image and (b) its histogram. Thus. the resulting levels z = G-1(s) would have the desired PDF.6-29) and (7.LOW-LEVEL VISION 347 Figure 7. we use the inverse levels s obtained from the original image.) If the desired image were available.6-30). © IEEE.6-30) The inverse process.6-30) guarantees a uniform density. (From Woods and Gonzalez [1981]. Equalize the levels of the original image using Eq. Assuming that G-1(s) is single-valued. Specify the desired intensity PDF and obtain the transformation function G(z) using Eq. z = G-1(v) would then yield the desired levels back. regardless of the shape of the PDF inside the integral. cat vii t-. however. if instead of using v in the inverse process. This.. that ps(s) and p. (7. the procedure can be summarized as folcan 'i+ . v = G(z) = Jo pZ(w) dw +-+ (7. (7. is a hypothetical formulation since the z levels are precisely what we are trying to obtain.(v) would be identical uniform densities since the use of Eqs. (7. 2. It is noted. of course.6-29). its levels could also be equalized by using the transformation function lows: 1. (c) Histogram-equalized image and (d) its histogram. Cc" . SENSING. is. the use of Q. AND INTELLIGENCE 3.. While this global approach is suitable for overall enhancement. . Since the number of pixels in these areas may have negligible influence on the computation of a global transformation.. and pz(zj) is specified.-. Figure 7. in the sense that pixels are modified by a transformation function which is based on the intensity distribution over an entire image. k Sk = T(rk) _ E p. Apply the inverse transformation z = G-1(s) to the intensity levels of the histogram-equalized image obtained in step 1. in this case.348 ROBOTICS: CONTROL. 7..6-32) G(zi) = Fr pz (zj) j=0 (7.. Part (a) of this figure shows the input image and Fig.31b is the result of histogram equalization. this method .31. The real problem in using the two transformations or their combined representation for continuous variables lies in obtaining the inverse function analytically. In the discrete case this problem is circumvented by the fact that the number of distinct intensity levels is usually relatively small (e. COED and where pr(rj) is computed from the input image.(rj) j=0 (7.6-31) shows that the input image need not be histogram-equalized explicitly in order to perform histogram specification.31c shows a specified histogram and Fig. The two transformations required for histogram specification.. All that is required is that T(r) be determined and combined with G-1(s) into a single transformation that is applied directly to the input image. This procedure yields an output image with the specified intensity PDF.6-34) zi = G-I (si) Example: An illustration of the histogram specification method is shown in Fig. 7. Equation (7.6-33) (7. The discrete formulation of the foregoing procedure parallels the development in the previous section: '^p t-. The histogram equalization and specification methods discussed above are global. can be combined into a single transformation: °-= z = G-1(s) = which relates r to z. .g. histogram equalization had little effect on the image. VISION.6-31) It is noted that.31d is the result of using this histogram in the procedure discussed above. T(r) and G(s). It is noted that. G-1 [T(r)] G-1 (7. 
Local Enhancement.fl Off' reduces to histogram equalization. when [T(r)] = T(r). it is often necessary to enhance details over small areas. CAD 256) and it becomes feasible to calculate and store a mapping for each possible integer pixel value. 7. (From Woods and Gonzalez [1981].. Since only one new row or column of the neighborhood changes during a pixel-to-pixel translation of the region.) b-0 +-' s. it is possible to update the histogram obtained in the previous location with the new data introduced at each motion step. Another approach often used to reduce computation is to employ nonoverlapping regions. © IEEE.. The solution is to devise transformation functions that are based on the intensity distribution. w-1 0:F- :7r Figure 7. mss. This approach has obvious advantages over repeatedly computing the histogram over all n x in pixels every time the region is moved one pixel location. but this often produces an undesirable checkerboard effect. At each location.LOW-LEVEL VISION 349 global techniques seldom yields acceptable local enhancement. (d) Result of enhancement by histogram specification. . The procedure is to define an n x in neighborhood and move the center of this area from pixel to pixel. BCD . The histogram-processing techniques developed above are easily adaptable to local enhancement. or other properties. This function is finally used to map the intensity of the pixel centered in the neighborhood..31 (a) Input image. in the neighborhood of every pixel in a given image. we compute the histogram of the n x in points in the neighborhood and obtain either a histogram equalization or histogram specification transformation function. The center of the n x in region is then moved to an adjacent pixel location and the procedure is repeated. (b) Result of histogram equalization. (c) A specified histogram. . 7.. .-r CA. .32b shows the result of histogram equalization. a problem that commonly occurs when using this technique on noisy images. r-.r CC! kit 1. AND INTELLIGENCE Example: An illustration of local histogram equalization where the neighbor- hood is moved from pixel to pixel is shown in Fig. The most striking feature in this image is the enhancement of noise. i. (b) Result of global histogram equalization. Figure 7. .y f.0". Note that the dark areas have been enhanced to reveal an inner structure that was not visible in either of the previous two images. Part (a) of this figure shows an image with constant background and five dark square areas.S! sew C'. SENSING. 4-.32 (a) Original image. tit 'o' . Figure 7.. Noise was also enhanced.. (c) Result of local histogram equalization using a 7 x 7 neighborhood about each pixel.32c shows the result of local histogram equalization using a neighborhood of size 7 x 7..2).32. VISION.350 ROBOTICS: CONTROL.. even if they have been smoothed prior to equalization..t Figure 7. 7. . The image is slightly blurred as a result of smoothing with a 7 x 7 mask to reduce noise (see Sec. but its texture is much finer due to the local nature of the enhancegyp''+' .6. y) to the difference between f(x. m(x. at 30 image frames per second). Note the enhancement of detail at the boundary between two regions of different overall intensities and the rendition of intensity details in each of the regions. and a are variable quantities which depend on a predefined neighborhood of (x. one could base local enhancement on other properties of the pixel intensities in a neighborhood. y) into a new image g(x. It is important to note that A.-p. 
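The specification procedure just summarized combines the two discrete transformations of Eqs. (7.6-32) through (7.6-34) into a single lookup table. The sketch below is my own (NumPy assumed); in particular, realizing the inverse G^-1 by a nearest-value search over the cumulative sums is one common way to handle the discrete inverse, not necessarily the text's.

```python
import numpy as np

def histogram_specify(f, desired_pdf, levels=256):
    """Map an integer image so that its histogram approximates desired_pdf."""
    p_r = np.bincount(f.ravel(), minlength=levels) / f.size
    T = np.cumsum(p_r)                     # s_k = T(r_k), Eq. (7.6-32)
    G = np.cumsum(desired_pdf)             # G(z_i),       Eq. (7.6-33)
    # For each s_k pick the level z_i whose G(z_i) is closest: z_i = G^-1(s_k)
    lut = np.array([np.argmin(np.abs(G - s)) for s in T], dtype=np.uint8)
    return lut[f]

rng = np.random.default_rng(2)
f = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)

# Desired histogram: probability increasing linearly toward the bright end.
z = np.arange(256, dtype=float)
p_z = z / z.sum()
g = histogram_specify(f, p_z)
print(f.mean(), g.mean())      # the output mean shifts toward the bright-weighted specification
```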
In this chapter we are 'C.. Application of the local gain factor A(x.. serving as the initial preprocessing step for numerous object detection algorithms. too each pixel location (x. y): V'1 g(x. y) are the intensity mean and standard deviation computed in a neighborhood centered at (x.m(x. the mean is a measure of average brightness and the variance is a measure of contrast. y) between two limits [Arvin .. . it is often desirable to add back a fraction of the local mean and to restrict the variations of A(x. s. y)[f(x. y) = k a (M y) 0<k<1 A a) (7. y) = A(x.. The intensity mean and variance (or standard deviation) are two such properties which are frequently used because of their relevance to the appearance of an image. . m. An example of the capabilities of the technique using a local region of size 15 x 15 pixels is shown in Fig. y) and the local mean amplifies local variations. significantly the overall characteristics of a global technique. y) CAD (7.6-35) where A(x. y) by performing the following transformation at C/' .33.4 Edge Detection Edge detection plays a central role in machine vision. Since A(x. y). y). and k is a constant in the range indicated above. 7.. The mean is added back in Eq. y)] + m(x. y) and a(x.U+ transformation based on these concepts maps the intensity of an input image f(x. (7. y) .6-36) In this formulation.LOW-LEVEL VISION 351 This example clearly demonstrates the necessity for using local enhancement when the details of interest are too small to influence ment approach. Amax ] in order to balance out large excursions of intensity in isolated regions. areas with low contrast receive larger gain. M is the global mean of f(x. In practice. Instead of using histograms.e. t3" . That is.6. (OD Example: The preceding enhancement approach has been implemented in hardware by Narendra and Fitch [1981].-. A typical local f. y). 7. 4.6-35) to restore the average intensity level of the image in the local region. and has the capability of processing images in real time (i. y) is inversely proportional to the standard deviation of the intensity. Similar comments apply to the case of a dark object on a light background.34. while the sign of the second derivative can be used to determine whether an edge pixel lies on the dark (background) or light (object) side of an edge. ago a?. the first derivative at any point in an image can be obtained by using the magnitude of the gradient at that point. This model is representative of the fact that edges in digital images are generally slightly blurred as a result of sampling." CAD C]. This concept can be easily illustrated with the aid of Fig. 7. We simply define a profile perpendicular to the edge direction at any given point and interpret the results as in the preceding discussion. Although the discussion thus far has been limited to a one-dimensional horen' cps [17 C3" 00. is zero in all locations.352 ROBOTICS: CONTROL. (From Narendra and Fitch [1981]. Part (a) of this figure shows an image of a simple light object on a dark background. SENSING. except at the onset and termination of an intensity transition. on the other hand. Based on these remarks and the concepts illustrated in Fig. as shown in Fig. the idea underlying most edge detection techniques is the computation of a local derivative operator. and the first and second derivatives of the profile. the intensity profile along a horizontal scan line of the image. 8. The second derivative. izontal profile. and assumes a constant value during an intensity transition. 7. 
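A software sketch of this local-statistics enhancement is given below (my own code, NumPy assumed; the real-time hardware implementation cited above is of course quite different). Each pixel is mapped by Eq. (7.6-35) with the gain of Eq. (7.6-36) computed over an n x n window; the clipping limits A_min and A_max and the border handling are my additions, following the remark about restricting the variations of A(x, y) between two limits.

```python
import numpy as np

def local_enhance(f, n=15, k=0.4, A_min=0.5, A_max=10.0):
    """g(x,y) = A(x,y) [f(x,y) - m(x,y)] + m(x,y), with A = k M / sigma."""
    f = f.astype(float)
    M = f.mean()                                  # global mean of f
    half = n // 2
    padded = np.pad(f, half, mode='reflect')
    g = np.empty_like(f)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            w = padded[x:x + n, y:y + n]
            m, sigma = w.mean(), w.std()          # local mean and standard deviation
            A = k * M / max(sigma, 1e-6)          # Eq. (7.6-36), guarded against sigma = 0
            A = min(max(A, A_min), A_max)         # keep the gain within [A_min, A_max]
            g[x, y] = A * (f[x, y] - m) + m       # Eq. (7.6-35)
    return g

rng = np.random.default_rng(3)
img = np.tile(np.linspace(50, 200, 64), (64, 1)) + rng.normal(0.0, 3.0, (64, 64))
out = local_enhance(img, n=15)
print(img.std(), out.std())    # weak fluctuations about the local mean are amplified
```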
The first derivative of an edge modeled in this manner is zero in all regions of constant intensity. AND INTELLIGENCE Figure 7. 7.) interested in fundamental techniques for detecting edge points.34b. is positive for pixels lying on the dark side of both the leading and trailing edges of the object.. "C7 'C3 i..34. for example. Basically. 'L7 . '-' . © IEEE. It is noted from the profile that an edge (transition from dark to light) is modeled as a ramp.. The sign of the second derivative in Fig. while the second derivative is given by the 'CS "C3 c`" 't3 Laplacian. Subsequent processing of these edge points is discussed in Chap. As will be shown below.33 Images before and after local enhancement. rather than as an abrupt change of intensity. a similar argument applies to an edge of any orientation in an image. while the sign is negative for pixels on the light side of these edges. It is of interest to note that identically the same interpretation regarding the sign of the second derivative is true for this case. VISION.34a. arc ::r 'CA '-" CD. Basic Formulation. it is evident that the magnitude of the first derivative can be used to detect the presence of an edge.. 7. + C13 i___ -__ is of ax Q. y) at location (x. (a) Light object on a dark background.34 Elements of edge detection by derivative operators. of ay (7. y) defined as the two-dimensional vector r.LOW-LEVEL VISION 353 Image Profile of a horizontal line First derivative Second derivative Figure 7. The gradient of an image f(x.7 G[f(x. (b) Dark object on a light background. Gradient Operators. y)] = L:1 = .6-37) v:' . y+1)+f(x+1.(a + 2d + g) where we have used the letters a through i to represent the neighbors of point (x.f(x. Gx = ax = f(x. There are a number of ways for doing this in a digital image.-y 1)] (7. y). y) using this simplified notation is shown 'a+ F'+ 't7 (7.6-43) . The 3 x 3 neighborhood of (x.1.6-39) This approximation is considerably easier to implement. y) .6-38) of ay 2]112 + It is common practice to approximate the gradient by absolute values: G[f(x. y + 1) ] = (g+2h+i) . SENSING. AND INTELLIGENCE It is well known from vector analysis that the vector G points in the direction of maximum rate of change of f at location (x. y + 1) ] . y . y .1.1) + 2f(x.6-41) (7.1. particularly when dedicated hardware is being employed. VISION.1)+f(x+ 1.6-40) (7. y)].1) + 2f(x + 1. y) ] = [GS + Gy ] 1/2 of 2 ax (7.y+1)+2f(x. (7. y . y). we are interested in the magnitude of this vector.[f(x-1. y) and Gy = ay = f(x. y = (c + 2e + i) . y) ] = I Gx I + I Gy I (7.y+1)] .[f(x .(a+2b+c) and Gy= ay =[f(x-1. generally referred to as the gradient and denoted by G[f(x. y) . One approach is to use first-order differences between adjacent pixels.6-42) . It is noted from Eq.1) + 2f(x . y) is given by 'C7 Gx = ax = [f(x + 1. that is.1.f(x . y .6-38) that computation of the gradient is based on obtaining the first-order derivatives of/ax and of/ay. y) + f(x + 1. where G[f(x.1) A slightly more complicated definition involving pixels in a 3 x 3 neighborhood centered at (x. y) + f(x . however. For edge detection. y .354 ROBOTICS: CONTROL. but 3 x 3 operators are by far the most popular in industrial computer vision because of their computational speed and modest hardware requirements...6-42). as given in Eq. The responses of these two masks at any point (x.f(x. The simplest approach is to let the value O o of g at coordinate (x.36. (c) Mask used to compute G .. It is noted that the pixels closest to (x. 7. 
y) are weighted by 2 in these particular definitions of the digital derivative.6-41) has the advantage of 'L1 W increased averaging.6-39) to obtain an approximation to . N 2 1 < O O O o N O N V'1 .r. y) be equal to the gradient of the input image f at that g(x. There are numerous ways by which one can generate an output image.35b. (b) Mask used to compute G G.1 that GX. y) = G[. O o g(x. y). a h c d (.6-44) 0 point. It follows from the discussion in Sec. (7. y) ] 7. Gy may be obtained by using the mask shown in Fig.LOW-LEVEL VISION 355 in Fig.6-38) or (7. can be computed by using the mask shown in Fig. (7. 7.6.35 (a) 3 x 3 neighborhood of point (x. An example of using this approach to generate a gradient image is shown in Fig. It is CD" O possible to define the gradient over larger neighborhoods (Kirsch [19711). thus tending to make the gradient less sensitive to noise. (7. y) are combined using Eqs. 7. 7. Moving these masks throughout the image f(x.35a. W X O O o O O O o the gradient at that point. y) O yields the gradient at all points in the image. based on gradient computations. c) e K h 1 (a) -1 -2 -1 -I 0 N 0 I O 0 N 0 0 0 0 0 -2 0 1 2 1 -I 0 N 0 Figure 7.35c..6-40) and (7. These two masks are commonly referred to as the Sobel operators. that is. y).p . Computing the gradient over a OO AX 3 x 3 area rather than using Eqs. Similarly. O (7. y) + f(x. 7. y) ] > T (7.1)] .1. The use of Eq. VISION. y)] < T where T is a nonnegative threshold. SENSING. (7. y + 1) + f(x.6-45) may be viewed as a procedure which extracts only those pixels that are characterized by significant (as determined by T) transitions in intensity. only edge pixels whose gradients exceed T are considered important.6-45) g(x. (7.356 ROBOTICS: CONTROL.4 f(x. Y) = 0 if G[f(x. y) + f(x . as expected of a second-order derivative. (b) Result of using Eq.. `L1 els is usually required to delete isolated points and to link pixels along proper . the use of Eq. the Laplacian is defined as L[f(x. y) (7. 4-+ Laplacian Operator. y)] = [f(x + 1. Another approach is to create a binary image using the following relationship: 1 if G[f(x. 3.1.2.37. The Laplacian is a second-order derivative operator defined as L[f(x. (7. 8. Further analysis of the resulting pix- boundaries which ultimately determine the objects segmented out of an image. Thus.i .6-47) can be based on the mask shown in Fig.6-45) in this context is discussed and illustrated in Sec. y .6-46) 2 For digital images. In this case. AND INTELLIGENCE Figure 7. y) ] = a2f + ax- ay a2 (7. (7.6-44).6-47) This digital formulation of the Laplacian is zero in constant areas and on the ramp section of an edge. The implementation of Eq...36 (a) Input image. Typically. the point is called a background point.. .C a)' 3-. it is seldom used by itself for edge detection. In V basic approach and classify a point (x..LOW-LEVEL VISION 357 0 0 -4 I 0 0 Figure 7. 8. y) < T. One obvious way to extract the objects from the background is to select a threshold T which separates the intensity modes. we can use the same CDt f(x. Based on the foregoing concepts. y) > T2.38b. Suppose that the intensity histogram shown in Fig. 8. y) as belonging to one object class if TI < f (x. if handled by thresholding. being a second-derivative operator. y). 7.6-48) . More sophisticated uses of thresholding techniques are discussed Chap. variable threshold.5 Thresholding Image thresholding is one of the principal techniques used by industrial vision systems for object detection. problems of this nature. y) < TI. 
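The masks of Fig. 7.35 can be applied directly to produce a gradient image, as in the following sketch (my own code, NumPy assumed). The two mask responses are combined with the absolute-value approximation of Eq. (7.6-39), and the gradient direction is kept as well, since it is used later for edge linking.

```python
import numpy as np

SOBEL_X = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)   # mask of Fig. 7.35b (rows)
SOBEL_Y = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)     # mask of Fig. 7.35c (columns)

def sobel_gradient(f):
    """Return the gradient magnitude |Gx| + |Gy| and direction at interior pixels."""
    f = f.astype(float)
    g = np.zeros_like(f)
    theta = np.zeros_like(f)
    for x in range(1, f.shape[0] - 1):
        for y in range(1, f.shape[1] - 1):
            w = f[x - 1:x + 2, y - 1:y + 2]
            gx = np.sum(SOBEL_X * w)
            gy = np.sum(SOBEL_Y * w)
            g[x, y] = abs(gx) + abs(gy)           # Eq. (7.6-39)
            theta[x, y] = np.arctan2(gy, gx)      # gradient direction
    return g, theta

f = np.zeros((8, 8)); f[:, 4:] = 10.0             # a vertical step edge
grad, _ = sobel_gradient(f)
print(grad[4, 3], grad[4, 4])                     # -> 40.0 40.0, strong response at the edge
```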
are best addressed by a single.37 Mask used to compute the Laplacian. In this section we are concerned with aspects of thresholding that fall in the category of low-level processing. any point (x. the Laplacian responds to transitions in intensity. y) > T is called an object point. especially in applications requiring high data throughputs. Here. y)l (7. otherwise. f(x. to the other object class if f (x. [ti r. . and to the back- ground if f(x. the Laplacian is typically unacceptably sensitive to noise. especially when the number of corresponding histogram modes is large. p(x. Although. two types of light objects on a dark background). y. 3. This type of multilevel thresholding is generally less reliable than its single threshold counterpart because of the difficulty in establishing multiple thresholds that effectively isolate regions of interest. such that object and background pixels have intensities grouped into two dominant modes.-. 7.38a corresponds to an image. Thus. A slightly more general case of this approach is shown in Fig.6. 7. y) for which L"' this case the image histogram is characterized by three dominant modes (for example. we may view thresholding as an operation that involves tests against a function T of the form gin' BCD Cap T = T[x.. as discussed in Chap. y). The reason is that. f(x. . Then. composed of light objects on a dark background. as indicated at the beginning of this section. this operator is usually delegated the secondary role of serving as a detector for establishing whether a given pixel is on the dark or light side of an edge. y). We associate with low-level ?'_ . we find that pixels labeled 1 (or any other convenient intensity level) correspond to objects. y). We create a thresholded image g(x. in addition. If 'T depends on both f(x. the average intensity of a neighborhood centered at (x. 7. If.. then the threshold is called local.6-49) g(x. T depends on the spatial coordinates x and y. where f(x. in examining g(x. y). y) > T (7. while pixels labeled 0 correspond to the . y) = 0 iff(x. AND INTELLIGENCE (a) La 11 (h) L Figure 7. y) denotes some local property of this point. y) S T Thus. SENSING. and p(x.38a shows an example of such a threshold). y) by defining 1 if f(x.358 ROBOTICS: CONTROL. background. y) and p(x.38 Intensity histograms that can be partioned by (a) a single threshold and (b) multiple thresholds. it is called a dynamic threshold. y). for example.. VISION.r a. the threshold is called global (Fig. When T depends only on f(x. y) is the intensity of point (x. y). 7. 7.39. 7.3.4 are important techniques for deriving depth from image information.. the structured-lighting approaches in Sec. as indicated in Sec. Since thresholding plays a central role in object segmentation. (c) Image obtained by using Eq. vision is a three-dimensional problem. -fl Q'.LOW-LEVEL VISION 359 Figure 7. more sophisticated formulations are associated with functions in medium-level vision.39 (a) Original image. most machine vision algorithms. A simple example of global thresholding is shown in Fig. . vision those thresholding techniques which are based on a single. are based on images of a three-dimensional scene. global value of T. 7.2. The range sensing methods discussed in Sec.1. especially those used for low-level vision. (7. (b) Histogram of intensities in the range 0 to 255.7 CONCLUDING REMARKS The material presented in this chapter spans a broad range of processing functions normally associated with low-level vision. 7. 
and the material in Sec.6-49) with a global threshold T = 90. Although. 7.t . It is important to keep in mind that many of the areas we have discussed have a range of application much broader than this.4. 7. as exemplified by Lee [1983] and Chaudhuri [1983]. The survey article by Barnard and Fischler [1982] contains a comprehensive set of references on computational stereo.2 can be found in most books on computer graphics (see. and Narendra and Fitch [1981].. AND INTELLIGENCE Our discussion of low-level vision and other relevant topics. has been at an introductory level. The smoothing technique for binary images discussed in Sec. 7. For details on implementing median filters see Huang et al. and Myers [1980].5 is based on Toriwaki et al. is the ever-present (and often contradictory) requirements of low cost and high computational speeds. VISION. [1979]. 7. 0'< 'O. Wolfe and Mannos [1979]. [1979].1 How many bits would it take to store a 512 x 512 image in which each pixel can have 256 possible intensity values'? C. '-' `r7 CS' PROBLEMS 7. . however. °'- . The concept of smoothing by image averaging is discussed by Kohler and Howell [1963]. and Fairchild [1983]. The discussion in Sec.U' °-o . The book by Rosenfeld and Kak [1982] contains a detailed description of threshold selection techniques.. For an introduction to edge detection see Gonzalez and Wintz [1977]. The transformations discussed in Secs.1 and 7. Holland et al. SENSING. More recent work in this field emphasizes computational speed. is adapted from Gonzalez and Wintz [1977]. [1979].4. Early work on edge detection can be found in Roberts [1965]. One of the salient features of industrial applications.6. such as the nature of imaging devices.. A survey of techniques used in this area a decade later is given by Davis [1975].3 is based on Mundy [1977]. The material in Sec.fl .3 is based on Gonzalez and Fittes [1977] and Woods and Gonzalez [1981].2 is based on an early paper by II' :-" C. The discussion in Sec. and with a very directed focus toward robot vision.6. ^C3 REFERENCES Further reading on image acquisition devices may be found in Fink [1957]. Newman and Sproull [1979]). The material in Sec.d Unger [1959]. A good example is enhancement.. CD- °"U s. which for years has been an important topic in digital image processing. Herrick [1976]. For further details on local enhancement see Ketcham [1976]. Harris [1977]. 7.6. and Chaudhuri [1983]. The selection of topics included in this chapter has been influenced by these requirements and also by the value of these topics as fundamental material which would serve as a foundation for further study in this field. 7. for example. Additional reading on camera modeling and calibration can be found in Duda and Hart [1973] and Yakimovsky and Cunningham [1979].360 ROBOTICS: CONTROL.1. and Rosenfeld and Kak [1982]. A survey paper by Weska [1978] is also of interest.. 7. mow Ll. `-' 7. [The process of subtracting a blurred version of f(x. y) an average of the 4-neighbors of (x. 7.2 Propose a technique that uses a single light sheet to determine the diameter of cylindrical objects. panned 135 ° and tilted 135 °.6. 0. 7.3.] ti' Win.4-42) and (7. Assume a linear array camera with a resolution of N pixels and also that the distance between the camera and the center of the cylinders is fixed. the result of using a 3 x 3 smoothing mask with coefficients (see Sec.LOW-LEVEL VISION 361 7.. 7.5 Start with Eq. (Hint: The answer lies on using complex conjugates).aX = 1 m? 7.. 
The result of this pass is 1 '-t 1/g then followed by a pass of the mask 1 1 .. 7. in general. (7.13 The results obtained by a single pass through an image of some two-dimensional masks can also be achieved by two passes of a one-dimensional mask. 7...'/a) to subtracting from f(x.6.6-4) into Eq. Is this path unique? 7.9 Give the boolean expression equivalent to Eq. 7. 7. (b) What is the maximum error if N = 2048 pixels and D.11 Explain why the discrete histogram equalization technique will not. y).6 Show that the D4 distance between two points p and q is equal to the shortest 4-path between these points. 7. .14 Show that the digital Laplacian given in Eq. D. ').10 Develop a procedure for computing the median in an n x n neighborhood. y) from itself is called unsharp masking.2) can also be obtained by first passing through an image the mask [1 1 1].6-47) is proportional (by the factor . (7.12 Propose a method for updating the local histogram for use in the enhancement technique discussed in Sec.6-5) yields an identity.. (7.4-41) and derive Eqs.-I . 7.35) can be implemented by one pass of a differencing mask of the form [ . 7. (7..4 Determine if the world point with coordinates (1/2.7 Show that a Fourier transform algorithm that computes F(u) can be used without modification to compute the inverse transform. Show that the Sobel masks (Fig. . I/2) is on the optical axis of a camera located at (0. Assume a 50-mm lens and -CD let r.3 (a) Discuss the accuracy of your solution to Prob. (7. (7. 7. The final result is then scaled by 1/9 .4-43). 7.8 Verify that substitution of Eq.6-16) for a 5 x 5 window. 1/2. 7. = r2 = r3 = 0.2 in terms of camera resolution (N points on a line) and maximum expected cylinder diameter. For example.1 0 1] (or its vertical counterpart) followed by a smoothing mask of the form [1 2 1] (or its vertical counterpart). yield a flat histogram. and high-level vision. (2) the capability to learn from (3. 7. the state of the art in machine vision is for the most part -based on analytical formulations tailored to meet specific tasks. 'L7 'CD characteristics come immediately to mind: (1) the ability to extract pertinent information from a background of irrelevant details. (3) the ability to infer facts from incomplete information. In terms of speed and achievable altitude. and to formulate plans for meeting these goals. Although the concept of "intelligence" is somewhat vague.. we introduced in Sec. While it is possible to design and implement a vision system with these characteristics in a limited environment. however. Given that the objective is to fly between two points. r. it is not difficult to conceptualize the type of behavior that we may. that imitating nature is not the only solution to this problem. It is of interest to note. however grudgingly. coo r.-..1 INTRODUCTION 1"1 For the purpose of categorizing the various techniques and approaches used in machine vision. Low-level vision deals with basic sensing and preprocessing. 7. and (4) the capability to generate self-motivated goals. We may view the material in that chapter as being instrumental in providing image and other relevant information that is in a form suitable for subsequent intelligent °-h vii i_+ c°° 'n' visual processing. Although research in biological systems is continually uncovering new and promising concepts.1 three broad subdivisions: low-. we do not yet know how to endow it with a range and depth of adaptive performance that comes even close to emulating human vision. 
our present solution is quite different from the examples provided by nature.CHAPTER EIGHT HIGHER-LEVEL VISION The artist is one who gives form to difficult visions. this solution exceeds the capabilities of these examples by a wide margin. The reader is undoubtedly familiar with early experimental airplanes equipped with flapping wings and other birdlike features. bon 'C1 . particularly when one is referring to a machine. `CS bin fro 362 `ZS `C:3 C3' via 7.. Theodore Gill coo 8. The time frame in which we may have machines that approach human visual and other sensory capabilities is open to speculation. . topics which were covered in some detail in Chap. characterize as intelligent. Several examples and to generalize this knowledge so that it will apply in new and different circumstances. medium-. to . these techniques should yield only pixels lying on the boundary between objects and the background. The principal approach in the first category is based on edge detection. 7. One of the simplest approaches for linking edge points is CAD 0a- 00- s.. analyze the characteristics of pixels in a small neighborhood (e.1. Thus. This is followed by a discussion of object description techniques. breaks in the boundary due to nonuniform illumination.2. It will be seen in the following sections that these topics encompass a variety of approaches that are wellfounded on analytical concepts. 3 x 3 or 5 X 5) 4-.2 SEGMENTATION Segmentation is the process that subdivides a sensed scene into its constituent parts or objects. description. and recognition of individual objects. The material is subdivided into four principal areas. and other effects that introduce spurious intensity discontinuities. leading to the formulation of constraints and idealizations intended to simplify the complexity of this task. Segmentation algorithms are generally based on one of two basic principles: discontinuity and similarity. The material discussed in this chapter introduces the reader to a broad range of topics in state-of-the-art machine vision. medium-level vision deals with topics in segmentation. We then discuss the principal approaches used in the recognition stage of a vision system. 7. . In the following discussion we consider several techniques suited for this purpose. however.4 detect intensity discontinuities. with a strong orientation toward techniques that are suitable for robotic vision. These concepts are applicable to both static and dynamic (time-varying) scenes.and medium-level vision is significantly more vague and speculative. We begin the discussion with a detailed treatment of segmentation. Local Analysis. In the latter case. Our knowledge of these areas and their relationship to low. High-level vision deals with issues such as those discussed in the preceding paragraph.1 Edge Linking and Boundary Detection The techniques discussed in Sec. We conclude the chapter with a discussion of issues on the interpretation of visual inforCS' CA. Segmentation is one of the most important elements of an automated vision system because it is at this stage of processing that objects are extracted from a scene for subsequent recognition and analysis. In practice.6. Ideally. this set of pixels seldom characterizes a boundary completely because of noise. 8. the principal approaches in the second category are based on thresholding and region growing..g.HIGHER-LEVEL VISION 363 As indicated in Sec. 
edge detection algorithms are typically followed by linking and other boundary detection procedures designed to assemble edge pixels into a meaningful set of object boundaries. 8.G' mation. motion can often be used as a powerful cue to improve the performance of segmentation algorithms. y) to the pixel at (x. we link a point in the predefined neighborhood of (x. y) if 10 . y') and in the predefined neighborhood of (x.2-1) where T is a threshold.2-2) where 0 is the angle (measured with respect to the x axis) along which the rate of change has the greatest magnitude. y)]. as defined in Eqs.364 ROBOTICS: CONTROL. Example: As an illustration of the foregoing procedure. simultaneously.6. SENSING. and (2) the direction of the gradient. for the purpose of comparing directions. Finally.8'I < A (8. y) in an image that has undergone an edge detection process. Fig. Thus. 7.2-3) where A is an angle threshold. y) if both the magnitude and direction criteria are satisfied. I CAD "our (8. y) if "C1 G[f(x. consider Fig.4. The direction of the gradient may be established from the angle of the gradient vector given in Eq. There are two principal properties used for establishing similarity of edge pixels in this kind of analysis: (1) the strength of the response of the gradient operator used to produce the edge pixel. in reality. It is noted that the direction of the edge at (x. we say that an edge pixel at (x'. y') ] < T ti.639). (7. This process is repeated for every location in the image.1b and c shows the horizontal and vertical com- ponents of the Sobel operators discussed in Sec.6-37). Then. A simple bookkeeping procedure is to assign a different gray level to each set of linked edge pixels. The formation of these rectangles can be accomplished by detecting strong horizontal and vertical edges. y) has an angle similar to the pixel at (x. perpendicular to the direction of the gradient vector at that point.1a. AND INTELLIGENCE about every point (x. we say that an edge pixel with coordinates (x'. as indicated in Sec. 8. y) is. (8. y') in the predefined neighborhood of (x.1d shows the results of linking all points which.4. That is. 8.2-3) yields equivalent results. thus forming a boundary of pixels that share some common properties. (7. y) is similar in magnitude to the pixel at (x.6-38) or (7. Figure 8. The objective is to find rectangles whose sizes makes them suitable license plate candidates. had a gradient value greater than 25 and whose gradient directions did not differ by more coo -W- . y)] . VISION. However. which shows an image of the rear of a vehicle. 7. Eq. 0' = tan 1 (8. All points that are similar (as defined below) are linked. Based on the foregoing concepts.G[f(x'. The first BAS property is given by the value of G[f(x.6. keeping a record of linked points as the center of the neighborhood is moved from pixel to pixel. (c) Vertical component of the gradient. In this section we consider the link- ing of boundary points by determining whether or not they lie on a curve of specified shape.1 (a) Input image. The problem with this procedure is that involves finding n(n .HIGHER-LEVEL VISION 365 Figure 8.) than 15 °. 8.1)]/2 "L7 i-1 it n3 . (b) Horizontal component of the gradient. we wish to find subsets that lie on straight lines.n2 lines and then performing n[n(n . One possible solution is to first find all lines determined by every pair of points and then find all subsets of points 'C7 that are close to particular lines.1c. +-' Global Analysis via the Hough Transform. 
(d) Result of edge linking.1)/2 --. The horizontal lines were formed by sequentially applying these criteria to every row of Fig. while a sequential column scan of Fig. Suppose initially that. 8.lb yielded the vertical lines. Further processing consisted of linking edge segments separated by small breaks and deleting isolated short segments. (Courtesy of Perceptics Corporation. given n points in the xy plane of an image. SENSING. b'). The accuracy of the colinearity of these points is established by the number of subdivisions in the ab plane. y. This is computationally prohibitive in all but the most trivial applications.2 (a) xy Plane.r- at (a'. (b) Parameter space. a value of M in cell A(i. yi ) . 'r1 . for every point (Xk. y. = ax. There is an infinite number of lines that pass through (xi. b') where a' is the slope and b' the intercept of the line containing both (xi. At the end of this procedure. If a choice of at.366 ROBOTICS: CONTROL. Then. 8. yi) and (xj.x + bj. but they all satisfy the equation y. these cells are set to zero. yj) will also have a line in parameter space associated with it. Yk) in the image plane.. y (a) (b) Figure 8. all points contained on this line will have lines in parameter space which intercept at (a'. = axi + b for varying values of a and b. q) = A(p. where (amax. and this line will intersect the line associated with (xi. The resulting b's are then rounded off to the nearest allowed value in the b axis. if we write this equation as b = -xia + yi. In fact. Initially. a second point (xj. '-. VISION. However. and consider the ab plane (also called parameter space). 8.). and (bax. AND INTELLIGENCE comparisons of every point to all lines. then we have the equation of a single line for a fixed pair (x.2. y. This problem may be viewed in a different way using an approach proposed by Hough [1962] and commonly referred to as the Hough transform. bj). Consider a point (xi. These concepts are illustrated in Fig. j) corresponds to M points in the xy plane lying on the line y = a. we let A(p. j) corresponds to the square associated with parameter space coordinates (a.). results in solution bq. are the expected ranges of slope and intercept values. we let the parameter a equal each of the allowed subdivision values on the a axis and solve for the corresponding b using CAD the equation b = -xka + yk. The computational attractiveness of the Hough transform arises from subdividing the parameter space into so-called accumulator cells. + b. Accumulator cell A(i. yj) in the xy plane.. as illustrated in Fig. yi) and the general equation of a straight line in slope-intercept form. Furthermore. q) + 1.3. Thus.4a.2-4) is shown in Fig.~r. (8. 8. Figure 8.5b is the gradient image. the procedure just discussed is linear in n. with ranges -4-90' and t Pmax. respectively. It is noted that if we subdivide the a axis into K increments. we now have sinusoidal curves in the Op plane.5a shows an image of an industrial piece. + y sin 8i = pj will yield M sinusoidal curves which intercept at (6. 8.3 Quantization of the parameter plane into cells for use in the Hough transform. Since there are n image points. v. The subdivision of the parameter space is illustrated in Fig.. In this case. 8. (8. and Fig. instead of straight lines. The meaning of the parameters used in Eq. the only difference is that. Fig. The use of this representation in constructing a table of accumulators is identical to the method discussed above for the slope-intercept representation. 
and the product nK does not approach the number of computations discussed at the beginning of this section unless K approaches or exceeds n. `J' Example: An illustration of using the Hough transform based on Eq.5c shows the Op plane displayed as an image in which brightness level is proportional to the number of counts in the accumulators. When we use the method of incrementing 0 and solving for the corresponding p. Amax was set (I1 CAD .5..2-4) . then for every point (xk. M colinear points lying on a line x cos 8. 8. this involves nK computations. the procedure will yield M entries in accumulator A(i. 8. pj) in the parameter space.. One way around this difficulty is to use the normal representation of a line.HIGHER-LEVEL VISION 367 b 0 a 0 Figure 8. pj). The abscissa in this image corresponds to 0 and the ordinate to p. As before.4b. j) associated with the cell determined by (81'. yk) we obtain K values of b corresponding to the K possible values of a.2-4) is illustrated in Fig. given by xcosO + ysinO = p (8. r~+ A problem with using the equation y = ax + b to represent a line is that both the slope and intercept approach infinity as the line approaches a vertical position. are treated in detail by Ballard [1981].368 ROBOTICS: CONTROL. Clearly.5d superimposed on the original image. The basic difference is that we now have three parameters.ct )2 + (y .2-5) ._. the complexity of the Hough transform is strongly dependent on the number of coordinates and coefficients in a given functional representation. we point out that further generalizations of the Hough transform to detect curves with no simple analytic representations are possible. SENSING. solve for the c3 that satisfies Eq. It is of interest to note the bright spots (high accumulator counts) near 0 ° corresponding to the vertical lines. j. 'C3 emu. Although our attention has been focused thus far on straight lines. VISION. 8. which are extensions of the material presented above. where x is a vector of coordinates and c is a vector of coefficients. The lines detected by this method are shown in Fig. 8. (8. 8. (x . c2. These concepts. Before leaving this section. and update the accumulator corresponding to the cell associated with the triple (c1. k). The discrepancy is due to the quantization error of 0 and p in the parameter space. The center of the square in Fig. (b) Quantization of the Op plane into cells. and near =E90' corresponding to the horizontal lines in Fig. c2.c2 )2 = c3 (8. AND INTELLIGENCE P Pmax 0 Pmin L°min 0 0. cl . c3 ). and c3.5c thus corresponds to 0 = 0 ° and p = 0.4 (a) Normal representation of a line.2-5). . c) = 0. equal to the distance from corner to corner in the original image.5b. For example. the locus of points lying on the circle can easily be detected by using the approach discussed above. the Hough transform is applicable to any function of the form g(x. The procedure is to increment ct and c2. which result in a three-dimensional parameter space with cubelike cells and accumulators of the form A(i. (a) Ihl Figure 8. . (c) Hough transform table. A) is a finite. .HIGHER-LEVEL VISION 369 Figure 8. Since the gradient is a derivative. Texas Instruments. A graph G = (N.. Inc. is seldom suitable as a preprocessing step in situa.. We now discuss a global approach based on representing edge segments in the form of a graph structure and searching the graph for low-cost paths which correspond to significant edges.) t11 Global Analysis via Graph-Theoretic Techniques. (b) Gradient image.5 (a) Image of a work piece. 
Global Analysis via Graph-Theoretic Techniques. The method discussed in the previous section is based on having a set of edge points obtained typically through a gradient operation. Since the gradient is a derivative, it enhances sharp variations in intensity and is, therefore, seldom suitable as a preprocessing step in situations characterized by high noise content. We now discuss a global approach based on representing edge segments in the form of a graph structure and searching the graph for low-cost paths which correspond to significant edges. This representation provides a rugged approach which performs well in the presence of noise. As might be expected, the procedure is considerably more complicated and requires more processing time than the methods discussed thus far.

We begin the development with some basic definitions. A graph G = (N, A) is a finite, nonempty set of nodes N, together with a set A of unordered pairs of distinct elements of N. Each pair (ni, nj) of A is called an arc. A graph in which the arcs are directed is called a directed graph. If an arc is directed from node ni to node nj, then nj is said to be a successor of its parent node ni. The process of identifying the successors of a node is called expansion of the node. In each graph we define levels, such that level 0 consists of a single node, called the start node, and the nodes in the last level are called goal nodes. A cost c(ni, nj) can be associated with every arc (ni, nj). A sequence of nodes n1, n2, ..., nk, with each node ni being a successor of node ni-1, is called a path from n1 to nk, and the cost of the path is given by

c = Σ c(ni-1, ni)   (sum over i = 2, ..., k)     (8.2-6)

Finally, we define an edge element as the boundary between two pixels p and q, such that p and q are 4-neighbors, as illustrated in Fig. 8.6. In this context, an edge is a sequence of edge elements.

Figure 8.6 Edge element between pixels p and q.

In order to illustrate how the foregoing concepts apply to edge detection, consider the 3 × 3 image shown in Fig. 8.7, where the outer numbers are pixel coordinates and the numbers in parentheses represent intensity. With each edge element defined by pixels p and q we associate the cost

c(p, q) = H - [f(p) - f(q)]     (8.2-7)

where H is the highest intensity value in the image (7 in this example), f(p) is the intensity value of p, and f(q) is the intensity value of q. It is noted that p is assumed to be to the right of the path as the image is traversed from top to bottom.

Figure 8.7 A 3 × 3 image.

The graph for this problem is shown in Fig. 8.8. Each node in this graph corresponds to an edge element, and an arc exists between two nodes if the two corresponding edge elements taken in succession can be part of an edge. The cost of each edge element, computed using Eq. (8.2-7), is shown by the arc leading into it, and goal nodes are shown in double rectangles. For simplicity, it has been assumed that the edge starts in the top row and terminates in the last row, so that the first element of an edge can only be [(0, 0), (0, 1)] or [(0, 1), (0, 2)], and the last element [(2, 0), (2, 1)] or [(2, 1), (2, 2)]. Each path between the start node and a goal node is a possible edge. The minimum-cost path, computed using Eq. (8.2-6), is shown dashed, and the corresponding edge is shown in Fig. 8.9.

Figure 8.8 Graph used for finding an edge in the image of Fig. 8.7. The pair (a, b) (c, d) in each box refers to points p and q, respectively. The dashed lines indicate the minimum-cost path. (Adapted from Martelli [1972], © Academic Press.)

Figure 8.9 Edge corresponding to the minimum-cost path in Fig. 8.8.

In general, the problem of finding a minimum-cost path is not trivial from a computational point of view. Typically, the approach is to sacrifice optimality for the sake of speed, and the algorithm discussed below is representative of a class of procedures which use heuristics in order to reduce the search effort. Let r(n) be an estimate of the cost of a minimum-cost path from the start node s to a goal node, where the path is constrained to go through n. This cost can be expressed as the estimate of the cost of a minimum-cost path from s to n, plus an estimate of the cost of that path from n to a goal node; that is,

r(n) = g(n) + h(n)     (8.2-8)

Here, g(n) can be chosen as the lowest-cost path from s to n found so far, and h(n) is obtained by using any available heuristic information (e.g., expanding only certain nodes based on previous costs in getting to that node). An algorithm that uses r(n) as the basis for performing a graph search is as follows:

Step 1. Mark the start node OPEN and set g(s) = 0.
Step 2. If no node is OPEN, exit with failure; otherwise continue.
Step 3. Mark CLOSED the OPEN node n whose estimate r(n) computed from Eq. (8.2-8) is smallest. (Ties for minimum r values are resolved arbitrarily, but always in favor of a goal node.)
Step 4. If n is a goal node, exit with the solution path obtained by tracing back through the pointers; otherwise continue.
Step 5. Expand node n, generating all its successors. (If there are no successors, go to step 2.)
Step 6. If a successor ni is not marked, set r(ni) = g(n) + c(n, ni), mark it OPEN, and direct pointers from it back to n.
Step 7. If a successor ni is marked CLOSED or OPEN, update its value by letting g'(ni) = min[g'(ni), g(n) + c(n, ni)]. Mark OPEN those CLOSED successors whose g' values were thus lowered, and redirect to n the pointers from all nodes whose g' values were lowered. Go to step 2.

In general, this algorithm is not guaranteed to find a minimum-cost path; its advantage is speed via the use of heuristics. It can be shown, however, that if h(n) is a lower bound on the cost of the minimal-cost path from node n to a goal node, then the procedure will indeed find an optimal path to a goal (Hart et al. [1968]). If no heuristic information is available (i.e., h = 0), the procedure reduces to the uniform-cost algorithm of Dijkstra [1959].
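A minimal sketch of how steps 1 to 7 can be organized around a priority queue is given below, applied to the edge-element costs of Eqs. (8.2-6) and (8.2-7). The node layout, the successor rule (an edge may continue straight down or shift one column per row), and the choice h(n) = 0 are simplifying assumptions made for illustration; they do not reproduce the exact graph of Fig. 8.8.

import heapq

def edge_cost(img, p, q, H):
    # Eq. (8.2-7): c(p, q) = H - [f(p) - f(q)], with p to the right of the path
    return H - (img[p[0]][p[1]] - img[q[0]][q[1]])

def min_cost_edge(img):
    rows, cols = len(img), len(img[0])
    H = max(max(row) for row in img)
    # A node (row, col) stands for the edge element between pixels
    # (row, col) and (row, col + 1); the edge runs from row 0 to the last row.
    heap, best_g, parent = [], {}, {}
    for c in range(cols - 1):                      # step 1: start candidates are OPEN
        n = (0, c)
        best_g[n], parent[n] = edge_cost(img, (0, c), (0, c + 1), H), None
        heapq.heappush(heap, (best_g[n], n))
    while heap:                                    # step 2
        r, n = heapq.heappop(heap)                 # step 3: lowest r(n) = g(n) (h = 0)
        if r > best_g.get(n, float("inf")):
            continue                               # stale OPEN entry
        row, col = n
        if row == rows - 1:                        # step 4: goal node reached
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return list(reversed(path)), r
        for dc in (-1, 0, 1):                      # step 5: expand (assumed successor rule)
            nc = col + dc
            if 0 <= nc < cols - 1:
                m = (row + 1, nc)
                g = best_g[(row, col)] + edge_cost(img, (row + 1, nc), (row + 1, nc + 1), H)
                if g < best_g.get(m, float("inf")):   # steps 6 and 7
                    best_g[m], parent[m] = g, (row, col)
                    heapq.heappush(heap, (g, m))
    return None, float("inf")

image = [[7, 2, 2],      # intensity values of the 3 x 3 image of Fig. 8.7
         [5, 7, 2],
         [5, 1, 0]]
print(min_cost_edge(image))

With h(n) = 0 the sketch behaves like the uniform-cost search mentioned above; a nonzero heuristic would simply be added to the key used to order the OPEN list.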
Example: A typical result obtainable with this procedure is shown in Fig. 8.10. Part (a) of this figure shows a noisy image, and Fig. 8.10b is the result of edge segmentation by searching the corresponding graph for low-cost paths. Heuristics were brought into play by not expanding those nodes whose cost exceeded a given threshold.

Figure 8.10 (a) Noisy image. (b) Result of edge detection by using the heuristic graph search. (From Martelli [1976], ©ACM.)

8.2.2 Thresholding

The concept of thresholding was introduced in Sec. 7.6.5 as an operation involving tests against a function of the form

T = T[x, y, p(x, y), f(x, y)]     (8.2-9)

where f(x, y) is the intensity of point (x, y), and p(x, y) denotes some local property measured in a neighborhood of this point. A thresholded image g(x, y) is created by defining

g(x, y) = 1   if f(x, y) > T
g(x, y) = 0   if f(x, y) ≤ T     (8.2-10)

so that pixels in g(x, y) labeled 1 correspond to objects, while pixels labeled 0 correspond to the background. Equation (8.2-10) assumes that the intensity of objects is greater than the intensity of the background; the opposite condition is handled by reversing the sense of the inequalities.

Global vs. Local Thresholds. As indicated in Sec. 7.6.5, when T in Eq. (8.2-9) depends only on f(x, y), the threshold is called global. If T depends on both f(x, y) and p(x, y), then it is called a local threshold. If, in addition, T depends on the spatial coordinates x and y, it is called a dynamic threshold. Global thresholds have application in situations where there is clear definition between objects and background, and where illumination is relatively uniform. The backlighting and structured lighting techniques discussed in Sec. 7.3 usually yield images that can be segmented by global thresholds. For the most part, arbitrary illumination of a work space yields images that, if handled by thresholding, require some type of local analysis to compensate for effects such as nonuniformities in illumination, shadows, and reflections. In the following discussion we consider a number of techniques for selecting segmentation thresholds. Although some of these techniques can be used for global threshold selection, they are usually employed in situations requiring local threshold analysis.

Optimum Threshold Selection. It is often possible to consider a histogram as being formed by the sum of probability density functions. In the case of a bimodal histogram the overall function approximating the histogram is given by

p(z) = P1 p1(z) + P2 p2(z)     (8.2-11)

where z is a random variable denoting intensity, p1(z) and p2(z) are the probability density functions, and P1 and P2 are called a priori probabilities. These last two quantities are simply the probabilities of occurrence of the two types of intensity levels in an image.
For example, consider an image whose histogram is shown in Fig. 8.11a. The overall histogram may be approximated by the sum of two probability density functions, as shown in Fig. 8.11b. If it is known that light pixels represent objects and also that 20 percent of the image area is occupied by object pixels, then P1 = 0.2. It is required that

P1 + P2 = 1     (8.2-12)

which simply says that, in this case, the remaining 80 percent are background pixels.

Figure 8.11 (a) Intensity histogram. (b) Approximation as the sum of two probability density functions.

Let us form two functions of z, as follows:

d1(z) = P1 p1(z)     (8.2-13)
d2(z) = P2 p2(z)     (8.2-14)

It is known from decision theory (Tou and Gonzalez [1974]) that the average error of misclassifying an object pixel as background, or vice versa, is minimized by using the following rule: Given a pixel with intensity value z, we classify the pixel as an object pixel if d1(z) > d2(z), or as a background pixel if d2(z) > d1(z). The optimum threshold is then given by the value of z for which d1(z) = d2(z). That is, setting z = T in Eqs. (8.2-13) and (8.2-14), we have that the optimum threshold satisfies the equation

P1 p1(T) = P2 p2(T)     (8.2-15)

Thus, if the functional forms of p1(z) and p2(z) are known, we can use this equation to solve for the optimum threshold that separates objects from the background. Once this threshold is known, Eq. (8.2-10) can be used to segment a given image.

As an important illustration of the use of Eq. (8.2-15), suppose that p1(z) and p2(z) are gaussian probability density functions; that is,

p1(z) = [1/(√(2π) σ1)] exp[-(z - m1)² / (2σ1²)]     (8.2-16)
p2(z) = [1/(√(2π) σ2)] exp[-(z - m2)² / (2σ2²)]     (8.2-17)

Letting z = T in these expressions, substituting into Eq. (8.2-15), and simplifying yields a quadratic equation in T,

A T² + B T + C = 0     (8.2-18)

where

A = σ1² - σ2²
B = 2(m1 σ2² - m2 σ1²)
C = σ1² m2² - σ2² m1² + 2 σ1² σ2² ln(σ2 P1 / σ1 P2)     (8.2-19)

The possibility of two solutions indicates that two threshold values may be required to obtain an optimal solution. If the standard deviations are equal, σ1 = σ2 = σ, a single threshold is sufficient:

T = (m1 + m2)/2 + [σ² / (m1 - m2)] ln(P2 / P1)     (8.2-20)

If σ = 0 or P1 = P2, the optimum threshold is just the average of the means. The former condition simply means that both the object and background intensities are constant throughout the image. The latter condition means that object and background pixels are equally likely to occur, a condition met whenever the number of object pixels is equal to the number of background pixels in an image.
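As a small numerical illustration of Eqs. (8.2-18) to (8.2-20), the following sketch solves the quadratic for assumed values of the means, standard deviations, and a priori probabilities; the parameter values are made up for the example, and only roots that fall inside the intensity range are meaningful thresholds.

import math

def optimum_thresholds(m1, s1, P1, m2, s2, P2):
    if s1 == s2:                                  # equal variances: Eq. (8.2-20)
        T = (m1 + m2) / 2.0
        if P1 != P2 and s1 > 0:
            T += (s1 ** 2 / (m1 - m2)) * math.log(P2 / P1)
        return [T]
    A = s1 ** 2 - s2 ** 2                         # coefficients of Eq. (8.2-19)
    B = 2.0 * (m1 * s2 ** 2 - m2 * s1 ** 2)
    C = (s1 ** 2 * m2 ** 2 - s2 ** 2 * m1 ** 2
         + 2.0 * s1 ** 2 * s2 ** 2 * math.log(s2 * P1 / (s1 * P2)))
    disc = math.sqrt(B ** 2 - 4.0 * A * C)
    return sorted([(-B - disc) / (2.0 * A), (-B + disc) / (2.0 * A)])

# e.g., bright objects (mean 170) covering 20 percent of a dark background (mean 60):
print(optimum_thresholds(m1=170, s1=20, P1=0.2, m2=60, s2=15, P2=0.8))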
Example: As an illustration of the concepts just discussed, consider the segmentation of the mechanical parts shown in Fig. 8.12a, where, for the moment, we ignore the grid superimposed on the image. Figure 8.12b shows the result of computing a global histogram, fitting it with a bimodal gaussian density, establishing an optimum global threshold, and finally using this threshold in Eq. (8.2-10) to segment the image. As expected, the variations in intensity rendered this approach virtually useless. A similar approach, however, can be carried out on a local basis by subdividing the image into subimages, as defined by the grid in Fig. 8.12a.

After the image has been subdivided, a histogram is computed for each subimage and a test of bimodality is conducted. The bimodal histograms are fitted by a mixed gaussian density and the corresponding optimum threshold is computed using Eqs. (8.2-18) and (8.2-19). No thresholds are computed for subimages without bimodal histograms; instead, these regions are assigned thresholds computed by interpolating the thresholds from neighboring subimages that are bimodal. The histograms for each subimage are shown in Fig. 8.12c, where the horizontal lines provide an indication of the relative scales of these histograms. At the end of this procedure a second interpolation is carried out in a point-by-point manner using neighboring thresholds, so that every point is assigned a threshold value, T(x, y). Note that this is a dynamic threshold since it depends on the spatial coordinates (x, y). A display of how T(x, y) varies as a function of position is shown in Fig. 8.12d.

Finally, a thresholded image is created by comparing every pixel in the original image against its corresponding threshold. The result of using this method in this particular case is shown in Fig. 8.12e. The improvement over a single, global threshold is evident. It is of interest to note that this method involves local analysis to establish the threshold for each cell, and that these local thresholds are interpolated to create a dynamic threshold which is finally used for segmentation.

Figure 8.12 (a) Image of mechanical parts showing local-region grid. (b) Result of global thresholding. (c) Histograms of subimages. (d) Display of dynamic threshold. (e) Result of dynamic thresholding. (From Rosenfeld and Kak [1982], courtesy of A. Rosenfeld.)

The approach developed above is applicable to the selection of multiple thresholds. Suppose that we can model a multimodal histogram as the sum of n probability density functions, so that

p(z) = P1 p1(z) + ... + Pn pn(z)     (8.2-21)

Then, the optimum thresholding problem may be viewed as classifying a given pixel as belonging to one of n possible categories. The minimum-error decision rule is now based on n functions of the form

di(z) = Pi pi(z),   i = 1, 2, ..., n     (8.2-22)

A given pixel with intensity z is assigned to the kth category if dk(z) > dj(z), j = 1, 2, ..., n; j ≠ k. As before, the optimum threshold between category k and category j, denoted by Tkj, is obtained by solving the equation

Pk pk(Tkj) = Pj pj(Tkj)     (8.2-23)

In practice, however, the real problem with using multiple histogram thresholds lies in establishing meaningful histogram modes.
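Returning to the example of Fig. 8.12, the following is a rough sketch of the subimage-based dynamic-threshold idea. The cell size, the simple contrast test used in place of a true bimodality test, and the two-means estimate standing in for the mixed gaussian fit of Eqs. (8.2-18) and (8.2-19) are all simplifying assumptions.

import numpy as np

def dynamic_threshold(img, cell=32, contrast_floor=10.0):
    img = np.asarray(img)
    h, w = img.shape
    T = np.full((int(np.ceil(h / cell)), int(np.ceil(w / cell))), np.nan)
    for i in range(T.shape[0]):
        for j in range(T.shape[1]):
            block = img[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell].astype(float)
            lo, hi = block.min(), block.max()
            if hi - lo < contrast_floor:
                continue                        # treated as non-bimodal; filled later
            t = 0.5 * (lo + hi)
            for _ in range(10):                 # crude two-means threshold estimate
                t = 0.5 * (block[block <= t].mean() + block[block > t].mean())
            T[i, j] = t
    # Fill cells without a threshold using the mean of the bimodal cells
    # (the text interpolates from neighboring subimages instead).
    T[np.isnan(T)] = np.nanmean(T)
    out = np.zeros_like(img, dtype=np.uint8)
    for i in range(T.shape[0]):                 # Eq. (8.2-10) against each cell's T(x, y)
        for j in range(T.shape[1]):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            out[sl] = (img[sl] > T[i, j]).astype(np.uint8)
    return out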
Threshold Selection Based on Boundary Characteristics. One of the most important aspects of selecting a threshold is the capability to reliably identify the mode peaks in a given histogram. This is particularly important for automatic threshold selection in situations where image characteristics can change over a broad range of intensity distributions. Based on the discussion in the last two sections, it is intuitively evident that the chances of selecting a "good" threshold should be considerably enhanced if the histogram peaks are tall, narrow, symmetric, and separated by deep valleys.

One approach for improving the shape of histograms is to consider only those pixels that lie on or near the boundary between objects and the background. One immediate and obvious improvement is that this makes histograms less dependent on the relative size between objects and the background. For instance, the intensity histogram of an image composed of a large, nearly constant background area and one small object would be dominated by a large peak due to the concentration of background pixels. If, instead, only the pixels on or near the boundary between the object and the background were used, the resulting histogram would have peaks whose heights are more balanced. In addition, the probability that a given pixel lies near the edge of an object is usually equal to the probability that it lies on the edge of the background, thus improving the symmetry of the histogram peaks. Finally, as will be seen below, using pixels that satisfy some simple measures based on gradient and Laplacian operators has a tendency to deepen the valley between histogram peaks.

The principal problem with the foregoing comments is that they implicitly assume that the boundary between objects and background is known. This information is clearly not available during segmentation, since finding a division between objects and background is precisely the ultimate goal of the procedures discussed here. However, we know from the material in Sec. 7.6.4 that an indication of whether a pixel is on an edge may be obtained by computing its gradient. In addition, use of the Laplacian can yield information regarding whether a given pixel lies on the dark (e.g., background) or light (object) side of an edge. The gradient, G[f(x, y)], at any point in an image is given by Eq. (7.6-38) or (7.6-39). Similarly, the Laplacian L[f(x, y)] is given by Eq. (7.6-47). Since the Laplacian is zero on the interior of an ideal ramp edge, we may expect in practice that the valleys of histograms formed from the pixels selected by a gradient/Laplacian criterion will be sparsely populated. This property produces the highly desirable deep valleys mentioned earlier in this section.
We may use these two quantities to form a three-level image, as follows:

s(x, y) = 0   if G[f(x, y)] < T
s(x, y) = +   if G[f(x, y)] ≥ T and L[f(x, y)] ≥ 0
s(x, y) = -   if G[f(x, y)] ≥ T and L[f(x, y)] < 0     (8.2-24)

where the symbols 0, +, and - represent any three distinct gray levels, and T is a threshold. Assuming a dark object on a light background, and with reference to Fig. 7.34b, the use of Eq. (8.2-24) produces an image s(x, y) in which all pixels which are not on an edge (as determined by G[f(x, y)] being less than T) are labeled "0," all pixels on the dark side of an edge are labeled "+," and all pixels on the light side of an edge are labeled "-." The symbols + and - in Eq. (8.2-24) are reversed for a light object on a dark background. Figure 8.13 shows the labeling produced by Eq. (8.2-24) for an image of a dark, underlined stroke written on a light background.

Figure 8.13 Image of a handwritten stroke coded by using Eq. (8.2-24). (From White and Rohrer [1983], ©IBM.)

The information obtained by using the procedure just discussed can be used to generate a segmented, binary image in which 1's correspond to objects of interest and 0's correspond to the background. First we note that the transition (along a horizontal or vertical scan line) from a light background to a dark object must be characterized by the occurrence of a - followed by a + in s(x, y). The interior of the object is composed of pixels which are labeled either 0 or +. Finally, the transition from the object back to the background is characterized by the occurrence of a + followed by a -. Thus we have that a horizontal or vertical scan line containing a section of an object has the following structure:

( ... )(-, +)(0 or +)(+, -)( ... )

where ( ... ) represents any combination of +, -, or 0. The innermost parentheses contain object points and are labeled 1; that is, we label with a 1 any sequence of (0 or +) bounded by (-, +) and (+, -). All other pixels along the same scan line are labeled 0.

Example: As an illustration of the concepts just discussed, consider Fig. 8.14a, which shows an image of an ordinary scenic bank check. Figure 8.15 shows the histogram as a function of gradient values for pixels with gradients greater than 5. It is noted that this histogram has the properties discussed earlier: it has two dominant modes which are symmetric, nearly of the same height, and separated by a distinct valley. Figure 8.14b shows the segmented image obtained by using Eq. (8.2-24) with T near the midpoint of the valley. The result was made binary by using the sequence analysis discussed above.

Figure 8.14 (a) Original image. (b) Segmented image. (From White and Rohrer [1983], ©IBM.)

Figure 8.15 Histogram of pixels with gradients greater than 5. (From White and Rohrer [1983], ©IBM.)
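The labeling of Eq. (8.2-24) can be sketched as follows; the simple finite-difference gradient, the 4-neighbor Laplacian, and the threshold value used here are assumptions standing in for Eqs. (7.6-38), (7.6-39), and (7.6-47), and a dark object on a light background is assumed.

import numpy as np

def label_boundary_pixels(f, T=30.0):
    f = np.asarray(f, dtype=float)
    gy, gx = np.gradient(f)                          # approximate partial derivatives
    grad = np.abs(gx) + np.abs(gy)                   # gradient magnitude (sum of absolutes)
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)   # 4-neighbor Laplacian
    s = np.zeros(f.shape, dtype="U1")
    s[:] = "0"                                       # not on an edge: G < T
    on_edge = grad >= T
    s[on_edge & (lap >= 0)] = "+"                    # dark side of the edge
    s[on_edge & (lap < 0)] = "-"                     # light side of the edge
    return s

Scan lines of the resulting label image can then be parsed for the (-, +) ... (+, -) pattern described above in order to mark interior object points with 1's.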
Thresholds Based on Several Variables. The techniques discussed thus far deal with thresholding a single intensity variable. In some applications it is possible to use more than one variable to characterize each pixel in an image, thus enhancing the capability to differentiate not only between objects and background, but also to distinguish between objects themselves. A notable example is color sensing, where red, green, and blue (RGB) components are used to form a composite color image. In this case, each pixel is characterized by three values and it becomes possible to construct a three-dimensional histogram. The basic procedure is the same as that used for one variable. For example, given three 16-level images corresponding to the RGB components of a color sensor, we form a 16 × 16 × 16 grid (cube) and insert in each cell of the cube the number of pixels whose RGB components have intensities corresponding to the coordinates defining the location of that particular cell. Each entry can then be divided by the total number of pixels in the image to form a normalized histogram.

The concept of threshold selection now becomes that of finding clusters of points in three-dimensional space, where each "tight" cluster is analogous to a dominant mode in a one-variable histogram. Suppose, for example, that we find two significant clusters of points in a given histogram, where one cluster corresponds to objects and the other to the background. Keeping in mind that each pixel now has three components and, therefore, may be viewed as a point in three-dimensional space, we can segment an image by using the following procedure: For every pixel in the image we compute the distance between that pixel and the centroid of each cluster. Then, if the pixel is closer to the centroid of the object cluster, we label it with a 1; otherwise, we label it with a 0. This concept is easily extendible to more pixel components and, certainly, to more clusters. The principal difficulty is that finding meaningful clusters generally becomes an increasingly complex task as the number of variables is increased. The reader interested in further pursuing techniques for cluster seeking can consult, for example, the book by Tou and Gonzalez [1974]. This and other related techniques for segmentation are surveyed by Fu and Mui [1981].

Example: As an illustration of the multivariable histogram approach, consider Fig. 8.16. Part (a) of this figure is a monochrome image of a color photograph. The original color image was composed of three 16-level RGB images. For our purposes, it is sufficient to note that the scarf and one of the flowers were a vivid red, and that the hair and facial colors were light and different in spectral characteristics from the window and other background features. Figure 8.16b was obtained by thresholding about a histogram cluster which was known to contain RGB components representative of flesh tones. It is important to note that the window, which in the monochrome image has a range of intensities close to those of the hair, does not appear in the segmented image because its multispectral characteristics are quite different. The fact that some small regions on top of the subject's hair appear in the segmented image indicates that their color is similar to flesh tones. Figure 8.16c was obtained by thresholding about a cluster close to the red axis. In this case only the scarf, the red flower, and a few isolated points appeared in the segmented image. The threshold used to obtain both results was a distance of one cell. Thus, any pixels whose components placed them within a unit distance from the centroid of the cluster under consideration were coded white. All other pixels were coded black.

Figure 8.16 Segmentation by multivariable threshold approach. (From Gonzalez and Wintz [1977], ©Addison-Wesley.)

8.2.3 Region-Oriented Segmentation

Basic Formulation. The objective of segmentation is to partition an image into regions. In Sec. 8.2.1 we approached this problem by finding boundaries between regions based on intensity discontinuities, while in Sec. 8.2.2 segmentation was accomplished via thresholds based on the distribution of pixel properties, such as intensity or color.
In this section we discuss segmentation techniques that are based on finding the regions directly.

Let R represent the entire image region. We may view segmentation as a process that partitions R into n subregions, R1, R2, ..., Rn, such that

1. R1 ∪ R2 ∪ ... ∪ Rn = R
2. Ri is a connected region, i = 1, 2, ..., n
3. Ri ∩ Rj = ∅ for all i and j, i ≠ j
4. P(Ri) = TRUE for i = 1, 2, ..., n
5. P(Ri ∪ Rj) = FALSE for i ≠ j

where P(Ri) is a logical predicate defined over the points in set Ri, and ∅ is the null set. Condition 1 indicates that the segmentation must be complete; that is, every pixel must be in a region. The second condition requires that points in a region must be connected (see Sec. 7.5.2 regarding connectivity). Condition 3 indicates that the regions must be disjoint. Condition 4 deals with the properties that must be satisfied by the pixels in a segmented region; one simple example is P(Ri) = TRUE if all pixels in Ri have the same intensity. Finally, condition 5 indicates that regions Ri and Rj are different in the sense of predicate P. The use of these conditions in segmentation algorithms is discussed in the following subsections.

Region Growing by Pixel Aggregation. As implied by its name, region growing is a procedure that groups pixels or subregions into larger regions. The simplest of these approaches is pixel aggregation, where we start with a set of "seed" points and from these grow regions by appending to each seed point those neighboring pixels that have similar properties (e.g., intensity, texture, or color). As a simple illustration of this procedure consider Fig. 8.17a, where the numbers inside the cells represent intensity values. Let the points with coordinates (3, 2) and (3, 4) be used as seeds. Using two starting points will result in a segmentation consisting of, at most, two regions: R1 associated with seed (3, 2) and R2 associated with seed (3, 4). The property P that we will use to include a pixel in either region is that the absolute difference between the intensity of the pixel and the intensity of the seed be less than a threshold T (any pixel that satisfies this property simultaneously for both seeds is arbitrarily assigned to region R1). The result obtained using T = 3 is shown in Fig. 8.17b. In this case, the segmentation consists of two regions, where the points in R1 are denoted by a's and the points in R2 by b's. It is noted that any starting point in either of these two resulting regions would have yielded the same result. If, on the other hand, we had chosen T = 8, a single region would have resulted, as shown in Fig. 8.17c.

Figure 8.17 Example of region growing using known starting points. (a) Original image array. (b) Segmentation result using an absolute difference of less than 3 between intensity levels. (c) Result using an absolute difference less than 8. (From Gonzalez and Wintz [1977], ©Addison-Wesley.)

The preceding example, while simple in nature, points out some important problems in region growing. Two immediate problems are the selection of initial seeds that properly represent regions of interest and the selection of suitable properties for including points in the various regions during the growing process.
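A minimal sketch of region growing by pixel aggregation is given below: breadth-first growth from each seed, appending 4-neighbors whose absolute intensity difference from the seed value is below T, which is the criterion used in the example of Fig. 8.17. The small test array and seed coordinates are hypothetical.

from collections import deque

def grow_regions(img, seeds, T):
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]        # 0 means unassigned
    for label, (sr, sc) in enumerate(seeds, start=1):
        seed_val = img[sr][sc]
        queue = deque([(sr, sc)])
        labels[sr][sc] = label
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-neighbors
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0
                        and abs(img[nr][nc] - seed_val) < T):
                    labels[nr][nc] = label            # ties go to the earlier seed
                    queue.append((nr, nc))
    return labels

image = [[2, 1, 3, 5, 0],        # hypothetical array in the spirit of Fig. 8.17a
         [0, 6, 7, 1, 5],
         [8, 7, 0, 2, 1],
         [6, 7, 7, 0, 7],
         [6, 6, 5, 0, 1]]
print(grow_regions(image, seeds=[(2, 1), (2, 3)], T=3))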
Selecting a set of one or more starting points can often be based on the nature of the problem. For example, in military applications of infrared imaging, targets of interest are hotter (and thus appear brighter) than the background. Choosing the brightest pixels is then a natural starting point for a region-growing algorithm. When a priori information is not available, one may proceed by computing at every pixel the same set of properties that will ultimately be used to assign pixels to regions during the growing process. If the result of this computation shows clusters of values, then the pixels whose properties place them near the centroid of these clusters can be used as seeds. For instance, in the example given above, a histogram of intensities would show that points with intensity of 1 and 7 are the most predominant.

The selection of similarity criteria is dependent not only on the problem under consideration, but also on the type of image data available. For example, the analysis of land-use satellite imagery is heavily dependent on the use of color. This problem would be significantly more difficult to handle by using monochrome images alone. Unfortunately, the availability of multispectral and other complementary image data is the exception, rather than the rule, in industrial computer vision. Typically, region analysis must be carried out using a set of descriptors based on intensity and spatial properties (e.g., moments, texture) of a single image source. A discussion of descriptors useful for region characterization is given in Sec. 8.3.

It is important to note that descriptors alone can yield misleading results if connectivity or adjacency information is not used in the region-growing process. An illustration of this is easily visualized by considering a random arrangement of pixels with only three distinct intensity values. Grouping pixels with the same intensity to form a "region" without paying attention to connectivity would yield a segmentation result that is meaningless in the context of this discussion.

Another important problem in region growing is the formulation of a stopping rule. Basically, we stop growing a region when no more pixels satisfy the criteria for inclusion in that region. We mentioned above criteria such as intensity, texture, and color, which are local in nature and do not take into account the "history" of region growth. Additional criteria that increase the power of a region-growing algorithm incorporate the concept of size, likeness between a candidate pixel and the pixels grown thus far (e.g., a comparison of the intensity of a candidate and the average intensity of the region), and the shape of a given region being grown. The use of these types of descriptors is based on the assumption that a model of expected results is, at least, partially available.
Region Splitting and Merging. The procedure discussed above grows regions starting from a given set of seed points. An alternative is to subdivide an image initially into a set of arbitrary, disjoint regions and then merge and/or split the regions in an attempt to satisfy the conditions stated at the beginning of this section. A split and merge algorithm which iteratively works toward satisfying these constraints may be explained as follows.

Let R represent the entire image region, and select a predicate P. Assuming a square image, one approach for segmenting R is to successively subdivide it into smaller and smaller quadrant regions such that, for any region Ri, P(Ri) = TRUE. The procedure starts with the entire region R. If P(R) = FALSE, we divide the image into quadrants. If P is FALSE for any quadrant, we subdivide that quadrant into subquadrants, and so on. This particular splitting technique has a convenient representation in the form of a so-called quadtree (i.e., a tree in which each node has exactly four descendants). A simple illustration is shown in Fig. 8.18. It is noted that the root of the tree corresponds to the entire image and that each node corresponds to a subdivision. In this case, only R4 was subdivided further.

Figure 8.18 (a) Partitioned image. (b) Corresponding quadtree.

If we used only splitting, it is likely that the final partition would contain adjacent regions with identical properties. This may be remedied by allowing merging as well as splitting. In order to satisfy the segmentation conditions stated earlier, we merge only adjacent regions whose combined pixels satisfy the predicate P; that is, we merge two adjacent regions Rj and Rk only if P(Rj ∪ Rk) = TRUE. The preceding discussion may be summarized by the following procedure, in which, at any step, we

1. Split into four disjoint quadrants any region Ri for which P(Ri) = FALSE
2. Merge any adjacent regions Rj and Rk for which P(Rj ∪ Rk) = TRUE
3. Stop when no further merging or splitting is possible

A number of variations of this basic theme are possible (Horowitz and Pavlidis [1974]). For example, one possibility is to split the image initially into a set of square blocks. Further splitting is carried out as above, but merging is initially limited to groups of four blocks which are descendants in the quadtree representation and which satisfy the predicate P. When no further mergings of this type are possible, the procedure is terminated by one final merging of regions satisfying step 2 above. At this point, the regions that are merged may be of different sizes. The principal advantage of this approach is that it uses the same quadtree for splitting and merging, until the final merging step.

Example: An illustration of the split and merge algorithm discussed above is shown in Fig. 8.19. The image under consideration consists of a single object and background. For simplicity, we assume that both the object and background have constant intensities and that P(Ri) = TRUE if all pixels in Ri have the same intensity. Then, for the entire image region R, it follows that P(R) = FALSE, so the image is split as shown in Fig. 8.19a. In the next step, only the top left region satisfies the predicate, so it is not changed, while the other three quadrant regions are split into subquadrants, as shown in Fig. 8.19b. At this point several regions can be merged, with the exception of the two subquadrants that include the lower part of the object; these do not satisfy the predicate and must be split further. The results of the split and merge operation are shown in Fig. 8.19c. At this point all regions satisfy P, and merging the appropriate regions from the last split operation yields the final, segmented result shown in Fig. 8.19d.
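The splitting phase of this procedure can be sketched as follows, using the same predicate as in the example above (all pixels in a region have the same intensity). Merging of adjacent homogeneous leaves is omitted for brevity, and single-row or single-column regions are simply accepted as leaves.

import numpy as np

def split_regions(img, r0, c0, h, w, leaves):
    region = img[r0:r0 + h, c0:c0 + w]
    if region.min() == region.max() or h == 1 or w == 1:   # P(R) = TRUE or cannot split
        leaves.append((r0, c0, h, w))
        return
    h2, w2 = h // 2, w // 2
    for dr, dc, hh, ww in ((0, 0, h2, w2), (0, w2, h2, w - w2),
                           (h2, 0, h - h2, w2), (h2, w2, h - h2, w - w2)):
        split_regions(img, r0 + dr, c0 + dc, hh, ww, leaves)

def quadtree_leaves(img):
    img = np.asarray(img)
    leaves = []
    split_regions(img, 0, 0, img.shape[0], img.shape[1], leaves)
    return leaves          # each leaf (r, c, height, width) is a homogeneous block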
Figure 8.19 Example of split and merge algorithm.

8.2.4 The Use of Motion

Motion is a powerful cue used by humans and other animals in extracting objects of interest from the background. In robot vision, motion arises in conveyor belt applications, by motion of a sensor mounted on a moving arm or, more rarely, by motion of the entire robot system. In this subsection we discuss the use of motion for segmentation from the point of view of image differencing.

Basic Approach. One of the simplest approaches for detecting changes between two image frames f(x, y, ti) and f(x, y, tj), taken at times ti and tj, respectively, is to compare the two images on a pixel-by-pixel basis. One procedure for doing this is to form a difference image. Suppose that we have a reference image containing only stationary components. If we compare this image against a subsequent image having the same environment but including a moving object, the difference of the two images will cancel the stationary components, leaving only nonzero entries that correspond to the nonstationary image components.

A difference image between two images taken at times ti and tj may be defined as

dij(x, y) = 1   if |f(x, y, ti) - f(x, y, tj)| > θ
dij(x, y) = 0   otherwise     (8.2-25)

where θ is a threshold. It is noted that dij(x, y) has a 1 at spatial coordinates (x, y) only if the intensity difference between the two images is appreciably different at those coordinates, as determined by the threshold θ. In dynamic image analysis, all pixels in dij(x, y) with value 1 are considered the result of object motion. This approach is applicable only if the two images are registered and the illumination is relatively constant within the bounds established by θ. In practice, 1-valued entries in dij(x, y) often arise as a result of noise. Typically, these will be isolated points in the difference image, and a simple approach for their removal is to form 4- or 8-connected regions of 1's in dij(x, y) and then ignore any region that has less than a predetermined number of entries. This may result in ignoring small and/or slow-moving objects, but it enhances the chances that the remaining entries in the difference image are truly due to motion.
Figure 8.20 (a) Image taken at time ti. (b) Image taken at time tj. (c) Difference image. (From Jain [1981], ©IEEE.)

The foregoing concepts are illustrated in Fig. 8.20. Part (a) of this figure shows a reference image frame taken at time ti and containing a single object of constant intensity that is moving with uniform velocity over a background surface, also of constant intensity. Figure 8.20b shows a current frame taken at time tj, and Fig. 8.20c shows the difference image computed using Eq. (8.2-25) with a threshold larger than the constant background intensity. It is noted that two disjoint regions were generated by the differencing process: one region is the result of the leading edge and the other of the trailing edge of the moving object.

Accumulative Differences. As indicated above, a difference image will often contain isolated entries that are due to noise. Although the number of these entries can be reduced or completely eliminated by a thresholded connectivity analysis, this filtering process can also remove small or slow-moving objects. The approach discussed in this section addresses this problem by considering changes at a pixel location over several frames, thus introducing a "memory" into the process. The basic idea is to ignore those changes which occur only sporadically over a frame sequence and can, therefore, be attributed to random noise.

Consider a sequence of image frames f(x, y, t1), f(x, y, t2), ..., f(x, y, tn), and let f(x, y, t1) be the reference image. An accumulative difference image is formed by comparing this reference image with every subsequent image in the sequence. A counter for each pixel location in the accumulative image is incremented every time there is a difference at that pixel location between the reference and an image in the sequence. Thus, when the kth frame is being compared with the reference, the entry in a given pixel of the accumulative image gives the number of times the intensity at that position was different from the corresponding pixel value in the reference image. Differences are established, for example, by use of Eq. (8.2-25).

The foregoing concepts are illustrated in Fig. 8.21. Parts (a) through (e) of this figure show a rectangular object (denoted by 0's) that is moving to the right with constant velocity of 1 pixel/frame. The images shown represent instants of time corresponding to one pixel displacement. Figure 8.21a is the reference image frame, Figs. 8.21b to d are frames 2 to 4 in the sequence, and Fig. 8.21e is the eleventh frame. Figures 8.21f to i are the corresponding accumulative images, which may be explained as follows. In Fig. 8.21f, the left column of 1's is due to differences between the object in Fig. 8.21a and the background in Fig. 8.21b. The right column of 1's is caused by differences between the background in the reference image and the leading edge of the moving object. By the time of the fourth frame (Fig. 8.21d), the first nonzero column of the accumulative difference image shows three counts, indicating three total differences between that column in the reference image and the corresponding column in the subsequent frames. Finally, Fig. 8.21i shows a total of 10 (represented by "A" in hexadecimal) changes at that location.

Figure 8.21 (a) Reference image frame. (b) to (e) Frames 2, 3, 4, and 11. (f) to (i) Accumulative difference images for frames 2, 3, 4, and 11. (From Jain [1981], ©IEEE.)
The other entries in that figure are explained in a similar manner.

It is often useful to consider three types of accumulative difference images: absolute (AADI), positive (PADI), and negative (NADI). The latter two quantities are obtained by using Eq. (8.2-25) without the absolute value and by using the reference frame in place of f(x, y, ti). Assuming that the intensities of an object are numerically greater than the background, if the difference is positive, it is compared with a positive threshold; if it is negative, the difference is compared with a negative threshold. This definition is reversed if the intensities of the object are less than the background.

Example: Figure 8.22a to c show the AADI, PADI, and NADI for a 20 × 20 pixel object whose intensity is greater than the background, and which is moving with constant velocity in a southeasterly direction. It is important to note that the spatial growth of the PADI stops when the object is displaced from its original position. In other words, when an object whose intensities are greater than the background is completely displaced from its position in the reference image, there will be no new entries generated in the positive accumulative difference image. Thus, when its growth stops, the PADI gives the initial location of the object in the reference frame. As will be seen below, this property can be used to advantage in creating a reference from a dynamic sequence of images. It is also noted in Fig. 8.22 that the AADI contains the regions of both the PADI and NADI, and that the entries in these images give an indication of the speed and direction of object movement. The images in Fig. 8.22 are shown in intensity-coded form in Fig. 8.23.

Figure 8.22 (a) Absolute, (b) positive, and (c) negative accumulative difference images for a 20 × 20 pixel object with intensity greater than the background and moving in a southeasterly direction. (From Jain [1983], courtesy of R. Jain.)

Figure 8.23 Intensity-coded accumulative difference images for Fig. 8.22. (a) AADI, (b) PADI, and (c) NADI. (From Jain [1983], courtesy of R. Jain.)

Establishing a Reference Image. A key to the success of the techniques discussed in the previous two sections is having a reference image against which subsequent comparisons can be made. As indicated earlier, the difference between two images in a dynamic imaging problem has the tendency to cancel all stationary components, leaving only image elements that correspond to noise and to the moving objects. The noise problem can be handled by the filtering approach discussed earlier or by forming an accumulative difference image.
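The following sketch implements the difference image of Eq. (8.2-25) and the three accumulative difference images discussed above; the threshold value and the assumption that object intensities exceed the background are illustrative.

import numpy as np

def difference_image(ref, frame, theta):
    # Eq. (8.2-25): 1 where the absolute intensity difference exceeds theta
    return (np.abs(ref.astype(int) - frame.astype(int)) > theta).astype(np.uint8)

def accumulative_differences(frames, theta):
    ref = frames[0].astype(int)                  # first frame taken as the reference
    aadi = np.zeros(ref.shape, dtype=int)
    padi = np.zeros(ref.shape, dtype=int)
    nadi = np.zeros(ref.shape, dtype=int)
    for frame in frames[1:]:
        diff = ref - frame.astype(int)           # signed difference to the reference
        aadi += (np.abs(diff) > theta)           # absolute accumulative difference
        padi += (diff > theta)                   # positive (object brighter than background)
        nadi += (diff < -theta)                  # negative
    return aadi, padi, nadi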
In practice, it is not always possible to obtain a reference image with only stationary elements, and it becomes necessary to build a reference from a set of images containing one or more moving objects. This is particularly true in situations describing busy scenes or in cases where frequent updating is required. One procedure for generating a reference image is as follows. Suppose that we consider the first image in a sequence to be the reference image. When a nonstationary component has moved completely out of its position in the reference frame, the corresponding background in the present frame can be duplicated in the location originally occupied by the object in the reference frame. When all moving objects have moved completely out of their original positions, a reference image containing only stationary components will have been created. Object displacement can be established by monitoring the growth of the PADI.

Example: An illustration of the approach just discussed is shown in Figs. 8.24 and 8.25. Figure 8.24 shows two image frames of a traffic intersection. The first image is considered the reference, and the second depicts the same scene some time later. There are two principal moving objects: a white car in the middle of the picture and a pedestrian on the lower left. Removal of the moving automobile is shown in Fig. 8.25a. The pedestrian is removed in Fig. 8.25b. The latter image can be used as a reference.

Figure 8.24 Two image frames of a traffic scene. The principal moving features are the automobile moving from left to right and a pedestrian crossing the street in the bottom left of the picture. (From Jain [1981], ©IEEE.)

Figure 8.25 (a) Image with automobile removed and background restored. (b) Image with pedestrian removed and background restored. (From Jain [1981], ©IEEE.)

8.3 DESCRIPTION

The description problem in vision is one of extracting features from an object for the purpose of recognition. Ideally, descriptors should be independent of object size, location, and orientation and should contain enough discriminatory information to uniquely identify one object from another. Description is a central issue in the design of vision systems in the sense that descriptors affect not only the complexity of recognition algorithms but also their performance. In the sections that follow, we subdivide descriptors into three principal categories: boundary descriptors, regional descriptors, and descriptors suitable for representing three-dimensional structures.
if a cell is more than a specified amount (usually 50 percent) inside the boundary. 8. where the coding was started at the dot and proceeded in a clockwise direction.or 8-connectivity.26a.2. In Secs.27a.27c. but the codes shown in Fig. An example of this approach using four directions is shown in Fig. we code the boundary between the two regions using the direction CAD a>' a)) codes given in Fig.26. 8. and 8. Figure 8. regional descriptors. o.-. it is possible to specify chain codes with more directions.1.3. connecting the endpoints of each segment with a straight line.4. and three bits are needed for the 8-code. It is noted that two bits are sufficient to represent all directions in the 4-code. ment having the same number of pixels). 8. Then. CD. . The dot in (c) indicates the starting point. to normalize the code by a straightforward procedure: Given a chain code generated by starting in an arbitrary position. we treat it as a circular sequence of direction numbers and redefine the starting point so that the resulting sequence of numbers forms an integer of minimum magnitude. . We can also normalize for rotation by using the first difference of the chain code. It is possible.27 Steps in obtaining a chain code.HIGHER-LEVEL VISION 397 (a) (b) 0 i 3 1 3 3 3 0 0 13 7 3 2 9 T. however. I I Chain code: 03300 1 1 033 323 (c) Figure 8. r--+ in. +-+ Lei +O-+ tiny o. instead of the code itself. j It is important to note that the chain code of a given boundary depends upon the starting point. The difference is computed simply by counting (in a counterclockwise manner) the number of directions that separate two C1. + om. The preceding normalizations are exact only if the boundaries themselves are invariant to rotation and scale change. CONTROL. adjacent elements of the code. '. Size normalization can be achieved by subdividing all object boundaries into the same number of equal segments and adjusting the code segment lengths to fit these subdivision.' C/) .. Signatures. $.28 Generation of chain code by boundary subdivision. this is seldom the case.27 along the principal axes of the object to be coded.29.O. then the first element of the difference is computed using the transition between the last and first components of the chain. .28. say. In this example the result is 33133030.398 ROBOTICS. For instance. AND INTELLIGENCE 130322211 Figure 8.09 C]." (3. The startingpoint problem can be solved by first obtaining the chain code of the boundary and 't3 then using the approach discussed in the previous section. VISION. angle is. This effect can be reduced by selecting chain elements which are large in proportion to the distance between pixels in the digitized image or by orienting the grid in Fig. although quite different .. In practice. -fl . [1975]). yam. SENSING.fl . unit maximum value. This is discussed below in the section on shape numbers. For instance.. traverse the boundary and plot the angle between a line tangent to the boundary and a reference line as a function of position along the boundary (Ambler et al.. A signature is a one-dimensional functional representation of a boundary. not the only way to generate a signature. of course.. There are a number of ways to generate signatures. 8. the same object digitized in two different orientations will in general have different boundary shapes. Distance vs. for example. We could. as illustrated in Fig. with the degree of dissimilarity being proportional to image resolution. If we treat the code as a circular sequence. 
One of the simplest is to plot the distance from the centroid to the boundary as a function of angle. 8. 'C7 Ll. Signatures generated by this approach are obviously dependent on size and starting point. The resulting signature. the first difference of the 4-direction chain code 10103322 is 3133030. as illustrated in Fig. 8. Size normalization can be achieved simply by normalizing the r(O) curve to. horizontal segments in the curve would correspond to straight lines along the boundary since the tangent angle would be constant there.. r(0) is constant. i = 1. from the r(0) curve. An approach often used to characterize a signature is to compute its moments.29 Two simple boundary shapes and their corresponding distance vs..K. A variation of this approach is to use the so-called slope density function as a signature (Nahin [1974]). angle signatures. Suppose that we treat a as a discrete random variable denoting amplitude variations in a signature. This function is simply a histogram of tangent angle values. where K is the number of discrete amplitude increments of a. The nth moment of a about its mean is defined as K . the slope density function would respond strongly to sections of the boundary with constant tangent angles (straight or nearly straight segments) and have deep valleys in sections producing rapidly varying angles (corners or other sharp inflections). . This problem. would carry information about basic shape characteristics. Since a histogram is a measure of concentration of values. r(0) = A sec 0. . denote the corresponding histogram. however.HIGHER-LEVEL VISION 399 A A I a 4 r I I I 3r 4 0 is 4 I I I I 1 I I I I I I I 2 it 2 7a 4 27r 7r 4 2 r LT 4 Tf 0 5a 4 2 3a 7a 4 27f (a) (b) Figure 8.). For instance. is generally easier because we are now dealing with one-dimensional functions.3-1) . and let p(a. Once a signature has been obtained.d s. we are still faced with the problem of describing it in a way that will allow us to differentiate between signatures corresponding to different boundary shapes. while in (b).m)'p(ai) (8. In (a). f= (ai . 2... a long straight line were being tracked and it turned a corner. One approach is to merge points along a boundary until the least-squares error line fit of the points merged thus far exceeds . 8.3-2) The quantity m is recognized as the mean or average value of a and µ2 as its variance. thus producing a polygon of minimum perimeter which fits in the geometry established by the cell strip. Although this problem is in general not trivial and can very quickly turn into a time-consuming iterative search. and we can think of the object boundary as a rubberband contained within the walls.. suppose that we enclose a given boundary by a set of concatenated cells. for instance. then the error in each cell between the original boundary and the rubberband approximation would be at most hd. '"' mss' U. The procedure is best explained by means of an example. A digital boundary can be approximated with arbitrary accuracy by a polygon. One of the principal difficulties with this method is that vertices do not generally correspond to inflections (such as corners) in the boundary because a new line is not started until the error threshold is exceeded.400 ROBOTICS: CONTROL.y a. If the cells are chosen so that each cell encompasses only one point on the boundary.30. If we now allow the rubberband to shrink. a number (depending on the threshold) of points past the corner would be absorbed before the threshold is exceeded. 
merging new points along the boundary until the error again exceeds the threshold. We begin the discussion with a method proposed by Sklansky et al. Merging techniques based on error or other criteria have been applied to the problem of polygonal approximation. 8. With reference to Fig. 8. Only the first few moments are generally required to differentiate between signatures of clearly distinct shapes. AND INTELLIGENCE where x m= i=I aip(ai) (8. For a closed curve. If. as shown in Fig. where d is the distance between pixels. the goal of a polygonal approximation is to capture the "essence" of the boundary shape with the fewest possible polygonal segments.30a. In practice. it will take the shape shown in Fig. and the procedure is repeated. Several of these techniques are presented in this section.0 0-y a preset threshold. SENSING. This error can be reduced in half by forcing each cell to be centered on its corresponding pixel. We can visualize this enclosure as consisting of two walls corresponding to the outside and inside boundaries of the strip of cells.- . At the end of the procedure the intersections of adjacent line segments form the vertices of a polygon. the approximation is exact when the number of segments in the polygon is equal to the number of points in the boundary so that each pair of adjacent points defines a segment in the polygon. the parameters of the line are stored. [1972] for '-< finding minimum-perimeter polygons. there are a number of polygonal approximation techniques whose modest complexity and processing requirements makes them well-suited for robot vision applications. When this occurs.30b. VISION. Polygonal Approximations. the error is set to zero. the furthest point becomes a vertex.. 8. we might require that the maximum perpendicular distance from a boundary segment to the line joining its two endpoints not exceed a preset threshold.25 times the length of line ab. Since no point in the new boundary segments has a perpendicular distance (to its corresponding straight-line segment) which exceeds this threshold. to use splitting along with merging to alleviate this difficulty. the procedure terminates with the polygon shown in Fig. An example is shown in Fig. '.31. One approach to boundary segment splitting is to successively subdivide a seg- ment into two parts until a given criterion is satisfied. For a closed boundary.30 (a) Object boundary enclosed by cells. (b) Minimum-perimeter polygon. It is possible.object boundary. Part (a) of this figure shows an . and Fig. Fl- . thus subdividing the initial segment into two subsegments.. If it does.31c shows the result of using the splitting procedure with a threshold equal to 0. point d has the largest distance in the bottom segment.HIGHER-LEVEL VISION 401 (a) (b) Figure 8. Similarly. :T' v.' 5'r1 `W+ `C1 't7 COO . 8.31b shows a subdivision of this boundary (solid line) about its furthest points. however. Figure 8. Fdr instance. rte'..31d.. The point marked c has the largest perpendicular distance from the top segment to line ab. the best starting pair of points is usually the two furthest points in the boundary. 8. This approach has the advantage that it "seeks" prominent inflection points. Although the first difference of a chain code is independent of rotation. depending on the starting point. Figure 8. Note that the first differences were computed by treating the chain codes as a circular sequence in the manner discussed earlier. The order.27a.31 (a) Original boundary. 8. and corresponding shape numbers. 
the coded boundary in general will depend on the orientation of the coding grid shown in Fig. based on the 4directional code of Fig. The shape number of such a boundary. In most cases a unique shape number will be obtained by aligning the chain-code grid with the sides of the basic rectanpvm mss' Coo :1) `n.26a is defined as the first difference of smallest magnip.. A chain-coded boundary has several first differences. and 8. The ratio of the major to minor axis is called the eccentricity of the boundary. AND INTELLIGENCE a b (a) (b) a (c) (d) Figure 8. (b) Boundary subdivided along furthest points. and the rectangle just described is called the basic rectangle. The minor axis is perpendicular to the major axis and of length such that a box could be formed that just encloses the boundary. One way to normalize the grid orientation is as follows. It is noted that n is even for a closed boundary. A comprehensive discussion of these methods is given by Pavlidis [1977]. of a shape number is defined as the number of digits in its representation. The major axis of a boundary is the straight-line segment joining the two points furthest away from each other. (d) Resulting polygon.. and that its value limits the number of possible different shapes. We point out before leaving this section that a considerable amount of work has been done in the development of techniques which combine merging and splitting. CAD :-s ^a7 3"' . CIO "t1 CAD r-' . along with their chain-code representations.402 ROBOTICS: CONTROL. (c) Joining of vertices by straight line segments. n. first differences.. 6. 8. SENSING.32 shows all the shapes of orders 4. VISION. tude. CAD Shape Numbers. we find the rectangle of order n whose eccentricity best approximates that of the basic rectangle. and so we subdivide the basic rectangle as shown in Fig.26. 8. In order to obtain a shape number of this order we follow the steps discussed above. and 8. we specify a rectangle of order lower than n and repeat the procedure until the resulting shape number is of order n. if n = 12. those whose perimeter length is 12) are 2 x 4. as shown in Fig.33c. If the eccentricity of the 2 x 4 rectangle best matches the eccentricity of the basic rectangle for a given boundary. boundaries with depressions comparable with this spacing will sometimes yield shape numbers of order greater than n.33a.!? may 4-. given a desired shape order. Example: Suppose that we specify n = 18 for the boundary shown in Fig. we establish a 2 x 4 grid centered on the basic rectangle and use the procedure already outlined to obtain the chain code. In practice. Finally. r-. 6. I "'h (]..HIGHER-LEVEL VISION 403 IOrder 4 Order 6 I1 Chain code Difference 0321 003221 3333 3333 Order 8 303303 033033 Shape number: } IL Chain code. all the rectangles of order 12 (i. as indicated above. as shown in Fig. Freeman and Shapira [1975] give an algorithm for finding the basic rectangle of a closed. In this case. 3 x 3.e. ear C]. 8. gle.33b.32 All shapes of order 4. we obtain the chain code and use its first difference to compute the shape number. CAD 'CS CAD ~i. . and the dot indicates the starting point.: F". and use this new rectangle to establish the grid size. The closest rectangle of order 18 is a 3 x 6 rectangle. 8.33d. chain-coded curve. The directions are from Fig. N-3' Although the order of the resulting shape number will usually be equal to n because of the way the grid spacing was selected. . For example. 
003322 1 1 030322 1 1 00032221 Difference Shape number: 30303030 03030303 33 1 33030 03033 1 33 30033003 00330033 Figure 8. and 1 x 5. First we find the basic rectangle. The shape number follows from the first difference of this code. 8. where it is noted that the chain code directions are aligned with the resulting grid. 8. 34. u = 0..r ()may F(u).33 Steps in the generation of a shape number. (7. 8. 1. AND INTELLIGENCE (a) (b) Chain code: 00003003223222 1 2 1 (c) 1 Difference: 30003 103301 3003 1 30 Shape number: 0 0 0 3 1 0330 1 3003 1 303 (d) Figure 8..1. CONTROL. F(u) can be (CD of points along the boundary forms a function whose Fourier transform t. 2. If.404 ROBOTICS. SENSING. as discussed in Sec.1. The sequence computed using an FFT algorithm. y) is reduced to the one-dimensional complex number x + jy. . is .U4 (i4 `. then each two-dimensional boundary point our (x.6. The motivation for this approach is that only the first few components of F(u) are generally required . Suppose that M points on a boundary are available.. 7.6-4) can often be used to describe a two-dimensional boundary. If M is an integer power of 2.. Fourier Descriptors. . one-dimensional Fourier transform given in Eq. The discrete. as shown in Fig. M . VISION. we view this boundary as being in the complex plane. rotation. this is equivalent to multiplying (scaling) the boundary by the same factor."3 . the starting point traverses the entire contour once.) !-! .-e Figure 8. To change the size of a contour we simply multiply the components of F(u) by a constant. (From Persoon and Fu [1977]. . Finally.HIGHER-LEVEL VISION 405 Imaginary axis Real axis Figure 8. to distinguish between shapes that are reasonably distinct. ©IEEE. Rotation by an angle 0 is similarly handled by multiplying the elements of F(u) by `"- exp (j0) . ?'° The Fourier transform is easily normalized for size. Due to the linearity of the Fourier transform pair. the objects shown in Fig. .35 Two shapes easily distinguishable by Fourier descriptors.! .. 27r].35 can be differentiated by using less than 10 percent of the elements of the complete Fourier transform of their boundaries.34 Representation of a region boundary in the frequency domain. As T goes from 0 to 2ir. where T is in the interval [0. This information can be used as the basis for normalization (Gonzalez and Wintz [1977]).-. and starting point on the boundary. 8. it can be shown that shifting the starting point of the contour in `o' the spatial domain corresponds to multiplying the kth component of F(u) by exp (jkT). For example. A number of other regional descriptors are discussed 'U-' below. ten gyp. as discussed in Sec. examples are shown in Fig. The identification of objects or regions in an image can often be accomplished.3. defined as perimeter 2/area. coarseness. The ratio of the lengths of these axes. and regularity (some "'3 . A connected region is a region in which all pairs of points can be connected p. and so on..1) and are useful for establishing the orientation of an object. respectively. called the eccentricity of the region.nom-.1. As an example. Statistical approaches yield characterizations of textures as being smooth. It is thus important to note that the methods developed in both of these sections are applicable to region descriptions. the Euler numbers of the letters A and B are 0 and .. SENSING. it is useful to consider the Euler number as a descriptor. as indicated in the following discussion.. via '"' x'10 chi . is also an . grainy. 
its most frequent application is in establishing a measure of compactness of a region.. at least partially.3.406 ROBOTICS: CONTROL.. It is of interest to note that compactness is a dimensionless quantity (and thus is insensitive to scale changes) and that it is minimum for a disk-shaped region. Although no formal definition of texture exists.ti- '007 Some Simple Descriptors. AND INTELLIGENCE 8. deal with the arrangement of image primitives. 8.36). .. we intuitively view this descriptor as providing quantitative measures of properties such as smoothness. One of the simplest approaches for describing texture is to use moments of the intensity histogram of an image or region. For a set of connected regions. coarse. The major and minor axes of a region are defined in terms of its boundary (see Sec. The Euler number is defined simply as the number of connected regions minus the number of holes.2 Regional Descriptors A region of interest can be described by the shape of its boundary. Q. the use of these descriptors is limited to situations in which the objects of interest are so distinct that a few global descriptors are sufficient for their characterization. Structural techniques. 8.. As might be expected. A number of existing industrial vision systems are based on regional descriptors which are rather simple in nature and thus are attractive from a computational point of view. Texture.o .t Cow important global descriptor of its shape. The two principal approaches to texture description are statistical and structural.3. by a curve lying entirely in the region. The area of a region is defined as the number of pixels contained within its boundary. This is a useful descriptor when the viewing geometry is fixed and objects are always analyzed approximately the same distance from the camera. VISION. Let z be a random variable denoting r-. by the use of texture descriptors. on the other hand. such as the description of texture based on regularly spaced parallel lines. 8. some of which may have holes. . or by its internal characteristics.1. A typical application is the recognition of objects moving on a conveyor belt past a vision station.. Although the perimeter is sometimes used as a descriptor. The perimeter of a region is the length of its boundary. 3-4) . As indicated in Sec. the nth moment of z about the mean is defined as Mt.36 Examples of (a) smooth. L µ. where L is the number of distinct intensity levels.. . 2.HIGHER-LEVEL VISION 407 Figure 8.. and let p(zi ). and (c) regular texture.1.(z) _ (zi - m is the mean value of z (i. L.3. discrete image intensity.e... i = 1. the average image intensity): L m= zip(zi) (8. be the corresponding histogram. (b) coarse. 8. . It is important to note that the size of A is determined strictly by the number of distinct intensities in the input image.408 ROBOTICS: CONTROL. The second moment [also called the variance and denoted by a2(z)] is of particular importance in texture description.-a R = 1 - 1 (8.° . z2 = 1. a1 (top left) is the number of times that a point with intensity level z1 = 0 appears one pixel location below and to the right of a pixel with the same intensity. as follows: 0 1 0 1 0 0 1 1 2 1 1 2 1 2 1 0 2 0 0 1 0 1 0 0 0 If we define the position operator P as "one pixel to the right and one pixel below.3-5) 1 + a2(z) is 0 for areas of constant intensity [a2(z) = 0 if all zi have the same value] and approaches 1 for large values of a2(z). occur (in the position specified by P) relative to points with intensity z1. 
Let P be a position operator CDR CAD and let A be a k x k matrix whose element a. It is a measure of intensity contrast which can be used to establish descriptors of relative smoothness. AND INTELLIGENCE It is noted from Eq. while a13 (top right) is the number of times that a point with level z1 = 0 appears one pixel location below and to the right of a point with intensity z3 = 2.j is the number of times that points with intensity z. the measure T3' . The third moment is a measure of the skewness of the histogram while the fourth moment is a measure of its relative flatness. Measures of texture computed using only histograms suffer from the limitation that they carry no information regarding the relative position of pixels with respect to each other.. One way to bring this type of information into the texture analysis process is to consider not only the distribution of intensities but also the positions of pixels with equal or nearly equal intensity values. j < k. and z3 = 2. for example.. consider an image with three intensities. application of the concepts discussed in this section usually require that intensities be requantized into a few bands in order to keep the size of A manageable. with 1 < i. For example. The fifth and higher moments are not so easily related to histogram shape. SENSING. (8." then we obtain the following 3 x 3 matrix A: 4 2 3 'w. 1 A = 2 0 2 2 0 where. VISION. Thus. .3-3) that ao = 1 and µl = 0. For instance. CS' 'i3 z1 = 0. Let n be the total number of point pairs in the image which satisfy P (in the above example n = 16). but they do provide further quantitative discrimination of texture content. If we define a matrix C formed by dividing every elef`1 '"' 4-. The texture of an unknown region is then subsequently determined by how closely its descriptors match those stored in the system memory. The second descriptor has a relatively low value when the high values of C are near the main diagonal since the differences (i . the first property gives an indication of the strongest response to P (as in the above example).HIGHER-LEVEL VISION 409 occurrence matrix. For example. all = 4. The fourth descriptor is a measure of randomness. Since C depends on P. where "gray level" is used interchangeably to denote the intensity of a monochrome pixel or image. the operator used in the above example is sensitive to bands of constant intensity running at -45 ° (note that the highest value in A was CS. Maximum probability: max (cij) i. --3 ment of A by n. The third descriptor has the opposite effect. One approach for using these descriptors is to "teach" a system representative descriptor values for a set of different textures. partially due to a streak of points with intensity 0 and running at -45°).4.. the problem is to analyze a given C matrix in order to categorize the texture of the region over which C was computed. then cij is an estimate of the joint probability that a pair of points satisfying P will have values (zi. Inverse element-difference moment of order k: Cij (i-j)k 4. Element-difference moment of order k: E (i .j) are smaller there. Conversely. The matrix C is called a gray-level co. 8. achieving its highest value when all elements of C are equal. Entropy: i E cij log cij j 5.j 2. Uniformity: The basic idea is to characterize the "content" of C via these descriptors. z1). A set of descriptors proposed by Haralick [1979] include 1. For instance. In a more general situation.j)kCij j 3. 
it is possible to detect the presence of given texture patterns by choosing an appropriate position operator. the fifth descriptor is lowest when the cij are all equal. This approach is discussed in more detail in Sec.. V') . bS. VISION.aS allows us to generate a texture pattern of the form shown in Fig. 8.. 8. (G) (c) Figure 8.g. a topic which will be treated in considerably more detail in Sec. three applications of this rule would yield the string aaaS). (It is noted.c.37c can easily be generated in the same way. SENSING. aS. that these rules can also generate structures that are not rectangular). 0 (a) 00000. (b) Pattern generated by the rule S dimensional texture pattern generated by this plus other rules. A . such as the one shown in Fig.. These concepts lie at the heart of structural pattern generation and recognition.37a) and assign the meaning of "circles to the right" to a string of the form aaa .410 ROBOTICS: CONTROL. such that the presence of a b means "circle down" and the presence of a c means "circle to the left." We can now generate a string of the form aaabccbaa which corresponds to a three-by-three matrix of circles.a. A cA. As mentioned at the beginning of this section. `J. Suppose that we have a rule of the form S indicates that the symbol S may be rewritten as aS (e. a second major category of texture description is based aS which on structural concepts. Larger texture patterns.. Suppose next that we add some new rules to this scheme: S CO) bA. 8. AND INTELLIGENCE The approaches discussed above are statistical in nature. The basic idea in the foregoing discussion is that a simple "texture primitive" can be used to form more complex texture patterns by means of some rules which limit the number of possible arrangements of the primitive(s). A .37 (a) Texture primitive. (c) Two- . - 8. S . then the rule S -. however.5. If we let a represent a circle (Fig.37b. 38. 7. Salari and Siy [1984]) this type of representation is usually associated with binary data. . Some examples using the euclidean distance are shown in Fig. If p has more than one such neighbor. A number of algorithms have been proposed for improving computational efficiency while. An important approach for representing the structural shape of a plane region is to reduce it to a graph.HIGHER-LEVEL VISION 411 . at the same . ranging from automated inspection of printed circuit boards to counting of asbestos fibers on air filters. and (3) does not cause excessive erosion of the region... (2) does not break connectedness.. In the following discussion. the results of a MAT operation will be influenced by the choice of a given metric. as will be seen below. For each point p in R.. This procedure is fast. a direct implementation of the above definition is typically prohibitive from a computational point of view because it potentially involves calculating the distance from every interior point to every point on the boundary of a region. Typically. It is important to note that the concept of "closest" depends on the definition of a distance (see Sec. Figure 8.38 Medial axes of three simple regions. we find its closest neighbor in B. Although the MAT of a region yields an intuitively pleasing skeleton. straightforward to implement.. This is often accomplished by obtaining the skeleton of the region via a thinning (also called skeletonizing) algorithm. . Although some attempts have been made to use skeletons in gray-scale images (Dyer and Rosenfeld [1979].d (a) (c) ... 8.3) and.5. "t7 s.. 
The skeleton of a region may be defined via the medial axis transformation (MAT) proposed by Blum [1967]. therefore. The MAT of a region R with border B is as follows. CAD t:$$ time. attempting to produce a medial axis representation of a given region.. Thinning procedures play a central role in a broad range of problems in computer vision. these are thinning algorithms that iteratively delete edge points of a region subject to the constraints that the deletion of these points (1) does not remove endpoints. we present an algorithm developed by Naccache and Shinghal [1984]. (b) Skeleton of a Region. and.. then it is said to belong to the medial axis (skeleton) of R. yields skeletons that are in many cases superior to those p. If the neighborhood of p matches windows (a) to (c). The analysis of window (d) is slightly more complicated. Suppose that all d's are light and the e's can be either dark or light. a dark point p having no and n4 light will be a right edge point and a left edge point simultaneously. easy to show by example that its deletion would cause excessive erosion in slanting regions of width 2. and (4) a bottom edge point having n6 light. VISION. AND INTELLIGENCE to thinning by using. In either case p should not be flagged. An edge point p is flagged if it is not an endpoint or breakpoint. 7. An endpoint is a dark point which has one and only one dark 8-neighbor. or if its deletion would cause excessive erosion (as discussed below). (3) a top edge point having n2 light.. The test for these condi. and configuration (d) makes it a breakpoint. CONTROL.38b shows this effect quite clearly). for example. 'CS Zip 000 sue. For example. 8. respectively. . G?. vac 'TI [`' in' CS. As is true with all thinning algorithms. Consequently. however._.40. it is assumed that the boundaries of all regions have been smoothed prior °C° C3' CAD CAD 'C7 The procedure is then extended to the other types. ©IEEE. . '"' ". An edge point is a dark point which has at least one light 4-neighbor. In configuration (g). Other arrangements need to be considered. If p were deleted in configurations (e) and (f).3 `CD m"s.-r -O. they can be either dark or light. then p is an endpoint. 8.Y_ do' `C1 obtained with other thinning algorithms. With reference to the neighborhood arrangement shown in Fig. then p is a breakpoint. Configurations (a) through (c) make p an endpoint. the procedure discussed in Sec. 8. it is O. These will be called dark and light points. (From Naccache and Shinghal [1984]. 'Cs tions is carried out by comparing the 8-neighborhood of p against the windows shown in Fig. p is what is commonly referred to as a 'LS O'3 n3 112 III 114 p no lap 115 117 Figure 8. where p and the asterisk are dark points and d and e are "don't care" points. two cases may arise: (1) If all d's are light. SENSING. region points will be denoted by 1's and background points by 0's. 8. It is possible for p to be classified into more than one of these types. We begin the development with a few definitions..412 ROBOTICS. . the thin- ning algorithm identifies an edge point p as one or more of the following four types: (1) a left edge point having its left neighbor n4 light. "C2 PD.) . Q. then p is a break point and should not be flagged.-. noise and other spurious variations along the boundary can significantly alter the resulting skeleton (Fig. The following discussion initially addresses the identification (flagging) of left edge points that should be deleted. Assuming binary data. 
This condition yields the eight possibilities shown in Fig.41. (2) a right edge point having no light. or (2) if at least one of the d's is dark.39 Notation for the neighbors of p used by the thinning algorithm. A breakpoint is a dark point whose deletion would break connectedness.39. that is. If at least one d and e are dark.2.6. (From Naccache and Shinghal [1984]. windows shown in Fig.40 should not be flagged. Finally.40. Similar arguments apply if the roles of d and e were reversed or if the d's and e's were allowed to assume dark and light values. ©IEEE. or light. then p is not flagged. The asterisk denotes a dark point.HIGHER-LEVEL VISION 413 d d d d d (a) (b) (c) (d) Figure 8. *.40 has a particularly simple boolean representation given by G7.:. `-' (iT5 + n6) (8.) . the appearance of configuration (h) during thinning indicates that a region has been reduced to a single point. The essence of the preceding discussion is that any left edge point p whose 8-neighborhood matches any of the ¢'o °°0 w.. 8. Testing the 8-neighborhood of p against the four windows in Fig. and d and e can be either dark or light. its deletion would erase the last remaining portion of the region. if all isolated points are removed initially.40 If the 8-neighborhood of a dark point p matches any of the above windows. (From Naccache and Shinghal [1984]. Since it is assumed that the boundary of the region has been smoothed initially. typically due to a short tail or protrusion in a region.41 All the configurations that could exist if d is light in Fig. 8. B4 = no (nI + n2 + n6 + n7) (n2 + )T3) .) pip spur. and e can be dark. 8.3-6) (a) (b) (e) (d) (e) (f) (g) (h) Figure 8. the appearance of a spur during thinning is considered an important description of shape and p should not be deleted. ©IEEE. rotation. 8. at the cost of losing all other points in the region. thus producing only skeleton and background points at the end. Equation (8.39. VISION. 8.42b over that shown in Fig. 8.3-9) Using the above expressions. the algorithm stops. we can describe it by a set of moments which are invariant to these effects. SENSING. y) represent the intensity at point (x. Fig. AND INTELLIGENCE where the subscript on B indicates that n4 is light (i. and the n's are as defined in Fig. and light or flagged points be valued 0 (FALSE). The scanning sequence can be either along the rows or columns of the image. but the choice will generally affect the final result.42b shows the skeleton obtained by using the algorithm developed above. B6 = n2 (no + nI + n3 + n4) (n4 + )T5) (no + n7) (8. In the first scan we use B4 and Bo to flag left and right edge points. The moment of order (p + q) for the region is defined as 'C7 =°. (n2 + n3 + n5 + n6) (n6 + n7) (nj + n2) (8. y) in a region. 8. When the region is given in terms of its interior points. As a point of interest.414 ROBOTICS: CONTROL.3-7) B2 = n6 (no + n4 + n5 + n7) (no + nl) ()T3 + n4) (8.3-8) and for the bottom edge points. Y) x y (8. The fidelity of the skeleton in Fig. previously unflagged points be valued 1 (TRUE).3-6) is evaluated by letting dark. 8. It is not difficult to show that these conditions on B4 implement all four windows in Fig.. Then if B4 is 1 (TRUE). "-" is the logical COMPLEMENT. It was noted in Sec. This approach is easier to implement. well-known thinning algorithm (Pavlidis [1982]). the procedure is repeated. Similar expressions are obtained for right edge points.3-10) . 
It is again noted that previously flagged dark points are treated as 0 in evaluating the boolean expressions. Moment Invariants. and scale change can be used to describe the boundary of a region.42c shows the skeleton obtained by applying to the same data another. Otherwise. Example: Figure 8. "+" is the logical OR. otherwise. 8. 8.40 simultaneously. mne = E E xny"f(x. with the unflagged points constituting the skeleton. the thinning algorithm iteratively performs two scans through the data. and Fig. If no new edge points were flagged during the two scans.1 that Fourier descriptors which are insensitive to translation. p is left unflagged.42c is evident. Let f(x. An alternate procedure is to set any flagged point at zero during execution of the algorithm.42a shows a binary region.3. "'t Bo = n4 for top edge points.e. in the second scan we use B2 and B6 to flag top and bottom edge points. " " is the logical AND. p is a left edge point). we flag p. 3-15) (8.* #rfr rfff{r# #\+tf affrr44* fafala to f 44 * 4 * a ff # # rfria. ©IEEE. (c) Skeleton obtained by using another algorithm. The central moment of order (p + q) is given by µn9 = E E (x .tM M raaata#ff lfr rrR rtr rM*4 t* fifff }}ia tr#YrrMt*}ffrrf **t** fi{#rrtaYetti+}it}ff tr lrrrraf +}Y r.rrr rf.3-11) where x = m1o MOO y mo i (8. fRfiff 4fi#farrlatill rRir44f. .a r# Y f .tiff 1f (c) Figure 8.3-16) . (b) Skeleton obtained using the thinning algorithm discussed in this section.3-13) where -y = p2q+1 for (p + q) = 2. (From Naccache and Shinghal [1984]. y) of points in the region.Rr4rafffr*if raft! Rftrr tita#f* 4t tf lrar rt*:i ifirr i4at.HIGHER-LEVEL VISION 415 fa a*f ttila+t ttf lif *fftfaarr aY r4fair.fflr t#.x)r(Y .t.) where the summation is taken over all spatial coordinates (x.3-12) MOO The normalized central moments of order (p + q) are defined as 'qpq µn9 µ0o (8.y)9f(x.f 1#rrMf# ltfr*f **Not* #\ aft rr4 f Y i Y a 4iatia##*af+f {rt* # lf4f#a t tetra#t t of aaf. Y) x y (8. f r{ff44 R#rrta4 f ref#f it *4+r. 3..7102)2 + 47111 (8.i.42 (a) Binary region..3-14) The following set of moment invariants can be derived using only the normalized central moments of orders 2 and 3: 01 =7120+7102 02 = (7120 . (8. 416 ROBOTICS: CONTROL.. Three-dimensional information about a scene may be obtained in three principal forms. we point out that factors such as cost. VISION.. >?' on the surface of objects.3-21) This set of moments has been shown to be invariant to translation.4 SEGMENTATION AND DESCRIPTION OF THREE-DIMENSIONAL STRUCTURES Attention was focused in the previous two sections on techniques for segmenting and describing two-dimensional structures.3(7121 + 7103)2] + (37121 .(7121 + 7703)2] (8.7103)(7121 + 7703)[3(7730 + 7712)2 . z). z) coordinates of points .(7121 + 7103)2] (8.(7721 + 7103)2] + 47711(7130 + 7712)(7721 + 7703) (8.3-19) 06 = (7120 .7103)2 04 = (7130 + 7112)2 + (7121 + 7103)2 05 = (7130 . '0r . 7. z) gives the intensity of that point (the term voxel is often used to denote a 3D point and its intensity). In this case.3(7721 + 7703)2] + (37112 . Although research in this area spans more than a 0 10-year history." and "in front of. we obtain the (x.7102)1(7130 + 7712)2 . the relationships obtained from this type of analysis are sometimes referred to as 21/2 D information.3-18) (8. it is often possible to deduce relationships between objects such as "above. 8. In this section we consider the problem of performing these tasks on three-dimensional (3D) scene data." 
Since the exact 3D location of scene points generally cannot be computed from a single view. It is thus widely accepted that a key to the development of versatile vision systems capable of operating in unconstrained environments lies in being able to process threedimensional scene information. and scale change (Hu [1962]). y. we represent each point in the form f(x.. y.37712)2 + (37721 . SENSING. Finally. speed." "behind.7103)(7130 + 7112)1(7130 + 7112)2 .+ t. where the value of f at (x.3-20) 07 = (37721 .7730)(7121 + 7703)[3(7130 + 7712)2 . as well as intensity information about each point. rotation. If range sensing is used. The use of stereo imaging devices yields 3D coordinates. we may infer 3D relationships from a single two-dimensional image of a scene.1. y.ti its a. As indicated in Sec.3-17) (8. .37712)(7130 + 7112)[(7730 + 7112)2 . vision is inherently a 3D problem. AND INTELLIGENCE 03 = 0q30 . In other words. and complexity have inhibited the use of three-dimensional vision techniques in industrial applications. .4. .) f1. This approach is particularly attractive for polyhedral objects whose surfaces are smooth with respect to the resolution of the sensed scene.........HIGHER-LEVEL VISION 417 8.. we fit a plane to the group of points in each cell and calculate a unit vector which is normal to the plane and passes through the centroid of the group of points in that cell.... ... z . N ... . 8.. ... . ... Then. . \. .... .. (h) ..... .1 Fitting Planar Patches to Range Data One of the simplest approaches for segmenting and describing a three-dimensional structure given in terms of range data points (x. z) is to first subdivide it into small planar "patches" and then combine these patches into larger surface elements according to some criterion.. (From Shirai [1979]. .. subdividing the 3D space into cells and grouping points according to the cell which contains them.. .. .. '{... ..... (a) (c) R2 CPD PI C C RI R7 R8 R9 c2 PI C2 R3 C R4 J R5 R6 I c C1 C (f) Figure 8.... . .. ..43b shows a set of corresponding 3D points.. Part (a) of this figure shows a simple scene and Fig... A planar 'C7 (x. . We illustrate the basic concepts underlying this approach by means of the example shown in Fig. ©Plenum Press. ...... 8.. These points can be assembled into small surface elements by.... . ..43 Three-dimensional surface description based on planar patches..... .. y. . for example. ... .. . .. ... . . ..43. )'.. . It is noted that. at the end of this procedure. is often approximated by absolute values to simplify computation: G[f(x.4. the 3D gradient can be used to obtain patch representations (similar to those discussed in Sec. 7.I + IGZI The same operator oriented along the y axis is used to compute Gy.44 shows a 3 x 3 x 3 operator proposed by Zucker and Hummel [1981] for computing G.L0 of ax G[f(x.4. as shown in Fig.4. y.. WOW 8. y. This type of region classification is illustrated in Fig.g. and the magnitude of this vector is proportional to the strength of that change. As indicated in Sec.. These regions are then classified as planar (P). 7. Figure 8. with the direction of the patch being given by the unit normal.418 ROBOTICS: CONTROL.43f.43e. its gradient vector at coordinates (x. and that each surface has been assigned a descriptor (e. curved or planar). 8. AND INTELLIGENCE patch is established by the intersection of the plane and the walls of the cell.43c. Finally (and this is the hardest step). 
and oriented °'C The implementation of the 3D gradient can be carried out using operators analogous in form to those discussed in Sec. (7.43d. as indicated in Eq. 8.6.4-3) . the patches in a planar surface will all point in essentially the same direction).4-1) of z G[f(x. y. Given a function f(x. These concepts are just as applicable in three dimensions and they can be used to segment 3D structures in a manner analogous to that used for two-dimensional data. SENSING. 8.6-39). y. z)] = Gy GZ of ay (8. the gradient vector is normal to the direction of maximum rate of change of a function. VISION. 8..4. z) is ate) given by I GX 'T1 The magnitude of G is given by which. curved (C). 8. z)] = GJI + IG. ^.2 Use of the Gradient When a scene is given in terms of voxels. the scene has been segmented into distinct surfaces.1) which can then be combined to form surface descriptors. or undefined (U) by using the directions of the patches within each region (for example. z)] _ (Gx + Gy + Gi )"2 (8. All patches whose directions are similar within a specified threshold are grouped into elementary regions (R). y. as shown in Fig. z).4-2) (8. .6. the classified regions are assembled into global surfaces by grouping adjacent regions of the same classification. as illustrated in Fig. z) and into Eq.. The center of each operator is moved from voxel to voxel and applied in exactly the same manner as their two-dimensional counterparts. G3 and G. that is.6.44 A 3 x 3 x 3 operator for computing the gradient component G. 7.4-3) to obtain the magnitude. . while the magnitude of G gives an indication of abrupt changes of intensity within the patch. y. A key property of these operators is that they yield the best (in a least-squares error sense) planar edge between two regions of different intensities in a 3D neighborhood. It is of interest to note that the operator shown in Fig. y. (8.. ©IEEE. it follows that the components of the vector G establish the direction of a planar patch in each neighborhood.. s-.. and GZ = c. the responses of these operators at any point (x.4.) .44 yields a zero output in a 3 x 3 x 3 region of constant intensity. 8.4-2) or (8. It is a straightforward procedure to utilize the gradient approach for segmenting a scene into planar patches analogous to those discussed in the previous section.. as discussed in Sec.4-1) to obtain the gradient vector at (x. That is. . it indicates the presence of (7a d-+ C. G) = b. z) yield Gx.. which are then substituted into Eq. along the z axis to compute G. s-.HIGHER-LEVEL VISION 419 .. . r-+ Figure 8. It is not difficult to show that the gradient vector of a plane ax + by + cz = 0 has components GX = a. (Adapted from Zucker and Hummel [1981].. Since the operators discussed above yield an optimum planar fit in a 3 x 3 x 3 neighborhood.. (8. A concave line (labeled -) is formed by the intersection of two surfaces belonging to two different solids (e. that additional information in the form of intensity and intensity discontinuities is now available to aid the merging and description process. Note. edges in a 3D scene are determined by discontinuities in range and/or intensity data. 8. we consider basic types of lines. An example of such a patch representation using the gradient operators is shown in Fig.45 Planar patch approximation of a cube using the gradient.) always coincide.r 8.g. As illustrated in Fig. VISION.420 ROBOTICS: CONTROL.4. 8.4. A convex line (labeled +) is formed by the intersection of two surfaces which are part of a convex solid (e. ©IEEE.45. 
(From Zucker and Hummel [1981].. the borders of these patches may not cad . Since each planar patch surface passes through the center of a voxel.45.1. however. An occluding line (labeled with an arrow) is the edge of a surface which obscures a '_' BCD an intensity edge within the patch. SENSING. they can be grouped and described in the form of global surfaces as discussed in Sec. Patches that coincide are shown as larger uniform regions in Fig. Once patches have been obtained.g. a finer description of a scene may be obtained by labeling the lines corresponding to these edges and the junctions which they form. . 8. the intersection of one side of a cube with the floor).3 Line and Junction Labeling With reference to the discussion in the previous two sections.46... 8. Given a set of 'C3 surfaces and the edges between them. the line formed by the intersection of two sides of a cube). AND INTELLIGENCE Figure 8. Note that one of the lines changes label from occluding to convex between two vertices. it is easily shown that the junction dictionary shown in Fig. We note in Fig. no line can change its label between vertices. with the exception of a short concave line. as illustrated in Fig. For example. 8. This is typically accomplished via a set of heuristic rules designed to interpret the labeled lines and sequences of neighboring junctions. For example.HIGHER-LEVEL VISION 421 Floor Convex line Concave line Occluding line Figure 8. their junctions provide clues as to the nature of the 3D solids in the scene.48 contains all valid labeled vertices of trihedral solids (i. solids in which exactly three plane surfaces come together at each vertex). EG) . The occluding matter is to the right of the line looking in the direction of the arrow. Once the junctions in a scene have been classified according to their match in the dictionary. surface. Violation of this rule leads to impossible physical objects.46 Three basic line labels..49. After the lines in a scene have been labeled. The key to using junction analysis is to form a dictionary of allowed junction types. Figure 8. the objective is to group the various surfaces into objects. in a polyhedral scene. 8. Physical constraints allow only a few possible combinations of line labels at a junction.47 An impossible physical object.e. The basic concept underlying this approach can be illustrated with the aid of Fig. 8. and the occluded surface is to the left. 8.47.49b that the blob is composed entirely of an occluding boundary. We also note that there is a vertex of type (10) from the dictionary in Fig. Thus. b(1 . Removing the base leaves the single object in the background. AND INTELLIGENCE U U (1) (2) (5) (6) (3) (4) VU (9) (7) (8) (10) (II) (12) f f (13) (14) (I5) (16) Figure 8. 8.422 ROBOTICS: CONTROL. Several comprehensive efforts in this area are referenced at the end of this chapter. we point out that formulation of an algorithm capable of handling more complex scenes is far from a trivial task.48. This is strong evidence (if we know we are dealing with trihedral objects) that the three surfaces involved in that vertex form a cube. Similar comments apply to the base after the cube surfaces are removed. Although the preceding short explanation gives an overall view of how line and junction analysis are used to describe 3D objects in a scene. there is nothing in front of it and it can be extracted from the scene. VISION.48 Junction dictionary for trihedral solids. coo acs CD" arc w-. indicating where it touches the base. SENSING. 
which completes the decomposition of the scene. 4 Generalized Cones A generalized cone (or cylinder) is the volume described by a planar cross section as it is translated along an arbitrary space curve (the spine). (C) Figure 8.49 (a) Scene.. with the exception that the sweeping rule holds the diameter of the cross section constant and then allows it to increase linearly past the midpoint of the spine.HIGHER-LEVEL VISION 423 (a) (b) O.50 illustrates the procedure for generating generalized cones.50b we have essentially the same situation. ©Plenum Press.) 8. In machine vision. and transformed according to a sweeping rule. generalized cones provide viewpoint-independent representations of threedimensional structures which are useful for description and model-based matching e-r purposes. Figure 8. (c) Decomposition via line and junction analysis. The result is a hollow cylinder. CAD CAD .50a the cross section is a ring. held at a constant angle to the curve. (Adapted from Shirai [1979]. In Fig. In Fig. (b) Labeled lines. the spine is a straight line and the sweeping rule is to translate the cross section normal to the spine while keeping its diameter constant. 8. 8.4. CU. particularly when one is dealing with incomplete data. Variability in object orientation is handled by choosing rotation-invariant descriptors or by using the principal axis of an object to orient it in a predefined direction.424 ROBOTICS: CONTROL. bolt) to that object.5 RECOGNITION Recognition is a labeling process.. Another common constraint is that images be acquired in a known viewing geometry (usually perpendicular to the work space). :fl . VISION. while in (b) its diameter increased linearly past the midpoint in the spine. I-. 8. . In (a) the cross section remained constant during the sweep. spines. SENSING.50 Cross sections. +-+ 't3 obi 'O-. we first determine the center axis of the points and then find the closest set of cross sections that will fit the data as we travel along the spine. seal.g. and their corresponding generalized cones. wrench. In general. When matching a set of 3D points against a set of known generalized cones.-. that is. For the most part. C_. the recognition stages of present industrial vision systems operate on the assumption that objects in a scene have been segmented as individual units. AND INTELLIGENCE (a) (b) Figure 8. the function of recognition algorithms is to identify each segmented object in a scene and to assign a label (e. This decreases variability in shape characteristics and simplifies segmentation and description by reducing the possibility of occlusion. considerable trial and error is required. the problem reduces to computing the following distance measures: Dr(x*) = Ilx* .Ihmjmj decision function. x )T represent a column pattern vector with real components.M. area.(x*) is the smallest distance.. .(x*) > dj(x*) j = 1. 2. .5.-.M (8. upon substitution of x* into all decision functions. as defined in Eq. an unknown object represented by vector x* is recognized as belonging to the ith object class if. j = 1. This formulation agrees with the concept of a . It is not difficult to show that this is equivalent to evaluating the functions dj(x*) = (x*)Tmi . . 2..... Given M object CTS classes. one way to determine its class membership is to assign it to the class of its closest prototype. . .. 2.. = N E Xk k=l CD.. is the ith descriptor of a given object (e. average intensity.. . . . the procedures discussed in this section are generally used to . 
the basic problem in decision-theoretic pattern recognition is to identify M decision functions.5-1).. d. dl(x). with the property that the following relationship holds for any pattern vector x* belonging to class wi: CAD d. ( x ) . . if D. Let x = (XI... . j # i (8. 2.dM(x).. denoted by wI.5-2) where the xk are sample vectors known to belong to class w. With a few exceptions. sequences of directions in a chain-coded boundary). Suppose that we represent each object class by a prototype (or average) vector: N ADD m.g. . If we use the euclidean distance to determine closeness.g... C-. recognize two-dimensional object representations. Given an unknown x*.mill j = 1. tai.5-3) where llall = (aTa)112 is the euclidean norm. x. We then assign x* to class w. w.g.5-1) In other words. As will be seen in the following discussion. wM. (8. 8.1 Decision-Theoretic Methods Decision-theoretic pattern recognition is based on the use of decision (discriminant) functions.5-4) and selecting the largest value... . decision-theoretic methods are based on quantitative descriptions (e. M (8.HIGHER-LEVEL VISION 425 Recognition approaches in use today can be divided into two principal categories: decision-theoretic and structural. . 1. perimeter length).. .(x*) yields the largest value. . The predominant use of decision functions in industrial vision systems is for matching. statistical texture) while structural methods rely on symbolic descriptions and their relationships (e. where x. M (8. . y) will vary from one location to the next and that its values are in the range [ . .426 ROBOTICS: CONTROL. 1 ]. It is noted that. (b) Image f(x.5-5) where it is assumed that w(s.-. m. AND INTELLIGENCE Another application of matching is in searching for an instance of a subimage w(x. y).1. The procedure.v][f(s. as determined by the largest correlation coefficient. y) we define the correlation coefficient as E E [w(s. t) . with a value of 1 corresponding CAD to a perfect match. . The summations are taken over the image coordinates common to both regions. y). t) . and m f is the average intensity of f in the region coincident with w. -y(x. VISION.mf12 }I/2 (8. (c) Location of the best match of w in f.v is the average intensity of w. is to compute y(x.via .mw]2!: S ! [f(s. t) . in general. t) is centered at coordinates (x.51 (a) Subimage w(x. y) in a larger image f(x.m. y).mf] `-' -Y (X. t) . Y) = S I --h { S f [w(s. y) of f(x. At each location (x. then. y). SENSING. y) at each location Figure 8. 7. a': 8. The objective of this section is to introduce the reader to techniques suitable for handling this and other types of structural pattern descriptions. establishes the structure of the object in terms of this particular representation. . attempt to achieve object discrimination by capitalizing on these relationships.5-5).52c. y) and to select its largest value to determine the best match of w in f [the procedure of moving w(x. and Fig. Start A.5. (c) Boundary coded in terms of primitives.1 deal with patterns on a quantitative basis.2 Structural Methods The techniques discussed in Sec. fro cu- . Since this method consists of directly comparing two regions. 8. a a a b 14 dI C d b .52b shows a set of primitive elements of specified length and direction.HIGHER-LEVEL VISION 427 (x. together CAD with the order in which they occur. y) throughout f(x. we obtain the coded boundary shown in Fig. 8. what we have done is represent the boundary by the string aaabcbbbcdddcd. resulting in the string aaabcbbbcdddcd. (b) Primitives. 
By starting at the top left. Basically. and identifying instances of these primitives.51. Variations in intensity are normalized by the denominator in Eq. (8.52 (a) Object boundary. tracking the boundary in a clockwise direction. Structural methods. 8. 8. a Jb Id C 1 d C b (c) Figure 8. The known length and direction of these primitives. This idea is easily explained with the aid of Fig.52. on the other hand. 8. The quality of the match can be controlled by accepting a correlation coefficient only if it exceeds a preset value (for example. Part (a) of this figure shows a simple object boundary. ignoring any geometrical relationships which may be inherent in the shape of an object. it is clearly sensitive to variations in object size and orientation. Central to the structural recognition approach is the decomposition of an object into pattern primitives. An example of matching by correlation is shown in Fig.5.20].9). y) is analogous to Fig. The same information can be summarized in the form of a similarity matrix. 8. Example: As an illustration of the preceding concepts. the degree of similarity of this shape with respect to all the others is 6. SENSING. the . and so on. This is analogous to having five prototype shapes whose identities are known and trying to determine which of these constitutes the best match to an unknown shape. all we could have said using this method is that it is similar to the other five figures with degree 6. their degree of similarity is higher than any of the other shapes.1. If A had been the unknown. If the degree of similarity is used. As shown in the tree. 8. CS' . the degree of similarity k between two object boundaries. 'LS . The root of the tree corresponds to the lowest degree of similarity considered.428 ROBOTICS: CONTROL. The distance between two shapes A and B is defined as the D(A. D. Proceeding down the tree we find that shape D has degree 8 with respect to the remaining shapes. where s indicates shape number and the subscript indicates the order.. . we have s4(A) = s4(B). we can use either k or D.. The search may be visualized with the aid of the similarity tree shown in Fig. which in this example is 4. furthermore. 'ti a. . C)] more similar the shapes are (note that k is infjnite for identical shapes).5-6) (a) D(A. B). is defined as the largest order for "C} which their shape numbers still coincide.. then we know from the above discussion that the larger k is. suppose that we wish to find which of the five shapes (A. C) \V if A = B (8. That is.3.. That is. with the exception of shape A. . 0 (b) D(A.. shape F turned out to be a unique match for C and. A procedure analogous to the minimum-distance concept introduced in Sec. (8. all shapes are identical up to degree 8.1 for vector representations can be formulated for the comparison of two object boundaries that are described in terms of shape numbers.. sk(A) = sk(B). s8(A) = s8(B).53b..5.n. B) = 0 (c) D(A. In this particular case.53c. With reference to the discussion in Sec. sk+2(A) $ sk+2(B). a unique match would have also been found.5-7) max [D(A.Q BCD In order to compare two shapes. AND INTELLIGENCE Matching Shape Numbers. 8.53a best matches shape C.. s6(A) = s6(B). VISION. F) in Fig. 8. E.+ 4-. but with a lower degree of similarity. inverse of their degree of similarity: 111 sk+4(A) # sk+4(B). A and B. D(B. B) = k This distance satisfies the following properties: A. B. 8. The reverse is true when the distance measure is used. If E had been the unknown. .I (J4 `a. . as shown in Fig.5 . B) >. 
Matching Shape Numbers. A procedure analogous to the minimum-distance concept introduced in Sec. 8.5.1 for vector representations can be formulated for the comparison of two object boundaries that are described in terms of shape numbers. With reference to the earlier discussion of shape numbers, the degree of similarity, k, between two object boundaries A and B is defined as the largest order for which their shape numbers still coincide. That is,

    s4(A) = s4(B),  s6(A) = s6(B),  ...,  sk(A) = sk(B),  sk+2(A) ≠ sk+2(B),  sk+4(A) ≠ sk+4(B),  ...

where s indicates shape number and the subscript indicates the order. The distance between two shapes A and B is defined as the inverse of their degree of similarity:

    D(A, B) = 1/k        (8.5-6)

This distance satisfies the following properties:

    (a) D(A, B) ≥ 0
    (b) D(A, B) = 0  if and only if  A = B        (8.5-7)
    (c) D(A, C) ≤ max[D(A, B), D(B, C)]

In order to compare two shapes we can use either k or D. If the degree of similarity is used, then we know from the above discussion that the larger k is, the more similar the shapes are (note that k is infinite for identical shapes). The reverse is true when the distance measure is used.

Example: As an illustration of the preceding concepts, suppose that we wish to find which of the five shapes (A, B, D, E, F) in Fig. 8.53a best matches shape C. This is analogous to having five prototype shapes whose identities are known and trying to determine which of these constitutes the best match to an unknown shape. The search may be visualized with the aid of the similarity tree shown in Fig. 8.53b. The root of the tree corresponds to the lowest degree of similarity considered, which in this example is 4. As shown in the tree, all shapes are identical up to degree 8, with the exception of shape A, whose degree of similarity with respect to all the others is 6. Proceeding down the tree, we find that shape D has degree 8 with respect to the remaining shapes, and so on. In this particular case, shape F turned out to be a unique match for C and, furthermore, their degree of similarity is higher than that of any of the other shapes. If E had been the unknown, a unique match would also have been found, but with a lower degree of similarity. If A had been the unknown, all we could have said using this method is that it is similar to the other five figures with degree 6. The same information can be summarized in the form of the similarity matrix shown in Fig. 8.53c. [Figure 8.53: (a) Shapes. (b) Similarity tree. (c) Similarity matrix. From Bribiesca and Guzman [1980], ©Pergamon Press.]
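The comparison based on shape numbers is equally simple to mechanize. The following sketch (Python; the dictionary representation of the shape numbers is our own convention, not part of the method) returns the degree of similarity k and the distance of Eq. (8.5-6) for two shapes whose shape numbers have been computed up to some maximum order.

    def degree_of_similarity(shapeA, shapeB):
        # shapeA, shapeB: dictionaries mapping an (even) order to the shape number
        # of that order, e.g., {4: '0321', 6: '003221', ...}
        k = 0
        for order in sorted(set(shapeA) & set(shapeB)):
            if shapeA[order] == shapeB[order]:
                k = order        # largest order at which the shape numbers still coincide
            else:
                break
        return k

    def shape_distance(shapeA, shapeB):
        # Eq. (8.5-6): D(A, B) = 1/k.  With a finite list of orders the smallest
        # reportable distance is 1/k_max; identical shapes would have D = 0.
        k = degree_of_similarity(shapeA, shapeB)
        return float('inf') if k == 0 else 1.0 / k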
Basically. By contrast.2.. Similarly. If the sentence belongs to L(GI ). . If the sentence is found to be invalid over both languages it is rejected. 8. whose rules are such that GI only allows the generation of sentences which correspond to objects of class wI while G2 only allows generation of sentences corresponding to objects of class w2. which are represented as strings of primitives. A. unambiguous matter. It is further possible to envision two grammars. 0 pp. Suppose that we have two classes of objects. We may interpret each primitive as being a symbol permissible in some grammar. the notation 1. For instance. the idea behind syntactic pattern recognition is the specification of structural pattern primitives and a set of rules (in the form of a grammar) which govern their interconnection.54g is a tabulation of R values obtained by comparing strings of one class against the other. sI : 0 ° < B <.a had been an unknown. we say that the object comes from class w2 if the sentence is in L(G2 ).HIGHER-LEVEL VISION 431 between the polygon segments as the polygon was traversed in a clockwise direction. Fig. Angles were coded into one of eight possible symbols which correspond to 45 ° increments. for example.O . 8.3 t3. A unique decision cannot be made if the sentence belongs to both L(GI ) and L(G2). Syntactic Methods. s2 : 45 ° < 0 <. where the entries correspond to values of R = A/B and. the problem is one of deciding in which language the pattern represents a valid sentence.53 BCD O'1 -. Syntactic techniques are by far the most prevalent concepts used for handling structural recognition problems.. String Grammars. 8.5. the syntactic pattern recognition process is. and denoted by L(G). Given a sentence representing an unknown pattern.54e.° t3.67. these sentences are strings of symbols which in turn represent patterns. indicating that the R measure achieved a high degree of discrimination between the two classes of objects. The important thing to note is that all values of R in this last table are considerably smaller than any entry in the preceding two tables. The set of sentences generated by a grammar G is called its language. . 8.. S . P.. y. . P. Strings of terminals will be denoted by lowercase letters toward the end of the alphabet: v... 8. String grammars are characterized primarily by the form of their productions.. 0. E.55b. S) BCD (8. A unique decision cannot be made if the sentence belongs to more than one language. 0.. with productions of the form A -.55b to describe the structure of this and similar skeletons.55a is represented by its skeleton. The empty sentence (the sentence with no symbols) will be denoted by X. z.. Of particular interest in syntactic pattern recognition are regular grammars. Lowercase letters at the beginning of the alphabet will be used for terminals: a.bA 4.B--c where the terminals a. and c are as shown in Fig. o'< °a. we define a grammar as the four-tuple afro G = (N. AND INTELLIGENCE When there are more than two pattern classes.all or A a with A and B in N. Maw Example: The preceding concepts are best clarified by an example. Strings of mixed terminals and nonterminals will be denoted by lowercase Greek letters: a. >. we will use the notation V* to denote the set of all sentences composed of elements from V.5-10) where N = finite set of nonterminals or variables E = finite set of terminals or constants P = finite set of productions or rewriting rules S in N = the starting symbol It is required that N and E be disjoint sets. b. 
Syntactic Methods. Syntactic techniques are by far the most prevalent concepts used for handling structural recognition problems. Basically, the idea behind syntactic pattern recognition is the specification of structural pattern primitives and a set of rules (in the form of a grammar) which govern their interconnection. A grammar is a set of rules of syntax (hence the name syntactic pattern recognition) for the generation of sentences from a given set of symbols; in the context of the present discussion, these sentences are strings of symbols which in turn represent patterns.

Suppose that we have two classes of objects, w1 and w2, which are represented as strings of primitives. We may interpret each primitive as a symbol permissible in some grammar, and it is further possible to envision two grammars, G1 and G2, whose rules are such that G1 only allows the generation of sentences corresponding to objects of class w1, while G2 only allows the generation of sentences corresponding to objects of class w2. The set of sentences generated by a grammar G is called its language and is denoted by L(G). Once the two grammars have been established, the syntactic pattern recognition process is, in principle, straightforward: given a sentence representing an unknown pattern, the problem is one of deciding in which language the pattern represents a valid sentence. If the sentence belongs to L(G1), we say that the pattern belongs to class w1; similarly, the object comes from class w2 if the sentence is in L(G2). A unique decision cannot be made if the sentence belongs to both L(G1) and L(G2), and the pattern is rejected if the sentence is invalid over both languages. When there are more than two pattern classes, the approach is the same, except that more grammars (at least one per class) are involved: the pattern is assigned to class wi if it is a sentence of only L(Gi), a unique decision cannot be made if the sentence belongs to more than one language, and (as above) a pattern is rejected if it does not belong to any of the languages under consideration. We consider first string grammars and then extend these ideas to higher-dimensional grammars.

String Grammars. We define a grammar as the four-tuple

    G = (N, Σ, P, S)        (8.5-10)

where

    N = finite set of nonterminals or variables
    Σ = finite set of terminals or constants
    P = finite set of productions or rewriting rules
    S in N = the starting symbol

It is required that N and Σ be disjoint sets. In the following discussion, nonterminals are denoted by capital letters: A, B, ..., S, .... Lowercase letters at the beginning of the alphabet are used for terminals: a, b, c, .... Strings of terminals are denoted by lowercase letters toward the end of the alphabet: v, w, x, y, z, and strings of mixed terminals and nonterminals by lowercase Greek letters: α, β, .... The empty sentence (the sentence with no symbols) is denoted by λ. Finally, given a set V of symbols, the notation V* denotes the set of all sentences composed of elements from V.

String grammars are characterized primarily by the form of their productions. Of particular interest in syntactic pattern recognition are regular grammars, whose productions are always of the form A → aB or A → a, with A and B in N and a in Σ, and context-free grammars, with productions of the form A → α, with A in N and α in the set (N ∪ Σ)* − λ; that is, α can be any string composed of terminals and nonterminals, except the empty string. As indicated earlier, S is the starting symbol from which we generate all strings in L(G).

Example: The preceding concepts are best clarified by an example. Suppose that the object shown in Fig. 8.55a is represented by its skeleton, and that we define the primitives shown in Fig. 8.55b to describe the structure of this and similar skeletons. Consider the grammar G = (N, Σ, P, S) with N = {A, B, S}, Σ = {a, b, c}, and production rules

    1. S → aA
    2. A → bA
    3. A → bB
    4. B → c

where the terminals a, b, and c are as shown in Fig. 8.55b. We interpret the productions S → aA and A → bA as "S can be rewritten as aA" and "A can be rewritten as bA." For example, applying production 1 followed by two applications of production 2 yields the derivation S ⇒ aA ⇒ abA ⇒ abbA, where "⇒" indicates a string derivation starting from S and using production rules from P. Since the string abbA still contains a nonterminal and a rule allows us to rewrite it, the derivation can continue. If, for instance, we apply production 2 two more times, followed by production 3 and then production 4, we obtain the string abbbbbc, which corresponds to the structure shown in Fig. 8.55c [Figure 8.55: (a) object represented by its skeleton; (b) primitives; (c) structure generated using a regular string grammar]. Note that no nonterminals are left after application of production 4, so the derivation terminates after this production is used. A little thought will reveal that the grammar given above has the language

    L(G) = {ab^n c | n ≥ 1}

where b^n indicates n repetitions of the symbol b. In other words, G is capable of generating the skeletons of wrenchlike structures with bodies of arbitrary length, within the resolution established by the length of primitive b.
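Because the wrench grammar is so small, both generation and recognition can be written down directly. The sketch below (Python; the function names are ours) lists the sentential forms produced in deriving ab^n c and tests membership in L(G) = {ab^n c | n ≥ 1}.

    import re

    # Productions: 1. S -> aA   2. A -> bA   3. A -> bB   4. B -> c

    def derivation(n):
        # sentential forms generated when deriving the string a b^n c, n >= 1
        forms = ['S', 'aA']                       # production 1
        for i in range(1, n):
            forms.append('a' + 'b' * i + 'A')     # production 2, applied n - 1 times
        forms.append('a' + 'b' * n + 'B')         # production 3 (adds the nth b)
        forms.append('a' + 'b' * n + 'c')         # production 4
        return forms

    def in_language(w):
        # membership test for L(G) = { a b^n c | n >= 1 }
        return re.fullmatch(r'ab+c', w) is not None

    print(derivation(5))          # ends with 'abbbbbc', the skeleton of Fig. 8.55c
    print(in_language('abbc'))    # True
    print(in_language('abcc'))    # False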
Use of Semantics. In the above example we have implicitly assumed that the interconnection between primitives takes place only at the dots shown in Fig. 8.55b. In more complicated situations, the rules of connectivity, as well as other information regarding factors such as primitive length and direction and the number of times a production can be applied, must be made explicit. This is usually accomplished via the use of semantics. Basically, syntax establishes the structure of an object or expression, while semantics deal with its meaning. For example, the FORTRAN statement A = B/C is syntactically correct, but it is semantically correct only if C ≠ 0.

In order to fix these ideas, suppose that we attach semantic information to the wrench grammar just discussed. This information may be attached to the productions as follows:

Production    Semantic information
S → aA        Connections to a are made only at the dot. The direction of a, denoted by θ, is given by the direction of the perpendicular bisector of the line joining the endpoints of the two undotted segments.
A → bA        Connections to b are made only at the dots. The direction of b must be the same as that of a, and the length of b is 0.25 cm. This production cannot be applied more than 10 times.
A → bB        The direction of a and b must be the same. Connections must be simple and made only at the dots.
B → c         The direction of c and a must be the same. Connections must be simple and made only at the dots. These line segments are 3 cm each. No multiple connections are allowed.

It is noted that, by using semantic information, we are able to use a few rules of syntax to describe a broad (although limited as desired) class of patterns. For instance, by specifying the direction θ we eliminate from consideration nonsensical wrenchlike structures, and by requiring that all primitives be oriented in the same direction we avoid having to specify primitives for each possible object orientation.

Recognition. Thus far, we have seen that grammars are generators of patterns. In the following discussion we consider the problem of recognizing whether a given pattern string belongs to the language L(G) generated by a grammar G. The basic concepts underlying syntactic recognition can be illustrated by the development of mathematical models of computing machines, called automata. Given an input pattern string, these automata have the capability of recognizing whether or not the pattern belongs to a specified language or class. We will focus attention only on finite automata, which are the recognizers of languages generated by regular grammars.

A finite automaton is defined as a five-tuple

    A = (Q, Σ, δ, q0, F)        (8.5-11)

where Q is a finite, nonempty set of states, Σ is a finite input alphabet, δ is a mapping from Q × Σ (the set of ordered pairs formed from elements of Q and Σ) into the collection of all subsets of Q, q0 is the starting state, and F (a subset of Q) is a set of final or accepting states.
The terminology and notation associated with Eq. (8.5-11) are best illustrated by a simple example.

Example: Consider an automaton given by Eq. (8.5-11), where Q = {q0, q1, q2}, Σ = {a, b}, F = {q0}, and the mappings are given by δ(q0, a) = {q2}, δ(q0, b) = {q1}, δ(q1, a) = {q2}, δ(q1, b) = {q0}, δ(q2, a) = {q0}, and δ(q2, b) = {q1}. For example, if the automaton is in state q0 and an a is input, its state changes to q2; if a b is input next, the automaton moves to state q1, and so forth. It is noted that, in this case, the initial and final states are the same.

A state diagram for the automaton just discussed is shown in Fig. 8.56 (a finite automaton). The state diagram consists of a node for each state and directed arcs showing the possible transitions between states. The final state is shown as a double circle, and each arc is labeled with the symbol that causes that transition. A string w of terminal symbols is said to be accepted or recognized by the automaton if, starting in state q0, the sequence of symbols in w causes the automaton to be in a final state after the last symbol in w has been input. For example, the automaton in Fig. 8.56 recognizes the string w = abbabb but rejects the string w = aabab.

There is a one-to-one correspondence between regular grammars and finite automata; that is, a language is recognized by a finite automaton if and only if it is generated by a regular grammar. The procedure for obtaining the automaton corresponding to a given regular grammar is straightforward. Let the grammar be denoted by G = (N, Σ, P, X0), where X0 = S, and suppose that N is composed of X0 plus n additional nonterminals X1, X2, ..., Xn. The state set Q for the automaton is formed by introducing n + 2 states {q0, q1, ..., qn, qn+1} such that qi corresponds to Xi for 0 ≤ i ≤ n and qn+1 is the final state. The set of input symbols is identical to the set of terminals in G. The mappings in δ are defined by two rules based on the productions of G; namely, for each i and j, 0 ≤ i ≤ n, 0 ≤ j ≤ n:

    1. If Xi → aXj is in P, then δ(qi, a) contains qj.
    2. If Xi → a is in P, then δ(qi, a) contains qn+1.

Example: The finite automaton corresponding to the wrench grammar given earlier is obtained by first rewriting the productions as X0 → aX1, X1 → bX1, X1 → bX2, X2 → c. Then, from the above discussion, we have A = (Q, Σ, δ, q0, F) with Q = {q0, q1, q2, q3}, Σ = {a, b, c}, F = {q3}, and mappings δ(q0, a) = {q1}, δ(q1, b) = {q1, q2}, δ(q2, c) = {q3}, and δ(q0, b) = δ(q0, c) = δ(q1, a) = δ(q1, c) = δ(q2, a) = δ(q2, b) = ∅, where ∅ is the null set, indicating that these transitions are not defined for this automaton.
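The acceptance test just described is easy to simulate. In the sketch below (Python; the transition tables are the ones given above for the automaton of Fig. 8.56 and for the wrench automaton), δ is stored as a dictionary from (state, symbol) pairs to sets of next states, and a string is accepted if a final state can be reached after its last symbol.

    def accepts(delta, q0, F, w):
        # nondeterministic simulation of the finite automaton of Eq. (8.5-11)
        current = {q0}
        for symbol in w:
            nxt = set()
            for q in current:
                nxt |= delta.get((q, symbol), set())
            current = nxt
        return bool(current & F)          # accepted if a final state is reached

    # the automaton of Fig. 8.56
    fig856 = {('q0', 'a'): {'q2'}, ('q0', 'b'): {'q1'},
              ('q1', 'a'): {'q2'}, ('q1', 'b'): {'q0'},
              ('q2', 'a'): {'q0'}, ('q2', 'b'): {'q1'}}
    print(accepts(fig856, 'q0', {'q0'}, 'abbabb'))   # True:  recognized
    print(accepts(fig856, 'q0', {'q0'}, 'aabab'))    # False: rejected

    # the automaton obtained from the wrench grammar
    wrench = {('q0', 'a'): {'q1'}, ('q1', 'b'): {'q1', 'q2'}, ('q2', 'c'): {'q3'}}
    print(accepts(wrench, 'q0', {'q3'}, 'abbbbbc'))  # True:  in L(G) = {a b^n c}
    print(accepts(wrench, 'q0', {'q3'}, 'ac'))       # False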
For completeness, given a finite automaton A = (Q, Σ, δ, q0, F), we obtain the corresponding regular grammar G = (N, Σ, P, X0) by letting N be identified with the state set Q, with the starting symbol X0 corresponding to q0, and the productions of G obtained as follows: (1) if qj is in δ(qi, a), there is a production Xi → aXj in P; (2) if a state in F is in δ(qi, a), there is a production Xi → a in P.

Higher-Dimensional Grammars. The grammars discussed above are best suited for applications where the connectivity of primitives can be conveniently expressed in a stringlike manner. In the following discussion we consider two examples of grammars capable of handling more general interconnections between primitives and subpatterns.

A tree grammar is defined as the five-tuple

    G = (N, Σ, r, P, S)        (8.5-12)

where N and Σ are, as before, sets of nonterminals and terminals, respectively; S is the start symbol, which can, in general, be a tree; P is a set of productions of the form Ti → Tj, where Ti and Tj are trees; and r is a ranking function which denotes the number of direct descendants of a node whose label is a terminal in the grammar. An expansive tree grammar has productions of the form A → a A1 A2 ... An, where A, A1, ..., An are nonterminals and a is a terminal.

Example: The skeleton of the structure shown in Fig. 8.57a can be generated by means of an expansive tree grammar whose productions (numbered 1 through 7) expand the nonterminals S, A1, A2, and A3 into the primitives a, b, c, d, and e shown in Fig. 8.57b [Figure 8.57: (a) an object and (b) primitives used for representing the skeleton by means of a tree grammar]. Connectivity between linear primitives is head to tail, and connections to the circle primitive can be made anywhere on its circumference. The ranking functions in this case are r(a) = {0, 1}, r(b) = r(d) = r(e) = {1}, and r(c) = {2}. It is noted that restricting productions 2, 4, and 6 to be applied the same number of times generates a structure in which all three legs are of the same length; similarly, requiring that productions 4 and 6 be applied the same number of times produces a structure that is symmetrical about the vertical axis in Fig. 8.57a.

As in the previous discussion, the key to object generation by syntactic techniques is the specification of a set of primitives and their interconnections. We conclude this section with a brief discussion of a grammar proposed by Gips [1974] for generating three-dimensional objects consisting of cube structures. In this case, the primitives are the vertex types shown in Fig. 8.58 (vertex primitives). Vertices of type T are further classified, using local information, as either T1 or T3: if a T vertex is not contained in a parallelogram of vertices, it is classified as type T1; if it is contained in a parallelogram of vertices, it is classified as type T3. Figure 8.59 shows this classification (from Gips [1974], ©Pergamon Press).
The rules of the grammar consist of specifying valid interconnections between structures, as detailed in Fig. 8.60; the vertices denoted by double circles are the central vertices of the end cube of an object, where further connections can be made. Figure 8.61 shows a typical derivation using these rules, and Fig. 8.62 illustrates the range of structures that can be generated with them.

8.6 INTERPRETATION

In this discussion, we view interpretation as the process which endows a vision system with a higher level of cognition about its environment than that offered by any of the concepts discussed thus far. When viewed in this way, interpretation clearly encompasses all these methods as an integral part of understanding a visual scene. In this section we touch briefly upon a number of topics which are representative of current efforts toward advancing the state of the art in machine vision.

The power of a machine vision system is determined by its ability to extract meaningful information from a scene under a broad range of viewing conditions and using minimal knowledge about the objects being viewed. There are a number of factors which make this type of processing a difficult task, including variations in illumination, occluding bodies, and viewing geometry.

In Sec. 7.3 we spent considerable time discussing techniques designed to reduce variability in illumination and thus provide a relatively constant input to a vision system. The back- and structured-lighting approaches discussed in that section are indicative of the extreme levels of specialization employed by current industrial systems to reduce the difficulties associated with arbitrary lighting of the work space. Among these difficulties we find shadowing effects, which complicate edge finding, and the introduction of nonuniformities on smooth surfaces, which often results in their being detected as distinct bodies. Such specialized approaches, however, fall short of explaining the interaction of illumination and reflectivity in quantitative terms, and clearly many of these problems result from the fact that relatively little is known about modeling the illumination-reflectance properties of 3D scenes. A more promising approach is based on mathematical models which attempt to infer intrinsic relationships between illumination, reflectance, and surface characteristics such as orientation (Horn [1977], Marr [1979], Katsushi and Horn [1981]). Although this is one of the most active research topics in machine vision, the reader is reminded of the comments made in Secs. 7.1 and 8.1 regarding the fact that our understanding of this area is really in its infancy.

Occlusion problems come into play when we are dealing with a multiplicity of objects in an unconstrained working environment. Consider, for example, the scene shown in Fig. 8.63. A human observer would have little difficulty in determining the presence of two wrenches behind the sockets. For a machine, however, interpretation of this scene is a totally different story.
Even if the system were able to perform a perfect segmentation of object clusters from the background, all the two-dimensional procedures discussed thus far for description and recognition would perform poorly on most of the occluded objects. For instance, several of the sockets would appear as partial cylindrical surfaces, and the middle wrench would appear as two separate objects. [Figure 8.60: rules used to generate three-dimensional structures; the blank circles indicate that more than one vertex type is allowed. Figure 8.61: sample derivation using the rules in Fig. 8.60. Figure 8.62: sample three-dimensional structures generated by the rules given in Fig. 8.60. All adapted from Gips [1974], ©Pergamon Press.]

Processing scenes such as the one shown in Fig. 8.63 requires the capability to obtain descriptions which inherently carry shape and volumetric information, as well as procedures for establishing relationships between these descriptions. The three-dimensional descriptors discussed in Sec. 8.4 would have a better chance, but even they would yield incomplete information when bodies are occluded. As an example of this type of reasoning, the reader would have little difficulty in arriving at a detailed interpretation of the objects in Fig. 8.63, with the exception of the object occluded by the screwdriver. The decision to look at the scene from a different viewpoint (Fig. 8.64) to resolve the issue would be a natural reaction in an intelligent observer, and the capability to know when interpretation of a scene, or part of a scene, is not an achievable task is just as important as correctly analyzing the scene. Ultimately, these issues will be resolved only through the development of methods capable of handling 3D information obtained either by means of direct measurements or via geometric reasoning techniques capable of inferring (but not necessarily quantifying) 3D relationships from intensity imagery. The line and junction labeling techniques discussed in Sec. 8.4 represent an attempt in this direction.

One of the most promising approaches in this direction is research in model-driven vision (Brooks [1981]). The basic idea behind this approach is to base the interpretation of a scene on discovering instances of matches between image data and 3D models of volumetric primitives or entire objects of interest. Vision based on 3D models has another important advantage: it provides an approach for handling variances in viewing geometry. Variability in the appearance of an object when viewed from different positions is one of the most serious problems in machine vision. Even in two-dimensional situations where the viewing geometry is fixed, object orientation can strongly influence recognition performance if not handled properly (the reader will recall numerous comments made about this earlier in the chapter).
One of the advantages of a model-driven approach is that, depending on a known viewing geometry, it is possible to project the 3D model onto the imaging plane in that orientation (using the imaging geometry concepts discussed in Chap. 7) and thus simplify the match between an unknown object and what the system would expect to see from a given viewpoint. [Figure 8.63: oblique view of a three-dimensional scene. Figure 8.64: another view of the scene shown in Fig. 8.63.]

8.7 CONCLUDING REMARKS

The focus of the discussion in this chapter is on concepts and techniques of machine vision with a strong bias toward industrial applications. Although vision is inherently a three-dimensional problem, most present industrial systems operate on image data which are often idealized via the use of specialized illumination techniques and a fixed viewing geometry. The problems encountered when these constraints are relaxed are addressed briefly in Secs. 8.4 and 8.6.

As indicated in Sec. 8.2, segmentation is one of the most important processes in the early stages of a machine vision system; consequently, a significant portion of this chapter is dedicated to this topic. Following segmentation, the next task of a vision system is to form a set of descriptors which will uniquely identify the objects of a particular class. As indicated in Sec. 8.3, the key in selecting descriptors is to minimize their dependence on object size, location, and orientation. Our treatment of recognition techniques has been at an introductory level; this is a broad area in which dozens of books and thousands of articles have been written, and the references at the end of this chapter provide a pointer for further reading on both the decision-theoretic and structural aspects of pattern recognition and related topics.

REFERENCES

Further reading on the local analysis concepts discussed in Sec. 8.2.1 may be found in the book by Rosenfeld and Kak [1982]. The Hough transform was first proposed by P. V. C. Hough [1962] in a U.S. patent and later popularized by Duda and Hart [1972]; a generalization of the Hough transform for detecting arbitrary shapes has been proposed by Ballard [1981]. The material on graph-theoretic techniques is based on two papers by Martelli [1972, 1976]; additional reading on graph searching techniques may be found in Nilsson [1971, 1980]. Edge following may also be approached from a dynamic programming point of view; an interesting approach based on a minimum-cost search is given in Ramer [1975].

The optimum thresholding approach discussed in Sec. 8.2.2 was first utilized by Chow and Kaneko [1972] for detecting boundaries in cineangiograms (x-ray pictures of a heart which has been injected with a dye). The book by Rosenfeld and Kak [1982] contains a number of approaches for threshold selection
and evaluation. Our use of boundary characteristics for thresholding is based on a paper by White and Rohrer [1983], and the discussion on using several variables for thresholding is from Gonzalez and Wintz [1977]. The gradient operator discussed in Sec. 8.2 was developed by Zucker and Hummel [1981], and a related approach has been used by Shirai [1979] for segmenting range data. An overview of region-oriented segmentation (Sec. 8.2.3) is given in a paper by Zucker [1976]; see also Brice and Fennema [1970], Horowitz and Pavlidis [1974], and Ohlander et al. [1979]. The concept of a quad tree was originally called regular decomposition (Klinger [1972, 1976]). The material on dynamic scene analysis is based on two papers by Jain [1981, 1983]; other approaches may be found in Thompson and Barnard [1981], Nagel [1981], Webb and Aggarwal [1981], Rajala et al. [1983], and Aggarwal and Badler [1980].

The chain code representation discussed in Sec. 8.3.1 was first proposed by Freeman [1961, 1974]. The book by Pavlidis [1977] contains a comprehensive discussion of techniques for polygonal approximations. Further reading on signatures may be found in Ambler et al. [1975]. The discussion on shape numbers is based on the work of Bribiesca and Guzman [1980] and Bribiesca [1981]. Further reading on Fourier descriptors may be found in Zahn and Roskies [1972] and Persoon and Fu [1977]; for a discussion of 3D Fourier descriptors see Wallace and Mitchell [1980], and the extraction of a skeleton using Fourier descriptors is discussed by Persoon and Fu [1977]. The moment-invariant approach is due to Hu [1962]; this technique has been extended to three dimensions by Sadjadi and Hall [1980]. The material on skeletons is based on a paper by Naccache and Shinghal [1984], which also contains an extensive set of references to other work on skeletons; Davies and Plummer [1981] address some fundamental issues on thinning which complement our discussion of this topic. Texture descriptors have received a great deal of attention during the past few years; for further reading on the statistical aspects of texture see Haralick et al. [1973], Haralick [1978], Bajcsy and Lieberman [1976], and Cross and Jain [1983], and on structural texture see Lu and Fu [1978] and Tomita et al. For further details on generalized cones (Sec. 8.3.4) see Agin [1972], Nevatia and Binford [1977], Shani [1980], and Marr [1979]. Further reading for the material in Sec. 8.3 may be found in Gonzalez and Wintz [1977].

Early work on line and junction labeling for scene analysis (Sec. 8.4) may be found in Roberts [1965] and Guzman [1969]; a more comprehensive utilization of these ideas may be found in Waltz [1972, 1976]. For a more recent survey of work in this area see Barrow and Tenenbaum [1981]; additional reading on this topic may be found in Barrow and Tenenbaum [1977], Marr [1979], and Ballard and Brown [1982].

For further reading on the decision-theoretic approach discussed in Sec. 8.5.1, and on optimum discrimination, see the book by Tou and Gonzalez [1974]. The material in Sec. 8.5.2 dealing with matching shape numbers is based on a paper by Bribiesca and Guzman [1980], and the string matching results are from Sze and Yang [1981]. For further reading on structural pattern recognition see the books by Pavlidis [1977], Gonzalez and Thomason [1978], and Fu [1982]. A set of survey papers on the topics discussed in Sec. 8.6 has been compiled by Brady [1981]; further reading for that material may be found in Dodd and Rossol [1979] and in Ballard and Brown [1982].

PROBLEMS

8.1 (a) Develop a general procedure for obtaining the normal representation of a line given its slope-intercept equation y = ax + b. (b) Find the normal representation of the line y = -2x + 1.

8.2 (a) Superimpose on Fig. 8.7 all the possible edges given by the graph in Fig. 8.8. (b) Compute the cost of the minimum-cost path.

8.3 Find the edge corresponding to the minimum-cost path in the subimage shown, where the numbers in parentheses indicate intensity. Assume that the edge starts in the first column and ends in the last column.

8.4 Suppose that an image has the following intensity distributions, where p1(z) corresponds to the intensity of objects and p2(z) corresponds to the intensity of the background. Assuming that P1 = P2, find the optimum threshold between object and background pixels.

8.5 Segment the image shown using the split-and-merge procedure discussed in Sec. 8.2.3. Let P(Ri) = TRUE if all pixels in Ri have the same intensity. Show the quadtree corresponding to your segmentation.
8.6 (a) Show that redefining the starting point of a chain code so that the resulting sequence of numbers forms an integer of minimum magnitude makes the code independent of where we initially start on the boundary. (b) What would be the normalized starting point of the chain code 11076765543322?

8.7 (a) Show that using the first difference of a chain code normalizes it to rotation. (b) Compute the first difference of the code 0101030303323232212111.

8.8 (a) Plot the signature of a square boundary using the tangent-angle method discussed in Sec. 8.3.1. Assume that the square is aligned with the x and y axes, and let the x axis be the reference line. Start at the corner closest to the origin. (b) Repeat for the slope density function.

8.9 Give the fewest number of moment descriptors that would be needed to differentiate between the shapes shown in the figure.

8.10 (a) Show that the rubberband polygonal approximation approach discussed in Sec. 8.3.1 yields a polygon with minimum perimeter. (b) Show that if each cell corresponds to a pixel on the boundary, then the maximum possible error in that cell is √2 d, where d is the grid distance between pixels.

8.11 (a) What would be the effect on the resulting polygon if the error threshold were set to zero in the merging method discussed in Sec. 8.3.1? (b) What would be the effect on the splitting method?

8.12 (a) What is the order of the shape number in each of the figures shown? (b) Obtain the shape number for the fourth figure.

8.13 Compute the mean and variance of a four-level image whose histogram values p(z1), p(z2), p(z3), and p(z4) are as given, with z1 = 0, z2 = 1, z3 = 2, and z4 = 3.

8.14 Obtain the gray-level co-occurrence matrix of a 5 × 5 image composed of a checkerboard of alternating 1's and 0's if (a) P is defined as "one pixel to the right" and (b) "two pixels to the right." Assume that the top left pixel has value 0.

8.15 Consider a checkerboard image composed of alternating black and white squares, each of size m × m. Give a position operator that would yield a diagonal co-occurrence matrix.

8.16 (a) Show that the medial axis of a circular region is a single point at its center. (b) Sketch the medial axis of a rectangle, the region between two concentric circles, and an equilateral triangle.

8.17 (a) Show that the boolean expression given in Eq. (8.3-6) implements the conditions given by the four windows in Fig. 8.40. (b) Draw the windows corresponding to B0 in Eq. (8.3-7).

8.18 Draw a trihedral object which has a junction of the form shown.

8.19 Show that using Eq. (8.5-4) to classify an unknown pattern vector x* is equivalent to using Eq. (8.5-3).

8.20 Show that D(A, B) = 1/k satisfies the three conditions given in Eq. (8.5-7).

8.21 Show that B = max(|C1|, |C2|) − A in Eq. (8.5-8) is zero if and only if C1 and C2 are identical strings.

CHAPTER NINE

ROBOT PROGRAMMING LANGUAGES

Observers are not led by the same physical evidence to the same picture of the universe unless their linguistic backgrounds are similar or can in some way be calibrated.
Benjamin Lee Whorf

9.1 INTRODUCTION

The discussion in the previous chapters focused on kinematics, dynamics, control, trajectory planning, sensing, and vision for computer-based manipulators. The algorithms used to accomplish these functions are usually embedded in the controlling software modules. A major obstacle in using manipulators as general-purpose assembly machines is the lack of suitable and efficient communication between the user and the robotic system, so that the user can direct the manipulator to accomplish a given task. There are several ways to communicate with a robot, and three major approaches to achieve it are discrete word recognition, teach and playback, and high-level programming languages.

Current state-of-the-art speech recognition systems are quite primitive and generally speaker-dependent. These systems can recognize a set of discrete words from a limited vocabulary and usually require the user to pause between words. Although it is now possible to recognize discrete words in real time due to faster computer components and efficient processing algorithms, the usefulness of discrete word recognition in describing a robot task is quite limited in scope. Moreover, speech recognition generally requires a large memory or secondary storage to store speech data, and it usually requires a training period to build up speech templates for recognition.

Teach and playback, also known as guiding, is the most commonly used method in present-day industrial robots. The method involves teaching the robot by leading it through the motions the user wishes the robot to perform. Teach and playback is typically accomplished by the following steps: (1) leading the robot in slow motion using manual control through the entire assembly task, recording the joint angles of the robot at appropriate locations in order to replay the motion; (2) editing and playing back the taught motion; and (3) if the taught motion is correct, running the robot at an appropriate speed in a repetitive mode. Leading the robot in slow motion usually can be achieved in several ways: using a joystick, a set of pushbuttons (one for each joint), or a master-slave manipulator system.
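The record-and-replay idea behind teach and playback, described in more detail below, can be pictured with a short sketch (Python with NumPy; the controller interface send_to_controller is hypothetical, and a production system would use smoother interpolation and velocity limits than the straight-line interpolation used here).

    import numpy as np

    recorded = []                              # joint-angle set-points taught by guiding

    def record(joint_angles):
        # called whenever the operator presses the "record" button
        recorded.append(np.asarray(joint_angles, dtype=float))

    def playback(send_to_controller, steps_between=20):
        # replay the taught trajectory, linearly interpolating between set-points
        for q0, q1 in zip(recorded, recorded[1:]):
            for t in np.linspace(0.0, 1.0, steps_between, endpoint=False):
                send_to_controller((1.0 - t) * q0 + t * q1)
        if recorded:
            send_to_controller(recorded[-1])   # finish exactly at the last set-point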
With this method, the user moves the robot manually through the workspace, the most commonly used device being a manual box with pushbuttons, and presses a button to record any desired angular position of the manipulator. The set of angular positions that are recorded form the set-points of the trajectory that the manipulator has traversed. These position set-points are then interpolated by numerical methods, and the robot is "played back" along the smoothed trajectory. In the edit-playback mode, the user can edit the recorded angular positions and make sure that the robot will not collide with obstacles while completing the task. In the run mode, the robot runs repeatedly according to the edited and smoothed trajectory. If the task is changed, then the above three steps are repeated. The advantages of this method are that it requires only a relatively small memory space to record angular positions and that it is simple to learn. The main disadvantage is that it is difficult to utilize this method for integrating sensory feedback information into the control system.

High-level programming languages provide a more general approach to solving the human-robot communication problem. In the past decade, robots have been successfully used in areas such as arc welding and spray painting using guiding (Engelberger [1980]). These tasks require no interaction between the robot and the environment and can be easily programmed by guiding. However, the use of robots to perform assembly tasks requires high-level programming techniques because robot assembly usually relies on sensory feedback, and this type of unstructured interaction can only be handled by conditionally programmed methods.

Current approaches to robot programming can be classified into two major categories: robot-oriented programming and object-oriented, or task-level, programming. In robot-oriented programming, an assembly task is explicitly described as a sequence of robot motions. The robot is guided and controlled by the program throughout the entire task, with each statement of the program roughly corresponding to one action of the robot. On the other hand, task-level programming describes the assembly task as a sequence of positional goals of the objects rather than the motions of the robot needed to achieve these goals, and hence no explicit robot motion is specified. These approaches are discussed in detail in the following two sections.

Robot programming is substantially different from traditional programming. We can identify several considerations which must be handled by any robot programming method: the objects to be manipulated by a robot are three-dimensional objects which have a variety of physical properties; robots operate in a spatially complex environment; the description and representation of three-dimensional objects in a computer are imprecise; and sensory information has to be monitored, manipulated, and properly utilized.
9.2 CHARACTERISTICS OF ROBOT-LEVEL LANGUAGES

The most common approach taken in designing a robot-level language is to extend an existing high-level language to meet the requirements of robot programming. To a certain extent this approach is ad hoc, and there are no guidelines on how to implement the extension. We can easily recognize several key characteristics that are common to all robot-oriented languages by examining the steps involved in developing a robot program. Consider the task of inserting a bolt into a hole (Fig. 9.1, a simple robotic insertion task). This requires moving the robot to the feeder, picking up the bolt, moving it to the beam, and inserting the bolt into one of the holes. Typically, the steps taken to develop the program are:

1. The workspace is set up and the parts are fixed by the use of fixtures and feeders.
2. The location (orientation and position) of the parts (feeder, beam, etc.) and their features (beam_bore, bolt_grasp, etc.) are defined using the data structures provided by the language.†
3. The assembly task is partitioned into a sequence of actions, such as moving the robot, grasping objects, and performing an insertion.
4. Sensory commands are added to detect abnormal situations (such as inability to locate the bolt while grasping) and to monitor the progress of the assembly task.
5. The program is debugged and refined by repeating steps 2 to 4.

† The reader will recall that the use of the underscore symbol is a common practice in programming languages to provide an effective identity in a variable name and thus improve legibility.

The important characteristics we recognize in these steps are position specification (step 2), motion specification (step 3), and sensing (step 4). These characteristics are discussed in detail in this section.
We will use the languages AL (Mujtaba et al. [1982]) and AML (Taylor et al. [1983]) as examples. The choice of these two languages is not arbitrary: they represent the state of the art in robot-oriented programming languages. A brief summary of both is given in Table 9.1.

Table 9.1 A brief summary of the AL and AML robot programming languages

AL was developed by Stanford University. It has influenced the design of many robot-oriented languages and is still actively being developed. It provides a large set of commands to handle the requirements of robot programming and also supports high-level programming features. Currently AL can be executed on a VAX computer, and real-time control of the arms is performed on a stand-alone PDP-11. Its characteristics are:

    High-level language with features of ALGOL and Pascal
    Supports both robot-level and task-level specification
    Compiled into a low-level language and interpreted on a real-time control machine
    Has real-time programming language constructs such as synchronization, concurrent execution, and on-conditions
    ALGOL-like data and control structures
    Support for world modeling

AML was developed by IBM and is currently available as a commercial product for the control of IBM's robots; its approach is different from that of AL. It is the control language for the IBM RS-1 robot and runs on a Series/1 computer (or IBM personal computer) which also controls the robot. The RS-1 robot is a cartesian manipulator with 6 degrees of freedom; its first three joints are prismatic and its last three joints are rotary. The design philosophy of AML is to provide a system environment where different robot programming interfaces may be built. It has a rich set of primitives for robot operations and allows users to design high-level commands according to their particular needs. Its characteristics are:

    Provides an environment where different user interfaces can be built
    Supports features of LISP-like and APL-like constructs
    Supports data aggregation
    Supports joint-space trajectory planning subject to position and velocity constraints
    Provides absolute and relative motions
    Provides sensor monitoring that can interrupt motion

9.2.1 Position Specification

In robot assembly, the robot and the parts are generally confined to a well-defined workspace. The parts are usually restricted by fixtures and feeders to minimize positional uncertainties; assembly from a set of randomly placed parts requires vision and is not yet a common practice in industry.

The most common approach used to describe the orientation and position of objects in the workspace is by coordinate frames, usually represented as 4 × 4 homogeneous transformation matrices. A frame consists of a 3 × 3 submatrix (specifying the orientation) and a vector (specifying the position), both defined with respect to some base frame. The approach taken by AL is to provide predefined data structures for frames (FRAME), rotational matrices (ROT), and vectors (VECTOR), all of them in cartesian coordinates. AML instead provides a general structure called an aggregate, which allows the user to design his or her own data structures; the AML frames defined in Table 9.2 are in cartesian coordinates with the format <vector, matrix>, where vector is an aggregate of three scalars representing position and matrix is an aggregate of three vectors representing orientation. Table 9.2 shows the AL and AML definitions for the three base frames base, beam, and feeder shown in Fig. 9.1.

Table 9.2 AL and AML definitions for base frames

AL:
    base ← FRAME(nilrot, VECTOR(20, 0, 15)*inches);
    beam ← FRAME(ROT(Z, 90*deg), VECTOR(20, 15, 0)*inches);
    feeder ← FRAME(nilrot, VECTOR(25, 20, 0)*inches);

    { Notes: nilrot is a predefined frame which has value ROT(Z, 0*deg).
      The "←" is the assignment operator, and a semicolon terminates a statement.
      The "*" is a type-dependent multiplication operator; here it is used to
      append units to the elements of the vector. }

AML:
    base = <<20, 0, 15>, EULERROT(<0, 0, 0>)>;
    beam = <<20, 15, 0>, EULERROT(<0, 0, 90>)>;
    feeder = <<25, 20, 0>, EULERROT(<0, 0, 0>)>;

    -- Note: EULERROT is a subroutine which forms the rotation matrix given the angles.

In order to explain the notation used in Table 9.2, the first AL statement establishes the coordinate frame base, whose principal axes are parallel (nilrot implies no rotation) to the principal axes of the reference frame and whose origin is at location (20, 0, 15) inches from the origin of the reference frame. The second statement establishes the coordinate frame beam, whose principal axes are rotated 90° about the Z axis of the reference frame and whose origin is at location (20, 15, 0) inches from the origin of the reference frame. The third statement has the same meaning as the first, except for location. The meaning of the corresponding AML statements is exactly the same.
A convenient way of referring to the features of an object is to define a frame (with respect to the object's base frame) for each feature. An advantage of using homogeneous transformation matrices is that defining a frame relative to a base frame can be done simply by postmultiplying the base frame by a transformation matrix. AL provides a matrix multiplication operator (*) and a data structure TRANS (a transformation consisting of a rotation and a translation) to represent transformation matrices; AML has no built-in matrix multiplication operator, but a system subroutine, DOT, which multiplies two matrices, is provided. Table 9.3 lists the AL and AML statements used to define the feature frames T6, E, bolt_tip, bolt_grasp, and beam_bore with respect to their base frames, as indicated in Fig. 9.1. For example, the AL statement

    T6 ← base * TRANS(ROT(X, 180*deg), VECTOR(15, 0, 0)*inches);

establishes the coordinate frame T6, whose principal axes are rotated 180° about the X axis of the base coordinate frame and whose origin is at location (15, 0, 0) inches from the origin of base, and the statement

    E ← T6 * TRANS(nilrot, VECTOR(0, 0, 5)*inches);

establishes the frame E, whose principal axes are parallel to the principal axes of the T6 coordinate frame (nilrot implies no rotation) and whose origin is at location (0, 0, 5) inches from the origin of T6. (The note in Table 9.3 reminds the reader that nilvect is a predefined vector with value VECTOR(0, 0, 0)*inches.) The remaining statements define bolt_tip relative to feeder, bolt_grasp relative to bolt_tip, and beam_bore relative to beam in the same manner, and the corresponding AML statements, e.g., T6 = DOT(base, <<15, 0, 0>, EULERROT(<180, 0, 0>)>), have exactly the same meaning. Note that the frames defined for the arm (T6 and E) are not needed in AL, because AL uses an implicit frame to represent the position of the end-effector and does not allow access to intermediate frames.
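The frame arithmetic used by both languages is ordinary homogeneous-transformation algebra and is easy to reproduce outside AL or AML. The sketch below (Python with NumPy) builds the three base frames of Table 9.2 and then attaches a feature frame by postmultiplication; the particular feature offset used here is illustrative only and is not taken from Table 9.3.

    import numpy as np

    def frame(rotation, position):
        # 4 x 4 homogeneous transformation: 3 x 3 orientation submatrix plus position
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = position
        return T

    def rot_z(deg):
        c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    # base frames of Table 9.2 (units: inches)
    base   = frame(np.eye(3), [20.0,  0.0, 15.0])
    beam   = frame(rot_z(90), [20.0, 15.0,  0.0])
    feeder = frame(np.eye(3), [25.0, 20.0,  0.0])

    # a feature frame defined relative to its base frame is a postmultiplication;
    # the 2-inch offset along the feeder's Z axis is an illustrative value
    bolt_feature = feeder @ frame(np.eye(3), [0.0, 0.0, 2.0])
    print(bolt_feature[:3, 3])     # position of the feature in world coordinates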
As parts are moved or attached to other objects, the frames are adjusted to reflect the current state of the world (see Fig. 9.2b). Figure 9.2a shows the relationships between the frames defined in Tables 9.2 and 9.3 [Figure 9.2: relationships between the frames]. As the number of features and objects increases, the relationships between coordinate frames become complicated and difficult to manage. Another way of acquiring the position and orientation of an object is to use the robot itself as a pointing device to gather the information interactively. POINTY (Grossman and Taylor [1978]), a system designed for AL, allows the user to lead the robot through the workspace (by hand or by a pendant) and, by pointing the hand (equipped with a special tool) at objects, it generates AL declarations similar to those shown in Tables 9.2 and 9.3. This eliminates the need to measure the distances and angles between frames, which can be quite tedious.

Although coordinate frames are quite popular for representing robot configurations, they do have some limitations. The natural way to represent robot configurations is in the joint-variable space rather than the cartesian space. Since the inverse kinematics problem gives nonunique solutions, the robot's configuration is not uniquely determined given a point in the cartesian space. Furthermore, as the number of features and objects increases, the number of computations required also increases significantly.

9.2.2 Motion Specification

The most common operation in robot assembly is the pick-and-place operation. It consists of moving the robot from an initial configuration to a grasping configuration, picking up an object, and moving to a final configuration. The motion is usually specified as a sequence of positional goals for the robot to attain. However, specifying only the initial and final configurations is not sufficient: the path is planned by the system without considering the objects in the workspace, and obstacles may be present on the planned path. For example, in Fig. 9.3 (trajectory of the robot), if a straight-line motion were used from point A to point C, the robot would collide with the beam; instead, intermediate point B must be used to provide a safe path. In order for the system to generate a collision-free path, the programmer must specify enough intermediate or via points on the path. Furthermore, as the robot's hand departs from its starting configuration or approaches its final configuration, physical constraints, such as an insertion, which require the hand to travel along an axis, and environmental constraints, such as moving in a crowded area, may prohibit certain movements of the robot. The programmer must also have control over various details of the motion, such as speed, acceleration, deceleration, and approach and departure directions, to produce a safe motion. One disadvantage of this type of specification is that the programmer must preplan the entire motion in order to select the intermediate points; describing a complex path as a sequence of points produces an unnecessarily long program, and the resulting path may produce awkward and inefficient motions.
The positional goals can be specified either in the joint-variable space or in the cartesian space, depending on the language. In AL, the motion is specified by using the MOVE command to indicate the destination frame to which the arm should move. Via points can be specified by using the keyword VIA followed by the frame of the via point, and AL provides the keyword WITH to attach constraint clauses to the MOVE command; the constraints can be an approach vector, a departure vector, or a time limit (see Tables 9.4 and 9.5). AML allows the user to specify motion in the joint-variable space (the user can write his or her own routines for motions specified in the cartesian space). Joints are specified by joint numbers (1 through 6), the motion can be either relative (DMOVE) or absolute (MOVE), and aggregates of the form <speed, acceleration, deceleration> can be added to the MOVE statement to control these details of the motion.

Table 9.4 Examples of AL and AML motion statements

AL:
    { Move arm from rest to frame A and then to bolt_grasp }
    MOVE barm TO A;
    MOVE barm TO bolt_grasp;
    { Another way of specifying the above movement }
    MOVE barm TO bolt_grasp VIA A;
    { Move along the current Z axis by 1 inch, i.e., move relative }
    MOVE barm TO ⊕ - 1*Z*inches;

    { Notes: barm is the name of the robot arm; ⊕ indicates the current location
      of the arm, which is equivalent to base * T6 * E; statements inside
      brackets { } are comments. }

AML:
    -- Move joints 1, 3, and 6 by 1 inch, 2 inches, and 5 degrees,
    -- respectively (relative move)
    DMOVE(<1, 3, 6>, <1, 2, 5>);
    -- Move joints 1 and 4 to 10 inches and 20 degrees,
    -- respectively (absolute move)
    MOVE(<1, 4>, <10, 20>);

    -- Note: statements preceded by "--" are comments.

Table 9.5 shows the AL statement for moving the robot from bolt_grasp to A with a departure direction along +Z of feeder and a time duration of 5 seconds (i.e., move slowly), together with AML statements for the gripper and for speed constraints.

Table 9.5 Examples of AL and AML motion statements

AL:
    { Move arm from bolt_grasp to A }
    MOVE barm TO A
        WITH DEPARTURE = Z WRT feeder
        WITH DURATION = 5*seconds;

    { Note: WRT (with respect to) generates a vector in the specified frame. }

AML:
    -- Open the hand to 2.5 inches
    MOVE(GRIPPER, 2.5);
    -- Move joints 1 and 4 to 10 inches and 20 degrees, with speed 1 inch/second,
    -- acceleration and deceleration 1 inch/second²
    MOVE(<1, 4>, <10, 20>, <1, 1, 1>);

Most languages provide only simple commands for gripper motion, so that sophisticated motions can be built from them. For a two-fingered gripper, one can either move the fingers apart (open) or move them together (close). Using the OPEN (for AL) and MOVE (for AML) primitives, the gripper can be programmed to move to a certain opening; in AL, for example, OPEN bhand TO 2.5*inches opens the hand to 2.5 inches. Both AL and AML use a predefined variable to indicate the gripper (bhand, the hand of barm, in AL; GRIPPER in AML). In general, gripper motions have to be tailored according to the environment and the task.
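One way to see how the destination, via points, and constraint clauses fit together is to collect them into a single motion request that a planner then expands into a sequence of positional goals. The sketch below (Python; the Move structure and plan function are our own illustration, not part of AL or AML) mirrors the statement MOVE barm TO bolt_grasp VIA A WITH DURATION = 5*seconds.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Move:
        destination: str
        via: List[str] = field(default_factory=list)   # intermediate (via) points
        duration_s: Optional[float] = None             # WITH DURATION = ...
        approach: Optional[str] = None                 # WITH APPROACH = ...
        departure: Optional[str] = None                # WITH DEPARTURE = ...

    def plan(move):
        # visit the via points in order before reaching the destination;
        # "@current" stands for the arm's present location
        return ["@current"] + list(move.via) + [move.destination]

    request = Move(destination="bolt_grasp", via=["A"], duration_s=5.0,
                   departure="+Z wrt feeder")
    print(plan(request))        # ['@current', 'A', 'bolt_grasp']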
For example, a part arriving on a conveyor belt may trip an optical sensor and activate the robot to pick up the part, or an action may be terminated if an abnormal condition has occurred. The flow of a robot program is usually governed by the sensory information acquired, and most languages provide the usual decision-making constructs, like "if ... then ... else ...", "case ...", "do ... until ...", and "while ... do ...", to control the flow of the program under different conditions.

Certain tasks require the robot to comply with external constraints. For example, insertion requires the hand to move along one direction only; any sideward forces may generate unwanted friction which would impede the motion. In order to perform this compliant motion, force sensing is needed. Table 9.6 illustrates the use of AL's force sensing commands to perform the insertion task with compliance. The first statement illustrates the use of force sensing to detect whether the hand is positioned correctly above the hole: the robot arm is moved downward slightly and, as it descends, the force exerted on the hand along the Z axis of the hand coordinate frame is returned by FORCE(Z). If the force exceeds 10 ounces, this indicates that the hand missed the hole and the task is aborted. The compliant motion itself is indicated by quantifying the motion statement with the amount of force allowed in each direction of the hand coordinate frame. In this case, forces are applied only along the Z axis of this frame: the hand exerts a downward force along Z while complying with any side forces along X and Y.

Table 9.6 Force sensing and compliant motion

AL:   { Test for presence of hole with force sensing }
      MOVE barm TO ⊙ - 1*Z*inches
         ON FORCE(Z) > 10*ounces DO ABORT("No Hole");
      { Insert bolt; exert downward force while complying with side forces }
      MOVE barm TO beam-bore
         WITH FORCE(Z) = -10*ounces
         WITH FORCE(X) = 0*ounces
         WITH FORCE(Y) = 0*ounces
         WITH DURATION = 3*seconds;

AML:  -- Define a monitor for the force sensors SLP and SRP;
      -- the monitor triggers if the sensor values leave the range 0 to F.
      fmons = MONITOR(<SLP, SRP>, 1, 0, F);
      -- Move joint 3 by 1 inch and stop if fmons is triggered.
      DMOVE(<3>, <1>, fmons);

Note: The general syntax of the monitor is MONITOR(sensors, test type, limit1, limit2).

9.2.4 Programming Support

A language without programming support (editor, debugger, etc.) is useless to the user. A sophisticated language must provide a programming environment that allows the user to develop and debug programs conveniently. Complex robot programs are difficult to develop and can be difficult to debug. Moreover, robot programming imposes additional requirements on the development and debugging facilities:

1. On-line modification and immediate restart. Since robot tasks require complex motions and long execution times, it is not always feasible to restart the program upon failure. The robot programming system must have the ability to allow programs to be modified on-line and restarted at any time.
2. Sensor outputs and program traces. Real-time interactions between the robot and the environment are not always repeatable, so the debugger should be able to record sensor values along with program traces.
3. Simulation.
This feature allows testing of programs without actually setting up robot and workspace. 0. 5)*inches).7 shows a complete AL program for performing the insertion task shown diagramatically in Fig. 0. grasped . 0)*inches). Keep in mind that a statement is not considered terminated until a semicolon is encountered. Real-time interactions between the robot and the environment are not always repeatable.1. 2.t1 ado n-' . 20. Table 9.0 The reader should realize by now that programming in a robot-oriented language is tedious and cumbersome. Simulation. tries . feeder -.feeder * TRANS(nilrot. the debugger should be able to record sensor values along with program traces. VECTOR(25. robot programming imposes additional requirements on the development and debugging facilitates: 1. Moreover. On-line modification and immediate restart.FRAME(nilrot. VECTOR(0. 9. C D . bolt-height .bolt-grasp * TRANS(nilrot. 0)*inches). Since robot tasks requires complex motions and long execution time.feeder * TRANS(nilrot. { Define base frames } beam .. beam-bore * TRANS(nilrot. 1)*inches). VECTOR(0. beam-bore -. nilvect).false.0. 0. } IF NOT grasped THEN ABORT("failed to grasp bolt").1*Z*inches. AND INTELLIGENCE Table 9. WITH DEPARTURE = Z WRT feeder.9*bolt_diameter.Z WRT feeder. END insertion. { Position the hand just above the bolt } MOVE barm TO bolt-grasp VIA A WITH APPROACH = . SENSING. 0 9.>- HO-: { Check whether the hole is there } .! WITH APPROACH = -Z WRT beam-bore. { Attempt to grasp the'bolt } DO CLOSE bhand TO 0.tries + 1. WITH FORCE(Z) = -10*ounces WITH FORCE(X) = O*ounces WITH FORCE(Y) = O*ounces WITH DURATION = 5*seconds. UNTIL grasped OR (tries > 3). 0. .3 CHARACTERISTICS OF TASK-LEVEL LANGUAGES A completely different approach in robot programming is by task-level program- ming. END ELSE grasped .462 ROBOTICS: CONTROL. IF bhand < bolt-diameter THEN BEGIN{ failed to grasp the bolt.0.w MOVE barin TOO . { Move the arm to B } MOVE barm TO B VIA A { Move the arm to D } MOVE barm TO D VIA C MOVE barm TO 0 . { Do insertion with compliance } MOVE barm TO beam-bore DIRECTLY . { Abort the operation if the bolt is not grasped in three tries. tries . VISION.true.7 (continued) { Open the hand } OPEN bhand TO bolt-diameter + 1*inches. The natural way to describe an assembly task is in terms of the objects +.1*Z*inches ON FORCE(Z) > 10*ounces DO ABORT("No hole"). try again } OPEN bhand TO bolt-diameter + 1*inches.-. it must have information about the objects and the robot itself. and the program generator then generates a program that will produce the desired input-output behavior (Barr et al. Based on this description.ROBOT PROGRAMMING LANGUAGES 463 being manipulated rather than by the robot motions. is. A task-level programming system allows the user to describe the task in a high-level language (task specification). For the task planner to generate a robot program that performs a given task. Cell decomposition CAD ."- . It should be noted that these three phases are . where objects are defined as constructions or combinations. cylinder). grasping position. Task-level languages make use of this fact and simplify the programming task. volume. Task-level programming. using regularized set operations (such as union. operand. [1981. The most common approach is constructive solid geometry (CSG). The primitives can be represented in various ways: phi may" `CJ 1. in fact. they are computationally related. a task planner will then consult a database not completely independent. Figure 9.3. 
Table 9.7 (continued)

      { Open the hand }
      OPEN bhand TO bolt-diameter + 1*inches;
      { Position the hand just above the bolt }
      MOVE barm TO bolt-grasp VIA A
         WITH APPROACH = -Z WRT feeder;
      { Attempt to grasp the bolt }
      DO
         CLOSE bhand TO 0.9*bolt-diameter;
         IF bhand < bolt-diameter THEN
            BEGIN  { failed to grasp the bolt, try again }
               OPEN bhand TO bolt-diameter + 1*inches;
               MOVE barm TO ⊙ - 0.1*Z*inches;
               tries ← tries + 1;
            END
         ELSE grasped ← true;
      UNTIL grasped OR (tries > 3);
      { Abort the operation if the bolt is not grasped in three tries }
      IF NOT grasped THEN ABORT("failed to grasp bolt");
      { Move the arm to B }
      MOVE barm TO B VIA A
         WITH DEPARTURE = Z WRT feeder;
      { Move the arm to D }
      MOVE barm TO D VIA C
         WITH APPROACH = -Z WRT beam-bore;
      { Check whether the hole is there }
      MOVE barm TO ⊙ - 0.1*Z*inches
         ON FORCE(Z) > 10*ounces DO ABORT("No hole");
      MOVE barm TO ⊙ + 0.1*Z*inches;
      { Do insertion with compliance }
      MOVE barm TO beam-bore DIRECTLY
         WITH FORCE(Z) = -10*ounces
         WITH FORCE(X) = 0*ounces
         WITH FORCE(Y) = 0*ounces
         WITH DURATION = 5*seconds;
END insertion;

9.3 CHARACTERISTICS OF TASK-LEVEL LANGUAGES

A completely different approach to robot programming is task-level programming. The natural way to describe an assembly task is in terms of the objects being manipulated rather than in terms of the robot motions; task-level languages make use of this fact and simplify the programming task. A task-level programming system allows the user to describe the task in a high-level language (task specification); a task planner then consults a database (the world models) and transforms the task specification into a robot-level program (robot program synthesis) that will accomplish the task. The concept of task planning is quite similar to the idea of automatic program generation in artificial intelligence, where the user supplies the input-output requirements of a desired program and the program generator produces a program that will exhibit the desired input-output behavior (Barr et al. [1981, 1982]). Task-level programming, like automatic program generation, is still in the research stage, with many problems unsolved. In the remaining sections we discuss the problems encountered in task planning and some of the solutions that have been proposed to solve them.

Conceptually, task planning can be divided into three phases: world modeling, task specification, and program synthesis. It should be noted that these three phases are not completely independent; in fact, they are computationally related. Figure 9.4 shows one possible architecture for the task planner: the task specification is decomposed into a sequence of subtasks by the task decomposer, and information such as initial state, final state, grasping position, operand, specifications, and attachment relations is extracted; the subtasks then pass through the subtask planner, which consults the knowledge base and world models and generates the required robot program.

Figure 9.4 Task planner.

9.3.1 World Modeling

World modeling is required to describe the geometric and physical properties of the objects (including the robot) and to represent the state of the assembly of objects in the workspace.

Geometric and Physical Models. A geometric model provides the spatial information (dimension, volume, shape) of the objects in the workspace. As discussed in Chap. 8, numerous techniques exist for modeling three-dimensional objects (Baer et al. [1979], Requicha [1980]). The most common approach is constructive solid geometry (CSG), where objects are defined as constructions or combinations, using regularized set operations (such as union and intersection), of primitive objects (such as cubes and cylinders). The primitives can be represented in various ways:

1. A set of edges and points
2. A set of surfaces
3. Generalized cylinders
4. Cell decomposition

In the AUTOPASS system (Lieberman and Wesley [1977]), objects are modeled with a modeling system called GDP (geometric design processor) (Wesley et al. [1980]), which uses a procedural representation to describe objects. The basic idea is that each object is represented by a procedure name and a set of parameters; within this procedure, the shape of the object is defined by calls to other procedures representing other objects or set operations. GDP provides a set of primitive objects (all of them polyhedra), which can be cuboid, cylinder, wedge, cone, hemisphere, laminum, and revolute. These primitives are internally represented as lists of surfaces, edges, and points, which are defined by the parameters in the corresponding procedure. For example, the statement CALL SOLID(CUBOID, "Block", xlen, ylen, zlen)
will invoke the procedure SOLID to define a rectangular box called Block with dimensions xlen, ylen, and zlen. More complicated objects can then be defined by calling other procedures and applying the MERGE subroutine to them.

In addition to geometric information, physical properties such as inertia, mass, and coefficient of friction may limit the type of motion that the robot can perform. Instead of storing each of these properties explicitly, some of them can be derived from the object model. However, no model can be 100 percent accurate, and identical parts may have slight differences in their physical properties; to deal with this, tolerances must be introduced into the model (Requicha [1983]).

Representing World States. The task planner must be able to simulate the assembly steps in order to generate the robot program. Each assembly step can be succinctly represented by the current state of the world, and one way of representing these states is to use the configurations of all the objects in the workspace. AL, for example, provides an attachment relation called AFFIX that allows coordinate frames to be attached to other frames; this is equivalent to physically attaching one part to another, so that if one of the parts moves, the other attached parts also move, and AL updates the locations of the attached frames accordingly. Table 9.8 shows the GDP description of the bolt used in the insertion task discussed earlier: the bolt is defined by two calls to SOLID, one for the cylindrical shaft and one for the head, followed by a call to MERGE that forms their union.
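The same procedural idea can be sketched in a general-purpose language. In the toy Python version below, a bolt is built from two cylinder primitives combined by a set union; the Cylinder and Solid classes and their fields are invented for illustration (GDP's actual primitives are polyhedral approximations with many more parameters), and only the parameter names are borrowed from the bolt description.

from dataclasses import dataclass
from typing import List

@dataclass
class Cylinder:
    name: str
    radius: float
    height: float
    z_offset: float = 0.0        # where the cylinder starts along the bolt axis

@dataclass
class Solid:
    name: str
    parts: List[Cylinder]        # interpreted as the union of its parts

def bolt(shaft_height, shaft_radius, head_height, head_radius):
    # Procedural description of a bolt: a shaft topped by a wider head (union of two cylinders),
    # analogous to the two SOLID calls and the MERGE call in the GDP description.
    shaft = Cylinder("Shaft", shaft_radius, shaft_height)
    head = Cylinder("Head", head_radius, head_height, z_offset=shaft_height)
    return Solid("Bolt", [shaft, head])

print(bolt(shaft_height=1.0, shaft_radius=0.25, head_height=0.3, head_radius=0.5))

The essential point, as in GDP, is that the model is a procedure: calling it with different parameters yields a whole family of bolts rather than a single fixed shape.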
Not even omitting the assembly sequence is possible. then they would represent the task of removing Block3 from the stack of blocks and placing it on the table.9 can be used to describe the two situations depicted in fig.7- '31(D description. If we assume that state A is the initial state and state B is the goal state. the graph is updated to reflect the current state of the assembly. a serious limitation of this method is that it does not specify all the necessary information needed to describe an operation. This form of description is quite similar to those used in an industrial assembly sheet. C1. The advantage of using this type of representation is that they are easy to interpret by a human. easy to specify and modify. Face 3 Block 2 Block 1 Block 3 Block 2 Block 1 Face I Table Table Figure 9. We define a spatial relation AGAINST to indicate that two surfaces are touching each other. +'' . However. without having to give the assembly steps. Most robot-oriented languages have adopted this type of specification. An entire task like building a water pump could then be specified by the command "build water pump. Typically. The current approach is to use an input language with a well-defined syntax and semantics. The syntax of these statements is complicated (see Table 9. which can be planar or spherical faces. With the AFFIX statements. and vertices. the two operations in the block world ti. are defined by coordinate frames similar to those used in AL. and COPLANAR to specify the relationship between object features. It divides its assembly related statements into three groups: 1. 2. Object features. DRIVE IN bolt AT bolt-grasp SUCH THAT TORQUE IS EQ 12. example can be described as: PLACE Block3 SO THAT (Block2_face3 AGAINST Block3_facel) PLACE Block3 SO THAT (Block3_facel AGAINST Table) The spatial relationships are then extracted and solved for the configuration constraints on the objects required to perform the task. For example.1 can be specified as AFFIX bolt-tip TO barm. an object frame can be attached to barm to indicate that the hand is holding the object.. For example. State change statement: Describes an assembly operation such as placement and adjustment of parts.0 IN-LBS USING air-driver. Popplestone et al. AUTOPASS also uses this type of specification but it has a more elaborate syntax. Fastener statement: Describes a fastening operation. Tools statement: Describes the type of tools to use. For example. [1978] have proposed a language called RAPT which uses contact relations AGAINST.9 State description of block world State A: (Block]-face] AGAINST table) (Blockl_face3 AGAINST Block2_facel) (Block3_facel AGAINST Table) State B: (Block]-face] AGAINST Table) (Block]_face3 AGAINST Block2_facel) (Block2_face3 AGAINST Block3_facel) AL provides a limited way of describing a task using this method. FIT.10). PLACE bolt ON beam SUCH THAT bolt-tip IS ALIGNED WITH beam-bore. -fl . the inserting process in Fig.ROBOT PROGRAMMING LANGUAGES 467 Table 9. MOVE bolt-tip TO beam-bore. Then moving the object to another point can be described by moving the object frame instead of the arm. 9. would be used to describe the operation of inserting a bolt and tightening it. cylindrical shafts and holes. 3. edges. Specifies the constraints to be met during the execution of the command. L'. 
Grasping planning is probably the most important problem in task planning because the way the object is grasped affects all subsequent operations.10 The syntax of the state change and tool statements in AUTOPASS State change statement PLACE <object> <preposition> <object> <grasping> <final-condition> <constraint> <then-hold> where <object> <preposition> <grasping> <constraint> <then-hold> Tool statement Is a symbolic name for the object. Specifies tool operation parameters such as direction of rotation and speed. Before the task planner can perform the planning. Specifies the list of accessories.. CONTROL.3 Robot Program Synthesis are grasping planning.. Indicates that the hand is to remain in position on completion of the command. CU. Specifies how the object should be grasped. SENSING. it must first convert the symbolic task specification into a usable form. it is used to determine the type of operation.. One approach is to obtain configuration constraints from the symbolic relationships. AND INTELLIGENCE Table 9.468 ROBOTICS..3. The way . VISION. Is either IN or ON. . Indicates that the hand is to remain in position on completion of the command. Specifies the final condition to be satisfied at the completion of the command. These equations are then solved symbolically by using a set of rewrite rules to simplify them. The major steps in this phase bolic relationships and forms a set of matrix equations with the constraint parameters of the objects as unknowns. Specifies new attachment. The RAPT interpreter extracts the symCL. OD) a. Specifies where the tool is to be operated. .U. OPERATE <tool> <load-list> <at-position> <attachment> <final-condition> < tool-parameters > < then-hold > where < tool > < load-list > < at-position > < attachment> < final-condition > A < tool-parameters > < then-hold > Specifies the tool to be used. The synthesis of a robot program from a task specification is one of the most important and most difficult phases of task planning.0-0 ate' . The result obtained is a set of constraints on the configurations of each object that must be satisfied to perform the operation. V V V V 9. motion planning. and plan checking. 3. The main advantage of this method is its simplicity and most of the tools needed are already available in the geometric modeling system. Would lead to collisions with other objects. A guarded approach to the destination 4. the method used to choose a grasp configuration is a variation of the following procedure: 1.g. a correction is made to avoid the collision (Lewis and Bejczy [1973]). The set is then pruned according to whether they: Are reachable by the robot. A set of candidate grasping configurations are chosen based on: Object geometry (e. once grasped.. particularly when the workspace is clustered with obstacles.-y 1. for a parallel jaw gripper. A compliant motion to achieve the goal configuration '-3 A'. Typically. The final configuration is selected among the remaining configurations (if any) such that: It would lead to the most stable grasp. 2. .`' e-. In this method. Several algorithms have been proposed for planning collision-free path and they can be grouped into three classes: .ROBOT PROGRAMMING LANGUAGES 469 the robot can grasp an object is constrained by the geometry of the object being grasped and the presence of other objects in the workspace. a candidate path is chosen and the path is tested for collision at a set of selected configurations. 
Stability (one heuristic is to have the center of mass of the object lie within the fingers).e It would be the most unlikely to have a collision in the presence of position errors. may One of the important problems here is planning the collision-free motion. A usable grasping configuration is one that is reachable and stable. A free motion to the desired configuration without collision 3. generating the correction is difficult. The robot must be able to reach the object without colliding with other objects in the workspace and. After the object is grasped. and only a subset of constraints are considered. the object must be stable during subsequent motions of the robot. the robot must move the object to its destination and accomplish the operation. Hypothesis and test. 0'n . Most of the current methods for grasp planning focus only on finding reachable grasping positions.. A guarded departure from the current configuration 2. a good place to grasp is on either side of parallel surfaces). However. Uncertainty reduction. . This motion can be divided into four phases: 1. If a collision occurs. Grasping in the presence of uncertainties is more difficult and often involves the use of sensing. ' 'C7 . These functions have the characteristic that. This method has the advantage that adding obstacles and constraints is easy. Along its tangent is the positional freedom and along its normal is the force freedom.4 CONCLUDING REMARKS We have discussed the characteristics of robot-oriented languages and task-level programming languages.CJ path reduces to comparing the swept volume of the object with the swept ate) volume of the free space. Explicit free space. Then. Then generating compliant motions is equivalent to finding a hybrid position/force control strategy that guarantees the path of the robot to stay on the C surface. However. Several algorithms have been proposed in this class. the idea is equivalent to transforming the robot's hand holding the object into a point. with rotation. approximations must be made to generate the configuration space and the computations required increase significantly. This algorithm performs reasonably well when only translation is considered. . .. ONO Q. Penalty functions. On the other hand. Then. 1983b] proposed another method by representing the free space as overlapping generalized cones and the volume swept by the moving object as a function of its orientation. Brooks [1983a.GO CD. finding a collision-free path amounts to finding a path that does not intersect any of the expanded obstacles.'3 . It is a task configuration which allows only partial freedom in position.d a. SENSING. Conceptually. Current work has been based on using the task kinematics to constraint the legal lie on a C-surfacet (Mason [1981]) in the robot's configuration space.470 ROBOTICS: CONTROL. Then the derivatives of the total penalty function with respect to the configuration parameters are estimated and the collision-free path is obtained by following the local minima of the total penalty function. and expanding the obstacles in the workspace appropriately.. robot configurations to 9. . Generating the compliant motion is another difficult and important problem. task-level t A C-surface is defined on a C-frame.. VISION. The robot is guided and controlled by the program throughout the entire task with each statement of the program roughly corresponding to one action of the robot. 
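The grasp-selection procedure outlined above can be summarized in a few lines of Python. The geometric predicates passed in (is_reachable, collides, stability_score, error_margin) are placeholders for calls into a geometric modeling system; this is a schematic sketch of the generate-and-prune idea, not any particular published planner.

def choose_grasp(candidates, is_reachable, collides, stability_score, error_margin):
    """Pick a grasp configuration by pruning and then ranking the candidates.

    candidates        : grasp configurations proposed from the object geometry
                        (e.g., either side of parallel surfaces for a parallel-jaw gripper).
    is_reachable(g)   : True if the robot can reach grasp g.
    collides(g)       : True if grasping at g would collide with other objects.
    stability_score(g): larger when the object's center of mass lies well within the fingers.
    error_margin(g)   : larger when g is unlikely to collide in the presence of position errors.
    """
    usable = [g for g in candidates if is_reachable(g) and not collides(g)]
    if not usable:
        return None          # no reachable, collision-free grasp: the planner must fail or replan
    # Choose the most stable grasp, breaking ties by robustness to position errors.
    return max(usable, key=lambda g: (stability_score(g), error_margin(g)))

As the text notes, most of the real work hides inside the pruning and scoring predicates, which is where the geometric computation (and the cost) lies.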
Lozano-Perez [1982] proposed to represent the free space (space free of obstacles) in terms of the robot's configuration (configuration space). A C-frame is an orthogonal coordinate system in the cartesian space. finding the collision-free . In robot-oriented languages. AND INTELLIGENCE 2.. A total penalty function is computed by adding all the individual penalty functions and possibly a penalty term relating to minimum path. the penalty functions generally are difficult to specify. 3. This method involves defining penalty functions whose values depend on the proximity of the obstacles. an assembly task is explicitly described as a sequence of robot motions.. The frame is so chosen that the task freedoms are defined to be translation along and rotation about each of the three principal axes. their values increase. However. . as the robot gets closer to the obstacles.. many problems in task-level languages. None semaphores Yes No 1 Force. 5. Task-level languages are much easier to use. Arm Robot Pascal Stanford Arm Robot Pascal Object Robot-or Mix Robot object-level Concurrent Lisp. 3. tactile Sensing command Parallel processing Multiple robot References Position. 4. vision vision proximity IN PARALLEL IN PARALLEL Semaphores None No Yes 4 No 5 Yes 6 2 3 1. Bridgeport. were used to illustrate the characteristics of robotoriented languages. Darringer and Blasgen [1975]. Proximity. PL/I Language basis Robot PL/I . [1983]. Force. Conn. s4-' . rotation PL/I Implicit PL/I Force. General Electric Co. and hence no explicit robot motion is specified. Automation Systems A12 Assembly Robot Operator's Manual.. [1982].11a Comparison of various existing robot control languages Language Institute Robot controlled AL AML IBM AUTOPASS IBM IBM HELP GE Allegro JARS MAPLE IBM IBM Stanford PUMA Stanford Arm IBM JPL PUMA '+o.a Pascal Pascal Compiler or Both inter preter Interpreter Both Interpreter None Joints Pascal Compiler Interpreter Geometric Frame data ty pe Motion Frame specified by Pascal Control struct ure Aggregate Model Joints Pascal Frame Joints.. such as task planning. February 1982.. APL. P50VE025. I=i languages describe the assembly task as a sequence of positional goals of the objects rather than the motion of the robot needed to achieve these goals. AL and AML. We conclude that a robot-oriented language is difficult to use because it requires the user to program each detailed robot motion in completing a task. Craig [1980]. 6. 2. Two existing robot programming languages. frame Pascal None Translation. However. obstacle avoidance. Taylor et al. Lieberman and Wesley [1977].. object modeling.ROBOT PROGRAMMING LANGUAGES 471 Table 9. Position force COBEGIN. Mujtaba et al. 5. 1979. Gruver et al. Lisp Both Interpreter Frame Joints. [1981]. 4. Park [1981]. [1983]. must be solved before they can be used effectively. 2. AND INTELLIGENCE Table 9. Oldroyd [1981].472 ROBOTICS: CONTROL. REFERENCES Further reading in robot-level programming can be found in Bonner and Shin [1982]. vision None 0""'y Fortran '-+ z None Joints vii o00 . Popplestone et al. Further BCD CD. [1978. sensory information utilization.. [1981]. Takase et al. Takase et al. VISION. SENSING. User's Guide to VAL. second edition. We conclude this chapter with a comparison of various languages. frame If-then Position. frame Pascal Interpreter Frame Joints. Shimano [1979]. Geschke [1983]. Danbury. Synder [1985]. as shown in Table 9. Franklin and Vanderbrug [1982]. Oldroyd [1981]. version 11. trajectory planning. Unimation. and Taylor et al. 
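As a closing illustration of the free-space idea discussed above, the toy Python sketch below grows the obstacles in an occupancy grid by the moving object's radius, so that the object can then be treated as a point and an ordinary graph search (breadth-first here) finds a collision-free path. It handles translation only, with a circular object on a grid, and is meant only to convey the flavor of the configuration-space construction, not to reproduce Lozano-Perez's algorithm.

from collections import deque

def grow_obstacles(grid, radius):
    # Mark every free cell within `radius` cells of an obstacle as occupied.
    n, m = len(grid), len(grid[0])
    grown = [[grid[i][j] for j in range(m)] for i in range(n)]
    for i in range(n):
        for j in range(m):
            if grid[i][j]:
                for di in range(-radius, radius + 1):
                    for dj in range(-radius, radius + 1):
                        if 0 <= i + di < n and 0 <= j + dj < m:
                            grown[i + di][j + dj] = 1
    return grown

def find_path(grid, start, goal, radius):
    # Breadth-first search for a point robot in the grown (configuration-space) grid.
    cspace = grow_obstacles(grid, radius)
    n, m = len(cspace), len(cspace[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < n and 0 <= ny < m and not cspace[nx][ny] and (nx, ny) not in parent:
                parent[(nx, ny)] = cur
                queue.append((nx, ny))
    return None

grid = [[0] * 10 for _ in range(10)]
for k in range(3, 8):
    grid[5][k] = 1                      # a wall across the middle of the workspace
print(find_path(grid, (0, 0), (9, 9), radius=1))

Rotation is what makes the real construction expensive: each orientation changes the shape of the grown obstacles, which is why approximations are needed once rotation is allowed.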
If-then-else while-do Position Force Force. Inc. 1981]. 1980]. rotation Transform base Interpreter Fortran. Paul [1976.11b Language Institute Robot controlled MCL McDonnell Douglas Cincinnati Milacron T-3 PAL Purdue Stanford Arm Robot RAIL Automatix Customdesigned Cartesian arm Robot Pascal RPL SRI VAL Unimate PUMA PUMA Robot-or object-level Language basis Complier or interpreter Geometric data type Motion specified by Control structure Sensing command Parallel processing Multiple robot References Robot APT Robot Robot Basic Compiler Frame Translation. Park [1981]. 3. vision None Position. force Semaphores No 5 '"17 Frame Frame If-then-else INPAR Yes 1 None No 2 No 3 No 4 1. Lozano-Perez [1983a]. [1984]. and grasping configurations.11a and b.. Conn. Grossman and Taylor [1978]. 1982]) and utilize "knowledge" to perform reasoning (Brooks [1981]) and planning for robotic assembly and manufacturing. The program has to index the location for each pallet and signal the user when the tray is full. 9. 1983b]. YB. respectively. 9. In task planning.1 with an AML statement. and (xc. ZB). 9. 1983b] presented a configuration space approach for moving an object through a crowded workspace. yo.3 with a VAL program. 9. [1982]. Lozano-Perez [1982. Lewis and Bejczy [1973]. and Wesley et al.1 Write an AL statement for defining a coordinate frame grasp which can be obtained by rotating the coordinate frame block through an angle of 65 ° about the Y axis and then translating it 4 and 6 inches in the X and Y axes. 9. `C7 . peg A has two disks of different sizes. [1981. Each disk has an equal thickness of 1 inch. with disks having smaller diameters always on the top of disks with larger diameters. zA ). Brooks and Lozano-Perez [1983]. Lo Xo 9. Assume that the locations of the feeder and tray are known. 9.3 with an AUTOPASS program. (xA. Darringer and Blasgen [1975]. 9. whose coordinate frames are. ONO CAD Oho -ate CAS 'C3 -'- iii PROBLEMS 9. are at a known location from the reference coordinate frame (xo. 9. Lozano-Perez and Wesley [1979]. Initially. 1982]. Future robot programming languages will incorporate techniques in artificial intelligence (Barr et al. Lieberman and Wesley [1977].5 Repeat Prob. yc. 9. 9.ROBOT PROGRAMMING LANGUAGES 473 reading in task-level programming can be found in Binford [1979].4 Repeat Prob.2 Repeat Prob. [1975].3 with an AML program. You are asked to write an AL program to control a robot equipped with a special suction gripper (to pick up the disks) to move the two disks from peg A to peg C so that at any instant of time disks of smaller diameters are always on the top of disks with larger diameters. Finkel et al. A. [1981] presented a homogeneous transformation matrix equation in describing a task sequence to a manipulator. as shown in the figure below. yA. Lieberman and Wesley [1977]. zo). respectively.8 Repeat Prob. Various obstacle avoidance algorithms embedded in the programming languages can be found in Brooks [1983a. 9.6 Repeat Prob. and C. B. (XB.7 with an AML program. Lozano-Perez [1983a]. [1980]. Takase et al. [1981.7 Tower of Hanoi problem. zc). Three pegs. and Mujtaba et al.3 Write an AL program to palletize nine parts from a feeder to a tray consisting of a 3 x 3 array of bins. Languages for describing objects can be found in Barr et al. 2 STATE SPACE SEARCH One method for finding a solution to a problem is to try out various possible approaches until we happen to produce the desired solution. 
A prob474 CDD CAD `J'.1 INTRODUCTION A basic problem in robotics is planning motions to solve some prespecified task. we imagine a world of several labeled blocks resting on a table or on each other and a robot consisting of a TV camera and a moveable arm and hand that is able to pick up and move blocks. Timaeus. Such an attempt involves essentially a trial-and-error search.' .-t CDRy b/) . or configuration. O°. A plan is. To discuss solution methods of this sort. planning means deciding on a course of action before acting. given some initial situation. Robot actions change one state.CHAPTER TEN ROBOT INTELLIGENCE AND TASK PLANNING That which is apprehended by intelligence and reason is always in the same state." for example. This action synthesis part of the robot problem can be solved by a problem-solving system that will achieve some stated goal. but that which is conceived by opinion with the help of sensation and without reason. achieve those actions. In some problems the robot is a mobile vehicle with a TV camera that performs tasks such as pushing objects from place to place through an environment containing other objects. In the "blocks world. it is helpful to introduce the notion of problem states and operators. Research on robot problem solving has led to many ideas about problemsolving systems in artificial intelligence. Here. and then controlling the robot as it executes the commands necessary to. t3. In a typical formulation of a robot problem we have a robot that is equipped with sensors and a set of primitive actions that it can perform in some easy-to-understand world. a representation of a course of action for achieving the goal. in the "Dialogues of Plato" 10. thus. is always is a process of becoming and perishing and never really is. -ti (^p con 10. we briefly introduce several basic methods in problem solving and their applications to robot planning. In this chapter. of the world into another. A graph representation of the state space search is illustrated in Fig. The only operator that the robot can use is MOVE X from Y to Z.1 Introductory Examples Before proceeding with a discussion of graph search techniques. (3a . 10. and so on until the goal state is produced.ROBOT INTELLIGENCE AND TASK PLANNING 475 lem state.> .. The robot is asked to change the initial state to a goal state in which the three blocks are stacked with block A on top. then applies operators to these. Consider that a robot's world consists of a table T and three blocks.2. there must be nothing on it. The initial state of the world is that blocks A and B are on the table. which moves object X from the top of object Y onto object Z. A solution to a problem is a sequence of operators that transforms an initial state into a goal state. We can simply use a graphical description like the one in Fig. and C. Methods of organizing such a search for the goal state are most conveniently described in terms of a graph representation.DD Figure 10. Y. B. we obtain a state space i. A. the object to be moved. 10. and (2) if Z is a block.1 as the state CCD representation.1 A configuration of robot and blocks.PP. In order to apply the operator. v°> '1. or the state space.2. is a particular problem situation or configuration. The nodes of the graph are linked together by arcs that correspond to the operators. we consider briefly some basic examples as a means of introducing the reader to the concepts discussed in this chapter. the operator is not to be used to generate the same operation more than once).Z) . 
and block C on the bottom. be a block with nothing on top of it. transforms the state into another state. It is useful to imagine the space of states reachable from the initial state as a graph containing nodes corresponding to the states. and block C is on top of block A (see Fig. or simply state. The operator MOVE X from Y to Z is represented by MOVE(X. The set of all possible configurations is the space of problem states. it is required that (1) X.1). ono Blocks World. 'ox . An operator. . 10. `c° 10. A solution to a problem could be obtained by a search process that first applies operators to the initial state to produce new states.. block B in the middle. when applied to a state. If we remove the dotted lines in the graph (that is. T.B) Path Selection. Suppose that we wish to move a long thin object A through a crowded two-dimensional environment as shown in Fig. A) MOVE (C. a) where x = horizontal coordinate of the object 1 x<5 y<3 y = vertical coordinate of the object 1 a = orientation of the object 10 1 if object A is parallel to x axis if object A is parallel to y axis Both position and orientation of the object are quantized. VISION. C) MOVEA C (B.T. AND INTELLIGENCE C A B Initial State C DUB MOVE (C. we may choose the state space representation (x. map motions of the object once it is grasped by a robot arm. MOVE(A. C) B B MOVE (B. 10.2 State space search graph. It is easily seen from Fig. 10.C). B) MOVE (C. B. T.A. MOVE(B.2 that a solution that the robot can obtain consists of the following operator sequence: MOVE(C. T.3. T) MOVE (C. To. The operators or robot . SENSING. A) MOVE (A.476 ROBOTICS: CONTROL. y.T). T. T. T. B) B MOVE (A. T. T. search tree. B) B A FA B C C B Goal State A N B Figure 10. 5 and visualized on a sketch of the task site in Fig. depending on whether the monkey is on top of the box or not. shown in Fig. Thus the initial state is (2. are able to save two rotations by util- c'. (3. Let the object A be initially at location (2. izing a little more distance. reveals that these paths. W = horizontal position of the monkey x = 1 or 0. where .4. A monkeyt is in a room containing a box and a bunch of bananas (Fig.z) can be selected as the state representation.0).1) and the goal state is t=. 1:90 There are two equal-length solution paths.2). oriented parallel to the y axis.6.Y. . and each "rotate" of length 3. 10. by initially moving the object away from the goal state. 10.2. 10. respectively t It is noted that the monkey could be a mobile robot. 10.ROBOT INTELLIGENCE AND TASK PLANNING 477 3 3 4 i Figure 10.7). Monkey-and-Bananas Problem.3) and oriented parallel to the x axis. We assume for illustration that each "move" is of length 2. Closer examination.3 Physical space for Example 2.fl cam. commands are: MOVE t x direction one unit MOVE t y direction one unit ROTATE 900 The state space appears in Fig. and the goal is to move A to (3.-0 _-r .3.x. however. How can the monkey get the bananas? The four-element list (W. The bananas are hanging from the ceiling out of reach of the monkey. These paths may not look like the most direct route. i.. Y. SENSING.Y. i I -lL'-. Monkey goes to horizontal position U.z) That is.4.478 ROBOTICS: CONTROL. . state (W. 10. VISION.z) '" (U.5 Solution to the graph of Fig. I I I I I . .---4. i I . x = x coordinate of object Y = horizontal position of the box z = 1 or 0.z) by the applying operator goto(U). 10. goto( U). AND INTELLIGENCE = i coordinate of object End Stuff T/ ex = orientation of object 0 5 L Figure 10. .0. 
depending on whether the monkey has grasped the bananas or not.Y.O.O. respectively The operators in this problem are: 1.3.z) can be transformed into (U.4 Graph of the problem in Fig. goto(U) (W.O. Figure 10. or in the form of a production rule.Y. the monkey must be at the same position W as the box.ROBOT INTELLIGENCE AND TASK PLANNING 479 L I 2.z) . W. climbbox. Monkey climbs on top of the box. W.7 Monkey-and-bananas problem.V. 3.O. the monkey should be at the same position W as the box. Monkey pushes the box to horizontal position V.O. or (W.1.W. Such a condition imposed on the applicability of an operator is called the precondition of the production rule. or climbbox pushbox(V) (W. in order to apply the operator pushbox(V).z) It should be noted from the left side of the production rule that. A C B Figure 10. .z) -' (W.z) It should be noted that. in order to apply the operator climbbox.O. but not on top of it. pushbox(V).(V. but not on top of it. or grasp (C. pushbox(C). in rule 2. '-. and grasp. VISION. AND INTELLIGENCE 4. Monkey grasps the bananas. they are goto(U). resulting in the next state (U. It should be noted that in order to apply the operator grasp.l. Now three operators are applicable.O. Continuing to apply all operators applicable at every state. the set of goal states is described by any list whose last element is 1. we produce the state space in terms of the graph representation shown in Fig.C. the monkey and the box should both be at position C and the monkey should already be on the top of the box.B.O).I.O). It is noted that both the applicability and the effects of the operators are expressed by the production rules. climbbox.8.B.C. The effect of the operator is that the monkey has pushed the box to position V. Figure 10. For example. . SENSING. pushbox(V) and climbbox (if U = B). In this formulation. the operator pushbox(V) is only applicable when its precondition is satisfied.8 Graph representation for the monkey-and-bananas problem.O) (C.O. The only operator that is applicable is goto (U).l) CAD where C is the location on the floor directly under the bananas. 10.480 ROBOTICS: CONTROL. Let the initial state be (A. It can be easily seen that the sequence of operators that transforms the initial state into a goal state consists of goto(B). grasp. Step 6. and put it on CLOSED. relational indexed file structure. Step 7.ROBOT INTELLIGENCE AND TASK PLANNING 481 10. For each member of M that was already on OPEN or CLOSED.8. A database that contains the information relevant to the particular task. Create a search graph G consisting solely of the start node s. Put s on a list called OPEN. Step 4. a graphsearch control strategy can be considered as a means of finding a path in a graph from a (start) node representing the initial database to one (goal node) representing a database that satisfies the termination (or goal) condition of the production system. this database may be as simple as a small matrix of numbers or as complex as a large. 2. decide whether or . such as the one shown in Fig.8 is generated by the control strategy. One way to describe the search process is to use production systems. Depending on the application. Step 2. 10. Create a list called CLOSED that is initially empty.2. The various databases produced by rule applications are actually represented as nodes in the graph. Expand node n. Step 3. fl. If n is a goal node. a solution path from the initial state to a goal state can be easily obtained by inspection. 
Install these members of M as successors of n in G.3 Step 1. Select the first node on OPEN. Add these members of M to OPEN. 10. A general graph-search procedure can be described as follows. generating the set M of its successors that are not ancestors of n. Call this mode n. a graph such as the one shown in Fig. exit with failure. Thus. and a right side that describes the action to be performed if the rule is applied. remove it from OPEN. A set of rules operating on the database. Establish a pointer to n from those members of M that were not already in OPEN or CLOSED. LOOP: if OPEN is empty. Step 5.2 Graph-Search Techniques For small graphs. In terms of production system terminology. 3. Each rule consists of a left side that determines the applicability of the rule or precondition. exit successfully with the solution obtained by tracing a path along the pointers from n to s in G (pointers are established in step '-' ate) S3. For a more complicated graph a formal search process is needed to move through the state (problem) space until a path from an initial state to a goal state is found. A control strategy that specifies which rules should be applied and ceases computation when a termination condition on the database is satisfied. A production system consists of: 1. 7). a. Application of the rule changes the database. goal nodes should be put at the very beginning of OPEN.. the members of M are not already on either OPEN or CLOSED. that is.D of the nodes. then none of the successors generated in step 6 has been generated previously.. Reorder the list OPEN. T To promote earlier termination. . Nodes of equal depth are ordered arbitrarily. but always in favor of goal nodes. For each member of M already on CLOSED. The second type of blind search orders the nodes on OPEN in descending order of their depth in the search tree. One important method uses a real-valued evaluation function to compute the "promise" (. they may already be on OPEN or CLOSED. and the task-dependent information used is called heuristic information. a depth bound is set. Step 8. The deepest nodes are put first in the list. The search that results from such an ordering is called depth-first search. To prevent the search process from running away along some fruitless C). Ties among the nodes are resolved arbitrarily. If no heuristic information from the problem domain is used in ordering the nodes on OPEN. Thus. AND INTELLIGENCE not to redirect its pointer to n. Let the evaluation function f at any node n be "'. providing that a path exists. The blind search methods described above are exhaustive search techniques for finding paths from the start node to a goal node. either according to some arbitrary criterion or according to heuristic merit. . Go to LOOP. In step 8 of the graph search procedure. . and t If the graph being searched is a tree. A useful best-first search algorithm is the so-called A* algorithm described below. each member of M is added to OPEN and is installed in the search tree as successors of n. VISION. Step 9. ors f(n) = g(n) + h(n) where g(n) is a measure of the cost of getting from the start node to node n. The choice of evaluation function critically determines search results. some arbitrary criterion must be used in step 8.O coo path forever. For many tasks it is possible to use task-dependent information to help reduce the search. If the graph being searched is not a tree. GAD ^O. The first type of blind search procedure orders the nodes on OPEN in increasing order of their depth in the search tree. 
SENSING. In this case. The resulting search procedure is called uninformed or blind. It has been shown that breadth-first search is guaranteed to find a shortest-length path to a goal node.$ The search that results from such an ordering is called breadth-first search.482 ROBOTICS: CONTROL. No node whose depth in the search tree exceeds this bound is ever generated. This class of search procedures is called heuristic or best-first search. it is possible that some of the members of M have already been generated. Nodes on OPEN are ordered in increasing order of their values of the evaluation function. decide for each of its descendants in G whether or not to redirect its pointerl. heuristic information can be used to order the nodes on OPEN so that the search expands along those sectors of the graph thought to be the most promising.. If so. Set CLOSED to the empty list. OLD points to its successors. f(n) represents an estimate of the cost of getting from the start node to a goal node along the path constrained to go through node n. Each successor in turn points to its successors. and so forth. If SUCCESSOR was not on OPEN. Now we must decide whether OLD's parent link should be reset to point to BESTNODE. by comparing their g values. call that node OLD. Remove it from OPEN.) For each such SUCCESSOR. pick the node on OPEN with the lowest f value. and add OLD to the list of BESTNODE's successors. and add OLD to the list of BESTNODE's successors. So see whether it is cheaper to get to OLD via its current parent or to SUCCESSOR via BESTNODE. Otherwise. This is a bit tricky. That is. Set SUCCESSOR to point back to BESTNODE. It should be if the path we have just found to SUCCESSOR is cheaper than the current best path to OLD (since SUCCESSOR and OLD are really the same node). its h value to whatever it is. report failure. d. See if BESTNODE is a goal node. If we have just found a better path to OLD. the node on CLOSED OLD. generate the successors of BESTNODE. or the path that has been created between the start node and BESTNODE if we are interested in the path). then )F. appropriately. Since this node already exists in the graph. Compute g(SUCCESSOR) = g(BESTNODE) + cost of getting from 0 Zoo a0' BESTNODE to SUCCESSOR. Start with OPEN containing only the start node. it has already been generated but not processed). Place it on CLOSED. The A* Algorithm Step 1. (First we need to see if any of them have already been generated. If SUCCESSOR is cheaper. If OLD is cheaper (or just reset OLD's parent link to point to BESTNODE. See if SUCCESSOR is the same as any node on OPEN (i. Check to see if the new path or the old path is better just as in step 2c. record the new cheaper path in g(OLD). C/] . see if it is on CLOSED.ROBOT INTELLIGENCE AND TASK PLANNING 483 h(n) is an estimate of the additional cost from node n to a goal node. Until a goal node is found. exit and report a solution (either BESTNODE if all we want is the node. ti. we can throw SUCCESSOR away. repeat the following procedure: If there are no nodes on OPEN. but do not set BESTNODE to point to them yet. c.e. Otherwise. b. If so. until each branch terminates with a node that 08. we must propagate the improvement to OLD's successors. and its f value to h + 0. If so. These back links will make it possible to recover the path once a solution is found.. then we need do nothing. Step 2. and set the parent link and g and f values call (IQ as cheap). Set that node's g value to 0. or h. do the following: a. Call it BESTNODE. and update f(OLD). 
reset the parent and continue propagation. This condition is easy to check for. stop the propagation.-: O. Of course. VISION. from the initial states. To reason forward.`S. it is important that h(n) be a measure of the cost of getting from node n to a goal node. establishing sub- a. There are two directions in which such a search could proceed: (1) forward. Compute f(SUCCESSOR) = g(SUCCESSOR) + h(SUCCESSOR). CAD . o. But it is possible that with the new value of g being propagated downward.3 PROBLEM REDUCTION Another approach to problem solving is problem reduction. then its g value already reflects the better path of which it is part. e..CONTROL. So the propagation may stop here. If not. changing each node's g value (and thus also its f value). the path we are following may become better than the path through the current parent. the right sides are matched against the current state and the left sides are used to generate new nodes representing new goal states to be achieved. see if its parent points to the node we are coming from. So compare the two. If so. The main idea of this approach is to reason backward from the problem to be solved. The rules in the production system model can be used to reason forward from the initial states and to reason backward from the goal states. 000 0-0 . and add it to the list of BESTNODE's successors. the left' sides or the preconditions are matched against the current state and the right side (the results) are used to generate new nodes until the goal is reached. If the path we are propagating through is now better. The objective of a search procedure is to discover a path through a problem space from an initial state to a goal state. To reason backward. terminating each branch when you reach either a node with no successors or a node to which an equivalent or better path has already been found..fl caw . By describing a search process as the application of a set of rules. then put it on OPEN. Each node's parent link points back to its best known parent. Note that because g(n) and h(n) must be added. It is easy to see that the A* algorithm is essentially the graph search algorithm using f(n) as the evaluation function for ordering nodes. 10. from the goal states. CAD . If SUCCESSOR was not already on either OPEN or CLOSED. SENSING.484 ROBOTICS.. 0°° (DD CD.. and (2) backward. another possibility is to work both forward from the initial state and backward from the goal state simultaneously until two paths meet somewhere in between. This strategy is called bidirectional search. If the path through the current parent is still better. it is easy to describe specific search algorithms without reference to the direction of the search. As we propagate down to a node. do a depth-first traversal of the tree starting at OLD. AND INTELLIGENCE either is still on OPEN or has no successors. This continues until one of these goal states is matched by a initial state. So to propagate the new cost downward. continue the propagation. C.B. One way of selecting bl) 'o" °f° ryas problem-reduction operators is through the use of a difference. the difference for (S. 10. finally. the problem is solved and there is no difference.ROBOT INTELLIGENCE AND TASK PLANNING 485 Figure 10.1 ..G) is a partial list of reasons why the goal test defining the set G is failed by the member of S.0. O. and D are called AND nodes. an OR arc will be used. A problemreduction operator transforms a problem description into a set of reduced or successor problem descriptions. 
where S is the set of starting states. G) . if problem B can be solved by solving any one of the subproblems E and F. 10. Here.10.) s. The reduction of problem to alternative sets of successor problems can be conveniently expressed by a graphlike structure.9 An AND/OR graph.9. and D. °O° 0. F is the set of operators.0). we can suppress the symbol F and denote the problem simply by ({ (A. (If some member of S is in G.F.B. 0. C.2 are for OR graphs through which we want to find a single path from the start node to a goal node.o a>?? 'C3 o. Loosely speak- ing. 0) } . On the other hand. `'a Example: An AND/OR graph for the monkey-and-bananas problem is shown in Fig. Since the operator set F does not change in this problem and the initial state is (A. the original problem is reduced to a set of trivial primitive problems whose solutions are obvious. an AND arc will be marked on the incoming arcs of the nodes B. The nodes B.g p-+' (7. Suppose problem A can be solved by solving all of its three subproblems B. and D. Each of these produces an alternative set of subproblems. For a given problem description there may be many reduction operators that are applicable. Thus it again requires a search process. so we may have to try several operators in order to produce a set whose members are all solvable. the problem configuration is represented by a triple (S. 0. problems and sub-subproblems until.G). Some of the subproblems may not be solvable.F. coo °o. 10. and G the set of goal states. It is easily seen that the search methods discussed in Sec. C.. These relationships can be shown by the AND/OR graph shown in Fig. however. F = {f1. This problem is also primitive since (B.0)}.0) fails to satisfy the goal test is that the last element is not 1. Using f4 to reduce the initial problem. p-^ The difference is that the monkey is not at B. This process of completing the solution of problems $=.). Since ({(A.Gf. 2.Gf.2.) and ({f4(sI)}G).B. grasp}. The operator relevant to reduce this difference is f4 = grasp. is the set of state descriptions to which the operator f4 is applicable and sI is that state in Gf.).) must be solved first.0.0)}. This operator is then used to reduce the problem to a pair of subproblems ({(A. VISION.486 ROBOTICS: CONTROL. one of the nodes.).Gf. we first calculate its difference. because (1) the box is not at C. SENSING.B. pushbox(V). 10. and f3 = climbbox.f4} _ {goto(U).).0.0) is not in Gf.0)}.0.0. Applying operator f2 results in the subproblems ({(A. The terminal nodes are solved nodes since they are associated with primitive problems.0)}.0.0)}. is obtained as a consequence of solving the first subproblem.0.Gf.B. The reason that the list (A. .f2.0. then it is a solved node if and only if at least one of its successors is solved. a solution graph from node n to a set of nodes N of an AND/OR graph is analomoo. The objective of the search process carried out on an AND/OR graph is to show that the start node is solved. respectively.) and (fI (s1II ).0)}. The definition of a solved node can be given recursively as follows: 1. 3.Gf. where sII eGf. corresponds to the original problem description. f2 = pushbox(C). and (3) the monkey is not on the box. we obtain the followCC.B. In an AND/OR graph.1. then it is a solved node if and only if all of its successors are solved. and the relevant operator is fI = goto(B).B. ing pair of subproblems: ({(A.Gf.0. `ti 5.B. Roughly speaking. Now the first of these problems is primitive.B. 
Those nodes in the graph corresponding to primitive problem descriptions are called terminal nodes.Gf.0)}.B. AND INTELLIGENCE From the example in Sec. we calculate its difference.0. If a nonterminal node has OR successors. and f2 is applicable to solve this problem. The state described by (A.B.0) is in the domain of f2. ti. we calculate the difference for the initial problem. The task of the production system or the search process is to find a solution graph from the start node to the terminal nodes. The operators relevant to reduce these differences are. A solution graph is the subgraph of solved nodes that demonstrates that the start node is solved.B. 0rCAD fI = goto(C). climbbox.B.B. called the state node.0) so the second problem becomes ({(B.) and (f2(s1I). where Gf. its difference is zero since (A. First.0) is in the domain of fI and fI is applicable to solve this problem. If a nonterminal node has AND successors.f3.Gf..Gf.). To solve the problem ({(A. obtained as a consequence of solving ({(A. Note that fI (sIII ) = (B. (2) the monkey is not at C.0.0.0. generated earlier is continued until the initial problem is solved. 487 .O}.0. Primitive (IJ'1(sI11)1.O.) Primitive Primitive Primitive Figure 10.({A.G).B.0}.O.Gf) s111eGf.) ({C.) .G/.G1.C.B.0}.10 AND/OR graph for monkey-and-bananas problem.513 E G).G).O}.) Sill e GJ5 ({/3(5I21>}.O.G) ({A. ({A.B. Until INIT is labeled SOLVED or until INIT's h value becomes greater 'i3 than FUTILITY. This is equivalent to saying that NODE is not solvable. Initialize S to NODE. and so on until eventually (In other words. AND INTELLIGENCE every successor thus produced is an element of N. Trace the marked arcs from INIT and select for expansion one of the as yet unexpanded nodes that occurs on this path. repeat the following procedure: (1) Select from S a node none of whose descendants in G occurs in S. b. SENSING. we process it before processing any of its ancestors. (4) Mark CURRENT SOLVED if all of the nodes connected to it through the new marked arc have been labeled SOLVED. Generate the successors of NODE. (3) Mark the best path out of CURRENT by marking the arc that had the minimum cost as computed in the previous step. then for each one (called SUCCESSOR) that is not also an ancestor of NODE do the following: (1) Add SUCCESSOR to G. From each successor node to which this arc is directed. Propagate the newly discovered information up the graph by doing the following: Let S be a set of nodes that have been marked SOLVED or whose h values have been changed and so need to have values propagated back to their parents. label it SOLVED and assign it an h value of 0. and remove it from S. v°) cc" . Assign as CURRENT's new h value the minimum of the costs just computed for the arcs emerging from it. (2) If SUCCESSOR is a terminal node. (2) Compute the cost of each of the arcs emerging from CURRENT.) Compute h(INIT). The cost of each arc is equal to the sum of the h values of each of the nodes at the end of the arc plus whatever the cost of the arc itself is. make sure that for every node we are going to process. If there are successors. but with the ability to handle the AND arc appropriately. we need an algorithm similar to A*. we continue to select one outgoing arc. gous to a path in an ordinary graph. In order to find solutions in an AND/OR graph. Step 2. The AO* Algorithm Step 1. c. Let G consist only of the node representing the initial state. (3) If SUCCESSOR is not a terminal node.. 
In order to find solutions in an AND/OR graph, we need an algorithm similar to A*, but with the ability to handle the AND arcs appropriately. Such an algorithm for performing heuristic search of an AND/OR graph is the so-called AO* algorithm. Each node in the graph is associated with an h value, an estimate of the cost of a path from that node to a set of solution nodes; h serves as the estimate of the goodness of a node. A quantity FUTILITY is also needed; it should be selected to correspond to a threshold such that any solution with a cost above it is too expensive to be practical, even if it could ever be found.

The AO* Algorithm

Step 1. Let G consist only of the node representing the initial state. (Call this node INIT.) Compute h(INIT).

Step 2. Until INIT is labeled SOLVED or until INIT's h value becomes greater than FUTILITY, repeat the following procedure:

a. Trace the marked arcs from INIT and select for expansion one of the as yet unexpanded nodes that occurs on this path. Call the selected node NODE.
b. Generate the successors of NODE. If there are none, then assign FUTILITY as the h value of NODE; this is equivalent to saying that NODE is not solvable. If there are successors, then for each one (called SUCCESSOR) that is not also an ancestor of NODE do the following:
(1) Add SUCCESSOR to G.
(2) If SUCCESSOR is a terminal node, label it SOLVED and assign it an h value of 0.
(3) If SUCCESSOR is not a terminal node, compute its h value.
c. Propagate the newly discovered information up the graph by doing the following. Let S be a set of nodes that have been marked SOLVED or whose h values have been changed and so need to have values propagated back to their parents. Initialize S to NODE. Until S is empty, repeat the following procedure:
(1) Select from S a node none of whose descendants in G occurs in S. (In other words, make sure that for every node we are going to process, we process it before processing any of its ancestors.) Call this node CURRENT, and remove it from S.
(2) Compute the cost of each of the arcs emerging from CURRENT. The cost of each arc is equal to the sum of the h values of the nodes at the end of the arc plus whatever the cost of the arc itself is. Assign as CURRENT's new h value the minimum of the costs just computed for the arcs emerging from it.
(3) Mark the best path out of CURRENT by marking the arc that had the minimum cost as computed in the previous step.
(4) Mark CURRENT SOLVED if all of the nodes connected to it through the new marked arc have been labeled SOLVED.
(5) If CURRENT has been marked SOLVED or if the cost of CURRENT was just changed, then its new status must be propagated back up the graph; so add to S all the ancestors of CURRENT.

It is noted that, rather than the two lists OPEN and CLOSED that were used in the A* algorithm, the AO* algorithm uses a single structure G, representing the portion of the search graph that has been explicitly generated so far. Each node in the graph points both down to its immediate successors and up to its immediate predecessors. The g value (the cost of getting from the start node to the current node) is not stored as in the A* algorithm; instead, if the estimated cost of a solution becomes greater than the value of FUTILITY, the search is abandoned. A breadth-first algorithm can be obtained from the AO* algorithm by assigning h = 0.
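The bookkeeping of step 2c is the part that differs most from ordinary graph search. The following Python sketch is a much simplified, non-incremental illustration of those rules only: it evaluates a fully specified, acyclic AND/OR graph rather than growing G node by node, and the graph encoding and example nodes are assumptions made here, not taken from the text.

    FUTILITY = 1000.0

    def revise(node, graph, h, solved, memo=None):
        """Return (cost, is_solved) for node, choosing its cheapest outgoing connector."""
        if memo is None:
            memo = {}
        if node in memo:
            return memo[node]
        connectors = graph.get(node, [])             # one connector per AND (or OR) arc
        if not connectors:                           # terminal or unexpanded node
            result = (0.0, True) if solved.get(node, False) else (h.get(node, FUTILITY), False)
            memo[node] = result
            return result
        best = (FUTILITY, False)
        for arc_cost, successors in connectors:
            parts = [revise(s, graph, h, solved, memo) for s in successors]
            cost = arc_cost + sum(c for c, _ in parts)    # step 2c(2)
            ok = all(flag for _, flag in parts)           # step 2c(4)
            if cost < best[0]:
                best = (cost, ok)                         # step 2c(3): the marked arc
        memo[node] = best
        return best

    # Toy example: node A is solved only if both B and C are solved (an AND arc of cost 1).
    graph = {"A": [(1.0, ["B", "C"])], "B": [], "C": []}
    print(revise("A", graph, h={}, solved={"B": True, "C": True}))   # (1.0, True)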
10.4 USE OF PREDICATE LOGIC

Robot problem solving requires the capability for representing, retrieving, and manipulating sets of statements. The language of logic, or more specifically the first-order predicate calculus, can be used to express a wide variety of statements. The logical formalism is appealing because it immediately suggests a powerful way of deriving new knowledge from old, namely mathematical deduction: in this formalism we can conclude that a new statement is true by proving that it follows from the statements that are already known to be true. Thus the idea of a proof, developed in mathematics as a rigorous way of demonstrating the truth of an already believed proposition, can be extended to include deduction as a way of deriving answers to questions and solutions to problems. Readers who want a more complete and formal presentation of the material in this section should consult the book by Chang and Lee [1973]; readers who are unfamiliar with propositional and predicate logic may want to consult a good introductory logic text before reading the rest of this chapter.

Let us first explore the use of propositional logic as a way of representing knowledge. Propositional logic is appealing because it is simple to deal with and a decision procedure for it exists. We can easily represent real-world facts as logical propositions written as well-formed formulas (wffs) in propositional logic, as shown in the following:

It is raining.   RAINING
It is sunny.   SUNNY
It is foggy.   FOGGY
If it is raining then it is not sunny.   RAINING ⇒ ¬SUNNY

Using these propositions we could, for example, deduce that it is not sunny if it is raining. But very quickly we run up against the limitations of propositional logic. Suppose we want to represent the obvious fact stated by the sentence

John is a man

We could write JOHNMAN. But if we also wanted to represent

Paul is a man

we would have to write something such as PAULMAN, which would be a totally separate assertion, and we would not be able to draw any conclusions about similarities between John and Paul. It would be much better to represent these facts as MAN(JOHN) and MAN(PAUL), since now the structure of the representation reflects the structure of the knowledge itself. We are in even more difficulty if we try to represent the sentence

All men are mortal

because now we really need quantification, unless we are willing to write separate statements about the mortality of every known man. So we appear to be forced to move to predicate logic as a way of representing knowledge, because it permits representations of things that cannot reasonably be represented in propositional logic; and a major motivation for choosing to use logic at all is that if we use logical statements as a way of representing knowledge, we then have available a good way of reasoning with that knowledge.
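As a concrete illustration of the difference, the short Python sketch below (the data structures and names are assumptions made for illustration, not part of the text) stores facts in the structured MAN(JOHN) form and applies the single quantified rule "all men are mortal" to every matching individual, something the flat symbols JOHNMAN and PAULMAN cannot support.

    # Facts as (predicate, argument) pairs: the structured MAN(JOHN) style of representation.
    facts = {("MAN", "JOHN"), ("MAN", "PAUL")}

    def apply_rule(facts, antecedent, consequent):
        """For all x: antecedent(x) implies consequent(x); add every derivable fact."""
        derived = {(consequent, x) for (p, x) in facts if p == antecedent}
        return facts | derived

    facts = apply_rule(facts, "MAN", "MORTAL")       # the rule "all men are mortal"
    print(("MORTAL", "JOHN") in facts, ("MORTAL", "PAUL") in facts)   # True True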
In predicate logic we can likewise represent real-world facts as statements written as wffs, but the elementary components are now predicate symbols, function symbols, variable symbols, and constant symbols, and the elementary building blocks are atomic formulas composed of predicate symbols and terms. A predicate symbol is used to represent a relation in a domain of discourse. For example, to represent the fact "Robot is in room r1" we might use the simple atomic formula

INROOM(ROBOT, r1)

In this atomic formula, ROBOT and r1 are constant symbols. A constant symbol is the simplest kind of term and is used to represent objects or entities in a domain of discourse. Variable symbols are terms also, and they permit us to be indefinite about which entity is being referred to. Function symbols denote functions in the domain of discourse; for example, the function symbol mother can be used to denote the mapping between an individual and his or her female parent. We might use the following atomic formula to represent the sentence "John's mother is married to John's father":

MARRIED[father(JOHN), mother(JOHN)]

An atomic formula has value T (true) just when the corresponding statement about the domain is true, and it has value F (false) just when the corresponding statement is false. Thus INROOM(ROBOT, r1) has value T, and INROOM(ROBOT, r2) has value F.

Quantifiers allow statements about all, or about some, of the entities in the domain. If a formula P(x) has value T for all possible values of x, this property is represented by adding the universal quantifier (∀x) in front of P(x); if P(x) has value T for at least one value of x, this property is represented by adding the existential quantifier (∃x) in front of P(x). For example, the sentence "All robots are gray" might be represented by

(∀x)[ROBOT(x) ⇒ COLOR(x, GRAY)]

and the sentence "There is an object in room r1" might be represented by

(∃x)INROOM(x, r1)

We can combine atomic formulas to form more complex wffs by using connectives such as ∧ (and), ∨ (or), and ⇒ (implies). Formulas built by connecting other formulas by ∧'s are called conjunctions; formulas built by connecting other formulas by ∨'s are called disjunctions. The connective ⇒ is used for representing "if-then" statements, as in the sentence "If the monkey is on the box, then the monkey will grasp the bananas":

ON(MONKEY, BOX) ⇒ GRASP(MONKEY, BANANAS)

The symbol ¬ (not) is used to negate the truth value of a formula; that is, it changes the value of a wff from T to F and vice versa. The (true) sentence "Robot is not in room r2" might be represented as ¬INROOM(ROBOT, r2). If P and Q are two wffs, the truth values of composite expressions made up of these wffs are given by the following table:

P   Q   |  P ∨ Q   P ∧ Q   P ⇒ Q   ¬P
T   T   |    T       T       T      F
T   F   |    T       F       F      F
F   T   |    T       F       T      T
F   F   |    F       F       T      T

If the truth values of two wffs are the same regardless of their interpretation, the two wffs are said to be equivalent. Using the truth table, we can establish the following equivalences:

¬(¬P)                 is equivalent to   P
P ∨ Q                 is equivalent to   ¬P ⇒ Q
deMorgan's laws:      ¬(P ∨ Q) is equivalent to ¬P ∧ ¬Q;   ¬(P ∧ Q) is equivalent to ¬P ∨ ¬Q
Distributive laws:    P ∧ (Q ∨ R) is equivalent to (P ∧ Q) ∨ (P ∧ R);   P ∨ (Q ∧ R) is equivalent to (P ∨ Q) ∧ (P ∨ R)
Commutative laws:     P ∧ Q is equivalent to Q ∧ P;   P ∨ Q is equivalent to Q ∨ P
Associative laws:     (P ∧ Q) ∧ R is equivalent to P ∧ (Q ∧ R);   (P ∨ Q) ∨ R is equivalent to P ∨ (Q ∨ R)
Contrapositive law:   P ⇒ Q is equivalent to ¬Q ⇒ ¬P

In addition, ¬(∃x)P(x) is equivalent to (∀x)[¬P(x)], and ¬(∀x)P(x) is equivalent to (∃x)[¬P(x)].

In predicate logic there are rules of inference that can be applied to certain wffs and sets of wffs to produce new wffs; such derived wffs are called theorems, and the sequence of inference-rule applications used in the derivation constitutes a proof of the theorem. One important inference rule is modus ponens, the operation that produces the wff W2 from wffs of the form W1 and W1 ⇒ W2. Another rule of inference, universal specialization, produces the wff W(A) from the wff (∀x)W(x), where A is any constant symbol. Using modus ponens and universal specialization together, for example, from the wffs (∀x)[W1(x) ⇒ W2(x)] and W1(A) we can produce the wff W2(A). In artificial intelligence, some problem-solving tasks can be regarded as the task of finding a proof for a theorem; the sequence of inferences used in the proof gives a solution to the problem.

Example: The state-space representation of the monkey-and-bananas problem can be modified so that the states are described by wffs. We assume that, in this example, there are three operators: grasp, climbbox, and pushbox. Let the initial state s0 be described by the following set of wffs:

¬ONBOX
AT(BOX, B)
AT(BANANAS, C)
¬HB

The predicate ONBOX has value T only when the monkey is on top of the box, and the predicate HB has value T only when the monkey has the bananas. The effects of the three operators can be described by the following wffs:

1. grasp
(∀s){ONBOX(s) ∧ AT(BOX, C, s) ⇒ HB(grasp(s))}
meaning "For all s, if the monkey is on the box and the box is at C in state s, then the monkey will have the bananas in the state attained by applying the operator grasp to state s." It is noted that the value of grasp(s) is the new state resulting when the operator is applied to state s.

2. climbbox
(∀s){ONBOX(climbbox(s))}
meaning "For all s, the monkey will be on the box in the state attained by applying the operator climbbox to state s."

3. pushbox
(∀x ∀s){¬ONBOX(s) ⇒ AT(BOX, x, pushbox(x, s))}
meaning "For all x and s, if the monkey is not on the box in state s, then the box will be at position x in the state attained by applying the operator pushbox(x) to state s."

The goal wff is

(∃s)HB(s)

This problem can now be solved by a theorem-proving process to show that the monkey can have the bananas (Nilsson [1971]).

10.5 MEANS-ENDS ANALYSIS

So far, we have discussed several search methods that reason either forward or backward but, for a given problem, one direction or the other must be chosen. Often, however, a mixture of the two directions is appropriate. Such a mixed strategy would make it possible to solve the main parts of a problem first and then go back and solve the small problems that arise in connecting the big pieces together. A technique known as means-ends analysis allows us to do that.

The technique centers around the detection of the difference between the current state and the goal state. Once such a difference is determined, an operator that can reduce the difference must be found. It is possible that the operator is not applicable to the current state; a subproblem of getting to a state in which it can be applied is then generated. It is also possible that the operator does not produce exactly the goal state; then we have a second subproblem of getting from the state it does produce to the goal state. If the difference was determined correctly, and if the operator is really effective at reducing the difference, then the two subproblems should be easier to solve than the original problem. The means-ends analysis is applied recursively to the subproblems, so from this point of view it can be considered a problem-reduction technique. In order to focus the system's attention on the big problems first, the differences can be assigned priority levels; differences of higher priority can then be considered before lower-priority ones.

The first program to exploit means-ends analysis was the general problem solver (GPS). Its design was motivated by the observation that people often use this technique when they solve problems. The most important data structure used in the means-ends analysis is the "goal." The goal is an encoding of the current problem situation, the desired situation, and a history of the attempts so far to change the current situation into the desired one. Three main types of goals are provided:

Type 1. Transform object A into object B.
Type 2. Reduce a difference between object A and object B by modifying object A.
Type 3. Apply operator Q to object A.

For GPS, the initial task is represented as a transform goal, in which A is the initial object or state and B the desired object or goal state. Associated with the goal types are methods, or procedures, for achieving them, shown in simplified form in Fig. 10.11. These methods can be interpreted as problem-reduction operators that give rise either to AND nodes (for transform and apply goals) or to OR nodes (for a reduce goal). In trying to transform object A into object B, the transform method uses a matching process to discover the differences between the two objects; the difference with the highest priority is the one chosen for reduction. A difference-operator table lists the operators relevant to reducing each difference. The recursion stops if, for a transform goal, there is no difference between A and B, or if, for an apply goal, the operator Q is immediately applicable; the recursion may also stop, with failure, when all relevant operators have been tried and have failed.

Figure 10.11 Methods for means-ends analysis: a transform goal reduces the difference between A and B and then transforms the resulting A' into B; a reduce goal selects a relevant operator Q and applies it to A; an apply goal reduces the difference between A and the preconditions of Q, producing A'', and then applies Q to A''.

Consider a simple robot problem in which the available operators are listed as follows:

Operator                 Preconditions                                    Results
1. PUSH(OBJ, LOC)        AT(ROBOT, OBJ) ∧ LARGE(OBJ) ∧                    AT(OBJ, LOC) ∧ AT(ROBOT, LOC)
                         CLEAR(OBJ) ∧ HANDEMPTY
2. CARRY(OBJ, LOC)       AT(ROBOT, OBJ) ∧ SMALL(OBJ)                      AT(OBJ, LOC) ∧ AT(ROBOT, LOC)
3. WALK(LOC)             None                                             AT(ROBOT, LOC)
4. PICKUP(OBJ)           AT(ROBOT, OBJ)                                   HOLDING(OBJ)
5. PUTDOWN(OBJ)          HOLDING(OBJ)                                     ¬HOLDING(OBJ)
6. PLACE(OBJ1, OBJ2)     AT(ROBOT, OBJ2) ∧ HOLDING(OBJ1)                  ON(OBJ1, OBJ2)

Figure 10.12 shows a difference-operator table that describes when each of the operators is appropriate. Notice that sometimes there may be more than one operator that can reduce a given difference, and that a given operator may be able to reduce more than one difference.

Difference               PUSH   CARRY   WALK   PICKUP   PUTDOWN   PLACE
Move object               x       x
Move robot                                x
Clear object                                      x
Get object on object                                                 x
Get hand empty                                              x
Be holding object                                 x

Figure 10.12 A difference-operator table.
Suppose that the robot is given the problem of moving a desk with two objects on it from one room to another; the objects on top must also be moved. The main difference between the initial state and the goal state is the location of the desk. To reduce this difference, either PUSH or CARRY could be chosen. If CARRY is chosen first, its preconditions must be met. This results in two more differences that must be reduced: the location of the robot and the size of the desk. The location of the robot can be handled by applying WALK, but there are no operators that can change the size of an object, so this path leads to a dead end. Following the other possibility, operator PUSH will be attempted. PUSH has three preconditions, two of which produce differences between the initial state and the goal state: the robot must be at the desk, and the desk must be clear. Since the desk is already large, the third precondition creates no difference. The robot can be brought to the correct location by using the operator WALK, and the surface of the desk can be cleared by applying the operator PICKUP twice. But after one PICKUP, an attempt to apply PICKUP a second time results in another difference: the hand must be empty. The operator PUTDOWN can be applied to reduce that difference.

Once PUSH is performed, the problem is close to the goal state, but not quite. The objects must be placed back on the desk. The operator PLACE will put them there, but it cannot be applied immediately: another difference must be eliminated, since the robot must be holding the objects. Once the robot is at the location of the two objects, it can use PICKUP and CARRY to move them to the other room, and PLACE to put them back on the desk.

The order in which differences are considered can be critical. It is important that significant differences be reduced before less critical ones. Section 10.6 describes a robot problem-solving system, STRIPS, which uses the means-ends analysis.
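A minimal means-ends loop of this kind is sketched below in Python. It is illustrative only: the state encoding, the STRIPS-like (preconditions, delete, add) operator format, and the tiny two-operator fragment of the desk problem are assumptions made here; an operator is treated as relevant when one of its added assertions would remove the difference, which is the same criterion STRIPS uses in the next section.

    # Means-ends analysis sketch: the difference is a goal atom not yet true; a relevant
    # operator is one that would add it; preconditions are established recursively first.
    # States are sets of ground atoms such as ("AT", "ROBOT", "DESK").

    def achieve(state, goal, operators, depth=10):
        """Return (plan, new_state) making goal true, or (None, state) on failure."""
        if goal in state:
            return [], state
        if depth == 0:
            return None, state
        for name, (pre, delete, add) in operators.items():
            if goal not in add:                       # not a relevant operator
                continue
            plan, s = [], state
            for p in pre:                             # reduce precondition differences
                sub, s = achieve(s, p, operators, depth - 1)
                if sub is None:
                    break
                plan += sub
            else:
                s = (s - set(delete)) | set(add)      # apply the operator
                return plan + [name], s
        return None, state

    # Hypothetical fragment of the desk example (names and encoding assumed, not GPS's own).
    operators = {
        "WALK(desk)":        ([], [], [("AT", "ROBOT", "DESK")]),
        "PUSH(desk, room2)": ([("AT", "ROBOT", "DESK")], [], [("AT", "DESK", "ROOM2")]),
    }
    plan, _ = achieve(frozenset(), ("AT", "DESK", "ROOM2"), operators)
    print(plan)   # ['WALK(desk)', 'PUSH(desk, room2)']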
10.6 PROBLEM SOLVING

The simplest type of robot problem-solving system is a production system that uses the state description as the database. State descriptions and goals for robot problems can be constructed from logical statements, and robot actions change one state, or configuration, of the world into another. One simple and useful technique for representing robot actions is employed by a robot problem-solving system called STRIPS (Fikes and Nilsson [1971]). A set of rules is used to represent robot actions, and rules in STRIPS consist of three components. The first is the precondition that must be true before the rule can be applied; it is usually expressed by the left side of the rule. The second component is a list of predicates called the delete list: when a rule is applied to a state description, or database, the assertions in the delete list are deleted from the database. The third component is called the add list: when the rule is applied, the assertions in the add list are added to the database. The MOVE action for a block-stacking example is given below:

MOVE(X, Y, Z)      Move object X from Y to Z
Precondition:      CLEAR(X), CLEAR(Z), ON(X, Y)
Delete list:       ON(X, Y), CLEAR(Z)
Add list:          ON(X, Z), CLEAR(Y)

As an example, consider the robot hand and the configuration of blocks shown in Fig. 10.1. This situation can be represented by the conjunction of the following statements:

CLEAR(B)      Block B has a clear top
CLEAR(C)      Block C has a clear top
ON(C, A)      Block C is on block A
ONTABLE(A)    Block A is on the table
ONTABLE(B)    Block B is on the table
HANDEMPTY     The robot hand is empty

The goal is to construct a stack of blocks in which block B is on block C and block A is on block B. In terms of logical statements, we may describe the goal as ON(B, C) ∧ ON(A, B). If MOVE is the only operator or robot action available, the search graph (or tree) shown in Fig. 10.2 is generated.

Consider now the same initial database and the following four robot actions or operations in STRIPS form:

1. PICKUP(X)
Precondition and delete list: ONTABLE(X), CLEAR(X), HANDEMPTY
Add list: HOLDING(X)

2. PUTDOWN(X)
Precondition and delete list: HOLDING(X)
Add list: ONTABLE(X), CLEAR(X), HANDEMPTY

3. STACK(X, Y)
Precondition and delete list: HOLDING(X), CLEAR(Y)
Add list: HANDEMPTY, ON(X, Y), CLEAR(X)

4. UNSTACK(X, Y)
Precondition and delete list: HANDEMPTY, CLEAR(X), ON(X, Y)
Add list: HOLDING(X), CLEAR(Y)

Working forward from the initial state description, we obtain the complete state space for this problem, shown in Fig. 10.13 with a solution path between the initial state and the goal state indicated by dark lines. The solution sequence of actions consists of {UNSTACK(C,A), PUTDOWN(C), PICKUP(B), STACK(B,C), PICKUP(A), STACK(A,B)}; it is called a "plan" for achieving the goal.

If a problem-solving system knows how each operator changes the state of the world or the database and knows the preconditions for an operator to be executed, it can apply means-ends analysis to solve problems. STRIPS and most other planners use means-ends analysis. Briefly, this technique involves looking for a difference between the current state and a goal state and trying to find an operator that will reduce the difference. A relevant operator is one whose add list contains formulas that would remove some part of the difference. If the relevant operator is not applicable to the current state, a subproblem of reaching a state that satisfies its preconditions is solved first, and this continues recursively until the goal state has been reached.
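The three-part rule structure can be mirrored directly in code. The following Python sketch is a simplification (assertions are plain strings and there is no variable instantiation or unification); it applies a STRIPS-form operator to a database of assertions.

    # A STRIPS-form rule: preconditions must hold, delete-list assertions are removed,
    # add-list assertions are inserted. The database is a set of ground assertions.

    def applicable(db, pre):
        return all(p in db for p in pre)

    def apply_rule(db, pre, delete, add):
        if not applicable(db, pre):
            raise ValueError("precondition not satisfied")
        return (db - set(delete)) | set(add)

    # The blocks-world database of Fig. 10.1 and the UNSTACK(C, A) action.
    db = {"CLEAR(B)", "CLEAR(C)", "ON(C,A)", "ONTABLE(A)", "ONTABLE(B)", "HANDEMPTY"}
    unstack_C_A = {
        "pre":    ["HANDEMPTY", "CLEAR(C)", "ON(C,A)"],
        "delete": ["HANDEMPTY", "CLEAR(C)", "ON(C,A)"],
        "add":    ["HOLDING(C)", "CLEAR(A)"],
    }
    db = apply_rule(db, unstack_C_A["pre"], unstack_C_A["delete"], unstack_C_A["add"])
    print(sorted(db))
    # ['CLEAR(A)', 'CLEAR(B)', 'HOLDING(C)', 'ONTABLE(A)', 'ONTABLE(B)']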
We have just seen how STRIPS computes a specific plan to solve a particular robot problem. The next step is to generalize the specific plan by replacing constants by new parameters; that is, we wish to elevate the particular plan to a plan schema. The need for plan generalization is apparent in a learning system. For the purpose of saving plans so that portions of them can be used in a later planning process, plans are stored in a triangle table, a representation with rows and columns corresponding to the operators of the plan. The triangle table reveals the structure of a plan in a fashion that allows parts of it to be extracted later in solving related problems; to accomplish this, the preconditions and effects of any portion of the plan must be known. Triangle tables are concise and convenient representations of robot plans, and they can easily be constructed from the initial state description, the operators in the sequence, and the goal description.

Figure 10.13 A state space for a robot problem (the complete space generated by the four operators from the initial database of Fig. 10.1, with the solution path marked).

If there are N operators in the plan sequence, the table has N + 1 rows. Let the top row be called the first row and the leftmost column the zeroth column; the jth column, for j > 0, is headed by the jth operator of the sequence. The entries in cell (i, 0), for i < N + 1, are those statements in the initial state description that survive as preconditions of the ith operator. The entries in cell (i, j), for j > 0 and i < N + 1, are those statements added to the state description by the jth operator that survive as preconditions of the ith operator. The entries in the (N + 1)th row are those statements of the initial state description, and those added by the various operators, that are components of the goal. Thus the entries in the row to the left of the ith operator are precisely the preconditions of that operator, and the entries in the column below the ith operator are precisely the add-formula statements of that operator that are needed by subsequent operators or that are components of the goal. An example of a triangle table, for the six-step block-stacking plan just derived, is shown in Fig. 10.14.

Let us define the ith kernel as the intersection of all rows below, and including, the ith row with all columns to the left of the ith column; the fourth kernel is outlined by double lines in Fig. 10.14. The entries in the ith kernel are precisely the conditions that must be matched by a state description in order that the sequence composed of the ith and subsequent operators be applicable and achieve the goal. Thus the first kernel (i.e., the zeroth column) contains those conditions of the initial state needed by subsequent operators and by the goal, and the (N + 1)th kernel [i.e., the (N + 1)th row] contains the goal conditions themselves. These properties of triangle tables are very useful for monitoring the actual execution of robot plans.
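The cell definitions translate into a direct construction procedure. Below is a small Python sketch (operators encoded as in the earlier STRIPS sketch; it records whole ground assertions rather than generalized, parameterized ones, and the two-step example and its goal are chosen only for illustration).

    # Cell (i, 0): initial-state assertions surviving as preconditions of operator i.
    # Cell (i, j), j > 0: assertions added by operator j, not deleted by operators
    # j+1 .. i-1, that are preconditions of operator i (or of the goal when i = N+1).

    def triangle_table(initial, plan, goal):
        """plan is a list of (name, pre, delete, add); rows and columns indexed from 1."""
        n = len(plan)
        table = {}
        for i in range(1, n + 2):
            need = set(goal) if i == n + 1 else set(plan[i - 1][1])
            for j in range(0, i):
                source = set(initial) if j == 0 else set(plan[j - 1][3])
                deleted_later = set()
                for k in range(j + 1, i):                 # clobbered before operator i?
                    deleted_later |= set(plan[k - 1][2])
                table[(i, j)] = sorted((source - deleted_later) & need)
        return table

    def kernel(table, i, n):
        """The ith kernel: every cell in rows >= i and columns < i."""
        cells = [table[(r, c)] for (r, c) in table if r >= i and c < i]
        return sorted(set().union(*map(set, cells))) if cells else []

    # First two steps of the block-stacking plan, with a made-up goal for illustration.
    init = ["CLEAR(C)", "ON(C,A)", "HANDEMPTY", "CLEAR(B)", "ONTABLE(A)", "ONTABLE(B)"]
    plan = [("unstack(C,A)", ["HANDEMPTY", "CLEAR(C)", "ON(C,A)"],
             ["HANDEMPTY", "CLEAR(C)", "ON(C,A)"], ["HOLDING(C)", "CLEAR(A)"]),
            ("putdown(C)", ["HOLDING(C)"], ["HOLDING(C)"],
             ["ONTABLE(C)", "CLEAR(C)", "HANDEMPTY"])]
    print(triangle_table(init, plan, goal=["CLEAR(A)", "HANDEMPTY"])[(2, 1)])  # ['HOLDING(C)']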
Figure 10.14 A triangle table for the six-step block-stacking plan {unstack(C,A), putdown(C), pickup(B), stack(B,C), pickup(A), stack(A,B)}; the fourth kernel is outlined by double lines.

Since robot plans must ultimately be executed in the real world by a mechanical device, the execution system must acknowledge the possibility that the actions in the plan may not accomplish their intended tasks and that mechanical tolerances may introduce errors as the plan is executed. As actions are executed, unplanned effects might either place us unexpectedly close to the goal or throw us off the track. These problems could be dealt with by generating a new plan, based on an updated state description, after each execution step, but such a strategy would obviously be too costly, so we instead seek a scheme that can intelligently monitor progress as a given plan is being executed.

The kernels of triangle tables contain just the information needed to realize such a plan execution system. (Here we assume that the world is static, that is, that no changes occur in the world except those initiated by the robot itself, and that a sensory perception system continuously updates the state description as the plan is executed so that this description accurately models the current state of the world.) At the beginning of a plan execution we know that the entire plan is applicable and appropriate for achieving the goal, because the statements in the first kernel are matched by the initial state description. Now suppose the system has just executed the first i - 1 actions of the plan sequence. Then, in order for the remaining part of the plan (consisting of the ith and subsequent actions) to be both applicable and appropriate for achieving the goal, the statements in the ith kernel must be matched by the new current state description.

Actually, we can do better than merely checking whether the expected kernel matches the state description after an action: we can look for the highest numbered matching kernel. Supposing the highest numbered matching kernel is the ith one, we know that the ith operator is applicable to the current state description, so the system executes the action corresponding to this ith operator and checks the outcome, as before, by searching again for the highest numbered matching kernel. If the goal kernel (the last row of the table) is matched, execution halts. In this way, if an unanticipated effect places us unexpectedly close to the goal, we need only execute the appropriate remaining actions, and if an execution error destroys the results of previous actions, the appropriate actions can be reexecuted; the procedure thus has the flexibility to omit execution of unnecessary actions or to overcome certain kinds of failures by repeating the execution of appropriate actions. Replanning is initiated only when there are no matching kernels.
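A sketch of this kernel-matching monitor in Python follows. It reuses the triangle_table and kernel helpers from the earlier sketch, and it simply tests every kernel from the highest down rather than using the more economical bottom-up row scan described next; the execute callback and its behavior are assumptions for illustration.

    # Execution monitoring: after each action, find the highest numbered kernel whose
    # statements are all present in the current (sensed) state description.

    def highest_matching_kernel(table, n, state):
        """Return the largest i (1..n+1) whose kernel is satisfied, or 0 if none is."""
        for i in range(n + 1, 0, -1):                 # start with the goal kernel
            if all(stmt in state for stmt in kernel(table, i, n)):
                return i
        return 0

    def monitor(table, plan, n, state, execute):
        """Run the plan, resuming at the highest matching kernel after every action."""
        while True:
            i = highest_matching_kernel(table, n, state)
            if i == 0:
                return "replan"                       # no kernel matches
            if i == n + 1:
                return "goal achieved"                # goal kernel matched
            state = execute(plan[i - 1], state)       # execute ith action, sense new state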
In an ideal world this procedure would merely execute, in order, each action in the plan. In a real-world situation the fact that the kernels of triangle tables overlap can be used to advantage to scan the table efficiently for the highest numbered matching kernel. Starting in the bottom row, we scan the table from left to right, looking for the first cell that contains a statement that does not match the current state description. If we scan the whole row without finding such a cell, the goal kernel is matched; otherwise, if we find such a cell in column i, we set a boundary at column i, move up to the next-to-bottom row, and begin scanning this row from left to right, but not past column i. If we again find a cell containing an unmatched statement, we reset the column boundary and move up another row to begin scanning that row. With the column boundary set to k, the process terminates by finding that the kth kernel is the highest numbered matching kernel when it completes a scan of the kth row (from the bottom) up to the column boundary.

As an example of how this process might work, let us return to our block-stacking problem and the plan represented by the triangle table in Fig. 10.14. Suppose that the system executes actions corresponding to the first four operators and that the results of these actions are as planned; if there were no execution error, the sixth kernel would now be matched. Now suppose that the system attempts to execute the pickup-block-A action, but the execution routine (this time) mistakes block B for block A and picks up block B instead. (Assume again that the perception system accurately updates the state description by adding HOLDING(B) and deleting ON(B,C).) The result of the error is that the highest numbered matching kernel is now kernel 4. The action corresponding to STACK(B,C) is thus reexecuted, putting the system back on track.

Example: Consider the simple task of fetching a box from an adjacent room by a robot vehicle. Let the initial state of the robot's world model be as shown in Fig. 10.15 (rooms R1, R2, and R3; the robot in room R1; a box B1 in room R2; door D1 connecting R1 and R2, and door D2 connecting R2 and R3). Assume that there are two operators, GOTHRU and PUSHTHRU:

GOTHRU(d, r1, r2)        Robot goes through door d from room r1 into room r2
Precondition: INROOM(ROBOT, r1) ∧ CONNECTS(d, r1, r2)
Delete list:  INROOM(ROBOT, S) for any value of S
Add list:     INROOM(ROBOT, r2)

PUSHTHRU(b, d, r1, r2)   Robot pushes object b through door d from room r1 into room r2
Precondition: INROOM(b, r1) ∧ INROOM(ROBOT, r1) ∧ CONNECTS(d, r1, r2)
Delete list:  INROOM(ROBOT, S), INROOM(b, S) for any value of S
Add list:     INROOM(ROBOT, r2), INROOM(b, r2)

The initial database M0 is

INROOM(ROBOT, R1)
BOX(B1)
INROOM(B1, R2)
CONNECTS(D1, R1, R2)
CONNECTS(D2, R2, R3)
(∀x ∀y ∀z)[CONNECTS(x, y, z) ⇒ CONNECTS(x, z, y)]

and the goal is

G0: (∃x)[BOX(x) ∧ INROOM(x, R1)]

Figure 10.15 Initial world model.

The difference-operator table for this problem is shown in Fig. 10.16: GOTHRU changes the location of the robot, while PUSHTHRU changes the location of a box together with that of the robot.

Difference                    GOTHRU    PUSHTHRU
Location of robot                x          x
Location of box                             x
Location of box and robot                   x

Figure 10.16 Difference-operator table.

When STRIPS is given the problem, it first attempts to achieve the goal G0 from the initial state M0. This problem cannot be solved immediately. From the means-ends analysis, STRIPS finds the relevant operator PUSHTHRU(B1, d, r1, R1), whose add list can produce the desired statement INROOM(B1, R1). Its precondition is set up as the next subgoal,

G1: INROOM(B1, r1) ∧ INROOM(ROBOT, r1) ∧ CONNECTS(d, r1, R1)

Although no immediate solution can be found for this subgoal either, STRIPS finds that with the substitutions r1 = R2 and d = D1 the current database already contains INROOM(B1, R2) and, using the symmetry axiom, CONNECTS(D1, R2, R1); the remaining difference is the location of the robot. The relevant operator is GOTHRU(d, r1, R2), whose add list provides INROOM(ROBOT, R2), and its precondition becomes the subgoal

G2: INROOM(ROBOT, r1) ∧ CONNECTS(d, r1, R2)

Using the substitutions r1 = R1 and d = D1, STRIPS is able to accomplish G2 from M0. It therefore applies GOTHRU(D1, R1, R2) to M0 to yield

M1: INROOM(ROBOT, R2), BOX(B1), INROOM(B1, R2), CONNECTS(D1, R1, R2), CONNECTS(D2, R2, R3), (∀x ∀y ∀z)[CONNECTS(x, y, z) ⇒ CONNECTS(x, z, y)]

Now STRIPS attempts to achieve the subgoal G1 from the new database M1. With the substitutions r1 = R2 and d = D1 it succeeds, and applying PUSHTHRU(B1, D1, R2, R1) to M1 yields

M2: INROOM(ROBOT, R1), BOX(B1), INROOM(B1, R1), CONNECTS(D1, R1, R2), CONNECTS(D2, R2, R3), (∀x ∀y ∀z)[CONNECTS(x, y, z) ⇒ CONNECTS(x, z, y)]

Finally, STRIPS attempts to accomplish the original goal G0 from M2. This attempt is successful, and the final operator sequence is

GOTHRU(D1, R1, R2), PUSHTHRU(B1, D1, R2, R1)

The triangle table for this plan is given in Fig. 10.17.

Figure 10.17 Triangle table for the plan {GOTHRU(D1, R1, R2), PUSHTHRU(B1, D1, R2, R1)}.

We would like to generalize the above plan so that it could be freed from the specific constants D1, R1, R2, and B1 and used in situations involving arbitrary doors, rooms, and boxes. Hence the plan could be generalized as

GOTHRU(d1, r1, r2), PUSHTHRU(b, d2, r2, r3)

and could be used to go from one room to an adjacent second room and push a box to an adjacent third room. The triangle table for the generalized plan is shown in Fig. 10.18.

10.7 ROBOT LEARNING

We have discussed the use of triangle tables for generalized plans to control the execution of robot plans. Triangle tables for generalized plans can also be used by STRIPS to extract a relevant operator sequence during a subsequent planning process. Recall that the (i + 1)th row of a triangle table (excluding the first cell) represents the add list of the ith head of the plan, that is, of the sequence OP1, OP2, . . . , OPi. Conceptually, we can think of a single triangle table as representing a family of generalized operators, and upon the selection by STRIPS of a relevant add list we must extract from this family an economical parameterized operator achieving that add list.
To carry out this transformation. it is simply dropped out of the candidacy. After the applicability check. The analogy of two task state- ments is used to express the similarity between them and is determined by a semantic matching procedure. Models of task states also must include the configurations of all objects and linkages in the world a-. A. If the plan is not applicable. the smaller the value. if no candidate is found. These systems issue robot commands such as: PICKUP(A) and STACK(X. The world model for a task must contain the following information: (1) geometric description of all objects and robots in the task environment. Computer simulation of PULP-I has shown a significant improvement of planning performance. But they can be expected to produce a much more detailed robot program.\ There are three phases in task planning: modeling. In the foreseeable future. and the desired final (goal) state. and (4) descriptions of robot and sensor characteristics. the system terminates with failure. VISION. the initial state of the environment. In other words. and manipulator program synthesis.8 ROBOT TASK PLANNING The robot planners discussed in the previous section require only a description of the initial and final states of a given task.' . the task environment. task specification. (2) physical description of all objects. The output of the task planner would be a robot program to achieve the desired final state when executed in the specified initial state.. instead of predicate logic. Each candidate plan is then checked by its operators' preconditions to ensure its applicability to the current world state.Y) without specifying the robot path. This improvement is not merely in the planning speed but also in the capability of forming complex plans from the learned basic task examples. the robot carrying out the task. Of course. The one with the smallest value of semantic matching has the top priority and must be at the beginning of the candidate list. (3) kinematic description of all linkages. These planning systems typically do not specify the detailed robot motions necessary to achieve an operation. Col CAD 10. the closer the meaning. several candidate plans might be found. -ti. These candidate plans are listed in ascending order according to their evaluation values of semantic matching. C?' mow. is used as the internal representation of tasks... model. ago wry `CS 0'Q awe A.d '+r"+ 'C7 . A semantic network. Initially a set of basic task examples is stored in the system as knowledge based on past experience. past experience in terms of stored information is retrieved and a candidate plan is formed. SENSING. a task planner would transform the task-level specifications into manipulator-level specifications. however. Based on the semantic matching measure. AND INTELLIGENCE for a solution. robot task planners will need more detailed information about intermediate states than these systems provide. the task planner must have a description of the objects being manipulated. The matching algorithm measures the semantic "closeness". for example. In CSG. objects. 10. and positioning accuracy of each of the joints. force sensing allows the use of compliant motions. (2) using the robot itself to specify robot configurations and to locate features of the objects.19b. and the form of the constraints depends on the shapes of the objects. tasks are actually defined by sequences of states of the world model. 
The major sources of geometric models are computer-aided design (CAD) systems and computer vision. 10. the basic idea is that complicated solids are constructed by performing set operations on a few types of primitive solids. For task planning. Boundary representation 2. Methods 1 and 2 produce numerical configurations which are difficult "'' CAD CAD 10. determine how fast they can be moved or how much force can be applied to them before they fall over. vision enables the robot to obtain the configuration of an object to some specified accuracy at execution time. The mass and inertia of parts. Sweep representation 3.1 Modeling The geometric description of objects is the principal component of the world t"" '-' .ROBOT INTELLIGENCE AND TASK PLANNING 507 ate model.19a can be described by the structure given in Fig. for example.2 Task Specification A model state is given by the configurations of all the objects in the environment. This is the reason why a task planner needs geometric descriptions of There are additional constraints on motion imposed by the kinematic structure of the robot itself. tea. Volumetric representation CAD There are three types of volumetric representations: (1) spatial occupancy. and (3) using symbolic spatial relationships among object features to constrain the configurations of objects.8. and (3) constructive solid geometry (CSG). There are three methods for specifying configurations: (1) using a CAD system to position models of the objects at the desired configurations. There are three major types of three-dimensional object representation schemes (Requicha and Voelcker [1982]): 1. touch information could serve in both capacities.8. The kinematic models provide the task planner with L". velocity and acceleration bounds. Many of the physical characteristics of objects play important roles in planning robot operations. (2) cell decomposition. Another important aspect of a robot system is its sensing capabilities. The object in Fig. 10. In addition to sensing. there are many individual characteristics of manipulators that must be described. the information required to plan manipulator motions that are consistent with external constraints. A system based on constructive solid geometry has been suggested for task planning. The legal motions of an object are constrained by the presence of other objects in the environment. 19 Constructive solid geometry (CSG). we should be able to specify tasks. n. a configuration is described by a set of symbolic spatial relationships that are required to hold between objects in that ""A configuration. AND INTELLIGENCE (a) (b) Figure 10. Set relational operators: U. to interpret and modify. attributes of C: radius. given symbolic spatial relationships for specifying '`n . union. VISION. Assume that the model includes names for objects and object features.508 ROBOTICS: CONTROL. width. difference. 'C1 must then be simplified as much as possible to determine the legal ranges of configurations of all objects. intersection. Since model states are simply sets of configurations and task specifications are configurations. height. -. height. The first step in the task planning process is transforming the symbolic spatial relationships among object features to equations on the configuration parameters of objects in the model. In the third method. Sao sequences of model states. These equations C^. The symbolic form of the relationships is also used during program synthesis. SENSING. Attributes of A and B: length. 
1 Symbolic Spatial Relationships The basic steps in obtaining configuration constraints from symbolic spatial relationships are: (1) defining a coordinate system for objects and object features.21. against f3) and (f2 against f4 ) The purpose is to obtain a set of equations that constrain the configuration of Blocks relative to the known configuration of Block2. 10.3 Manipulator Program Synthesis The synthesis of a manipulator program from a task specification is the crucial phase of task planning.8. Each against relationship between two faces. trans(x. motion planning.9. That is. Consider the following specification. given in the state depicted in Fig. Configurations of entities are the 4 x 4 transformation matrices: 1 ado 0 1 0 0 0 0 1 0 1 0 0 fI = 0 0 0 0 1 -1 f2 = 0 1 0 0 0 1 0 0 1 0 1 1 0 1 1 0 1 0 0 1 1 0 0 0 1 0 1 -1 0 0 0 0 1 1 0 0 0 1 0 f3 = 0 1 0 1 f4 = 0 1 0 Let twix(O) be the transformation matrix for a rotation of angle B around the x axis. face f on .ROBOT INTELLIGENCE AND TASK PLANNING 509 10. (2) defining equations of object configuration parameters for each of the spatial relationships among features. The output of the synthesis phase is a program composed of grasp commands. as shown in Fig.20: PLACE Blockl (f.I.9 BASIC PROBLEMS IN TASK PLANNING 10. This program is generally in a manipulator-level language for a particular manipulator and is suitable for repeated execution without replanning. say. with M = M.y. and z. the face fI of Block2 must be against the face f3 of Blockl and the face f2 of Block2 must be against the face f4 of Blockl. The major steps involved in this phase are grasp planning.z) the matrix for a translation x. 10. and error tests. Each object and feature has a set of axes embedded in it. and (4) solving the equations for the configuration parameters of each object. 10. (3) combining the equations for each object. several kinds of motion specifications. y. and let M be the matrix for the rotation around the y axis that rotates the positive x axis into the negative x axis. and error detection. 10. . y.yl. VISION. object A and face g on object B.9-1) The two against relations in the example of Fig. SENSING.9-2) }z Figure 10.20 Illustration for spatial relationships among objects.z2 )f2Block2 tz (10. AND INTELLIGENCE Figure 10.y2. generates the following constraint on the configuration of the two objects: A = f-IM twix(6) (0.15 generate the following Blockl = f3 IM twix(61) trans(O.21 Axes embedded in objects and features from Fig.20.+ (10.510 ROBOTICS: CONTROL. z)gB equations: a.zl )fiBlock2 Blockl = fa IM twix(62) trans(0. 10. 9-4) is . (10._y (10.z2) f2 Applying the rewrite rules.y1. twix(01)(f2)-' twix(-02) = M(f4)-'M Also. we obtain twix(01) = M(f4)-'M(f'2) = I From Eq. The rotational equation can be obtained by replacing each of the trans matrices by the identity and only using the rotational components of other matrices. .9-3) is transformed to (10.9-6) [0 -1 (f'2) = M(f'4)-'M = 1 0 0 1 0 0 0 1 0 0 0 0 0 0 Eq.ROBOT INTELLIGENCE AND TASK PLANNING 511 Equation (10. Eq.9-6) is satisfiable and we can choose 02 = 0. obtained by setting the last row of the matrix to [0. Block2.9-4) where the primed matrix denotes the rotational component of the transformation.9-7) -1 Blockl = 0 1 0 0 0 0 1 0 yI -1 2 +z1 0 0 0 1 -1 0 0 0 1 0 0 0 0 1 0 0 2-y2 0 -1 2 +z2 (10.9-6). (10.zl) fI = fa IM twix(02) trans(0. Setting the two expressions equal to each other and removing the common term. Thus.9-5) can be rewritten as '0". -z2) twix( -02)M-' f4 = I . 
Letting 02 = 0 in Eq.-.1].9-8) . (10. (10. (10.. (10.9-5) (10. (10.9-3) f3 'M twix(01) trans(0.9-2) becomes 0 (10.y2. we get f3 'M twix(01) trans(0. since i.9-7) we conclude that 01 = 0.9-2) consists of two independent constraints on the configuration of Blockl that must be satisfied simultaneously. It can be shown that the rotational and translational components of this type of equation can be solved independently. o-`1 '-.y1.z1 + 1)(f2)-' x trans(0. (10.y1 + l.0. The rotational equation for Eq.0. (f'3)-'M twix(01)(f'2)-' twix(-02)M(f 4) = I Since f3 = I. AND INTELLIGENCE Equating the corresponding matrix terms. C7' 10. VISION.512 ROBOTICS: CONTROL. The algorithms for robot obstacle avoidance can be grouped into the following classes: (1) hypothesize and test. second. for example. a set of linear constraints are derived by using differential approximations for the rotations around a nominal configuration. They can be used to model. fits. The values of the configuration parameters satisfying the constraints can be bounded by applying linear programming techniques to the linearized constraint equations. test a selected set of configurations along the path for possible collisions. edges and vertices.2 Obstacle Avoidance The most common robot motions are transfer movements for which the only constraint is that the robot and whatever it is carrying should not collide with objects in the environment. hypothesize a candidate path between the initial and final configuration of the robot manipulator. Several obstacle avoidance algorithms have been proposed in different domains. The contact relationships treated there include against. or a feature in contact with a region of another feature. The basic method consists of three steps: first.U+ obstacle(s) that would cause the collision. propose an avoidance motion by examining the . (2) penalty function. The entire process is repeated for the modified motion. NO' '-' possible collision is found. = z2. detecting potenC)' CAD A. Taylor [1976] extended this approach to noncontact relationships such as for a peg in a hole of diameter greater than its own. Therefore. These relationships give rise to inequality constraints on the configuration parameters. The method's basic computational operations are detecting potential collisions and modifying proposed paths to avoid collisions. an ability to plan motions that avoid obstacles is essential to a task planner. that is. y. the relationship of the position of the tip of a screwdriver in the robot's gripper to the position errors in the robot joints and the slippage of the screwdriver in the gripper. The method used in the above example was proposed by Ambler and Popple- stone [1975].. = 0 2+z.9. and (3) explicit free space. COQ The main advantage of the hypothesize and test technique is its simplicity. SENSING. The hypothesize and test method was the earliest proposal for robot obstacle avoidance. we obtain 2-y2 =1 Y. = 0. In this section. we briefly review those algorithms that deal with robot obstacle avoidance in three dimensions. third. an object in a box.O Ate. if a C3' . cylindrical shafts and holes. and z. and coplanar among features that can be planar or spherical faces. yz = 1. the position of Blockl has 1 degree of freedom corresponding to translations along the z axis. The first operation.' arc . =2+z2 Hence. After simplifying the equalities and inequalities. can be very difficult. By using many points of the robot.On a collision. This simplicity. 
Pursuing the local minima of the penalty function can lead to situations where no further progress can be `CS in' 'CS '-y `J+ 'LS COD a. such as enclosing spheres. the penalty function for an obstacle would have to be defined as a transformation of the configuration space obstacle. It is noted that in Fig.. amounts to the ability to detect nonnull geometric intersections between the manipulator and obstacle models. D!' and drops off sharply with distance from obstacles.. modifying a proposed path. adding a penalty term for deviations from the shortest path.ROBOT INTELLIGENCE AND TASK PLANNING 513 tial collisions.8 that the second operation.. 10. Under such conditions. whereas in Fig." '°J . The decision can be made so as to follow local minima in the penalty function. The penalty function methods are attractive because they provide a relatively simple way of combining the constraints from multiple objects. An approach proposed by Khatib [1980] is intermediate between these two extremes. . we can compute the value of the penalty function and estimate its partial derivatives with respect to the configuration parameters. a more accurate detection of potential collisions could be accomplished by using the information from vision and/or proximity sensors.-.22. The total penalty function is computed by adding the penalties from individual obstacles and. . O0' !<.. the path search function must decide which sequence of configurations to follow. possibly.22b moving the tip of the manipulator in the same way leads to CAD . however. In general. (DD penalty function on manipulator configurations that encodes the presence of objects. rather than a single one. When. The distinction between these two types of penalty functions is illustrated in Fig. such as two-link manipulator. 10. is achieved only by assuming a circular or spherical robot.22. motions of the robot that reduce the value of the penalty function will not necessarily be safe. On the basis of this local information.`3 . . however.. At any configuration. We have pointed out in Sec. This capability is part of the repertoire of most geometric modeling systems. the gradient of this field at a point on the robot is interpreted as a repelling force acting on that point.. it is possible to avoid many situations such as those depicted in Fig. For more realistic robots. 10. 10.22a moving along decreasing values of the penalty function is safe. The method uses a penalty function which satisfies the definition of a potential field. only in this case will the penalty function be a simple transformation of the obstacle shape. The key drawback of using penalty functions to plan safe paths is the strictly local information that they provide for path searching. an attractive force from the destination is added. attempts to avoid a collision with one obstacle will typically lead to another collision with a different obstacle. Typical proposals for path modification rely on approximations of the obstacles. . Otherwise.. These minima represent a compromise between increasing path length and approaching too close to obstacles.`J (~D fro ten ''7 . 10. subject to kinematic constraints.. the penalty is infinite for configurations that cause collisions 'may. The second class of algorithms for obstacle avoidance is based on defining a v. 'F1 nom' .p y°° r.. These methods work fairly well when the obstacles are sparsely located so that they can be dealt with one at a time.the space is cluttered. In addition. 
The motion of the robot results from the interaction of these two forces. ' can CAD basis of the particular subsets of free-space which they represent and in the representation of these subsets. In these cases. In particular. the rest of the operation is influenced by choices made during grasping. In this section. Penalty functions are more suitable for applications that require only small modifications to a known path.) made. The surfaces on the robot used for grasping. AND INTELLIGENCE 1000 I 1 500 8100 (a) (b) Figure 10. VISION. SENSING. in relatively cluttered spaces other methods will either fail or expend an undue amount of effort in path searching. the algorithm must choose a previous configuration where the search is to be resumed. cps 10. The algorithms differ primarily on the CAD sp. rather than simply finding the first path that is safe. Obstacle avoidance is then the problem of finding a path. The third class of obstacle avoidance algorithms builds explicit representations of subsets of robot configurations that are free of collisions. within these subsets. Moreover.514 ROBOTICS: CONTROL.9. are gripping surfaces. The disadvantage is that the computation of the free space may be expensive. but other aspects of the general problem of planning grasp motions have received little attention. Several proposals for choosing collision-free grasp configurations on objects exist.. other methods may be more efficient for uncluttered spaces. target object refers to the object to be grasped. that connects the initial and final configurations. but in a different direction from the previous time. The manipulator BCD . This suggests that the penalty function method might be combined profitably with a more global method of hypothesizing paths.3 Grasp Planning A typical robot operation begins with the robot grasping an object. These backup points are difficult to identify from local information. The advantage of free space methods is that their cad use of an explicit characterization of free space allows them to define search methods that are guaranteed to find paths if one exists within the known subset of free space. such as the inside of the fingers. the free space.22 Illustration of penalty function for (a) simple circular robot and (b) the twolink manipulator. (Numbers in the figure indicate values of the penalty function. The manipulator configuration which has it grasping the target object at that object's initial configuration is the initial-grasp configuration. However. it is feasible to search for short paths. p.0 v. in' about the axis between the grippers. The second difference is that grasp planning must consider the detailed interaction of the manipulator's shape and that of the target object.-. 'z3 . however. c. Choose a set of candidate grasp configurations. 2.ROBOT INTELLIGENCE AND TASK PLANNING 515 configuration that places the target object at its destination . configuration of the target object is subject to substantial uncertainty. an additional consideration in grasping is certainty: the grasp motion should reduce the uncertainty in the target object's configuration. and existence of collisionfree paths from initial to final-grasp configuration. The choice can be based on considerations of object geometry. involving the grasped object. Existing approaches to grasp planning differ primarily on the collision-avoidance constraints. 'C7 . Potential collisions of gripper and neighboring objects at initial-grasp configuration. 
10.9.3 Grasp Planning

A typical robot operation begins with the robot grasping an object; the rest of the operation is influenced by choices made during grasping. Several proposals for choosing collision-free grasp configurations on objects exist, but other aspects of the general problem of planning grasp motions have received little attention. In this section, target object refers to the object to be grasped. The surfaces on the robot used for grasping, such as the inside of the fingers, are the gripping surfaces. The manipulator configuration which has it grasping the target object at that object's initial configuration is the initial-grasp configuration; the manipulator configuration that places the target object at its destination is the final-grasp configuration.

Choosing grasp configurations that are safe and reachable is related to obstacle avoidance; however, there are significant differences. The first difference is that the goal of grasp planning is to identify a single configuration, not a path. The second difference is that grasp planning must consider the detailed interaction of the manipulator's shape and that of the target object. The third difference is that grasp planning must deal with the interaction of the choice of grasp configuration and the constraints imposed by subsequent operations involving the grasped object. Because of these differences, most existing methods for grasp planning treat it independently from obstacle avoidance.

There are three principal considerations in choosing a grasp configuration for objects whose configuration is known. The first is safety: the robot must be safe at the initial and final grasp configurations. The second is reachability: the robot must be able to reach the initial-grasp configuration and, with the object in the hand, to find a collision-free path to the final-grasp configuration. The third is stability: the grasp should be stable in the presence of forces exerted on the grasped object during transfer motions and parts-mating operations. If the initial configuration of the target object is subject to substantial uncertainty, an additional consideration in grasping is certainty: the grasp motion should reduce the uncertainty in the target object's configuration.

Most approaches to choosing safe grasps can be viewed as instances of the following method (a schematic sketch of this generate-and-prune procedure is given at the end of this section):

1. Choose a set of candidate grasp configurations. The choice can be based on considerations of object geometry (for example, convexity, which indicates that all the matter near a geometric entity lies to one side of a specified plane), stability, or uncertainty reduction. For parallel-jaw grippers, a common choice is grasp configurations that place the grippers in contact with a pair of parallel surfaces of the target object. An additional consideration in choosing the surfaces is to minimize the torques about the axis between the grippers. Note that candidate grasp configurations are those having the gripping surfaces in contact with the target object while avoiding collisions between the manipulator and other objects.

2. The set of candidate grasp configurations is then pruned by removing those that are not reachable by the robot or lead to collisions. Existing approaches to grasp planning differ primarily on the collision-avoidance constraints, for example:
a. Potential collisions of gripper and neighboring objects at the initial-grasp configuration.
b. Potential collisions of the whole manipulator and neighboring objects at the initial-grasp configuration.
c. Potential collisions of gripper and neighboring objects at the final-grasp configuration.
d. Potential collisions of the whole manipulator and neighboring objects at the final-grasp configuration.
e. Existence of a collision-free path to the initial-grasp configuration.
f. Existence of collision-free paths from the initial-grasp to the final-grasp configuration.

3. After pruning, a choice is made among the remaining configurations. One possibility is choosing the configuration that leads to the most stable grasp; another is choosing the one least likely to cause a collision in the presence of position error or uncertainty.

It is not difficult to see that sensory information (vision, proximity, torque, or force) should be very useful in determining a stable and collision-free grasp configuration.
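The generate-and-prune structure referred to above can be expressed schematically as follows. The predicates reachable, collision_free, and path_exists stand for whatever kinematic and geometric tests a particular system provides, and the stability score is a placeholder, so this is a sketch of the control flow only, not of any specific planner described in the text.

```python
def choose_grasp(candidates, reachable, collision_free, path_exists, stability):
    """candidates: iterable of candidate grasp configurations (step 1).
    reachable, collision_free, path_exists: boolean predicates (assumed given).
    stability: scores a surviving candidate; higher is better (step 3)."""
    # Step 2: prune candidates that are unreachable, cause collisions,
    # or admit no collision-free path to or from the grasp.
    survivors = [g for g in candidates
                 if reachable(g) and collision_free(g) and path_exists(g)]
    if not survivors:
        return None                      # no safe, reachable grasp was found
    # Step 3: among the survivors, pick the most stable grasp.
    return max(survivors, key=stability)

# Tiny illustration with made-up predicates over scalar "grasp parameters":
grasps = [0.1, 0.4, 0.7, 0.9]
best = choose_grasp(
    grasps,
    reachable=lambda g: g < 0.8,          # e.g., within joint limits
    collision_free=lambda g: g > 0.2,     # e.g., gripper clear of neighbors
    path_exists=lambda g: True,           # e.g., a path planner succeeded
    stability=lambda g: -abs(g - 0.5),    # prefer grasps near the centroid
)
print(best)    # 0.4
```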
10.10 EXPERT SYSTEMS AND KNOWLEDGE ENGINEERING

Most techniques in the area of artificial intelligence fall far short of the competence of humans or even animals. Computer systems designed to see images, hear sounds, and understand speech can only claim limited success. However, in one area of artificial intelligence, that of reasoning from knowledge in a limited domain, computer programs can not only approach human performance but in some cases they can exceed it. These programs use a collection of facts, rules of thumb, and other knowledge about a given field, coupled with methods of applying those rules, to make inferences. They solve problems in such specialized fields as medical diagnosis, mineral exploration, and oil-well log interpretation. They differ substantially from conventional computer programs because their tasks have no algorithmic solutions and because often they must make conclusions based on incomplete or uncertain information. Such high-performance expert systems, previously limited to academic research projects, are beginning to enter the commercial marketplace.

10.10.1 Construction of an Expert System

Not all fields of knowledge are suitable at present for building expert systems. In building such expert systems, researchers have found that amassing a large amount of knowledge, rather than sophisticated reasoning techniques, is responsible for most of the power of the system. For a task to qualify for "knowledge engineering," the following prerequisites must be met:

1. The task must have a well-bounded domain of application.
2. There must be at least one human expert who is acknowledged to perform the task well.
3. The primary sources of the expert's abilities must be special knowledge, judgment, and experience.
4. The expert must be able to articulate that special knowledge, judgment, and experience and also explain the methods used to apply it to a particular task.
Sometimes an expert system can be built that does not exactly match these prerequisites; for example, the abilities of several human experts, rather than one, might be brought to bear on a problem.

An expert system differs from more conventional computer programs in several important respects. In a conventional computer program, knowledge pertinent to the problem and methods for using this knowledge are intertwined, so it is difficult to change the program. In an expert system there is usually a clear separation of general knowledge about the problem (the knowledge base) from information about the current problem (the input data) and methods (the inference machine) for applying the general knowledge to the problem. Facts and other knowledge about a particular domain can be separated from the inference procedure, or control structure, for applying those facts; another part of the system, the global database, is the model of the "world" associated with a specific problem, its status, and its history. The structure of an expert system is thus modular. With this separation the program can be changed by simple modifications of the knowledge base. This is particularly true of rule-based systems, where the system can be changed by the simple addition or subtraction of rules in the knowledge base. In some sophisticated systems, an explanation module is also included, allowing the user to challenge the system's conclusions and to examine the underlying reasoning process that led to them. It is desirable, though not yet common, to have a natural-language interface to facilitate the use of the system both during development and in the field.

10.10.2 Rule-Based Systems

The most popular approach to representing the domain knowledge (both facts and heuristics) needed for an expert system is by production rules (also referred to as SITUATION-ACTION rules or IF-THEN rules). A simple example of a production rule is:

IF the power supply on the space shuttle fails,
AND a backup power supply is available,
AND the reason for the first failure no longer exists,
THEN switch to the backup power supply.

Rule-based systems work by applying rules, noting the results, and applying new rules based on the changed situation. They can also work by directed logical inference, either starting with the initial evidence in a situation and working toward a solution, or starting with hypotheses about possible solutions and working backward to find existing evidence, or a deduction from existing evidence, that supports a particular hypothesis.
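A production system of the IF-THEN kind described above can be caricatured in a few lines. The facts and rules below paraphrase the space-shuttle example from the text, while the matching scheme itself is a simplification introduced here purely for illustration.

```python
# Each rule: (set of condition facts, fact to conclude).
rules = [
    ({"power supply failed", "backup available", "failure cause removed"},
     "switch to backup power supply"),
    ({"switch to backup power supply"}, "power restored"),
]

def forward_chain(facts, rules):
    """Apply rules to the global database of facts until no new rule fires."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)      # rule fires: record its conclusion
                changed = True
    return facts

db = {"power supply failed", "backup available", "failure cause removed"}
print(forward_chain(db, rules))
```

This sketch chains forward from the data; a backward-chaining system would instead start from a hypothesized conclusion and search for rules and facts that support it.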
One of the earliest and most often applied expert systems is Dendral (Barr et al. [1981, 1982]). It was devised in the late 1960s by Edward A. Feigenbaum and Joshua Lederberg at Stanford University to generate plausible structural representations of organic molecules from mass spectrogram data. The approach called for:

1. Deriving constraints from the data
2. Generating candidate structures
3. Predicting mass spectrographs for candidates
4. Comparing the results with data

This rule-based system illustrates the very common AI problem-solving approach of "generation and test." Dendral has been used as a consultant by organic chemists for more than 15 years, and it is currently recognized as an expert in mass-spectral analysis.

One of the best-known expert systems is MYCIN (Barr et al. [1981, 1982]), designed by Edward Shortliffe at Stanford University in the mid-1970s. It is an interactive system that diagnoses bacterial infections and recommends antibiotic therapy. MYCIN represents expert judgmental reasoning as condition-conclusion rules, linking patient data to infection hypotheses, and at the same time it provides the expert's "certainty" estimate for each rule. It chains backward from hypothesized diagnoses, using rules to estimate the certainty factors of conclusions based on the certainty factors of their antecedents, to see if the evidence supports a diagnosis (a numerical sketch of this certainty-factor bookkeeping is given at the end of this subsection). If there is not enough information to narrow the hypotheses, it asks the physician for additional data. MYCIN matches treatments to all diagnoses that have high certainty values.

Another rule-based system, R1, has been very successful in configuring VAX computer systems from a customer's order of various standard and optional components. The initial version of R1 was developed by John McDermott in 1979 at Carnegie-Mellon University for Digital Equipment Corp. (R1 is written in OPS 5, a special language for executing production rules.) The system now has about 1200 rules for VAXs, together with information about some 1000 VAX components; the total system has about 3000 rules and knowledge about PDP-11 as well as VAX components. R1 works by chaining forward from the data. At each point in the configuration development, several rules for what to do next are usually applicable; of the applicable rules, R1 selects the rule having the most IF clauses for its applicability, on the assumption that that rule is more specialized for the current situation. Because the configuration problem can be solved without backtracking and without undoing previous steps, the system's approach is to break the problem up into the following subtasks and do each of them in order:

1. Correct mistakes in the order.
2. Put components into CPU cabinets.
3. Put boxes in Unibus cabinets and put components in boxes.
4. Put panels in Unibus cabinets.
5. Lay out the system floor plan.
6. Do the cabling.
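MYCIN's propagation of "certainty" estimates can be suggested by the small sketch below. The combination formula shown is the commonly cited certainty-factor rule for two positive items of evidence; the particular rules and numbers are invented for illustration and are not taken from MYCIN itself.

```python
def propagate(cf_antecedent, cf_rule):
    # A conclusion inherits certainty from its antecedent, scaled by the
    # expert's certainty estimate attached to the rule.
    return cf_antecedent * cf_rule

def combine(cf1, cf2):
    # Commonly cited rule for combining two positive certainty factors
    # that support the same conclusion.
    return cf1 + cf2 * (1.0 - cf1)

# Two hypothetical rules supporting the same diagnosis:
cf_a = propagate(cf_antecedent=0.8, cf_rule=0.6)    # 0.48
cf_b = propagate(cf_antecedent=0.9, cf_rule=0.5)    # 0.45
print(combine(cf_a, cf_b))                          # about 0.71
```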
10.10.3 Remarks

The application areas of expert systems include medical diagnosis and prescription, medical-knowledge automation, chemical-data interpretation, chemical and biological synthesis, mineral and oil exploration, planning and scheduling, signal interpretation, military threat assessment, tactical targeting, space defense, air-traffic control, circuit analysis, VLSI design, structure damage assessment, equipment fault diagnosis, computer-configuration selection, speech understanding, computer-aided instruction, knowledge-base access and management, manufacturing process planning and scheduling, and expert-system construction. There appear to be few constraints on the ultimate use of expert systems.

As expert systems mature, the nature of their design and construction is changing. The limitations of rule-based systems are becoming apparent: not all knowledge can be structured as empirical associations. Such associations tend to hide causal relationships, and they are also inappropriate for highlighting structure and function. The newer expert systems contain knowledge about causality and structure; knowledge-representation structures such as semantic networks and frames are often better suited for causal modeling. By providing knowledge representations more appropriate to the specific problem, they also tend to simplify the reasoning required. Another change is the increasing trend toward non-rule-based systems. Some expert systems combine rule-based and non-rule-based portions which cooperate to build solutions in an incremental fashion, using the "blackboard" approach, with each segment of the program contributing its own particular expertise. These systems promise to be considerably more robust than current systems and may yield correct answers often enough to be considered for use in autonomous systems, not just as intelligent assistants.

10.11 CONCLUDING REMARKS

The discussion in this chapter emphasizes the problem-solving or planning aspect of a robot. A robot planner attempts to find a path from an initial robot world to a final robot world. The path consists of a sequence of operations that are considered primitive to the system. A solution to a problem could be the basis of a corresponding sequence of physical actions in the physical world. Planning should certainly be regarded as an intelligent function of a robot.

In late 1971 and early 1972, two main approaches to robot planning were proposed. The first, typified by the STRIPS system, is to have a fairly general robot planner which can solve robot problems in a great variety of worlds. The second approach is to select a specific robot world and, for that world, to write a computer program to solve problems. The first approach, like any other general problem-solving process in artificial intelligence, usually requires extensive computing power for searching and inference in order to solve a reasonably complex real-world problem and, hence, has been regarded as computationally infeasible. On the other hand, the second approach lacks generality, in that a new set of computer programs must be written for each operating environment; this significantly limits the robot's flexibility in real-world applications.

In contrast to high-level robot planning, robot task planning usually requires more detailed and numerical information describing the robot world. Existing methods for task planning are considered computationally infeasible for real-time practical applications. For real-time robot applications, special-purpose computers can be used to speed up the computations in order to meet the real-time requirements. Robot planning, which provides the intelligence and problem-solving capability to a robot system, is still a very active area of research. Powerful and efficient task planning algorithms, executed by high-speed special-purpose computer systems, are certainly in demand.

REFERENCES

Further general reading for the material in this chapter can be found in Barr et al. [1981, 1982], Nilsson [1971, 1980], Rich [1983], and Winston [1984]. The discussion in Secs. 10.2 and 10.3 is based on the material in Whitney [1969] and Nilsson [1971]. Further basic reading for Sec. 10.4 may be found in Chang and Lee [1973]. Complementary reading for the material in Secs. 10.5 and 10.6 may be found in Fikes and Nilsson [1971] and Rich [1983]. Additional reading for the material in Sec. 10.7 can be found in Tangwongsan and Fu [1979]. Early representative references on robot task planning (Secs. 10.8 and 10.9) are Doran [1970], Fikes et al. [1972], Siklossy and Dreussi [1973], Ambler and Popplestone [1975], and Taylor [1976]. More recent work may be found in Khatib [1980], Requicha and Voelcher [1982], and Davis and Comacho [1984]. Additional reading and references for the material in Sec. 10.10 may be found in Nau [1983], Hayes-Roth et al. [1984], and Weiss and Allanheld [1984].

PROBLEMS

10.1 Suppose that three missionaries and three cannibals seek to cross a river from the right bank to the left bank by boat. The maximum capacity of the boat is two persons. If the missionaries are outnumbered at any time by the cannibals, the cannibals will eat the missionaries. Propose a computer program to find a solution for the safe crossing of all six persons. Hint: Using the state-space representation and search methods described in Sec. 10.2, one can represent the state description by (Nm, Nc), where Nm and Nc are the number of missionaries and cannibals on the left bank, respectively. The initial state is (0,0), that is, no missionary and no cannibal are on the left bank; the goal state is (3,3); and the possible intermediate states are (0,1), (0,2), (0,3), (1,1), (2,2), (3,0), (3,1), and (3,2).

10.2 Imagine that you are a high school geometry student and find a proof for the theorem: "The diagonals of a parallelogram bisect each other." Use an AND/OR graph to chart the steps in your search for a proof. Indicate the solution subgraph that constitutes a proof of the theorem.

10.3 Represent the following sentences by predicate logic wffs. (a) A formula whose main connective is a ⊃ is equivalent to some formula whose main connective is a ∨. (b) A robot is intelligent if it can perform a task which, if performed by a human, requires intelligence. (c) If a block is on the table, then it is not also on another block.

10.4 Show how the monkey-and-bananas problem can be represented so that STRIPS would generate a plan consisting of the following actions: go to the box, push the box under the bananas, climb the box, grasp the bananas.
10.5 Show, step by step, how means-ends analysis could be used to solve the robot planning problem described in the example at the end of Sec. 10.4.

10.6 Show how the monkey-and-bananas problem can be represented so that STRIPS would generate a plan consisting of the following actions: go to the box, push the box under the bananas, climb the box, grab the bananas. Use means-ends analysis as the control strategy.

APPENDIX A
VECTORS AND MATRICES

This appendix contains a review of basic vector and matrix algebra.

A.1 SCALARS AND VECTORS

The quantities of physics can be divided into two classes, namely, those having magnitude only and those having magnitude and direction. A quantity characterized by magnitude only is called a scalar. Time, mass, density, length, and coordinates are scalars. A scalar is usually represented by a real number with some unit of measurement. Scalars can be compared only if they have the same units. A quantity which is characterized by direction as well as magnitude is called a vector. Force, moment, velocity, and acceleration are examples of vectors. Usually, a vector is represented graphically by a directed line segment whose length and direction correspond to the magnitude and direction of the quantity under consideration. Vectors can be compared only if they have the same physical meaning and dimensions. In the following discussion, vectors are represented by lowercase bold letters, while matrices are in uppercase bold type. Two vectors a and b are equal if they have the same length and direction. The notation -a is used to represent a vector having the same magnitude as a but in the opposite direction. Associated with vector a is a positive scalar equal to its magnitude, represented as |a|. If a = |a| is the magnitude or length of the vector a and â is the unit vector in the direction of a, then

    a = |a| â                                              (A.1)

A unit vector â has unit length in the assigned direction,

    |â| = 1                                                (A.2)

A.2 ADDITION AND SUBTRACTION OF VECTORS

Addition of two vectors a and b is commutative,

    a + b = b + a                                          (A.4)

This can be verified easily by drawing a parallelogram having a and b as consecutive sides, with the vectors a and b drawn from the same origin point (see Fig. A.1). Addition of three or more vectors is associative,

    (a + b) + c = a + (b + c) = a + b + c                  (A.5)

This can be seen by constructing a polygon having those vectors as consecutive sides and drawing a vector from the initial point of the first to the terminal point of the last (see Fig. A.1). The difference between two vectors a and b, denoted by a - b, is defined as the vector extending from the end of b to the end of a, as in Fig. A.2.

Figure A.1 Vector addition.
Figure A.2 Vector subtraction.

A.3 MULTIPLICATION BY SCALARS

Multiplication of a vector a by a scalar m means lengthening the magnitude of vector a by |m| times, with the same direction as a if m > 0 and the opposite direction if m < 0. That is,

    b = ma                                                 (A.6)
    |b| = |m| |a|                                          (A.7)

The following rules are applicable to the multiplication of vectors by scalars:

    (1) m(na) = mna
    (2) m(a + b) = ma + mb                                 (A.8)
    (3) (m + n)a = ma + na

where m and n are scalars.

A.4 LINEAR VECTOR SPACE

A linear vector space V is a nonempty set of vectors defined over a real number field F, which satisfies the following conditions of vector addition and multiplication by scalars:

1. For any two vector elements of V, the sum is also a vector element belonging to V.
2. For any two vector elements of V, vector addition is commutative.
3. For any three vector elements of V, vector addition is associative.
4. There is a unique element called the zero vector in V (denoted by 0) such that for every element a ∈ V, 0 + a = a + 0 = a.
5. For every vector element a ∈ V, there is a unique vector (-a) ∈ V such that a + (-a) = 0.
6. For every vector element a ∈ V and for any scalar m ∈ F, the product of m and a is another vector element in V. If m = 1,
then ma=la=a1=a 7.3 MULTIPLICATION BY SCALARS Multiplication of a vector a by a scalar m means lengthening the magnitude of vector a by I m I times with the same direction as a if m > 0 and the opposite direction if m < 0. A. the sum is also a vector element belonging to V. vector addition is commutative.6) (A. For every vector element a e V and for any scalar m e F. For any two vector elements of V. 4. If m = 1..e 0 + a_= a + 0 = a 5. vector addition is associative. SENSING.7) IbI = Imi Ial The following rules are applicable to the multiplication of vectors by scalars: (1) m(na) = mna (2) m(a + b) = ma + mb (A. multiplication by scalars is distributive. Thus. which satisfies the following conditions of vector addition and multiplication by scalars: 1. and any vectors a and b in V.524 ROBOTICS: CONTROL. . 9) If the only way to satisfy this equation is for each scalar ci to be identically equal to zero.. 0. in V is linearly dependent if and only if there exist n scalars {cI .6 LINEAR COMBINATIONS.`3 vectors in a three-dimensional vector space... c. they lie in the same plane.or threedimensional vectors.. -4)T in . . c2 . m(na) = (mn)a = mna Examples of linear vector space are the sets of all real one-. Example: Let constitute a a = (1. be Then. (A. b. in F such that every vector x in V. x2. . c. Three linearly dependent vectors in threedimensional space are coplanar. c linearly dependent set the three-dimensional vector space because 3a . can be expressed as x=cI eI +C2e2+ {ei} is said to span the vector space V..c = 0 These vectors also are coplanar.. b = (0. . C2. } in F (not all equal to zero) such that Cl X] + C2 X2 + C3 X3 + +CX=0 (A. AND DIMENSIONALITY If there exists a subset of vectors {ej. A. For any scalars m and n in F. That is. A. .. e. 10) then we say that x is a linear combination of the vectors lei). and c = (3. e2. . 3. . 0)T.. 4--C in V and a set of scalars a. they lie in the same line. then the set of vectors { xi } are said to be linearly independent. and any vector a in V. the three vectors a. c3 .VECTORS AND MATRICES 525 m(a + b) = ma + mb (m + n)a = ma + na 8.5 LINEAR DEPENDENCE AND INDEPENDENCE A finite set of vectors {x1.2b . Two linearly dependent vectors in a three-dimensional space are collinear. . two... BASIS VECTORS. a. The set of vectors . 2)T.0 )( {cI . . that is. 2. 11) In particular.. then any triple of noncoplanar vectors can serve as basis vectors. e2. we usually use {i. it follows from Sec. k} to denote the basis vectors instead of {e1. can be expressed uniquely as a linear combination of the basis vectors. However. . every vector x e V can be expressed uniquely as a linear combination of the basis vectors. OY. e2. e2..-y C'. if a set of basis vectors {et. e3} are all drawn from a common origin 0. one can form various coordinate frames commonly used in engineering work. Figure A. Thus. for a vector space V. then these vectors form an oblique coordinate system with axes OX. j. if n = 3. Furthermore. ej e. if each of the basis vectors is of unit length..3). e3} are orthogonal to each other. A.. if they intersect at right angles at the origin 0. In a three-dimensional vector space. r=rleI+r2e2+ a. A. VISION. e3} (see Fig. that is. If the basis vectors {e1. s. We shall use the notation V to represent a vector space of dimension n. 'C7 . the basis vectors are the minimum number of vectors that span the vector space V. By properly choosing the direction of the basis vectors. In this case. In other words.3 Coordinate systems. 
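Linear dependence of vectors in three-dimensional space can be tested numerically: a set of vectors is dependent exactly when the matrix having them as rows fails to have full rank (for three vectors, equivalently, when their scalar triple product vanishes). The sketch below is added here for illustration; the example vectors are invented so that c is a linear combination of a and b, and NumPy's rank routine does the work.

```python
import numpy as np

def linearly_independent(*vectors, tol=1e-12):
    # The vectors are independent iff the matrix with them as rows has full rank.
    m = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(m, tol=tol) == len(vectors)

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
c = 3.0 * a - 2.0 * b                   # c is a linear combination of a and b
print(linearly_independent(a, b))       # True
print(linearly_independent(a, b, c))    # False: the three vectors are coplanar
```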
The basis vectors for a vector space V are a set of linearly independent vectors that span the vector space V.6 that any vector r e V. . One can choose different sets of basis vectors to span a vector space V. A. an n-dimensional linear vector space has n basis vectors. and OZ drawn along the basis vectors (see Fig. "'y (A.7 CARTESIAN COORDINATE SYSTEMS Given a set of n basis vectors {e1. . e. A. e. the coordinate system is called orthonormal. SENSING. then they form a rectangular or Cartesian coordinate system. The dimension of a vector space V is equal to the number of basis vectors that span the vector space V. once a set of basis vectors are chosen to span a vector space V. AND INTELLIGENCE e. e2.4).526 ROBOTICS: CONTROL. j. If the basis vectors {i. A.12) where 0 is the angle between the two vectors (see Fig.8. The scalar product a -b= Ial Iblcos0= Ialb=IbI lalcos0= Ibla (A. The first is the inner or dot or scalar product. A.1 Inner Product (Dot Product or Scalar Product) The inner product of two vectors a and b results in a scalar and is defined as tip a-b= COD lal IbI cos 0 (A. and negative if it is in the opposite direction.13) is the component of b along a.4 Cartesian coordinate system.5).VECTORS AND MATRICES 527 k k i Right-handed Left-handed Figure A. The scalar quantity b = I b I cos 0 (A.14) . we use only right-handed coordinate systems. Similarly. It is positive if the projection is in the same direction as a. then the coordinate system is called a right-handed coordinate system. if the basis vectors are chosen in the directions along the principal axes and a left-handed rotation of 90 ° about OZ carries OX into OY. then the coordinate system is called a left-handed coordinate system. k} of an orthonormal coordinate system are chosen in the directions along the principal axes and a right-handed rotation of 90 ° about OZ carries OX into OY. that is.8 PRODUCT OF TWO VECTORS In addition to the product of a scalar and a vector. Throughout this book. two other types of vector product are of importance. b is numerically equal to the projection of b on a. A. The other is the vector or cross product. b is equal to the product of the magnitude of a and the component of b along a. In particular if a = b. c=axb (A. then a b = a a = IaI IaI = Ialz is the square of the length of a. Thus. (A.19) A.528 ROBOTICS: CONTROL.c is a zero vector or orthogonal to a. and so b .-O I a I cos0 Figure A. then either (or both) of the vectors is zero or they are perpendicular to each other because cos (± 90 °) = 0. then cos 0 = 1 and a b is equal to the product of the lengths of the two vectors.5 Scalar product. two nonzero vectors a and b are orthogonal if and only if their scalar product is zero.. AND INTELLIGENCE --.8.16) one cannot conclude that b = c but merely that a (b . Since the inner product may be zero when neither vector is zero.20) . It is also equal to the product of I b I and the component of a along b. Hence. Thus. it follows that division by a vector is prohibited. SENSING.15) If the scalar product of a and b is zero. that is. VISION. (A. If a and b have the same direction and 0 = 0 °.17) (b + c) a = b a + c a (A.2 Vector Product (Cross Product) The vector or cross product of two vectors a and b is defined as the vector c. the scalar product is commutative: a b = b a !11 (A.c) = 0. if (A. The dot product of vectors is distributive over addition.18) and S]. 21) The vector c is so directed that a right-handed rotation about c through an angle 0 of less than 180 ° carries a into b.6). Thus. 
The cross product of b x a has the same magnitude as a x b but in the opposite direction of rotation as a x b. since h = IbI sin 0.6 Cross product.6. we note that the cross product is distributed over addition.VECTORS AND MATRICES 529 which is orthogonal to both a and b and has magnitude IcI =Ial IbI sin B (A. The cross product a x b can be considered as the result obtained by projecting b on the plane W X Y Z perpendicular to the plane of a and b. A.22) and the cross product are not commutative. that is..24) (A. a x (b + c) = a x b + a x c and (A.25) (b + c) x a = b x a + c x a Figure A. then one (or both) of the vectors is zero or else they are parallel.(a x b) c:= (A. Cp' '?J .Q . where 0 is the angle between a and b (see Fig. then 0 is 0 ° or 180 ° and 0 la x b l = Ial IbI sin B= 0 (A. coo b x a = . In Fig. the cross product a x b has a magni- tude equal to the area of the parallelogram formed with sides a and b. A. Also. If vectors a and b are parallel. if the cross product is zero. rotating the projection 90 ° in the positive direction about a and then multiplying the resulting vector by I a l .23) Conversely. AND INTELLIGENCE Applying the scalar and cross product to the unit vectors i. SENSING.26) ixi=jxj=kxk=0 j xk=-kxji k xi ixj=-jxik -ixk=j Using the definition of components and Eq.12).a1 b3 )j + (a1 b2 . b)c a (b x c) a x (b x c) (A.28) A.a3b2)i + (a3b1 . the scalar product of a and b can be written as a b = (a1i + a2j + a3 k) = a1b1 + a2b2 + a3b3 = aTb (b1i + b2j + b3 k) (A. k along the principal axes of a right-handed cartesian coordinate system. The cross product of a and b can be written as a determinant operation (see Sec.27) where aT indicates the transpose of a (see Sec. according to whether (a b) is positive or negative.9 PRODUCTS OF THREE OR MORE VECTORS For scalar or vector product of three or more vectors. o-c v'. A. VISION.15). (A. A. `c$ .29) The product (a b) e is simply the product of a scalar (a b) and the vector The resultant vector has a magnitude of I a b I I c I and a direction which is the same as the vector c or opposite to it.26). i j a2 b2 k a3 b3 a x b = a1 b1 _ (a2b3 . we usually encounter the following types: (a c.i= 0 (A.a2b1)k (A.k= 1 j= j k= k.530 ROBOTICS: CONTROL. j. we have i i i= j j= k. that is. A. Expressing the vectors in terms of their components in a three-dimensional vector space yields i j by cy k bZ cz a (b x c) = (ai + ayj + azk) bX CX = ax(bycz . which is meaningless. We also observe that the volume of a parallelepiped is independent of the face chosen as its base (see Fig. a.31) Note that the parentheses around the vector product b x c can be taken out CI.7). b. is a scalar whose magnitude equals the volume of a parallelepiped with the vectors a. c00 without confusion as we cannot interpret a b x c as (a b) x c.7 Scalar triple product.30) = hA = volume of parallelepiped where h and A are. A.bzcy) + ay(bcx . the height and area of the parallelepiped.VECTORS AND MATRICES 531 The scalar triple product. respectively. Thus. . a (b x c). a b x c= b a b x c Figure A.bcz) + az(b. and c as coterminous edges (see Fig.bycc) aX bX CX ay az bz cz by cy (A.(b x c) = IaI IbI I c I sin 0 cos a (A.cy .7). a. By following a clockwise traversal along the circle. . b. (A. if two of the three vectors are equal. If three vectors a. n. Similarly.34) (A.532 ROBOTICS: CONTROL. and c are coplanar.33) indicates an anticyclic permutation.15). reversing the direction of the arrows. and X are scalars and Eq. then dotting (bx c)] Thus. 
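The scalar-product relations of Sec. A.8.1 are easy to verify numerically. The short sketch below is illustrative only; it mirrors the definitions of the dot product, the angle between two vectors, and the component of one vector along another.

```python
import math

def dot(a, b):
    # a . b = a1*b1 + a2*b2 + a3*b3
    return sum(x * y for x, y in zip(a, b))

def angle_between(a, b):
    # From a . b = |a| |b| cos(theta).
    na = math.sqrt(dot(a, a))
    nb = math.sqrt(dot(b, b))
    return math.acos(dot(a, b) / (na * nb))

def component_along(b, a):
    # |b| cos(theta): the signed projection of b on a.
    return dot(a, b) / math.sqrt(dot(a, a))

a, b = [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]
print(dot(a, b))                               # 1.0
print(math.degrees(angle_between(a, b)))       # 45.0
print(component_along(b, a))                   # 1.0
print(dot([1.0, 0.0, 0.0], [0.0, 5.0, 0.0]))   # 0.0: orthogonal vectors
```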
the vector a x (b x c) lying in the plane of b and c can be expressed as a linear combination of b and c..8 Cyclic permutation. the scalar triple product vanishes.36) (A. Hence.d (A. the scalar triple vector can be used to prove linear dependence of three coplanar vectors. Equation (A. It follows p. then (ele2e3) # 0 and they form a right-handed coordinate system if (el e2e3) > 0 and a left-handed coordinate system if (e1 e2e3) < 0. SENSING. (A.) ax(bxc)=mb+nc both sides of Eq. that if el. Then. A.8. Finally. and e3 are basis vectors for a vector space V3. is a vector perpendicular to (b X c) and lying in the plane of b and c (see Fig.34) becomes a x (b x c) = X[(a c)b . VISION. and c are noncollinear.32) indicates a cyclic permutation on these vectors.34) with the vector a will yield zero: Since the vector a x (b x c) is also perpendicular to the vector a. we obtain Eq.9). m 0 _ c -n a b a = x where m.' because there is no confusion on the position of dot and cross operators. AND INTELLIGENCE These results can be readily shown from the properties of determinants (see Sec. Suppose that the vec- tors a. (A. a x (b x c). `r1 End Also. An illustration of cyclic permutation is shown in Fig. The vector triple product.32). which is usually written as (abc) '.35) (A. a bxc=axb c. we obtain Eq. A.33). that is.. (A. e2. then (abc) = 0.37) . b. A. while Eq.(a b)c] b b a Cyclic Anticyclic Figure A. (A. r(t) = lim '-' At Ar At-0 At (A.(b c) (a d) C.22) it follows that (a x b) x c = -c x (a x b) = -(c b)a + (c ="a a)b (A.9 Vector triple product.10 DERIVATIVES OF VECTOR FUNCTIONS The derivative of a vector r(t) means dr dt "fl = At-0 lim r(t + At) . It can be shown that X = 1.43) .(abc)d and c)d (A.(a x b _ (abd)c . 1 k (A. (A. (a x b) x (c x d) _ (a x b d)c .VECTORS AND MATRICES 533 Figure A. For example. (A.42) that dr dt drY 1.39) More complicated cases involving four or more products can be simplified by the use of triple products. (A. i + dt Ldt J j + Ldtj I drr..42) It follows from Eqs. 11) and (A. so the vector triple product becomes (A.40) (a x b) (c x d) = a b x (c x d) (b d) (a c) . using Eq.41) A.38) Also. AND INTELLIGENCE and the nth derivative of r(t) is d"r dtn dn rx 1 . If b(t) can be expressed in a rectangular coordinate system.44) J Using Eq.47) bz(T) dT + cZ .+ c where c is a constant vector.45) where in is a scalar NIA dt (a b) _ \.534 ROBOTICS: CONTROL. (A.. then a. the integral of the vector b(t) means a(t) = I b(T)d7. VISION.11 INTEGRATION OF VECTOR FUNCTIONS If da/dt = b(t).(t) = J (A. da dt b+a J L db J (4) f(a x b) _ da dt x b + a x db dt (5) f(abc) _ ab \ A dt 1 J _ (6) f[a x (b x c)] _ da dt (b c) dbt x Cl J a X J A. j L dtn i + dnry 1 L dtn 3 + dnrz I 1 k J dtn \. the following rules for differentiating vector functions can be obtained: (1) (2) (3) d (at b) = da dt dt d (ma) = m da at at 4- db dt (A. (A. SENSING.46) ay(t) = S by(T) dT + cy a.(t) = S bx(T) dT + cX (A.42). n j = 1.... Both column and row matrices are often referred to as vectors. n amt amn (A. . denoted by AT.. The transpose of a matrix A. A matrix consisting of a single column (row) is called a column (row) matrix.. . .) of order m by n is a rectangular array of real or complex numbers (called elements) arranged in m rows and n columns..2.m j = 1. is defined to be the matrix whose row number is identical with the column number of A.n aml am2 annt (A.j] = a. 2.49) then all a21 a12 a22 . . . am2 AT = i = 1. 
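The component formula for the cross product and the scalar triple product can be checked with the following sketch, added here purely for illustration; the triple product of the three unit axes gives the unit volume of the corresponding parallelepiped and confirms a right-handed basis.

```python
def cross(a, b):
    # a x b = (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triple(a, b, c):
    # a . (b x c): signed volume of the parallelepiped with edges a, b, c.
    return dot(a, cross(b, c))

i, j, k = [1, 0, 0], [0, 1, 0], [0, 0, 1]
print(cross(i, j))               # [0, 0, 1], i.e., k
print(cross(j, i))               # [0, 0, -1]: the cross product is not commutative
print(triple(i, j, k))           # 1: a right-handed set of basis vectors
print(triple(i, j, [2, 3, 0]))   # 0: the three vectors are coplanar
```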
amn In particular.in j=1..12 MATRIX ALGEBRA In the remainder of this appendix...48) Unless it is otherwise noted.... all a21 a12 a22 aln a2n A = [a.. . 2. .. . In other words. we shall discuss another important mathematical tool. .n.1 i = 1. matrices.50) a2n . the transpose of a column matrix is a row matrix and vice versa. (A. we will assume that A is a real matrix. if 4-I ^C7 all a21 a12 a22 a 1» a2n A = i = 1.2.... m a1. A matrix A (or A.... which is essential for the analysis of robotic mechanism.2. 2...VECTORS AND MATRICES 535 A.. .56) A + (-A) = 0 .53) A null matrix is a matrix whose elements are all identically equal to zero. That is. Thus.A + AT 2 N (A. Q. 2..e. . III III all = .. Two matrices of the same order are equal if their respective elements are equal. It is noted that if A is skew. 2. AND INTELLIGENCE A square matrix of order n has an equal number of rows and columns (i. j for all i. SENSING. VISION. j (A. That is. .. n t`' (A. j = 1.536 ROBOTICS: CONTROL.52) then the matrix is called a skew matrix. A diagonal matrix is a square matrix of order n whose off-diagonal elements are zero.55) A-B=C (1) A + B = B + A Matrix addition has similar properties as real number addition: (2) (A+B)+C=A+(B+C) (3) A + 0 = A (4) (0 is the zero or null matrix) (A. n ago (A. That is. If the elements of a square matrix are such that 00..13 SUMMATION OF MATRICES Two matrices A and B of. the elements all = 0 unity. then A = Any nonsymmetric square matrix A can be made into a symmetric matrix C by letting C . = 0 i.j = cU all . . and a. j = 1 . all = 1 if i # j for i.. A. m = n).b11 = c11 for all i. This matrix is called the identity matrix and denoted by In or I w A symmetric matrix is a square matrix of order n whose transpose is identical to itself.51) A unit matrix of order n is a diagonal matrix whose diagonal elements are all if i = j and all = 0 if i j. .54) (A. A+B=C and or or all + b. if all = b11 for all i and j.aj1 -AT . A = AT or all = aj1 for all i and j. then A = B. the same order can be added (subtracted) forming a resultant matrix C of the same order by adding (subtracting) corresponding elements.. That is. aikbki k=I (A. (A. we sum the product terms of the corresponding elements in the ith row of A and the jth column of B. 2.14 MATRIX MULTIPLICATION The product of a scalar and a matrix is formed by multiplying every element of A by the scalar.. That is. as in Eq. 2. n (Amxn)(B. matrix multiplication is not commutative even if the matrices are conformable. (1) (kA)B = k(AB) = A(kB) .p) = Cm xp or cU = F. then the number of column of A must be equal to the number of row of B and the resultant matrix C has the row and column number equal to those of A and B. Thus. The unit matrix commutes with any square matrix.58) In Eq.57) 1A=A CAD where a and b are scalars. n The following rules hold for the product of any (m x n) matrices and any scalars: (1) (2) a(A+B)=aA+aB (a+b)A=aA+bA a(bA) = (ab)A (3) (4) (A. then ABABA If AB = BA. m j = 1. In order to obtain the element at the ith row and jth column of C. (A. In general.58).. That is. . kA = Ak = [kaij] = [a..jk] i = 1.58). if A and B are square matrices of order n.VECTORS AND MATRICES 537 A.. respectively. we postmultiply the ith row of A by the jth column of B. Two matrices can be multiplied together only if they are conformable.. .59) Matrix multiplication is associative and distributive with respect to matrix addition. Thus. IA = Al = A (A. In other words. that is. . 
we can either say B is premultiplied by A or A is postmultiplied by B to obtain C. then we say the matrices are commutative. if AB = C. In general.. lowing rules for the product of matrices: (1) (2) (matrix) .jAij i=I i=1 (A. It is worthwhile to note the folS"... ..62) .. I (row matrix) I < n = (matrix). we see that for the product of three matrices. I (row matrix) I X (column matrix). x n ... p (matrix).15 DETERMINANTS The determinant of an n x n matrix A is denoted by all a21 Vie' a12 a22 ain a2n IAI = (A. AND INTELLIGENCE (2) (3) (4) acs A(BC) = (AB)C (A + B)C = AC + BC (A.. SENSING..61) and ant . n (column matrix )n . AB = 0 does not imply that A = 0 or B = 0.60) C(A + B) = CA + CB assuming that the matrix multiplications are defined.. . x .. = (row matrix) I x n (column matrix). that is.< p = (matrix ). .. n n IAI = E a1 Aid _ E a. x (matrix )n ..n ...Sometimes in matrix addition or multiplication.. I = scalar (row matrix) I (matrix)... From rule (2). I = (column matrix).538 ROBOTICS: CONTROL. VISION. we can either postmultiply B by C or premultiply B by A first and then multiply the result by the remaining matrix. ann and is equal to the sum of the products of the elements of any row or column and their respective cofactors. . A. it is more convenient to partition the matrices into submatrices which are manipulated according to the above rules of matrix algebra. . . . .. .. ai-1. . . . until reaching a determinant of order 1. a2 JAI = ail ai2 aij . a determinant of order n depends upon n determinants of order n . which is a scalar. ai+1.j-1 ai_1.j-l ai+1. Aij is the cofactor of aij. .j+l and an2 .63) where Mij is the complementary minor. ain an2 an. j + 1 anti From the above definition.1 ai-1.. ai-1.2 .1.1 ai+1.1 determinants of order n . and we delete the elements in the ith row and jth column.VECTORS AND MATRICES 539 Here.j+1 . ai-1. In other words. a1j a21 atn .: ai+l. obtained by deleting the elements in the ith row and the jth column of I A I. .2..1+1 a1. . which can be obtained as Aij = (-1)i+jMij (A. if all a21 a12 a22 . then all a12 ..j-1 an. ~D+ . . each of which in turn depends upon n . ... and so on. a1. . an.2 ai+1.j-1 a1. . then the determinant is scaled by k. .a21a12 (A. then the determinant remains unchanged.w-. its deter7. then IAI = 0. If all the elements of any row (or column) of A are multiplied by a scalar k. 3. If all the elements of any row (or column) of A are zero.65) The following properties are useful for simplifying the evaluation of determinants: 1. Example: Let A = °-n .. minant is changed. it can be evaluated as all JAI = a21 a31 a12 a22 a32 a13 a23 a33 = a11a22a33 + a12a23a31 + a13a32a21 . G1. VISION. If A and B are of order n.a12a21a33 . .16) of a matrix A of order n is less than n.. SENSING. then I AB I = IAI I B 5. IAI = IATI.a11a32a23 (A. A.w-.64) For a third order determinant. If any two rows (or columns) of A are interchanged. For n = 2 we have all a12 JAI = a21 a22 = alla22 .a31a22a13 . If the rank (see Sec. AND INTELLIGENCE A simple diagonal method can be used to evaluate the determinants of order 2 and 3. minant is zero. .540 ROBOTICS: CONTROL. f3. 6. If a multiple of any row (or column) is added to other row (or column). 2. then the sign of its deter- 4. The rank of a matrix indicates the number of linearly independent rows (or columns) in the matrix. the adjoint of A is denoted by adj A.. is the adjoint of A divided by the determinant of A..i] i.VECTORS AND MATRICES 541 Then.a2) . j = 1. n (A. A. 
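The definitions of the matrix product and of the determinant as a cofactor (Laplace) expansion can be transcribed almost literally into code. The sketch below is illustrative and makes no claim of efficiency (the recursive expansion costs on the order of n! operations); practical software evaluates determinants by elimination instead.

```python
def matmul(A, B):
    # c_ij = sum_k a_ik * b_kj ; requires len(A[0]) == len(B) (conformability).
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def det(A):
    # Cofactor expansion along the first row.
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))   # [[2, 1], [4, 3]]
print(matmul(B, A))   # [[3, 4], [1, 2]]: matrix multiplication is not commutative
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3
```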
a matrix of order m x n can have a rank equal to the smaller value of m and n.a) (c2 . then the matrix is singular and the rows of the matrix are not linearly independent. Thus. 2. A-I. then the determinant of that matrix is nonzero. that is.67) JAI The product (in either order) of a nonsingular n x n matrix A and its inverse is AA-' = A-IA = I (A. A. if the rows of a square matrix A of order n are linearly independent. or less.66) Sometimes. Thus. The rank of a matrix A of order m x n is equal to the order of the largest submatrix of A with nonzero determinant.a22) _ (a-b)(b-c)(c-a) This is the Vandermonde determinant of order 3. . that is.fl [A1 ]T = [A.. If the determinant of a square matrix of order n is zero.16 RANK OF A MATRIX In general. and ..a) (b2 .(c . the determinant can be used as a test for matrix singularity. The inverse of a nonsingular square matrix A. and the matrix is said to be nonsingular. A-I = the identity matrix I. a 1 a2 JAI = 0 0 b-a c-a d-' b2-a2 C2 -a2 _ (b .68) .j is the cofactor of aid in I A I. then the transpose of the matrix formed from the cofactors Al is called the adjoint of A.17 ADJOINT AND INVERSE MATRICES If A is a square matrix and Ai. [Ac]T JAI = add A (A. . a 3 x 3 matrix c A= has the inverse f i A-I = 1 aei + dhc + gfb .71) '. (A.(af . Similarly.t-11 . A..gec (ei ..fg) (dh .69) (A.bg) . is conformable..afh ..)T = In general.)T (A. AND INTELLIGENCE Thus...67) and (A..ce) x -_(di . from Eqs.be a b d e g h Similarly.542 ROBOTICS: CONTROL. (A. then the inverse of their product is the product of the inverse of each matrix in reverse order: (AI A2 .dbi . A are square matrices of order n.. SENSING. a 2 x 2 matrix A = has the inverse a b c (A2)T(A. A.. A2.68).73) The proof of this result is left as an exercise. . may be stated as follows: [A-1 + BTCB]-I = A - ABT[BABT + C-']-1BA (A.bd) An important result called the matrix inversion lemma.72) d A_I - 1 ad ..70) If A1. then the transpose of their product is the product of the transpose of each matrix in reverse order: (A... VISION. AZ I Ai I (A.A2A3 . if the matrix product of Al A2 ..ge) (ai .)-I = A. A.ch) (bf . .cd) (ae . (adj A)A = A(adj A) = and I A 1 1.cg) -(ah . ..fh) -(bi . and Noble [1969]. Pipes [1963].76) (A.78) Tr(A + B) = Tr(A) + Tr(B) Tr (AB) = Tr (BA) Tr (ABCT) = Tr (CBTAT) REFERENCES Further reading for the material in this appendix may be found in Frazer et al. [1960]. Trace A = Tr (A) aii (A. Thrall and Tornheim [1963].18 TRACE OF A MATRIX The trace of a square matrix A of order n is the sum of its principal diagonal elements.74) Some useful properties of the trace operator on matrices are: Tr (A) = Tr (AT) (A.VECTORS AND MATRICES 543 A.75) (A.77) (A. . Bellman [1970]. DC' This appendix reviews three methods for obtaining the jacobian for a six-link manipulator with rotary or sliding joints.. .. and angular velocity vectors of the manipulator hand with respect to the base coordinate frame (xo. Based on the moving coordinate frame concept (Whitney [1972]).(9). zo). PZ(t)]T [vv(t). linear velocity. yo. the superscript T denotes the transpose operation. WZ(t)]T . Py(t). Vy(t). Wy(t). (Chap. the linear and angular velocities of the hand can be obtained from the velocities of the lower joints: = J(9)9(t) = [J.C where. One advantage of resolved motion is that there exists a linear mapping between the infinitesimal joint motion space and the infinitesimal hand motion space.1 VECTOR CROSS PRODUCT METHOD Let us define the position. This 4-' . J2(q).1) fi(t) [w (t). B. J6(q)]4(t) 544 (B. 
5) one needs to determine how each infinitesimal joint motion affects the infinitesimal motion of the manipulator hand.APPENDIX B MANIPULATOR JACOBIAN In resolved motion control. .2) . VZ(t)]T (B. respectively: P(t) V(t) A A [PX(t). as before.a0 mapping is defined by a jacobian.. SI C/] CI 0 . x indicates cross product.11 and its link coordinate transformation matrices in Fig. . the elements of the jacobian are found to be: -SI [d6(C23C4S5+S23C5)+S23d4+a3C23+a2C2)-CI(d6S4S5+d2) CI [ d6 (C23 C4 S5 + S23 C5) + S23 d4 + a3 C23 + a2 C2 ] .MANIPULATOR JACOBIAN 545 where J(q) is a 6 x 6 matrix whose ith column vector Ji(q) is given by (Whitney [1972]): Zi I X i'P6 Zi _ I if joint i is rotational (B. For a six-link manipulator with rotary joints.3) Ji(q) = zr-I 0 if joint i is translational and 4(t) = 141(0--46 (t)]T is the joint velocity vector of the manipulator. . 2. the jacobian can be found to be: J(B) = Z° X °P6 Z° ZI X 'P6 ZI . For the PUMA robot manipulator shown in Fig. . and zi.I P6 is the position of the origin of the hand coordinate frame from the (i .1)th coordinate frame expressed in the base coordinate frame. Z5 X 5P6 Z5 ti. 2.4) .I is the unit vector along the axis of motion of joint i expressed in the base coordinate frame.13.SI (d6 S4 S5 +d2) JI(B)= 0 0 0 1 r d6 C1 d2 CI d6SI S4 S5 + d2SI S4S5 + J2(0) = J2z . (B. i. . C1 [ d6 C3 C4 S5 + d6 S3 C5 + d4 S3 + a3 C3 Sl S23 (d6 C5 + d4) . .C1 C23 S4 .S1 C4 .5154 ) SS . AND INTELLIGENCE where J2z Sl [ d6 S23 C4 S5 . VISION.S1 C23 S4 + C1 C4 S23 S4 d6 (S1 C23 C4 + C1 S4 )S5 + d6 S1 S23 C5 .d6 Cl C4 S5 J5(6) = .d4 C3 + a3 S3 I .S1S4)S5 + C1S23C5 (S1 C23 C4 + C1 S4 )S5 + S1 S23 C5 .d6 C23 S4 S5 d6 C23 C4 S5 .S1 C1 0 where J3z = . + 0j). C.546 ROBOTICS: CONTROL.d6 C23 C5 .).d6 (C1 C23 C4 .. + 0.Cl S23 (d6 C5 + d4 ) d6 CI S23 S4 S5 . S13 = sin (0.S23 C4 S5 + C23C5 where Si = sin 0.S1 [ d6 S3 C4 S5 .d6 C1 S23 C5 J6(0) _ 0 (CIC23C4 . and C13 = cos (0.d6 Sl S23 C4 S5 J4(0) _ Cl S23 SI S23 C23 d6S23S4C5 d6 S23 S4 S5 d6 Cl C23 S4 C5 + d6 Sl C4 C5 + d6 S1 C23 S4 S5 .d4C23 + a3S23 + a2 S2 Cl [ d6 C23 C4 S5 + d6 S23 C5 + d4 S23 + a3 C23 + a2 C2 Cl d6S1S4S5 d6 S4 S5 I J3(0) = J3z . = cos 8.d6 C3 C5 .. SENSING. MANIPULATOR JACOBIAN 547 If it is desired to control the manipulator hand along or about the hand coordinate axes, then one needs to express the linear and angular velocities in hand coordinates. This can be accomplished by premultiplying the v(t) and Q(t) by the 3 x 3 rotation matrix [°R6]T, where °R6 is the hand rotation matrix which relates the orientation of the hand coordinate frame to the base coordinate frame. Thus, [°R6]T 0 0 0 6R° _0 [J(q)]4(t) (B.5) 6R°_ [°R6]TJ where 0 is a 3 x 3 zero matrix. B.2 DIFFERENTIAL TRANSLATION AND ROTATION METHOD [1981] utilizes 4 x 4 homogeneous transformation matrices to obtain differential translation and rotation about a coordinate frame from which the jacobian of the manipulator is derived. Given a link coordinate frame T, a differential change in T corresponds to a differential translation and rotation along and about the base coordinates; that is, Paul 1 -bZ 1 by 0 0 0 Zip T + dT = ,4N bZ - bX 1 -s,, 0 T (B.6) bX 0 0 1j or b,. 
1 bZ 1 dT = bZ - b.0 1 0 1 0 1 0 0 0 0 0 0 1 0 0 1 T o°' -83' 0 5., 0 1 0 0 0 0 0 = AT where 1 (B.7) 0 1 0 0 .-1 dx 1 -bZ 1 by 0 0 0 1 1 0 1 0 0 1 0 0 0 1 0 0 0 dy dZ 1 bZ - bx 1 0 0 0 0 0 0 0 -by 0 bX 0 0 0 0 0 0 (B.8) b = (bX, by, 6Z)T is the differential rotation about the principal axes of the base coordinate frame and d = (dx, dy, dd)T is the differential translation along the °`.° 548 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE principal axes of the base coordinate frame. Similarly, a differential change in T can be expressed to correspond with a differential translation and rotation along and about the coordinate frame T: 1 0 1 0 0 1 dX 1 -SZ 1 6y 0 0 0 1 T + dT = T 0 0 LO dy dz Sz -6, 1 (B.9) 0.l 0 0 -Sy 0 SX 0 11 1 0 5y 0 -Sz 1 0 0 0 1 1 0 1 0 0 1 0 or dT = T Sz -SX 1 0 0 0 0 0 1 -6y 0 8., 0 0 0 0 0 (B.10) = (T)(TA) where TA has the same structure as in Eq. (B.8), except that the definitions of S and d are different. S = (Or, by, 6Z)T is the differential rotation about the principal axes of the T coordinate frame, and d = (dx, dy, dz)T is the differential translation along the principal axes of the T coordinate frame. From Eqs. (B.7) and 3.. III (B.10), we obtain a relationship between A and TA: s., AT = (T)(TA) or TA=T-'AT (B.11) Using Eq. (2.2-27), Eq. (B.11) becomes n (Sxn) n(Sxs) n(Sxa) n(Sxp)+d TA=T-IAT= s(Sxn) s(Sxs) s(Sxa) s(Sxp)+d a (S x n) 0 a (S x s) 0 a (S x a) a (S x p) + d 0 0 (B.12) where S = (Sx, by, SZ)T is the differential rotation about the principal axes of the base coordinate frame, and d = (dr, dy, dz)T is the differential translation along the principal axes of the base coordinate frame. Using the vector identities '«"+ x and (y x z) = -y (x x z) = y (z x x ) x(x x y) =0 MANIPULATOR JACOBIAN 549 Eq. (B.12) becomes 0 TQ o 0 0 0 0 0 (B.13) Since the coordinate axes n, s, a are orthogonal, we have nxs=a then Eq. (B.13) becomes 0 sxa=n a x n= s 6- (p x n) + x s) + x a) + 0 TQ = 6 6 a s 6 0 (B.14) n 0 0 0 o If we let the elements of TA be 0 TSz T6y T6X Td, Tdy Tdz (B.15) TA = Tdz T6Y 0 T6X 0 0 0 0 0 then equating the matrix elements of Eqs. (B.14) and (B.15), we have p) + d] p) + d] x p)+d] T6X = (B.16) 6 a 550 ROBOTICS. CONTROL, SENSING, VISION, AND INTELLIGENCE Expressing the above equation in matrix form, we have dx dy [n, s, a]T 0 [(p x n), (p x s), (p x [n s, a]T a)]T dZ bx (B.17) by aZ where 0 is a 3 x 3 zero submatrix. Equation (B. 17) shows the relation of the differential translation and rotation in the base coordinate frame to the differential translation and rotation with respect to the T coordinate frame. Applying Eq. (B.10) to the kinematic equation of a serial six-link manipulator, we have the differential of °T6: d °T6 = °T6 T6A induce an equivalent change in °T6 T6A ..1 (B.18) In the case of a six-link manipulator, a differential change in joint i motion will d°T6 = °T6T6p = °AI'A2 ... 1-2A; I'-IA.;-IA. ... 5A6 (B.19) where is defined as the differential change transformation along/about the joint i axis of motion and is defined to be 0 dO1 '-IAi - dOi 0 0 0 0 0 0 0 0 0 0 0 0 0 if link i is rotational (B.20) 0 0 0 0 0 0 0 0 0 if link i is translational 0 0 0 0 0 ddl 0 From Eq. (B.19), we obtain T6Q due to the differential change in joint i motion T6Q = (;-IA;'A;+I = Ui-' i-IA;Ui . . . 5A6)-1'-IA;('-'A;'Ai+I . . . 5A6) (B.21) where Ui = '-'A;'Al+I ... 5A6 MANIPULATOR JACOBIAN 551 Expressing Ui in the form of a general 4 x 4 homogeneous transformation matrix, we have nx sx ax px Ui = ny nz sy sz ay az P), (B.22) Pz 1 0 0 0 Using Eqs. 
(B.20) and (B.22) for the case of rotary joint i, Eq. (B.21) becomes 0 T6A -az 0 nz sz Pxny -Pynx Pxsy - Pysx az - nz 0 0 -Sz 0 pxay - Pyax 0 dOi (B.23) 0 For the case of a prismatic joint i, Eq. (B.21) becomes 0 T6A 0 0 0 nz sz 0 0 0 0 ddi (B.24) 0 0 az 0 a.. 0 0 From the elements of T60 defined in Eq. (B.15), equating the elements of the matrices in Eq. (B.15) and Eq. (B.23) [or Eq. (B.24)] yields O.' r Pxny - Pynx Pxsy - Pysx Pxay - Pyax nz sz dOi if link i is rotational az (B.25) ddi if link i is translational 552 ROBOTICS- CONTROL, SENSING, VISION, AND INTELLIGENCE Thus, the jacobian of a manipulator can be obtained from Eq. (B.25) for i = 1,2,...,6: T6dx dqI T",dy, dq2 T6d z dq3 ..-. T6a x T6U Sy = J(q) (B.26) dq4 dq5 N Z dq6 where the columns of the jacobian matrix are obtained from Eq. (B.25). For the PUMA robot manipulator shown in Fig. 2.11 and its link coordinate transformation matrices in Fig. 2.13, the jacobian is found to be: Jlx fly JI(0) _ Jl z - S23 (C4 C5 C6 - S4 S6) + [1. C23 S5 C6 S23 (C4 C5 S6 + S4 C6) + C23 S5 S6 -523C455 + C23C5 where Jlx = [ d6 (C23 C4 S5 + S23 C5) + d4 S23 + a3 C23 + a2 C2 ] (S4 C5 C6 + C4 S6 ) - (d6S4S5 +d2)IC23(C4C5C6 -S4S6) -S23S5C6I J1 y = d6 (C23 C4 S5 + S23 C5) + d4 S23 + a3 C23 + a2 C2 I ( - S4 C5 S6 + C4 C6 ) - (d6 S4 S5 + d2 ) [ - C23 (C4 C5 S6 + S4 C6) + S23 S5 S6 JIz = Ld6(C23C4S5 +S23C5) +d4S23 + a3 C23 +a2C2I(S4S5 - (d6 S4 S5 +d2)(C23C4S5 +S23C5) J2x J2y J2z J2(0) = S4 C5 C6 + C4 S6 -S4C5S6 + C4 S6 S4 S5 MANIPULATOR JACOBIAN 553 where J2x = (d6S3C5 + d6 C3 C4 S5 + d4 S3 + a3 C3 +a2)(S5C6) - (- d6 C3 C5 + d6 S3 C4 S5 - d4 C3 + a3 S3) (C4 C5 C6 - S4 S6 ) J2 y = - (d6 S3 C5 + d6 C3 C4 S5 + d4 S3 + a3 C3 + a2 ) (S5 S6 ) + (- d6 C3 C5 + d6 S3 C4 S5 - d4 C3 + a3 S3 ) (C4 C5 S6 + S4 C6 ) J2z = - (d6 S3 C5 + d6 C3 C4 S5 + d4 S3 + a3 C3 + a2 ) C5 - (- d6 C3 C5 + d6S3C4S5 -d4C3 +a3S3)(C4S5) (a3 + d6 C4 S5) (S5 C6) + (d4 + d6 C5) (C4 C5 C6 - S4 S6 ) - (a3 + d6 C4 S5) (S5 S6) - (d4 + d6 C5) (C4 C5 S6 + S4 C6 ) - (a3 + d6 C4 S5) C5 + (d4 + d6 C5 ) C4 S5 J3(0) = S4 C5 C6 + C4 S6 - S4 C5 S6 + C4S6 S4S5 d6 S5 S6 d6 S5 C6 0 J4(0) = - S5 C6 S5 S6 C5 I d6 C6 I -d6 S6 0 S6 J5(0) = C6 0 554 ROBOTICS: CONTROL, SENSING, VISION, AND INTELLIGENCE B.3 STROBING FROM THE NEWTONEULER EQUATIONS OF MOTION The above two methods derive the jacobian in symbolic form. It is possible to numerically obtain the elements of the jacobian at time t explicitly from the Newton-Euler equations of motion. This is based on the observation that the ratios of infinitesimal hand accelerations to infinitesimal joint accelerations are the elements of the jacobian if the nonlinear components of the accelerations are deleted from the Newton-Euler equations of motion. From Eq. (B.2), the accelerations of the hand can be obtained by taking the time derivative of the velocity vector: .-y C/1 [ 6(t)] = J(q)9(t) + J(q, q)q(t) (t)1 ( B . 27 ) [q1(t), 46 (t) ] T is the joint acceleration vector of the manipuwhere lator. The first term of Eq. (B.27) gives the linear relation between the hand and the hand accelerations and the joint accelerations can be established from the Newton-Euler equations of motion, as indicated by the first term in Eq. (B.27). From Table 3.3 we have the following recursive kinematics (here, we shall only consider manipulators with rotary joints): ... , o0° joint accelerations. The second term gives the nonlinear components of the accelerations and it is a function of joint velocity. 
Thus, a linear relation between C/] 'RoWi = 'Ri-1('-IRowi-1 + zogi) 'RoWi = 'Ri-1[`-IR06i-1 (B.28) (B.29) + zogi + ('-IR0wi-1) x z04i] 'Roiri = ('RoWi) x ('Ropi*) + (`RoWi ) X [(`RoWi) x (`Ropi*)] + 'Ri-1(i-IRovi-1) (B.30) The terms in Eqs. (B.29) and (B.30) involving wi represent nonlinear Coriolis and centrifugal accelerations as indicated by the third term in Eq. (B.29) and the second term in Eq. (B.30). Omitting these terms in Eqs. (B.29) and (B.30) give us the linear relation between the hand accelerations and the joint accelerations. Then if we successively apply an input unit joint acceleration vector (41, q 2 4 2 ,.. ..Q 46 T = ( 1 , 0, 0, ... 46 )T 0) T, 1)T, etc., the columns of the jacobian matrix q2 can be "strobed" out because the first term in Eq. (B.27) is linear and the second (nonlinear) term is neglected. This numerical technique takes about 24n(n + 1)/2 multiplications and 19n(n + 1)/2 additions, where n is the number of degrees of freedom. In addition, we need 18n multiplications and 12n additions to convert the hand accelerations from referencing its own link coordinate frame to referencing the hand coordinate frame. = (0, 0, 0, ... , (41 , 42, ... , 46 ) T = (0, 1, 0, ... , 0) T, (41 ..d MANIPULATOR JACOBIAN 555 Although these three methods are "equivalent" for finding the jacobian, this "strobing" technique is well suited for a controller utilizing the Newton-Euler equations of motion. Since parallel computation schemes have been discussed and developed for computing the joint torques from the Newton-Euler equations of motion (Lee and Chang [1986b]), the jacobian can be computed from these schemes as a by-product. However, the method suffers from the fact that it only gives the numerical values of the jacobian and not its analytic form. C.' REFERENCES Further reading for the material in this appendix may be found in Whitney [1972], Paul [1981], and Orin and Schrader [1984]. BIBLIOGRAPHY Aggarwal, J. K., and Badler, N. I. (eds.) [1980]. "Motion and Time Varying Imagery," Special Issue, IEEE Trans. Pattern Anal. Machine Intelligence, vol. PAMI-2, no. 6, pp. 493-588. Agin, G. J. [1972]. "Representation and Description of Curved Objects," Memo AIM-173, Artificial Intelligence Laboratory, Stanford University, Palo Alto, Calif. Albus, J. S. [1975]. "A New Approach to Manipulator Control: The Cerebellar Model Articulation Controller," Trans. ASME, J. Dynamic Systems, Measurement and Control, pp. 220-227. Ambler, A. P., et al. [1975]. "A Versatile System for Computer Controlled Assembly," Artificial Intelligence, vol. 6, no. 2, pp. 129-156. obi Ambler; A. P., and Popplestone, R. J. [1975]. "Inferring the Positions of Bodies from Armstrong, W. M. [1979]. "Recursive Solution to the Equations of Motion of an N-link Manipulator," Proc. 5th World Congr., Theory of Machines, Mechanisms, vol. 2, pp. 1343-1346. amp Specified Spatial Relationships," Artificial Intelligence, vol. 6, no. 2, pp. 157-174. 'c7 '-' [i7 s." Astrom, K. J. and Eykhoff, P. [1971]. "System Identification-A Survey," Automatica, vol. 7, pp. 123-162. Baer, A., Eastman, C., and Henrion, M. [1979]. "Geometric Modelling: A Survey," Computer Aided Design, vol. 11, no. 5, pp. 253-272. C70 Bajcsy, R., and Lieberman, L. [1976]. "Texture Gradient as a Depth Cue," Comput. Graph. Image Proc., vol. 5, no. 1, pp. 52-67. Ballard, D. H. [1981]. "Generalizing the Hough Transform to Detect Arbitrary Shapes," Pattern Recog., vol. 13, no. 2, pp. 111-122. Ballard, D. H., and Brown, C. M. [1982]. 
REFERENCES

Further reading for the material in this appendix may be found in Whitney [1972], Paul [1981], and Orin and Schrader [1984].

BIBLIOGRAPHY

Aggarwal, J. K., and Badler, N. I. (eds.) [1980]. "Motion and Time Varying Imagery," Special Issue, IEEE Trans. Pattern Anal. Machine Intelligence, vol. PAMI-2, no. 6, pp. 493-588.
Agin, G. J. [1972]. "Representation and Description of Curved Objects," Memo AIM-173, Artificial Intelligence Laboratory, Stanford University, Palo Alto, Calif.
Albus, J. S. [1975]. "A New Approach to Manipulator Control: The Cerebellar Model Articulation Controller," Trans. ASME, J. Dynamic Systems, Measurement and Control, pp. 220-227.
Ambler, A. P., et al. [1975]. "A Versatile System for Computer Controlled Assembly," Artificial Intelligence, vol. 6, no. 2, pp. 129-156.
Ambler, A. P., and Popplestone, R. J. [1975]. "Inferring the Positions of Bodies from Specified Spatial Relationships," Artificial Intelligence, vol. 6, no. 2, pp. 157-174.
Armstrong, W. M. [1979]. "Recursive Solution to the Equations of Motion of an N-link Manipulator," Proc. 5th World Congr., Theory of Machines, Mechanisms, vol. 2, pp. 1343-1346.
Astrom, K. J., and Eykhoff, P. [1971]. "System Identification-A Survey," Automatica, vol. 7, pp. 123-162.
Baer, A., Eastman, C., and Henrion, M. [1979]. "Geometric Modelling: A Survey," Computer Aided Design, vol. 11, no. 5, pp. 253-272.
Bajcsy, R., and Lieberman, L. [1976]. "Texture Gradient as a Depth Cue," Comput. Graph. Image Proc., vol. 5, no. 1, pp. 52-67.
Ballard, D. H. [1981]. "Generalizing the Hough Transform to Detect Arbitrary Shapes," Pattern Recog., vol. 13, no. 2, pp. 111-122.
Ballard, D. H., and Brown, C. M. [1982]. Computer Vision, Prentice-Hall, Englewood Cliffs, N.J.
Barnard, S. T., and Fischler, M. A. [1982]. "Computational Stereo," Computing Surveys, vol. 14, no. 4, pp. 553-572.
Barr, A., Cohen, P., and Feigenbaum, E. A. [1981-82]. The Handbook of Artificial Intelligence, vols. 1, 2, and 3, William Kaufmann, Inc., Los Altos, Calif.
Barrow, H. G., and Tenenbaum, J. M. [1977]. "Experiments in Model Driven Scene Segmentation," Artificial Intelligence, vol. 8, no. 3, pp. 241-274.
Barrow, H. G., and Tenenbaum, J. M. [1981]. "Interpreting Line Drawings as Three-Dimensional Surfaces," Artificial Intelligence, vol. 17, pp. 76-116.
Bejczy, A. K. [1974]. "Robot Arm Dynamics and Control," Technical Memo 33-669, Jet Propulsion Laboratory, Pasadena, Calif.
Bejczy, A. K. [1979]. "Dynamic Models and Control Equations for Manipulators," Technical Memo 715-19, Jet Propulsion Laboratory, Pasadena, Calif.
Bejczy, A. K. [1980]. "Sensors, Controls, and Man-Machine Interface for Advanced Teleoperation," Science, vol. 208, pp. 1327-1335.
Binford, T. O. [1979]. "The AL Language for Intelligent Robots," in Proc. IRIA Sem. Languages and Methods of Programming Industrial Robots (Rocquencourt, France), pp. 73-87.
Blum, H. [1967]. "A Transformation for Extracting New Descriptors of Shape," in Models for the Perception of Speech and Visual Form (W. Wathen-Dunn, ed.), MIT Press, Cambridge, Mass.
Bobrow, J. E., Dubowsky, S., and Gibson, J. S. [1983]. "On the Optimal Control of Robot Manipulators with Actuator Constraints," Proc. 1983 American Control Conf., San Francisco, Calif., pp. 782-787.
Bolles, R., and Paul, R. [1973]. "An Experimental System for Computer Controlled Mechanical Assembly," Stanford Artificial Intelligence Laboratory Memo AIM-220, Stanford University, Palo Alto, Calif.
Bonner, S., and Shin, K. G. [1982]. "A Comparative Study of Robot Languages," IEEE Computer, vol. 15, no. 12, pp. 82-96.
Brady, J. M. (ed.) [1981]. Computer Vision, North-Holland Publishing Co., Amsterdam.
Brady, J. M., et al. (eds.) [1982]. Robot Motion: Planning and Control, MIT Press, Cambridge, Mass.
Bribiesca, E. [1981]. "Arithmetic Operations Among Shapes Using Shape Numbers," Pattern Recog., vol. 13, no. 2, pp. 123-138.
Bribiesca, E., and Guzman, A. [1980]. "How to Describe Pure Form and How to Measure Differences in Shape Using Shape Numbers," Pattern Recog., vol. 12, no. 2, pp. 101-112.
Brice, C., and Fennema, C. [1970]. "Scene Analysis Using Regions," Artificial Intelligence, vol. 1, no. 3, pp. 205-226.
Brooks, R. A. [1981]. "Symbolic Reasoning Among 3-D Models and 2-D Images," Artificial Intelligence, vol. 17, pp. 285-348.
Brooks, R. A. [1983a]. "Solving the Find-Path Problem by Good Representation of Free Space," IEEE Trans. Systems, Man, Cybern., vol. SMC-13, pp. 190-197.
Brooks, R. A. [1983b]. "Planning Collision-Free Motion for Pick-and-Place Operations," Intl. J. Robotics Res., vol. 2, no. 4, pp. 19-44.
Brooks, R. A., and Lozano-Perez, T. [1983]. "A Subdivision Algorithm in Configuration Space for Find-Path with Rotation," Proc. Intl. Joint Conf. Artificial Intelligence (Karlsruhe, W. Germany), pp. 799-808.
Bryson, A. E., and Ho, Y. C. [1975]. Applied Optimal Control, John Wiley, New York.
Bejczy, A. K., and Lee, S. [1983]. "Robot Arm Dynamic Model Reduction for Control," Proc. 22nd IEEE Conf. on Decision and Control, San Antonio, Tex., pp. 1466-1476.
Bejczy, A. K., and Paul, R. P. [1981]. "Simplified Robot Arm Dynamics for Control," Proc. 20th IEEE Conf.
Decision and Control, San Diego, Calif., pp. 261-262. Bellman, R. [1970]. Introduction to Matrix Analysis, 2d edition, McGraw-Hill, New York. Beni, G., et al. [1983]. "Dynamic Sensing for Robots: An Analysis and Implementation," Intl. J. Robotics Res., vol. 2, no. 2, pp. 51-61. ooh `ti ti" `'' .mob moo -`= ban cam o;. °O, CU- yob '17 °°° °_? J. Graphics. M. Pattern Anal. '-' 'S7 via 0." Tech. M. [1981a]." Comput. "The Application of Logic Programming to the Generation of Plans for Robots. Reading. pp. pp 289-297. "MAPLE: A High Level Language for Research." Ph." J. Cybern. 2. and Comacho. 77. pp. C. no. R. pp. 14.. 222-238. G.. R. pp. Mach. J. 47-53 (in French).Y. Canali. H. Y. vol. 97-103. Mech. [1980]. "Description and Displacement Analysis of Mechanisms Based on 2 x 2 Dual Matrices.. 4. 248-270. J." Pattern Recog. Mechanical Engineering. Introduction to Robotics: Mechanics and Control. A. B.D. Chase. Mich. B. McGraw-Hill. C. [1983]. Chase. and Blasgen. A. vol. M. 1. App. vol. Karnopp. JARS: JPL Autonomous Robot System. Chang. O. J. H. vol. Craig. and Hartenberg. K.D. S. The Computer. Yorktown Heights.. 2. IBM T.. R.. 1166-1169. Graphics Image Proc." Comput. and Espiau. Pasadena. "Adaptive Control Strategies for Computer-Controlled Manipulators. W. ASME. 5. and Control Engineering Program. Dissertation. SMC-13.. Chaudhuri.. [1981b]. 4." Ph. Series B. vol. T. M. Engr.. Derksen." Trans. W. J. [1956]. Note 65. R. et al.. vol. Northwestern U. "An Ultrasonic Proximity Sensor Operating in Air. Denavit... A. Stanford Research Institute. 137-146. 21. 6. ASME. New York. °U' . H. Thesis. Denavit." in Mechanical Assembly. "Vector Analysis of Linkages. "A Note on Fast Algorithms for Spatial Domain Techniques in Image Processing. Davis." IEEE Trans.. no. pp. Kurtz. [1975]." Nouvel Automatisme. 25-39. pp. "Use of Optical Proximity Sensors in Robotics. W. J. Addison-Wesley. Robotics and Teleoperators Group. Menlo Park. SENSING." Sensors and Actuators. [1971]. pp. J. Industry. C." IEEE Trans. C. J. [1973].. Snyder.. vol. R. 95-123 (in Italian)." Trans." Fisica e Tecnologia. Catros. Davis. New York. 93.. and Image Proc. 388-410. 25.. S. Calif. Series B. A. "Thinning Algorithms: A Critique and a New Methodology. "The Detection of Unresolved Targets Using the Hough Transform. pp. Cowart. D. Watson Research Center. pp.-.. pp. and Ruedger. J. E. 85. F. PAMI-5. Symbolic Logic and Mechanical Theorem Provono 'LS CAD ing. [1975]. and Bayazitoglu. [1984]. A. C. J. [1983]. vol. Cross. Davies. vol. N. 1. N. in' '-y try . 317-327. Chow. J." Comput. 14. 215-221.. t17 Cep :C1 'c7 . "Markov Random Field Texture Models. "A Kinematic Notation for Lower-Pair Mechanisms Based on Matrices. Darringer. "A Survey of Edge Detection Techniques. "Development and Application of a Generalized d'Alembert Force for Multifreedom Mechanical Systems. Dynamics of Mechanical and Electromechanical Systems. T. Intell. and Pridmore-Brown. vol. Academic Press. and Biomed. Rulifson. no.. no. F. Systems. J. Mass. 53-63. [1986].Y 't7 Q"N m°` °'a arc °0° tip '"j C!4 . [1983]. "The QA4 Language Applied to Robot Planning. and Lee. iii Crandall. K. M. J.°. J. Man. vol. E. S. vol. R. [1983]. et al. [1963]. AND INTELLIGENCE Canali. "Automatic Boundary Detection of the Left Ventricle from Cineangiograms.. A. [1968]. B. [1972]. vol. Information. [1955]. L. Y. Ann Arbor.558 ROBOTICS: CONTROL. Chung. Craig. [1972]. IBM Research Report RC 5606. [1981]. pp. E. University of Michigan.. r^a Ill. Evanston. and Waldinger. Industry. 2.. VISION.°.. and Plummer. 
"Sensori di Prossimita Elettronici.. Calif. Jr. C. and Kaneko. E. pp.. D. and Jain. Engr. Jet Propulsion Laboratory. C. Vision." Robotica. L.. P. [1980]. Res. J. no. Falb. [1973]. [1983]. "Decoupling in the Design and Synthesis of Multivariable Control Systems. Drake. Joint Conf.. J. Machine Intelligence. Fahlman. S. "A Note on Two Problems in Connection with Graphs. S.. 1. "Programming Vision and Robotics Systems with RAIL. Pattern Classification and Scene Analysis. vol. vol. N. Analysis of Mechanisms and Robot Manipulators. P. R. Duda. 0.-r o<° C". R. (ed. G. J. vol. W. J.BIBLIOGRAPHY 559 Dijkstra. J. and Hart. pp. 189-208. 3. . American Elsevier. AMACOM. Fikes. et al. Mass. 5 (B. pp. and Hart. H. R." Artificial Intelligence. no." IEEE Trans. 1962 Spring Joint Computer Conf. 269-271. vol.. Duda. Reston. D. 6. no. E. P. Dodd. 000 o°- 000 C. Nitzan.. New York. E. a Programming Language for Automation. vol. Duffy. Fink. [1972]." Proc. no. R. G. ASME. pp. pp. ACM. "Planning and Robots. vol. 259-271. Measurement and Control. 392-406. :`' vii ail !s" (1) a0. "A Foundation for a Unified Theory of Analysis of Spatial Mechanisms. PAMI-1. Fairchild [1983].. Finkel. 5.. pp.) [1957]. pp." Proc. Ernst. no. Doran. and Rooney. F. "Learning and Executing Generalized Robot Plans.. Fikes. 519-532. E. [1974]. and DesForges. R.97. and Rossol. New York. vol. 13-30. [1979]. Duda.. 101. Engr. "Use of Range and Reflectance Data to Find Planar Surface Regions. pp. Duffy. D.. and Rosenfeld. no. C. 1. J. G. [1970]. no. vol. f1' r'. no. 1. New York. Dyer. [1979]. G. A. Dubowsky. pp. Engelberger. J. E." Trans. New York. pp. Franklin.. 1159-1164. R.°° C/] 'LS "'+ . "MH-1. R. [1967]. CCD Imaging Catalog." Intl. Palo Alto. Pattern Anal. pp.. "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving.2. "An Overview of AL." Artificial Intelligence. 1. 4. vol. Robotics Res.. S. Calif. "Using Compliance in Lieu of Sensory Feedback for Automatic Assembly.. 193-200.. Series B. 3. Artificial Intelligence. do' £S. Draper Laboratory..6 . 39-51.) [1979]. pp.. 88-89. 0. pp." Comm. H." Artificial Intelligence. "Thinning Algorithms for Grayscale Pictures. "Use of the Hough Transformation to Detect Lines and Curves in Pictures. [1979]. Reston Publishing Co. McGraw-Hill. E." IEEE Trans. R. and Wolovich. N. New York." Report T-657. ASME. Dynamic Systems. no. and Barrett. Industry. 758-765. P. New York. and Vanderbrug. "The Application of Model Referenced Adaptive Control to Robotic Manipulators. Michie. Featherstone. [1982]. L. J." Numerische Mathematik. 1. yam. A. [1983]. Fairchild Corp. P. Hart.). pp. pp. Plenum. "A Planning System for Robot Construction Tasks. Television Engineering Handbook. eds." IEEE Trans. 2. 0. 't7 C/1 :°. Cambridge. P. vol. [1975]. R. vol. A. 1-49. S." Trans. Automatic Control. [1972]. J. [1977]." SME Robots VI. C. Robotics and Automated Manufacturing. 251-288. E. San Francisco. J. 12. "The Calculation of Robot Dynamics Using Articulated-Body Inertia. 4th Intl.. Dorf. 651-655. W. John Wiley.. E. and Nilsson. Machine Intell. Computer Vision and Sensor-Based Robots. [1980]. pp." in Machine Intelligence. Vol. Calif. [1980]. 11-15. L. J. John Wiley. T. Va. [1975]. [1962]. J. A Computer-Oriented Mechanical Hand. no. Pattern Anal. 4. Robotics in Practice. [1959]. E. 3/4. Meltzer and D.. (eds. C. J.. PAMI-1. and Nilsson. [1971]. D. 15. R. vol. New York. A. 10. pp. (ed. R. and Safabakhsh. 2. [1980]. [1984]. H. pp. 2 vols. [1968]. 12. Fu. 345-385. Introduction to Fourier Optics. 
"An Approach to Nonlinear Feedback Control with Application to Robotics... Fu. "Learning Control Systems and Intelligent Control Systems: An Intersection of Artificial Intelligence and Automatic Control. Elementary Matrices. "Computer Vision. B. vol. New York.. F. England. Reading." Pattern Recog. T. System Theory. C. "Determining the Minimum-Area Encasing Rectangle for an Arbitrary Closed Curve." Pattern Recog. no. "Fast Nonlinear Control with Arbitrary Pole Placement for Industrial Robots and Manipulators. R. K. vol. Gonzalez. W. Gonzalez. J. [1983]. Cybern.. 1. [1950]. vol. C." Proc. PAMI-5. 18. R. 65-78. G. P. J. [1977]. and Thomason. C.. -00 p. N. C. [1974]. S. E.° . Freeman. "How Vision Systems See. 12th chi (~D co) -+' . Mass. (T. Classical Mechanics. 90-93. Systems. pp." IEEE Trans. Galey.. Fu. 6. no. Surveys." McGraw-Hill Yearbook of Science and Technology." Computer. 70-72. AC-16. "On the Encoding of Arbitrary Geometric Configurations. Gonzalez. [1978]. Freund. J. Young and K." Mechanism and Machine Theory. 57-97. [1971]. C.) [1982b]. "Computer Processing of Line Drawings. Gonzalez.).-y cc' `°n ago t^7 a°" 00o A. and Shapira. ed. Duncan. 191-213. J. R. 12. and Hsia." Machine Design. C. A. 13. Gonzalez. Reading. Computers.. Chelsea. Fu. Mass." IEEE Trans. S. H. 7. and Collan." Intl. pp. pp. 17-32. R. 111-122. vol. [1959].. and Ha.. pp. Gilbert. The Theory of Matrices. Automatic Control. 2. 1.). and Mui. . C." Comm..S. 4. pp. pp. SENSING. R. 1..560 ROBOTICS: CONTROL. pp.. Academic Press. Digital Image Processing. no. "Computer Vision Techniques for Industrial Applications and Robot Control. Pattern Anal. Addison-Wesley. Freeman. Englewood Cliffs.. Addison-Wesley. and Wintz. [1982a]. Syntactic Pattern Recognition and Applications.. 409-413. K. [1975]. Tou. "A Syntax-Directed Program that Performs a Three-Dimensional Perceptual Task. pp. Cambridge. Gonzalez. [1986]. vol. P.3 `dpi off cacd. pp. pp. Goodman." in Computer-Based Automation (J. R. ACM. C.. C. 55. R. New York. "A System for Programming and Controlling Sensor-Based Robot Manipulators.. New York.J. C. Machine Intell. A. H. Reading. VISION. Prentice-Hall. no.. Mass. H. Gips. no. 91-96. Goldstein. Geschke." in Handbook of Pattern Recognition and Image Processing. 15. Gonzalez. S.. McGraw-Hill. vol." IEEE Trans. G. [1960]. R. [1974]. J. AND INTELLIGENCE Frazer. vol. no. Robotics Res. Man. R. . "A Survey of Robotics Sensor Technology. T. Cambridge University Press. "Industrial Computer Vision. 1. W. no. McGraw-Hill. 6.. K. vol. pp. 189-200. Gantmacher. Addison-Wesley. 3-16. EC-10.t7 . 1-7. R. [1985b]. 101-109. [1977]. Plenum. Freeman. New York." Comput. 12. Elec. [1981]. I.. no. E. [1982]. pp. "A Survey of Image Segmentation. "Digital Image Enhacement and Restoration. Fu. . vol. R. vol. [1982]. pp. S. K. [1983]. no. 15. 260-268. SMC-14. and Fittes. Gonzalez. [1985a]... Syntactic Pattern Recognition: An Introduction. "Gray-Level Transformations for Interactive Image Enhancement. [1961]. B.. vol. M. eds." IEEE Trans. Special Issue of Computer on Robotics and Automation. vol. K.'3 Annual Southeastern Symp. " IEEE Trans.. [1976]. [1984]. vol. Q. pp. L. Hollerbach. 'T- °>u a. Robotics Res. R. 321-333. vol. M. and Taylor. et al. "Automated Tactile Sensing. Washington. Cybern. Hartenberg. "Application of Theorem Proving to Problem Solving. Joint Conf.. New York. and Dinstein. vol. [1980]. Herrick." in Automatic Interpretation and Classification of Images (A. 11. "A Recursive Lagrangian Formulation of Manipulator Dynamics 'ti 1. J. 
(Z7 yin urnU 7C' t.. C. Pattern Recog. et al. Man. Kinematic Synthesis of Linkages.. Harmon. ed. Cybern. 855-860. vol. Mass. 1268-1271.' ANC C]. and Camana. ASME J. vol. Holland. L. (eds. 100-107. "Dynamic Scaling of Manipulator Trajectories. Haralick. [1978]." IEEE Trans. Grossman." Intl. Hollerbach. "An Adaptive Control Scheme for Mechanical Manipulators-Compensation of Nonlinearity and Decoupling Control.C. -e^ hip c'' °w° . [1982]. H." IEEE Trans. R.. M. Measurement. C. SMC-14. "Statistical and Structural Approaches to Texture." U. Hart. Grasseli." Proc. "A High-Resolution Imaging Touch Sensor. vol. S.. "Methods and Means for Recognizing Complex Patterns. 2. vol. pp. D. T. 2. Reston. Rossol.." IEEE Trans." IEEE Trans. no. [1976]. Measurement and Control. Guzman. 2. [1977]. D. 33-44. J. [1962]. D. pp. Harris.. K.- ti. Rossol. . Horowitz. Hough." IEEE Trans... Cybern. J. "Interactive Generation of Object Models with a Manipulator.654. Television Theory and Servicing.BIBLIOGRAPHY 561 Green. B. R. L." Trans. Joint Conf. Academic Press. 1.. Shanmugan. 2d Intl. [1983]. SMC-3. Systems. McGraw-Hill. W. pp. Dynamic Systems. "Constant Variance Enhancement-A Digital Processing Technique.. 2d ed. pp. Systems." Trans. D. Man.. no.. [1968]. 610-621." Intl. Patent 3.. . Robotics Res. P. "A Formal Basis for the Heuristic Determination of Minimum-Cost Paths. J. pp." Proc. 2. V. 4th Intl. Robotics Res. D. and Lenat. Nilsson. 45-60. Addison-Wesley.. Hayer-Roth. Gruver.. vol. SMC-4. 102-106. [1964].° Hillis. Systems. [1984]. 46-50.. M. pp. Man. and Pavlidis..). Man. Plenum. 16.. A." Intl. E. 6.. "Picture Segmentation by a Directed Split-andMerge Procedure." Proc.069. N. G. L. vol. 7S' . no. D.) [1983]. P. Reading. "Decomposition of a Visual Scene into Three-Dimensional Bodies. 'L] '=1 Va. Horowitz. Artificial Intelligence. W. pp. R.. D. Optics. and Ward. Hackwood. Systems. Haralick. "Understanding Image Intensities. to appear June 1986." Appl. pp. "Textural Features for Image Classification. S. P. J. "Industrial Robot Programming Languages: A Comparative Evaluation. R.. 667-679. Systems. S. Joint Conf. [1973]." in Computer Vision and Sensor-Based Robots (G. J. no. no. I. 1st Intl. 9. Man. 1. SMC-8. eds. vol. and a Comparative Study of Dynamics Formulation Complexity. R. no. Building Expert Systems. 424-433. Horn.T. and Denavit. N.).. Hemami. [1980]. J. 3-32. 106. "CONSIGHT-I: A Vision-Controlled Robot System for Transferring Parts from Belt Conveyors. [1969]. J. 730-736. SMC-10. [1982]. Automatic Control. A. pp. "Nonlinear Feedback in Simple Locomotion Systems." Artificial Intelligence. S. [1977]. Dyn. P. 4. moo C1. C. and Tomizuka. Reston Publishers. S. Systems. A). Waterman. pp.. R. R. Cybern.. B. New York. Pattern Recog. "A Torque-Sensitive Tactile Array for Robotics. [1979]. 201-231. Cybern. W. M. H.. Dodd and L. and Raphael. ASME J. R. and Control. M. [1969].'a zoo`s C3. C. pp.. [1974]. vol. pp. vol. [1979]. New York... no. AC-19. 8. pp. pp." J. K. [1963]. [1976]. B. Speech. 739-747. C. vol.. 030 sive Screw Displacements. Ketcham. 99-118. 5. Koivo. Klinger. [1982]. D. Fu. 179-187. AND INTELLIGENCE Hu. [1981]. Machine Intell. Mass. N.r -T. "A Laser Time-of-Flight Range Scanner for Robotic Vision.. ed. Kane. 14. Itkis. ASSP-27. no. 68-105. PAMI-5. "A Fast Two-Dimensional Median Filtering Algorithm. G. K. "The Use of Kane's Dynamical Equations in Robotics. Soc. "The Near-Minimum-Time Control of Open-Loop Articulated Kinematic Chains. John Wiley.. vol. J. no. 241-245. [1962]. Systems. D. 5. no. 
R.-122-139. no. O. Jarvis. Jarvis. J. General Motors Research Laboratories. 889-894. no. M. H. pp. A. and Yao. and Horn. pp. T. and Soni.." Report GMR-4247. no. L'Ecole Nationale Superieure de l'Aeronautique et de 1'Espace. Johnston. pp.. "Dynamic Scene Analysis Using Pixel-Based Processes. A. SENSING.. PAMI-5. Dynamic Systems. New York.. E. "The Development of Equations of Motion of Single-Arm Robots. Ishizuka. 7. K.." J. 3. Pattern Anal... Man. [1978]. H. vol. D. Jain. Mich. B..:D C/1 `. [1970]. Measurement and Control. J. "Photographic Image Enhancement by Superposition of Multiple Images. R. R. J. vol. pp. Inform. An Introduction. K. and Howell. Machine Intell. 12. 259-266. and Levinson. [1975]. vol. [1976]. vol. 141-184. S." Docteur Ingenieur Thesis. 4. H. Huang. S. pp. vol. H. T. "A Rule-Based Damage Assessment System for Existing Structures." MIT Artificial Intelligence Laboratory Memo 308. "Visual Pattern Recognition by Moment Invariants. E." Computer. [1971].. Khatib. Kirk. "Commande Dynamique dans 1'Espace Operationnel des Robots Manic." SM Archives. Theory." Trans. Signal Proc. Kahn. L. A. A. "Segmentation of Frame Sequences Obtained by a Moving Observer. pp. R.. and Roth. [1980].562 ROBOTICS: CONTROL. R. vol. 45. pp. Cybern. J. 3. [1983b]. Katushi. 120-125. 12-18.. PhotoOptical Instrum. pp." Artificial Intelligence. pp. ASME. and Tang. Prentice-Hall.' aye Abp °u°'° ash . [1976]. 2. vol. series B. R. 8. "A Perspective on Range Finding Techniques for Computer Vision. Robotics Res." IEEE Trans. U. vol. [1974]. [1979]. [1981]. A. Huston. SMC-12. Toulouse. vol. vol. J. 8. and Kelly. pp. "Kinematic Analysis of Spatial Mechanisms via Succes>v' C17 . Warren. Yang. Cambridge. Engr. Sci. [1972]." IEEE Trans. Rustagi. New York. A. T. "Experiments in Picture Representation Using Regular Decomposition. T." Intl.. ASME." IEEE Trans. Huston. [1983]. E. Kohli. A. Y. I. Optimal Control Theory. for Industry. M. Passerello. Control Systems of Variable Structure. VISION. [1983]. pp. and Guo.. vol. pp. MIT. "Real-Time Image Enhancement Techniques. Appl. R. A. R.. S. Academic Press." IEEE Trans.. France.)." IEEE Trans. "Proximity Sensor Technology for Manipulator End Effectors. and Harlow. W. F. pp. Engr. Englewood Cliffs. 164-172. 95-108. Acoust.J.." Proc. 93. 2. 3-21. T. [1983a]. "Force Feedback in Precise Assembly Tasks. G. "Dynamics of MultirigidBody Systems." Mechanism and Machine Theory.. [1977]. Engr. R. Trans.... "Patterns and Search Statistics. 2. pp." Phot. L. Mech. Klinger. Pattern Anal. Kohler. 8. A. vol. 505-512.. "Numerical Shape from Shading and Occluding Boundaries. [1983]. 17. 74. P. 1. "Adaptive Linear Controller for Robotic Manipula- BCD ." Comput.4 c°° CO) °o- 1110 K^A pulateurs en Presence d'Obstacles. vol. M. -ox Jain. vol. Inoue. `t7 00q pp. no. [1983]. D. 303-339. 13-18. P. pp." in Optimizing Methods in Statistics (J. Graphics Image Proc. M. and Control. Lee. C. "A Geometric Approach to Deriving Position/Force Trajectory in Fine Motion. "On the Control of Robot Manipulators. and Chung. Jet Propulsion Laboratory. Landau. [1982]. "Adaptive Perturbation Control with Feedforward Compensation for Robot Manipulators. C. Dynamics. G.. 1982 Pattern Recognition and Image Processing Conf. R. [1985]. G. K. "An Approach of Adaptive Control for Robot Manipulators." IEEE Trans. S. no.. 21't7 bye New York." Proc. Automatic Control. G. Lee. C. [1984]. J. M. R. Lee. S. vol. JAI Press. and Nigam. D. S. pp. University of Michigan. Pasadena. and Huang. pp.. Lee. [1986]. ed. AC-28. Man. Lee. 3. 
G. 12. C." Proc.BIBLIOGRAPHY 563 tors. Chung." Proc. 1. "Autonomous Manipulation on a Robot: Summary of Manipulator Software Functions. [1984].D. tea` Lee. Mudge. 4. C. S. Man. "Efficient Parallel Algorithm for Robot Inverse Dynamics Computation. 't7 °°." IEEE Trans. vol. no.4 C. p0' ass 63. pp. [1985]. no. M. San Diego. G. J. St. AC-29. [1983]..5 . B. Aerospace and Electronic Systems." Trans..).. tea' a". C. and Chang. Lee. CO) ti" pry (4W . and Lee.. Computer Information and Control Engineering Program. S. "Robot Arm Kinematics. 15. Lee. Cybern. C. 6. J. Systems. Lewis. West Lafayette. [1979].. Tex. A. 2. Mudge. IEEE Computer Press. and Turney. 634-640. R. 1. 62-80. "An Adaptive Control Strategy for Mechanical Manipulators. no. T. J. S... Saridis. vol. [1983]. vol. vol. H.. SMC-16. 1454-1459. San Antonio. [1986a]. H. Conn. C. Purdue University. P. pp. S. J.. "Resolved Motion Adaptive Control for Mechanical Manipulators. Marcel Dekker. pp. School of Electrical Engineering. may: Decision and Control. no.. B. S. S. Nev. 6th IFAC Conf. J. 2d ed. Mich. "Robot Arm Kinematics and Dynamics. Automatic Control. S. S. 9. Lee.. R. Lee.. pp. G. 58-83. G. L. P." in Advances in Automation and Robotics: Theory and Applications (G. 106. Dynamic Systems. C. S. Washington D.. H." Simulation.. yam' (7a d'Alembert Equations of Motion for Mechanical Manipulators.-' ANN y. B. 27th Soc." IEEE Trans. G. C. 1205-1210. [1985].. and Ziegler. S. Silver Spring. Dissertation. C. "'. and Chang. and Fu.. [1974]. Cybern. G. T." Computer. M. no. J. C. Lee. pp. S. 44. L. pp. Lee. 1. C. "Development of the Generalized 40. M. Calif. C. N. G. 162-171. G. N. Lee.. pp. no. Robotics and Automation. pp. R. Y. "A Maximum Piplined CORDIC Architecture for Robot Inverse Kinematics Computation.. no. [1986b]. S. 1985 IEEE Intl. C. "A Geometric Approach in Solving the Inverse Kinematics of PUMA Robots." IEEE Trans.... Ind. 442. `r1 o`h alb 0C7 Tie a°. Lee. vol." Technical Memo 33-679. Las Vegas. ASME. C. Systems. 127-136. pp. 3. pp. pp. B. Measurement and Control. Mo. M. vol. pp. vol. Adaptive Control-The Model Reference Approach. Md. "Elimination of Redundant Operations for a Fast Sobel Operator. [1982].. 22nd Conf. Turney. AES-20. Tutorial on Robotics. Photooptical Instrumentation Engineers." Technical Report TR-EE-86-5. "On the Control of Mechanical Manipulators. Calif. G. 27-57. 695-706. Estimation and Parameter Identification. D. and Chung." Ph. [1984].. vol. H. 134-142." Proc.. [1982]. C. vol. 837-840. SMC-13. C." IEEE Trans. Robotic Systems. [1985]. G. "An Approach to Motion Planning and Motion Control of Two Robots in a Common Workspace. Lee. Louis. and Lee. Conf. Lee. Lee. [1984].. G." Proc. C. S. Ann Arbor. "Hierarchical Control Structure Using A°° Special Purpose Processors for the Control of Robot Arms. Lee. Gonzalez. 242-245. [1983]. Chung.C. G. N." J. no. 691-697. 2. pp. A Configuration Space Approach. D. J. R. Man. Y. "Optimum Path Planning for Mechanical Manipulators. Luh.. [1983a]. J. Graph. pp. P." IEEE Trans. J. [1984].. and Paul. T. Jet Propulsion Laboratory. Luh. Marck. 102. J." Proc. "Analysis of the Computed Torque Drive Method and Comparit/1 Controlled c/] Cwt icy 't7 -`7 son with Conventional Position Servo for a Computer-Controlled Manipulator. "Automatic Generation of Dynamic Equations for Mechanical Manipulators." IEEE Trans. 1066-1073. vol.. vol. Y.. Y.. 4. [1983b]. 468-474. 3. (9) Cwt [Z7 . K. R. "Scheduling of Parallel Computation for a Computer Mechanical Manipulator. pp. and Cybern. pp. no. 298-316. vol. 
pp. Luh. vol. S. S. eds. 71. S. S. vol. Joint Conf." IEEE Trans.i t3. Y.. AC-28. vol. A. 3.. 21. 22. T. and Lin. SMC-13. Cybern. 7. Y. "Approximate Joint Trajectories for Control of Industrial Robots Along Cartesian Path. 120.. "°. vol. S. [1980a]. [1973]. R. vol. S. Dynamic Systems. and Paul. Intl. 560-570. T.. no.`t O. and C. no. T.. pp. A. AND INTELLIGENCE Sao Lewis. Y.. Measurements and Control. pp. C.. no." IEEE Trans.. Stanford University." Proc. [1980b]. Brady. 2. S. 321-333. J. J. vol." IEEE Trans. SENSING.)." Comm. Walker. Joint Conf. Lin. Joint Automatic Control Conf. W. Cybern. pp. Lozano-Perez. M.. SMC-11. Lozano-Perez. Y. Systems.. Res. vol. no. "Robot Programming. coat CAN `<b `'' pp. Palo Alto.." Proc. Charlottesville. no. [1982]. J. Va. P. S. S. J. AC-25. Walker.. Y. MIT Press." Trans. pp. C. [1977]. 142-151. 108-120. Luh. Y. and Lin.. vol. 214-234. [1978]. M. Lieberman' L. vol. Systems.. 3rd Intl. Mass. "An Algorithm for Planning Collision-Free Paths Among Polyhedral Obstacles. Y.. 133-153." IEEE Trans. J. "A Syntactic Approach to Texture Analysis. Cambridge. "AUTOPASS: An Automatic Programming System for Computer Controlled Mechanical Assembly. [1981]. [198la]." Technical Memo 33-601. [1983]. P. Artificial Intelligence. Luh. S.T7 C/) tar woo 0o= (y' tea. "Algorithms for Complex Tactile Information Processing." Proc. IEEE. Image Proc. Man.. ASME. S. "An Anatomy of Industrial Robots and their Controls. SMC-14. [1981b]. vol. no. vol. "Conventional Controller Design for Industrial Robots-A Tutorial. S. . Automatic Control. V. K. [1982]. Automatic Control. I. R. pp.. Lu. and Lin. TA-2D. [1981]. Lin. pp. S. S. no." IBM J. pp. Luh. [1983a]. W. SMC-12. 10. Cybern. pp. and Luh. M. "Visual Information Processing: The Structure and Creation of Visual «~+ . and Wesley. "On-Line Computational Scheme for Mechanical Manipulators. Measurement and Control." Trans. VISION. Artificial Intelligence. [1979].. "Task Planning. Systems. (M. Man.564 ROBOTICS: CONTROL. [1983b]. [1973]. Calif. C. Pasadena." in Robot Motion: Planning and Control. 7. Man. AC-28. "Resolved-Acceleration Control of Mechanical Manipulators. "Spatial Planning. C-32. 773-774. 3. ASME. pp.' 'LS "Planning Considerations for a Roving Robot with Arm. 691-698. \. Lozano-Perez. S. Automatic Control. 'V' f>. Comput. Lozano-Perez. Dynamic Systems. "Automatic Planning of Manipulator Transfer Movements. no. [1979]. Luh. C. S. 10. et al. Devel. J. and Wesley. 12. 69-76. A. (IQ Mart. 444-450. Luh. Chang. and Fu. Markiewicz.. R." IEEE Trans. M. pp. Lozano-Perez." Comput." IEEE Trans. Calif. 821-841. no. J. 303-330. 3. T. and Bejczy. "Formulation and Optimization of Cubic Polynomial Joint Trajectories for Industrial Robots. ACM. A. B. Systems. no. J. "Real-Time Adaptive Contrast Enhancement. STAN-CS-81-889 CSD. 1108-1126. "Kinematics of Major Robot Linkages. pp. 0000 . vol. Neuman.. [1977]. vol. J. D. 329-338. PAMI-3. no. Devices. D. M. [1984]. J. J. [1976]." IEEE Trans. 705-710. "ARM: An Algebraic Robot Dynamic Modeling Program. S." IEEE Trans. Marr. O. Atlanta. 6. Principles of Interactive Computer Graphics. 62-74. K.fl . R. [1982]. L. Newman. [1979]. "Discrete Dynamic Robot Modelling. no. J. or. Nevins. no. pp. vol.. H. Symp. Pattern Anal... Martelli. 25. 409F"` C%] ti. Joint Conf. pp. et al. 2. AL User's Manual. New York. and Goldman." EDN. 8. Eyes. "Automatic Visual Inspection. H. pp. and Ears. 13. Mason. pp.. vol. D. M. 29-39. Calif. 169-182." Artificial Intelligence. 
"An Application of Heuristic Search Methods to Edge and Contour Detection. 2. R. [1974-1976]. W." Proc. Cybern. J. "Industrial Robots: Getting Smarter All The Time. R. pp. Martelli. 2. and Tourassis. pp. and Sproull. no. pp. 7. Machine Intell.. Myers." Comput. vol. Cambridge. Stanford University. Mundy. B. ACM." Sci. vol." IEEE ti' Nagel.BIBLIOGRAPHY 565 Representations.. V. D. E. 6. R. [1982]. vol. no.." Computer. T. pp. V. A.. 3. C. 1861-1978. [1974]. 3d ed. vol. Milenkovic. 85-95.. pp. and Whitney. pp. et al.. SMC-11. 2. Whitney. [1968]. M. Nevins. [1983]." Computer. no. and Interface Electronics. 63-85. "Edge Detection Using Heuristic Search Methods. Ill." IEEE Trans. "Sensors and Transducers. -+N z z z z z z . 13th Intl. 32-38. McDermott.. Mujtaba. and Binford. D. J. "Compliance and Force Control for Computer Controlled Manipulator. and Wise. Calif. A. Mass. A." Pattern Recog. [1977]. Goldman. pp. E. 77-86. Engr. 21-31. Japan. Conf. Robotics. vol. Nau. 26." Comput. 1977 Conf. [1978]. Man. Tokyo. P." NSF Project Reports 1 to 4.. no.. 19. and Neuman.. "Exploratory Research in Industrial Modular Assembly.a' 000 y"¢ w. pp. C. "The Theory of Measurement of a Silhouette Description for Image Processing and Recognition." Proc. Ga. pp. 73-83. pp. "Industry Begins to Use Visual Pattern Recognition. Chicago. pp. "Special Issue on Solid-State Sensors. N. [1985]. Naccache.fl . P. "SPTA: A Proposed Algorithm for Thinning Binary Patterns. SMC-14.. vol.. [1981]. s"° ono Mujtaba.. [1983]." Comm. S. P.. no. 14.. [1981]. "The AL Robot Programming Language. M. 16-31 to 16-47. Industrial Robots. "A Computer with Hands. San Francisco. J. "Computer-Controlled Assembly. [1984].." Instruments and Control Systems. Man. Narendra.) [19791. W. [1972]. Nevatia. "Expert Computer Systems. no. AFIPS Proceedings. Nahim. R. Intl. pp.. McGraw-Hill. L. J. vol. [1980]. 6. vol. Cybern.. S. R. Systems." 1968 Fall Joint Computer Conf. 122-137. T. L. 418. C. pp. "Representation of Moving Rigid Objects Based on Visual Observations. Merritt. "Description and Recognition of Curved Objects. and Binford. 55. no. 6. [1982]." Computer. pp. Intl. F. Draper Laboratory. Graphics Image Proc." Proc. Actuators. and Fitch. 655-661. D. C. R. Decision and Control. Elect. D.. and Huang. [1981]. vol. 1. [1980]." Proc. Freeman.." IEEE Trans. Palo Alto. McCarthy. 0"r °«' coo tit °'" -'A \1. Murray. Vision. (eds. M. J. T. Meindl. vol. Systems. Am. 5. [1981]. Artificial Intelligence. `o: f3. 77-98. and Shinghal. pp. vol. 238. 418-432. 16.. 8. S. P. 103-113. '_' "vi °°' . Cybern." Proc. SMC-9. SENSING.l C1. 6. "Robot Control: Issues and Insight. Oldroyd. Man. no. Man. vol.. Orin.. P. Joint Automatic Control Conference. Intl. 11. vol. G. D... Robot Manipulator: Mathematics. Problem-Solving Methods in Artificial Intelligence. E. Society of Manufacturing Engineers. 3. pp. 313-333. C7" Paul. 179-189. R. Systems... pp." Comput. Calif. SMC-7. "Efficient Computation of the Jacobian for Robot Manipulators. Trajectory Calculation. New York. D.J. Nilsson. R. [1979]. [1976].. Tex. 193-204. Man. A. Palo Alto. 1977. . New York. Englewood Cliffs.. 170-179." Proc.j '1y CO) . McGhee. pp. no. T. Neuman.t Calif. vol. "Compliance and Control" Proc. Systems. Noble. Orin. The SRI Robot Programming System (RPS): An Executive Summary. and Lee. . D.. K. no. E.. 72. [1979]. 702-711. Cambridge.. J. B." IEEE Trans. P. 449-455. [1981]. Paul. and Servoing of a Computer Controlled Arm.. P. "A Multiprocessor-Based Controller for the Control of Mechanical Manipulators. 
Programming and Control. and Tourassis. P. V. [1968]. Robotics Res. G. Dearborn. [1980]. New Haven. VISION. pp. Also appears in The Industrial Robot.fl 0. 173-182. P. R.>." IEEE Trans. S. no. W. [1985].=: sib l77 ()' fin . Calif. R. T. [1983]. Stanford University. 4.. pp. Nilsson. Random Variables. W. Computer Science Department.. `0° 0. [1971]. pp. Md. Park. E. Prentice-Hall. P. [1972]. T. and Fu. [1981]. pp. A.. vol. vol.b" s:. :. Palo Alto.. "'. pp. Principles of Artificial Intelligence. Shimano. Houston. Robotics and Automation. Yale University." IEEE Trans. [1969]. Ga. Paul. R. "Pipelined Approach to Inverse Plant Plus Jacobian Control of Robot Manipulators. Algorithms for Graphics and Image Processing. [1977]. . Orin. Robotics. Paul. B. Biosci. M.. New York. [1979]."Shape Discrimination Using Fourier Descriptors. "Manipulator Cartesian Path Control. Applied Linear Algebra. [1976]. N. [1981]. Ind. D. [1982]." presented at SHARE 56." Technical Paper MR76-615. Pavlidis. D. "The Kinematics of Manipulators under Computer Control." Math. S. 107-130. vol. no. Man. Paul. McGrawHill. West Lafayette. N. Ohlander. Papoulis. McGraw-Hill. E. 10-17. C. "Kinematic and Kinetic Analysis of Open-Chain Linkages Utilizing Newton-Euler Methods." IEEE J.. N. pp.566 ROBOTICS: CONTROL. and Reddy. Tioga Pub. Mich. E.. "MCL: An APT Approach to Robotic Manufacturing.N. Rockville. Springer-Verlag. L. Vukobratovic.. 4. Nigam. 2. Price. 3. Atlanta. SMC-11.. MIT ten" Press. Stanford Artificial Intelligence Laboratory. P.'C "WAVE: A Model-Based Language for Manipulator Control." Artificial Intelligence Project Memo No. Cybern. J. SRI International. Pavlidis. no. J. C. Conf. "Modeling. [1977]. K. B. Computer Science Press. Third Yale Workshop on Applications of Adaptive Systems Theory. and Hartoch. 169-175.. and Shimano. R. W. Mass. 1981. Purdue University. 8.. AND INTELLIGENCE Trans. R. G. R. [1981]. March 9-13. pp. 66-75. . "Kinematic Control Equations for Simple Manipulators. R." Intl. ooh Pieper. Palo Alto. pp. vol.1 Paul.. Probability. [1984]. B. and Mayer. RA-1.. [1965]. . 4. Graphics Image Proc. 43.. and Schrader. and Stochastic Processes. [1984]. SMC-15. Structural Pattern Recognition. "Picture Segmentation Using a Recursive Region Splitting Method. Vol. Systems. Persoon. R. Cybern. Conn." Memo AIM-177. Systems. vol. Menlo Park. Cybern. D. Calif. " Proc. no. H. pp. Robotics Res. 4. Theory and Practice of Robots and Manipulators. M. W-. Roth. eds. Mass. 159-168. 152-159. New York. N. [1983]. WW` obi c/) . H. Ambler.. vol. 2. 9-24. [1984]. Roberts. Elsevier. and Craig. and Control. Saridis. Automatic Control. C. 14. N.. Popplestone. 10. vol. 1. and Lee. PAMI-2. Mach." Intl. J. Rosenfeld. [1981]. J. A. Raibert.041 4. McGraw-Hill. 2. Pro ova New York.. Rocher. H. and Bellos. 45-60. Artificial Intelligence. Reddy. pp. [1973].. pp. no. 5. Academic Press. J. C..p y." IEEE Trans. Sadjadi. [1963]. SMC-9. 3. A.. "An Approximation Theory of Optimal Control for Trainable Manipulators. "The Ridge-Seeking Method for Obtaining the Skeleton of Digital Images. Requicha. and Kak. Plenum. Dodd and L. 669-673. Robotics Res. 280-293. I. and Snyder. Matrix Methods in Engineering. pp." Comput.. "Solid Modeling: A Historical Summary and Contemporary Assessment. L. P. A.y °-. E. 12. Congr.. U.way ins CAD Q." 1st CISM-IFTMM Symp. [1975]. A. aye yam .. P0." Artificial Intelligence. vol. [1983]. SMC-14. "An Interpreter for a Language Describing Assemblies." IEEE Comput.. Dynamic System. L. J. and Scheinman. Rosen. E. pp. A. Pipes.). 
[1980].. pp. J. no. R. 126-133.. J. G." Computer. I." in Optical and Electro-Optical Information Processing.moo Pieper. J." in Computer "t7 CD. Joint Conf. "Extraction of Line Structures from Photographs of Curved Object. vol. pp. vol.. and Voelcker. no. Sacerdoti. . (J. Intell. Requicha.. P. oho CS. and Hon. vol. Prentice-Hall.-. New York. Methods. 524-528. too .). Englewood Cliffs. and Hall. no. 547-557." IEEE Trans. no. 4. A." Computing Surveys. Man. B. "Application of the OneDimensional Fourier Transform for Tracking Moving Objects in Noisy Environments. B.. Cambridge. ASME. Tippett et al. vol. 4. "Three-Dimensional Moment Invariants. W. 79-107. "Design and Implementation of a VLSI Tactile Sensing Computer.^ t3. pp. "Machine Perception of Three-Dimensional Solids. pp. 131-137. pp. R.." IEEE Trans.. Systems. eds. S. New York. 3. N. A.BIBLIOGRAPHY 567 Rajala. no. [1982]... G.. [1965]. N.w. Raibert. W. vol. Graphics. "Use of Sensors in Programmable Automation. Image Proc. pp. Artificial Intelligence. L. D. no. E." IEEE Trans. R." Proc.. [1979]. 102. Cs] CAD . [1979]. vol. . and Siy. R. [1982]. F.. L. Graphics Image Proc. MIT Press. 127-136.. vol. Rich. "Representation for Rigid Solids: Theory.ti o0' . J. 12-23. E. Popplestone.. A. vol. no. "Hybrid Position/Force Control of Manipulators." Industrial Robot. 3. Ambler... 81-103. and Tanner. pp. no. P. Rastegar. J. [1977]. and Bellos. Digital Picture Processing. no. Cybern." Intl. AC-28. [1978]. E. [1980]. 2d ed. A Structure for Plans and Behavior.00 a-. vol. Salari. pp. "The Kinematics of Manipulators under Computer Control. M. Cybern. Theory of Machines and Mechanisms. and Keissling. 4th Intl.. 12. II Intl. [1975].. D. pp. Pattern Anal. V. 2." Comput.J. D. is. 5. Rossol. 2.. and Roth. "Intelligent Robotic Control. 437-464. pp. Riddle.E E-^ 'gym . D. Q0' 0. Measurement. 2. pp. [1969]. [1977]. [1982]. E. and Nitzan. 93-113. 3. vol. F. G. C. G. Saridis. Vision and Sensor-Based Robots (G. A. vol. "Methods for Analyzing Three-Dimensional Scenes." Trans. [1980]. Vision. "On the Design of Computer Controlled Manipulators. [1983]. vol. "RAPT.. 1. Obi 60. P. A. Systems Man. A. pp.. A. [1983]. B. Graphics and Applications. 2. Ramer. Requicha. "Computer Architectures for Vision. 3-18. "Towards a Theory of Geometric Tolerancing. G. and Systems. S. A Language for Describing Assemblies. A. and Yang. N. Computer and Information Sciences. Kinematics and Mechanisms Design. Prentice-Hall. Computer Science Department. Y. Mass. Silver. Austin. G. "A Structural Approach to Robot Programming and Teaching. 736-751. 8. [1982].. PAMI-3. R. G. R. Artificial Intelligence. "Nonlinear Feedback. Rept. [1972]. Joint Conf.. Chicago. Computer Software Applications Conf.. "Versatile Hall-Effect Devices Handle Movement-Control Tasks. Decision and Control." EDN.-. pp. 1. Scheinman. "On Force Sensing Information and its Use in Controlling Manipulators. and Dreussi. Man. Systems. and Arimoto. ASME. pp. J. vol. vol.. "A 3-D Model-Driven System for the Recognition of Abdominal Anatomy from CT Scans. C. N. 878-883. pp. Dodd and L. Biosci. Palo Alto. W. "VAL: A Versatile Robot Programming and Control System. L." Math. [1979]. E.°u dad °C) .' . 3rd Intl. 1984 Conf. and Stephanou. H. SENSING.in Robot Arm Control. Symon. D. C. [1976]. . no. on Industrial Robots. K.." IEEE Trans... University of Texas.." Proc. . 25. R. "Modelled Exploration by Robot. vol. 260-268." IEEE Trans. Shirai. "Design of a Computer Manipulator. "A Hierarchical Approach to the Control of a Prosthetic Arm. 
Intell. E.. S. H. Saridis. Shimano. Takegaki. 4." Proc. Cambridge. eds. M. [1972].. MIT Press. "On the Equivalence of the Lagrangian and Newton-Euler Dynamics for Manipulators. Plenum... Systems. SMC-7. Shimano. Sussman. 585-591. [1985].. [1977]. Shani. W. Paul." Proc. pp." Intl. E. [1980]. 430. Dynamic Systems. and Fu. pp. and Lobbia. Stepanenko Y. Comput. Proc. vol. G. J... pp. E. and Vukobratovic. and Roth.C. M. SMC-11.. S. pp. o00 Manipulators." IEEE Trans. "Minimum-Perimeter Polygons of Digitized Silhouettes. Cybern. Siklossy. et al. no. [1980]..). VISION.^. and Hansen. pp. Measurement and Control. no. [1972].. G. [1981]. pp. 111. N. Tex. "Parameter Identification and Control of Linear Discrete-Time Systems. 1. AC-17..' -. no. New York. 119-126. [1978]." Artificial Intelligence Laboratory Memo AIM-92. Chazin. S. "The Synthesis of Manipulator Control Programs from Task-Level `Z. Cybern. L. no. M. Robotics Res. Tarn. [1979]. Spencer. [1984]. Siklossy. 28. and Berg. 5th Intl. [1969]. Calif. [1979]. Industrial Robots: Computer Interfacing and Control. vol. "Three-Dimensional Computer Vision. no. "A New Feedback Method for Dynamic Control of tea. "An Application of Learning to Robotic Planning." Proc. J.. 3rd Intl.. D. 6.." in Computer Vision and Sensor-Based Robots (G. vol. Symp. 6. [1981]. 102.J. Y. "Dynamics of Articulated Open-Chain Active Mechanisms. John Wiley. B. J." AI Memo 203.tea New York. C-21." IEEE Trans. E. K. Tangwongsan. vol. R." Tech." Las Vegas." Intl. [1976]. W. U. Automatic Control. Joint Conf.. C. H. N. T. L. J. [1981]. [1979].568 ROBOTICS: CONTROL. Mechanics. T. "An Efficient Robot Planner which Generates its Own Procedures. O"' o06 -. B. 2." IEEE Trans. 3. AND INTELLIGENCE Saridis. Snyder. B. 4. 274-289. pp. 9th Intl. T." Trans. J. 151-156. . pp.. 60-70. Mach. J. W. vol. aid (7. vol. Suh. no. 1. K. 52-60. V. J. no. 119-125. and Radcliffe. "Micro-Planner Reference Manual.. 3. Rossol.. Sklansky. Washington. and Charniak. P. 676-678. 303-333. Winograd. Englewood Cliffs. [1971]. pp. Mass. J. pp.. B. Nev. J. Addison-Wesley. pp. H. Pattern Anal. [1973]. Taylor. pp. 423. D. vol. Pattern Recog. Stanford University. z`` 'F. "A Simple Contour Matching Algorithm. Takase.. 407-420. Reading. 137-170. Sze. R. [1970]. Man. Cambridge. K. "An Iterative Method for the Displacement Analysis of Spatial Mechanisms.. "On the Dynamic Analysis of Spatial Linkages using 4 x 4 Matrices. I. J." IEEE Trans. pp. Danbury. J. L. 205-211. "Lower-Level Estimation and Interpretation of Visual Motion. "Generating Semantic Descriptions from Drawings of Scenes with Shadows... N. University of Michigan. vol.. F. pp. Taylor. H. Computer-Based Automation. Ann Arbor. "AML: A Manufacturing Language. [1980]. 309-314. D.. and Tsuji. I. M. Plenum. J. . D." Automatica. [1974]. Mich. [1982]. 23." Proc.). Conn. AC-7. pp. S.. vol. 16-21.. AC-22. 3-10. Cybern.. 1980. Evanston. T..." in Applied Computation Theory (R. 8.." 1BMJ. and Meyer. Pattern Recogniton Principles. Appl. Walker. 14. R. J. 4. Mass. [1959].. H. Automatic Control. [1977]. ed. Shirai.. New York. [1980]. [1979]." IEEE Trans. J. vol. Uicker. Series E. Wallace.. Dissertation. Mech. S. T.. and Gonzalez. Webb. Mudge. 2d ed.. and Hartenberg. 104. Mich. R. :r1 'CS f3. ECE Department. [1964]. 10. ASME. J. and Tornheim. L. 424-436. Systems. [1981]. vol. 3. 20-28. Stanford University. T. Mudge. Uicker. no." Ph. RSD-TR-4-82. 10. G. [1979]. Calif. D. Res. "Analysis of Three-Dimensional Movements Using Fourier Descriptors. 
"Connection Between Formulations of Robot Arm Dynamics with Applications to Simulation and Control. 6. pp. Vector Spaces and Matrices. A. pp. J. S. Tou. "Equivalence of Two Formulations for Robot Arm Dynamics." Intl. Devel.. no. no.. vol. "Variable Structure Systems with Sliding Mode: A Survey. Denavit. Tomovic. "Visually Interpreting the Motion of Objects in =°." Report AIM-282. D. try coo °G" `-' . I. T.. J. Reading." IEEE Trans. R. [1980]. Tou. "An Adaptive Artificial Hand. "Parallel Local Operations for a New Distance Transformation of a Line Pattern and Their Applications. 19-41.. H. vol.. no. vol. Turney. [1965]. Machine Intell. C. W. Pattern Anal." SEL Report 142. and Orin. Y. 1737-1752.D. pp. J. C.. J. 47. N. SMC-9.. MIT. "Automata Theoretical Approach to Visual Information Processing. Utkin. T. 3. [1963].. New C. pp. and Lee. S. Robotics Res. [1982]. 2. 1." Computer.`' ??. no.. G. Yeh. and Barnard... N. ASME. Taylor. and Boni. vol. C. . the University of Michigan." Trans. Turney. and Stokic.. Summers. J. M.J. N. no. L. pp. Systems. Ill. Version 11. J. P. M. Addison-Wesley. 628-643. R. I. York. B. and Fukumura." IRE Trans. PAMI-4.) [1985]. and Mitchell. Vukobratovic. pp. J. PAMI-2. T. [1962]. R. and Aggarwal. Man. (ed. vol. J.. [1981]. [1979].BIBLIOGRAPHY 569 Specifications. Pattern Anal. T. (1) Thrall. T. 31. pp.. pp. Inc. Automatic Control. 16. Jr. J. Intell.. E. Mass. User's Guide to VAL." IEEE Trans. `oh UDC `"' E°' 0°i (1) acs e0. Unimation.y 'mod FIi A. D. G. W. pp. Waltz. [1972]. John Wiley. M. R. [1983]. "Description of Texture by a Structural Analysis. dissertation. Waltz. V. [1976]. Artificial Intelligence Laboratory. J. 212-222. Mach. Thompson. S. Toriwaki. Ann Arbor. Northwestern University. Englewood Cliffs. Palo Alto. O.. and Lee." Trans. no. Unger. Measurement and Control. 183-191. Artificial Intelligence Lab. IRE. Tomita. P. "Contribution to the Decoupled Control of Large-Scale Mechanical Systems. "Planning and Execution of Straight Line Manipulator Trajectories. vol." Ph. R." CRIM Technical Report No. vol. Prentice-Hall. . S. Kato. [1982]. "Efficient Dynamic Computer Simulation of Robotic Mechanisms..D. a°. no. 583-588.. "Pattern Detection and Recognition. vol. pp. pp. 382-399. Graphics Image Proc. "Resolved Motion Rate Control of Manipulators and Human Prostheses. A. Pattern Anal.. Comput.. Zucker. C. D. Reading.570 ROBOTICS: CONTROL. [1981]. A. VISION. pp. et al. Res.." Comput. and Gonzalez. vol." Comput. [1983]. 7. and Roskies. M. Weiss. E. IEEE. "State Space Models of Remote Manipulation Tasks. pp. Man-Machine Systems. Whitney." Comput. D. ASME. vol. (ed. "A Three-Dimensional Edge Operator. E.. R. "Application of Dual Number Quaternian Algebra to the Analysis of Spatial Mechanisms. Series B. E. New Jersey. Mech." Trans. Graphics Image Proc. Wolfe. T. 1." Proc. Res. 5.. Devel. J. T. [1976]. [1969].C. vol. S. vol. D. no. White. no. Systems. "Fourier Descriptors for Plane Closed Curves. 643-654.. vol. 879-888. J. S. L [1979]... "Real-Time Digital Image Enhancement.. 14. Intl. Engr. A Practical Guide to Designing Expert Systerns. 27. 303-309. "The Mathematics of Coordinated Control of Prosthetic Arms and Manipulators. 3. no. 47-53. pp. C. "Fast Median Filter Implementation. 154-160. S. vol. Yang. Engr. Man.. and Grossman. S." Trans. pp. woo Urn . J. Comput. [1980]. Artificial Intelligence. J. Wise. Whitney.. Man. vol. Zucker." IEEE Trans. v°g 0.». Z. J. no. H. W. 101-109. 152-157. [1972]." . "Resolved Motion Force Control of Robot Manipulator. 42-48. 4. 400-411. 
"Controller Design for a Manipulator Using Theory of Variable Structure Systems.. C. pp.. Mach. Rowman and Allanheld. pp. Yang. A. Whitney. J. R." Trans." `J° 3. [1978].. no. "Kinematic Analysis of Spatial Mechanisms . [1969a]. SMC-12. pp. Young.2 ". Appl. vol. C.) [1982]. Intell. vol. [1964]. vol. Elect. Photo-Optical Inst. "A Geometric Modeling System for Automated Mechanical Assembly. Winston. 495-508. ASME. vol. 'F' 'i7 'L7' by Means of Screw Coordinates." IEEE Trans. W. K. Addison-Wesley. D... R. Industry. 152-157. 61-73. 69. "Image Thresholding for Optical Character Recognition and Other Applications Requiring Character Image Extraction. G.>p ate (/7 T. Will. Graphics Image Proc. Cybern. C.. PAMI-3.!1 °w° day . A. SMC-8. 64-74. 195-210. Zahn. Woods. P. series E. 266-275. 93.. vol. P.. Weska. vol. Artificial Intelligence.' ion a. [1969b]. "Displacement Analysis of Spatial Five-link Mechanisms Using 3 x 3 Matrices with Dual-Number Elements. pp.. pp. Dynamic Systems. A. pp. Wesley." IEEE Trans. 2d ed. 2. K. "Special Issue on Solid-State Sensors. vol. D. [1971]. pp. vol.. and Interface Electronics. and Rohrer. and Mannos. D. Soc. "Region Growing: Childhood and Adolescence. vol. Industry. R. pp. G. MMS-10. no. R. Engr. no." Computer. C-24. 3. no. H. no. "A System for Extracting ThreeDimensional Measurements from a Stereo Pair of TV Cameras. 3.o vii W)0 °`W 5C7 ate: °") mom IEEE Trans. R. no. Systems. 259-265. Mass. 324-331. pp. R. ASME. pp. 24.. "An Experimental System for Computer Controlled Mechanical Assembly. pp. 269-281. J. and Paul. vol. vol. 7. D. 29." Proc. and Freudenstein. AND INTELLIGENCE Space. 91. 2.. and Freudenstein. T. Actuators." IEEE Trans. 122. 1. no." IEEE Trans. Measurement and Control. and Kulikowski. K. E. D. M. 40-49. [1975]. "A Survey of Threshold Selection Techniques. Cybern. no.. [1972]. S. 9. and Cunningham. Yuan. [1982]. Yakimovsky. pp. ASME. P. SENSING. 31. 207. 8. and Hummel. [1979]." Trans. Joint Conf." IBM J.. R. 1. Devel. [1978]. vol. Wu.. M." IEEE Trans. [1984]. pp. Y.. M. Washington. [1984]. 5. Devices. C-21. [1981]. J." IBM J.. pp." Proc. 298 Capacitive sensors. 300 model. 202 Centrifugal forces/torques. 363 linking. 299 line. . 323 charge-coupled device. 280 Cartesian path control. 3 space control. 28 homogeneous translation matrix. 14 Boolean expressions. 396 description. 298 television. 94 't7 . 42 solution. 18 C-frame. 36 Basic homogeneous rotation matrix.. 12 Armature-controlled dc motor. 42 Area. 14. 470 C-surface. 454 Approach vector of hand. 341 Boundary. 301 calibration. 206 Artificial skin. 202. 83 terms. 298.'3 'O' C B Base coordinates. 339 Blocks world. pip . 406 Arm (see also Robot arm) configurations. 188 c. 184. 256 Adjacency. 475 Body-attached coordinate frame. 93. 297 vidicon. 286 Automation. 363 Bounded deviation joint path. 175 robot. 470 Camera area.ti '. 329 Adjoint of a matrix. w". 434 Autoregressive model.-. 358 smoothing.INDEX A Acceleration-related terms. 541 Aggregate. 28 rectangle. 312 C3" Binary image creation. 96 (see also Dynamic coefficients of manipulators) Adaptive controls. 187 path trajectory.7 571 . 246 model-referenced. 248 resolved motion. 396 detection. 318 solid-state. 184. 244 perturbation. 61 matrix. 244 autoregressive model. 246 Axis of rotation. 402 rotation matrices. 202. 74 vision (see Vision) Configuration indicators. 'L3 "J' P7' =D< C/1 t-. 226 Convolution mask. 111 dc motor. 60 Connected component. 202. 202. 202. 46 Characteristic equation. 211 Controls adaptive. 3 D rn-. 94 theorem. 
384 Compensation feedback. . _0" 239 resolved motion adaptive. 232 resolved motion acceleration. 213. 13.572 INDEX Chain codes. 195 Cylindrical coordinates for positioning subassembly. 205 model-referenced adaptive. 539. 406 eccentricity. 48 robot.. 328 Contact sensing. 202. 395 area. 210 proportional. 202. 202. 202. 60. d'Alembert equations of motion. 1 Denavit-Hartenberg representation of linkages. 227 resolved motion. 202. 404 major axis. 212 Cofactor. 106 Correlation coefficient. 206 Damping ratio. 237 self-tuning adaptive. t-. 402 boundary. 106 forces/torques. 2 Classification of manipulators. 202. 402 CAD '-' tom. 36 representation. 223 nonlinear decoupled. 71 Degrees of freedom. 205 Computer simulation. . 396 product rule. 214 Cincinnati Milacron robot. 51 Closed-loop transfer function. 205 joint motion. 245 Decision equations. 426 Cubic polynomial joint trajectory. 73 Degenerate case. 35 Depth. 196 jerk constraint. 329 region. 396 compactness. 244 computed-torque. 49 Coriolis acceleration. . 246 variable structure. 332 C1. 256 resolved motion force.. 191 acceleration constraint. 14. 382. BCD . 541 Color. 48 spherical. 409 Coordinate frames. 35 cylindrical.. 19 Computed torque technique. 402 Euler number. 249 feedforward.. 396 chain codes. 124 principle. 196 velocity constraint. 406 Connectivity. 196 torque constraint. 406 connected region. 267 Controller positional.t `S7 s. 249 Composite homogeneous transformation matrix. 406 Fourier. 241 resolved motion rate. 244 near-minimum-time. 325 Description. 202 joint servo. 202. 202. 31 rotation matrix. 83 terms.^y Co-occurrence matrix. 406 basic rectangle. a. 219. G Eccentricity. 219. 249 Focal length. 351 element. 409 Gear ratio.". 82 cartesian coordinates. 96 centrifugal and Coriolis. 118 Error actuating signal.. 329 euclidean. 178 Dynamic coefficients of manipulators.. 416 Difference image. 96 Sao F acceleration-related. 402 signatures. 330. 400 region. 470 Exponential weighting function.. 96 gravity. 446 Frequency domain. 334 Feedback compensation. 402 moments. 356 linking. 85 d'Alembert equations of motion. 423 coordinates. 402 Edge detection. 425 PRO Equations of motion. . 29 . 418 Endpoint.. 342 Entropy. 412 Enhancement. 389 Direct kinematics. 215 centrifugal and Coriolis. 82 E False contour. 314 Force sensing. 399. . 428 Disturbances. 215 Drive transform. 459 Forgetting factor. 83 Forward kinematics. 248 cue Distance chessboard. 22 Expert systems. 215 gravity loading. 96 Dynamics of robot arm. 330. . 330 city-block. 406 Eulerian angles. 411 texture. 450 Discriminant function. ): minor axis. 30 rotation matrices. 22. 330 between shapes. 414 perimeter.: >-^ °°°°° . 363 three-dimensional. 48 number. 92 Newton-Euler. 12 Discrete word recognition. 334 Freeman chain codes.. 398 skeleton. 12 Fourier descriptors. 124 tan '17 tan Geometric interpretation homogeneous transformation matrices. 406 shape numbers. 330 definition of. 425 mixed. 407. 406 polygonal approximation. 259 Lagrange-Euler. 370 gradient. 114.INDEX 573 Description (Cont. 25 Global scaling. 353 laplacian. 249 Feedforward compensation. 406 three-dimensional.. 207 Generalized cones. 248 Forward dynamics. 210 Euclidean distance. 425 norm. 404 transform. 425 Euler angles. 334 BCD . 516 Explicit free space.. 302 Fast Fourier transform. 289. 301 intensity. 450 H Hall-effect sensors. 308 composite. 42 sliding vector of. 374 Imaging geometry. 332 Interpolation points. 342 gray level.-. 358 difference.574 INDEX Gradient definition. 83. 339. . 42 position vector of. 93 terms. 
389 digital. 357. 00 . 52 screw algebra. 313 stereo. 42 Handyman robot. 95 Gray level. 90 Intensity. 52 dual matrix. 301 thresholding. 36 coordinate frame. 450 Histogram equalization. 353 direction. 438 Graph. 369 search. 297 averaging. 335 spatial coordinates. 301 enhancement. 301 Guiding. 52 dual quaternion. 305 Image Gravity loading forces. 346 Homogeneous coordinates. 43 normal vector of. 93 tensor. 13.. 481 solution. . 4 Hard automation. 304 backlighting. 485 edge detection. 315 Homogeneous transformation matrices. 418 I Grammar. 346 local. 52 geometric approach. 54 Illumination. 30. 27 basic rotation matrix. 331 quantization. 364 magnitude. 301 smoothing. 486 0-0 0000 Ill-conditioned solution. 475. 27. 307 model. 28. 42 coordinates. 269. 388 preprocessing. 311 basic translation matrix. 297. 369 AND/OR. 365 ''L3. 83. 342 linearization. 301 element. 304 diffuse. 31. I High-level programming languages. 42 coordinate system. 84 Inverse kinematics solution. 277 Inertia forces. 312 geometric interpretation. 150 Interpretation. 28. 348 specification. 354 three-dimensional. 436. 52. 301 motion. 318 perspective. 224 Hand approach vector of. 325 transformations. acquisition. 434 Inverse dynamics.:t `C3 A.. 279 Hamiltonian function. 52 ICU G. 304 directional. 306 structured. 52 iteration. 301 sampling. 13. 307 Inductive sensors. 301 transformations. 307 Hough transform. 338 binary. 60 inverse transform. 431. 420 Line labeling. 154 motion controls. 307 translation. 308 transpose of.r r-. 7'. 273 Least-squares estimation. 85 Joint parameters. 247 Length of link. 154 Light-emitting diode. 97 equations of motion. 40 Inverse transform technique. 42 determinant. 37 velocity. 85. 20. 34 twist angle. 14. 13. 547 strobing technique. 36. 431 (see also Robot programming languages) Laplacian. 541 arm. 554 vector cross product method. 27. 85.INDEX 575 Inverse of homogeneous transformation matrix. 425. 34 of path. 82 formulation. 536 Medial axis. 536 square. 250 Link coordinate system. 54. 538 equation. 92 Language. 56 premultiply by.D "'. 336 >>. 13. 84 Lagrangian function. 176 homogeneous. 329 Lift-off position. 34 length. 411 Median filter. 238 Lagrange-Euler computational complexity. 38 Link parameters. 29 Lorentz force. 20. 536 summation. 33. Machine vision (see Vision) Manipulator control (see Controls) Manipulator jacobian.. 42 Kinematics inverse solution (see Inverse kinematics solution) Kinetic energy. 311 skew. L Lagrange multiplier. 3 axis. 544 differential translation and rotation method. 356. 279 Lower-pair joints. 54 rotation. 542 multiplication of. 535 adjoint. 89 Knot points. 544 Manipulator trajectory (see Trajectory planning) Master-slave manipulator.. 428. 420 Linearized perturbation model. 379 Laser ranging. 37 angle. 35 M K Kinematic equations for manipulators. 150 L7. 536 symmetric. 53 J Jacobian matrix (see Manipulator jacobian) Joint. t') . 536 postmultiply by. 34 Junction labeling. 34 Local scaling. 33. 537 null. 000 . 536 transformation. 289 variables. 283 . 34 distance. 4 Matching. 311 scaling. 27 inversion lemma. 429 Matrix. 30. 33 interpolated trajectories. 202 sensors. 535 unit. f3. 103 Nonlinear decoupled feedback control. decoupling theory. 384 connectivity. 4 Model-referenced adaptive control. 329 selection. 459 specification. 328 Plane fitting. 301 distance. 454 vector of hand. 328 definition.. 22. 254 Perimeter.T^ z 'S7 R+. 244 P Moments. 268. 114. 118 formulation. . 407. 228 Normal vector of hand. 202 Pattern primitive. 149 length. 417 Polaroid range sensor. . . 
328 processing. 213 `COQ CDR mo=t s. 69 matrix. 276 Polygonal approximation. 42 Positional controller. 28.Ay 7-7 `t7 1. 97 transfer function. 213. 239 sensing. 82. 427 recognition (see Recognition) Penalty functions. 16 Parameter identification. 27 Picture element (see Pixel) Pitch. 48 Pixel adjacency. 57 Obstacle avoidance. 456 Moving coordinate system. 149 Open-loop control. 42 Orthogonal transformation. 329 aggregation.576 INDEX Minotaur I robot. 329 gray level. 213 stability criteria. 388 specification. 16 Orthonormal transformation. 331 Newton's second law. 283 Orientation error. 301 intensity. 299 Physical coordinates. 42 0 OAT solution. z s. 107 N Near-minimum-time control. 329 constraint. 283 Photosite. 512 constraint. 111 Newton-Euler equations of motion. 220 performance. 313 Photodiode. 335 definition. 227 228 boo decoupled input-output relationship. 239 indicator. 399. 470 Performance criterion. 248 index. 400 Pontryagin minimum principle. 210 Optical proximity sensors. 301 neighbors of. 149.1t CAD `t7 224 Position error. 210 multijoint robots. 250 residual. 252 Path.. 223 Neighborhood averaging. 406 Perspective transformation. 476 trajectory tracking problem. 414 Motion. 114. 3. 3 Cincinnati Milacron. 3 Versatran. 91 Range sensing. 427 syntactic.O -_- Resolved motion. 453. 384 merging. 6. I dynamics. 504 manipulator. 428. 237 Revolute robot. 85. 450 AL. 3 definition of. 4 programming. 79 PUMA. .I- '-' . 387 splitting. 474 vision. 10. 471 HELP. 9. 464. 5 Robot programming languages. 203 ''° Potential energy.. 41 robot. 431 Recursive Lagrange equations. 2. 80 Unimate. 4 intelligence. 1. 203 revolute. 7. 406 growing. 489 Probing technique. 6. 12 learning. 8. 6. 41. 425 Proximity sensing. 471 AML. 6. 2 control (see Controls) dynamics. 450 matching. 4 kinematics. 41. 2. 211 Prototype. 471 AUTOPASS. 202. 429 semantic. 146 Programming languages (see Robot programming languages) Programming support. 90 Q Quantization. CD. 4 MINIMOVER. 3 Robbins-Monro stochastic approximation method. 362 Robot arm category. 3 Sigma. 387 Regular decomposition. 184 R Radius of gyration. 450 programming synthesis. 203 link coordinate transformation matrices. 8. 460 Proportional controller. 79.INDEX 577 PUMA control strategy. 468 sensing. 243 Robot (see also Robot arm) arm categories. 446 . 91 Predicate logic. 296. 6. 3 Stanford. 471 C17 was "CS Q. 241 rate control. 276 Pseudo-inertia matrix. 256 force control. 118 Reference image. 232 acceleration control. 82 Handyman robot. 82 Newton-Euler equations of motion. 302 Quaternion. 3 cylindrical. 433 structural methods. 239 adaptive control. 79. 425. 424 correlation. 82 historical development. 425 discrete word. 3 task planning. 268 Recognition. 425. 393 Region description. 453. 12 Minotaur I robot. 426 decision-theoretic. 3 cartesian. 267 spherical.. 474 kinematics. 264 Semantics. 276 range. 355. 311 about an arbitrary axis. 28. 267 capacitive. 273 torque. 472 object-oriented. 428 (see also Description. 466 A?. 454 task. 8. 8. 154 Shape numbers. 411 Sliding mode. 374 Self-tuning adaptive control. 384. Recognition) Simplified dynamic model. 433 Sensing.): JARS. 221 Scaling. 472 Robotic manipulators (see Manipulators) Robotic vision (see Vision) Roll. 458 Sensor calibration. 362 wrist. 14. 289 Hall effect. 279 inductive. 289 Sensing and flow of control. 472 robot-oriented. 226 Sliding vector of hand. 280 contact. 28 Rotational kinetic energy. 267 structure light. 301 domain. 3 Signatures. 291 Set-down position. 267 optical.578 INDEX 22 geometric interpretation. 
269. 14. 451. 267 laser. 268 ultrasonic. 363 region-oriented. 276. 273 noncontact. 267 coo con Sensing (Cont. 8. 288 Smoothing. 335 Sobel operators. 398 Similarity. 331 processing. 388 edge linking. 296. 29 Segmentation. 268 slip. 269 Sigma robot. 284 triangulation. 47 motion. 269 time of flight. 126 . 48 Rotating coordinate systems. 267 external state. 42 Slip sensing. 282 vision. C!] C/] C/] 1:4 .): force. 471 MAPLE. 462 VAL. 25 homogeneous. 20 with Euler angles representations.-- 0 . >_v . 402. 134 Skeleton. 103 Rotation. 289 touch. 283 proximity.fl "C7 Obi . 472 RAIL. 29 local. S Sampling.5- arc Robot programming languages (Cont. 277 internal state. 302 frequency. 331 Specification end-effector. 311 Rotation matrices. 456 position. 22. 28 global. 296. 451 PAL. 472 task-level. 363 based on motion. 361 Spatial coordinates. 428 Sheet of light.p. 311 factor. 384 three-dimensional. 471 MCL. 451 RPL. 357. 416 thresholding. 213. 436 T Task planning. 16 orthonormal. 526 derivatives of vector functions. 332 Texture. 205 Transformation orthogonal. 523 cartesian coordinate systems. 214 Structured light. 268 Twist angle of link. 80 State diagram. 215 Stereo imaging. 361 H-3 Thresholding. 524 CAD -00w0 °w` Television cameras. 16 Transformation matrices. 282 Undamped natural frequency. 524 multiplication. 425. CC] C/] 0O0 Tree grammar. 358. 126 . 3 Spur. 474 Steady-state errors. 533 integration of vector functions. 152 transition. 374 multilevel. 308 Translational kinetic energy. 167 4-3-4 trajectory. 358. 374 Time-optimal control problem. 325 String. 428 Triangulation. 382 dynamic. 450 dam' quad. 307 Transition between path segments. 413 Stanford robot. 224 (OD U Ultrasonic sensors. 427 grammar. 427 resonant frequency. 27. 89 V VAL commands._. 358. 154 Trajectory segment. 34 Two point boundary value problem. 429 recognition. 165 cartesian path. 299 Template. 3 Unimation PUMA 600 arm (see Robot arm) Unsharp masking. 226 Vector. 377 optimum. 357. 223 Torque sensing. 406 22i cam) 1C3 . 435 State space. 13. 374 local.) Trajectory planning. 7. 149 3-5-3 trajectory. 289 Touch sensors. 49 robot. 269 Switching surfaces. 534 linear vector space. 284 Trace operation. 522 addition. 245 Unimate robot.-. 225 CV. 'C7 if) 000 . 156 5-cubic trajectory. 377 global. 156. 431 matching. 434 Structural pattern recognition. 374 based on boundary. 297 field. 181 Translation. 6. 181 Transfer function of a single joint.INDEX 579 Spherical coordinates for positioning subassembly. 156. 379 based on several variables. 474 specification. 299 frame. 175 joint-interpolated. 374. 203 Variable structure control. 466 Teach and play. 357. 387 similarity. 276. p'' AA. 463 world states. 22. 308 modeling.+ vii W Window. 296 higher-level. 5 Via points. 222 Voxel. 332 World coordinates. 296 sensors. 527 subtraction. 523 Versatran robot.580 INDEX Vector (Cont. 416 acs s. 298 Vision definition of. 3. 297 steps in.): product of. 304 low-level. 457 Vidicon. 362 illumination for. 289 Y Yaw. 464 Wrist sensor. 48 . 296 Voltage-torque conversion.