Robots that see and feel

Lower costs, increased performance, and greater ease of use mean more applications for intelligent robots.

By Ed Roney
Fanuc Robotics America

Edited by James J. Benes
associate editor

A 3D vision sensor guides a Fanuc Robotics intelligent I-21i robot in picking discrete parts from a tray for assembly operations.

Vision-guided robots not only verify that the right parts will be picked up but also that they are correctly oriented.

Intelligent robots employing vision technology now have the flexibility and functionality to perform tasks once thought too difficult or too risky to automate. Advances in vision-system technology have focused on more precise part recognition and simpler programming, setup, and use. The most significant factor, however, is cost: vision-system hardware today runs under $10,000, where comparable systems 10 years ago cost $30,000 to $40,000.

Typically, PC-based vision systems are interfaced with robots, especially for electromechanical assembly. They are primarily used for part identification, discrimination, location, orientation, inspection, and measurement. Because of the tight tolerances and the need to inspect critical mating parts, vision is required for precision electronics and fiber-optic assembly.

Robots are good at repetitive tasks, but they are not adept at accommodating changing parameters involving part position, size, or drop-off locations. A vision system detects these parameters and guides the robot accordingly. The vision system also verifies that the correct parts are available to the robot.

Another significant advancement has been in vision-system software. For example, the electronics industry has long used vision systems to detect essentially flat objects. The software available 10 years ago required consistent, predictable contrast and texture. Now, robust pattern-matching algorithms tackle 3D objects involving shadows and contrast variations. These algorithms deal more with part geometry than with part image. Today, poor lighting, fuzzy images, parallax error, partial occlusion, or variations in part qualities pose much less of a problem. This has allowed vision-guided robots to handle a wider range of industrial assembly, material-handling, and load/unload applications.

Vision systems are also much easier to program. Some application-specific systems do not require programming, just setup. Programming languages and interfaces of more general-purpose vision systems are also easier to use, letting many end users rather than integrators program their systems.

Robotic/vision systems, long a mainstay of high-volume automotive welding and painting applications, are increasingly used in more general-industrial applications thanks to their greater agility. For instance, about 10% of Fanuc Robotics' general-industrial systems are equipped with vision. Five years ago, only 1% were shipped with vision capabilities. One integrator, who deals primarily with robotics for general-industrial-machining applications, reports that 30% of the systems he configures incorporate vision.

In machining or assembly applications, advances in vision software reduce or eliminate the need for gaging tools, fixturing, or other part-presentation equipment. For example, a Fanuc Robotics intelligent robot with a 3D-vision sensor and bin-picking software selects randomly oriented parts piled in bins or trays, reducing the need for manual labor, part feeders, positioning jigs, and other peripherals. Some applications require an additional, fixed, overhead 2D camera to find the rough location of all parts within its field of view and select a few good candidates for 3D-vision processing. The robot aligns the 3D sensor closely over the parts, and the 3D image-processing function calculates the location and orientation of a chosen part. The application software identifies one part type from another, sorts parts based on special features (such as top surface from bottom surface), picks up partially overlapped parts, and automatically recovers from a collision during pickup operations. Vision also determines if a mating part at the drop-off point is oriented and positioned correctly.
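The two-stage flow described above — a fixed 2D camera finding rough candidates, then a wrist-mounted 3D sensor refining one part's pose — can be sketched in pseudocode-style Python. All names here are hypothetical, the 3D refinement is stubbed with fixed values, and this is an illustration of the pipeline's structure, not Fanuc's actual software:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    x: float      # rough 2D position (mm) from the overhead camera
    y: float
    score: float  # pattern-match confidence, 0..1

def select_candidates(detections, min_score=0.8, max_picks=3):
    """Stage 1: the fixed overhead 2D camera finds rough part locations;
    keep only the strongest matches for 3D processing."""
    good = [d for d in detections if d.score >= min_score]
    good.sort(key=lambda d: d.score, reverse=True)
    return good[:max_picks]

def refine_pose_3d(candidate):
    """Stage 2: the robot aligns its 3D sensor over the candidate and
    computes full position and orientation (stubbed here with a fixed
    height and zero tilt, purely for illustration)."""
    return {"x": candidate.x, "y": candidate.y, "z": 42.0,
            "roll": 0.0, "pitch": 0.0, "yaw": 0.0}

# Simulated detections from the overhead camera
detections = [Candidate(10.0, 20.0, 0.95),
              Candidate(55.0, 12.0, 0.62),   # weak match -> rejected
              Candidate(33.0, 48.0, 0.88)]

for c in select_candidates(detections):
    pose = refine_pose_3d(c)
    print(f"pick at ({pose['x']:.0f}, {pose['y']:.0f}, {pose['z']:.0f})")
```

The point of the two stages is economy: the cheap wide-field 2D pass narrows the search so the slower, close-range 3D computation runs on only a few candidates per cycle.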

The use of vision and robotics is an important step toward agile manufacturing. Not only is the need for costly dedicated fixtures reduced, but the process is flexible and quickly accommodates frequent part changeover. Quality control is monitored without the subjectivity of an operator, and detailed reports are generated, including dimensional gaging, surface defects, and production runs.

Often, robotic systems cannot be justified without a vision component. That's because a robot may be capable of replacing the motions of an operator, but it needs vision to verify that all is well with a process.

The latest robotic innovations concern systems front-ended with vision and equipped with force sensors. These units eliminate the drudgery and inaccuracy of manual assembly. For example, a clutch assembly requires the mating of a cylindrical sprag into a component containing a series of notches. The sprag must progressively negotiate each layer of notches as it is inserted into the component. Manual assembly is tedious and hard on the hands; a typical assembler's work cycle is 40 min on, 20 min off. A vision system, on the other hand, guides the robot to properly orient the sprag for assembly. Then the robot, equipped with force sensors, essentially "feels" the location of the notches as the components are assembled. And it does this throughout a shift without downtime.
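The "feel" behavior above amounts to a force-guided search: advance the sprag until the axial force spikes at a notch edge, rotate slightly until it seats, and repeat layer by layer. A minimal Python sketch follows, with a toy simulated cell standing in for real sensors and motion commands; all names and thresholds are illustrative assumptions, not Fanuc's force-control interface:

```python
def insert_sprag(read_force, step_down, rotate, depth_target, force_limit=5.0):
    """Force-guided insertion loop: advance while axial force is low;
    when force exceeds force_limit (a notch edge), rotate a little to
    let the sprag seat, then continue to the target depth (mm)."""
    depth = 0.0
    while depth < depth_target:
        if read_force() > force_limit:
            rotate(1.0)               # small search rotation at a notch
        else:
            depth += step_down(0.5)   # advance 0.5 mm when unobstructed
    return depth

class FakeCell:
    """Toy simulation: the sprag jams at each listed notch depth until
    a rotation aligns it with the notch."""
    def __init__(self, notch_depths=(5.0, 10.0, 15.0)):
        self.depth = 0.0
        self.notches = set(notch_depths)
        self.blocked = False
    def read_force(self):
        if self.depth in self.notches:
            self.blocked = True       # notch edge resists insertion
        return 10.0 if self.blocked else 1.0
    def rotate(self, deg):
        if self.blocked:              # rotation seats the sprag
            self.notches.discard(self.depth)
            self.blocked = False
    def step_down(self, mm):
        self.depth += mm
        return mm

cell = FakeCell()
insert_sprag(cell.read_force, cell.step_down, cell.rotate, depth_target=20.0)
```

A real controller would blend force and position feedback continuously rather than alternating discrete steps, but the structure — advance, detect resistance, search, continue — is the same.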

Mr. Roney is manager for vision products & application development at Fanuc Robotics America, Rochester Hills, Mich.
