20/20 Machine Vision

More shops are taking a closer look at vision-guided robotics.

More shops are taking a closer look at vision-guided robotics.

By James Benes, associate editor

Robotic systems with vision and force sensors "feel" part-feature locations.

Machine vision with geometric-pattern-matching algorithms stores the locations and relationships of geometric part features.

Vision-guided robots are good for shops doing batch-type operations.

3D-vision systems locate parts and recognize their tilt to guide a robot for correct part presentation to a machine.

Vision-guided robots are catching the eyes of small-to-medium-size shops seeking factory-floor automation for the first time. These systems reduce costs by introducing parts to machining cells via simple conveyors that eliminate expensive fixturing, which is especially beneficial to shops doing batch-type operations. Instead of purchasing new fixtures, frequently changing them, and maintaining them, a shop with a vision system can run a specific quantity of one part type and then change over to another.

According to Ed Roney, development manager of vision products and applications at Fanuc Robotics America, Rochester Hills, Mich., there are two types of vision systems: pixel based and geometry based. Pixel-based vision is common in the electronics and semiconductor industries for precisely placing parts, which are generally small, flat, and consistent in shape from part to part. With such parts, lighting across the vision field is fairly constant.

However, pixel-based systems are sensitive to nonuniform changes in lighting and have problems with shadows created by relatively large, bulky parts, typical of those processed in the metalworking industries. For these applications, the trend is geometry-based vision systems.

While pixel-based systems store images of the parts to be located, geometry-based vision technology, such as Fanuc Robotics' geometric-pattern-matching algorithms, stores the locations and relationships of geometric part features — edges, circles, corners, and squares. These systems use this model to recognize parts by their features, making them less sensitive to the lighting changes typical of metalworking environments.
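A toy sketch of why feature geometry tolerates lighting shifts that raw-pixel comparison does not. This is not Fanuc's algorithm; both scoring functions below are invented purely for illustration:

```python
import math

def pixel_score(template, image):
    """Pixel-based match: sum of absolute intensity differences
    (lower is better). Any lighting change shifts the score."""
    return sum(abs(t - i) for t, i in zip(template, image))

def geometry_score(model_pts, scene_pts):
    """Geometry-based match: compare the pairwise distances between
    feature points (corners, hole centers). A lighting change does
    not move the features, so the score is unaffected."""
    def dists(pts):
        return sorted(math.dist(a, b)
                      for i, a in enumerate(pts) for b in pts[i + 1:])
    return sum(abs(a - b) for a, b in zip(dists(model_pts), dists(scene_pts)))

# The same part imaged under brighter lighting: every pixel reads 40 higher.
template = [10, 200, 10, 200]
brighter = [p + 40 for p in template]
print(pixel_score(template, brighter))   # 160: the pixel match degrades

# The corner locations are unchanged, so the geometric score stays 0.
corners = [(0, 0), (4, 0), (4, 2), (0, 2)]
print(geometry_score(corners, corners))  # 0.0
```

This is the intuition behind the oil-ring example that follows: a ring matches a gear's circular outline pixel-for-pixel, but it has none of the gear's tooth features.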

"With geometricpatternmatching algorithms and increased processing power, today's vision systems can determine more part features for greater discrimination," explains Roney. For example, in an automotive plant, gears were stacked in bins with rows separated by plastic slip sheets. Over time, previous parts would leave oil rings on the slip sheets. A simple overhead camera looking for the circular shape of the gear would sometimes misinterpret an oil ring on the sheet for a real part. A geometry-based vision system, on the other hand, looks for specific part features, such as gear teeth, to differentiate between a part and an oil ring."

Vision guidance involves a robot's six degrees of freedom — the X, Y, and Z axes and rotation about each. 2D-vision systems determine position and rotation about the Z axis to locate parts on a flat surface, such as a conveyor. But parts stacked in a bin, for example, are often tilted in more than one direction, so 3D-vision systems are needed to locate parts and recognize tilt so robots can properly grab parts and present them correctly to machines.
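A short sketch of why tilt matters. The part dimensions and tilt angle here are invented for illustration; the point is that a tilt about X or Y moves a grip point in Z, which a 2D overhead system cannot report:

```python
import math

def rot_y(deg):
    """3x3 rotation matrix about the Y axis (one of the two tilts
    a 2D overhead camera cannot measure)."""
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def apply(m, p):
    """Apply a 3x3 rotation matrix to a 3D point."""
    return tuple(sum(m[r][k] * p[k] for k in range(3)) for r in range(3))

# A grip point 100 mm from the part's center. If the part tilts 10 degrees
# about Y, the grip point drops in Z: invisible to a 2D system, which only
# reports X, Y, and rotation about Z.
grip = (100.0, 0.0, 0.0)
x, y, z = apply(rot_y(10), grip)
print(round(z, 1))  # -17.4, i.e. about 17 mm of Z error a 2D system misses
```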

General-purpose vision systems are usually more expensive than application-specific systems because they need more tools to meet the demands of unidentified applications. Regardless of who programs the software, the shop or an integrator, there are development and maintenance costs involved. With application-specific vision-guided robotic systems, on the other hand, shops simply specify application parameters, and the system comes with the appropriate tools. They do, however, have to calibrate the system's camera image to match the robot's coordinate system. Many shops do this themselves; others use integrators.
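A minimal sketch of that camera-to-robot calibration, assuming the camera axes are parallel to the robot's X/Y plane. Real cells also solve for rotation and lens distortion, usually from a calibration grid; the taught points below are invented for illustration:

```python
def calibrate(pixel_pts, robot_pts):
    """Fit robot = offset + scale * pixel, per axis, from two reference
    points. Teaching: jog the robot to touch each reference point and
    record its pixel coordinates and robot coordinates."""
    (px1, py1), (px2, py2) = pixel_pts
    (rx1, ry1), (rx2, ry2) = robot_pts
    sx = (rx2 - rx1) / (px2 - px1)   # mm per pixel, X axis
    sy = (ry2 - ry1) / (py2 - py1)   # mm per pixel, Y axis
    ox, oy = rx1 - sx * px1, ry1 - sy * py1

    def to_robot(px, py):
        """Convert a camera pixel location to robot coordinates."""
        return (ox + sx * px, oy + sy * py)
    return to_robot

# Two taught points: pixel coordinates paired with robot coordinates (mm).
to_robot = calibrate([(100, 100), (500, 300)], [(250.0, 80.0), (450.0, 180.0)])
print(to_robot(300, 200))  # (350.0, 130.0), the midpoint of the taught points
```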

"Vision is a powerful and productive tool, but shops must be careful not to expect too much," cautions Roney. "People tend to think of a vision-guided robot as we do our eyes and hands. Vision doesn't see things the way a human eye does. Humans are adaptive and can make fine adjustments between eyes and hands when reaching and picking up an object. Vision-guided robotic systems are not adaptive and have only one opportunity to locate a part and orient it correctly.

"When machine vision was first introduced, shops commonly overestimated its capabilities, which led to frequent, disappointing results. Although the technology has greatly advanced, it is still a good idea to run system trials to test a vision/robot solution to ensure expected results."

Ten years ago a vision system from a major supplier might cost $20,000. Today, one with considerable functionality costs about $5,000 to $8,000. "Because of the relatively low cost of today's vision systems, many smaller, inexperienced integrators are in the industry," notes Roney. "Vision systems and robotics are still rather sophisticated technologies, and my advice is to look for a supplier with a range of application experience and one with the ability to provide long-term support. The availability of training for marrying vision with robotics is another critical issue, particularly for shops looking to implement such systems themselves."


Gazing into his crystal ball, Roney says, "I don't envision revolutionary leaps in vision technology, and cost has probably bottomed out. However, 3D use and color should increase, and vision may be more closely integrated with robotics as a single unit, rather than as an enhancement.

"Frame grabbers — the cards that transfer data between a camera and a PC — will probably be replaced by direct communication between the two, except in extremely fast, high-volume applications requiring dozens of pictures/sec. In most manufacturing applications, a machine's cycle time dictates vision/robot speed, so highspeed performance is not required.

"Robotic systems front ended with vision and equipped with force sensors are one of the latest innovations for assembly operations. The vision system guides the robot to properly orient a part, while force sensors essentially 'feel' its feature locations for proper alignment and assembly."
