Canon machine vision systems let robots “see” in 3D


It should not come as a surprise that the most cutting-edge automation technology has been deployed in assembly and manufacturing applications. In factory and automotive production facilities where complex items are assembled, repetitive tasks and processes can be performed with greater speed and safety by machines. High labor costs are a significant obstacle for many manufacturers, as is the shortage of skilled workers. Robot-based bin-picking technologies combat these challenges and increase overall safety and productivity by turning these tasks over to smart machines.

Advances in 3D sensing, combined with detailed decision-making software, have expanded the capabilities of the robotic systems already in place. Imaging and optical giant Canon has been working to create 3D-enabled machine “vision” systems that allow robots to identify, select, and pick up a variety of parts. According to Canon Machine Vision Engineer Grant Zahorsky, these systems are crucial for the automation industry.

“Our system is really a catalyst to provide eyes for robots with which to see.”

The Canon RV-Series machine vision system uses a high-end camera lens and a high-powered LED projector to assist in robotic automation. Earlier robots could pick up parts for assembly lines, but only if those parts were pre-aligned in specific orientations the robot could recognize. Properly aligning parts before each pick can eat away at the cost and time savings the automation was designed to deliver.

Canon’s 3D machine vision system, however, can pick from a bin of randomly placed, randomly oriented, or mixed parts. The system projects patterns onto the parts and, at the same time, calculates the depth and distance to each part.

The system then uses the distance measurements to identify what parts are in the bin, matching what it sees against an existing CAD library.
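The article doesn’t describe how the CAD matching works internally; as a rough illustration of the idea, the sketch below matches a measured part against a small library by comparing bounding-box dimensions. The library entries, dimensions, and tolerance are all invented for the example; a real system would register full 3D point clouds against CAD meshes, not just bounding boxes.

```python
# Illustrative sketch only: identify a part by nearest bounding-box fit.
# All entries and values are hypothetical, not Canon's actual data.
CAD_LIBRARY = {
    "crankshaft": (480.0, 120.0, 120.0),  # nominal L x W x H in mm (made up)
    "bracket":    (60.0, 40.0, 10.0),
    "bolt_m8":    (40.0, 13.0, 13.0),
}

def identify_part(measured_dims, tolerance_mm=5.0):
    """Return the library part whose bounding box best fits the measurement."""
    best_name, best_err = None, float("inf")
    for name, nominal in CAD_LIBRARY.items():
        # Compare sorted dimensions so orientation doesn't matter.
        err = max(abs(m - n) for m, n in zip(sorted(measured_dims), sorted(nominal)))
        if err < best_err:
            best_name, best_err = name, err
    return best_name if best_err <= tolerance_mm else None

print(identify_part((39.0, 12.5, 14.0)))  # -> bolt_m8
```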

The system comprises a camera and a projector, which together create a 3D map of points that allows parts to be identified in any orientation, says Zahorsky.

“By looking at the deformation of those bars that we’re projecting onto the parts, we can infer that some things are closer to the camera or further away, and then from there we create a detailed 3D point cloud, that gives us information of where everything is in the bin.”
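The principle Zahorsky describes is structured-light triangulation: a projected stripe appears shifted in the camera image by an amount that depends on how far away the surface is. A minimal sketch of that geometry, assuming a calibrated camera/projector pair with a known baseline and focal length (the specific numbers below are invented):

```python
# Toy structured-light depth sketch; not Canon's actual algorithm.
# Assumes a pinhole camera and a projector offset by a known baseline,
# where the observed stripe shift (disparity) is inversely related to depth.

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Triangulate depth from the pixel shift of a projected stripe."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

def point_cloud(disparity_map, focal_px, baseline_mm, cx, cy):
    """Back-project each pixel with a valid disparity into a 3D point."""
    cloud = []
    for (u, v), d in disparity_map.items():
        if d <= 0:
            continue  # no stripe detected at this pixel
        z = depth_from_disparity(d, focal_px, baseline_mm)
        x = (u - cx) * z / focal_px  # standard pinhole back-projection
        y = (v - cy) * z / focal_px
        cloud.append((x, y, z))
    return cloud

# Hypothetical calibration: 1000 px focal length, 100 mm baseline.
cloud = point_cloud({(320, 240): 50.0, (400, 240): 40.0}, 1000.0, 100.0, 320, 240)
```

The resulting point cloud is the “information of where everything is in the bin” that the quote refers to.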

The addition of the projector and the use of structured light means that the system can handle less-than-ideal lighting and part conditions as well. A client that applied the technology to crankshafts was able to use a pair of cameras to accurately identify and pick up shiny, wet, or even rusty parts (depending on the step in the treatment process) without changing the software.

Another plant was concerned about changing lighting conditions within its space, so it tested the system by turning off all the factory floor lights. The system detected and recognized parts in the dark just as well as it did with the lights on. Its auto-exposure adjustment compensates for quick changes in lighting. With 2D vision systems, it was much more crucial that the lighting remain the same between each pick.
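As a rough idea of what an auto-exposure adjustment does, the toy loop below rescales exposure so the mean image brightness tracks a target before detection runs. The gains, limits, and target value are invented; Canon has not published its algorithm.

```python
# Toy auto-exposure sketch (illustrative; values are made up).
def adjust_exposure(exposure_ms, mean_brightness, target=128.0,
                    lo=0.1, hi=100.0):
    """Proportionally rescale exposure toward a target mean brightness."""
    if mean_brightness <= 0:
        return hi  # completely dark frame: open up to the maximum
    new_exposure = exposure_ms * target / mean_brightness
    return max(lo, min(hi, new_exposure))  # clamp to hardware limits

# Lights just switched off: the frame is very dark, so exposure ramps up.
e = adjust_exposure(5.0, 16.0)  # 5 * 128/16 = 40.0 ms
```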

The system can be programmed to take on up to 200 different tasks, which could mean choosing 200 different parts from different bins with different gripping tools. For each task, you can then also program in up to 256 different ways to grasp the part. But unlike some prior systems, this programming isn’t done on the robot – it is done entirely within the software.
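The task-and-grasp structure described above can be sketched as a simple software-side configuration: up to 200 pick tasks, each holding up to 256 candidate grasp poses. The class and field names here are invented for illustration; Canon’s actual configuration format is not public.

```python
# Hypothetical sketch of a software-side pick configuration.
MAX_TASKS, MAX_GRASPS = 200, 256  # limits quoted in the article

class PickTask:
    def __init__(self, part_name, gripper):
        self.part_name = part_name
        self.gripper = gripper
        self.grasps = []  # candidate (x, y, z, rx, ry, rz) approach poses

    def add_grasp(self, pose):
        if len(self.grasps) >= MAX_GRASPS:
            raise ValueError("grasp limit reached for this task")
        self.grasps.append(pose)

class VisionJob:
    def __init__(self):
        self.tasks = {}

    def add_task(self, task):
        if len(self.tasks) >= MAX_TASKS:
            raise ValueError("task limit reached")
        self.tasks[task.part_name] = task

job = VisionJob()
crank = PickTask("crankshaft", gripper="two_finger")
crank.add_grasp((0.0, 0.0, 50.0, 0.0, 180.0, 0.0))
job.add_task(crank)
```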

The primary market for the system is the automotive industry, because of the complexity of the tasks required to manufacture cars as well as the size of the industry itself. According to Zahorsky, their customers are a mix of those that are just beginning to automate and those that want to improve their current processes.

“Just like 3D printing, 3D machine vision in general is a very new technology – especially to a lot of the bigger companies – so I think there’s always a challenge of getting in on the ground floor, and starting with the companies that haven’t adopted it yet. But there’s also the companies that have been involved quite a bit in 3D machine vision in the past, and they’re looking to refine it and increase their speed, making their plant a bit more effective.”


About Author

Carla Lauter

Carla Lauter is the editor of SPAR3D.com and the SPAR 3D Newsletter. Before joining SPAR 3D, Carla spent 10 years on NASA and National Science Foundation funded projects focusing on Earth science and communication. She has worked on web-based outreach and online interactives for NASA Earth Science, including products for satellite missions measuring sea level, salinity and hyperspectral ocean color.



© Diversified Communications. All rights reserved.