December 21, 2024
What Are the Newest Trends in Machine Vision?

What trends in cameras, software, and lighting are likely to play out in the machine vision industry over the next few years?

To find out, Vision Systems Design Editor in Chief Linda Wilson asked the experts at a mid-April press briefing and roundtable event previewing VISION, a machine vision conference and exhibition scheduled for October 8-10, 2024, in Stuttgart, Germany.

The experts at the event were:

  • Jan Hartmann, managing partner, IDS Imaging Development Systems GmbH (Obersulm, Germany)
  • Raoul Kimmelmann, managing director, RAUSCHER GmbH (Olching, Germany)
  • Hardy Mehl, board member for finance (CFO) and operations (COO), deputy chairman, BASLER AG (Ahrensburg, Germany)
  • Olaf Munkelt, managing director, MVTec Software GmbH (Munich, Germany)
  • Mark Williamson, chairman of the board of VDMA Machine Vision and consultant to STEMMER Imaging (Puchheim, Germany)

Below is an edited version of VSD’s interviews with these industry leaders.

What are the key characteristics that customers want in cameras and sensors?

Williamson: On the sensor side, it is obviously about getting the cost down and getting more resolution at a lower price. That is all Moore’s law: the smaller you can make the pixel, the less sensor area you use; therefore, you can fit more sensors on a wafer, which gets the price down.
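
As a rough illustration of Williamson's scaling argument (the wafer size, resolution, and pixel pitches below are illustrative assumptions, not figures from the interview), a quick calculation shows how shrinking the pixel pitch multiplies the number of sensor dies that fit on one wafer:

    import math

    # Illustrative assumptions only: a 300 mm wafer and a 5 MP square sensor.
    WAFER_DIAMETER_MM = 300
    PIXEL_COUNT = 5e6

    def dies_per_wafer(pixel_pitch_um):
        """Rough upper bound on square sensor dies per wafer (ignores edge loss)."""
        die_side_mm = math.sqrt(PIXEL_COUNT) * pixel_pitch_um / 1000.0  # um -> mm
        wafer_area_mm2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
        return int(wafer_area_mm2 // die_side_mm ** 2)

    for pitch in (3.45, 2.0):  # two common industrial pixel pitches, in micrometers
        print(f"{pitch} um pixels: about {dies_per_wafer(pitch)} dies per wafer")

Because die area scales with the square of the pixel pitch, halving the pitch roughly quadruples the dies per wafer, which is the cost lever Williamson is pointing at.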

Hartmann: There is a bit of a split. There are complex technologies and expensive sensors, but for a lot of applications, especially when it comes to AI image recognition, the complexity of the sensor is not that important anymore. With a cheaper sensor you may get really good results, based on AI. When you have an application where you can use AI and image quality is not super important, maybe just good enough for your algorithm, people will go with cheaper sensors.

We see a lot of people starting their development with a cheap webcam from Logitech as a proof of concept. They normally switch to an industrial camera after the first development phase. I also see the sensor developers pushing the technology to the limit: higher resolution, faster speeds, better low-light performance, HDR.
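
Hartmann's webcam proof-of-concept workflow is easy to picture in code. The sketch below is a generic illustration (the device index and the placeholder inference step are assumptions, not an IDS or Logitech API): it grabs frames from a consumer webcam with OpenCV so an AI prototype can be exercised before switching to an industrial camera.

    import cv2  # OpenCV; assumes a standard UVC webcam on device index 0

    def run_proof_of_concept(num_frames=100, model=None):
        """Grab webcam frames and hand each one to a placeholder inference step."""
        cap = cv2.VideoCapture(0)
        if not cap.isOpened():
            raise RuntimeError("No webcam found on device index 0")
        try:
            for _ in range(num_frames):
                ok, frame = cap.read()
                if not ok:
                    break
                # Placeholder for the real AI model; here we only report frame size.
                result = model(frame) if model is not None else frame.shape
                print("inference input:", result)
        finally:
            cap.release()

    if __name__ == "__main__":
        run_proof_of_concept()

Swapping in an industrial camera later changes only the acquisition code; the inference loop stays the same, which is what makes the cheap-webcam phase useful.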

What role will deep learning play in machine vision? Where will it have the most impact?

Munkelt: Classification and anomaly detection. If you have a classification problem where you want to say, “OK, this object belongs to class A, or class B, or class C,” deep learning adds value because the classification power is much better than that of the standard classification methods we have had so far. If you look at food, for example, you want to classify fruit. Is it quality A, quality B, quality C, or A1, A2, A3? This is sometimes very hard to say even for an experienced person. One would say, “This is A2,” and another would say, “No, this is A1.”
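
Munkelt's fruit-grading example maps onto a standard image-classification setup. The sketch below is a minimal, generic illustration of that setup in PyTorch (the three quality grades, the 64 x 64 crop size, and the tiny network are assumptions, not anything MVTec ships):

    import torch
    import torch.nn as nn

    NUM_CLASSES = 3  # hypothetical quality grades, e.g. A1, A2, A3

    class QualityGrader(nn.Module):
        """Small CNN that maps a 64x64 RGB crop to scores for each quality grade."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, NUM_CLASSES)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = QualityGrader()
    batch = torch.randn(4, 3, 64, 64)           # four fake 64x64 RGB crops
    probs = torch.softmax(model(batch), dim=1)  # per-grade confidence
    print(probs.argmax(dim=1))                  # predicted grade per crop

Trained on enough labeled examples, a network like this encodes the graders' collective judgment, which is where Munkelt sees deep learning outperforming standard classification methods.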

Another example is anomaly detection, which is also a kind of classification. You say, “This is an anomaly, and this is not an anomaly.” The problem is that, to some extent, you only show good parts. You only show the perfect picture of your business card to the system. If it were your business card with a missing r, or part of the r, that would be an anomaly. This works in certain applications, but it doesn’t work in general. It all comes down to the data: how you enter it and how you arrange it so the system learns that this is OK and everything else is not OK. This is a classification problem.
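
The "only show good parts" approach Munkelt describes corresponds to one-class or unsupervised anomaly detection: fit a model to good samples only and flag anything it cannot explain. Below is a minimal sketch using scikit-learn's IsolationForest on made-up feature vectors (in practice these would be descriptors or embeddings extracted from the images):

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Stand-in feature vectors for 500 good parts; no defect is ever shown.
    good_parts = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
    detector = IsolationForest(contamination=0.01, random_state=0).fit(good_parts)

    # At inspection time, samples far from the training data score as anomalies.
    new_parts = np.vstack([
        rng.normal(0.0, 1.0, size=(3, 8)),  # three parts that look like the good ones
        rng.normal(5.0, 1.0, size=(1, 8)),  # one part that does not
    ])
    print(detector.predict(new_parts))  # 1 = looks like a good part, -1 = anomaly

The data dependence Munkelt stresses shows up directly here: the detector's notion of "OK" is defined entirely by the good samples it was shown.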

Where is imaging beyond the visible spectrum heading? Is this likely to become more mainstream in the next few years?

Mehl: We see more and more applications, especially in the shortwave infrared (SWIR) area. Classic examples are food sorting, recycling, and the semiconductor industry, where SWIR is used to look at wafers and see the circuits within them. We definitely see that demand is increasing. Also, the technology we can now offer is much cheaper than it was years ago, when it was used only in super high-end scientific or military applications. Now it is making inroads into industrial vision.

It is just starting. One of the key elements that drove this is special sensor technology from Sony that wasn’t available before. With its SenSWIR product line, Sony created sensors that are much more affordable compared with the higher-end technologies available in the past.

What other new technologies will fuel growth in the machine vision industry?

Williamson: Event-based imaging. A camera has a frame rate, so if something happens quicker than a frame cycle, you will miss it. With event-based imaging, even if nothing is changing, the actual sampling rate is kilohertz, not hertz. So, if little things change, it will pick them up. You only get data when something changes on a pixel. That opens up applications that people haven’t thought of. That is where event-based imaging is struggling at the moment: it has to find those applications where it is really going to make a difference.
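
Williamson's point, that data is produced only when an individual pixel changes, can be mimicked in software by differencing consecutive frames. The sketch below is only a frame-based simulation of the idea (a real event camera reports asynchronous per-pixel events with microsecond timestamps rather than comparing whole frames):

    import numpy as np

    THRESHOLD = 15  # minimum brightness change, in gray levels, that counts as an event

    def events_from_frames(prev_frame, next_frame, threshold=THRESHOLD):
        """Return (row, col, polarity) for every pixel whose brightness changed enough."""
        diff = next_frame.astype(np.int16) - prev_frame.astype(np.int16)
        rows, cols = np.nonzero(np.abs(diff) >= threshold)
        polarity = np.sign(diff[rows, cols])  # +1 got brighter, -1 got darker
        return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

    # Two synthetic 8-bit frames: a single bright spot moves by one pixel.
    frame_a = np.zeros((4, 4), dtype=np.uint8)
    frame_b = np.zeros((4, 4), dtype=np.uint8)
    frame_a[1, 1] = 200
    frame_b[1, 2] = 200

    print(events_from_frames(frame_a, frame_b))
    # [(1, 1, -1), (1, 2, 1)] -> only the two changed pixels produce any data

Everything static contributes nothing, which is both the appeal (sparse, fast data) and, as Williamson notes, the challenge of finding the applications that exploit it.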

What about illumination? What trends do you see?

Kimmelmann: Lighting is still a great area. There is no single product or technology that is a game changer. Customers are becoming more and more aware that lighting is key to everything they do later, even if it is deep learning. You need to have proper light. In the past, customers just threw lights on the subject; now they have become more and more aware that they need the proper light. But there is no single recipe that says use this product for this application. Everything is still individualized.
