In machine vision, the performance of software tools, and how well they match the expertise of their users, matters at least as much as lighting or image quality. This article gives a brief overview of the tools a vision software solution must provide, and reviews the main programming approaches available.

Preprocessing
In machine vision, no processing can make up for poor image quality. Yet however good it is, a raw image is rarely usable as-is. Analyzing the information it contains often requires modifying it, either to improve its quality (by removing noise, for example) or to prepare it for subsequent processing: bringing out characteristic features, separating the objects of interest from the background, eliminating elements touching the image border, and so on. The preprocessing tools traditionally shipped as standard in a machine vision software solution include look-up tables (LUTs) and spatial and/or frequency-domain filters.

A LUT is the simplest operator found in image processing. It converts the gray levels of an image by applying a function to each pixel. Among other things, it can improve the contrast and brightness of the dark areas of an image. Thus, to lighten an image, we apply a logarithm function to each gray level; conversely, to darken a slightly over-saturated image, we apply an exponential function. Note that thresholding is nothing other than a particular LUT transformation: one that maps all gray levels below a certain threshold to black, and all the others to white.
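The three LUTs mentioned above can be sketched in a few lines. This is a minimal illustration using NumPy (the function names and scaling constants are our own, not from any particular vision library): a 256-entry table is built from a gray-level mapping function, then applied to every pixel by simple indexing.

```python
import numpy as np

def build_lut(fn):
    """Build a 256-entry look-up table from a gray-level mapping function."""
    levels = np.arange(256, dtype=np.float64)
    return np.clip(fn(levels), 0, 255).astype(np.uint8)

# Logarithmic LUT: lightens dark areas (scaled so 255 still maps to 255).
log_lut = build_lut(lambda g: 255 * np.log1p(g) / np.log1p(255))

# Exponential (gamma > 1) LUT: darkens an over-saturated image.
gamma_lut = build_lut(lambda g: 255 * (g / 255) ** 2.0)

# Thresholding is just a particular LUT: black below the threshold, white above.
thresh_lut = build_lut(lambda g: np.where(g < 128, 0, 255))

def apply_lut(image, lut):
    """Remap every pixel of a uint8 image through the table."""
    return lut[image]
```

Because the table is precomputed once for all 256 gray levels, applying a LUT costs a single array lookup per pixel, which is why it is the cheapest operator in the preprocessing toolbox.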

More complex, the filtering operators are numerous and support a wide range of processing operations on a raw image. They are mainly used to reveal the outlines of objects, eliminate noise, and soften or accentuate details. These operators act either in the spatial domain, by recalculating the value of each pixel as a function of the values of neighboring pixels, or in the frequency domain, by acting not on the raw image but on its frequency representation, generally obtained by means of a Fast Fourier Transform.
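The spatial-domain principle described above, recomputing each pixel from its neighborhood, can be illustrated with the simplest such filter, a 3×3 averaging (smoothing) kernel. This is a minimal NumPy sketch of our own (real vision libraries provide many kernels, such as Sobel, Laplacian, or Gaussian, built on the same neighborhood idea); borders are handled here by edge replication, one of several common conventions.

```python
import numpy as np

def mean_filter3(image):
    """3x3 averaging filter: each pixel becomes the mean of its neighborhood.

    A spatial-domain smoothing sketch. Border pixels are handled by
    replicating the edge values of the image before convolving.
    """
    padded = np.pad(image.astype(np.float64), 1, mode="edge")
    out = np.zeros(image.shape, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + image.shape[0],
                          1 + dx : 1 + dx + image.shape[1]]
    return (out / 9).astype(image.dtype)
```

Averaging spreads each pixel's value over its neighbors, which attenuates isolated noise at the cost of softening edges; sharpening and edge-detection kernels use the same sliding-neighborhood machinery with different weights.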

Localization tools
Locating an object or shape within an image is often one of the first operations performed in a machine vision application. The main approaches used by vision software for locating objects are based on edge detection or pattern recognition. Typically, in the first approach, the software analyzes the pixels located along a network of profiles, which can be made up of parallel or concurrent (spoke) straight lines, portions of concentric circles, etc. It detects the gray-level transitions present along these profiles, which makes it possible, for example, to locate the contours of an object. The second, more complex approach is based on detecting, in the image, sets of pixels that coincide with a known model, defined a priori or "learned" by the system. Pattern recognition can rely on texture characteristics as well as on geometric characteristics of the object sought. Traditionally, pattern recognition tools were based on the now well-known technique of Normalized Cross-Correlation (NCC). Simple to implement and to accelerate, it nevertheless suffers from significant limitations. First of all, it requires that the target always appear in the camera's field of view with approximately the same orientation and at the same distance. For the same reason, NCC is unable to locate objects larger or smaller than the reference model, even when the geometric shape is the same. Furthermore, NCC cannot work on non-rectangular models. This means that, whatever the shape of the object to be found, training can only be carried out, at best, on the smallest rectangle containing the object, which makes it impossible to detect touching objects or to cope with variable backgrounds. Finally, the NCC technique fails when the brightness varies. More recent geometry-based techniques overcome these difficulties.
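The first, profile-based approach is easy to sketch. The following NumPy snippet (our own illustration, with a hypothetical `threshold` parameter; commercial tools sample arbitrary line and arc profiles the same way) scans the pixels of one horizontal profile and reports the columns where the gray level jumps sharply, i.e. probable contour crossings.

```python
import numpy as np

def profile_transitions(image, row, threshold=40):
    """Detect gray-level transitions along a horizontal profile.

    Returns (column, direction) pairs for every position where the
    gray level changes by more than `threshold` between neighbors.
    """
    profile = image[row].astype(np.int64)
    diffs = np.diff(profile)
    return [(int(i) + 1, "rising" if d > 0 else "falling")
            for i, d in enumerate(diffs) if abs(d) > threshold]
```

A rising transition followed by a falling one brackets a bright object along the profile; repeating the scan over a network of profiles recovers the object's contour.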
Insensitive to rotations and scale factor, these techniques can adapt to many constraints, such as non-linear changes in lighting, video inversion, noise, occlusion, and objects in contact.
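To contrast with those geometry-based tools, the classical NCC technique itself fits in a few lines. This is a naive exhaustive-search sketch of our own (real implementations are heavily accelerated): the score is invariant to linear brightness and contrast changes, but, as discussed above, the template is a fixed rectangle, so rotation, scale change, or a non-rectangular target defeats it.

```python
import numpy as np

def ncc_score(patch, template):
    """Normalized cross-correlation: 1.0 for a perfect match,
    invariant to linear brightness/contrast changes."""
    p = patch.astype(np.float64) - patch.mean()
    t = template.astype(np.float64) - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def ncc_search(image, template):
    """Slide the template over the image; return ((row, col), best score)."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            s = ncc_score(image[y:y + th, x:x + tw], template)
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best
```

Geometry-based matchers avoid the sliding rectangle entirely by extracting contour features from the model and searching for consistent geometric configurations of those features, which is what buys their invariance to rotation, scale, and occlusion.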

