3.5 Focal length, magnification and working distance

Focal length (FL) affects the field of view and magnification, which makes it one of the most important parameters when choosing optics.

Focal length refers to the distance between the principal plane of the lens and the focal point, where light arriving from infinity is brought to focus. Below, Figure 1 visualises how the focal length, marked with f and f′, affects the image height. A long focal length magnifies more than a short one (Greivenkamp 2004).

FIGURE 1 Long and short focal length. Figure parameters: a: working distance, a′: image distance, f: object-side focal length, f′: image-side focal length, F: object-side focal point, F′: image-side focal point, y: object height and y′: image height. An increased focal length magnifies the object on the image plane (larger y′), while the working distance can remain unchanged.

The working distance, object size and focal length affect the magnification. From Figure 1 above, we can see that a longer focal length increases the magnification without extending the working distance a. The magnification β can be approximated for non-complex optical setups as follows (Greivenkamp 2004):

(1)   \begin{equation*}     \beta = \frac{a'}{a} = \frac{y'}{y} \end{equation*}

For thin lenses, the focal length, working distance and image distance are related by the thin-lens equation:

(2)   \begin{equation*}    \frac{1}{f'} = \frac{1}{a} + \frac{1}{a'} \end{equation*}
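As a worked example (the values are illustrative and not from the original post), consider a lens with f′ = 25 mm at a working distance a = 250 mm. Equation (2) gives the image distance and Equation (1) the magnification:

\begin{align*} \frac{1}{a'} &= \frac{1}{f'} - \frac{1}{a} = \frac{1}{25\,\mathrm{mm}} - \frac{1}{250\,\mathrm{mm}} = \frac{9}{250\,\mathrm{mm}}, & a' &\approx 27.8\,\mathrm{mm},\\ \beta &= \frac{a'}{a} \approx \frac{27.8\,\mathrm{mm}}{250\,\mathrm{mm}} \approx 0.11. \end{align*}

A 10 mm tall object would thus appear roughly 1.1 mm tall on the sensor; lengthening the focal length at the same working distance increases β.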

Below, Figure 2 explains the lens opening angles and their relation to the horizontal, vertical and diagonal field of view (FOV). By looking at the angles and rays in the figure, we can see that the field of view and working distance are linked: the field of view decreases as the working distance shortens, and vice versa.

FIGURE 2 The field of view (FOV) is affected by the lens focal length, the lens opening angles and the working distance. These parameters can be evaluated, and the lens should be selected according to the working distance and the target dimensions.

The opening angle and the focal length are inversely related: the larger the focal length, the narrower the opening angle. Most of the lens parameters described in Table 1 (Post 3.1) can be calculated either with pen and paper or with services such as Vision Doctor (2022) or the web tools that sensor manufacturers provide for designing imaging systems.
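As a minimal example of such a pen-and-paper calculation, the following Python sketch estimates the opening angle and field of view with the pinhole/thin-lens approximation; the sensor width, focal length and working distance values are illustrative assumptions, not values from the post.

import math

def horizontal_fov(sensor_width_mm, focal_length_mm, working_distance_mm):
    """Approximate the horizontal opening angle (degrees) and the
    horizontal field of view (mm) at a given working distance,
    using the pinhole / thin-lens approximation."""
    # Opening angle: 2 * atan(sensor_width / (2 * focal_length))
    angle_rad = 2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm))
    # The FOV at the object plane grows linearly with the working distance
    fov_mm = 2.0 * working_distance_mm * math.tan(angle_rad / 2.0)
    return math.degrees(angle_rad), fov_mm

# Illustrative values: a sensor about 7.2 mm wide with a 25 mm lens at 500 mm
angle_deg, fov_mm = horizontal_fov(7.2, 25.0, 500.0)
print(f"opening angle ~ {angle_deg:.1f} deg, horizontal FOV ~ {fov_mm:.0f} mm")

With these numbers the opening angle is roughly 16 degrees and the horizontal field of view roughly 144 mm; a longer focal length narrows the angle and shrinks the field of view, exactly as described above.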

While there are many parameters to consider, the infrared (IR) cut filter and possible colour corrections are also worth mentioning. If an imaging system is intended for spectral imaging and the wavelength range of interest lies in the IR, it may be necessary to exclude lenses that block IR light by design or that apply other undesirable colour corrections by default (Greivenkamp 2004; Stemmer 2022).


References

Greivenkamp, J. E. 2004. Field Guide to Geometrical Optics, Vol. FG01. SPIE. https://doi.org/10.1117/3.547461.

Vision Doctor 2022. Vision Doctor home page. https://www.vision-doctor.com/en/ (a private, independent, non-commercial website project providing machine vision solutions; accessed 4 May 2022).

Stemmer 2022. Stemmer Imaging, The Imaging and Vision Handbook. https://www.stemmer-imaging.com/en/the-imaging-vision-handbook/.

3.4 Lens image circle and sensor diagonal

A carefully selected lens provides high-quality images of the object. The image circle is the circular area of light that the lens projects onto a perpendicular target, i.e., the sensor.

Below, Figure 1 shows the relation between the lens image circle and the sensor diagonal. Image quality is a lens-related property that typically deteriorates towards the border areas, and the images might suffer from shading or vignetting.

To avoid mechanical vignetting, the optics must be matched in size to the sensor. If the lens image circle or the lens mount is too small, the image will be heavily vignetted (lower-middle image).

Figure 1. Image vignetting and shading. In the illustration on the left, the image circle is larger than the sensor diagonal, and the shading is caused by cos⁴ vignetting. In contrast, in the illustration on the right, the image circle is smaller than the sensor diagonal, causing mechanical vignetting. Both situations can cause vignetting or shading. If the object is in the middle of the non-shaded area, the image can be cropped, as shown with a dotted line in the upper-middle image. The right-side example produces images that are not usable (lower-middle image).
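As a rule of thumb, the lens image circle should be at least as large as the sensor diagonal. Below is a minimal Python check of this condition; the sensor and image circle dimensions are illustrative assumptions, not values from the post.

import math

def covers_sensor(image_circle_mm, sensor_w_mm, sensor_h_mm):
    """Return True if the lens image circle covers the whole sensor,
    i.e. the circle diameter is at least the sensor diagonal."""
    sensor_diagonal_mm = math.hypot(sensor_w_mm, sensor_h_mm)
    return image_circle_mm >= sensor_diagonal_mm

# Illustrative pairing: a 1" sensor (12.8 mm x 9.6 mm, 16 mm diagonal)
# with a lens specified for a 2/3" image circle (about 11 mm)
print(covers_sensor(11.0, 12.8, 9.6))   # False -> expect mechanical vignetting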

Another source of vignetting, cos⁴ falloff, is seen in the upper-middle image (above, Figure 1). Light that travels to the edges of the image covers a longer distance and reaches the sensor at an angle, which affects image quality; the light falloff is determined by the cos⁴(θ) function, where θ is the angle of the incoming light with respect to the optical axis in image space.

The drop in intensity is more significant for wide incidence angles, causing the image to appear brighter at the centre and darker at the edges (Greivenkamp 2004). Since it is sometimes difficult to find inexpensive optics of the right size for the desired sensor (other parameters that depend on the target might limit the selection), the lens image circle can be oversized. In such cases, the images can be cropped and used, or the effect can be controlled by decreasing the aperture size.
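A small Python sketch of the ideal cos⁴(θ) falloff described above; real lenses deviate from this simple model, so the numbers are only indicative.

import math

def relative_illumination(theta_deg):
    """Ideal cos^4 law: relative intensity at field angle theta
    compared with the image centre (theta = 0)."""
    return math.cos(math.radians(theta_deg)) ** 4

for theta in (0, 10, 20, 30):
    print(f"{theta:2d} deg -> {relative_illumination(theta):.2f}")
# 0 deg -> 1.00, 10 deg -> 0.94, 20 deg -> 0.78, 30 deg -> 0.56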


References

Greivenkamp, J. E. 2004. Field Guide to Geometrical Optics, Vol. FG01. SPIE. https://doi.org/10.1117/3.547461.

3.3 Lens mount

The lens must be attached firmly to the sensor. Two common lens mounts in machine vision systems are the S-mount and the C-mount.

Figure 1. Two Basler ace sensors with Basler C-mount lenses and optical components. These kinds of optics and optical components allow different imager ideas to be built and tested at a reasonable price.

S-mount lenses are small and inexpensive and are often used with board-level sensors. A typical S-mount lens has a fixed focus and minimal adjustment possibilities. C-mount lenses, in turn, are the most common in machine vision applications.

C-mount optics have a wide range of compatible components, enabling them to be used in prototype imagers constructed with different optical and optomechanical components.

Typically, these lenses have an adjustable iris and focus. The price range reflects the optics’ quality and resolution. Several other mounts exist, but C- and S-mounts are considered the most suitable for basic research due to their satisfactory quality, available accessories and affordability.

Above, Figure 1 shows how sensors, C-mount lenses and optical components can be combined for specific imaging purposes.


3.2 Forming an optical image

Figure 1 (below) visualises the basic setup, in which the lens reconstructs the scattered light into an image on the sensor’s light-sensitive area. The parameters that affect the image are the lens radii, the distances between the lenses, the working distance and the distance between the lens and the sensor (Greivenkamp 2004).

Figure 1. Forming an optical image

As we see from Figure 1, the lens reconstructs the scattered light into an image, which is captured by the sensor’s photosensitive pixel area. The object size, lens properties such as the focal length and lens radii, and the distances between the object, the lens and the sensor are examples of optical parameters that affect the captured image. The light projected by the lens onto the perpendicular sensor plane forms a circle of light, the image circle.


References

Greivenkamp, J. E. 2004. Field Guide to Geometrical Optics, Vol. FG01. SPIE. https://doi.org/10.1117/3.547461.

3.1 Optics’ terminology

The simplest example of a machine vision imager is a combination of a sensor and a commercial lens. The lens selection parameters should be defined based on the target features and the imaging setup.

Table 1. Optics terms and definitions, collected from Greivenkamp (2004) and Stemmer (2022).

Short definitions of the commonly used terms are introduced above in Table 1. Each lens parameter should be evaluated from the sensor and imaging setup point of view, since these decisions directly affect the imaging quality and the prototype’s ability to perform as expected.

A carefully designed system with suitable optics can reduce image-quality-related issues, decreasing the need for computational corrections (Greivenkamp 2004). When selecting the right lens, features such as the mount, field of view (FOV), focal length (FL), depth of field (DOF), resolution, possible polarisation and IR cut are important (Greivenkamp 2004; Stemmer 2022). In the next posts, we will deepen our understanding of optics.


References

Greivenkamp, J. E. 2004. Field Guide to Geometrical Optics, Vol. FG01. SPIE. https://doi.org/10.1117/3.547461.

More info

Stemmer 2022. Stemmer Imaging, The Imaging and Vision Handbook. https://www.stemmer-imaging.com/en/the-imaging-vision-handbook/ (a leading international machine vision technology provider; accessed 7 April 2022).

3 Introduction to optics and optomechanical components

When the sensor is combined with optical elements, such as lenses and other optomechanical components, we can call it a machine vision camera or machine vision imager. Machine vision cameras require optical design.

Commercial lenses can be used with optomechanical components and machine vision sensors. Further, prototype imagers can be built from scratch using, for example, extension tubes, individual lenses and filters.

Combining a sensor and a lens requires only a basic understanding of an imager’s design, whereas more complex systems require a mathematical understanding of geometrical optics. Both approaches can be valuable in science and engineering. Prototypes pave the way for new applications, and the optics, if needed, can be improved in follow-up studies and tests.

This series of posts introduces the basic terminology of optics, which helps in selecting and understanding the optical parameters when designing a machine vision imager.

First, we get to know optics terminology (Post 3.1); next, in Post 3.2, we form an optical image. Post 3.3 introduces lens mounts, and Post 3.4 explains the image circle and the sensor diagonal. Focal length, magnification and working distance are covered in Post 3.5, and the series is finalised with a peek at optical and optomechanical components (Post 3.6).



From sensors to analysis … and further

Welcome to araita.com. This website is a place to read and learn about the whole chain, from sensors to machine vision systems and analysis (AI). We will cover basic terminology, operating principles and more, from machine vision sensors, optics and optical components to programming, data acquisition, image transformation and analysis with machine learning methods.

Since the author currently works as a scientist, araita.com is grounded in academic work and science, and it therefore contains material from her publications and dissertation. The visualisations and materials are available for citation (the origin is mentioned in every post) and re-use, provided academic conventions are followed.

Besides the machine vision, machine learning and hyperspectral imaging materials, araita.com contains the author’s teaching portfolio, publication details and a series of posts on management, leadership and communication, which stems from her MBA education.

You will find the From sensors to analysis educational materials under the “content” menu or in the right sidebar.

What is hyperspectral imaging?

Hyperspectral (HS) imaging is a non-invasive imaging technology that can reveal phenomena invisible to human vision. Spectral analysis aims to characterise substances based on how they absorb or reflect light. Many substances or materials can be identified by their unique spectral signature, which is determined by how differently they reflect different wavebands of light. HS data is typically high-dimensional, which provides high accuracy and robustness for characterisation and identification tasks (Camps-Valls and Bruzzone 2005; Bioucas-Dias et al. 2013).


Figure 1. Monochromatic, RGB, spectroscopy, multispectral and HS features.

Figure 1 compares the features of monochromatic, RGB, spectroscopy, multispectral and HS imaging. Monochromatic images have one channel with spatial x and y dimensions. RGB images are constructed from three colour channels (spectral dimension λ) and spatial x and y dimensions.

Spectroscopy measures dozens to hundreds of spectral channels, but the spatial dimensions are limited to one pixel. Multispectral and HS images have both spatial and spectral dimensions (x, y and λ). A multispectral image typically has three to ten frames, and each spectral channel is wider than in HS images. HS images provide an almost continuous spectrum, which is constructed from hundreds or even thousands of narrow spectral channels.
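The difference in dimensionality is easy to see from the array shapes alone. Below is a minimal NumPy sketch; the pixel counts and band counts are illustrative assumptions, not values from the post.

import numpy as np

height, width = 512, 512                   # spatial dimensions (y, x)

mono  = np.zeros((height, width))          # one channel
rgb   = np.zeros((height, width, 3))       # three colour channels (lambda = 3)
multi = np.zeros((height, width, 8))       # e.g. eight spectral bands
hs    = np.zeros((height, width, 300))     # hundreds of narrow bands

spectrum = hs[100, 200, :]                 # one spatial pixel -> a full spectrum
print(mono.shape, rgb.shape, multi.shape, hs.shape, spectrum.shape)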


HS imaging extends traditional photography by measuring electromagnetic radiation, typically from visible (VIS) and near-infrared (NIR) light (below, Tables 1 and 2). In contrast to the red, green and blue frames of an RGB image, an HS image can be considered a data cube of tens, hundreds or thousands of frames (spectral bands or channels), each representing the intensity of a different wavelength of light, as seen above.

Table 1. Commonly used colour definitions. The visible range for the human eye is approximately 380 nm to 740 nm.

Table 2. Spectral sub-divisions with abbreviations and spectral wavelength ranges.

The power of spectral imaging lies in the detailed pixel-wise information. Since substances emit electromagnetic radiation, the intensity distribution at different wavelengths forms the radiation spectrum. The spectrum is continuous if no clear boundaries exist between the wavelength ranges. For example, the spectrum of the light from the sun or an incandescent lamp is continuous, while the light from a fluorescent lamp is a discontinuous line spectrum. Because each substance has its characteristic line spectrum, it is identifiable.

By looking at the spectrum in Figure 2 (below), higher peaks can be observed, and their positions on the wavelength axis and their intensities can be compared with known spectra. For example, the real plant in Figure 2 can be distinguished from artificial plants based on its spectrum.

Figure 2. Preview of an HS image and pixel spectra. Red dot and spectrum: a living plant; orange circle and spectrum: an artificial plant. Left: the preview of an HS image; middle: reference spectrum of a living plant; right: an example spectrum of an artificial plant.

Figure 2 shows a preview of an HS image. The red dot in the figure marks a selected spatial pixel, and its spectrum is visualised next to the preview. Each pixel has its own spectrum, so the HS image contains both spatial and spectral domains, which enable, for example, accurate pixel-wise classification (Ahmad et al. 2022). The common definitions, spectral sub-divisions and ranges can be seen in Table 2.
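One common way to carry out such a comparison numerically is the spectral angle between a pixel spectrum and a known reference spectrum: the smaller the angle, the more similar the spectra. The sketch below is a minimal illustration; the spectra are random placeholders, not real measurements.

import numpy as np

def spectral_angle(spectrum, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference spectrum."""
    cos_sim = np.dot(spectrum, reference) / (
        np.linalg.norm(spectrum) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

# Placeholder spectra; in practice these come from the HS image and a spectral library
rng = np.random.default_rng(0)
pixel = rng.random(300)        # spectrum of one spatial pixel
reference = rng.random(300)    # known reference spectrum, e.g. a living plant

print(spectral_angle(pixel, reference))       # some positive angle
print(spectral_angle(reference, reference))   # identical spectra -> 0.0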

Application areas

The history of HS imaging lies in earth observation and remote sensing applications. As developments in sensor technology reduced imagers’ physical size and made them more affordable (especially in the VIS and VNIR range), this non-invasive method gained interest and yielded promising results in many other application fields. Some examples of current applications are related to the domains of agriculture (Thorp et al. 2017), forestry (Adão et al. 2017), medicine (Fei 2020), mining (Krupnik and Khan 2019), biology (Salmi et al. 2022), the food industry (Pathmanaban et al. 2019), space (Lind et al. 2021) and defence (Makki et al. 2017).

Interested? Want to hear more or study this area at the University of Jyväskylä? Interested in academic or business research collaboration?

See our laboratory and contact our research group!

References

Adão, T., Hruška, J., Pádua, L., Bessa, J., Peres, E., Morais, R. & Sousa, J. J. 2017. Hyperspectral imaging: A review on UAV-based sensors, data processing and applications for agriculture and forestry. Remote Sensing 9 (11), 1110. https://doi.org/10.3390/rs9111110.

Ahmad, M., Shabbir, S., Roy, S. K., Hong, D., Wu, X., Yao, J., Khan, A. M., Mazzara, M., Distefano, S. & Chanussot, J. 2022. Hyperspectral image classification—traditional to deep models: A survey for future prospects. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 15, 968. https://doi.org/10.48550/arXiv.2101.06116.

Bioucas-Dias, J. M., Plaza, A., Camps-Valls, G., Scheunders, P., Nasrabadi, N. M. & Chanussot, J. 2013. Hyperspectral remote sensing data analysis and future challenges. IEEE Geoscience and Remote Sensing Magazine 1 (2), 6–36. https://doi.org/10.1109/MGRS.2013.2244672.

Camps-Valls, G. & Bruzzone, L. 2005. Kernel-based methods for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing 43 (6), 1351–1362. https://doi.org/10.1109/TGRS.2005.846154.

Fei, B. 2020. Hyperspectral imaging in medical applications. In Data Handling in Science and Technology, Vol. 32. Elsevier, 523–565. https://doi.org/10.1016/B978-0-444-63977-6.00021-3.

Krupnik, D. & Khan, S. 2019. Close-range, ground-based hyperspectral imaging for mining applications at various scales: Review and case studies. Earth-Science Reviews 198, 102952. https://doi.org/10.1016/j.earscirev.2019.102952.

Lind, L., Laamanen, H. & Pölönen, I. 2021. Hyperspectral imaging of asteroids using an FPI-based sensor. In Sensors, Systems, and Next-Generation Satellites XXV, Vol. 11858. SPIE, 65–78. https://doi.org/10.1117/12.2599514.

Makki, I., Younes, R., Francis, C., Bianchi, T. & Zucchetti, M. 2017. A survey of landmine detection using hyperspectral imaging. ISPRS Journal of Photogrammetry and Remote Sensing 124, 40–53. https://doi.org/10.1016/j.isprsjprs.2016.12.009.

Salmi, P., Calderini, M., Pääkkönen, S., Taipale, S. & Pölönen, I. 2022. Assessment of microalgae species, biomass, and distribution from spectral images using a convolution neural network. Journal of Applied Phycology 34, 1–11. https://doi.org/10.1007/s10811-022-02735-w.

Thorp, K. R., Wang, G., Bronson, K. F., Badaruddin, M. & Mon, J. 2017. Hyperspectral data mining to identify relevant canopy spectral features for estimating durum wheat growth, nitrogen status, and grain yield. Computers and Electronics in Agriculture 136, 1–12. https://doi.org/10.1016/j.compag.2017.02.024.

2.4 How to pre-process a colour filter array?

As we learned, a colour sensor provides a colour filter array (CFA) image, which needs to be pre-processed to obtain an RGB image. Convolution combined with bilinear interpolation is one simple but efficient approach.

Figure 1 shows that there are missing values in each RGB colour frame. However, the value of each missing pixel can be easily calculated, for example, with bilinear interpolation, where each missing pixel value on the 2D plane is the average of its neighbouring pixels. The red, green and blue planes can be processed separately with convolution operations by placing zeros at the missing values in R0, G0 and B0 and sliding a weighted 3 × 3 kernel over each colour plane, filling the middle pixel with the result (Eskelinen 2019).

Figure 1. Bilinear convolution. The Bayer BGGR colour filter array is divided into three colour planes R0, G0 and B0. Each of the missing values is set to zero (0). A weighted kernel (WeightsRG or WeightsG) slides over each pixel of the colour planes (y and x directions). The value of the middle pixel is the weighted average of the neighbouring pixels.
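Below is a minimal Python sketch of this kind of convolution-based bilinear demosaicking. The BGGR slicing offsets and kernel weights are standard textbook choices written for illustration, not code taken from Eskelinen (2019).

import numpy as np
from scipy.signal import convolve2d

# Standard bilinear demosaicking kernels: existing pixels keep their value,
# missing pixels become averages of their nearest same-colour neighbours.
K_RB = np.array([[1, 2, 1],
                 [2, 4, 2],
                 [1, 2, 1]], dtype=float) / 4.0
K_G = np.array([[0, 1, 0],
                [1, 4, 1],
                [0, 1, 0]], dtype=float) / 4.0

def demosaic_bggr(cfa):
    """Bilinear demosaicking of a BGGR colour filter array into an RGB image."""
    h, w = cfa.shape
    r0, g0, b0 = np.zeros((h, w)), np.zeros((h, w)), np.zeros((h, w))
    # BGGR layout: B at (0, 0), G at (0, 1) and (1, 0), R at (1, 1)
    b0[0::2, 0::2] = cfa[0::2, 0::2]
    g0[0::2, 1::2] = cfa[0::2, 1::2]
    g0[1::2, 0::2] = cfa[1::2, 0::2]
    r0[1::2, 1::2] = cfa[1::2, 1::2]
    # Fill the missing values of each colour plane by convolution
    r = convolve2d(r0, K_RB, mode="same", boundary="symm")
    g = convolve2d(g0, K_G, mode="same", boundary="symm")
    b = convolve2d(b0, K_RB, mode="same", boundary="symm")
    return np.stack([r, g, b], axis=-1)

# Illustrative use with a random 4 x 4 "raw" frame
rgb = demosaic_bggr(np.random.default_rng(0).random((4, 4)))
print(rgb.shape)   # (4, 4, 3)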

Below, Figure 2 illustrates the behaviour of the pixel values in the RGB colour space. The minimum RGB value of one channel is zero (0), which is visualised as black. The highest value, 255, is white in a greyscale image, but in a colour channel it is visualised as the most intense red, green or blue tone. Intuitively, the interpolation can be seen as a pixel value surface formed from the mean of the neighbouring pixels. The pixel value surface is drawn above the original Bayer pattern in the illustration (Figure 2). After interpolation, the top pattern with the individual middle pixel represents the computed pixel value. The green and red examples show fading colours, so the surfaces are inclined. The blue example represents an equally intense pixel neighbourhood, where the pixel value surface is horizontal.

Figure 2. Bilinear interpolation. Intuitively, bilinear interpolation can be seen as linear in the image’s x and y directions, where the mean of the neighbouring pixels forms a surface from which the middle pixel gets its colour value. The colour bar represents the RGB colours’ minimum and maximum values (0 to 255). A pixel captured with a high number of photons reaches higher values, resulting in a “bright colour”. A pixel with a low number of photons gets values close to zero, and the pixel is seen as black.


References

Eskelinen, M. 2019. Computational methods for hyperspectral imaging using Fabry-Perot interferometers and colour cameras. http://urn.fi/URN:ISBN:978-951-39-7967-6.