The core of a typical image sensor is either a CCD (charge-coupled device) or a standard CMOS (complementary metal-oxide semiconductor) cell. CCD and CMOS sensors share similar characteristics and both are widely used in commercial cameras; however, most modern sensors use CMOS cells, mainly for manufacturing reasons. Sensors and optics are often integrated to make wafer-level cameras for areas such as biology or microscopy, as shown in Figure 1.

Figure 1: Common arrangement of an image sensor assembly incorporating optics and color filters.

Image sensors are designed to meet the specific goals of different applications and offer different levels of sensitivity and quality. Consult the manufacturer's information to get familiar with a particular sensor. For example, to strike the best compromise between silicon die area and dynamic response for light intensity and color detection, the size and material composition of each photodiode element must be optimized for a particular semiconductor manufacturing process.

For computer vision, the effects of sampling theory are important, for example the Nyquist frequency applied to the pixel coverage of the target scene. The sensor resolution and optics together must provide enough resolution for each feature of interest, so it follows that a feature of interest should be imaged or sampled at two times the size of the smallest pixel of importance to that feature. Of course, 2x oversampling is only a minimum target for imaging accuracy; in practice, features a single pixel wide are not easily resolved. For best results, the camera system should be calibrated for the given application to determine pixel noise and the dynamic range of pixel bit depth under different illumination and distance conditions. Appropriate sensor processing methods should be developed to handle noise and the sensor's nonlinear response in each color channel, to detect and correct pixel artifacts, and to model geometric distortion. Devising a simple calibration procedure using a test pattern with fine-to-coarse gradations of gray scale, color, and feature pixel size will make these effects visible.

Silicon image sensors are the most widely used, although other materials are also employed, such as gallium (Ga), which covers longer infrared wavelengths than silicon in industrial and military applications. Image sensors come in a wide range of resolutions, from single-pixel phototransistor cameras, through 1D line-scan arrays for industrial applications, to 2D rectangular arrays in ordinary cameras, all the way to spherical arrays for high-resolution imaging. (Sensor configurations and camera configurations are described at the end of this chapter.) Common imaging sensors are manufactured using CCD, CMOS, BSI, and Foveon methods. Silicon image sensors have a nonlinear spectral response curve: the near-infrared portion of the spectrum is sensed well, while the blue, violet, and near-ultraviolet portions are sensed poorly (see Figure 2).

Figure 2: Typical spectral response of several silicon photodiodes. Note the high sensitivity in the near-infrared range around 900 nm and the nonlinear sensitivity across the visible range of 400 nm to 700 nm.
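As a rough illustration of the 2x sampling target discussed above, the following sketch estimates how many pixels a feature of a given physical size spans on the sensor and checks it against a Nyquist-style minimum. All parameters (feature size, distance, focal length, pixel pitch) are hypothetical values chosen for illustration, not figures from the text.

```python
# Minimal sketch: estimate pixel coverage of a scene feature and check it
# against the ~2x (Nyquist-style) minimum discussed above.
# All parameter values below are illustrative assumptions.

def pixels_across_feature(feature_size_mm: float,
                          distance_mm: float,
                          focal_length_mm: float,
                          pixel_pitch_um: float) -> float:
    """Approximate number of pixels spanned by a feature, using the
    thin-lens magnification m = f / (d - f) and the sensor pixel pitch."""
    magnification = focal_length_mm / (distance_mm - focal_length_mm)
    feature_on_sensor_um = feature_size_mm * 1000.0 * magnification
    return feature_on_sensor_um / pixel_pitch_um

if __name__ == "__main__":
    # Hypothetical setup: 5 mm feature, 500 mm away, 16 mm lens, 3.0 um pixels.
    n = pixels_across_feature(feature_size_mm=5.0, distance_mm=500.0,
                              focal_length_mm=16.0, pixel_pitch_um=3.0)
    print(f"feature spans ~{n:.1f} pixels")
    # Two pixels across the smallest detail is only a minimum target;
    # in practice, aim well above it.
    print("meets 2x minimum:", n >= 2.0)
```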
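In the same spirit, the calibration idea mentioned above can be sketched as code: from repeated frames of a flat (uniform) test patch captured at several illumination levels, estimate per-pixel temporal noise and an approximate dynamic range. This is a minimal sketch under assumed conditions; frame capture is left abstract, and the synthetic data stands in for a real camera.

```python
import numpy as np

# Minimal calibration sketch: estimate per-pixel noise and approximate
# dynamic range from stacks of frames of a uniform test patch captured
# at different illumination levels. Values and labels are hypothetical.

def noise_and_dynamic_range(frame_stacks):
    """frame_stacks: dict mapping illumination label -> ndarray (N, H, W) of raw frames."""
    report = {}
    for label, frames in frame_stacks.items():
        mean_img = frames.mean(axis=0)      # per-pixel mean signal
        noise_img = frames.std(axis=0)      # per-pixel temporal noise
        signal = mean_img.mean()
        noise = noise_img.mean() + 1e-12
        report[label] = {
            "mean_signal": float(signal),
            "mean_noise": float(noise),
            "snr_db": float(20.0 * np.log10(signal / noise)),
        }
    # Approximate dynamic range: brightest unsaturated signal over dark-frame noise.
    dark_noise = report["dark"]["mean_noise"]
    bright_signal = report["bright"]["mean_signal"]
    report["dynamic_range_db"] = float(20.0 * np.log10(bright_signal / dark_noise))
    return report

# Example with synthetic data in place of real captures (hypothetical values):
rng = np.random.default_rng(0)
stacks = {
    "dark":   rng.normal(2.0,  1.5, (16, 120, 160)),
    "mid":    rng.normal(512,  4.0, (16, 120, 160)),
    "bright": rng.normal(4000, 9.0, (16, 120, 160)),
}
print(noise_and_dynamic_range(stacks))
```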
Because of the standard silicon response, removing the IR filter from a camera increases near-infrared sensitivity. Note that the silicon spectral response is imposed when the raw data is read out and discretized into digital pixels; sensor manufacturers make design compensations in this area. Even so, the color response of the sensor should be considered when calibrating the camera system for an application and when designing the sensor processing methods.

A key parameter of an image sensor is the size of the photodiode, or cell element. Sensor elements with small photodiodes capture fewer photons than those with large photodiodes. If the element size approaches the wavelength of visible light (such as blue light at 400 nm), additional problems must be overcome in the sensor design to image color correctly. Sensor manufacturers spend considerable effort designing optimized element sizes to ensure that all colors are imaged equally well (see Figure 3). At the extreme, small sensor elements may be more susceptible to noise, owing to fewer accumulated photons and to sensor readout noise. If the photodiode elements are too large, die size and silicon cost increase with no added benefit. Typical commercial sensor devices have elements of about 1 square micron and larger; sizes vary by manufacturer, and tradeoffs are made to meet particular requirements.

Figure 3: Wavelength assignment of the primary colors. Note that the primary color regions overlap one another, and that green is a good monochrome proxy for all colors.

Figure 4 shows different on-chip configurations for multispectral sensor designs, including mosaic and stacked methods. In the mosaic method, a color filter is placed over each cell in a mosaic pattern. The Foveon sensor stacking method relies on the depth to which each color wavelength penetrates the semiconductor material: each color penetrates the silicon to a different depth, thereby imaging that color. The full cell area is available to every color, so separate cells are not needed for each color.

Figure 4: (Left) The Foveon method of stacking RGB cells: RGB color is sensed at each cell position, with different wavelengths absorbed at different depths. (Right) Standard mosaic cells: an RGB filter is placed over each photodiode, and each filter passes only a specific wavelength to its photodiode.

Back-side illuminated (BSI) sensor configurations rearrange the sensor wiring on the die to provide a larger cell area, so each cell gathers more photons. The arrangement of the sensor elements also affects color response. For example, Figure 5 shows different arrangements of primary color (R, G, B) sensors together with white sensors (W), where the white sensor has a clear or neutral color filter. The arrangement of the sensor elements allows a range of pixel processing options, such as combining pixels selected from different configurations of neighboring elements during sensor processing to optimize color response or spatial color resolution. In fact, some applications use only the raw sensor data and perform custom processing to enhance resolution or construct alternative color mixes.

Figure 5: Several different mosaic configurations of cell colors, including white, the primary RGB colors, and the secondary CYM cells.
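To make the mosaic idea concrete, here is a minimal sketch of bilinear demosaicing for an assumed RGGB Bayer layout (R at even rows/columns, B at odd rows/columns, G elsewhere). The pattern layout is an assumption for illustration, and real sensor pipelines use more sophisticated, edge-aware interpolation.

```python
import numpy as np

# Minimal bilinear demosaic sketch for an assumed RGGB Bayer mosaic.
# Missing color samples at each site are interpolated from neighbors
# via normalized convolution.

def convolve2d(img, k):
    """Tiny same-size 2D convolution with zero padding (no SciPy dependency)."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="constant")
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def demosaic_rggb(raw: np.ndarray) -> np.ndarray:
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=np.float32)
    yy, xx = np.mgrid[0:h, 0:w]
    masks = {
        0: (yy % 2 == 0) & (xx % 2 == 0),   # R sites
        1: (yy % 2) != (xx % 2),            # G sites
        2: (yy % 2 == 1) & (xx % 2 == 1),   # B sites
    }
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]], dtype=np.float32)
    for c, mask in masks.items():
        plane = np.where(mask, raw, 0.0).astype(np.float32)
        weight = mask.astype(np.float32)
        num = convolve2d(plane, kernel)
        den = convolve2d(weight, kernel)
        rgb[..., c] = num / np.maximum(den, 1e-6)
    return rgb

# Example: demosaic a small synthetic raw frame.
raw = np.random.default_rng(1).integers(0, 1024, (8, 8)).astype(np.float32)
print(demosaic_rggb(raw).shape)   # -> (8, 8, 3)
```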
Each configuration provides a different approach to optimizing color or spatial resolution during sensor processing (images are from Building Intelligent Systems, licensed by Intel Press). The size of the entire sensor also determines the size of the lens. In general, a larger lens passes more light, so larger sensors and lenses are better suited to digital cameras for photography applications. In addition, the aspect ratio of the cell layout on the die determines the pixel geometry; for example, aspect ratios of 4:3 and 3:2 are used for digital cameras and 35 mm film, respectively. The details of the sensor configuration are worth understanding so that the best sensor processing and image pre-processing can be designed.
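As a small worked example of how sensor dimensions and aspect ratio relate to pixel geometry, the sketch below derives pixel pitch from an assumed sensor size and resolution. The sensor dimensions and pixel counts are hypothetical figures for illustration, not specifications from the text.

```python
# Minimal sketch: relate sensor dimensions, aspect ratio, and pixel pitch.
# The sensor sizes and resolutions below are illustrative assumptions.

def pixel_pitch_um(sensor_w_mm: float, sensor_h_mm: float,
                   cols: int, rows: int) -> tuple:
    """Horizontal and vertical pixel pitch in microns (assumes a uniform
    grid and ignores gaps between photodiode elements)."""
    return (sensor_w_mm * 1000.0 / cols, sensor_h_mm * 1000.0 / rows)

# Hypothetical 4:3 sensor (6.4 x 4.8 mm) at 3264 x 2448 pixels, and a
# hypothetical 3:2 full-frame-style sensor (36 x 24 mm) at 6000 x 4000.
for name, (w, h, cols, rows) in {
    "4:3 compact": (6.4, 4.8, 3264, 2448),
    "3:2 full frame": (36.0, 24.0, 6000, 4000),
}.items():
    px, py = pixel_pitch_um(w, h, cols, rows)
    print(f"{name}: aspect {w / h:.2f}, pixel pitch ~{px:.2f} x {py:.2f} um")
```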