
Citation: Zhou C. C., Wang Y. K., Ding Y., et al. 2025. Optical Design of Wide-Field and Broadband Light Field Camera for High-Precision Optical Surface Defect Detection. Astronomical Techniques and Instruments. https://doi.org/10.61977/ati2025036
To address the challenges of high-precision optical surface defect detection, we propose a novel design for a wide-field and broadband light field camera in this work. The proposed system can achieve a 50° field of view and operates at both visible and near-infrared wavelengths. Using the principles of light field imaging, the proposed design enables 3D reconstruction of optical surfaces, thus enabling vertical surface height measurements with enhanced accuracy. Using Zemax-based simulations, we evaluate the system’s modulation transfer function, its optical aberrations, and its tolerance to shape variations through Zernike coefficient adjustments. The results demonstrate that this camera can achieve the required spatial resolution while also maintaining high imaging quality and thus offers a promising solution for advanced optical surface defect inspection.
The emergence of the light field camera has introduced a transformative approach to the capture of multidimensional optical information, making these cameras highly suitable for applications including surface defect detection and optical element characterization. Unlike conventional 2D imaging systems, light field cameras record both the spatial and angular distributions of light using a microlens array (MLA), capturing multi-dimensional information in a single exposure. The foundation of light field imaging is multi-ocular stereo vision, which enables computational refocusing and 3D reconstruction. This capability is particularly advantageous for optical metrology, in which precise surface measurements and defect identification are critical requirements.
Recent advancements in light field principles and related technologies have demonstrated the potential of these cameras for use in atmospheric wavefront sensing, optical element defect testing, and 3D shape reconstruction. Ng et al.[1] built a standard light field camera that is regarded as the first hand-held light field camera. In recent years, Lytro and Raytrix have become well known as manufacturers of high-performance devices based on light field technology. Three classical types of light field camera have emerged, each emphasizing a different capability: standard light field cameras[2] for depth estimation[3], focused light field cameras[4-5] for improved spatial resolution, and multi-focused light field cameras[6] for enhanced depth-of-field control[7].
One major challenge in optical defect detection is balancing high spatial resolution against a wide field of view (FOV). At present, high-magnification inspection is the predominant method used to detect surface defects such as scratches and dots, but its applicability is limited when inspecting larger optical components. Additionally, subsurface detection techniques have been developed to measure the sizes and positions of bubbles[8]. The trade-off between a wide FOV and high resolution remains a fundamental issue in optical metrology. Existing measurement techniques also often struggle with complex optical surfaces, particularly freeform geometries or surfaces with unknown parameters such as operational wavelengths. Although interferometers offer superior precision, reaching sub-nanometer levels, their high cost, large size, and demanding operating requirements limit their application in low-cost manufacturing. Furthermore, for high-precision microscopes[9], a large FOV and 3D measurement capability rely on additional mechanical or algorithmic support. Measuring the parameters of certain optical elements and systems can be especially challenging when these elements are encapsulated or have unknown characteristics, e.g., their operating wavelengths, and methods for the preliminary determination of such parameters are still being explored. Light field cameras represent a promising solution for optical defect detection.
Compared with interferometers, which excel at shape measurement, and microscopes, which provide high spatial resolution, light field cameras offer a balanced approach with unique advantages in capturing 3D optical surface information[10]. As a result, algorithms for light field acquisition[11] and image calibration[12-13] are active areas of research. To the best of our knowledge, although some researchers have measured simple shapes[14], their methods have not yet been applied to high-precision testing of shapes and surface defects, although this trend is emerging. Ammann improved the surface reconstruction performance of a light field camera by using a pattern projection approach[15]. Additionally, with the ongoing development of deep learning technology, traditional algorithms are being strengthened. Tsai et al.[16] proposed a high-precision method that improves image reconstruction accuracy by considering information from multiple directions. In contrast to natural scenes, less training data is required and more is available. Furthermore, Huang[17], Heber[18], and Zhang[19] have conducted research from several different perspectives. Overall, improvements in measurement time and reconstruction accuracy are enabling shape measurements of optical elements using light field technology. However, high-precision light field cameras are limited by their FOV and operate only in the visible waveband, which does not match the sizes or working bands of certain optical elements.
To address these limitations, this work presents the design and analysis of a 50°-wide-FOV, broadband light field camera optimized for high-precision optical surface defect measurement. The proposed system operates across the visible and near-infrared wavelengths and integrates aberration correction mechanisms to enhance imaging accuracy. By combining light field imaging with Zernike polynomial-based aberration modeling, the proposed approach enables robust defect detection and surface height measurement. The study evaluates the important optical performance indicators through simulations, including the modulation transfer function (MTF), chromatic aberrations, and the image reconstruction quality, to provide insights into the feasibility of using light field technology in optical metrology, particularly for freeform surfaces.
Light field imaging is governed by the light field function, a multi-dimensional representation of light rays in space. Originally formulated by Adelson and Bergen[20], the plenoptic function is seven-dimensional, containing the viewing direction in spherical coordinates (θ and ϕ), the wavelength λ, the time t, and the viewing position (Vx, Vy, and Vz). Under realistic imaging conditions, however, the light field function can be simplified to a four-dimensional function. The final function is shown below, where (x, y) are the coordinates of the image plane and (u, v) are the coordinates of the exit pupil:
P = P(x, y, u, v)  (1)
A simple illustration is binocular parallax: when a person covers first one eye and then the other, an observed object appears to shift position, even though the relative positions of the observer and the object are essentially unchanged. Extending this principle to lens arrays, each object point is observed by several lenses and the pixels behind them. A single lens, together with the pixels within its projected footprint, constitutes a subsystem, and each such subsystem receives a slightly different sample of the light field information from an object point. The sub-images produced by these subsystems therefore differ slightly from one another, which allows a full 3D image to be reconstructed from the sub-images using a digital algorithm, e.g., digital refocusing.
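To make the data structure concrete, the following minimal Python sketch represents the 4D light field P(x, y, u, v) as an array and extracts two sub-aperture views of the kind described above. The array sizes and the synthetic data are illustrative only, not the design's actual sampling.

```python
import numpy as np

# Illustrative sampling: 5x5 angular samples (u, v), 64x64 spatial samples (x, y).
U, V, X, Y = 5, 5, 64, 64
P = np.random.rand(U, V, X, Y)   # stand-in for captured light field data P(x, y, u, v)

# Fixing (u, v) selects one "subsystem": the sub-image formed by one viewing
# direction through the MLA. Neighboring views differ by a small parallax.
center_view = P[U // 2, V // 2]  # on-axis sub-aperture image
edge_view = P[U // 2, 0]         # view from one edge of the pupil

# The per-point parallax between such sub-images encodes object depth and is
# what digital refocusing and 3D reconstruction algorithms exploit.
```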
To aid understanding of the principle, a simplified diagram of the measured shape of an optical element is shown in Fig. 2. Unlike everyday objects, which have sharp edges, most optical elements, e.g., mirrors, are rotationally symmetric, and their surface points are static, which relaxes the requirements on the light field camera. The distance between two adjacent object points depends on the spatial resolution and the associated algorithm, while the axial direction is related to the system characteristics and the reconstruction accuracy.
Optical fabrication can be segmented into several stages, including grinding, fine grinding, and polishing, and a number of measurement devices are applied at each stage because of the differing machining precision requirements. Based on the discussion above, we believe that light field technology provides a powerful way to test the sag (vertical) height of optical elements because of its 3D reconstruction capability. By virtue of the high resolution of the optical system, small flaws are also easy to measure and locate. Although the matching algorithms lie outside the scope of this research, some simulations are presented in a later section.
In this section, we calculate the relevant parameters of the optical system of the light field camera. The optical system is intended for testing packaged optical elements, so we must first establish an ideal model. In the ideal model shown in Fig. 3, the unknown system is modeled by an entrance pupil and an exit pupil. To test an unknown system, particularly a visual system, the light field camera's entrance pupil must be located in front of its lens; if the light field camera's stop is located at any other position, the overall testing FOV will be restricted. It is therefore essential to set the stop correctly when measuring an unknown system's FOV: operators must match the light field camera's entrance pupil with the unknown system's exit pupil.
Because the design objectives of the light field camera have already been determined, some of the fundamental optical system parameters are discussed below. Several indicative parameters are shown in Table 1. As illustrated in Fig. 4, the optical system contains three parts. In this work, the first objective lens is located behind the stop, and the relay lens set has the capacity to correct residual aberrations.
Parameter | Value
FOV | 50°
Working spectrum | 400–850 nm
Entrance pupil diameter | 4 mm
Spatial resolution | 12 μm
MTF (at cut-off frequency) | ≥ 0.2
Given that some packaged systems have only a single aperture through which light can enter, an illumination system must be used to provide active illumination. The number of lenses in the first objective group is vital to the overall imaging quality: an excessive number of lenses produces stray light and ghost images. The diameter of the first lens is related to the diameter of the stop, the FOV, and the distance between the stop and the objective lens. The relationship between these parameters is given as follows:
D1 = 2·l1·tanω  (2)
where l1 is the distance between the stop and the objective lens, ω is half of the FOV, and D1 is the diameter of the first objective lens. The first imaging stage is performed by the first objective group. The original objective type is the Koenig eyepiece; as the first part of the system, this eyepiece construction consists of a singlet and a cemented doublet. On this basis, conic coefficients are added to both sides to form aspherical surfaces so that fewer lenses are needed; to some extent, this corrects aberrations preliminarily, especially the off-axis image performance. To leave sufficient space for a potential illumination system, it is essential to control the size of the first image, which is described by:
y1′ = f1′·tanω < D1/2  (3)
where y1′ is half the height of the first image and f1′ is the focal length of the first objective group. This constraint also helps to confine the sizes of the subsequent lenses. According to the magnification formulation, the relationship between ω (half the FOV on the object side), y1′, and f2′ (the focal length of the relay lenses) is:
β = f2′·tanω/y1′  (4)
where β is the total system magnification, given by the ratio of the image and object sizes. β determines the size of the sensor required and the cut-off frequency of the system. In this work, we select the CS126CU (Thorlabs Inc., USA) as the image sensor; its effective area is 14.131 mm × 10.350 mm and its pixel size is 3.45 μm × 3.45 μm, giving a total of 4096 × 3000 pixels. The cut-off frequency of the system is:
fc = 1/(β·P)  (5)
where P is the spatial resolution of the object. The secondary lens group is a variation of the Tessar configuration. The Tessar lens[21] replaces one of the positive lenses of the Cooke triplet with a cemented doublet, adding a cemented surface and an additional glass type, and thus more design variables with which to improve image quality.
The MLA is the core component used in light field cameras to capture ray direction information, and such arrays are widely used in adaptive optics imaging[22-26] and wavefront sensing for astronomical observation. In this system, the MLA is placed at the focal plane of the main lenses and divides the image into several sub-images. If an image point produces M sub-images, then the spatial resolution of each sub-image is 1/M of that of the initial image; in other words, the image information is dispersed across the pixels behind the microlenses. Therefore, under low-light conditions, the image quality of light field cameras is not as good as that of traditional cameras. Although light field cameras record the light field from different angles, overall imaging quality depends on the precision of the depth extraction algorithms. Additionally, the precision of a light field camera may also be affected by manufacturing tolerances and by errors in MLA installation and alignment.
MLAs have different layout types to match specific applications, e.g., a single lens for fiber coupling or a hexagonal arrangement for laser beam homogenization. The arrangement of the microlenses is a matter of baseline design and has been investigated in related work[27]. In this work, we use the most common arrangement for the system layout. One basic requirement is matching the F-number of the MLA to that of the main lenses so that together they form a fixed-aperture-ratio system. When the F-numbers, which represent the light-gathering ability of an optical system, are matched, the MLA acquires the light captured by the front optics without over- or under-filling. The MLA has a diameter of 15.3 mm, and each microlens is a plano-convex lens with a diameter of 50 μm, designed in fused silica (quartz). The main system's F-number is given by:
F# = f/D  (6)
where F# is the F number of the main system, f is the focal length, and D is the entrance pupil diameter.
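As an illustration of how Eqs. (2)–(6) chain together, the short Python sketch below evaluates them using the Table 1 targets. The stop distance l1 and the focal lengths f1′ and f2′ are hypothetical placeholders, since their values are not given in the text, and the cut-off frequency is pinned to the 75 lp/mm point used in the MTF analysis below.

```python
import math

# Design targets from Table 1
omega = math.radians(50.0 / 2)   # half FOV (25°)
D_pupil = 4.0                    # entrance pupil diameter, mm
P_obj = 0.012                    # object-side spatial resolution, mm (12 μm)

# Hypothetical layout values (not specified in the text)
l1 = 20.0                        # stop-to-first-lens distance, mm (assumed)
f1 = 15.0                        # focal length of first objective group, mm (assumed)

# Eq. (2): diameter of the first objective lens
D1 = 2 * l1 * math.tan(omega)             # ≈ 18.65 mm

# Eq. (3): half-height of the first image must stay below D1/2
y1 = f1 * math.tan(omega)                 # ≈ 6.99 mm
assert y1 < D1 / 2

# Eq. (5), inverted: pinning the cut-off frequency to the 75 lp/mm MTF
# evaluation point gives the required total magnification beta
fc = 75.0                                 # lp/mm
beta = 1 / (fc * P_obj)                   # ≈ 1.11

# Eq. (4): relay focal length consistent with that magnification
f2 = beta * y1 / math.tan(omega)          # ≈ 16.7 mm

# Eq. (6): F-number, here using f1 as a stand-in for the main focal length
F_number = f1 / D_pupil                   # ≈ 3.75

print(D1, y1, beta, f2, F_number)
```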
In this section, we evaluate the optical performance of the proposed wide-field, broadband light field camera using Zemax (Ansys) simulations. The analysis includes an MTF assessment, a standard spot diagram evaluation, and other performance metrics. These metrics provide insight into the system's imaging quality, aberration compensation capability, and feasibility for high-precision optical surface defect detection.
The MTF describes an optical system's ability to transfer detail at different spatial frequencies and is an important evaluation standard. Because light field cameras rely on MLAs for sampling, a trade-off exists between the spatial and angular resolutions. We therefore assess the system's MTF performance across the different spectral bands and field angles. Fig. 7 shows the MTF characteristics of the designed light field camera imaging system.
As shown in Fig. 7, the MTF values at half-FOV angles of 0°, 5°, 10°, 15°, 20°, and 25° are all greater than 0.2 at a spatial frequency of 75 lp/mm, which indicates good imaging quality. For the near-infrared range, we selected several single wavelengths and examined their MTF curves; given the available sensors, we selected 0.8 μm and 0.85 μm. As Fig. 7(b) and (c) show, the MTF curves at 0.8 μm and 0.85 μm also all exceed 0.2. The system thus achieves sufficient spatial resolution for defect detection across a 50° FOV, with only minor MTF degradation in the near-infrared spectrum. The relationship between the depth resolution Δz and the lateral resolution Δx is given by Δz = 0.61nΔx/NA², where NA is the numerical aperture of the optical system and n is the refractive index of the object space. When operating at 633 nm, the depth resolution is 43 μm at the central FOV and 882 μm at the edge FOV; the depth resolution clearly degrades as the field angle increases.
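The following sketch applies the depth-resolution relationship as reconstructed above. The effective NA values are back-solved from the quoted 43 μm and 882 μm figures (assuming n = 1 and Δx = 12 μm), not taken from the design data.

```python
import math

def depth_resolution(dx_um: float, na: float, n: float = 1.0) -> float:
    """Depth resolution Δz = 0.61·n·Δx / NA², as used in the text."""
    return 0.61 * n * dx_um / na**2

# Back-solving the quoted numbers with Δx = 12 μm and n = 1 (assumed):
# 43 μm at the center implies NA ≈ 0.41; 882 μm at the edge implies NA ≈ 0.09.
for dz in (43.0, 882.0):
    na = math.sqrt(0.61 * 12.0 / dz)
    print(f"Δz = {dz:6.1f} μm  ->  effective NA ≈ {na:.3f}")
```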
Standard spot diagrams show the geometric ray distributions, which reflect the aberrations in an optical system, and represent a simple but important way to assess system quality. As shown in Fig. 8, the largest spot occurs at the 25° FOV, with a root mean square (RMS) radius of 10.180 μm, and the smallest spot occurs at the 0° FOV, with an RMS radius of 2.577 μm. The black circle indicates the extent of the Airy disk, which represents the diffraction limit of the light field camera.
Working band | Visible band (μm) | 800 nm (μm) | 850 nm (μm)
Airy disk radius | 2.145 | 2.943 | 3.132
RMS spot radius at 0° | 2.577 | 1.120 | 1.018
RMS spot radius at 7° | 2.656 | 3.115 | 3.321
RMS spot radius at 15° | 6.536 | 5.168 | 5.271
RMS spot radius at 20° | 6.580 | 4.767 | 4.915
RMS spot radius at 25° | 10.180 | 9.998 | 9.895
As discussed above, the spot diagrams for the near-infrared light are shown in Fig. 8(b). In an imaging system, a distribution in which most of the ray energy falls within 10× the Airy disk radius is regarded as signifying good image quality. The resolution at the center of the FOV is sufficient for surface defect testing. Applying the Zernike aberration correction improves the spot diagram distribution significantly, reducing blur and increasing sharpness in defect imaging.
Aberrations such as lateral chromatic aberration (LCA), field curvature, and distortion seriously affect image quality, so we evaluated the system's aberration performance after optimization. Aberration correction not only improves the overall performance of the light field camera but also extends the range of testable objects and the system's robustness. Because astigmatism is an off-axis aberration, its magnitude increases gradually with the field angle. The separation between the tangential and sagittal foci produces diffuse spots between the two focal surfaces, which ultimately bends the image surface; when the image surface does not coincide with the sensor plane, off-axis image spots become blurred. Distortion, by contrast, does not blur the image, but it varies the lateral magnification across the field, so the final image appears warped. Although distortion can be corrected using an image processing algorithm, it must still be considered at the design stage.
As shown in Fig. 9, the visible band and two infrared wavelengths are considered here. Overall, the astigmatism in the visible band and at 0.8 μm and 0.85 μm is within ±0.2 mm in all cases, which shows that the image surface bending is under control and that the off-axis image quality is guaranteed. Regarding distortion, under normal circumstances the image distortion should be controlled to within 2%, below which the human eye cannot perceive it; otherwise, the image departs from its original lateral magnification and a digital distortion correction algorithm must be applied (a minimal sketch of such a correction is given below). Fortunately, the maximum distortion, at the 25° FOV, is only 0.3%. Both the astigmatism and distortion performances meet the imaging requirements.
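As a sketch of the kind of digital correction mentioned above, the snippet below inverts a simple Brown radial distortion model by fixed-point iteration. The coefficients are hypothetical placeholders; in practice they would be fitted to the design's measured distortion curve.

```python
import numpy as np

def undistort_points(xd, yd, k1, k2=0.0, iters=5):
    """Invert radial (Brown) distortion x_d = x_u (1 + k1 r^2 + k2 r^4)
    by fixed-point iteration. Coordinates are normalized image coordinates;
    k1 and k2 are hypothetical coefficients for illustration only."""
    xu, yu = xd.copy(), yd.copy()
    for _ in range(iters):
        r2 = xu**2 + yu**2
        scale = 1.0 + k1 * r2 + k2 * r2**2
        xu, yu = xd / scale, yd / scale
    return xu, yu

# Example: undo a mild 0.3%-level barrel distortion at the field edge
xd = np.array([0.0, 0.3, 0.6]); yd = np.array([0.0, 0.0, 0.0])
print(undistort_points(xd, yd, k1=-0.003))
```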
This section discusses the lateral chromatic aberration, which we evaluate within the visible band because of the limitations of the lens materials and the breadth of the working band. Over a wide working wavelength range, chromatic aberration accounts for a significant proportion of the total aberration. For a visual system in particular, lateral chromatic aberration leads to magnification differences between wavelengths; in captured images, it appears as purple and green fringes at edges. In our light field system, we use several materials to correct the lateral chromatic aberration, as shown in Fig. 10, where the lateral chromatic aberration is within the Airy disk over almost the entire FOV.
The main cause of chromatic aberration is the dispersion of optical materials, which have different refractive indices at different wavelengths. Optimizing the lens materials and applying multi-material dispersion correction reduces chromatic aberration significantly and thus enhances defect imaging precision. In this optical system, we used at least two cemented elements together with several grades of glass, combining high and low refractive index materials. The final lateral chromatic aberration over the visible band is roughly contained within the Airy disk out to the 25° FOV.
Finally, we present a simulated image to demonstrate the image quality produced by the proposed light field camera, as shown in Fig. 11. Whereas the aberration analyses above characterize the system numerically, the image simulation makes the combined effect of these aberrations visible, providing an intuitive benchmark of realistic imaging performance.
From Fig. 11, the overall image quality can be judged directly. The resolution at the center is higher than that at the edge of the FOV and is sufficient for defect detection, where the defect size is commonly 5 μm. Although the simulated image shows visible vignetting, the image details remain distinguishable; in fact, the vignetting is related to radial distortion in light field cameras[28]. Furthermore, essentially no chromatic fringing is observed. In general, the proposed light field camera meets the fundamental requirements for imaging.
The overall light field camera design and the analysis of the imaging results have been discussed above. However, measuring the surface shape of very steep surfaces remains difficult for conventional shape measurement devices. When higher-order terms are taken into account, element shapes become freer and more effective for aberration correction. Established methods provide high-precision testing, including interferometry, three-coordinate measurement, phase deflectometry, and optical coherence tomography[29]. Compared with these methods, light field cameras currently offer lower accuracy because of technological and algorithmic limitations. However, the 3D information captured by light field cameras theoretically makes determination of the shapes of optical elements possible. Therefore, this section addresses the camera's tolerance for curvature detection in optical elements. The shape of an aspherical element is expressed as:
z = cr²/(1 + √(1 − (1 + k)c²r²)) + a2r² + a4r⁴ + a6r⁶ + ...  (7)
where c = 1/r0 is the vertex curvature, k = −e² is the conic constant, r0 is the radius of curvature at the vertex, r is the radial height, e is the eccentricity (deformation coefficient) of the quadratic aspherical surface, and a2, a4, a6, ..., an are the aspheric coefficients. By setting appropriate coefficients, different types of aberration can be added to simulate the surface shape: for a spherical surface we set several radii, and for a cylindrical mirror an astigmatism is set.
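Eq. (7) can be evaluated directly. The following sketch implements the sag formula with a conic term and even aspheric coefficients; the 200 mm sphere and 10 mm semi-aperture are illustrative values, not design data.

```python
import numpy as np

def aspheric_sag(r, r0, k=0.0, coeffs=()):
    """Sag from Eq. (7): z = c r^2 / (1 + sqrt(1 - (1+k) c^2 r^2)) + Σ a_{2i} r^{2i}.
    r0: vertex radius of curvature; k: conic constant; coeffs: (a2, a4, a6, ...)."""
    c = 1.0 / r0
    z = c * r**2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r**2))
    for i, a in enumerate(coeffs, start=1):
        z += a * r**(2 * i)   # even aspheric terms a2 r^2, a4 r^4, ...
    return z

# Example: sag of a 200 mm radius sphere over a 10 mm semi-aperture
r = np.linspace(0.0, 10.0, 5)
print(aspheric_sag(r, r0=200.0))
```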
Some primary aberrations are easy to add to the light field system using a Zernike standard phase surface, and the Zernike coefficients can be understood clearly because each primary aberration maps onto individual coefficients. Because simulating the primary aberrations will affect this optical system, the first standard Zernike terms are listed in Table 3; a short sketch that evaluates these terms follows the table.
Number | 1 | 2 | 3 | 4
Zernike term | 1 | 2ρcosθ | 2ρsinθ | √3(2ρ² − 1)
Aberration | Piston | X tilt | Y tilt | Defocus
Number | 5 | 6 | 7 | 8
Zernike term | √6ρ²sin2θ | √6ρ²cos2θ | √8(3ρ³ − 2ρ)sinθ | √8(3ρ³ − 2ρ)cosθ
Aberration | 45° astigmatism | 90° astigmatism | Y coma | X coma
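The sketch referenced above evaluates the Table 3 terms on a sampled unit pupil, so that a chosen coefficient vector, e.g., the −0.2 defocus value that Table 4 associates with a 500 mm radius, can be turned into a phase map.

```python
import numpy as np

# First 8 standard Zernike terms from Table 3, on the unit pupil
# (rho in [0, 1], theta in radians).
ZERNIKE = [
    lambda r, t: np.ones_like(r),                           # 1: piston
    lambda r, t: 2 * r * np.cos(t),                         # 2: X tilt
    lambda r, t: 2 * r * np.sin(t),                         # 3: Y tilt
    lambda r, t: np.sqrt(3) * (2 * r**2 - 1),               # 4: defocus
    lambda r, t: np.sqrt(6) * r**2 * np.sin(2 * t),         # 5: 45° astigmatism
    lambda r, t: np.sqrt(6) * r**2 * np.cos(2 * t),         # 6: 90° astigmatism
    lambda r, t: np.sqrt(8) * (3 * r**3 - 2 * r) * np.sin(t),  # 7: Y coma
    lambda r, t: np.sqrt(8) * (3 * r**3 - 2 * r) * np.cos(t),  # 8: X coma
]

def zernike_phase(coeffs, n=256):
    """Sum a Zernike expansion over a sampled unit pupil; coeffs start at term 1."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r, t = np.hypot(x, y), np.arctan2(y, x)
    phase = sum(c * Z(r, t) for c, Z in zip(coeffs, ZERNIKE))
    return np.where(r <= 1, phase, np.nan)   # mask points outside the pupil

# Example: pure defocus of −0.2 (terms 1–3 zero), as in Table 4
pupil_phase = zernike_phase([0.0, 0.0, 0.0, -0.2])
```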
Using the paraxial imaging model, a simulated curved object is added to an ideal system so that its induced aberrations can be compared with those of the light field camera, taking the effect of the surface shape on system imaging as the criterion for detection tolerance. First, we analyze the aberration that a curved object introduces into the ideal system. Then, after adding the same aberration using the Zernike standard phase, the impact on the system is observed and compared. Fig. 12 presents sample results for such a simulation.
Radius | Original/Flat | 500 mm | 300 mm | 200 mm | 50 mm
Paraxial imaging model | 0 | −0.2 | −0.35 | −0.49 | −0.95
Light field camera in this work | − | − | − | − | −
Comparison of the data shows that the curvature of the object has almost no effect on this system. A flat object contributes zero aberration through its ideal paraxial surface; adding a radius to this flat object creates a spherical mirror, which introduces mainly defocus, along with some primary astigmatism. In the paraxial imaging model, the simulated aberration above depends on the stop size, regardless of the focal length.
In conclusion for this section, the vertical height tolerance is governed by the depth of field of the light field camera, and within the detection range of this camera the radius range considered has no effect on defect imaging.
MLAs are suitable for multi-angle imaging, where a digital refocusing algorithm can recover sharp images even when the real image lies outside the sensor plane. In this algorithm, the ratio of the refocused image distance to the reference distance defines a factor α; by adjusting α, we can realize axial position measurements. Here, we substituted letter images for virtual defects to illustrate the feasibility of the experiments. Images of "A" and "B" are placed at the same distance but at different lateral coordinates, and the results in Fig. 13 show the initial image before refocusing and the sharp image acquired by digital refocusing.
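A minimal shift-and-add refocusing sketch in the spirit of the algorithm described here is given below. The mapping of α onto per-view pixel shifts is simplified (integer shifts, unit disparity per view step, wrap-around edges ignored), so it illustrates the principle rather than the exact implementation.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add digital refocusing — a minimal sketch.
    lightfield: array of shape (U, V, X, Y) holding sub-aperture views.
    alpha: ratio of the refocused image distance to the reference distance.
    Each view is shifted in proportion to its angular offset from the pupil
    center, scaled by (1 - 1/alpha), then all views are averaged."""
    U, V, X, Y = lightfield.shape
    uc, vc = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            du = int(round((u - uc) * (1 - 1 / alpha)))
            dv = int(round((v - vc) * (1 - 1 / alpha)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# alpha = 1 reproduces the nominal focal plane; alpha != 1 refocuses
# synthetically in front of or behind it.
lf = np.random.rand(5, 5, 64, 64)
refocused = refocus(lf, alpha=1.2)
```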
To demonstrate that two defects at different positions can be refocused individually, we set two distances of 250 mm and 400 mm. We also adjusted α to refocus at other positions so that several image comparisons could be made, as shown in Fig. 14.
This study has presented the design and evaluation of a 50°-wide-FOV broadband light field camera for optical surface defect measurement. The measurement method combines light field imaging principles with aberration correction to realize high-resolution 3D surface reconstruction. Analysis of the proposed camera demonstrates the possibility of testing the vertical heights and shapes of optical elements, and the shape measurement tolerance is simulated by comparing the ideal aberration with that of the camera designed in this work. The simulation results confirm that the camera design meets the essential imaging performance criteria, including MTF consistency, aberration minimization, and chromatic aberration correction. The proposed methodology offers a promising alternative to conventional optical inspection techniques, with potential applications in industrial quality control, biomedical imaging, and aerospace optics testing. Future work will focus on experimental validation of the design, real-world defect detection trials, and deep learning-based reconstruction enhancements to further improve the system's accuracy and robustness.
This research was supported by the Jilin Science and Technology Development Plan (20240101029JJ); by the project "Synchronized high-speed detection of surface shape and defects in the grinding stage of complex surfaces" (KLMSZZ202305); and, for the high-precision wide dynamic large aperture optical inspection system for fine astronomical observation, by the National Major Research Instrument Development Project (
Chengchen Zhou conceived the ideas, designed and implemented the study, and wrote most of the manuscript. Yukun Wang performed formal analysis and revised the manuscript. Ding Yue, Dacheng Wang and Jiucheng Nie performed data curation. Jialong Li, Zhixi Li, and Zheng Zhou carried out validation. Shuangshuang Zhang and Xiaokun Wang reviewed and edited the manuscript. All authors read and approved the final manuscript.
The authors declare no competing interests.
[1] Ng R., Levoy M., Brédif M., et al. 2005. Light field photography with a hand-held plenoptic camera. Stanford University Computer Science Tech Report CSTR 2005-02.
[2] Michels T., Mäckelmann D., Koch R. 2024. Mind the exit pupil gap: revisiting the intrinsics of a standard plenoptic camera. Sensors, 24(8): 2522. doi: 10.3390/s24082522
[3] Labussière M., Teulière C., Ait-Aider O. 2023. Blur aware metric depth estimation with multi-focus plenoptic cameras. Computer Vision and Image Understanding, 235: 103802. doi: 10.1016/j.cviu.2023.103802
[4] Chen M., Ye M., Wang Z., et al. 2022. Electrically addressed focal stack plenoptic camera based on a liquid-crystal microlens array for all-in-focus imaging. Optics Express, 30(19): 34938−34955. doi: 10.1364/OE.465683
[5] Coppin T., Palmer D. W., Rana K., et al. 2022. Design of a focused light field fundus camera for retinal imaging. Signal Processing: Image Communication, 109: 116869. doi: 10.1016/j.image.2022.116869
[6] Chen L., Lei G., Wang T., et al. 2022. Improved blur circle detection method for geometric calibration of multifocus light field cameras. Optical Engineering, 61(9): 093101
[7] Kim H. M., Kim M. S., Chang S., et al. 2021. Vari-focal light field camera for extended depth of field. Micromachines, 12(12): 1453. doi: 10.3390/mi12121453
[8] Schleuniger P., Herrera Leclerc R. A., Brunel M., et al. 2024. Experimental three-dimensional location and size distribution of rising bubbles in a cylindrical column through light field imaging. Physics of Fluids, 36(10).
[9] Li M., Hou X., Zhao W., et al. 2025. Optical design for 3D detection of surface defects on ultra-precision curved optical elements based on micro structured-light. Optics & Laser Technology, 184: 112436
[10] Eberhart M. 2021. Efficient computation of backprojection arrays for 3D light field deconvolution. Optics Express, 29(15): 24129−24143. doi: 10.1364/OE.431174
[11] Sasaki T., Leger J. R. 2020. Light field reconstruction from scattered light using plenoptic data. Journal of the Optical Society of America A, 37(4): 653−670. doi: 10.1364/JOSAA.378714
[12] Labussière M., Teulière C., Bernardin F., et al. 2022. Leveraging blur information for plenoptic camera calibration. International Journal of Computer Vision, 130(7): 1655−1677. doi: 10.1007/s11263-022-01582-z
[13] Fleith A., Ahmed D., Cremers D., et al. 2024. LiFCal: online light field camera calibration via bundle adjustment. arXiv preprint arXiv:2408.11682.
[14] Chen B., Pan B. 2018. Full-field surface 3D shape and displacement measurements using an unfocused plenoptic camera. Experimental Mechanics, 58: 831−845. doi: 10.1007/s11340-018-0383-6
[15] Ammann S., Orlando G., Pierer J., et al. 2021. Enhancing the performance of light field camera by pattern projection. Optical Engineering, 60(3).
[16] Tsai Y. J., Liu Y. L., Ouhyoung M., et al. 2020. Attention-based view selection networks for light-field disparity estimation. In Proceedings of the AAAI Conference on Artificial Intelligence, 34(7): 12095−12103.
[17] Huang Z., Fessler J. A., Norris T. B., et al. 2020. Light-field reconstruction and depth estimation from focal stack images using convolutional neural networks. In ICASSP 2020, IEEE International Conference on Acoustics, Speech and Signal Processing: 8648−8652.
[18] Chen M., Li Z., Ye M., et al. 2022. All-in-focus polarimetric imaging based on an integrated plenoptic camera with a key electrically tunable LC device. Micromachines, 13(2): 192. doi: 10.3390/mi13020192
[19] Michels T., Mäckelmann D., Koch R. 2024. Mind the exit pupil gap: revisiting the intrinsics of a standard plenoptic camera. Sensors, 24(8): 2522. doi: 10.3390/s24082522
[20] Adelson E. H., Bergen J. R. 1991. The plenoptic function and the elements of early vision. Computational Models of Visual Processing, 1(8): 3
[21] Fotheringham U., Pannhorst W., Fischer R. E., et al. 2005. Design of a Tessar lens including a diffractive optical element. In Optical Design and Engineering II, SPIE, 5962: 403−414.
[22] Zhang L. Q., Rao X. J., Bao H., et al. 2024. Solar adaptive optics systems for the New Vacuum Solar Telescope at the Fuxian Lake Solar Observatory. Astronomical Techniques and Instruments, 1(2): 95−104
[23] Wang Y., Xu H., Wang R., et al. 2021. Zernike mode analysis of an adaptive optics system for horizontal free-space laser communication. Journal of Optics, 23(10): 105701. doi: 10.1088/2040-8986/ac1f36
[24] Wang Y., Li D., et al. 2022. Eigenmode wavefront decoupling algorithm for LC–DM adaptive optics systems. Applied Sciences, 12(5): 7875
[25] Wang Y., Xu H., Li D., et al. 2018. Performance analysis of an adaptive optics system for free-space optics communication through atmospheric turbulence. Scientific Reports, 8(1): 1124. doi: 10.1038/s41598-018-19559-9
[26] Du Z. M., Lin Q., Rao X. J., et al. 2024. The Educational Adaptive-optics Solar Telescope at the Shanghai Astronomy Museum. Astronomical Techniques and Instruments, 1(3): 171−178. doi: 10.61977/ati2024009
[27] Guo X. H., Hu Y., Li J., et al. 2024. Baseline design of the KunLun Turbulence Profiler instrument. Astronomical Techniques and Instruments, 1(4): 16−24
[28] Ardebili M., Saavedra G. 2023. Analytic plenoptic camera diffraction model and radial distortion analysis due to vignetting. Journal of the Optical Society of America A, 40(7): 1451−1467. doi: 10.1364/JOSAA.485284
[29] Nie J., Wang Y., Wang D., et al. 2024. Method for extracting optical element information using optical coherence tomography. Sensors, 24(21): 6953. doi: 10.3390/s24216953