
Citation: Xie, H. W., Zhi, H., Kang, Z., et al. 2025. Photometry-free sky area visibility estimation method for All-sky Camera. Astronomical Techniques and Instruments, 2(1): 52−64. https://doi.org/10.61977/ati2024056.
Observatories typically deploy all-sky cameras for monitoring cloud cover and weather conditions. However, many of these cameras lack scientific-grade sensors, resulting in limited photometric precision, which makes calculating the sky area visibility distribution via extinction measurement challenging. To address this issue, we propose the Photometry-Free Sky Area Visibility Estimation (PFSAVE) method. This method uses the standard magnitude of the faintest star observed within a given sky area to estimate visibility. By employing a per-transformation re-fitting optimization strategy, we achieve a high-precision coordinate transformation model with an accuracy of 0.42 pixels. Using the results of Source Extractor, we propose screening criteria to reduce false detections. HEALPix segmentation is also introduced to achieve high spatial resolution. Comprehensive analysis based on real all-sky images demonstrates that our method exhibits higher accuracy than the extinction-based method. Our method supports both manual and robotic dynamic scheduling, especially under partially cloudy conditions.
The Space-based multiband astronomical Variable Object Monitor (SVOM) was launched by a Long March-2C rocket from the Xichang Satellite Launch Center, the culmination of nearly two decades of collaboration between Chinese and French scientists. SVOM is an important space-based gamma-ray burst (GRB) observation project[1] that requires ground-based follow-up to observe the prompt optical emission associated with GRBs. One of the optical ground-based follow-up instruments, the China Ground Follow-up Telescope (C-GFT), is the 1.2-m telescope at Jilin Observatory (JLO). In this mission, C-GFT is responsible for observing the optical afterglows of GRBs[2]. To respond rapidly to follow-up alerts and use telescope time more efficiently, especially during periods of partial cloud cover, a robotic telescope scheduler informed by the real-time sky area visibility distribution is necessary.
The sky area visibility distribution strongly influences the data quality of ground-based optical observation. To monitor the sky area, a sky quality monitor (SQM)[3] and all-sky camera[4–6] are commonly deployed by observatories.
An SQM is a high-sensitivity photometer with a wide field of view designed to measure sky brightness and report it in units of magnitudes per square arcsecond. An SQM provides a scalar value representing the sky brightness of one specific sky area at a time. Because of its limited spatial resolution, it cannot simultaneously record the sky brightness distribution across the entire sky.
All-sky cameras offer better spatial resolution than SQMs, as they periodically capture synchronous all-sky images from zenith to horizon. These images can be used in many applications, such as site testing[4], observational environment analysis[7], and guiding robotic observation[8]. Robotic observation systems are designed to automate observational tasks, enabling rapid responses to changes in weather, reducing the workload of observation assistants, and enhancing telescope effectiveness[9, 10]. All-sky images provide real-time weather conditions and sky area visibility, which assists in assessing the quality of candidate tasks. By analyzing these images, we can identify areas of the sky with better visibility, assisting both robotic schedulers and human observers to make informed decisions.
Therefore, the primary challenge lies in estimating the sky area visibility distribution from all-sky images captured by cameras equipped with non-scientific-grade sensors. The resulting estimates should exhibit a high level of spatial resolution and contribute to the decision-making process for enhancing overall observation quality.
To address this, we propose the PFSAVE method. Designed for all-sky cameras without scientific-grade sensors, PFSAVE relies on high-precision star recognition to estimate sky area visibility rather than on the photometric precision of the camera. The visibility estimation depends on whether faint stars can be detected in the area. To enhance the accuracy of visibility, a high-precision coordinate transformation model is incorporated. Additionally, PFSAVE employs Hierarchical Equal Area isoLatitude Pixelization (HEALPix) segmentation to provide discrete data, which is convenient for automated observation decision-making.
This paper is organized as follows. Section 2 describes the proposed PFSAVE method in detail. Section 3 introduces the observatory and the all-sky camera, followed by a complete description of the PFSAVE process. Section 4 presents qualitative and quantitative analyses. Finally, the paper concludes in Section 5.
The two existing categories of methods show limited performance when used with cameras lacking scientific-grade sensors. Cloud-based approaches use image segmentation to label clouds in all-sky images and assess their thickness. However, these methods rely on light reflection for cloud detection, making them unsuitable for locations with low light pollution. The spatial resolution of cloud edge detection remains a challenge, limiting the effectiveness of these techniques. By contrast, star-based methods use photometry and extinction estimation to evaluate sky area visibility[11]. However, these methods require high photometric precision, which is difficult to achieve with all-sky cameras that lack scientific-grade sensors.
The color system of our all-sky camera, which is equipped with general commercial BAYER RGB filters, is inconsistent with any astronomical color system. These filters have a wide passband, and their center wavelengths are designed for human vision rather than astronomical photometry. Table 1 lists the bands of the BAYER RGB, human eye, and astronomical color systems such as TYCHO[12] and JC UBVRI[13]. Fig. 1 presents a more direct comparison. Converting from BAYER RGB to an astronomical color system would therefore introduce significant photometric error.
| Category | Color system | Band | Wavelength/Å |
| Astronomical | JC UBVRI | U | |
| | | B | |
| | | V | |
| | | R | |
| | | I | |
| | TYCHO | BT | |
| | | VT | |
| RGB | BAYER | B | |
| | | G | |
| | | R | |
| | Human eye | B | |
| | | G | |
| | | R | |
Moreover, interpolation (demosaicing), an essential step in converting data from the BAYER format to the three-channel RGB format, also affects photometric accuracy. BAYER RGB filters record three colors in one shot by arranging the filters in an array. Each pixel has one color filter, as shown in Fig. 2, of which 1/4 are red, 1/4 are blue, and 1/2 are green. To fill the missing pixels in each color channel, many demosaicing algorithms[14,15] have been developed in the computer vision domain. However, these algorithms are designed for human vision and do not account for their impact on astronomical photometry. Additionally, the demosaicing process introduces positional errors during interpolation.
An all-sky image from a BAYER RGB sensor is easily interpreted by astronomers and observation assistants, but it is plagued by significant color conversion and interpolation errors. These errors compromise photometric accuracy, making extinction-based methods ineffective.
To estimate sky area visibility without relying on the photometric accuracy of all-sky cameras, we propose PFSAVE. Our method implicitly represents both the degradation of star signals caused by clouds and the increase in noise due to light pollution and moonlight. Given an all-sky image I, the observing time t_{\rm obs}, and the site location L_{\rm obs}, PFSAVE estimates the sky area visibility distribution Q_{\rm vis} in the horizontal coordinate system. The concept of PFSAVE can be described as follows:
V(I,\ t_{\rm obs},\ L_{\rm obs})\rightarrow Q_{\rm vis}\ , | (1)
where V(∗) refers to the proposed PFSAVE.
For a given sky area, its visibility Q_{\rm vis} is defined as follows:
Q_{\rm vis}=\frac{\max{{{m}}_{\rm obs}}-\min{{{m}}}}{\max{{{m}}_{\rm cat}}-\min{{{m}}}}\ , | (2)
where \max{{{m}}_{\rm obs}} is the standard magnitude of the faintest observed star in the given sky area and \max{{{m}}_{\rm cat}} is the standard magnitude of the faintest detectable star in this area when the sky is clear. We use the VT magnitude from TYCHO-2 as the standard magnitude. VT is one of the TYCHO colors and has an effective wavelength of 532 nm, whereas BT is the other TYCHO color, with an effective wavelength of 426 nm[12]. Because the limiting magnitude varies across the sky owing to the non-uniform distribution of stars (as shown in Fig. 3), the denominator must be calculated separately for each sky area rather than taken as a single value. Here, \min{{{m}}} is the minimum VT magnitude of the TYCHO-2 stars in the area. This extra term ensures that Q_{\rm vis} remains positive under all conditions and ranges from 0 to 1.
When the sky area is clear, \max{{{m}}_{\text{obs}}} will be equal to \max{{{m}}_{\text{cat}}} , and Q_{\text{vis}} will be equal to 1. When the sky area is covered by clouds or influenced by moonlight or light pollution, \max{{{m}}_{\text{obs}}} will be smaller than \max{{{m}}_{\text{cat}}} , and Q_{\text{vis}} will be smaller than 1. Additionally, we define that if no star is observed in a sky area, i.e., {{m}}_{\text{obs}} = 0 , Q_{\text{vis}} will be set to 0.
Q_{\text{vis}} uniformly reflects two factors: degradation of the signal and an increase in noise. Clouds and large airmass cause extinction, degrading the signal, whereas moonlight and light pollution brighten the sky background, increasing the noise. Both factors decrease the accuracy of faint star detection.
Sky area visibility is different from the visibility defined in Reference [16], which refers to the minimum brightness threshold at which a target is distinguishable using given equipment under specific sky brightness conditions. In contrast, our visibility metric describes the observing conditions of a given sky area. Unlike star-based methods, our approach does not rely on photometric precision; thus, it is suitable for all-sky cameras. Furthermore, in contrast to cloud-based methods, our technique provides a higher spatial resolution, facilitating improved robotic observation.
In this section, we introduce the detailed process by which we analyze all-sky images to estimate the sky area visibility distribution. Initially, we provide a brief overview of the observatory and the specifications of our all-sky camera. Subsequently, we describe the processing pipeline, which includes image preprocessing (Section 3.2), coordinate transformation (Section 3.3), star matching and recognition (Section 3.4), sky area segmentation (Section 3.5), and sky area visibility estimation (Section 3.6). The entire pipeline is illustrated in Fig. 4.
Jilin Observatory (JLO, IAU code P61) is operated by Changchun Observatory (CHO), National Astronomical Observatories, Chinese Academy of Sciences. JLO is situated in Jilin, China, with geographical coordinates of (126.33^\circ \mathrm{E}, 43.82^\circ \mathrm{N}) [17]. The observatory houses many ground-based optical telescopes, including the C-GFT, making it a significant site for ground-based follow-up observations of gamma-ray bursts, particularly in response to alerts from SVOM.
At JLO, the CHO Optoelectronic Observation Research Laboratory installed an all-sky camera (as shown in Fig. 5) to monitor and record sky conditions. The specifications of the all-sky camera (ALCOR OMEA 3C, France) are detailed in Table 2. It is equipped with a SONY IMX178LQJ-C CMOS sensor, which is sensitive to low-light conditions and suitable for nighttime observations. The exposure time for each capture is determined by ALCOR SKYWATCH.
Item | Parameter |
Model | ALCOR OMEA 3C |
Camera | ZWO ASI 178MC |
Sensor | Sony IMX178LQJ-C |
Focal length | 1.8 mm |
f-number | f/1.6 |
Pixel size | 2.4 \mu m |
Image size | |
Sensor size | 7.4 mm \times 5 mm |
Sensor frame | 1/1.8'' |
Field of view | 180^\circ \times 180^\circ |
We chose to use all-sky images saved in the FITS format rather than the JPEG format. The all-sky camera saves images in both JPEG and FITS formats[18]; however, the JPEG output applies auto white balance (AWB), which enhances the visual effect for human perception. AWB alters the color distribution, affecting the pixel amplitude distribution. Additionally, the JPEG format is stored in 8 bits, which compresses the image's dynamic range. These factors complicate the extraction of faint stars from JPEG images. In contrast, the FITS format preserves the raw image data in 14 bits without additional processing, making it more suitable for the extraction of faint stars.
To convert a color image to grayscale, we use the following formula:
GRAY=\frac{1}{3}(R+G+B)\ . | (3) |
The arithmetic average avoids introducing additional color bias into the pixel amplitude distribution, thereby enhancing the accuracy of star extraction.
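As a minimal sketch of this preprocessing step (the file name is hypothetical, and we assume the frame is available as a three-channel array; this is an illustration, not necessarily the exact pipeline code), Equation (3) amounts to a plain channel average:

```python
import numpy as np
from astropy.io import fits

# Hypothetical file name; the camera writes one FITS frame per exposure.
with fits.open("allsky_2024_05_14__20_52_51.fits") as hdul:
    rgb = hdul[0].data.astype(np.float64)  # assumed shape: (3, H, W)

# Equation (3): unweighted average of R, G, and B, which avoids
# introducing additional color bias into the pixel amplitude distribution.
gray = rgb.mean(axis=0)
```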
PFSAVE depends on accurate star recognition. Therefore, a high-precision coordinate transformation is required to map the horizontal coordinate system onto the photographic coordinate system.
The coordinate dataset is constructed using pairs of horizontal coordinates and their corresponding photographic coordinates, indicating the mapping relationship between the two coordinate systems. To generate this dataset, we selected a clear-night all-sky image, plotted the catalog stars, and extracted the source centers. The image used to construct the dataset is displayed in Fig. 6. In the figure, the white number indicates the extracted star ID, the fuchsia circle its location, the red number the star index in the catalog, and the olive round marker its position.
To mark catalog stars in the all-sky image, we use {{\text{astroplan}}}[19] to calculate the horizontal coordinates (azimuth, altitude) of catalog stars based on the observatory location and observation time. Then, with the initial camera model parameters (zenith coordinates, field of view radius, and north offset) provided by ALCOR SKYWATCH, we compute the positions of the catalog stars in the image coordinate system according to Algorithm 1.
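For instance, the horizontal coordinates of a catalog star can be computed with astroplan as follows (a sketch; the star, its coordinates, and the timestamp are illustrative):

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.time import Time
from astroplan import FixedTarget, Observer

# JLO site coordinates from Section 3.1.
jlo = Observer(longitude=126.33 * u.deg, latitude=43.82 * u.deg)

# Illustrative catalog star (Polaris) and observation time (UTC).
star = FixedTarget(SkyCoord(ra=37.95 * u.deg, dec=89.26 * u.deg), name="Polaris")
altaz = jlo.altaz(Time("2024-05-14 20:52:51"), star)
print(altaz.az.deg, altaz.alt.deg)
```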
Star extraction on the all-sky image was performed using {\text{SEP}}[20], a Python wrapper for {\text{Source Extractor}}[21]. Since the extraction accuracy is higher for bright stars and their central positions are more precise, we retained only the brighter point sources.
Using the image in Fig. 6, we constructed the coordinate dataset by recording the data pairs of the horizontal coordinates and the photographic coordinates. Table 3 shows the first 10 records of the dataset, in which the columns "calculated x" and "calculated y" represent the photographic coordinates calculated by Algorithm 1, and the columns "extracted x" and "extracted y" represent the source centers extracted by {\text{SEP}}.
Algorithm 1: Calculate photo coordinates for catalog stars
Data: horizontal coordinate (az,\ alt) , the distance between the zenith and the horizon s , the north direction \theta_{\mathrm{north}} , the zenith coordinate in the photo coordinate system (c_x,\ c_y)
Result: photo coordinates (x,\ y)
1 for each star i in the catalog that has risen do
2 \rho \leftarrow 90 - alt ;
3 \rho \leftarrow s(\rho / 90) ;
4 \theta\leftarrow az+\theta_{\mathrm{north}} ;
5 x\leftarrow\rho\cos\theta+c_x ;
6 y\leftarrow\rho\sin\theta+c_y ;
7 end
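For reference, a direct Python transcription of Algorithm 1 might look as follows (a sketch; the vectorized form and argument names are ours):

```python
import numpy as np

def catalog_to_photo(az, alt, s, theta_north, cx, cy):
    """Algorithm 1: horizontal (az, alt) in degrees -> photo coordinates (x, y).

    s: distance between the zenith and the horizon in pixels.
    theta_north: north direction in degrees.
    (cx, cy): zenith coordinates in the photo coordinate system.
    """
    az, alt = np.asarray(az), np.asarray(alt)
    rho = s * (90.0 - alt) / 90.0          # lines 2-3: equidistant radius
    theta = np.deg2rad(az + theta_north)   # line 4: rotate by north direction
    return rho * np.cos(theta) + cx, rho * np.sin(theta) + cy  # lines 5-6

# Initial camera parameters provided by ALCOR SKYWATCH (Section 3.3).
x, y = catalog_to_photo(az=148.26, alt=29.33, s=983,
                        theta_north=155.6, cx=1053, cy=1063)
```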
TYCHO ID | alt/(°) | az/(°) | Calculated x/pix | Calculated y/pix | Extracted x/pix | Extracted y/pix |
29.33 | 148.26 | 512.76 | 517.11 | |||
48.16 | 113.14 | 606.14 | 604.24 | |||
187- | 25.91 | 250.90 | ||||
27.54 | 308.65 | 885.08 | 884.06 | |||
66.60 | 64.94 | 858.80 | 896.91 | 858.89 | 894.94 | |
72.48 | 39.75 | 868.45 | 867.42 | |||
833- | 53.13 | 216.37 | ||||
1920- | 42.18 | 270.43 | ||||
42.42 | 276.52 | |||||
33.22 | 302.83 | 962.13 | 960.95 | |||
... | ... | ... | ... | ... | ... | ... |
A camera model describes the relationship between real-world coordinates and those in the photograph. Our coordinate transformation model is based on the classic fish-eye camera model[22]. This model comprises three concatenated transformations: the rotation and translation transformation, the projection transformation, and the radial and tangential distortion transformation. For a given point P(az,\ alt) in the horizontal coordinate system, the photographic coordinates P(x,\ y) are calculated as follows:
(1) Use the initial camera parameters to transform P(az,\ alt) into polar coordinates P(\rho,\ \theta) by
\rho=s\left(1-\frac{2 alt}{{\text{π}}}\right)\ , | (4) |
\theta=az-\theta_{{\mathrm{north}}}\ , | (5) |
where s is the distance between the zenith and the horizon in pixels, and \theta_{\mathrm{north}} is the north offset.
(2) Use the projection transformation to correct radius \rho to radial distance r as
r=k_0\rho+k_1\rho^3+k_2\rho^5+k_3\rho^7+k_4\rho^9\ , | (6) |
where k_0,\ k_1,\ k_2,\ k_3,\ k_4 are the projection coefficients. Only odd power terms are included, following Kannala and Brandt's conclusion[22].
(3) Correct the radial and tangential distortions to obtain P(r',\ \theta') using
r'=r+\Delta_r(\rho,\; \theta)\ , | (7) |
\theta'=\theta+\Delta_{\theta}(\rho,\; \theta)\ , | (8) |
where the radial distortion term \Delta_r(\rho,\ \theta) is
\begin{split} \Delta_r(\rho,\theta)= & (\delta_{r_1}\rho+\delta_{r_2}\rho^3+\delta_{r_3}\rho^5)(\delta_{r_4}\cos\theta+\delta_{r_5}\sin\theta+ \\ &\delta_{r_6}\cos2\theta+\delta_{r_7}\sin2\theta)\ , \end{split} | (9) |
and the tangential distortion term \Delta_{\theta}(\rho,\ \theta) is
\begin{split} \Delta_{\theta}(\rho,\theta)= & (\delta_{\theta_1}\rho+\delta_{\theta_2}\rho^3+\delta_{\theta_3}\rho^5)(\delta_{\theta_4}\cos\theta+ \\ &\delta_{\theta_5}\sin\theta+\delta_{\theta_6}\cos2\theta+\delta_{\theta_7}\sin2\theta)\ . \end{split} | (10) |
(4) Transform P(r', \theta') into Cartesian coordinates P(x, y) using
x=r'\cos\theta'\ , | (11) |
y=r'\sin\theta'\ . | (12) |
(5) Transform the Cartesian coordinates P(x,\ y) into photographic coordinates P(u,\ v) , moving the origin to the top-left of the image as follows:
u=x+x_c\ , | (13) |
v=y+y_c\ , | (14) |
where P_c(x_c,\ y_c) are the zenith coordinates in the photographic coordinate system, and u and v respectively correspond to the x- and y-axes of the photographic coordinate system. This step aims to make the photographic coordinate system consistent with the coordinate system used by computer image processing, which regards the image as a matrix with the origin at the top-left.
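Putting Equations (4)–(14) together, the full forward transformation can be sketched in Python as follows (the parameter dictionary p collects the 23 coefficients summarized in Table 4 below; the values themselves are placeholders to be fitted):

```python
import numpy as np

def forward_model(az, alt, p):
    """Horizontal (az, alt) in degrees -> photographic (u, v); Eqs. (4)-(14).

    p: dict holding the 23 coefficients of Table 4:
       "s", "theta_north", "cx", "cy", "k" (5), "dr" (7), "dt" (7).
    """
    az, alt = np.deg2rad(az), np.deg2rad(alt)
    rho = p["s"] * (1 - 2 * alt / np.pi)                 # Eq. (4)
    theta = az - np.deg2rad(p["theta_north"])            # Eq. (5)

    # Eq. (6): odd powers only, following Kannala and Brandt.
    r = sum(k * rho ** (2 * i + 1) for i, k in enumerate(p["k"]))

    def distortion(c):                                   # Eqs. (9), (10)
        return ((c[0] * rho + c[1] * rho**3 + c[2] * rho**5) *
                (c[3] * np.cos(theta) + c[4] * np.sin(theta) +
                 c[5] * np.cos(2 * theta) + c[6] * np.sin(2 * theta)))

    r_d = r + distortion(p["dr"])                        # Eq. (7)
    theta_d = theta + distortion(p["dt"])                # Eq. (8)
    u = r_d * np.cos(theta_d) + p["cx"]                  # Eqs. (11), (13)
    v = r_d * np.sin(theta_d) + p["cy"]                  # Eqs. (12), (14)
    return u, v
```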
Our coordinate transformation model involves 23 undetermined coefficients: the distance between the zenith and the horizon in pixels, denoted as the zenith distance s ; the north offset \theta_{\mathrm{north}} ; the zenith position in the photographic coordinate system P_c(x_c,\ y_c) ; five projection coefficients k_0,\ k_1,\ k_2,\ k_3,\ {\mathrm{and}}\ k_4 ; seven radial distortion coefficients \delta_{r_1},\ \delta_{r_2},\ \delta_{r_3},\ \delta_{r_4},\ \delta_{r_5},\ \delta_{r_6},\ {\mathrm{and}}\ \delta_{r_7} ; and seven tangential distortion coefficients \delta_{\theta_1},\ \delta_{\theta_2},\ \delta_{\theta_3},\ \delta_{\theta_4},\ \delta_{\theta_5},\ \delta_{\theta_6},\ {\mathrm{and}}\ \delta_{\theta_7} . Table 4 summarizes these coefficients. ALCOR SKYWATCH provides the initial camera parameters, including the zenith distance s = 983 pixels, north offset \theta_{\mathrm{north}}=155.6^{\circ} , and zenith position P_c = (1053,\ 1063) . These parameters serve as initial values throughout the process until the final step, in which we solve these three parameters along with the other 20 coefficients.
Coefficients | Description | Count |
s | Zenith distance in pixel | 1 |
\theta\mathrm{_{north}} | North offset in degree | 1 |
P_c(x_c,\ y_c) | Zenith position in photographic coordinate system | 2 |
k_0,\ k_1,\ k_2,\ k_3,\ k_4 | Projection coefficients | 5 |
\delta_{r_1},\ \delta_{r_2},\ \delta_{r_3},\ \delta_{r_4},\ \delta_{r_5},\ \delta_{r_6},\ \delta_{r_7} | Radial distortion coefficients | 7 |
\delta_{\theta_1},\ \delta_{\theta_2},\ \delta_{\theta_3},\ \delta_{\theta_4},\ \delta_{\theta_5},\ \delta_{\theta_6},\ \delta_{\theta_7} | Tangential distortion coefficients | 7 |
TOTAL | 23 |
We propose a per-transformation re-fitting procedure, illustrated in Fig. 7, to solve the model. This process consists of the following five steps (a minimal fitting sketch follows the list):
Step 1 Solve the projection coefficients k_i by minimizing the error in the polar coordinate system. We use the initial parameters s = 983 , \theta\mathrm{_{north}}=155.6^{\circ} , and P_c = (1\;053, 1\;063) to transform the horizontal coordinates into the polar coordinates as the fitting objects.
Step 2 Solve the radial distortion coefficients \delta_r and the tangential distortion coefficients \delta_{\theta} by minimizing the error in the polar coordinate system.
Step 3 Use the results of Steps 1 and 2 as the initial values to solve these coefficients again by minimizing the error in the polar coordinate system. At this stage, we have all the transformation coefficients in the polar coordinate system.
Step 4 Solve the zenith distance s , north offset \theta_{\mathrm{north}} and zenith position P_c(x_c, y_c) by minimizing the error in the photographic coordinate system.
Step 5 Use the results of Step 3 and Step 4 as the initial values and handle the transformations as a whole to solve all 23 coefficients k_i , \delta_r , \delta_{\theta} , s , \theta\mathrm{_{north}} , and (x_c,\ y_c) by minimizing the error in the photographic coordinate system.
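As an illustration of Step 5, the joint re-fit can be written with scipy.optimize.least_squares (a sketch assuming the forward_model function sketched above and the matched pairs of Table 3; Steps 1–4, which fit subsets of the coefficients in the polar and photographic systems, follow the same pattern):

```python
import numpy as np
from scipy.optimize import least_squares

def refit_all(az, alt, x_ext, y_ext, p0):
    """Step 5: jointly re-fit all 23 coefficients.

    az, alt: horizontal coordinates of the matched catalog stars (degrees).
    x_ext, y_ext: extracted source centers from Table 3 (pixels).
    p0: 23-element initial vector from Steps 3 and 4.
    """
    def unpack(pvec):
        return {"s": pvec[0], "theta_north": pvec[1],
                "cx": pvec[2], "cy": pvec[3],
                "k": pvec[4:9], "dr": pvec[9:16], "dt": pvec[16:23]}

    def residuals(pvec):
        u, v_pix = forward_model(az, alt, unpack(pvec))
        # The error is minimized in the photographic coordinate system.
        return np.concatenate([u - x_ext, v_pix - y_ext])

    return least_squares(residuals, p0)
```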
The fitting errors decrease step-by-step as shown in Fig. 8. Although the model is fitted in the polar coordinate system from Step 1 to Step 3, the error distribution graphs in Fig. 8 are plotted in the photographic coordinate system for consistent comparison of errors. Despite changes in error distribution during the initial three steps of fitting, there was not a pronounced overall trend of error reduction. However, after re-fitting the coefficients in Step 4, the error significantly decreased to 0.83 pixels. Subsequently, using the results from Step 3 and Step 4 as initial values and re-fitting all 23 undetermined coefficients in Step 5 led to a significant reduction in the fitting error to 0.49 pixels. This represents a 40.96\% decrease over independent fitting (Step 4), with a corresponding 30.77\% decrease in the standard deviation, indicating a substantial improvement in fitting accuracy. Table 5 provides an overview of the changes in these errors throughout the process.
Step | Mean/pixel | Standard deviation/pixel | Median/pixel |
1 | 3.04 | 1.31 | 3.10 |
2 | 2.17 | 1.03 | 2.05 |
3 | 2.11 | 1.01 | 2.05 |
4 | 0.83 | 0.52 | 0.76 |
5 | 0.49 | 0.36 | 0.42 |
To demonstrate the overall performance of our coordinate transformation model, we plotted the altitude distribution in the photographic coordinate system in Fig. 9. The color of each pixel corresponds to its altitude. A perfectly circular distribution of colors indicates the high quality of the transformation.
We use the high-precision coordinate transformation model presented in Section 3.3 to obtain the photographic coordinates of catalog stars. Then, we match the catalog stars with the extracted stars to recognize the stars in the all-sky images. After recognition, we determine the standard VT magnitude of the recognized stars.
We extract the star points (extracted stars) from the all-sky images using {\text{Source Extractor}}. Since {\text{Source Extractor}} is not specifically designed for all-sky images, the extracted stars may contain some false detections. To eliminate them, we implement three screening criteria: the signal-to-noise ratio ( R_{\mathrm{SN}} ), the T-metric, and the semi-major axis ( a ).
(1) R_{\mathrm{SN}} : The R_{\mathrm{SN}} is defined as the ratio of the peak value of the star to the background noise. A higher R_{\mathrm{SN}} indicates a more reliable star. Generally, reliable stars have an R_{\mathrm{SN}} greater than 3.0; therefore, we retain sources with R_{\mathrm{SN}} values greater than 3.0.
(2) T-metric: The T-metric is the ratio of the number of object pixels ( n_{\mathrm{pix}} ) to R_{\mathrm{SN}} . We found that some false detections extracted by {\text{Source Extractor}} have a high R_{\mathrm{SN}} because they have a large n_{\mathrm{pix}} caused by the texture of the clouds. Given the angular resolution of the all-sky camera, n_{\mathrm{pix}} should not be too large. Therefore, we designed the T-metric to combine R_{\mathrm{SN}} and n_{\mathrm{pix}} . The T-metric is defined as
T=\frac{n_{\mathrm{pix}}}{R_{\mathrm{SN}}}. | (15) |
In our case, we keep the sources with values of T less than 7.2.
(3) Semi-major axis ( a ): The semi-major axis ( a ) is given by {\text{Source Extractor}} and represents the size of the source. Given the angular resolution of the all-sky camera, the size of a star should not be too large; our analysis of the size distribution showed that false detections tend to have a larger a than correctly extracted points. Therefore, we keep the sources whose a is lower than 10.
The thresholds for the screening criteria are specific to the data being analyzed. These thresholds have been established through a rigorous analysis of the data captured by our all-sky camera. It is important to note that when applying these criteria to other sets of all-sky images, corresponding adjustments to the thresholds may be necessary to ensure accurate and reliable results.
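A sketch of the extraction and screening follows (the field names peak, npix, and a follow the SEP documentation; the 1.5σ detection threshold is our assumption, while the three cuts use the thresholds stated above):

```python
import numpy as np
import sep

def screen_sources(gray):
    """Extract sources with SEP and apply the three screening criteria."""
    data = np.ascontiguousarray(gray, dtype=np.float32)
    bkg = sep.Background(data)
    objs = sep.extract(data - bkg.back(), thresh=1.5, err=bkg.globalrms)

    rsn = objs["peak"] / bkg.globalrms          # (1) signal-to-noise ratio
    t_metric = objs["npix"] / rsn               # (2) T-metric, Eq. (15)
    keep = (rsn > 3.0) & (t_metric < 7.2) & (objs["a"] < 10)  # (3) size cut
    return objs[keep]
```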
After the screening process, a set of stars extracted with high confidence is obtained. To identify these stars, we compare them with the catalog stars in the photographic coordinate system using the Euclidean distance. Before matching, the catalog stars are sorted in ascending order based on their VT magnitude. The matching process follows this sorted order, prioritizing brighter stars for matching first. If the distance between a catalog star and an extracted star is less than 5 pixels, the extracted star is considered a match with the catalog star. This approach is based on the assumption that within a small area, only the brightest star can be detected by all-sky cameras because of their limited angular resolution.
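The brightness-prioritized matching can be sketched as follows (the KD-tree is our choice for the nearest-neighbor search; the paper specifies only the 5-pixel criterion and the VT ordering):

```python
import numpy as np
from scipy.spatial import cKDTree

def match_stars(cat_xy, cat_vt, ext_xy, max_dist=5.0):
    """Match catalog stars (brightest first) to extracted stars.

    Returns a list of (catalog_index, extracted_index) pairs.
    """
    tree = cKDTree(ext_xy)
    taken = set()
    matches = []
    for i in np.argsort(cat_vt):              # brighter (lower VT) stars first
        d, j = tree.query(cat_xy[i])          # nearest extracted source
        if d < max_dist and j not in taken:
            matches.append((i, j))
            taken.add(j)
    return matches
```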
Fig. 10 presents the performance of star matching and recognition in different sky conditions. When the sky is clear, a high number of stars are recognized across all sky areas. When it is cloudy, stars obstructed by clouds are not recognized. When it is overcast, all sky areas are covered by clouds, and only a few false recognitions exist.
Sky area segmentation is required to provide astronomers and observation assistants with sky area visibility at a high spatial resolution. The segmentation must be into equal areas to ensure comparability. Therefore, we introduce HEALPix to divide the sky into equal-area grids within the horizontal coordinate system. This approach achieves discrete spatial resolution, which is beneficial for robotic observation.
HEALPix was proposed by Górski et al.[23] to divide the sphere into equal-area pixels (grids). It provides a segmentation, or pixelization, of the sphere at a fixed resolution.
When using HEALPix to segment the sky area, the parameter N_{\mathrm{side}} defines the resolution of the segmentation. Fig. 11 illustrates the segmentation with different N_{\mathrm{side}} values. A higher N_{\mathrm{side}} provides higher spatial resolution; however, if the segmentation is too fine, the number of stars in some grids may be too small to effectively estimate the sky area visibility. Therefore, based on the specifications of our all-sky camera and balancing performance with spatial resolution, N_{\mathrm{side}} was set to 8, resulting in 384 grids covering the entire sky area from zenith to horizon. The number of grids is calculated by
\frac{1}{2}\times(12\times N_{\mathrm{side}}^2)=\frac{1}{2}\times(12\times8^2)=384, | (16) |
where the factor \dfrac{1}{2} is used because only half of the sphere is visible in the horizontal coordinate system.
The official HEALPix library can only run on Unix-like systems, such as Linux or macOS. Fortunately, {\text{Astropy}}[24–26] has released {\text{astropy-healpix}}, a Python implementation that also runs on other platforms, which we use in this work.
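With astropy-healpix, assigning a grid index to a star's horizontal coordinates can be sketched as follows (we treat azimuth as longitude and altitude as latitude, matching the horizontal-coordinate segmentation described above):

```python
from astropy import units as u
from astropy_healpix import HEALPix

hp = HEALPix(nside=8, order="ring")  # 12 * 8**2 = 768 pixels on the sphere

def grid_index(az_deg, alt_deg):
    """HEALPix grid index of a position given in horizontal coordinates."""
    return hp.lonlat_to_healpix(az_deg * u.deg, alt_deg * u.deg)

# Only grids with positive altitude (384 of 768) are visible, per Eq. (16).
print(grid_index(148.26, 29.33))
```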
To estimate the sky area visibility of a sky area, we require the observed stars and catalog stars within this area. The observed stars are identified using the method described in Section 3.4. Let {{S}}_{\rm obs} be the set of observed stars, which includes the TYCHO ID, VT magnitude, and horizontal coordinates. Let {{S}}_{\rm cat} be the set of catalog stars above the horizon at the corresponding time. The horizontal coordinates are calculated using {\text{astroplan}}. According to the segmentation described in Section 3.5, we assign the HEALPix index to the observed stars {{S}}_{\rm obs} and catalog stars {{S}}_{\rm cat} . Let {{S}}_{\rm obs}^{(i)} and {{S}}_{\rm cat}^{(i)} be the sets of observed stars and catalog stars in the i -th grid, respectively.
For the i -th sky area (the i -th grid), the faintest observed magnitude is \max {{{m}}_{\rm obs}^{(i)}} and the faintest catalog magnitude is \max{{{m}}_{\rm cat}^{(i)}} . Based on Equation (2), the visibility of the i -th sky area is calculated by
Q_{{\mathrm{vis}}}^{(i)} = \frac{\max{{{m}}_{\rm obs}^{(i)}}-\min{{{m}}}} {\max{{{m}}_{\rm cat}^{(i)} }-\min{{{m}}}}, | (17) |
where \min{{{m}}} is the minimum VT magnitude of the catalog stars in the sky area. If {{S}}_{\rm obs}^{(i)} = \emptyset , the visibility of the i -th sky area is set to 0. Additionally, a further correction is applied to mitigate the impact of potential misrecognition. If the number of observed stars in the sky area is insufficient ( N_{\rm obs} / N_{\rm cat} < 1\% ) and the observed brightest star is notably fainter than the catalog brightest star ( \min{{{m}}_{\rm obs}} - \min{{{m}}_{\rm cat}} > 1 ), the visibility of the sky area is set to 0.
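A sketch of Equation (17) with the misrecognition correction (array names are illustrative):

```python
import numpy as np

def grid_visibility(obs_vt, cat_vt):
    """Q_vis for one grid, Eq. (17), plus the misrecognition correction.

    obs_vt: VT magnitudes of the recognized stars in the grid (may be empty).
    cat_vt: VT magnitudes of the catalog stars above the horizon in the grid.
    """
    obs_vt, cat_vt = np.atleast_1d(obs_vt), np.atleast_1d(cat_vt)
    if obs_vt.size == 0:
        return 0.0
    # Correction: too few matches and the brightest match notably too faint.
    if obs_vt.size / cat_vt.size < 0.01 and obs_vt.min() - cat_vt.min() > 1.0:
        return 0.0
    m_min = cat_vt.min()
    return float((obs_vt.max() - m_min) / (cat_vt.max() - m_min))
```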
Fig. 12 shows the sky area visibility distribution when the sky is clear. The orange circle indicates the altitude of 25°. Objects are typically not observed when they are below this altitude, and thus those areas are ignored. When plotting, these grids are set to 0. These sky areas have high airmass and are not assigned for observation, and hence they are not included in the visibility estimation.
In this section, we present results of comparison experiments with the extinction-based method[11] to demonstrate the effectiveness of PFSAVE.
The benchmark method in the comparison experiments is the extinction-based method, which measures the observation condition using the overall extinction of the sky area. To obtain the extinction, the overall standard flux is required. As discussed in Section 2, the filters of the all-sky camera are not consistent with any astronomical color system. Nevertheless, we use the VT magnitude from the TYCHO catalog as the standard magnitude to calculate the overall standard extinction, since the combined passbands of the RGB filters are close to the VT passband. An all-sky image taken on a clear night was used to determine the zero point z, which was estimated to be approximately 18 mag. The overall standard flux of the i-th sky area was defined as
F_{\rm cat}^{(i)}=\sum\limits_j^{N_i}10^{-0.4[m_{\rm cat}^{(j)}-z]}\ , | (18) |
where i is the HEALPix index of the sky area, j \in [0,\ N_i] is the index of a catalog star in the sky area, and {{m}}_{\rm cat}^{(j)} is the VT magnitude of the j -th catalog star.
The overall observed flux of i -th sky area is defined as
F_{\rm obs}^{(i)}=\sum\limits_j^{N_i}f^{(j)}\ , | (19) |
where f^{(j)} is the flux of the j -th observed star given by {\text{Source Extractor}}. Because our method uniformly includes the impact of airmass, F_{\mathrm{obs}}^{(i)} was not corrected for airmass, ensuring a fair comparison.
The extinction of the given i -th sky area Q_{\mathrm{ext}}^{(i)} is defined as
Q_{\rm ext}^{(i)}=\frac{F_{\rm cat}^{(i)}-F_{\rm obs}^{(i)}}{F_{\rm cat}^{(i)}}\ . | (20) |
We note that because of the limited photometric accuracy of the all-sky camera, the following statement is not always true:
\forall i,F_{\rm cat}^{(i)}\geqslant F_{\rm obs}^{(i)}\ , | (21) |
and hence Q_{\mathrm{ext}}^{(i)} may be negative in some sky areas. Negative extinction is irrational; therefore, these values are truncated to 0 in practice.
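For the benchmark, Equations (18)–(20), with the truncation of negative values, can be sketched as:

```python
import numpy as np

def grid_extinction(cat_vt, obs_flux, z=18.0):
    """Q_ext for one grid, Eqs. (18)-(20); negative values truncated to 0.

    cat_vt: VT magnitudes of the catalog stars in the grid.
    obs_flux: Source Extractor fluxes of the observed stars in the grid.
    z: zero point estimated from a clear-night image (about 18 mag).
    """
    f_cat = np.sum(10.0 ** (-0.4 * (np.asarray(cat_vt) - z)))  # Eq. (18)
    f_obs = np.sum(obs_flux)                                   # Eq. (19)
    return max((f_cat - f_obs) / f_cat, 0.0)                   # Eq. (20)
```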
Figs. 13, 14, and 15 show the visibility Q_{\mathrm{vis}} and extinction Q_{\mathrm{ext}} of all sky areas when the sky is clear, cloudy, and overcast, respectively. As shown in Fig. 13, the visibility of all sky areas is close to 1, while the extinction of those areas ranges from 0.3 to 0.6; the visibility estimate therefore aligns more closely with the actual observing conditions. As shown in Fig. 14, our method successfully distinguishes sky areas with clouds from those without and accurately represents the degree of visibility reduction caused by clouds. In contrast, the extinction-based method performs poorly, as it deems most of the sky areas unobservable. As shown in Fig. 15, our star recognition method (Section 3.4) makes few errors, with the visibility of all sky areas being 0 and the extinction being 1.
Qualitative analysis indicates that the proposed method can effectively estimate the sky area visibility under various sky conditions and measure the decrease in visibility caused by clouds. Compared with the extinction-based method, the PFSAVE method is more accurate.
To quantitatively analyze the performance of the proposed method, we compared the visibility with manual scores. The scores are discrete, taking values in \{0,\ 1,\ 2\} . Here, "0" means the area is fully covered by cloud or affected by moonlight; "1" means the area is partially covered by cloud or affected by moonlight; and "2" means the area has neither clouds nor moonlight. To ensure a comprehensive and fair comparison, we selected five all-sky images taken on a cloudy night. Table 6 lists examples of different manual scores. The quantitative analysis was conducted using the Spearman correlation coefficient and the Jaccard similarity coefficient.
| Image grid ID | Manual score | Q_{\mathrm{vis}} | Q_{\mathrm{ext}} | 1- Q_{\mathrm{ext}} |
| 2024_05_14__20_52_51-4 | 2 | 0.812 | 0.963 | 0.037 |
| 2024_05_14__20_52_51-6 | 1 | 0.672 | 0.995 | 0.005 |
| 2024_05_14__20_52_51-98 | 0 | 0.000 | 1.000 | 0.000 |
The Spearman correlation \rho is a non-parametric measure of rank correlation, defined as
\rho = 1 - \frac{6\sum\limits_{i=1}^{N} d_i^2}{N(N^2-1)}\ , | (22)
where d_i is the rank difference, and N is the number of sky areas. The Spearman correlation ranges from −1 to 1. A value of −1 indicates a perfectly negative correlation, 1 indicates a perfectly positive correlation, and 0 indicates no correlation between the two variables. To compute the Spearman correlation, we use {\text{scipy.stats.spearmanr}} from the Python package \text{SciPy} [27]. Ideally, the Spearman correlation between the visibility and the manual score should be 1, and that between the extinction and the manual score should be −1. The results are presented in Table 7.
Metrics | Q_{\mathrm{vis}} | Q\mathrm{_{\rm ext}} |
Spearman correlation | 0.30 (ideal: 1) | 0.29 (ideal: −1)
Jaccard similarity | 0.26 | 0.08 |
The Spearman correlation between the visibility and the manual score is 0.30, indicating a positive correlation. Conversely, the Spearman correlation between the extinction and the manual score is 0.29, which contradicts the expected negative correlation. These results suggest that the proposed method is more accurate in terms of rank-order correlation than the extinction-based method.
The Jaccard similarity is another classical metric for evaluating the accuracy of a classification method. For each category, it is the ratio of the size of the intersection of the predicted and ground-truth sets to the size of their union, averaged over all categories. Let category c \in \{0, 1, 2\} represent the three levels of the manual score, C denote the number of categories, GT_c represent the set of manual scores, and Q_c represent the classification derived from either the visibility or the extinction. The Jaccard similarity J is defined as
J=\frac{1}{C}\sum\limits_{c=0}^{C-1}\frac{|Q_c\cap GT_c|}{|Q_c\cup GT_c|}\ , | (23)
where |*| is the number of elements in the set.
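A sketch of Equation (23), assuming the continuous Q values have already been discretized into the three manual-score categories:

```python
import numpy as np

def jaccard(pred, gt, categories=(0, 1, 2)):
    """Mean per-category Jaccard similarity, Eq. (23).

    pred, gt: integer category labels per sky area grid.
    """
    pred, gt = np.asarray(pred), np.asarray(gt)
    scores = []
    for c in categories:
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        scores.append(inter / union if union else 1.0)  # empty class: match
    return float(np.mean(scores))

print(jaccard([2, 1, 0, 2], [2, 1, 1, 2]))  # -> 0.5
```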
Quantitative analysis using the Spearman correlation and Jaccard similarity shows that PFSAVE is more closely aligned with human judgment than the extinction-based method.
In this paper, we proposed the PFSAVE method to achieve high-spatial-resolution sky area visibility estimation using all-sky cameras without scientific-grade sensors. Our method estimates sky area visibility without relying on photometric precision, making it particularly suitable for all-sky cameras with limited photometric accuracy. Visibility is assessed based on the magnitude of the faintest star observed within each sky area. High spatial resolution is achieved through segmentation of the sky using HEALPix. The cornerstone of PFSAVE is a high-precision coordinate transformation model, and we introduced a per-transformation re-fitting technique to solve this model, achieving an accuracy of 0.42 pixels.
Qualitative and quantitative analyses demonstrate that our method is effective under various sky conditions and more accurate than extinction-based methods. Our approach generates a sky area visibility distribution in HEALPix format, which is both discrete and searchable, making it suitable for robotic observation software. Based on this distribution, astronomers and observation assistants can select sky areas with high visibility to conduct observations, thereby obtaining higher quality observational data.
For the model fitting, our method depends on a precise coordinate transformation and on experienced researchers to establish a dataset that maps horizontal coordinates to photographic coordinates. Low-resolution all-sky images suffer from severe signal aliasing, and our method has limited performance on such images.
This research is supported by the Natural Science Foundation of Jilin Province (20210101468JC), the Chinese Academy of Sciences and Local Government Cooperation Project (2023SYHZ0027, 23SH04), and the National Natural Science Foundation of China (12273063 & 12203078). We acknowledge the assistance provided by the observation assistants of Jilin Observatory.
Haiwen Xie conceived the ideas. Haiwen Xie and Xiaojun Jiang drafted the initial manuscript. Haiwen Xie, Hui Zhi, Zhe Kang, Shiyu Deng, and Bingli Niu developed the methodology. Zhe Kang managed and maintained research data and provided study resources. Xiaojun Jiang and Zhe Kang acquired the funding and administered the project. Xiaojun Jiang supervised the research. Haiwen Xie, Hui Zhi, Zhe Kang, Shiyu Deng, Lei Wang, and Xiaojun Jiang reviewed and edited the manuscript. All authors read and approved the final manuscript.
The authors declare no competing interests.
[1] Götz, D., SVOM Collaboration. 2012. SVOM: a new mission for gamma-ray bursts studies. Memorie della Societa Astronomica Italiana Supplementi, 21: 162.
[2] Niu, B. L., Kang, Z., Li, Z. W., et al. 2023. Simultaneous three-channel photometric system of 1.2 m photoelectric telescope. Optics and Precision Engineering, 31(6): 793−803. (in Chinese) doi: 10.37188/OPE.20233106.0793
[3] Sánchez de Miguel, A., Aubé, M., Zamorano, J., et al. 2017. Sky quality meter measurements in a colour-changing world. Monthly Notices of the Royal Astronomical Society, 467(3): 2966−2979. doi: 10.1093/mnras/stx145
[4] Skidmore, W., Schöck, M., Magnier, E., et al. 2008. Using all-sky cameras to determine cloud statistics for the Thirty Meter Telescope candidate sites. In Proceedings of SPIE.
[5] Yin, J., Yao, Y. Q., Liu, L. Y., et al. 2015. Cloud cover measurement from all-sky nighttime images. Journal of Physics: Conference Series, 595: 012040. doi: 10.1088/1742-6596/595/1/012040
[6] Adam, J., Buss, J., Brügge, K., et al. 2017. Cloud detection and prediction with all sky cameras. EPJ Web of Conferences, 144: 01004. doi: 10.1051/epjconf/201714401004
[7] Gao, B. Q., Ping, Y. D., Lu, Y., et al. 2022. Nighttime cloud cover estimation method at the Saishiteng 3850 m site. Universe, 8(10): 538. doi: 10.3390/universe8100538
[8] Gao, B. Q., Ping, Y. D., Zhang, C. 2022. The application of all-sky camera in multi-device optical survey. Acta Astronomica Sinica, 63(1): 60−68. (in Chinese) doi: 10.15940/j.cnki.0001-5245.2022.01.006
[9] Castro-Tirado, A. J. 2010. Robotic autonomous observatories: a historical perspective. Advances in Astronomy, 2010: 1−8. doi: 10.1155/2010/570489
[10] Mommert, M. 2020. Cloud identification from all-sky camera data with machine learning. Astronomical Journal, 159(4): 178. doi: 10.3847/1538-3881/ab744f
[11] Zhi, H., Wang, J. F., Zhang, X. M., et al. 2024. Cloud identification and reconstruction from all-sky camera images based on star photometry estimation. Publications of the Astronomical Society of the Pacific, 136(3): 035002. doi: 10.1088/1538-3873/ad2867
[12] Høg, E. 1986. TYCHO astrometry and photometry. Proceedings of the International Astronomical Union, 109: 625.
[13] Bessell, M. S. 1990. UBVRI passbands. Publications of the Astronomical Society of the Pacific, 102: 1181−1199. doi: 10.1086/132749
[14] Malvar, H., He, L. W., Cutler, R. 2004. High-quality linear interpolation for demosaicing of Bayer-patterned color images. In Proceedings of the 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing.
[15] Kerepecký, T., Šroubek, F., Novozámský, A., et al. 2023. NERD: neural field-based demosaicking. In Proceedings of the 2023 IEEE International Conference on Image Processing (ICIP).
[16] Crumey, A. 2014. Human contrast threshold and astronomical visibility. Monthly Notices of the Royal Astronomical Society, 442(3): 2600−2619. doi: 10.1093/mnras/stu992
[17] Wu, C., Kang, Z., Xin, L. P., et al. 2023. GRB 230205A: SVOM/C-GFT optical upper limit. GRB Coordinates Network, 33283.
[18] Wells, D. C., Greisen, E. W., Harten, R. H. 1981. FITS - a flexible image transport system. Astronomy and Astrophysics Supplement Series, 44: 363.
[19] Morris, B. M., Tollerud, E., Sipőcz, B., et al. 2018. astroplan: an open source observation planning package in Python. Astronomical Journal, 155: 128. doi: 10.3847/1538-3881/aaa47e
[20] Barbary, K. 2016. SEP: Source Extractor as a library. The Journal of Open Source Software, 1(6): 58. doi: 10.21105/joss.00058
[21] Bertin, E., Arnouts, S. 1996. SExtractor: software for source extraction. Astronomy and Astrophysics Supplement Series, 117(2): 393−404. doi: 10.1051/aas:1996164
[22] Kannala, J., Brandt, S. 2006. A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(8): 1335−1340. doi: 10.1109/TPAMI.2006.153
[23] Górski, K. M., Hivon, E., Banday, A. J., et al. 2005. HEALPix: a framework for high-resolution discretization and fast analysis of data distributed on the sphere. Astrophysical Journal, 622(2): 759. doi: 10.1086/427976
[24] Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013. Astropy: a community Python package for astronomy. Astronomy and Astrophysics, 558: A33. doi: 10.1051/0004-6361/201322068
[25] Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018. The Astropy Project: building an open-science project and status of the v2.0 core package. Astronomical Journal, 156(3): 123. doi: 10.3847/1538-3881/aabc4f
[26] Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022. The Astropy Project: sustaining and growing a community-oriented open-source project and the latest major release (v5.0) of the core package. Astrophysical Journal, 935(2): 167. doi: 10.3847/1538-4357/ac7c74
[27] Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods, 17: 261−272. doi: 10.1038/s41592-019-0686-2