
Citation: Xu, Y., Li, G. W., Wang, J., et al. 2025. A monitoring system to improve fault diagnosis in telescope arrays. Astronomical Techniques and Instruments. https://doi.org/10.61977/ati2025019
The Ground-based Wide-Angle Cameras (GWAC) array, a main ground-segment component of the Chinese–French Space-based multi-band astronomical Variable Objects Monitor (SVOM) mission[1], is a fully automated, self-triggering survey system intended for follow-up observations of SVOM transient detections and for real-time autonomous detection of optical transients. Since construction completion, a series of scientific results were obtained with GWAC data, such as the independent detection of a short-duration, high-energy gamma-ray burst, GRB 201223A, with a timescale of only 29 s[2]; the detection of more than 200 white-light flares[3,4]; the detection of two cold-star superflares with amplitudes reaching an approximate magnitude of 10[5,6]; and characterization of the long-term flare activity of cold stars[7]. The complete GWAC system consists of 10 telescope mounts, 50 image sensors, 50 camera focusing mechanisms, and more than 100 supporting servers. The data processing pipeline includes multiple subsystems for observation scheduling[8], observation control, automatic focus[9], automatic guiding, real-time scientific data processing, automatic follow-up[10], and scientific result management[11].
Daily GWAC operations rely on the coordination of numerous software and hardware modules and involve complex processes, resulting in a moderately high fault rate. The two primary challenges in maintaining the array's operations are:
1. Hardware–software dependence: hardware operations strongly depend on software feedback. For example, pointing correction for each mount relies on the astrometric analysis of observed images, while maintaining image quality depends on calculations of the stellar energy concentration in each image. Both measurements are affected by meteorological variability, which introduces uncertainty into the astrometric solutions and image assessments. Because scientific data processing also depends on these results, the overall GWAC feedback loop is long and of limited reliability.
2. Complex software architecture: the data processing pipeline requires high concurrency and real-time processing. It must complete a variety of critical tasks, such as observation planning, image quality assessment, template catalog generation, hardware feedback, cross-matching with multiple catalogs, and automated transient follow-up. For example, processing a single image requires more than 50 software modules; because all 50 cameras operate simultaneously, a single image cycle necessitates more than
When the number of telescopes in an array is small, faults are rare and their impact on observation efficiency is minimal. However, as the number of telescopes increases, fault frequency increases, often exceeding the diagnosis and repair capacity of the maintenance staff. This results in prolonged operational periods affected by fault occurrences and a substantially reduced system efficiency. Therefore, the GWAC array critically requires an efficient monitoring system to track system state continuously, diagnose faults, issue alerts, and provide maintenance staff with guidance to solve problems rapidly, ultimately improving operational efficiency.
Current monitoring solutions are inadequate for the GWAC array because of its complexity and requirements for high concurrency and real-time operation. For example, the monitoring system of the Cherenkov Telescope Array[12] mainly tracks hardware degradation to prevent major failures. The Large Sky Area Multi-Object Fiber Spectroscopic Telescope[13] relies on real-time assessments of its guiding system performance, focal surface defocusing, submirror performance, and active optics system performance. Similarly, radio astronomy projects such as the Square Kilometre Array[14] mainly monitor hardware health state. These systems emphasize hardware health or observation efficiency, but cannot monitor hardware and software state comprehensively in high-concurrency environments like that of the GWAC array.
To overcome these limitations, the GWAC array requires a monitoring system that not only provides real-time hardware state tracking, but also integrates comprehensive software pipeline monitoring. Thus, by design, the GWAC monitoring system continuously assesses the running state of all array components and displays monitoring data in several views. To ensure comprehensive fault coverage, the system collects a variety of data on hardware state, real-time pipeline state, and key image parameters. Hardware–software collaboration generates considerable amounts of raw monitoring data that cannot be analyzed manually. By integrating and abstracting the raw data, the monitoring system provides multidimensional views that simplify data interpretation and reduce reliance on the experience and skills of the maintenance personnel. Additionally, the system contributes to characterizing the array's internal operations and temporal performance evolution, supports manual fault diagnosis, and establishes the basis for future automated fault detection.
Unlike traditional monitoring systems that focus on hardware health or isolated performance metrics, the proposed system introduces two original views to monitor state evolution and transient lifecycle. These views offer a dynamic, temporally resolved perspective on system behavior and partly alleviate the limitations of existing approaches. The "state evolution monitoring" view continuously records the temporal variations of key parameters (e.g., mount pointing, image quality, and module invocation time), allowing for early detection of complex faults that develop gradually or propagate across multiple modules. The “transient lifecycle monitoring” view illustrates the entire processing pipeline for transient events, from detection to follow-up, allowing for real-time identification of delays or bottlenecks. This view is particularly useful to optimize transient surveys and improve response efficiency. By integrating these original capabilities, the proposed system is designed to improve fault diagnosis and operational efficiency within a comprehensive monitoring framework particularly suited to large-scale, high-concurrency systems like the GWAC array.
In this study, we introduce a new monitoring system for the GWAC array, with diversified monitoring data collection and an original visualization scheme. The structure of the manuscript is as follows: Section 2 describes the system architecture and its relationship with the existing GWAC pipeline; Section 3 details the system design and implementation, including monitoring view construction, database design, and system implementation; Section 4 presents the analysis of fault diagnosis cases using the new monitoring views; and Section 5 summarizes our study and suggests future uses for the proposed monitoring system.
To enhance the efficiency of fault detection and diagnosis in the GWAC system, we developed a monitoring system integrated into the existing GWAC pipeline. As illustrated in Fig. 1, the monitoring system comprises two main components: data collection and monitoring views.
The "data collection" component (Fig. 1, top) represents data aggregation from the seven subsystems of the GWAC data processing pipeline. Collected data are sorted into three categories: key module invocation times, image and instrument parameters, and transient processing information, described hereafter.
1. Key module invocation time: the seven GWAC subsystems include more than 50 software modules. Monitoring all software modules would produce excessive data, complicating data management and visualization. Thus, only selected key modules are monitored, such as observation planning generation, image exposure, image processing, and transient detection. For each key module, the invocation start time is recorded to indicate the module activity state. The specific key modules are described in Section 3.1.1 (instantaneous state monitoring).
2. Image and instrument parameters: in the GWAC data processing pipeline, several analyses are conducted on each image to evaluate, e.g., image quality, pointing accuracy, alignment precision, and target count. The combined analysis results represent the image parameters. Additionally, physical components of the array, such as mounts or cameras, are characterized by state parameters (e.g., temperature, vacuum level, voltage, and current) representing the instrument parameters.
3. Transient processing information: transient properties are a critical scientific output of the GWAC array. Their timely processing is essential to the quality of the scientific results. Data in this category record the start times of key modules directly related to transients, from the acquisition time of transient discovery frames to the triggering time of follow-up observations, to track transient processing.
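For illustration, the three data categories above can be represented by simple record types. The following Python sketch is a minimal, assumption-based data model; the field names are illustrative and do not reproduce the actual GWAC schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ModuleInvocation:
    """Key module invocation time (category 1): one record per module call."""
    camera_id: str
    module_name: str            # e.g. "image_exposure", "transient_detection" (assumed names)
    invocation_start: datetime

@dataclass
class ImageInstrumentParameters:
    """Image and instrument parameters (category 2): one record per image."""
    camera_id: str
    image_id: str
    fwhm: Optional[float] = None             # image quality indicator, pixels
    pointing_error: Optional[float] = None   # from astrometric analysis
    star_count: Optional[int] = None
    ccd_temperature: Optional[float] = None  # instrument state parameter

@dataclass
class TransientProcessingRecord:
    """Transient processing information (category 3): key timestamps per candidate."""
    transient_id: str
    discovery_frame_time: datetime
    identification_start: Optional[datetime] = None
    followup_trigger_time: Optional[datetime] = None
```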
The "monitoring views" component (Fig. 1, bottom) represents abstraction and presentation of the collected data into four views (described in Section 3): instantaneous state, state evolution, key parameters, and transient lifecycle.
1. Instantaneous state monitoring: this view tracks real-time system state, providing immediate feedback on system health. It alerts staff of faults to allow for prompt diagnosis and resolution. This view includes real-time camera previews and displays the state of key modules.
2. State evolution monitoring: this view illustrates the temporal evolution of the state of key modules, i.e., a time series of the instantaneous states.
3. Key parameter monitoring: this view primarily monitors the temporal evolution of parameters related to images, mounts, and cameras.
4. Transient lifecycle monitoring: this view visualizes the lifecycle of detected transients, from discovery frame acquisition to the triggering of follow-up observations. It includes key timestamps: acquisition time of the transient candidate detection frame, start and end times of the identification process, and start and end times of follow-up observations. This view is used to verify whether processing is nominal or faulty.
Managing more than 100 custom hardware devices, 100 servers, and
This view is intended for early fault warnings and provides a comprehensive, real-time system health summary that includes images from the 50 cameras and a key module state monitoring table. The user interface (UI) design details are illustrated in Fig. 2. The left- and right-hand side parts are described in the following paragraphs.
1. Camera observation image monitoring: this page displays real-time images from each camera to assess observation parameters, including focus, camera state, and weather conditions. The design challenge is to maintain clarity when displaying images from 50 cameras simultaneously. We propose a combination of thumbnails around a high-resolution image carousel (Fig. 2, left). Thumbnails are displayed on the sides for browsing, while high-resolution images are presented in the center carousel, allowing users to view detailed images by clicking on the thumbnails.
2. Key module instantaneous state monitoring: this page tracks the real-time state of the data processing pipeline, involving more than 50 software modules for each camera. Because of space constraints, only key modules are monitored and their state displayed (Fig. 2, right). Each row corresponds to a camera and each column to a key module. Module state is color-coded in white (online), green (nominal), orange (warning), red (fault), and gray (offline). This layout supports rapid fault diagnosis by efficiently conveying critical information.
By providing a complete overview of the system's health state, the instantaneous state monitoring view enables users to rapidly assess overall array performance. For a detailed fault diagnosis, users can conduct a comprehensive system analysis using the additional monitoring views.
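As an illustration of how the key module state table can be assembled, the following Python sketch maps the five module states listed above to display colors and builds the camera × key-module grid. The module names and example data are assumptions, and the production view is rendered on the web page rather than in Python.

```python
# Minimal sketch: build the camera x key-module state grid of the instantaneous state view.
# State names follow the five categories in the text; the colors and module list are illustrative.
STATE_COLORS = {
    "online": "white",
    "nominal": "green",
    "warning": "orange",
    "fault": "red",
    "offline": "gray",
}

KEY_MODULES = ["plan", "exposure", "processing", "transient_detection"]  # assumed subset

def build_state_grid(latest_states):
    """latest_states: {(camera_id, module): state}; returns one row of colors per camera,
    one column per key module, defaulting to gray (offline) when no state was reported."""
    cameras = sorted({cam for cam, _ in latest_states})
    grid = []
    for cam in cameras:
        row = [STATE_COLORS.get(latest_states.get((cam, m), "offline"), "gray")
               for m in KEY_MODULES]
        grid.append((cam, row))
    return grid

# Example: camera C01 has a fault in transient detection.
states = {("C01", "exposure"): "nominal", ("C01", "transient_detection"): "fault"}
for cam, row in build_state_grid(states):
    print(cam, row)
```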
This view is used to monitor continuous system changes during observation, focusing on complex mechanisms essential for fault diagnosis. For the same purpose, traditional monitoring methods rely on specialized, time-consuming tools that require expertise. We designed a new view for dynamic visualization of the system state. It displays the variations in mount pointing, image quality, and image lifecycle (Fig. 3). The UI consists of a control area for mount selection, data type selection, and observation date selection (Fig. 3, left) and a chart displaying temporal parameter variations (Fig. 3, right), with the following layout:
• horizontal axis: elapsed time in minutes since the start of the active observation.
• vertical axis: parameters such as mount properties (observation plan, guiding action, pointing errors) and camera properties (template image creation, focus, and image quality indicators such as the full width at half maximum, FWHM).
• key module association: each item on the vertical axis represents a key module. For each camera, multiple key modules are connected from top to bottom in chronological order; the shape of the connecting lines indicates different types of faults.
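For illustration, the chart layout described above can be sketched with synthetic data as follows; the module names and timestamps are assumptions, and the production view is rendered with a JavaScript-based canvas on the web page (see the implementation description below) rather than with matplotlib.

```python
import matplotlib.pyplot as plt

# Assumed key modules on the vertical axis (pipeline order from top to bottom).
MODULES = ["plan", "exposure", "astrometry", "photometry", "transient_detection"]

# Synthetic data: for each image, elapsed minutes at which each module started.
# A None entry marks a module that never ran, i.e. a chain broken by a fault.
images = [
    {"plan": 0.0, "exposure": 0.2, "astrometry": 0.4, "photometry": 0.6, "transient_detection": 0.8},
    {"plan": 15.0, "exposure": 15.2, "astrometry": 15.9, "photometry": None, "transient_detection": None},
]

fig, ax = plt.subplots(figsize=(8, 3))
for img in images:
    xs, ys = [], []
    for row, module in enumerate(MODULES):
        t = img.get(module)
        if t is None:
            break                      # connecting line stops where the pipeline failed
        xs.append(t)
        ys.append(row)
    ax.plot(xs, ys, marker="o")        # one connected polyline per image

ax.set_yticks(range(len(MODULES)))
ax.set_yticklabels(MODULES)
ax.invert_yaxis()                      # pipeline order displayed top to bottom
ax.set_xlabel("Elapsed time since start of observation (min)")
plt.tight_layout()
plt.show()
```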
This view provides separate monitoring of parameters related to the mounts, cameras, and images. For example, temporal variations of the mount pointing accuracy, displayed on the left-hand side of Fig. 4, indicate deviations from the planned pointing. Image quality (Fig. 4, right), influenced by the camera itself and by environmental factors, is monitored with parameters such as FWHM, star count, background brightness, limiting magnitude, and processing time. Temporal variations of camera hardware parameters, such as temperature, are also displayed (Fig. 4, right). All of these parameters are listed in the “image parameters” menu on the left-hand side; selecting a parameter name displays the corresponding curve.
Because of its hardware characteristics, the GWAC array can observe fast transients, i.e., with durations of the order of minutes. The entire GWAC transient processing pipeline, from transient detection to automatic identification, has been developed and is fully operational. To date, the array has successfully observed numerous minute-scale transients, such as the more than 200 white-light flares reported by Li et al.[3,4]. Follow-up identification[10] of transient candidates, using an independent 60-cm telescope located at the same site, is a critical step in this observation process. Early identification of transients alerts large-aperture telescopes sooner, allowing for timely spectroscopic measurements and other follow-up observations. Therefore, optimizing the GWAC data processing pipeline is critical to accelerate the observation process, from detection to automatic identification, particularly for short-duration phenomena such as optical transients.
For this purpose, we introduced the concept of “transient lifecycle,” which consists of monitoring and optimizing the key modules in the transient processing pipeline. These include detection, identification, and follow-up observation. The system records the uptime of each key module to monitor the transient lifecycle (Fig. 5, transient lifecycle monitoring view). When a system fault occurs, time consumption markedly increases, reflecting system anomalies in real time. Fig. 8 illustrates an example of a transient lifecycle fault.
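As a minimal illustration of how the recorded lifecycle timestamps translate into per-stage durations and fault flags, consider the following Python sketch; the stage names, timestamps, and duration limits are assumptions, not the operational thresholds used by GWAC.

```python
from datetime import datetime, timedelta

# Assumed key timestamps for one transient candidate (see the lifecycle view).
lifecycle = {
    "discovery_frame": datetime(2024, 3, 1, 14, 0, 0),
    "identification_start": datetime(2024, 3, 1, 14, 0, 40),
    "identification_end": datetime(2024, 3, 1, 14, 1, 20),
    "followup_trigger": datetime(2024, 3, 1, 14, 6, 0),
}

# Illustrative per-stage duration limits; a real deployment would tune these.
LIMITS = {
    ("discovery_frame", "identification_start"): timedelta(minutes=1),
    ("identification_start", "identification_end"): timedelta(minutes=2),
    ("identification_end", "followup_trigger"): timedelta(minutes=3),
}

for (start_key, end_key), limit in LIMITS.items():
    duration = lifecycle[end_key] - lifecycle[start_key]
    status = "FAULT" if duration > limit else "ok"
    print(f"{start_key} -> {end_key}: {duration} [{status}]")
```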
Although the proposed transient lifecycle monitoring view is valuable to monitor the processing timeline of transient events, the system could benefit from future optimization, such as finer-resolution performance sampling or distributed tracing. Implementing more detailed performance metrics at each stage of the transient processing pipeline could allow the system to precisely identify bottlenecks and optimize resource allocation. This would further improve the transient processing efficiency and reduce delays caused by data accumulation.
To support data storage for the proposed monitoring system, we designed a series of database tables divided into two main categories: "basic entity" and "entity state." Here, an entity represents a monitoring target within the astronomical data processing pipeline, such as a mount, camera, or image.
1. Basic entity tables: these include the observation plan table, mount table, camera table, image table, and transient table. The basic entity tables are shared with the scientific data processing pipeline and are used to support GWAC operations and scientific result management.
2. Entity state tables: these include the instantaneous state table, transient key module uptime table, image lifecycle monitoring table, and image and instrument parameter tables. They are specific to the monitoring system and are used to record data parameters, state parameters, and temporal lifecycle variations of the monitored entities.
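For illustration, two of the entity state tables could be declared as follows. SQLite is used here only to keep the sketch self-contained; the column names are assumptions and do not reproduce the actual GWAC database schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the production relational database
conn.executescript("""
-- Instantaneous state table: one row per (camera, key module), kept up to date.
CREATE TABLE instantaneous_state (
    camera_id   TEXT NOT NULL,
    module_name TEXT NOT NULL,
    state       TEXT NOT NULL,      -- online / nominal / warning / fault / offline
    updated_at  TEXT NOT NULL,
    PRIMARY KEY (camera_id, module_name)
);

-- Transient key module uptime table: timestamps of the transient lifecycle stages.
CREATE TABLE transient_module_uptime (
    transient_id TEXT NOT NULL,
    module_name  TEXT NOT NULL,
    start_time   TEXT NOT NULL,
    end_time     TEXT
);
""")
conn.commit()
```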
The GWAC monitoring system comprises a web server, a database, and several data collection programs. The web server facilitates view display and provides application programming interfaces (APIs) for data collection. The data collection programs run on the GWAC data processing servers, where they monitor and collect information on the local operational state, uploaded to the web server via the API during each image cycle. The collected monitoring data are then stored in the database and made accessible to users through multiple view pages on a web browser.
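A data collection program can be reduced to a short reporting routine, sketched below in Python; the endpoint URL and payload fields are hypothetical and stand in for the actual monitoring API.

```python
import json
import urllib.request
from datetime import datetime, timezone

MONITOR_API = "http://monitor.example.org/api/v1/state"   # hypothetical endpoint

def report_state(camera_id: str, module_name: str, state: str) -> None:
    """Upload one module-state record to the web server during the current image cycle."""
    payload = {
        "camera_id": camera_id,
        "module": module_name,
        "state": state,                      # e.g. "nominal", "fault"
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    req = urllib.request.Request(
        MONITOR_API,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        resp.read()                          # server acknowledgement, ignored here

# Called once per image cycle by each pipeline server, e.g.:
# report_state("C01", "image_processing", "nominal")
```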
Details of the monitoring system implementation, necessary to ensure efficient data collection and visualization for fault diagnosis, are given in the following sections.
The monitoring system employs several strategies to accelerate data visualization and handle a large volume of monitoring data efficiently:
1. Instantaneous state monitoring view: the instantaneous camera state is stored in a dedicated state table, where each record represents the latest state of a camera. This minimal database footprint ensures rapid data retrieval. Additionally, a fast page refresh rate, at the frequency of the camera's observation cycle, ensures real-time state updates.
2. State evolution monitoring view: this view aggregates state evolution data from all cameras at image granularity over an entire night. For the GWAC telescope array, a single page may display approximately 600 000 records, with a total attribute count exceeding 10 million. To optimize rendering speed, a “lazy-loading” mechanism is adopted: if cached data are available, the system loads them by default; otherwise, data are retrieved from the database in real time. Users can manually initiate data updates from the interface.
3. Rendering technology: the system uses a JavaScript-based canvas for online rendering on web pages. This enables users, through interactive zooming and moving, to examine fine details within the monitoring data.
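The lazy-loading mechanism of the state evolution view (item 2 above) can be summarized by the following sketch; the cache location, freshness window, and query interface are assumptions rather than the production implementation.

```python
import json
import os
import time

CACHE_PATH = "/tmp/state_evolution_cache.json"   # illustrative cache location
CACHE_MAX_AGE = 300                               # seconds; assumed freshness window

def load_state_evolution(night: str, query_database, force_refresh: bool = False):
    """Lazy loading: serve cached data when present and fresh, otherwise query the
    database in real time and refresh the cache. `query_database` is any callable
    returning the night's records (stands in for the real database access layer)."""
    if not force_refresh and os.path.exists(CACHE_PATH):
        age = time.time() - os.path.getmtime(CACHE_PATH)
        if age < CACHE_MAX_AGE:
            with open(CACHE_PATH) as f:
                cached = json.load(f)
            if cached.get("night") == night:
                return cached["records"]
    records = query_database(night)               # real-time retrieval fallback
    with open(CACHE_PATH, "w") as f:
        json.dump({"night": night, "records": records}, f)
    return records

# Usage: load_state_evolution("2024-03-01", query_database=lambda night: [])
```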
The GWAC monitoring system must process approximately 18 million observation records per month. Unfortunately, the performance of a relational database degrades considerably once a single table exceeds 10 million records. To maintain real-time performance and ensure database reliability, the following optimizations are implemented:
1. Data partitioning into daily and historical tables: a daily data table is used to store and retrieve real-time operational data, ensuring rapid access and updates for active observations. Additionally, a historical data table is used for long-term storage and supports offline fault analysis and data mining, for which real-time performance is less critical. Data are automatically migrated from the daily table to the historical table at 16:30 (Beijing time) every day to maintain optimal database performance.
2. Database reliability measures: the database is configured with real-time streaming replication to ensure high availability. Backup points are established at both the observation site and the Beijing data center to provide geographical redundancy and enhance data security.
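The daily-to-historical migration (item 1 above) amounts to a single transactional copy-and-clear job, sketched below with SQLite for self-containment; the table and column names are assumptions, and the production job runs against the GWAC relational database at 16:30 Beijing time.

```python
import sqlite3

def migrate_daily_to_historical(conn: sqlite3.Connection) -> int:
    """Move all rows from the daily table into the historical table, then clear
    the daily table so the next night starts from an empty, fast table."""
    with conn:                                   # single transaction
        cur = conn.execute(
            "INSERT INTO module_state_history SELECT * FROM module_state_daily"
        )
        moved = cur.rowcount
        conn.execute("DELETE FROM module_state_daily")
    return moved

# Minimal demonstration with two identically structured tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE module_state_daily   (camera_id TEXT, module TEXT, state TEXT, ts TEXT);
CREATE TABLE module_state_history (camera_id TEXT, module TEXT, state TEXT, ts TEXT);
INSERT INTO module_state_daily VALUES ('C01', 'exposure', 'nominal', '2024-03-01T14:00:00');
""")
print(migrate_daily_to_historical(conn), "row(s) migrated")
```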
To balance real-time data ingestion and fault tolerance, the system employs a message queue-based data processing architecture. All collected monitoring data are stored in a message queue before being committed to the database. This ensures that data are ingested in real time without overwhelming the database. Message consumption threads then process incoming data asynchronously and continuously to prevent system bottlenecks. Thus, in case of anomalous data spikes or unexpected failures, the backlog is handled without system failure, ensuring system robustness even under extreme conditions.
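A minimal sketch of this queue-buffered ingestion pattern, using Python's standard-library queue and a consumer thread, is given below; the batch size and database writer are placeholders, as the actual message queue technology is not specified here.

```python
import queue
import threading

ingest_queue: "queue.Queue[dict]" = queue.Queue()   # buffer between API and database

def api_receive(record: dict) -> None:
    """Called by the web server for every uploaded monitoring record:
    enqueue immediately so ingestion never blocks on the database."""
    ingest_queue.put(record)

def write_batch_to_database(batch):
    # Placeholder for the real batched database insert.
    print(f"committed {len(batch)} record(s)")

def consumer(batch_size: int = 100) -> None:
    """Drain the queue asynchronously and commit records to the database in batches."""
    batch = []
    while True:
        record = ingest_queue.get()
        if record is None:                 # sentinel: flush remaining records and stop
            break
        batch.append(record)
        if len(batch) >= batch_size:
            write_batch_to_database(batch)
            batch = []
    if batch:
        write_batch_to_database(batch)

t = threading.Thread(target=consumer, daemon=True)
t.start()
api_receive({"camera_id": "C01", "module": "exposure", "state": "nominal"})
ingest_queue.put(None)                     # stop the demonstration consumer
t.join()
```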
The monitoring system was completed in early 2024. Over nearly a year of operation, we have conducted extensive fault analyses and diagnoses using the proposed monitoring views, ultimately establishing a correspondence table between faults and monitoring views in which nearly all common faults correspond to a specific view configuration and to a solution. With the proposed monitoring views, the complete fault analysis process, from warning to diagnosis, typically lasted only a few minutes. Compared with traditional methods, for which the same process can last from several tens of minutes to several hours, the proposed monitoring solution accelerated fault diagnosis by a factor of 10 or more. Moreover, the monitoring views described in this study do not yet consider all available fault information; system optimization is ongoing and upgrades are continuously applied. In the future, the monitoring content will be further expanded to cover more fault points. Typical examples of common fault diagnoses, characterized using the proposed monitoring views, are presented hereafter.
1. Defocused image: the left panel in Fig. 6 is a real-time preview of a faulty observation. The poor image quality indicates a failure of the autofocus function.
2. Single or multiple response timeout: the right panel in Fig. 6 displays the instantaneous state of key modules (small rectangular boxes) for each camera in the array. A majority of boxes are typically colored green, indicating nominal module state. A red box indicates an anomalous state for a specific key module. An entirely red column indicates that the corresponding key module, for all cameras (rows), either failed to start or suffered complete failure.
3. Simultaneous data processing failure on multiple servers: this is illustrated in the first two images of Fig. 7, which presents three cases of the state evolution monitoring view. In the left image, two cameras on the same mount exhibited an unusually large FWHM because of the weather conditions, degrading the image quality and causing simultaneous data processing failures on two servers of the same mount. In the middle image, multiple failures are displayed. First, a weather-related fault caused data processing failures on all servers of a single mount, triggering a pointing switch to the next planned sky region. Second, frequent sky region switches on a mount depleted its observation plan for the corresponding time period, eventually causing it to enter a waiting state and suspend observations. At the same time, a focusing failure of the guiding camera or excessive pointing deviations also induced simultaneous data processing failures on all servers of a single mount. Additionally, a failure in the source extraction module of one server prevented image processing, but not image acquisition.
4. Single-server data processing delay: the right image in Fig. 7 illustrates a network card failure on the camera control server, reducing transmission speed to below the image readout rate, which caused considerable image transmission delays. However, image processing remained nominal in the subsequent modules.
5. Anomalous transient lifecycle: Fig. 8 illustrates the temporal evolution of transient candidate catalog parsing. In two periods, transient catalog parsing experienced substantial delays. This is typically caused by consecutive processing failures for multiple images, which generate an excessive number of transient candidates. When the number of candidates exceeds peak pipeline processing capacity, a candidate queue forms, resulting in processing delays. As processing continues, the number of queued targets gradually decreases and processing time reverts to its nominal state.
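The delay pattern described in case 5 can be reproduced with a toy backlog model: when candidate arrivals temporarily exceed the pipeline's peak processing capacity, the queue grows and per-candidate delay increases, then decays once arrivals return to normal. The rates below are arbitrary illustrative values, not measured GWAC figures.

```python
# Toy backlog model for case 5: transient candidates arriving vs. processed per minute.
capacity = 100                                   # assumed peak processing capacity (candidates/min)
arrivals = [50, 50, 400, 400, 50, 50, 50, 50]    # a burst caused by consecutive failed images

backlog = 0
for minute, arriving in enumerate(arrivals):
    backlog = max(0, backlog + arriving - capacity)
    delay_min = backlog / capacity               # time needed to clear the current queue
    print(f"t={minute:2d} min  backlog={backlog:4d}  extra delay ~ {delay_min:.1f} min")
```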
The complexity of the GWAC data processing pipeline results in a high failure rate and makes fault diagnosis difficult. In this study, we designed and implemented an original monitoring system to improve the fault diagnosis efficiency of the GWAC array. The primary innovation of the proposed system is the design of monitoring views tailored to the highly complex pipeline, requiring the abstraction of intricate data collected from more than
These innovative monitoring views provide users with comprehensive diagnosis tools for the instantaneous states and state evolution of the GWAC array, including pointing, observation, data processing, and hardware–software feedback. This enhanced diagnosis capability considerably accelerates fault detection and localization. Additionally, the transient lifecycle monitoring view provides clear visualization of critical anomalies in key processing modules, thus representing a valuable reference indicator to optimize observation efficiency. Our practical case analysis indeed demonstrated that the proposed monitoring system produced a tenfold acceleration of fault localization in the GWAC array. We also detailed the collection and storage schemes implemented for the monitoring data and presented several monitoring case studies to illustrate the aspects of fault analysis related to each original monitoring view.
In its current version, the GWAC monitoring system primarily focuses on the visualization of monitoring data, assisting operators in assessing the system operation state and conducting fault analyses. Although common faults can be diagnosed with the monitoring views alone, complex failures require expert intervention combining the monitoring views with detailed backend logs. Future work will involve compiling an exhaustive list of fault cases and characterizing the connection between monitoring data and faults. Additionally, research will be undertaken to establish an automated fault diagnosis system relying on monitoring data, to further improve fault diagnosis and system maintenance efficiency. The design of the proposed monitoring views is highly generalizable and is therefore applicable not only to the GWAC array but also to other telescope arrays.
GWAC: Ground-based Wide-Angle Cameras array
SVOM: Space-based multi-band astronomical Variable Objects Monitor
UI: user interface
API: application programming interface
This study was supported by the Young Data Scientist Program of the China National Astronomical Data Center, the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB0550401), and the National Natural Science Foundation of China (Grant No. 12494573).
[1] Wei, J. Y., Cordier, B., Antier, S., et al. 2016. The deep and transient universe in the SVOM era: new challenges and opportunities-scientific prospects of the SVOM mission. arXiv preprint arXiv:1610.06892.
[2] Xin, L. P., Han, X. H., Li, H. L., et al. 2023. Prompt-to-afterglow transition of optical emission in a long gamma-ray burst consistent with a fireball. Nature Astronomy, 7(6): 724−730. doi: 10.1038/s41550-023-01930-0
[3] Li, G. W., Wang, L., Yuan, H. L., et al. 2024. The white-light superflares from cool stars in GWAC triggers. The Astrophysical Journal, 971: 114. doi: 10.3847/1538-4357/ad55e8
[4] Li, G. W., Wu, C., Zhou, G. P., et al. 2023. Magnetic activity and parameters of 43 flare stars in the GWAC archive. Research in Astronomy and Astrophysics, 23: 015016. doi: 10.1088/1674-4527/aca506
[5] Xin, L. P., Li, H. L., Wang, J., et al. 2024. A huge-amplitude white-light superflare on a L0 brown dwarf discovered by GWAC survey. Monthly Notices of the Royal Astronomical Society, 527(2): 2232−2239.
[6] Xin, L. P., Li, H. L., Wang, J., et al. 2021. A ΔR~9.5 mag superflare of an ultracool star detected by the SVOM/GWAC system. The Astrophysical Journal, 909(2): 106. doi: 10.3847/1538-4357/abdd1b
[7] Li, H. L., Wang, J., Xin, L. P., et al. 2023. White-light superflare and long-term activity of the nearby M7 type binary EI Cnc observed with GWAC system. The Astrophysical Journal, 954(2): 142. doi: 10.3847/1538-4357/ace59b
[8] Han, X. H., Xiao, Y. J., Zhang, P. P., et al. 2021. The automatic observation management system of the GWAC network. I. System architecture and workflow. Publications of the Astronomical Society of the Pacific, 133(1024): 065001. doi: 10.1088/1538-3873/abfb4e
[9] Huang, L., Xin, L. P., Han, X. H., et al. 2015. Auto-focusing of wide-angle astronomical telescope. Optics and Precision Engineering, 23: 174−183. doi: 10.3788/OPE.20152301.0174
[10] Xu, Y., Xin, L. P., Wang, J., et al. 2020. A real-time automatic validation system for optical transients detected by GWAC. Publications of the Astronomical Society of the Pacific, 132(1011): 054502. doi: 10.1088/1538-3873/ab7a73
[11] Xu, Y., Xin, L. P., Han, X. H., et al. 2020. The GWAC data processing and management system. arXiv preprint arXiv:2003.00205.
[12] Costa, A., Munari, K., Incardona, F., et al. 2021. The monitoring, logging, and alarm system for the Cherenkov Telescope Array. arXiv preprint arXiv:2109.05770.
[13] Hu, T. Z., Zhang, Y., Cui, X. Q., et al. 2021. Telescope performance real-time monitoring based on machine learning. Monthly Notices of the Royal Astronomical Society, 500: 388−396.
[14] Di Carlo, M., Dolci, M., Smareglia, R., et al. 2016. Monitoring and controlling the SKA telescope manager: a peculiar LMC system in the framework of the SKA LMCs. In Software and Cyberinfrastructure for Astronomy IV, 9913: 1348−1357.