In recent years, the performance of modern optical remote sensing instruments on board satellites has improved greatly due to better spatial and spectral resolution. This requires higher storage capacities as well as longer downlink times for the captured data. Due to an average global cloud coverage of about 66 %, the recorded images are often contaminated with clouds. Hence, for users who are mainly interested in surface features, for example when monitoring vegetation conditions or desertification, these images do not contain the desired information and are therefore of no value. An on-board cloud detection tool would thus be very useful to prevent the waste of storage resources and downlink capacity on cloud-contaminated images. This task consists of two parts: the first is choosing a suitable cloud detection algorithm, the second is implementing it in a satellite environment. For the implementation of cloud detection algorithms on board satellites, Field Programmable Gate Arrays (FPGAs) are well-suited devices due to their low cost, low power consumption and high flexibility. These properties, combined with the ability to be reconfigured during operation, make this hardware ideal for realizing on-board processing functionality without stressing the On-board computer (OBC) or the Payload processing unit (PPU). Another useful characteristic of the FPGA is that, with adequate algorithms, its processing speed is sufficient to allow real-time processing of the incoming data flow. In this thesis, an approach is developed that allows on-board cloud detection on FPGAs with low hardware consumption.
The basis of this method is that the conversion results are precalculated and stored in a Look-up table (LUT). This is possible because the conversion from the digital output to the corresponding physical magnitude per spectral band is a function of at most three known variables, depending on the sensor and the desired physical magnitude. Assuming that the accuracy of the algorithms is not degraded too heavily by discretising the input variables, one can restrict the input space, and therefore also the output space, in order to minimize the size of the LUT. This allows every digital output value to be converted in advance into, e.g., radiance, which has two advantages: instantaneous access to the corresponding value is offered, and no hardware on the FPGA is consumed for the conversion of the data to physical magnitudes.

The method developed in this thesis was applied to data from two sensors, the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Enhanced Thematic Mapper Plus (ETM+). Changes in the quality of the cloud mask caused by the adjustments made in the LUT-based method are assessed by comparing the algorithms with and without the use of LUTs on a large amount of data. Two supervised learning algorithms, Linear discriminant analysis (LDA) and Support vector machines (SVM), were applied to data from both sensors. In addition, the Automatic Cloud Cover Assessment (ACCA) algorithm, the operational threshold-based cloud detection algorithm for ETM+, was implemented on an FPGA. For ETM+, the accuracy of the supervised learning algorithms using LUTs compared to their unscaled counterparts is 99.3 % for LDA and 98.1 % for SVM. For MODIS, the corresponding accuracy is 99.3 % for LDA and 99.9 % for SVM. For ACCA using a LUT, an accuracy of 97.5 % compared to the unscaled variant is reached.
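The idea of precalculating the digital-number-to-radiance conversion can be sketched in a few lines. The calibration constants and the 8-bit quantization below are illustrative assumptions for a single spectral band with a linear calibration, not values taken from MODIS or ETM+:

```python
import numpy as np

# Hypothetical calibration constants for one spectral band; real values
# depend on the sensor and the desired physical magnitude.
GAIN = 0.055
OFFSET = -1.2

# Precompute the conversion digital number (DN) -> radiance for every
# possible 8-bit DN. At run time only a table lookup is needed, so no
# arithmetic hardware is consumed for the conversion itself.
dn_values = np.arange(256, dtype=np.uint16)
radiance_lut = GAIN * dn_values + OFFSET  # linear radiometric calibration


def dn_to_radiance(dn: int) -> float:
    """Instantaneous conversion of a digital output value via the LUT."""
    return float(radiance_lut[dn])
```

With more than one input variable, the same scheme applies after discretising each variable onto a grid, which trades a controlled loss of accuracy for a bounded table size.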