Abstract
Quantitative Precipitation Estimates (QPE) obtained from satellite data are essential for accurately assessing the hydrological cycle over both land and ocean. Early artificial neural network (NN) methods were used either to merge infrared and microwave data or to derive better precipitation products from radar and radiometer measurements. Over the last 25 years, machine learning technology has advanced significantly, accompanied by the launch of new satellites such as the Global Precipitation Measurement Mission Core Observatory (GPM-CO). In addition, computing power has increased exponentially since the beginning of the 21st century. This paper compares the performance of a pure-FORTRAN NN (NN FORTRAN), originally designed to expedite the TRMM (Tropical Rainfall Measuring Mission) 2A12 algorithm, with a contemporary state-of-the-art NN implemented in Python with the TensorFlow library (NN PYTHON). The FORTRAN and Python approaches to QPE using GPM-CO data are compared with the goal of finding a minimal NN architecture that at least matches the output of the Goddard Profiling Algorithm (GPROF). A further conclusion is that the new NN PYTHON does not offer significant advantages over the old FORTRAN code. The latter requires no dependencies, which has many practical advantages in operational use and therefore gives it an edge over more complex approaches in hydrometeorology.
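The abstract's point about dependency-free operation can be illustrated with a minimal sketch: the forward pass of a small fully connected network needs nothing beyond the language's standard library, in FORTRAN or in Python alike. The layer sizes, tanh hidden activation, and input features below are illustrative assumptions; the paper's actual architecture is not specified in the abstract.

```python
import math

def forward(x, weights, biases):
    """Forward pass of a minimal fully connected NN: tanh hidden
    layers, linear output (suitable for a regression target such as
    rain rate). Hypothetical architecture, standard library only."""
    a = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        # z = W @ a + b, written out without any array library
        z = [sum(w_ij * a_j for w_ij, a_j in zip(row, a)) + b_i
             for row, b_i in zip(W, b)]
        if i < len(weights) - 1:
            a = [math.tanh(v) for v in z]  # hidden layer activation
        else:
            a = z  # linear output layer
    return a

# Toy weights: 3 inputs (e.g. brightness temperatures) -> 2 hidden -> 1 output
W1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
b1 = [0.0, 0.1]
W2 = [[1.0, -1.0]]
b2 = [0.2]
rain_rate = forward([1.0, 0.5, -0.3], [W1, W2], [b1, b2])
```

A trained network of this form is just a set of weight and bias arrays plus a few loops, which is why a legacy FORTRAN implementation can remain competitive in operational settings where installing a deep-learning framework is a burden.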
Original language | English |
---|---|
Article number | 107879 |
Journal | Atmospheric Research |
Volume | 315 |
DOIs | |
State | Published - 1 Apr 2025 |
Keywords
- FORTRAN
- GPM
- GPROF
- Neural network
- PYTHON
- Quantitative precipitation estimates
- Remote sensing