A video-based rainfall estimation method using a stacked residual temporal convolutional network
Abstract
To densify the "Three Defense Lines" rainfall monitoring network, this study leverages widely distributed surveillance video data and proposes a novel video-based rainfall estimation method. By exploiting the temporal dynamics of rainfall video frame sequences, a hybrid CNN+TCN rainfall estimation model is established that integrates a RegNetY backbone with a stacked temporal convolutional network (TCN) architecture. Experimental results show that the model performs strongly in comparisons against rain gauges: the goodness-of-fit metrics (R², NSE, and KGE) all exceed 0.976, and the best error metrics (MAE and MAPE) reach 0.799 mm/h and 3.79%, respectively. The multilayer residual TCN structure effectively strengthens temporal feature extraction, keeping rainfall intensity estimation stable across varying frame sequence lengths. This study provides a lightweight technical solution for high-precision rainfall monitoring.
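The core building block named in the abstract, a residual TCN layer, combines a dilated causal convolution with an identity skip connection, and stacking such blocks with exponentially growing dilation widens the temporal receptive field. The following is a minimal illustrative sketch of that idea, not the authors' implementation; the single shared filter, ReLU activation, and single-channel input are simplifying assumptions made for brevity.

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation):
    """Causal dilated 1-D convolution: output[t] depends only on
    x[t], x[t-dilation], x[t-2*dilation], ... (no future leakage)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad with zeros
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

def residual_tcn_block(x, w, dilation):
    """One residual TCN block: ReLU(dilated causal conv) plus identity skip."""
    h = np.maximum(causal_dilated_conv1d(x, w, dilation), 0.0)
    return x + h  # residual connection preserves the input signal path

def stacked_tcn(x, w, n_blocks):
    """Stack blocks with dilations 1, 2, 4, ... so the receptive field
    grows exponentially with depth while causality is preserved."""
    for i in range(n_blocks):
        x = residual_tcn_block(x, w, dilation=2 ** i)
    return x
```

In a full model such as the one described here, the per-frame feature vectors produced by the CNN backbone would take the place of the scalar sequence `x`, with learned multi-channel filters in each block.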