Paper Title
High Temporal Resolution Rainfall Runoff Modelling Using Long-Short-Term-Memory (LSTM) Networks
Paper Authors
Paper Abstract
Accurate and efficient models for rainfall runoff (RR) simulation are crucial for flood risk management. Most rainfall models in use today are process-driven; i.e., they solve either simplified empirical formulas or some variation of the St. Venant (shallow water) equations. With the development of machine-learning techniques, we may now be able to emulate rainfall models using, for example, neural networks. In this study, a data-driven RR model using a sequence-to-sequence Long-Short-Term-Memory (LSTM) network was constructed. The model was tested for a watershed in Houston, TX, known for severe flood events. The LSTM network's capability to learn long-term dependencies between its input and output allowed modeling RR at high temporal resolution (15 minutes). Using 10 years of precipitation data from 153 rainfall gages and river channel discharge data (more than 5.3 million data points), and by designing several numerical tests, the performance of the developed model in predicting river discharge was evaluated. The model results were also compared with the output of a process-driven model, the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model. Moreover, the physical consistency of the LSTM model was explored. The results showed that the LSTM model was able to efficiently predict discharge and achieve good model performance. Compared to GSSHA, the data-driven model was more efficient and robust in terms of prediction and calibration. Interestingly, the performance of the LSTM model improved (test Nash-Sutcliffe model efficiency rising from 0.666 to 0.942) when a selected subset of rainfall gages, chosen based on model performance, was used as input instead of all rainfall gages.
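The abstract's claim that LSTMs can capture long-term dependencies rests on the gated cell state, which carries information across many time steps without vanishing. The following is a minimal NumPy sketch of a single LSTM cell stepped over a sequence, purely to illustrate the mechanism the paper builds on; it is not the authors' model, and all weight shapes, the readout layer, and the toy 15-minute sequence length are illustrative assumptions.

```python
import numpy as np

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) biases, stacked as [input, forget, cell, output]."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = 1.0 / (1.0 + np.exp(-z[0:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))      # forget gate
    g = np.tanh(z[2*H:3*H])                  # candidate cell update
    o = 1.0 / (1.0 + np.exp(-z[3*H:4*H]))    # output gate
    c = f * c_prev + i * g                   # cell state: the long-term memory path
    h = o * np.tanh(c)                       # hidden state / output
    return h, c

# Toy run: step over 96 15-minute intervals (one day) of random
# rainfall-like features; dimensions are arbitrary for illustration.
rng = np.random.default_rng(0)
D, H, T = 3, 8, 96
W = rng.normal(0.0, 0.1, (4 * H, D))
U = rng.normal(0.0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    h, c = lstm_cell(rng.normal(size=D), h, c, W, U, b)
w_out = rng.normal(0.0, 0.1, H)
discharge = float(w_out @ h)   # scalar readout standing in for predicted flow
```

Because the forget gate `f` multiplies the previous cell state rather than squashing it through an activation, gradients can flow across long spans of the 15-minute series, which is what makes sub-hourly RR modeling with long rainfall histories feasible.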
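The Nash-Sutcliffe efficiency (NSE) quoted in the abstract (0.666 to 0.942) is a standard hydrological skill score: 1 is a perfect fit, 0 means the model is no better than predicting the observed mean, and negative values are worse than the mean. A small sketch of the standard formula (the variable names are mine):

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    sse = np.sum((observed - simulated) ** 2)
    var = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - sse / var

obs = np.array([1.0, 2.0, 3.0, 4.0])
print(nse(obs, obs))                       # perfect fit -> 1.0
print(nse(obs, np.full(4, obs.mean())))    # mean predictor -> 0.0
```

On this scale, the reported jump from 0.666 to 0.942 when using only a selected subset of gages is a substantial improvement in explained variance of the discharge series.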