Paper Title
Satellite Image and Machine Learning based Knowledge Extraction in the Poverty and Welfare Domain
Paper Authors
Paper Abstract
Recent advances in artificial intelligence and machine learning have created a step change in how to measure human development indicators, in particular asset-based poverty. The combination of satellite imagery and machine learning can estimate poverty at a level similar to what is achieved with workhorse methods such as face-to-face interviews and household surveys. An increasingly important issue beyond static estimation is whether this technology can contribute to scientific discovery and, consequently, new knowledge in the poverty and welfare domain. A foundation for achieving scientific insights is domain knowledge, which in turn translates into explainability and scientific consistency. We review the literature, focusing on three core elements relevant in this context: transparency, interpretability, and explainability, and investigate how they relate to the nexus of poverty, machine learning, and satellite imagery. Our review of the field shows that the status of the three core elements of explainable machine learning (transparency, interpretability and domain knowledge) is varied and does not completely fulfill the requirements set up for scientific insights and discoveries. We argue that explainability is essential to support wider dissemination and acceptance of this research, and that explainability means more than just interpretability.
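To make the satellite-imagery-plus-machine-learning pattern referenced in the abstract concrete, the following is a minimal, hypothetical sketch (not the reviewed papers' actual pipelines): a pretrained CNN is used as a fixed feature extractor over satellite image tiles, and a simple linear regressor maps the resulting embeddings to a survey-derived wealth/asset index. All data here are synthetic stand-ins, and the model names and parameters are illustrative assumptions.

```python
# Hypothetical sketch of satellite-image-based poverty estimation.
# Synthetic tiles and wealth indices stand in for real imagery and survey data.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Stand-in data: 64 "satellite tiles" (3x224x224) and one wealth index per tile.
tiles = torch.rand(64, 3, 224, 224)
wealth_index = np.random.rand(64)

# Pretrained backbone as a fixed feature extractor (final classification layer removed).
backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

with torch.no_grad():
    features = backbone(tiles).numpy()  # shape (64, 512): one embedding per tile

# Linear model on top: coefficients are inspectable, and cross-validated R^2
# gives a rough analogue of the "estimation accuracy" claims in this literature.
model = Ridge(alpha=1.0)
scores = cross_val_score(model, features, wealth_index, cv=4, scoring="r2")
print("cross-validated R^2:", scores.mean())
```

A linear head on frozen CNN features keeps at least the final prediction step transparent and interpretable; the abstract's point is that this alone does not amount to explainability, which additionally requires domain knowledge and scientific consistency.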