Artificial Intelligence for Disaster Risk Reduction: Opportunities, challenges, and prospects

21 March 2022
Author(s): Monique Kuglitsch, Arif Albayrak, Raúl Aquino, Allison Craddock, Jaselle Edward-Gill, Rinku Kanwar, Anirudh Koul, Jackie Ma, Alejandro Marti, Mythili Menon, Ivanka Pelivan, Andrea Toreti, Rudy Venguswamy, Tom Ward, Elena Xoplaki, Anthony Rea and Jürg Luterbacher

Artificial intelligence (AI), in particular machine learning (ML), is playing an increasingly important role in disaster risk reduction (DRR) – from the forecasting of extreme events and the development of hazard maps to the detection of events in real time, the provision of situational awareness and decision support, and beyond. This raises several questions: What opportunities does AI present? What are the challenges? How can we address the challenges and benefit from the opportunities? And, how can we use AI to provide important information to policy-makers, stakeholders, and the public to reduce disaster risks? In order to realize the potential of AI for DRR and to articulate an AI for DRR strategy, we need to address these questions and forge partnerships that drive AI in DRR forward.

AI and its use in DRR

Figure 1. Application of AI to the detection and forecasting of natural hazards and disasters, derived from a preliminary literature survey of articles published between 2018 and 2021 with a focus on (future) DRR applications. The results show an overrepresentation of certain natural hazard types, particularly floods, earthquakes and landslides.

AI refers to technologies that mimic or even outperform human intelligence in certain tasks. ML, a subset of AI that includes supervised (e.g., random forests or decision trees), unsupervised (e.g., K-means) and reinforcement (e.g., Markov decision processes) learning, can be described simply as algorithms that learn from data to make classifications or predictions. AI methods offer new opportunities in, for instance, the pre-processing of observational data and the post-processing of forecast model output. This methodological potential is strengthened by novel processor technologies that allow heavy-duty, parallel data processing.
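To make the distinction concrete, here is a minimal, purely illustrative sketch contrasting a supervised learner, which needs labelled examples, with an unsupervised one, which does not. The rainfall values and flood labels are invented toy numbers, not real data:

```python
def fit_threshold(samples, labels):
    """Supervised learning in miniature: learn a decision threshold
    from labelled data (a one-feature 'decision stump')."""
    best_t, best_acc = None, -1.0
    for t in sorted(samples):
        preds = [1 if x >= t else 0 for x in samples]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def kmeans_1d(samples, k=2, iters=20):
    """Unsupervised learning in miniature: group unlabelled data
    into k clusters (K-means)."""
    centres = sorted(samples)[:: max(1, len(samples) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in samples:
            i = min(range(k), key=lambda j: abs(x - centres[j]))
            clusters[i].append(x)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

rain = [2, 3, 4, 5, 48, 52, 55, 60]   # mm/day, toy values
flood = [0, 0, 0, 0, 1, 1, 1, 1]      # labels used only by the supervised case

print(fit_threshold(rain, flood))      # learned decision threshold
print(kmeans_1d(rain))                 # cluster centres found without labels
```

The supervised stump needs the `flood` labels to learn its threshold; K-means recovers the same two regimes from the raw values alone, which is the essential difference between the two families.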

In general, the performance of ML for a given task is predicated upon the availability of quality data and the selection of an appropriate model architecture. Through remote sensing (e.g., from satellites and drones), instrumental networks (e.g., meteorological, hydrometeorological and seismic stations) and crowdsourcing, our foundation of Earth observation data has grown immensely. In addition, model architectures are constantly being refined. ML can therefore be expected to grow more prominent in DRR applications (Sun et al., 2020). For instance, a preliminary survey of recent (2018–2021) literature shows that ML approaches are being used to improve early warning and alert systems and to help generate hazard and susceptibility maps through ML-driven detection and forecasting of various natural hazard types (see Figure 1; note that the survey excludes research that focuses purely on method development without targeting future DRR applications).

This preliminary survey clearly demonstrates that AI-related methods are being applied to help us better manage the impacts of many types of natural hazards and disasters. In the next paragraphs we present four specific examples of where AI is being implemented to support DRR.

In Georgia, the United Nations Development Programme (UNDP) is creating a nation-wide multi-hazard early warning system (MHEWS) to help reduce the exposure of communities, livelihoods and infrastructures to weather and climate-driven natural hazards. For its operation, this system requires accurate forecasts and hazard maps of severe convective events (i.e., hail- and windstorms).

However, developing these products is challenging given the lack of on-site observation networks across the country. Therefore, experts are using AI to create a tool that predicts the probability of a convective event on a specific day at a given location under certain meteorological and climatological conditions. The ML model detects days with a high potential for severe convection resulting in hail- or windstorms by combining the available on-site observations with data from the National Oceanic and Atmospheric Administration’s (NOAA) 70-year Storm Events Database and the European Centre for Medium-Range Weather Forecasts’ (ECMWF) fifth-generation atmospheric reanalysis dataset (ERA5). Using transfer learning, the tool extrapolates from historical data in data-rich regions to other locations worldwide with limited data availability. Finally, a downscaling approach is used to simulate and analyze these events with the Weather Research and Forecasting (WRF) numerical weather prediction model (Skamarock et al., 2019) and the ERA5 data. This approach has shown great potential for forecasting severe convective storms and producing hazard maps in Georgia, a particularly challenging region for hail- and windstorm prediction due to its complex topography.
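The transfer-learning idea can be illustrated with a deliberately simple stand-in model, not the UNDP system itself: pretrain where data are plentiful, then fine-tune briefly where they are scarce. Everything below is hypothetical, including the invented, pre-scaled "CAPE" and "wind shear" features:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, w=None, lr=0.1, epochs=200):
    """Logistic regression via stochastic gradient descent. Passing
    `w` lets training start from weights learned elsewhere - the
    essence of the transfer-learning step."""
    w = list(w) if w else [0.0] * (len(data[0][0]) + 1)
    for _ in range(epochs):
        for x, y in data:
            err = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))) - y
            w[0] -= lr * err
            for i, xi in enumerate(x):
                w[i + 1] -= lr * err * xi
    return w

def predict(w, x):
    return sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))

# Invented (CAPE, wind shear) features in [0, 1]; label 1 marks a
# severe convective day (here, simply when the two sum past 1).
random.seed(0)
rich = []
for _ in range(200):
    x = (random.random(), random.random())
    rich.append((x, 1 if x[0] + x[1] > 1.0 else 0))
sparse = rich[:10]  # stand-in for a data-poor region

w = train(rich)                    # pretrain on the data-rich region
w = train(sparse, w=w, epochs=20)  # brief fine-tuning on sparse data
print(predict(w, (0.9, 0.9)))      # a high-CAPE, high-shear day
```

The fine-tuning pass touches only ten examples yet inherits everything learned from the rich region, which is why the technique suits regions with thin observation networks.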

Figure 2. A photograph of a flash flood in Manzanillo, Mexico (Credit: Ricardo Ursúa).

The second example, which relates to flash floods, also leverages AI to work with limited datasets. Flash floods are particularly hazardous because there is often little or no forewarning of the impending disaster. To detect such events as they occur, it is important to have a dense network of sensors to monitor and detect changes in discharge or stage across the catchment. In Mexico’s Colima River basin, the elevation of which ranges from 100 to 4 300 metres (m), hydrological stations are supplemented by a multi-sensor network consisting of RiverCore sensors (for stage and soil moisture) and weather stations (Mendoza-Cano et al., 2021; Ibarreche et al., 2020; Moreno et al., 2019). The data from these are used to train ML models that can detect flash floods (Figure 2). The results from the ML models are compared with hydrological/hydraulic models, and performance metrics are calculated, including overall accuracy (OA), F1-score and intersection over union (IoU). Due to the success of this use case in Colima, the same methods are being extended to detect flash floods in city tunnels in the Guadalajara metropolitan area.
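Those three metrics all fall out of a binary confusion matrix. A short illustrative sketch, using made-up ten-cell flood masks rather than the Colima data:

```python
def flood_metrics(pred, truth):
    """Overall accuracy, F1-score and IoU for binary flood masks
    (1 = flood), as used to compare ML output with hydraulic models."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    oa = (tp + tn) / len(truth)
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    iou = tp / (tp + fp + fn) if tp else 0.0
    return oa, f1, iou

# Toy flood/no-flood masks (hypothetical):
truth = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
pred  = [0, 1, 1, 1, 0, 0, 0, 1, 0, 0]
print(flood_metrics(pred, truth))  # (OA, F1, IoU)
```

Note that IoU ignores true negatives entirely, which is why it is preferred over plain accuracy when the flooded class covers only a small fraction of the catchment.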

The third example shows how AI can be used in geodesy to detect tsunamis and to avoid issues around sensitive data crossing national borders. Advanced Global Navigation Satellite System (GNSS) real-time processing for positioning and ionospheric imaging offers significant improvements to tsunami early warning. GNSS is used in seismology to study ground displacements as well as to monitor perturbations in ionospheric total electron content (TEC) that commonly follow seismic events. Ten years ago, when Japan’s northern coastal areas were hit by the Tohoku tsunami, it took several days to grasp the full extent of the vast damage. Earth observations, combined with AI and ML, can be used to assess threats (Iglewicz and Hoaglin, 1993) and prepare ahead of time, to evaluate impacts as they unfold (as little as 20 minutes after earthquake occurrence) (Carrano and Groves, 2009), and to respond more quickly in the aftermath to save lives during recovery operations (Martire et al., 2021). Geodesy4Sendai, a Group on Earth Observations (GEO) Community Activity led by the International Association of Geodesy (IAG) and the International Union of Geodesy and Geophysics (IUGG), is participating in a new tsunami early warning collaboration with the International Telecommunication Union (ITU), WMO and United Nations Environment Programme (UNEP) Focus Group on Artificial Intelligence for Natural Disaster Management (FG-AI4NDM). Within the Topic Group on AI for Geodetic Enhancements to Tsunami Monitoring and Detection, experts have started to examine relevant best practices in the use of GNSS data (Astafyeva, 2019; Brissaud and Astafyeva, 2021). Specifically, they are exploring the feasibility of using AI to process GNSS data in countries where exporting real-time data is prohibited by law, and to establish protocols for the development and sharing of export-permitted products derived from AI and related methods.
The group is also considering innovative communication technologies for transmitting real-time GNSS data to countries or regions with limited bandwidth capacity, where using AI for decentralized, data-derived product sharing could enable the transmission of life-saving information over limited communications infrastructure. Such an effort lays the groundwork for expanding the use of these methods in developing countries that suffer from increasing tsunami threats, in addition to climate change impacts such as sea level rise (Meng et al., 2015).
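The outlier-handling reference above (Iglewicz and Hoaglin, 1993) gives a flavour of how an anomalous ionospheric signal might be flagged: their modified z-score is robust to the noise level of the series. This is an illustrative sketch on invented TEC values, not the operational pipeline:

```python
from statistics import median

def modified_z_outliers(series, threshold=3.5):
    """Flag outliers using the modified z-score of Iglewicz and
    Hoaglin (1993): M_i = 0.6745 * (x_i - median) / MAD, where MAD
    is the median absolute deviation."""
    med = median(series)
    mad = median(abs(x - med) for x in series)
    if mad == 0:
        return []  # degenerate series: no spread to measure against
    return [i for i, x in enumerate(series)
            if abs(0.6745 * (x - med) / mad) > threshold]

# Invented TEC time series (TEC units): a co-seismic perturbation
# appears as a sudden spike against a quiet background.
tec = [20.1, 20.3, 20.2, 20.4, 20.2, 27.9, 20.3, 20.1]
print(modified_z_outliers(tec))  # index of the anomalous sample
```

Because both the centre and the spread are medians, a single extreme sample cannot inflate the scale and mask itself, which matters when the anomaly is exactly what one is trying to detect.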

The fourth example explores how AI can be used to provide effective communication in the case of natural hazards and disasters. Specifically, it looks at how AI can help natural disaster responders assess the severity of risk and prioritize when and where to respond. Structured and unstructured data – including risk alert sources; vulnerability, susceptibility and resilience indicators; and news sources – are fed into Operations Risk Insight (ORI), a platform that applies natural language processing and ML to visualize and communicate multi-hazard risks in real time and assist with decision-making.1 As part of the IBM Call for Code Program, which was held between Hurricanes Florence and Michael (autumn of 2018), IBM made ORI available to qualified natural disaster non-profit organizations. Since then, IBM and several nongovernmental organizations (NGOs) have partnered to improve and customize the platform for disaster response leaders. For example, ORI provides Day One Relief, Good360 and Save the Children with customized hurricane and storm alerts as well as layered datasets to generate map overlays that increase situational awareness.2
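The core idea of turning unstructured text into a comparable risk signal can be sketched very simply. Platforms like ORI use trained language models; the keyword lexicon and headlines below are entirely hypothetical stand-ins for that pipeline:

```python
# Hypothetical severity lexicon - a crude stand-in for an NLP model.
SEVERITY = {"flood": 3, "evacuate": 4, "hurricane": 4, "rain": 1, "landfall": 3}

def risk_score(headline):
    """Score a headline by summing severity weights of its words,
    turning free text into a number that can be ranked and mapped."""
    words = headline.lower().replace(",", "").split()
    return sum(SEVERITY.get(w, 0) for w in words)

headlines = [
    "Hurricane makes landfall, residents told to evacuate",
    "Light rain expected over the weekend",
]
# Rank incoming news so responders see the highest-risk items first.
ranked = sorted(headlines, key=risk_score, reverse=True)
print(ranked[0])
```

A production system replaces the lexicon with a trained classifier and adds entity extraction (place names, hazard types), but the output contract is the same: a ranked, comparable risk signal for decision support.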

Figure 3. A schematic of key steps in the AI lifecycle for DRR.

Challenges to the use of AI for DRR

When applying AI for DRR, challenges can appear at any stage of the life cycle (Figure 3): at the data, model development or operational implementation stage.

During the collection and handling of data, it is important to consider: (a) biases in training/testing datasets, (b) new distributed AI technologies within the data domain and (c) ethical issues. In terms of biases in training/testing datasets, it is important to ensure that data are correctly sampled and that there is sufficient representation of each pattern for the problem in question. Consider, for instance, the challenge of building a representative dataset containing examples of extreme events, which are, by nature, rare. Consider, too, the possible costs of failing to provide appropriate data: wrong predictions, for instance, or biased outcomes.
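The rarity problem can be made tangible in a few lines: a "model" that never predicts an event looks impressively accurate on an imbalanced sample while being useless for warning. The event rate and labels below are invented:

```python
import random

random.seed(1)
# Toy labels with a hypothetical 1% extreme-event rate.
labels = [1 if random.random() < 0.01 else 0 for _ in range(1000)]

always_no = [0] * len(labels)  # trivial 'model' that never warns

accuracy = sum(p == y for p, y in zip(always_no, labels)) / len(labels)
recall = (sum(p == 1 and y == 1 for p, y in zip(always_no, labels))
          / max(1, sum(labels)))
print(f"accuracy={accuracy:.2f}, event recall={recall:.2f}")

# One simple mitigation before training: oversample the rare class so
# each pattern is sufficiently represented in the training set.
minority = [y for y in labels if y == 1]
balanced = labels + minority * 99  # roughly 1:1 after replication
```

High accuracy with zero recall is exactly the failure mode the text warns about; evaluating with recall or F1 and rebalancing the sample (by oversampling, undersampling or class weighting) are standard countermeasures.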

Figure 4. Creating AI-based models that can detect certain events, such as tsunamis, can be hindered by data export limitations. This image captures the aftermath of the 2011 Tohoku tsunami (Credit: ArtwayPics, iStock: 510576834).

Once we have ensured that a dataset is not biased, we also need to decide how to integrate new distributed AI technologies within the data domain. Strategic changes in the construction of space-based instruments, such as constellations of multiple small satellites3, and the introduction of edge computing (Nikos et al., 2018) have resulted in petabytes of data. Because AI relies on data transmission and on the computation of complex ML algorithms, centralized data processing and management can pose difficulties. On the one hand, real-time disaster applications require strong partnerships and data sharing between countries (recall the tsunami use case; Figure 4). On the other hand, ML algorithms are often operated in a centralized fashion, requiring training data to be fused in data servers. A centralized approach can also introduce additional challenges, such as privacy risks to personal and country-specific data. Furthermore, centralized data processing and management can limit transparency, which could lead to a lack of trust from end users as well as difficulty in complying with regulations (e.g., GDPR).
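Federated learning is one decentralized pattern that addresses exactly this tension: raw data stay with each country or institution, and only model parameters travel to a coordinator for averaging. A toy sketch of federated averaging on a one-parameter linear model, with invented sensor readings:

```python
def local_update(w, data, lr=0.1, epochs=5):
    """One client's local training: fit a 1-D linear model y = w*x by
    gradient descent on data that never leaves the client."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x
    return w

def federated_average(global_w, clients):
    """FedAvg in miniature: every client trains locally, and only the
    resulting weights (never the raw data) are averaged centrally."""
    local_ws = [local_update(global_w, data) for data in clients]
    return sum(local_ws) / len(local_ws)

# Two countries' private sensor records, both roughly following
# y = 2x (invented numbers):
client_a = [(1.0, 2.1), (2.0, 3.9)]
client_b = [(1.5, 3.0), (3.0, 6.1)]

w = 0.0
for _ in range(10):  # ten communication rounds
    w = federated_average(w, [client_a, client_b])
print(round(w, 1))
```

The coordinator only ever sees weights, so a shared model emerges without either dataset crossing a border; real systems weight the average by client dataset size and add secure aggregation on top.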

Another data-related challenge is tied to ethical considerations. These centre on how AI-driven tools ought to be implemented, from development to deployment, ensuring, for example, that socio-economic biases in the underlying data are not propagated through the models developed by the system. Such principles are championed so that the potential harms associated with AI, such as underrepresentation due to bias (whether technical or human), can be mitigated, if not removed, and so that the benefits of AI can be realized for all, especially those made more vulnerable by the impacts of natural hazards.4

After a dataset has been curated, we also need to consider challenges at the model development stage. Here, we focus on computational demands and transparency. AI models tend to rely on complex structures and, as a result, can be computationally expensive to train. For example, the VGG16 model (Simonyan and Zisserman, 2015), which is used for image classification, has approximately 138 million trainable parameters. Training models of this size requires large and expensive computing capacity, which is not always accessible.
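Parameter counts of this kind are easy to estimate: each fully connected layer contributes (inputs + 1) × outputs weights and biases, which is enough to see why model sizes explode. The layer sizes below are illustrative; the second call mirrors the 4096-wide dense layers found at the end of VGG-style networks:

```python
def dense_params(sizes):
    """Trainable parameters of a fully connected network: each layer
    has (n_in + 1) * n_out parameters (weights plus biases)."""
    return sum((a + 1) * b for a, b in zip(sizes, sizes[1:]))

print(dense_params([784, 128, 10]))      # a small image classifier
print(dense_params([4096, 4096, 1000]))  # two VGG-style dense layers
```

Two dense layers alone account for roughly 21 million parameters here, so the convolutional stack plus classifier head of a full VGG16 reaching 138 million is unsurprising, and so is the hardware bill for training it.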

Once an AI model is developed, it is important that its results are humanly understandable and acceptable. This can be challenging to achieve because there is no general, out-of-the-box human-machine interface that explains how and why certain decisions are made by the AI model.

Consequently, many researchers are working toward developing trustworthy AI solutions. In modelling and model evaluation, for instance, it is important to have a precise formulation of the problem and the requirements and expectations of the AI-based solution. Only then can a suitable model and learning strategy be developed to tackle the problem. Moreover, understanding the precise setup also helps in choosing and developing corresponding evaluation criteria.

For an AI-based model that is deemed ready for operational implementation, it is important to consider the aforementioned data- and model development-related challenges as well as the challenges of notifying users, which arise with AI-based communications technologies. To improve and facilitate interpretation, AI model outputs need to be translated and visualized according to end-user needs. Therefore, it is critical that stakeholders – from local communities to emergency system managers and NGO disaster response leaders – be included in the design and evaluation of alert and early warning systems, forecasts, hazard maps, decision support systems, dashboards, chatbots and other AI-enhanced communications tools. Timely feedback on, and evaluation of, AI model insights from disaster responders is essential to improve the quality and precision of those insights. Transparency about the data sources ingested, the frequency of data refresh and the algorithms used in the communication tools is essential for building trust and refining ML-based recommendations. As with traditional modelling approaches, conveying the confidence levels, uncertainties and limitations of an AI-enhanced system in an understandable way is crucial for informed decision-making. Ultimately, building trust in timely and fully transparent AI-based communications tools is the biggest challenge to be overcome. This requires effective collaboration among experienced disaster responders, AI developers, geoscientists, regulators, government agencies, NGOs, telecommunications companies and others, because each disaster type is unique and each region has different vulnerabilities and levels of resilience.
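One small example of translating raw model output for end users: mapping a predicted probability, together with an uncertainty estimate, to an alert category rather than presenting a bare number. The thresholds and wording below are illustrative, not operational guidance:

```python
def alert_level(prob, uncertainty):
    """Translate a model's event probability plus an uncertainty
    estimate into a user-facing alert category (illustrative
    thresholds, not operational guidance)."""
    if uncertainty > 0.3:
        return "inconclusive: seek additional observations"
    if prob >= 0.7:
        return "red: act now"
    if prob >= 0.4:
        return "amber: prepare"
    return "green: monitor"

print(alert_level(0.85, 0.10))  # confident, high probability
print(alert_level(0.85, 0.45))  # same probability, low confidence
```

Checking uncertainty first encodes the point made above: an 85% probability the model is not confident about should be communicated differently from an 85% probability it is, and that distinction belongs in the interface, not just in a technical appendix.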

Efforts to address challenges to the use of AI for DRR

Concerted efforts are being made to address the many challenges of using AI for DRR and to facilitate its use. These efforts support greater data availability, provide tools and packages to assist with AI development, enhance model explainability, offer new applications for AI-based methods (e.g., digital twins) and contribute to the development of standards.

As already highlighted, one of the biggest challenges when developing an AI algorithm for DRR is the collection of data with correct sampling and sufficient representation of each pattern for a given problem. Here, open datasets (or “benchmarking” datasets)5,6,7 can be a valuable resource. By open sourcing their data, teams hope to allow other researchers to use the data collected to improve and augment existing solutions. To achieve this goal, the data provided must be well documented – including metadata – and accessible. Steps should be taken to block, remove or edit the data to avoid the inadvertent release of personally identifiable information. Furthermore, it is advisable to provide clear documentation on how to download and begin working with the data. Many teams open source their projects with stellar documentation but fail to see a rise in use cases due to a lack of discoverability. This can be resolved by providing links to the open data on Google Datasets, Kaggle, GitHub or other data discovery platforms. GEO, NASA, the European Space Agency and others have created guidelines and/or databases to support open sourcing data.

Figure 5. Innovative (AI-based) tools can automate the identification of atmospheric phenomena and natural disasters – such as hurricanes – from satellite imagery. (Annotated image retrieved from SpaceML’s NASA Worldview Search tool, which shows Hurricane Sam over the Atlantic Ocean on 29 September 2021 as captured by NOAA-20/VIIRS.)

Alongside open-source data, AI developers can benefit from an array of tools that assist with the major aspects of AI deployment: data gathering, model development, model deployment and model retraining/monitoring. Within each of these aspects, there are several proprietary and open-source tools for AI developers. For example, many scientists rely on open-source imagery being hand-labelled by a research team; shared file systems that assist with data collection and automate annotation (e.g., of relevant features in satellite imagery; Figure 5) can increase efficiency. Once data have been labelled, machine learning/data science practitioners should use the packages they are most familiar with (e.g., TensorFlow, Keras and PyTorch in Python). Many popular model architecture and training frameworks simplify AI efforts; PyTorch Lightning, for instance, is built on top of PyTorch and helps structure model and training code. Lastly, with respect to model deployment and monitoring, there are solutions that can be run internally (i.e., without the cloud). This requires a dedicated model server with guarantees on model availability and latency. Before running such a solution, however, it would be wise to consider the use case, the cost of resources, the number of trained staff needed to ensure the model’s availability and, lastly, how often the model will need to be retrained. Systems such as AWS Lambda and API Gateway, SageMaker, Google AI Platform and Watson model deployment manage servers for ML-specific tasks but still require on-call machine learning/data science resources to ensure model accuracy, retraining and availability.

When a model has been developed, one caveat to its use for high-stakes applications is the “black box” predicament: how can we trust the model if we cannot unpack its decision-making? Explainable AI (XAI) is a highly active research field producing tools that can be used during different stages of the AI lifecycle. For instance, AI models are often trained on a large dataset to obtain very high accuracies, but the reasons why a certain model performs better or worse than another are often not clear. Using XAI tools such as integrated gradients (Sundararajan et al., 2017) or layer-wise relevance propagation (Bach et al., 2015), one can analyze the model and the feature importance it has learned from the input data to determine what is most relevant for a prediction. Moving from such local to global XAI methods, data imbalances can also be discovered and artifacts can even be unlearned (Anders et al., 2022).
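To give a concrete sense of one such tool: integrated gradients attributes a prediction to input features by averaging gradients along a straight path from a baseline to the input. A self-contained numerical sketch on a toy model with made-up weights, checking the method's "completeness" property (attributions sum to the change in the model's output):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def model(x, w=(2.0, -1.0, 0.5)):
    """Toy differentiable model with hypothetical fixed weights."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def grad(x, i, eps=1e-6):
    """Numerical partial derivative of the model w.r.t. feature i."""
    xp = list(x)
    xp[i] += eps
    return (model(xp) - model(x)) / eps

def integrated_gradients(x, baseline, steps=100):
    """Average the gradient of each feature along the straight path
    from baseline to x, then scale by the feature's displacement
    (Sundararajan et al., 2017)."""
    attrs = []
    for i in range(len(x)):
        total = 0.0
        for k in range(1, steps + 1):
            point = [b + k / steps * (xi - b) for b, xi in zip(baseline, x)]
            total += grad(point, i)
        attrs.append((x[i] - baseline[i]) * total / steps)
    return attrs

x, base = [1.0, 0.5, 2.0], [0.0, 0.0, 0.0]
attrs = integrated_gradients(x, base)
# Completeness check: attributions should sum to f(x) - f(baseline).
print(sum(attrs), model(x) - model(base))
```

In this toy setup the second feature receives a negative attribution (its weight is negative), illustrating how the method separates features that pushed the prediction up from those that pushed it down; real deployments compute the gradients via automatic differentiation rather than finite differences.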

Revolutionary opportunities to leverage AI to enhance DRR approaches and services are motivating the sharing of open-source data, the development of tools and the enhancement of AI-related research (e.g., in XAI). For instance, digital twins of the Earth (i.e., digital replicas of the Earth system and its components) are expected to trigger key advances in building innovative digital ecosystems (Nativi et al., 2021), with user/service-oriented federations of GPU-CPU high-performance computers as well as dedicated software infrastructure (Bauer et al., 2021). In this context, the European Commission has launched the Destination Earth initiative, with some of the first identified twins and use cases being DRR-oriented. AI will play a key role in the implementation and effective use of digital twins, enabling, for instance, the full coupling and representation of the human component as part of the Earth system.

Another important activity that can support the implementation of AI in DRR is standardization; that is, the creation of internationally recognized guidelines. Core standardization activities within the disaster management sphere are currently being undertaken by international standards developing organizations (SDOs), including the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and ITU. Other United Nations agencies, including WMO, UNEP, United Nations Office for Disaster Risk Reduction (UNDRR) and World Food Programme (WFP), are also contributing to the production of technical regulations, frameworks, recommended practices and de-facto standards within this field.

While these technology-centric standards are generally aimed at employing existing ICT solutions to improve the operational efficiency of early warning systems and to maintain the services required for disaster recovery, the standardization of AI for DRR has remained largely uncharted territory. Recognizing this, in December 2020, ITU, together with WMO and UNEP, established the Focus Group on Artificial Intelligence for Natural Disaster Management. The Focus Group is currently (a) examining how AI could be used for the different types of natural hazards that can cascade into disasters and (b) drafting best practices related to the use of AI in supporting modelling across spatiotemporal scales and in providing effective communication during such events. The Focus Group has ten active topic groups exploring the use of AI for floods, tsunamis, insect plagues, landslides, snow avalanches, wildfires, vector-borne diseases, volcanic eruptions, hail- and windstorms, and multi-hazards, and is actively reviewing proposals on additional topics. To identify and understand the standardization gaps in this application area, the Focus Group is also developing a roadmap of existing standards and technical guidelines on this topic from different international, national and regional SDOs. This roadmap will help identify future areas requiring attention on the standardization front. In addition, the Focus Group is preparing a glossary that maps the existing terms and definitions associated with the topic to ensure clear, unambiguous communication and consistency within the DRR standardization stream.

Next steps…

Within the field of DRR, there is considerable interest in exploring the benefits of using AI to bolster existing methods and strategies. This article introduced several use cases demonstrating how AI-based models are enhancing DRR; however, it also showed that AI comes with challenges. Fortunately, the promise of AI in DRR has motivated research to find solutions to these challenges and has inspired new partnerships, bringing together experts from multiple United Nations agencies, from various scientific fields (computer science, the geosciences), from diverse sectors (from academia to NGOs) and from around the globe. Such partnerships are key for driving AI in DRR forward. In particular, we believe that efforts are still needed to create educational materials that support capacity building, to ensure the availability of computational resources and other hardware, and to bridge the digital divide. Only in this way can we make sure that no one is left behind as AI for DRR advances.

For members of the WMO community with an interest in learning more about the use of AI for DRR, many committees, conferences and reports can serve as resources. For instance, the American Meteorological Society’s Committee on Artificial Intelligence Applications to Environmental Science and the Climate Change AI initiative offer opportunities to liaise with other experts in this field. The “AI for Earth Sciences” session at the recent Neural Information Processing Systems (NeurIPS) meeting and the “Artificial Intelligence for Natural Hazard and Disaster Management” session at the upcoming European Geosciences Union General Assembly are two examples of conferences featuring groundbreaking research and use cases. Finally, reports such as “Responsible AI for Disaster Risk Management: Working Group Summary” can provide additional guidance.

Footnotes

 

Authors

By Monique Kuglitsch, Fraunhofer Heinrich Hertz Institute, Germany; Arif Albayrak, NASA Goddard Space Flight Center, USA; Raúl Aquino, Universidad de Colima, Mexico; Allison Craddock, NASA Jet Propulsion Laboratory and California Institute of Technology, USA; Jaselle Edward-Gill, Fraunhofer Heinrich Hertz Institute, Germany; Rinku Kanwar, IBM, USA; Anirudh Koul, Pinterest, USA; Jackie Ma, Fraunhofer Heinrich Hertz Institute, Germany; Alejandro Marti, Mitiga Solutions and Barcelona Supercomputing Center, Spain; Mythili Menon, International Telecommunication Union; Ivanka Pelivan, Fraunhofer Heinrich Hertz Institute, Germany; Andrea Toreti, European Commission Joint Research Centre, Italy; Rudy Venguswamy, Pinterest, USA; Tom Ward, IBM, USA; Elena Xoplaki, Justus Liebig University Giessen, Germany; and Anthony Rea and Jürg Luterbacher, WMO Secretariat

 

References

Anders, C. J., L. Weber, D. Neumann, W. Samek, K.-R. Müller, S. Lapuschkin, 2022: Finding and removing Clever Hans: Using explanation methods to debug and improve deep models. Information Fusion, 77, 261-295.

Astafyeva, E., 2019: Ionospheric detection of natural hazards. Reviews of Geophysics, 57, 1265-1288. doi: 10.1029/2019RG000668

Bach, S., A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, and W. Samek, 2015: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10(7), e0130140.

Bauer, P., P.D. Dueben, T. Hoefler, T. Quintino, T.C. Schulthess, and N.P. Wedi, 2021: The digital evolution of Earth-system science. Nature Computational Science 1, 104-113.

Brissaud, Q., and E. Astafyeva, 2021: Near-real-time detection of co-seismic ionospheric disturbances using machine learning, Geophysical Journal International, in review.

Carrano, C., and K. Groves, 2009: Ionospheric Data Processing and Analysis. Workshop on Satellite Navigation Science and Technology for Africa. The Abdus Salam ICTP, Trieste, Italy.

Ibarreche, J., R. Aquino, R.M. Edwards, V. Rangel, I. Pérez, M. Martínez, E. Castellanos, E. Álvarez, S. Jimenez, R. Rentería, A. Edwards, and O. Álvarez, 2020: Flash Flood Early Warning System in Colima, Mexico. Sensors 20(18), 5231. doi: https://doi.org/10.3390/s20185231

Iglewicz, B., and D. C. Hoaglin, 1993: Volume 16: How to Detect and Handle Outliers. The ASQC Basic References in Quality Control: Statistical Techniques.

Lu, Y., L. Luo, D. Huang, Y. Wang, and L. Chen, 2020: Knowledge Transfer in Vision Recognition. ACM Computing Surveys 53(2), 1-35. doi: https://doi.org/10.1145/3379344

Martire, L., V. Constantinou, S. Krishnamoorthy, P. Vergados, A. Komjathy, X. Meng, Y. Bar-Sever, A. Craddock, and B. Wilson, 2021: Near Real-Time Tsunami Early Warning System Using GNSS Ionospheric Measurements. American Geophysical Union, New Orleans, Louisiana, USA.

Mendoza-Cano, O., R. Aquino-Santos, J. López-de la Cruz, R. M. Edwards, A. Khouakhi, I. Pattison, V. Rangel-Licea, E. Castellanos-Berjan, M. A. Martinez-Preciado, P. Rincón-Avalos, P. Lepper, A. Gutiérrez-Gómez, J. M. Uribe-Ramos, J. Ibarreche, and I. Perez, 2021: Experiments of an IoT-based wireless sensor network for flood monitoring in Colima, Mexico. Journal of Hydroinformatics 23(3), 385-401. doi: https://doi.org/10.2166/hydro.2021.126

Meng, X., A. Komjathy, O. P. Verkhoglyadova, Y.-M. Yang, Y. Deng, and A. J. Mannucci, 2015: A new physics-based modeling approach for tsunami-ionosphere coupling. Geophysical Research Letters 42, 4736–4744. doi:10.1002/2015GL064610

Moreno, C., R. Aquino, J. Ibarreche, I. Pérez, E. Castellanos, E. Álvarez, R. Rentería, L. Anguiano, A. Edwards, and P. Lepper, 2019: RiverCore: IoT device for river water level monitoring over cellular communications. Sensors 19(1), 127. doi: https://doi.org/10.3390/s19010127

Nativi, S., P. Mazzetti, and M. Craglia, 2021: Digital ecosystems for developing digital twins of the Earth: the Destination Earth case. Remote Sensing 13, 2119.

Nikos, K., M. Avgeris, D. Dechouniotis, K. Papadakis-Vlachopapadopoulos, I. Roussaki, and S. Papavassiliou, 2018: Edge Computing in IoT Ecosystems for UAV-Enabled Early Fire Detection. IEEE International Conference on Smart Computing (SMARTCOMP), 106-114. doi: 10.1109/SMARTCOMP.2018.00080

Simonyan, K. and A. Zisserman, 2015: Very Deep Convolutional Networks for Large-Scale Image Recognition. ICLR.

Skamarock, W.C., J.B. Klemp, J. Dudhia, D.O. Gill, L. Zhiquan, J. Berner, W. Wang, J.G. Powers, M.G. Duda, D.M. Barker, and X. Y. Huang, 2019: A Description of the Advanced Research WRF Model Version 4. NCAR Technical Note NCAR/TN-475+STR. http://library.ucar.edu/research/publish-technote

Sun, W., P. Bocchini, and B.D. Davison, 2020: Applications of artificial intelligence for disaster management. Natural Hazards 103, 2631–2689. doi: https://doi.org/10.1007/s11069-020-04124-3

Sundararajan, M., A. Taly, and Q. Yan, 2017: Axiomatic attribution for deep networks. Proceedings of the 34th International Conference on Machine Learning, 3319–3328.

Truong, N., K. Sun, S. Wang, F. Guitton, and Y. Guo, 2021: Privacy preservation in federated learning: An insightful survey from the GDPR perspective. Computers & Security 110, 102402. doi: https://doi.org/10.1016/j.cose.2021.102402
