Making the Black Box Transparent: State of the Art in Explainable Machine Learning for Structural Design and Assessment
Abstract
Machine learning (ML)-based solutions have gained traction in various structural engineering applications, from structural design to assessment and monitoring. Nevertheless, the black-box nature of advanced ML models, and the resulting lack of interpretability and transparency, remains among the primary barriers to their broader adoption and implementation in the field. eXplainable ML (XML) is an interdisciplinary field that seeks to improve the understanding of how ML models behave and arrive at their predictions. Despite the potential of XML to increase the accessibility of ML, the scattered available literature and the lack of a domain-specific, holistic review of XML have created a significant knowledge gap for its application in structural engineering. This paper therefore presents a targeted review of XML definitions, nomenclature and taxonomy, frequently used algorithms, and domain-specific literature. In addition, three case studies illustrate different classes of XML algorithms applied to diverse structural engineering problems at the component, structure, and inventory levels. These case studies provide insight into how XML techniques can yield engineering-oriented interpretations that deepen understanding of the problems studied.