Unlocking the Black Box beyond Bayesian Global Optimization for Materials Design using Reinforcement Learning


Abstract

Materials design often becomes an expensive black-box optimization problem because of the difficulty of balancing exploration-exploitation trade-offs in high-dimensional spaces. We propose a reinforcement learning (RL) framework that navigates these complex design spaces through two complementary approaches: a model-based strategy that uses surrogate models for sample-efficient exploration, and an on-the-fly strategy that learns directly from experimental feedback when it is available. The framework outperforms Bayesian optimization (BO) with the Expected Improvement (EI) acquisition function in high-dimensional spaces (D ≥ 6), producing more dispersed sampling patterns and learning the objective landscape more effectively. We further observe a synergistic effect when BO's early-stage exploration is combined with RL's adaptive learning. Evaluations on standard benchmark functions (Ackley, Rastrigin) and real-world high-entropy alloy data demonstrate statistically significant improvements (p < 0.01) over traditional BO with EI, particularly in complex, high-dimensional scenarios. This work addresses limitations of existing methods while providing practical tools for guiding experiments.
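For readers unfamiliar with the baseline and benchmarks named above, the following is a minimal illustrative sketch, not the authors' implementation: it shows the standard Expected Improvement acquisition function (for maximization, given a surrogate's predictive mean and standard deviation) and the standard Ackley and Rastrigin test functions. The jitter parameter `xi` and the assumption of a Gaussian-process-style surrogate are illustrative choices, not details from the paper.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.01):
    """Standard EI for maximization, given surrogate mean/std at candidates.

    mu, sigma : arrays of predictive means and standard deviations
    f_best    : best objective value observed so far
    xi        : exploration jitter (illustrative default, an assumption here)
    """
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
    z = (mu - f_best - xi) / sigma
    return (mu - f_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def ackley(x):
    """Standard D-dimensional Ackley benchmark (global minimum 0 at the origin)."""
    x = np.asarray(x, dtype=float)
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d)
            + 20.0 + np.e)

def rastrigin(x):
    """Standard D-dimensional Rastrigin benchmark (global minimum 0 at the origin)."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))
```

Both benchmarks are highly multimodal, which is why they are common stress tests for the exploration behavior of BO and RL strategies in dimensions such as D ≥ 6.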
