Where to Position Matlantis Among Computation, Experiment, and Machine Learning Research Design for Materials Discovery — Insights from CCEL at Seoul National University

Seoul National University, CCEL (Computational Catalysis and Energy Laboratory)
Sector: Academia
CCEL (Computational Catalysis and Energy Laboratory) is a research group that designs catalytic and energy materials based on computational materials science. Starting from computational materials design, the group validates hypotheses through synthesis and evaluation experiments, and then feeds the resulting insights into knowledge accumulation and AI-driven approaches—forming a cyclic research process. To date, CCEL has published more than 330 SCIE-indexed papers, including articles in international journals such as Nature, Advanced Materials, and Energy & Environmental Science. Among these, 183 papers were published in journals with an Impact Factor above 10, and 40 were selected as cover articles. The group has an h-index of 72, with total citations exceeding 18,000.

Professor Jeong Woo Han, Department of Materials Science and Engineering, Seoul National University

What we aim for is not simply whether “a prediction is correct,” but to fundamentally understand why a particular catalyst exhibits high activity.

Advances in computational chemistry and machine learning have greatly expanded the toolkit for materials research. At the same time, deciding when to use each method and how to connect it to experiments is something every laboratory must work out for itself.

Led by Professor Jeong Woo Han (Department of Materials Science and Engineering, Seoul National University), CCEL has pursued the design of catalytic and energy materials by integrating computation, experiment, data, and machine learning within a single laboratory environment. As one approach within that integrated workflow, the group has adopted and actively used Matlantis.

In this article, we asked CCEL how they move back and forth between computation and experiment, and how they position Matlantis while also leveraging other machine-learning methods—covering both their philosophy and practical usage.


Q. First, could you tell us about your research area and your laboratory (CCEL)?

Prof. Han:
CCEL is a laboratory that designs catalysts and energy materials based on computational chemistry. Our main research areas include catalysts for greenhouse-gas mitigation, hydrogen fuel cells, solid oxide fuel cells (SOFC), lithium–sulfur batteries, electrochemical CO₂ reduction, and water-splitting reactions—covering catalysts central to energy conversion and storage.

What we value most is not simply deriving theoretical numbers, but accurately elucidating reaction mechanisms at the atomic level and making tangible contributions by proposing new materials that did not previously exist.

In particular, we use DFT calculations, multiscale modeling, and machine-learning interatomic potentials (MLIPs) to screen candidate materials, and then validate them experimentally—building a synergy process between theory and experiment.

(Image, right) Prof. Jeong Woo Han, Department of Materials Science and Engineering, Seoul National University

Q. Your lab seems to place strong emphasis on experiments—such as synthesis and characterization—while being rooted in computational materials science. How do you see this?

Prof. Han:
That is exactly right. We consider it important not to separate computation and experiment, but to conduct both within the same laboratory.

We use computation to predict reaction mechanisms and material behavior, and then verify those predictions through experiments. Even when the experimental results do not match the calculations, discussing why that happened is itself at the core of research.

Because we can complete this back and forth internally, we can flexibly move research forward while continuously revisiting computational settings and experimental design.

In practice, we have conducted many studies that combine computation and experiment, and we have published 41 papers that include experiments performed within our laboratory. Many of our works published in major journals such as Nature Communications, Advanced Materials, and Energy & Environmental Science also progressed through a combined computational-and-experimental approach.

For example, in studies on metal nanoparticle exsolution in perovskite oxides, oxygen reduction reaction (ORR) catalyst design, and catalysts for CH₄ oxidation and CO₂ reduction, we first used DFT to understand reaction mechanisms, predicted catalyst compositions, and then validated them through synthesis and electrochemical evaluation.

While many frontier research teams—both in Korea and internationally—specialize in either computation or experiment, our laboratory conducts both in-house and moves between them. Running the entire catalyst-design process as a single cycle—computation, experiment, and more recently AI—is a key feature of our group.

In Korea, there are only a limited number of teams that integrate AI, computation, and experiment and operate the full catalyst-design process as an organic cyclic structure within one laboratory, and we consider ourselves one of them.

Q. Why do you place such importance on doing both computation and experiment within the same lab?

Prof. Han:
We believe it is difficult to fully understand material behavior using only one of the two approaches. Computation alone cannot fully predict complex phenomena under realistic reaction environments, while experiments alone cannot clearly explain what is happening at the atomic scale.

Our goal is not simply whether “predictions are correct,” but to fundamentally understand why a specific catalyst shows high activity, and which atomic arrangements and electronic states contribute to improved performance.

The biggest reason is that we can maximize research efficiency through a rapid feedback cycle between computation and experiment within the laboratory. We ourselves have repeatedly achieved outcomes by running that cycle in our research.

For example, in our studies on single-atom catalysts (SACs) based on 3d transition metals such as Fe, Co, and Ni, we systematically organized how the coordination environment and electronic structure around the metal center affect ORR activity using DFT, and then validated those hypotheses experimentally.

In particular, in our work on Fe dual-atom catalysts, we predicted—through computation—what synergistic effects arise from active sites formed by two adjacent Fe atoms compared with single-atom sites, and confirmed performance improvements through synthesis and electrochemical evaluation. This work was published in ACS Energy Letters.

In this way, continuous interaction between computational prediction and experimental validation, which advances catalyst design step by step, is our research philosophy—and the reason we perform both computation and experiment in-house.

Can the Computational Results Be Experimentally Verified—and Do They Match?

Q. As you move back and forth between computation and experiment, how do you view “accuracy” in computation, and at what stage do you judge it as sufficient?

Prof. Han:
This is a very important question. We set two main criteria when evaluating computational accuracy.

The first is quantitative comparison with experimental data—for example, checking to what extent computed values such as adsorption energies or activation barriers align with experimentally observed trends.

The second is whether qualitative trends can be reproduced. We place more emphasis on whether the relative ordering and trends across different catalyst compositions match experiments, rather than focusing only on the absolute values. If this can be reproduced, we judge that the computation has sufficient predictive power for materials design.
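As a toy illustration of this kind of trend check, one can compare the computed and experimental rankings of a few catalyst compositions with a rank correlation such as Kendall’s tau, where +1 means the ordering is fully reproduced. The energies and helper function below are purely illustrative, not taken from CCEL’s actual data:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation: +1 means identical ordering, -1 means reversed."""
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical adsorption energies (eV) for four catalyst compositions.
computed     = [-1.92, -1.45, -1.10, -0.78]
experimental = [-1.80, -1.52, -1.05, -0.70]

tau = kendall_tau(computed, experimental)
print(tau)  # 1.0 -> the qualitative trend is fully reproduced
```

Even when absolute values differ between computation and experiment, a tau near +1 indicates that the relative ordering across compositions is preserved, which is the criterion described above.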

In fact, in a screening study of 3d transition metals, there was a case where a computational prediction—that V-doped TiO₂ improves C₂ selectivity in CH₄ conversion—was experimentally validated. In such cases, we consider the computational approach to be sufficiently effective for that system.

In addition, by using CatBench, a methodology we developed, we can systematically compare and validate the performance of a wide range of MLIP models. In work on nm-scale CeO₂–Al₂O₃ composite structures, we confirmed that multiscale modeling results were consistent with both atomic-scale DFT calculations and experimental outcomes, demonstrating that multiscale analysis can be reliable.

The standard for “sufficient accuracy” depends on the purpose of the research. In the early stage of materials exploration, we consider it sufficient if qualitative trends match experiments. In the stage of deeply understanding mechanisms, we apply more precise methods to improve accuracy.

For us, the most important point is whether computation can propose predictions that are experimentally verifiable—and whether those predictions actually agree with experimental results.

Machine-Learning Methods in CCEL: A Decade of Development

Q. Your lab had already incorporated machine learning before using Matlantis. How did that evolution unfold?

Prof. Han:
We began introducing data-science approaches into computational catalysis research around 2016. As a result, we have accumulated roughly a decade of methodological development.

In the first stage, we conducted research based on DFT calculations to clarify linear relationships in adsorption energies depending on catalyst metal type—so-called scaling relations. Outcomes from this period include studies on linear relationships for CO₂ activation/dissociation on bimetallic surfaces, and research predicting cation segregation in SOFC cathode materials to improve material stability.

Then, around 2017, we worked to translate those ideas into a more practical form by developing descriptors that allow rapid assessment of catalyst performance. One example is a study that proposed a simple indicator for CO oxidation on rare-earth-doped CeO₂ catalysts, linking computation and experiment.

Next, we expanded descriptor-based approaches to more diverse catalytic systems. We tested how far rational design could be applied across systems such as single-atom catalysts, catalysts for solid oxide electrolysis cells (SOEC), and M–N–C catalysts, and established the methodology’s effectiveness.

From around 2022, to address complex correlations that cannot be captured by linear relations alone, we began full-scale adoption of machine-learning models. Through large-scale screening of single-atom catalysts (SACs), catalyst design using genetic algorithms, reaction-activity prediction with neural networks, and structure optimization using active learning, we confirmed that machine learning can function effectively in practical catalyst design.

More recently, building on these experiences, we have entered a phase of organizing guidelines and criteria for how MLIPs should be used in heterogeneous catalysis research. Writing review papers on MLIP usage and developing CatBench, an MLIP benchmarking framework specialized for catalysis research, are part of that effort.

In other words, we did not adopt machine learning simply as a “new tool.” We have developed our methodology step by step—from understanding linear relations, to descriptor development, to building machine-learning models, and ultimately to systematizing MLIPs. That accumulation forms the foundation of our current research style.

(Image, from left) Wongyu Park, Jinuk Moon, Prof. Jeong Woo Han, Seokhyun Choung

Q. After progressing step by step through computation, experiment, and machine learning, what made you newly interested in Matlantis?

Prof. Han:
From around 2021, MLIPs began to rise in earnest, and so-called universal models started to emerge. We ourselves felt that MLIPs were entering the next important phase.

In that context, we first became interested after hearing—through members of the Matlantis research team—that they were developing advanced MLIPs while collaborating with world-leading researchers such as Professor Ju Li at MIT. As a research trend shared by colleagues we trust within the community, it naturally drew our attention.

After that, starting in 2022, we began using Matlantis and benchmarking it against other MLIPs. Through that process, we came to feel that Matlantis was not optimized for a specific system, but rather captured general chemical relationships (“general chemistry”) well.

Q. During evaluation and consideration, what were the key points that led you to decide, “Let’s use it”?

Prof. Han:
At the time, many MLIPs were developed in ways specialized for specific domains. For example, in catalysis there was the Open Catalyst Project (OCP); in materials science there were models such as SevenNet, M3GNet, CHGNet, and NequIP; and in chemistry there were models such as MACE trained on datasets like QM9 and the ANI family.

What was distinctive about Matlantis was that it aimed to capture more general chemical relationships by learning from data across multiple domains, rather than being restricted to one domain. This was an important point for our research style.

When we ran several internal benchmarks, we confirmed that Matlantis showed sufficiently reasonable accuracy—at least as an initial evaluation—for systems we frequently handle, such as SACs, SOFC-related systems, and heterogeneous catalysis problems.

Based on these results, we judged that Matlantis, as a universal model, is suitable for early-stage research—such as forming hypotheses and narrowing down candidate sets. Therefore, we encouraged multiple students in our group to start their calculations using Matlantis.

The Impact of Speed and Scale on Research: Evolution and Emerging Challenges

Q. After adopting Matlantis, what changes did you observe from the perspectives of computation and experiment?

Prof. Han:
Our fundamental research methodology did not change drastically, but there were a few meaningful changes.

First, the feedback loop between computation and experiment became faster. Before waiting for DFT results, we can use MLIPs to check trends in advance, enabling us to determine research directions at an earlier stage.

Second, the scale of simulations moved somewhat closer to experimental reality. We cannot yet discuss everything at the exact same scale as experiments, but simulations involving thousands of atoms became possible, allowing us to capture catalytic phenomena from a more macroscopic viewpoint than before.

Q. Among those changes, what felt most impactful?

Prof. Han:
We saw growth in both simulation scale and speed. Compared with before, we can step into large-scale simulations with greater confidence.

At the same time, a new challenge has emerged: the verification burden. Even today, there remains global caution toward adopting MLIPs universally. In that environment, our group has continued the discussion on how MLIPs should be selected and used safely by developing benchmarking frameworks such as CatBench and contributing review and perspective papers.

After moving to Seoul National University, we gained a stronger environment for comparative studies with domestically developed large models such as SevenNet, and we feel we can develop our research further. We believe it is important not to depend only on Matlantis, but to identify optimal methodologies by comparing and validating multiple models.

Next, we would like to ask a student in the lab. 

Q. While using Matlantis in your research, what felt “helpful” or “different” compared with before?

Seokhyun Choung:
The biggest differences were speed and scale. Previously, even building a model and running convergence tests for DFT took a significant amount of time. With Matlantis, we can quickly conduct initial structure exploration and stability evaluation, which allows us to invest more time into the core research. We can also directly simulate complex structures found in real catalysts—such as nanoparticles, interfaces, and defects—using models containing thousands of atoms, making comparisons with experimental results much easier.

As AI-based simulation techniques draw global attention, experience in catalysis research using Matlantis also helps expand our skill set as researchers. However, we cannot blindly trust MLIP results, so verification is always necessary, and this can become an additional burden.

A positive point is that Matlantis has shown visible accuracy improvements through version upgrades. At the same time, other models such as FairChem (formerly OCP), MACE, and SevenNet are also advancing, and from a researcher’s perspective, it is a very interesting situation in which more options are becoming available.

Choosing Tools by Research Stage — The Division of Roles Among Matlantis, DFT, and MLIPs

Q. How do you currently distinguish Matlantis from DFT and other machine-learning methods?

Prof. Han:
Recently, we have come to clearly separate tools based on the stage of the research.

We mainly use Matlantis in the early stage. We use it to rapidly explore catalyst compositions, surface structures, and dopant effects, and to narrow down promising candidate sets. It is especially effective when considering new elemental combinations or when we want to understand dynamic behavior through molecular dynamics simulations.

On the other hand, we use DFT in the verification stage. For candidates screened with Matlantis, we evaluate adsorption energies, search for transition states, and analyze electronic structures. Key energy values that we report in papers are always computed with DFT. DFT is also indispensable when fine-tuning MLIP models tailored to specific systems using DFT-generated data.

However, Matlantis is a closed-source model, so it cannot be fine-tuned. For that reason, depending on the case, we may fine-tune open-source models such as FairChem, MACE, and SevenNet, or use multiple models in an ensemble-like manner. When extremely high accuracy is required for a particular catalytic system, a model trained in-house may be more suitable.
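The two-stage division of labor described above—broad MLIP screening followed by DFT verification of the shortlist—can be sketched in a few lines. The functions and energy values below are hypothetical stand-ins; in practice each stage would attach an ASE-style calculator for the MLIP (e.g. Matlantis) or the DFT code:

```python
# Sketch of a screen-then-verify workflow with hypothetical stand-in functions.

def mlip_energy(candidate):
    # Fast, approximate adsorption energy (eV) from a universal MLIP (illustrative values).
    return {"cat_A": -0.9, "cat_B": -0.3, "cat_C": -1.4, "cat_D": -0.1}[candidate]

def dft_energy(candidate):
    # Expensive, high-accuracy DFT value, computed only for short-listed candidates.
    return {"cat_A": -0.85, "cat_C": -1.32}[candidate]

candidates = ["cat_A", "cat_B", "cat_C", "cat_D"]

# Stage 1: broad MLIP screening -- keep only sufficiently strong binders.
shortlist = [c for c in candidates if mlip_energy(c) < -0.5]

# Stage 2: DFT verification of the shortlist; these are the values reported in papers.
verified = {c: dft_energy(c) for c in shortlist}
print(verified)  # {'cat_A': -0.85, 'cat_C': -1.32}
```

The design point is simply that the expensive method is never run on the full candidate pool, only on the subset the fast method has already filtered.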

Q. Behind that division of roles, you likely have criteria such as “this method is strong here” or “another method is better here.” What factors do you consider?

Prof. Han:
We try to clarify the strengths and limitations of each method as much as possible, and then proceed without leaning too heavily toward any one approach. Accordingly, we choose tools based on what we want to obtain: what quantity we need, how much confidence is required, and what scale we want to handle.

We see Matlantis’ greatest strength as universality. Without generating additional training data, it can be applied relatively quickly to a wide range of elemental combinations across the periodic table, and it captures general chemical relationships well. Another important advantage is that it is much faster than DFT, enabling large-scale screening within realistic time frames.

On the other hand, because it is a closed-source model, it is currently difficult to utilize latent-space information or fine-tune the model for specific systems. From a researcher’s perspective, this is also an area where we hope to see future functional expansion.

DFT’s strengths are reliability and electronic-structure information. The ability to directly obtain electronic-level information—such as charge distribution, bonding states, and energy barriers—is essential for mechanism understanding and publishing. However, because computational cost is high, DFT has limitations for large systems and long-timescale dynamics simulations.

Open-source MLIPs (FairChem, MACE, SevenNet, etc.) are strong in flexibility. In addition to fine-tuning with in-house data, researchers can understand and modify model structures, or quantify uncertainty by using ensembles of multiple models. On the other hand, because training data and architectures differ across models, their applicability and achievable accuracy are not uniform.

For these reasons, our group uses benchmarking frameworks such as CatBench to compare and validate multiple models, and then selects the optimal tools depending on the research objective.
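The ensemble idea mentioned above can be sketched as follows: predict the same quantity with several models and use the spread across models as an uncertainty flag. The model names, energies, and threshold below are purely illustrative assumptions, not CCEL’s actual settings:

```python
from statistics import mean, stdev

# Hypothetical adsorption-energy predictions (eV) for one structure,
# from three different open-source MLIPs (illustrative values only).
predictions = {"fairchem": -1.21, "mace": -1.18, "sevennet": -1.40}

values = list(predictions.values())
e_mean, e_std = mean(values), stdev(values)

# A large spread signals disagreement between models: fall back to DFT there.
NEEDS_DFT_THRESHOLD = 0.10  # eV, an illustrative cutoff
needs_dft = e_std > NEEDS_DFT_THRESHOLD
print(f"mean={e_mean:.3f} eV, std={e_std:.3f} eV, verify with DFT: {needs_dft}")
```

Used over a whole screening set, this kind of check concentrates the DFT budget on exactly the structures where the models disagree.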

Q. Finally, what would you like to achieve through Matlantis going forward?

Prof. Han:
We want to advance our research by combining Matlantis with a variety of machine-learning models, rather than relying on a single approach.

First, we want to comprehensively utilize the learned latent embeddings of multiple MLIP models. The representations each model learns for atomic environments contain valuable chemical information. If we can extract and combine that information and apply it to catalyst activity prediction, structure exploration, and reaction-path analysis, we could achieve both acceleration and improved precision.

Second, we want to further expand the simulation scale. Simulations involving thousands of atoms are now possible, but fully reproducing complex catalytic phenomena—such as sintering, carbon deposition, and phase transitions—requires larger-scale and longer-timescale simulations.

Third, we want to build an automated catalyst discovery pipeline. Through an automated workflow that spans large-scale screening with MLIPs, selection of optimal candidates, high-precision DFT calculations, and experimental validation, we aim to dramatically shorten the catalyst development cycle.

Fourth, we want to understand dynamic catalytic phenomena under operando conditions. By using molecular dynamics simulations to elucidate structural rearrangements of catalyst surfaces under real reaction conditions, the dynamic formation and disappearance of active sites, and deactivation mechanisms, we want to understand experimental results at the atomic level—especially those that are difficult to explain with conventional static calculations.


Seoul National University/CCEL

For more information, please visit the following website:

Department of Materials Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea
