August 5, 2025
5 min read
Brian Buntz
LLNL Deploys AI Agents for Fusion Target Design on Supercomputers
LLNL's MADA system uses AI agents on the El Capitan supercomputer to automate and accelerate inertial confinement fusion target design.
Lawrence Livermore National Laboratory (LLNL) has deployed the Multi-Agent Design Assistant (MADA) to accelerate inertial confinement fusion (ICF) target design. The system integrates large language models (LLMs) with MARBL, LLNL’s 3D multiphysics simulation code, to automate the generation of complex simulation decks. Researchers run MADA on the El Capitan supercomputer, one of the world’s fastest with a peak performance of 2.79 exaFLOPS, and on its smaller counterpart, Tuolumne.

MADA employs two AI agents working together: an "Inverse Design Agent" that converts hand-drawn capsule diagrams into thousands of simulation scenarios, and a "Job Management Agent" that schedules and manages those simulations across high-performance computing (HPC) resources.

The project, which began development in 2019, was led by LLNL physicist Jon Belof. Initially, the team explored combining AI with shockwave physics, a concept once considered unconventional. In a recent demonstration, an open-source LLM fine-tuned on MARBL documentation interpreted a hand-drawn capsule design and a natural-language request from a human designer, then generated a complete simulation deck and executed thousands of simulations exploring variations in ICF capsule geometry.

Belof emphasizes that AI agents can drastically compress design cycles, enabling researchers to explore hundreds or thousands of design concepts in parallel instead of just a few.

"Rather than the human running ensembles of simulations, they will be able to run ensembles of ideas." — Jon Belof

Following LLNL’s December 2022 ignition milestone, the laboratory is now focused on developing a robust high-gain fusion platform for national security applications.
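The article does not show a MARBL deck, but the fan-out it describes, where one parameterized design becomes thousands of deck variants, can be sketched in a few lines of Python. In this illustrative snippet the template keys, parameter names, and value ranges are all assumptions, not MARBL syntax:

```python
import itertools
from pathlib import Path

# Illustrative stand-in for a simulation deck template; real MARBL decks
# have their own format, which the article does not describe.
DECK_TEMPLATE = """\
capsule_outer_radius = {outer_radius}  # cm
ablator_thickness    = {ablator}       # cm
ice_thickness        = {ice}           # cm
"""

# Hypothetical geometry ranges an Inverse Design Agent might propose
# after interpreting a hand-drawn capsule diagram.
sweep = {
    "outer_radius": [0.100, 0.105, 0.110, 0.115],
    "ablator":      [0.008, 0.009, 0.010],
    "ice":          [0.005, 0.006, 0.007],
}

out_dir = Path("decks")
out_dir.mkdir(exist_ok=True)

# Cartesian product of the ranges: every combination becomes one deck file.
for i, values in enumerate(itertools.product(*sweep.values())):
    params = dict(zip(sweep.keys(), values))
    (out_dir / f"deck_{i:05d}.inp").write_text(DECK_TEMPLATE.format(**params))
```

With a few more values per axis, the Cartesian product quickly reaches the thousands of variants the demonstration describes.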
Two Agents Working in Concert
The MADA framework is composed of two primary AI agents:

- Inverse Design Agent: Generates design concepts by translating hand-drawn diagrams into simulation inputs.
- Job Management Agent: Oversees the execution of large-scale simulation workflows, coordinating with the Flux scheduler and workflow tools like Merlin.

Giselle Fernandez, the Job Management Agent team lead, explains, "The Job Management Agent brings AI and HPC together to coordinate agents that handle resource management and workflow optimization at massive scales."

The team has already achieved promising results, running tens of thousands of ICF simulations on the Tuolumne supercomputer. The simulation outputs train a machine learning model called PROFESSOR, which provides instant feedback to designers. According to Belof, "Once trained, the PROFESSOR model generates implosion time histories—radius as a function of time—that change instantaneously when the human designer modifies the input geometry."

Funding for MADA comes from the National Nuclear Security Administration’s Advanced Simulation & Computing program. The project team includes LLNL researchers Charles Jekel, Rob Rieben, Will Schill, Meir Shachar, and Dane Sterbentz, along with collaborators Nathan Brown of Sandia National Laboratories and Ismael Djibrilla Boureima of Los Alamos National Laboratory.

Belof highlights the novelty of the system: "We are putting AI in the driver’s seat of a supercomputer, which is something that has never been done before."
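Flux, the scheduler named above, is LLNL’s open-source resource manager, and it ships Python bindings that an agent can drive programmatically. Below is a minimal sketch of submitting an ensemble through those bindings on a machine running Flux; the `marbl` executable name, deck paths, and task counts are placeholder assumptions, not the lab’s actual invocation:

```python
import os
import flux
from flux.job import JobspecV1

# Connect to the enclosing Flux instance (requires a system running Flux).
handle = flux.Flux()

jobids = []
for i in range(10):
    # "marbl" and the deck paths are illustrative placeholders.
    spec = JobspecV1.from_command(
        command=["marbl", f"decks/deck_{i:05d}.inp"],
        num_tasks=64,        # MPI ranks per simulation (assumed)
        cores_per_task=1,
    )
    spec.cwd = os.getcwd()
    spec.environment = dict(os.environ)
    jobids.append(flux.job.submit(handle, spec, waitable=True))

# Block until each simulation reaches a terminal state and report status.
for jobid in jobids:
    status = flux.job.wait(handle, jobid)
    print(jobid, "ok" if status.success else status.errstr)
```

As the article notes, MADA pairs Flux with workflow tools like Merlin, which layer ensemble bookkeeping on top of this kind of submission so an agent can describe a study rather than manage individual jobs.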
Federal Agencies Test Agentic AI—With Mixed Results
LLNL’s advancements come amid broader government experimentation with agentic AI systems. In June 2025, the FDA introduced "Elsa," an AI assistant built on Anthropic’s Claude. Elsa has received mixed feedback: some staff report it helps parse test reports and reduces review times, while others note its April 2024 knowledge cutoff and occasional outdated responses. Elsa carries a disclaimer advising users to verify its answers because it can make mistakes.

Meanwhile, NASA’s Goddard Space Flight Center is developing "text to structures" and "text to spaceship" projects, which let scientists describe spacecraft designs in natural language and receive AI-generated lightweight, efficient structures. Omar Hatamleh, Goddard’s chief AI officer, notes the center is experimenting with agentic workflows to streamline procurement and finance by automating routine tasks.

A January 2025 Nextgov/FCW column predicted AI agents would automate back-office tasks and optimize workflows in government, freeing staff for higher-value work. It highlighted the U.S. Patent and Trademark Office’s AI-assisted search system as a successful example of AI improving efficiency and accuracy.

Industry Hype Meets Organizational Reality
Consulting firms are promoting agentic AI’s economic potential. Capgemini’s July 2025 report estimates up to $450 billion in revenue growth and cost savings by 2028. However, only 2% of organizations have deployed AI agents at scale, and trust in fully autonomous agents dropped from 43% to 27% in one year. The report stresses the need to redesign processes, rethink business models, and balance autonomy with human oversight.

Similarly, McKinsey’s January 2025 report "Superagency" describes agentic AI’s current ability to autonomously handle customer interactions, payments, and fraud checks. Despite software vendors embedding agentic features into their platforms to create "digital workforces," only 1% of companies consider themselves mature in AI deployment.

Originally published at Research & Development World on August 4, 2025.