The prospect of AI discovering fundamental laws of nature has moved a step closer to reality. A fascinating study from MIT explores whether AI can go beyond pattern recognition to actually discover physical principles on its own. The research reveals how specialized AI (not large language models) learned core concepts of classical mechanics from data alone, without prior knowledge of the equations.
Forget chatbots like ChatGPT for a moment. This research delves into specialized neural networks designed for scientific tasks. The core question explored in the paper, aptly titled “Do Two AI Scientists Agree?”, wasn’t just if AI could learn physics, but whether different AIs, trained independently on the same data, would arrive at the same fundamental theories.
Table of contents
- The Quest: Can Independent AIs Find the Same Truth?
- Meet MASS: An AI Built for Physics Discovery
- The Experiments: From Simple Springs to Complex Systems
- Do Different AI Scientists Agree? The Surprising Consensus
- “Without Prior Knowledge”? A Point of Nuance
- Why This Matters: AI as a Potential Discoverer
- The Future: AI as a Scientific Partner?
The Quest: Can Independent AIs Find the Same Truth?
Throughout history, human scientists have developed different, sometimes complementary, ways to describe the same phenomenon. Think of Newton’s laws versus the later formulations by Lagrange and Hamilton. These offer different perspectives and mathematical tools to analyze motion, energy, and forces.
The MIT researchers wanted to see if AI would behave similarly. If you train multiple “AI scientists” on the same observations of physical systems, will they converge on a single, unified theory? Or will they develop entirely different, yet equally valid, ways of explaining the data? This raised an exciting question in the realm of AI discovery: can machines uncover their own unique paths to scientific understanding?
Meet MASS: An AI Built for Physics Discovery
To explore this, the researchers developed a specific neural network architecture called MASS (Multiple AI Scalar Scientists). This isn’t your typical language model. MASS is inspired by fundamental physics principles, particularly the idea that many physical systems can be described by a single scalar quantity (like energy) from which the dynamics (how things move) can be derived.

Here’s the basic idea:
- Data Ingestion: MASS is fed observational data from various physical systems. Think of trajectories of pendulums, planets orbiting stars (the Kepler problem), or even specially designed “synthetic” physical scenarios.
- Hypothesis Formation: For each different physical system it sees, a part of MASS learns a specific scalar function (analogous to a potential or energy function) that describes that system.
- Theory Evaluation: A crucial final layer, shared across all systems, learns how to take derivatives (calculus operations) of these scalar functions to predict the system’s motion (like acceleration). This shared layer forces the AI to find a consistent theoretical framework.
- Refinement: The AI compares its predictions to the actual data, calculates the error, and adjusts its internal workings (both the scalar functions and the derivative rules) to become more accurate.
Critically, MASS was not pre-programmed with Newton’s Laws, Hamilton’s equations, or the Euler-Lagrange equations. It had to figure out the rules from the data itself.
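The learning loop described above can be sketched in miniature. The snippet below is a toy illustration, not the paper’s code: it learns the single parameter of a quadratic scalar function S(x) = w·x², so that the predicted acceleration −dS/dx matches observations from a simulated spring. All names and numbers here are invented for illustration.

```python
import numpy as np

# Toy sketch of the MASS idea (hypothetical, not the paper's implementation):
# learn a scalar function S(x) = w * x**2 such that the predicted
# acceleration -dS/dx matches observed accelerations from a spring.

rng = np.random.default_rng(0)
k, m = 3.0, 1.0                       # "true" spring constant and mass
x = rng.uniform(-1.0, 1.0, size=200)  # observed positions
acc = -(k / m) * x                    # observed accelerations (noise-free)

w = 0.1                               # parameter of the learned scalar "theory"
lr = 0.05
for _ in range(500):
    pred = -2.0 * w * x               # predicted acceleration: -dS/dx = -2*w*x
    # gradient of the mean-squared error with respect to w
    grad = np.mean(2.0 * (pred - acc) * (-2.0 * x))
    w -= lr * grad

# After training, 2*w should approximate the true k/m = 3.0
print(round(2.0 * w, 2))  # → 3.0
```

The real MASS uses neural networks and a shared derivative layer across many systems, but the principle is the same: the scalar function is refined until its derivatives reproduce the observed motion.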
The Experiments: From Simple Springs to Complex Systems
The researchers trained MASS starting with simple systems and gradually introducing more complex ones.
Finding #1: Simple Systems Lead to Hamiltonian-like Ideas
In a compelling example of AI discovery, when MASS was trained only on simple systems like the harmonic oscillator (a basic spring-mass system), it learned scalar functions that strongly resembled the Hamiltonian. In classical mechanics, the Hamiltonian (H) is typically the sum of kinetic energy (T, energy of motion) and potential energy (V, stored energy): H = T + V.
The AI didn’t learn a perfectly clean H = T + V instantly. Initially, it often used many complex mathematical terms. But as training progressed, it tended to simplify, converging on this Hamiltonian-like structure as the most effective description for these simple cases.
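For reference, the Hamiltonian picture the AI gravitated toward can be written down directly for the spring-mass case. This minimal sketch (constants chosen arbitrarily for illustration) integrates Hamilton’s equations, dx/dt = ∂H/∂p and dp/dt = −∂H/∂x, and checks that H stays essentially constant along the motion:

```python
# Hamilton's equations for the spring-mass system H = p**2/(2m) + k*x**2/2:
#   dx/dt = dH/dp = p/m,   dp/dt = -dH/dx = -k*x
m, k = 1.0, 4.0
x, p = 1.0, 0.0          # start stretched, at rest
dt, steps = 1e-4, 100_000

H0 = p**2 / (2 * m) + k * x**2 / 2
for _ in range(steps):
    # symplectic (leapfrog-style) update keeps the energy nearly constant
    p -= k * x * dt / 2
    x += p / m * dt
    p -= k * x * dt / 2

H1 = p**2 / (2 * m) + k * x**2 / 2
print(abs(H1 - H0) < 1e-6)  # → True: the Hamiltonian is conserved
```

Conservation of H is exactly the kind of structure that makes a scalar function a useful “theory” of the system, which may be why simple oscillator data nudged the AI toward it.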
Finding #2: Complexity Favors the Lagrangian Viewpoint
Things got really interesting when MASS was exposed to more complex physical systems, including relativistic scenarios and synthetic problems designed by the researchers. Here, the AI started to shift its preference.
Instead of consistently sticking to the Hamiltonian (T+V), it increasingly favored a description resembling the Lagrangian (L). The Lagrangian is typically the difference between kinetic and potential energy: L = T – V.

Why the switch? The Lagrangian formulation is often more versatile, especially when dealing with systems described in “generalized coordinates” (variables other than simple x, y, z positions). Since the AI was working in such coordinates, it appears to have independently discovered that the Lagrangian approach offered a more universal and accurate framework across a wider range of situations. This is a profound AI discovery: the system wasn’t told the Lagrangian was better; it inferred it from the data through its own analytical process.
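To see why the two formulations describe the same motion, here is a small numerical check (illustrative values only): applying the Euler-Lagrange equation d/dt(∂L/∂v) = ∂L/∂x to L = T − V for the spring-mass system recovers Newton’s a = −(k/m)x.

```python
# Euler-Lagrange check for L(x, v) = m*v**2/2 - k*x**2/2 (spring-mass case):
#   d/dt (dL/dv) = dL/dx  =>  m*a = -k*x  =>  a = -(k/m)*x

def lagrangian(x, v, m=1.0, k=4.0):
    return 0.5 * m * v**2 - 0.5 * k * x**2

def partial(f, arg, x, v, h=1e-6):
    # central finite difference w.r.t. x (arg=0) or v (arg=1)
    if arg == 0:
        return (f(x + h, v) - f(x - h, v)) / (2 * h)
    return (f(x, v + h) - f(x, v - h)) / (2 * h)

m, k = 1.0, 4.0
x, v = 0.7, -0.3
# dL/dv = m*v, so d/dt(dL/dv) = m*a; Euler-Lagrange then gives m*a = dL/dx
a = partial(lagrangian, 0, x, v) / m
print(round(a, 4))  # → -2.8, matching -(k/m)*x
```

The same acceleration falls out of the Hamiltonian route; the two scalars are just different bookkeeping for the same dynamics, which is why either can serve as the AI’s learned “theory.”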
Do Different AI Scientists Agree? The Surprising Consensus
What happened when multiple MASS instances (with different random starting points, simulating different “AI scientists”) were trained on the same complex datasets? Did they invent wildly different theories?
Mostly, no. The research showed a remarkable convergence. While the specific internal details (the exact mathematical terms or internal “weights”) might differ slightly between individual AI scientists, their underlying learned theory was largely the same.
Further analysis, using techniques like Principal Component Analysis (PCA) and constrained optimization, confirmed this. The core theoretical structure learned by different AIs was highly correlated. It strongly matches the Lagrangian formulation for the more complex and general cases.
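As a rough illustration of such a comparison (the paper’s actual analysis is more involved), one can represent each AI scientist’s learned theory as a vector of coefficients over candidate terms and measure agreement with cosine similarity. The coefficient values below are made up for illustration:

```python
import math

# Hypothetical agreement check: each "AI scientist" is summarized by a
# vector of coefficients over candidate terms, e.g. [v**2, x**2, x*v].

def cosine(u, w):
    dot = sum(a * b for a, b in zip(u, w))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in w))
    return dot / norm

# Two independent runs that both landed near the Lagrangian T - V,
# with slightly different internal weights (made-up numbers):
scientist_a = [0.51, -0.49, 0.01]
scientist_b = [0.48, -0.52, -0.02]
print(cosine(scientist_a, scientist_b) > 0.99)  # → True: theories agree
```

High similarity despite different random initializations is the sense in which the paper’s “AI scientists” can be said to agree.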
So, do two AI scientists agree? The paper’s answer is a qualified yes. They tend to converge on the same fundamental description, primarily the Lagrangian one, when tasked with explaining diverse physical phenomena.
“Without Prior Knowledge”? A Point of Nuance
Did the AI really learn this with no prior knowledge? Yes and no.
- No Explicit Equations: It’s true that the specific equations of Hamilton or Euler-Lagrange were not programmed into MASS. It derived the form of these theories from data.
- Structural Priors: However, the architecture of MASS itself contains implicit assumptions inspired by physics. The very idea of learning a scalar function and using its derivatives to predict dynamics comes from foundational concepts like the Principle of Stationary Action. This is a structural prior – building the AI in a way that’s conducive to finding physics-like solutions, without giving it the answers outright.
This is different from simply telling an AI “use F=ma”. It’s more like giving it the tools (calculus via neural network layers) and a framework that physicists themselves found powerful.
Why This Matters: AI as a Potential Discoverer
This research is significant for several reasons:
- AI Discovery: It demonstrates that AI can potentially go beyond pattern recognition and data analysis, and can rediscover (and perhaps one day discover) fundamental scientific laws.
- Validating Physics: It provides an independent, data-driven validation of why principles like the Hamiltonian and Lagrangian are so fundamental. Even an alien intelligence (the AI) finds them useful.
- Tailored Architectures: It highlights the power of designing AI architectures specifically inspired by the problems they are meant to solve, rather than relying solely on general-purpose models.
- Interpretability: While complex, the structure of MASS allows for more interpretation than some “black box” models, letting researchers probe what theory the AI has learned.
The Future: AI as a Scientific Partner?
The MIT study offers a compelling glimpse into a future where AI could act as a collaborator in scientific discovery. While MASS was tested on known physics, the approach could potentially be applied to complex datasets where the underlying laws are unknown.
Could AI help us unravel mysteries in fields like turbulence, neuroscience, or quantum gravity by finding new, concise mathematical descriptions hidden in the data? This research suggests it’s a possibility worth exploring.
The journey showed AI starting with Hamiltonian ideas for simple cases and evolving towards the more general Lagrangian framework as complexity grew – a learning trajectory mirroring aspects of physics history itself. It seems AI scientists, when properly equipped, don’t just learn; they seek fundamental, unifying principles. And remarkably, they tend to agree on what those principles are.