Large language models have already transformed software engineering, for better or worse. Now, so-called large physics models are also starting to transform design engineering. These tools are beginning to replace—or at least, amend—the role of full-fledged physics simulation in the automotive and aerospace industries, semiconductor engineering, and more.
Before the advent of computer simulation, a car manufacturer, for example, would create physical prototypes to test its designs, says Thomas von Tschammer, managing director at physics-based AI company Neural Concept. “For the past 40 years, we reduced a lot of the need for prototypes by using numerical simulations for aerodynamics, for crash testing, and so on.” Now, von Tschammer explains, AI is drastically reducing the need for simulation, the same way simulation reduced the need for physical prototypes.
Growing adoption of this type of AI was a topic of interest at Nvidia GTC in March. Chris Johnston, senior technical specialist at Jaguar Land Rover, presented how his company is using Neural Concept’s technology. Also at GTC, PhysicsX, another physics-based AI company, announced a collaboration with Nvidia to advance open standards for such models.
The AI design engineering workflow
Over the past six months, General Motors (GM) has introduced large physics models into its car design process to speed up the workflow.
Previously, a creative design engineer would develop a 3D model of a new car concept. This model would be sent to aerodynamics specialists, who would run physics simulations to determine the proposed car’s coefficient of drag, an important metric for the vehicle’s energy efficiency. This simulation phase would take about two weeks, and the aerodynamics engineer would then report the drag coefficient back to the creative designer, possibly with suggested modifications.
Now, GM has trained an in-house large physics model on those simulation results. The AI takes in a 3D car model and outputs a coefficient of drag in a matter of minutes. “We have experts in the aerodynamics and the creative studio now who can sit together and iterate instantly to make decisions [about] our future products,” says Rene Strauss, director of virtual integration engineering at GM.
For GM and other companies, running inference on an AI model trained on physics simulations, instead of running the simulation itself, can bring immense time savings. “Depending on the kinds of physics [being simulated], or the resolution, it can be anywhere between 10,000 to close to a million times faster,” says Jacomo Corbo, CEO and co-founder of PhysicsX.
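To make that shift concrete, here is a minimal sketch of what such a drag-coefficient surrogate could look like: a small point-cloud network, written in PyTorch, that maps a sampled 3D car surface to a single scalar and would be trained against drag values from past simulations. The architecture, sizes, and data are illustrative assumptions, not GM’s or any vendor’s actual model.

```python
# Illustrative only: a tiny PointNet-style surrogate for drag coefficient,
# not GM's actual model. It maps a 3D surface point cloud to one scalar and
# would be trained on drag values computed by past CFD simulations.
import torch
import torch.nn as nn

class DragSurrogate(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Per-point features, then an order-invariant max-pool over the cloud.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted coefficient of drag
        )

    def forward(self, points):            # points: (batch, n_points, 3)
        feats = self.point_mlp(points)    # (batch, n_points, hidden)
        pooled = feats.max(dim=1).values  # global shape descriptor
        return self.head(pooled).squeeze(-1)

model = DragSurrogate()
car_surface = torch.rand(1, 4096, 3)   # stand-in for points sampled from a car mesh
predicted_cd = model(car_surface)      # a forward pass takes milliseconds, not weeks
```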
How accurate are large physics models?
But, what about accuracy? For GM’s purposes, Strauss says accuracy is not a huge concern at the design stage because finer details are ironed out later in the process. “When it really starts to matter is when we’re getting close to launching a vehicle, and the coefficient of drag is going to be used for our energy calculation, which eventually goes to the certification of our miles per gallon on the sticker.” At that stage, Strauss says, a physical model of the car will be put into a wind tunnel for an exact number.
PhysicsX’s Corbo argues that, with the right data, an AI model’s accuracy can surpass that of the simulations it’s trained on. The trick is to incorporate experimental measurements to fine-tune the model. If a physics simulation doesn’t agree exactly with experimental data, it is often difficult to figure out why and to tweak the simulation until they agree. With AI, incorporating a few experimental examples into the training process is much more straightforward, and it’s not necessary to understand where exactly the simulation went wrong.
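As a rough illustration of that fine-tuning step, the sketch below continues the hypothetical surrogate from earlier: a model already trained on simulation data is nudged, with a small learning rate, toward a handful of wind-tunnel measurements. The measurements here are placeholders, not real data.

```python
# Illustrative only: fine-tune a simulation-trained surrogate on a few
# experimental measurements. Reuses the DragSurrogate `model` defined above;
# all data below are placeholders.
import torch

tunnel_points = torch.rand(8, 4096, 3)   # 8 measured geometries (placeholder)
tunnel_cd = 0.25 + 0.05 * torch.rand(8)  # their measured drag coefficients (placeholder)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # small step size: nudge, don't retrain
loss_fn = torch.nn.MSELoss()

for epoch in range(50):
    pred = model(tunnel_points)          # surrogate predictions
    loss = loss_fn(pred, tunnel_cd)      # mismatch against wind-tunnel data
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the small learning rate is to pull the model toward the experimental measurements without erasing what it learned from the much larger body of simulation data.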
All in all, by drastically bringing down the time it takes to model the physics, large physics models enable engineers to explore a much greater range of possibilities before a final design is reached.
Training large physics models
There is no one-size-fits-all approach to training large physics models. Depending on the types of data available, and the physics in question, the models may use the transformer architecture that underlies LLMs, a generalized version of convolutional neural networks known as geometric deep learning, or neural operators, an architecture designed to learn solutions of partial differential equations.
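For a flavor of the neural-operator family, here is a minimal sketch of a single spectral-convolution layer in the style of a Fourier neural operator, which learns a mapping between functions by applying learned weights to the low-frequency Fourier modes of its input. The channel counts and grid size are arbitrary; this is a textbook building block, not any company’s production architecture.

```python
# Illustrative only: one 1D spectral-convolution layer, Fourier-neural-operator
# style. Weights act on the lowest Fourier modes, so the layer is defined on
# functions rather than on a fixed grid resolution.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes  # number of low-frequency modes to keep
        scale = 1.0 / (channels * channels)
        self.weights = nn.Parameter(
            scale * torch.rand(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):                 # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)          # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights
        )                                 # mix channels, mode by mode
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space

layer = SpectralConv1d(channels=16, modes=12)
field = torch.rand(1, 16, 256)   # e.g. a discretized 1D physical field
out = layer(field)               # same shape; weights are resolution-independent
```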
Currently, most companies are training their own models on their simulation data, catering to specific use cases. In GM’s aerodynamics implementation, there are different AI models for different types of cars: think SUVs versus sedans. But PhysicsX’s Corbo says his team is working on building more “foundational” physics models that can be applied across different scenarios.
Both LLMs and robotics models have benefited from scaling laws, which describe how a system improves as models increase in size or are trained on more data. In AI, models tend to improve quickly, in a nonlinear way. Along the way, the models also become more generalizable: extending them to new settings takes less and less fine-tuning to reach the same accuracy. Corbo says his team is now starting to see the same types of scaling laws for large physics models.
“What we’re seeing here is maybe a little bit unsurprising,” Corbo says, “but it’s also pretty incredible. And it’s given us the confidence to make these models bigger, because they perform a whole lot better, and they cover broader domains, and they have these really amazing emergent properties.”
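In practice, a scaling law is characterized by fitting a power law to error versus model size (or dataset size) and checking that the fit keeps holding as the models grow. The numbers below are made up for illustration, not measurements from PhysicsX or anyone else.

```python
# Illustrative only: fit a power law, error = a * N**(-b), to validation error
# versus parameter count N. All numbers are invented for the example.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b):
    return a * n ** (-b)

model_sizes = np.array([1e6, 3e6, 1e7, 3e7, 1e8])         # parameter counts (illustrative)
val_error   = np.array([0.20, 0.15, 0.11, 0.082, 0.061])  # surrogate error (illustrative)

(a, b), _ = curve_fit(power_law, model_sizes, val_error, p0=(1.0, 0.1))
print(f"fitted exponent b = {b:.3f}")                      # larger b means faster gains from scale
print(f"extrapolated error at 1e9 params: {power_law(1e9, a, b):.3f}")
```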
Open standards for the data formats used in training, as well as for the model architectures, should make it easier to build these more powerful foundational models. That’s the goal of PhysicsX’s collaboration with Nvidia, and of Nvidia’s PhysicsNeMo open-source platform.
“The thing that we’re collaborating on is being able to compose architectures from building blocks,” Corbo says, making it easy for those in both academia and industry to re-use and build upon existing models.

The long-term role of simulations and engineers
While some are working on developing more powerful models, others are pushing to integrate what’s already available into existing workflows, which is no easy task. “With any innovation, it’s not a straight line. There’s some steps forward and then some steps back and improvements that we find along the way. But that’s part of the joy of the innovation process and using new tools like this,” GM’s Strauss says.
This technology is still in the early stages, and it’s unclear what the final role of AI tools will be in the engineering workflow. For one, opinions vary on whether AI will replace simulations completely, or just reduce their use.
“We will never fully replace simulations,” Neural Concept’s von Tschammer says. “But the idea is to make a much smarter usage of simulation at the most major phase of developments, and you use AI to speed up the early design stages, where you need to explore a very wide set of options.”
PhysicsX’s Corbo begs to differ. “The whole idea is to take numerical simulation … out of the workflow,” he says, “and to move that to inference.”
Whatever the role of simulation will be, everyone in the field is adamant that human design engineers will continue to be in the driver’s seat, enabled to do their best work by these newfangled tools. (After all, when has AI ever threatened to replace human labor?)
“What we’re seeing is that actually, these tools are empowering the engineers to be much more efficient,” von Tschammer says. “Before, these engineers would spend a lot of time on low added value tasks, whereas now these manual tasks from the past can be automated using these AI models, and the engineers can focus on taking the design decisions at the end of the day. We still need engineers more than ever.”
Dina Genkina is an associate editor at IEEE Spectrum focused on computing and hardware. She holds a PhD in atomic physics and lives in Brooklyn.



