AI "sidesteps" scientific method

Not really, but still a breakthrough


There’s an article on TheNewStack.io titled “Machine Learning Algorithm Sidesteps the Scientific Method”. While I do consider the title clickbait, I still think there’s some truth behind it worth discussing.

How does it sidestep the scientific method?

First of all, you might wonder why they say that a certain ML algorithm sidesteps the scientific method, and whether that’s even possible at all. They explain that the algorithm learned to predict planetary motions based only on observational data and field theory, without any prior knowledge or input about the laws governing motion.

“It is worthwhile to emphasize that the serving and learning algorithms do not know, learn, or use Newton’s laws of motion and universal gravitation,” wrote the team. “The discrete field theory directly connects the observational data and new predictions. Newton’s laws are not needed.”
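To make that idea concrete, here’s a toy sketch of prediction from data alone. This is my own illustration, not the team’s discrete field theory: it just fits a linear next-step predictor to observed positions, and Newton’s laws appear nowhere in the code.

```python
# Toy illustration only (not the paper's discrete field theory): predict the next
# position of an orbiting body using nothing but past observations. No Newtonian
# mechanics appears anywhere below; the synthetic orbit stands in for real data.
import numpy as np

# "Observations": positions of a body on a circular orbit, sampled over time.
t = np.linspace(0, 4 * np.pi, 400)
orbit = np.stack([np.cos(t), np.sin(t)], axis=1)      # shape (400, 2)

# Training pairs: two consecutive positions -> the position that followed them.
X = np.hstack([orbit[:-2], orbit[1:-1]])              # (x_{k-1}, x_k)
y = orbit[2:]                                         # x_{k+1}
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)        # fit a linear next-step rule

# Roll the learned rule forward to extrapolate beyond the observed data.
prev, curr = orbit[-2], orbit[-1]
for _ in range(5):
    nxt = np.hstack([prev, curr]) @ coeffs
    print(nxt)                                        # predicted future positions
    prev, curr = curr, nxt
```

The learned rule keeps extrapolating the orbit without ever “knowing” about gravity. That’s the spirit of the result, even if the real algorithm is far more sophisticated.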

Now, they say this algorithm escapes the limits of the scientific method because it does what scientists did in the old days: they looked at data, predicted new data, verified the predictions were true, and came away with new scientific knowledge. No need to study other theories, no need to incorporate existing knowledge.

“Sidestepping”… that’s still a bit of a stretch. After all, the algorithm rests on field theory, which we assume to be true. It also assumes the observed data to be accurate. All of these assumptions are part of the scientific method, there precisely so that experiments can be replicated and verified. After all, being right just once is valuable, but not enough to be called “science”, is it? The iterative nature of machine learning models is the hypothesis-verification loop all over again.
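To spell out that analogy with a minimal sketch (assuming only numpy, with names of my own choosing): each candidate model plays the role of a hypothesis, and checking it against held-out observations plays the role of verification.

```python
# Hypothesis-verification loop, ML style: propose candidate models, verify each one
# against observations it has never seen, and keep the one that survives best.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
observed = np.sin(t) + 0.05 * rng.normal(size=t.size)    # noisy "experimental" data

train_t, train_y = t[:150], observed[:150]               # data used to form hypotheses
test_t, test_y = t[150:], observed[150:]                 # data reserved for verification

best_degree, best_error = None, np.inf
for degree in range(1, 8):                               # each degree is one hypothesis
    hypothesis = np.polynomial.Polynomial.fit(train_t, train_y, degree)
    error = np.mean((hypothesis(test_t) - test_y) ** 2)  # verification step
    if error < best_error:
        best_degree, best_error = degree, error

print(f"surviving hypothesis: degree {best_degree}, held-out error {best_error:.4f}")
```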

In short, what this team built is an old-school Galileo that runs experiments on its own and makes predictions that make sense. Which is still a great feat!

Why is this news at all?

Being able to predict without relying on existing knowledge means the predictions can turn out right or wrong independently of that knowledge. There’s a very important point here: science learns when its predictions turn out wrong, not when they turn out true. Being able to ignore most of what we know is a great way to revalidate (or reject) our current assumptions.

There will come a point where these predictions are consistently right even when our current scientific knowledge says they shouldn’t be, and we will be able to learn about the universe from what a computer found out. As such, I agree that this is a Big Deal™.

Also, the idea of sidestepping existing knowledge is a very powerful one. It means there’s a lot less bootstrapping required to get these models running. Assuming that computing power and storage become cheaper and more accessible over time, these three things will make scientific discovery available to anyone who can feed data to computers and have them compute predictions.

This alone could bring us back into the golden age of science, with discoveries being made all of the time and advancing our knowledge of the world. Still, I’m getting ahead of myself.

Philosophical changes

Shareable machine models that work the predictions out for us are a technological version of the formulas science has been creating for years. In other words, I think having “executable” hypotheses that are easy to create and distribute might be the equivalent of when we started sharing written stories instead of telling them.
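As a rough sketch of what distributing an “executable” hypothesis could look like (the file name and weights below are made up for illustration), a fitted model like the toy predictor earlier in this post boils down to a small file that anyone can load and run:

```python
# Illustrative only: serialize a fitted predictor so it can be shared and executed
# elsewhere, much like passing along a formula. The weights here are placeholders.
import pickle
import numpy as np

coeffs = np.array([[-1.0, 0.0],                        # stand-in for fitted coefficients,
                   [0.0, -1.0],                        # shaped like the toy predictor above
                   [1.99, 0.0],
                   [0.0, 1.99]])

with open("orbit_predictor.pkl", "wb") as f:           # the file you would distribute
    pickle.dump(coeffs, f)

with open("orbit_predictor.pkl", "rb") as f:           # anyone can load it...
    shared_model = pickle.load(f)

prev, curr = np.array([1.0, 0.0]), np.array([0.995, 0.0998])
print(np.hstack([prev, curr]) @ shared_model)          # ...and run the "hypothesis"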

There are still a few differences: formulas and old-school models can be passed down on paper, while machine models require digital storage (which is almost ubiquitous today). Models are also a lot more difficult to interpret and make sense of than formulas. This hinders the kind of theoretical advances that gave such a big push to physics: the ability to play around with mathematical concepts without having to experiment directly.

This is a huge hindrance: the ability to test purely with imagination helped physics (and other fields) flourish without the need for direct experimentation. However, when experimentation and verification were not possible, we saw the rise of multiple competing interpretations of the world, theories upon theories of how the universe worked, without any easy way to discard the wrong ones. Machine learning models are at a disadvantage here: they require data to work and data to provide value. Gathering data means going out into the real world and experimenting. We don’t build hadron colliders every day.

Still, “trying” and “predicting” with imagined (fake) data could also work. In that case these machine learning models would not really bring anything new to the mix: we might as well have a bunch of scientists spot patterns and come up with predictions until we find a new piece of knowledge. But that work is arduous and slow, which is why leaving it to machines is a good investment.

As such, I believe that the sciences (not only physics) will see a growing disconnect between their experimental and theoretical branches, as machine learning accelerates the rate of model predictions faster than theoretical interpretation can catch up.

However, this advance will help us pass down knowledge in the form of algorithms that depend only on data, not on prior background. That is, we might be able to see farther without standing on the shoulders of giants.