AI beyond LLMs

Large Language Models are trained on text. They understand the world by learning from our understanding of the world as it has been expressed in textual form. This way even a blind ChatGPT knows something about colors and their similarities without ever “seeing” a color per se.

Whenever something that ChatGPT generates makes sense, the sense-making was already present in the original text; through repetition the model “learned” to reproduce the patterns that make sense to us (most of the time, that is). But what if we trained a neural network not on text and language but on data about phenomena? What if that data were collected by observing the real world? What if we then “asked” the model to infer approximations of potential theorems that fit the data and thus describe in theory what happens in practice?

This is basically what Google DeepMind did with AlphaGo and AlphaFold: these models do not generate text, they generate the next move in a game of Go or predict how proteins fold into three-dimensional structures. But both are very narrow applications – given the specificity of each problem, the models had to be trained on huge but topically limited datasets.

AI for a new data-driven science

But what if scientists took the idea that GPT-like models can be trained on any kind of data and any kind of patterns, and used it to make new scientific discoveries? What if inferring theorems from real-world data is just another modality?

Miles Cranmer from the University of Cambridge seems to be working on exactly that. The results of the inference are not theorems but approximate mathematical expressions that fit the data – a technique known as symbolic regression.
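Cranmer maintains the open-source Python library PySR for exactly this. A minimal sketch of the workflow (the data-generating formula, the operator set, and the hyperparameters below are invented purely for illustration):

```python
import numpy as np
from pysr import PySRRegressor

# Simulated “observations”: noisy measurements of an unknown relationship.
# We secretly generate the data from y = 2.5*cos(x0) + x1**2; the
# regressor only ever sees the raw numbers, never the formula.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = 2.5 * np.cos(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 200)

# Search over symbolic expressions built from these operators.
model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["cos", "sin", "exp"],
)
model.fit(X, y)

# The output is a human-readable formula, not a black-box weight matrix.
print(model.sympy())  # ideally something close to 2.5*cos(x0) + x1**2
```

The point is the shape of the result: instead of millions of opaque weights you get an equation a scientist can read, test, and argue about.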

See for yourself:

Summary:
Miles Cranmer’s talk at the Simons Foundation, titled “The Next Great Scientific Theory is Hiding Inside a Neural Network,” discusses how neural networks can be used to uncover new scientific theories. Cranmer explains that traditional scientific methods involve humans creating hypotheses and testing them through experiments. However, neural networks can analyze vast amounts of data and detect patterns that might be too complex for humans to notice. By using neural networks, researchers can identify new physical laws and theories that govern the universe, potentially leading to groundbreaking discoveries in science.
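Read together with Cranmer’s published work on distilling neural networks into symbolic models, the title suggests a two-stage pipeline: first fit a flexible neural network to the observations, then extract a compact formula from the trained network. A minimal sketch of that idea, assuming PyTorch and PySR are installed (the “hidden law”, the network architecture, and all hyperparameters are made up for illustration):

```python
import torch
import torch.nn as nn
from pysr import PySRRegressor

# Stage 1: fit a black-box network to (simulated) observations.
# The hidden law y = x0*x1/x2**2 stands in for real measurement data.
torch.manual_seed(0)
X = torch.rand(2000, 3) * 2 + 0.5
y = X[:, 0] * X[:, 1] / X[:, 2] ** 2

net = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(3000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X).squeeze(-1), y)
    loss.backward()
    opt.step()

# Stage 2: the trained network now encodes the law implicitly. Query it
# on fresh inputs and ask a symbolic regressor for a compact formula
# that reproduces its behaviour.
X_probe = torch.rand(500, 3) * 2 + 0.5
with torch.no_grad():
    y_probe = net(X_probe).squeeze(-1).numpy()

sr = PySRRegressor(niterations=40, binary_operators=["+", "-", "*", "/"])
sr.fit(X_probe.numpy(), y_probe)
print(sr.sympy())  # ideally rediscovers something like x0*x1/x2**2
```

Nothing about stage two is specific to physics: any trained network whose behaviour can be sampled is a candidate for this kind of distillation.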

What does that mean?

The application of GPT-like models beyond language holds great promise for advancing technology and solving complex problems, opening up a wide range of possibilities for AI in fields like engineering, physics, robotics, … just to name a few.

