AI researchers claim technology behind ChatGPT can generate new insights

Google DeepMind’s AI researchers have achieved a breakthrough with a large language model (LLM), showing that the technology behind ChatGPT can push beyond the boundaries of existing human knowledge.
The result suggests that these models do more than rearrange the information they absorb during training: they can also generate genuinely new insights.
Pushmeet Kohli, DeepMind’s Head of AI for Science, said he was surprised by the model’s ability to produce genuinely new scientific results. The work marks a significant milestone: the first time a large language model has contributed to a genuine scientific discovery.
Large language models such as ChatGPT are powerful neural networks that learn the patterns of language from vast quantities of text and other data. Since its launch last year, ChatGPT has become popular for tasks such as debugging software and drafting content, but the prevailing view was that it could not create genuinely new knowledge and was prone to giving flawed answers.

DeepMind’s approach was to use an LLM to build “FunSearch,” short for “searching in the function space.” FunSearch has the LLM write computer programs to solve a problem, working in tandem with an automated “evaluator” that scores each program on how well it performs.
The best-performing programs are then combined and fed back to the LLM for refinement, so that weak initial programs are progressively evolved into powerful ones capable of uncovering genuinely new knowledge.
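
As a rough illustration of that loop, here is a minimal sketch in Python. The llm_propose and evaluate functions are hypothetical stand-ins for DeepMind’s code-generating model and problem-specific evaluator, not the actual FunSearch implementation.

```python
import random

def evaluate(program_source: str) -> float:
    """Hypothetical stand-in for FunSearch's evaluator.

    The real evaluator runs each candidate program on the target problem
    and scores the quality of its output; here we only check that the
    candidate compiles and prefer shorter programs.
    """
    try:
        compile(program_source, "<candidate>", "exec")
    except SyntaxError:
        return float("-inf")
    return -float(len(program_source))

def llm_propose(parents: list[str]) -> str:
    """Hypothetical stand-in for the code-writing LLM.

    FunSearch prompts a language model with the best programs found so far;
    here we simply append a random tweak to one parent.
    """
    return random.choice(parents) + f"\nx = {random.randint(0, 999)}"

def funsearch_loop(seed_program: str, rounds: int = 100, pool_size: int = 10) -> str:
    """Keep the best-scoring programs and feed them back to the LLM for refinement."""
    pool = [seed_program]
    for _ in range(rounds):
        pool.append(llm_propose(pool))
        pool = sorted(pool, key=evaluate, reverse=True)[:pool_size]
    return pool[0]  # highest-scoring program found

print(funsearch_loop("def solve():\n    return 0"))
```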

The challenges FunSearch tackled

DeepMind’s researchers put FunSearch to the test, unleashing it on two challenging puzzles.

In its first application, FunSearch wrote programs that constructed larger cap sets than the best previously found by mathematicians. The cap set problem, a long-standing and difficult question in mathematics, asks for the largest possible set of points in a space in which no three points lie on a straight line.
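
For concreteness, the cap set problem is usually posed over the finite space Z_3^n, where three distinct points lie on a line exactly when they sum to zero componentwise modulo 3. The sketch below illustrates that definition by checking whether a given set of points is a cap set; it is not DeepMind’s code.

```python
from itertools import combinations

def is_cap_set(points: list[tuple[int, ...]]) -> bool:
    """Check that no three distinct points in Z_3^n lie on a common line."""
    point_set = set(points)
    for x, y in combinations(point_set, 2):
        # The unique third point completing the line through x and y.
        z = tuple((-a - b) % 3 for a, b in zip(x, y))
        if z in point_set and z != x and z != y:
            return False
    return True

# In dimension n = 2 the largest cap set has four points, for example:
print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)]))  # True
```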
The second puzzle tackled by FunSearch was the bin packing problem, a classic mathematical task: fit items of varying sizes into as few containers as possible. It has practical applications in scenarios such as packing shipping containers or scheduling computing jobs in data centers. Traditional heuristics either place each item in the first bin it fits into or in the bin that would be left with the least spare space. FunSearch, however, evolved a more effective rule that avoids leaving small gaps that are unlikely ever to be filled. The results were published in Nature.
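
The two traditional heuristics mentioned above are commonly known as first fit and best fit; the sketch below shows both. FunSearch’s evolved heuristic is different: it scores bins so as to avoid leaving small, hard-to-fill gaps.

```python
def first_fit(items: list[int], capacity: int = 10) -> list[list[int]]:
    """Place each item into the first bin with enough remaining space."""
    bins: list[list[int]] = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])  # no bin fits: open a new one
    return bins

def best_fit(items: list[int], capacity: int = 10) -> list[list[int]]:
    """Place each item into the feasible bin that would be left with the least space."""
    bins: list[list[int]] = []
    for item in items:
        feasible = [b for b in bins if sum(b) + item <= capacity]
        if feasible:
            min(feasible, key=lambda b: capacity - (sum(b) + item)).append(item)
        else:
            bins.append([item])
    return bins

items = [6, 5, 4, 3, 2]
print(len(first_fit(items)), len(best_fit(items)))  # bins used by each heuristic
```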
Sir Tim Gowers, a mathematics professor at Cambridge University who was not involved in the research, noted that AI has become a valuable collaborator for human mathematicians in recent years. He welcomed FunSearch as another tool for such collaborations, one that lets mathematicians efficiently search for inventive and unexpected constructions. These constructions are also readily interpretable by humans, which strengthens the potential for collaboration between mathematicians and AI.