Microsoft scientists claim glimmers of human logic appearing in neural networks

Source: Sputniknews  18.5.2023


While advances in artificial intelligence may revolutionize the world, they have also continually raised concerns about the potential dangers the technology may pose to society.


Microsoft recently published a research paper, titled "Sparks of Artificial General Intelligence: Early experiments with GPT-4," that explores the possibilities and risks associated with creating machines possessing human-level or superior intelligence.


The debate surrounding the concept of Artificial General Intelligence (A.G.I.) often delves into philosophical realms, making it a contentious topic among computer scientists. Previous claims of A.G.I. led to reputational damage for researchers, and distinguishing true intelligence from simulated intelligence remains a challenge.


However, recent progress in the field has shown promise, with new AI systems generating human-like answers and ideas without explicit programming.


Microsoft has restructured its research labs to include dedicated groups investigating A.G.I., with Sébastien Bubeck, the lead author of the Microsoft A.G.I. paper, leading one of these groups. The technology they are working with, OpenAI's GPT-4, is considered the most powerful language model currently available. Microsoft has invested $13 billion in OpenAI, indicating a strong partnership between the two companies.


GPT-4, after analyzing extensive amounts of digital text, including books, articles, and chat logs, has learned to generate its own text, write poetry and even engage in conversations.


Researchers have observed impressive behavior from the system, showcasing a "deep and flexible understanding" of human concepts and skills.


For instance, GPT-4 was able to write a mathematical proof that there are infinitely many prime numbers, in rhyming verse, demonstrating both mathematical and linguistic prowess.
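The proof in question is presumably Euclid's classical argument that the primes are infinite; a standard (non-rhyming) sketch for reference:

```latex
% Euclid's classical argument, sketched here for reference (not GPT-4's verse).
\begin{proof}
Suppose the primes were finitely many: $p_1, p_2, \dots, p_n$. Consider
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Each $p_i$ divides $N - 1$, so none of them divides $N$. Yet $N > 1$ must have
some prime factor, which is therefore a prime missing from the list ---
a contradiction. Hence there are infinitely many primes.
\end{proof}
```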


When challenged with various tasks, GPT-4 showed it could analyze, synthesize, evaluate and judge text, not merely generate it. For example, it produced a program to draw a unicorn, and when the code responsible for the horn was removed, it modified the program to draw the horn again.
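To illustrate the kind of program involved, here is a hypothetical sketch: a function that emits a stylized unicorn as SVG, where the horn is a single element that can be removed or restored. (In the Microsoft experiment the drawing code was in TikZ; this Python analogue and all its shape coordinates are invented for illustration.)

```python
# Hypothetical sketch of a "draw a unicorn" program as a function emitting SVG.
# Removing or restoring one element (the horn polygon) changes the output,
# mirroring the modification described in the experiment.

def draw_unicorn(with_horn=True):
    """Return a minimal SVG drawing of a stylized unicorn."""
    parts = [
        '<ellipse cx="50" cy="60" rx="30" ry="18"/>',   # body
        '<circle cx="80" cy="40" r="12"/>',             # head
        '<line x1="35" y1="75" x2="35" y2="95"/>',      # front leg
        '<line x1="65" y1="75" x2="65" y2="95"/>',      # hind leg
    ]
    if with_horn:
        parts.append('<polygon points="85,30 95,10 90,32"/>')  # the horn
    return '<svg xmlns="http://www.w3.org/2000/svg">' + "".join(parts) + "</svg>"

print("polygon" in draw_unicorn(with_horn=True))   # → True (horn present)
print("polygon" in draw_unicorn(with_horn=False))  # → False (horn removed)
```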


It also composed a program to assess diabetes risk based on personal information and crafted, in the voice of Mahatma Gandhi, a letter supporting an electron as a US presidential candidate, among other complex tasks.
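A diabetes-risk program of the kind described might look like the following toy rule-based score. This is an invented illustration: the inputs, thresholds, and point values are assumptions, not the logic GPT-4 actually produced, and certainly not medical advice.

```python
# Hypothetical sketch of a rule-based diabetes-risk assessment over personal
# information. Thresholds and weights are illustrative only.

def diabetes_risk(age, bmi, fasting_glucose_mg_dl, family_history):
    """Return 'low', 'moderate', or 'high' from a simple point score."""
    score = 0
    if age >= 45:
        score += 1
    if bmi >= 30:
        score += 1
    if fasting_glucose_mg_dl >= 100:  # at or above the typical prediabetes cutoff
        score += 2
    if family_history:
        score += 1
    if score >= 3:
        return "high"
    if score >= 1:
        return "moderate"
    return "low"

print(diabetes_risk(age=30, bmi=22, fasting_glucose_mg_dl=85, family_history=False))  # → low
print(diabetes_risk(age=50, bmi=32, fasting_glucose_mg_dl=110, family_history=True))  # → high
```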


Critics argue that Microsoft's claims are unsubstantiated, representing an opportunistic effort to generate attention for a technology that remains largely misunderstood. Some researchers emphasize that true general intelligence requires an understanding of the physical world, which GPT-4, a system trained only on text, does not possess. The paper's subjective and informal style, along with its departure from rigorous scientific evaluation, further raises skepticism among experts.


Because the researchers used an early version of GPT-4 that had not yet been fine-tuned to filter unwanted content, external experts cannot verify the claims made in the paper. Microsoft has clarified that the public version of the system is less powerful than the one it tested.


While GPT-4 demonstrates capabilities that resemble human reasoning, it also exhibits inconsistent behaviors. Some experts argue the text generated by these systems may not reflect true human reasoning or common sense.


Alison Gopnik, a psychology professor at the University of California, Berkeley, suggests that anthropomorphizing AI systems and pitting them against humans in a competition is an inadequate way to assess them.


The pursuit of Artificial General Intelligence continues to captivate researchers, but questions remain regarding the true nature and limitations of the intelligence generated by these systems.