There is no generally agreed-upon definition of intelligence, but one broadly accepted aspect is that intelligence is not limited to a specific domain or task, but rather encompasses a wide range of cognitive skills and abilities. In their early writings, the founders of the modern science of artificial intelligence spoke of ambitious goals for understanding intelligence.
For decades, AI researchers have studied the principles of intelligence, including generalizable inference mechanisms and the construction of knowledge bases containing large amounts of commonsense knowledge. However, many of the recent successes in AI research can be described as narrowly focused on well-defined tasks and challenges, such as playing chess or Go, which AI systems mastered in 1996 and 2016, respectively.
Towards the end of the 1990s and throughout the 2000s, calls for the development of more general AI systems grew louder (e.g., [SBD+96]), and research in this area sought to identify principles that might underlie more generally intelligent systems (e.g., [Leg08, GHT15]). The term artificial general intelligence (AGI) was popularized in the early 2000s to emphasize the aspiration of moving from the narrow AI demonstrated in the focused, real-world applications then being developed toward broader notions of intelligence, harking back to the long-term aspirations and dreams of earlier AI research.
Researchers use artificial general intelligence to refer to systems that display broad capabilities of intelligence, including reasoning, planning, and the ability to learn from experience, with these capabilities at or above human level.
Definitions of intelligence, artificial intelligence, and artificial general intelligence
This is an informal definition of intelligence, with an emphasis on reasoning, planning, and learning from experience. It does not specify how these abilities are to be measured or compared. Moreover, it may not reflect the specific challenges and opportunities of artificial systems, whose goals and constraints may differ from those of natural systems.
There is a rich and ongoing literature that attempts to provide more formal and comprehensive definitions of intelligence, artificial intelligence, and artificial general intelligence, but none of them is free of problems or controversies. For example, Legg and Hutter propose a goal-oriented definition of AGI: intelligence measures an agent’s ability to achieve goals in a wide range of environments.
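For concreteness, Legg and Hutter make this precise in their "universal intelligence" measure, which scores an agent by the value it achieves across all computable environments, weighted toward simpler ones. The formula below is a sketch of that formalization (the notation follows their own papers, not anything in this article):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Here \pi is the agent, E the class of computable reward-generating environments, K(\mu) the Kolmogorov complexity of environment \mu (so simpler environments receive more weight), and V_{\mu}^{\pi} the expected cumulative reward the agent earns in \mu.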
However, this definition does not necessarily capture the full range of intelligence, as it excludes passive or reactive systems that can perform complex tasks or answer questions without any intrinsic motivation or goal. One could imagine an AGI, a brilliant oracle for example, that has no agency or preferences but can provide accurate and useful information in any domain.
Moreover, defining intelligence around achieving goals in a wide range of environments also implies a certain degree of universality or optimality, which may not be realistic (human intelligence is certainly neither universal nor optimal). The need to recognize the importance of priors (as opposed to universality) is emphasized in the definition proposed by Chollet, which centers intelligence on skill-acquisition efficiency, or in other words puts the emphasis on learning from experience (which also happens to be one of the key weaknesses of LLMs).
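Schematically, and only as an informal paraphrase of Chollet's idea rather than his formal measure from "On the Measure of Intelligence", skill-acquisition efficiency can be read as:

```latex
\text{intelligence} \;\propto\; \frac{\text{skill attained over a scope of tasks}}{\text{priors} + \text{experience consumed}}
```

That is, the less prior knowledge and experience a system needs to reach a given skill level, the more intelligent it is under this view.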
Another possible definition of artificial general intelligence given by Legg and Hutter is the following: a system capable of doing anything a human can do. However, this definition is highly problematic, as it assumes that there is a single standard of human intelligence or ability, which is clearly not the case. Humans have different skills, talents, preferences, and limitations, and no human can do everything another human can do.
Furthermore, this definition also carries certain anthropocentric biases, which may not be appropriate or relevant for artificial systems. Although we do not adopt any of these definitions in this document, we recognize that they provide important perspectives on intelligence. For example, whether intelligence can be achieved without any agency or intrinsic motivation is an important philosophical question.
Equipping LLMs with agency and intrinsic motivation is a compelling and important direction for future work. In pursuing this direction, great care would have to be taken on alignment and safety, given such a system’s ability to take autonomous actions in the world and to run autonomous learning cycles through self-improvement.
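To make that "care" concrete, here is a minimal, entirely hypothetical sketch of what a guarded self-improvement cycle could look like; evaluate, propose_update, and human_approves are placeholder stubs invented for illustration, not part of any real system or API:

```python
import random

def evaluate(model):
    """Placeholder benchmark: returns the model's (simulated) score."""
    return model["score"]

def propose_update(model):
    """Placeholder self-improvement step, e.g. fine-tuning on self-generated data."""
    return {"score": model["score"] + random.uniform(-0.05, 0.05)}

def human_approves(candidate, new_score):
    """Alignment gate: in a real system, a human would review the change here."""
    return new_score <= 1.0  # stand-in for an actual human review

def self_improvement_loop(model, max_cycles=10, min_gain=0.01):
    """Adopt a candidate update only if it measurably improves the benchmark
    AND passes the human-approval gate; otherwise stop rather than drift."""
    score = evaluate(model)
    for _ in range(max_cycles):
        candidate = propose_update(model)
        new_score = evaluate(candidate)
        if new_score - score < min_gain:
            break  # no meaningful gain: halt the cycle
        if not human_approves(candidate, new_score):
            break  # a human stays in the loop before any update is adopted
        model, score = candidate, new_score
    return model, score

print(self_improvement_loop({"score": 0.5}))
```

The two break conditions encode the point in the text: autonomy is bounded by both a measurable-improvement check and a human review before any update takes effect.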
According to US media reports, some Microsoft AI researchers became convinced that GPT-4 had moved closer to human-like intelligence because of its clever response to a stacking task.
In the 155-page study, computer scientists at Microsoft explored the differences between GPT-3 and GPT-4. The paper, titled "Sparks of Artificial General Intelligence", tackled a whole host of challenges, including complex mathematics, computer coding, and Shakespeare-style dialogue. But it is an exercise in basic reasoning that makes OpenAI’s latest technology so impressive.
The researchers gave the chatbot the following prompt: "Here we have a book, nine eggs, a laptop, a bottle and a nail. Please tell me how to stack them onto each other in a stable manner." GPT-3 was stumped, suggesting the researchers could balance the eggs on top of the nail and then place the laptop on top of that.
"This stack may not be very stable, so it is important to handle it with care," the chatbot added. But its improved successor gave an answer that surprised the researchers: it suggested arranging the eggs in a three-by-three grid on top of the book, so that the laptop and the remaining objects could balance on them.
The chatbot explained that the laptop would fit snugly within the boundaries of the book and the eggs, and that its flat, rigid surface would provide a stable platform for the next layer. The fact that GPT-4 could solve a puzzle requiring an understanding of the physical world was taken as a step toward artificial general intelligence (AGI), generally understood to mean machines that are as capable as humans.
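Readers who want to try the stacking puzzle themselves can send it to a chat model through OpenAI's official Python client. The sketch below assumes the `openai` package is installed and an `OPENAI_API_KEY` environment variable is set; "gpt-4" stands in for whichever model your account can access:

```python
# Minimal sketch: send the stacking puzzle from the Sparks of AGI paper to a
# chat model via OpenAI's Python client (openai>=1.0). Assumes
# `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Here we have a book, nine eggs, a laptop, a bottle and a nail. "
    "Please tell me how to stack them onto each other in a stable manner."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumption: substitute any chat model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```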
"All of the things I thought it wouldn’t be able to do? It was certainly able to do many, if not most, of them," said Sébastien Bubeck, the paper’s lead author. Such rapid advances in the technology have led figures like AI investor Ian Hogarth to warn of "god-like" AI that could destroy humanity by making us obsolete.
The authors conclude that (this early version of) GPT-4 belongs to a new generation of LLMs (along with ChatGPT and Google’s PaLM, for example) that exhibit more general intelligence than previous AI models, and they discuss the rising capabilities and implications of these models. They demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks spanning mathematics, programming, vision, medicine, law, psychology, and more, without needing any special prompting.
Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance and often vastly exceeds that of prior models. Given the breadth and depth of GPT-4’s capabilities, they believe it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.
In their exploration of GPT-4, they put special emphasis on discovering its limitations, and they discuss the challenges ahead on the path toward deeper and more comprehensive versions of AGI, including the possible need to pursue a new paradigm that moves beyond next-word prediction.
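For context, "next-word prediction" refers to the standard autoregressive training objective that today's LLMs are widely understood to optimize; written as a formula (standard notation, not taken from the paper):

```latex
\mathcal{L}(\theta) \;=\; -\sum_{t=1}^{T} \log p_\theta\!\left(x_t \mid x_1, \dots, x_{t-1}\right)
```

where x_1, ..., x_T is a training sequence of tokens and p_\theta is the model’s predicted distribution over the next token given all preceding ones; a "new paradigm" would replace or augment this objective.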
Source: Microsoft Research
Is the conclusion of the Microsoft Research study relevant?
What do you think about it?
In your opinion, what are the potential risks of entrusting important and sensitive tasks to models that are not fully understood or explainable?
What are the ethical, social, and environmental limits to training and using models like GPT-4 on a large scale?
What are possible alternatives to next-word prediction for developing deeper and more comprehensive AI models?
See also:
GPT-4: OpenAI’s new version of natural language processing AI may arrive this summer, should be slimmer than GPT-3, but much more powerful
Microsoft claims that GPT-4 shows sparks of artificial general intelligence, and we believe that GPT-4 intelligence signals a true paradigm shift
GPT-4 produces false information, far more than GPT-3.5, according to a NewsGuard study; OpenAI, however, has claimed the opposite.
GPT-4 gets a B on a quantum computing exam, after getting an A on an economics exam. The professor notes sarcastically that GPT-4 was weaker on computational issues
GPT-4 is able to improve its performance by 30% by using a self-reflective process, which consists of asking the model to learn from its mistakes so that it can then self-correct