Artificial General Intelligence is Happening, Just a Question of How Much

Stephen McAleese, writing at LessWrong, used scaling laws and related formulas to try to predict the performance of GPT-4. He underpredicted its capabilities, though he was not wildly wrong: GPT-4 came out earlier than he expected and performed better than expected.

Performance on the MMLU (Massive Multitask Language Understanding) benchmark was better than expected, and GPT-4 can work with images as well as text.

GPT-4 also does very well on the SAT, GMAT, LSAT, and many other standardized tests.

Stephen notes that he ignored algorithmic advances, such as the introduction of image inputs to the GPT-4 model. Not taking algorithmic advances into account could also explain why he underestimated GPT-4's improvement on the MMLU benchmark. A scaling-only prediction will tend to underpredict improvements, because algorithmic advances speed up progress on top of what extra compute and data deliver.
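As a rough illustration of what a compute-only extrapolation looks like, here is a minimal sketch of a Chinchilla-style power-law loss curve. The functional form is standard, but the coefficients and compute figures below are made-up placeholders, not the numbers McAleese actually used.

```python
# Minimal sketch of a compute-only scaling extrapolation.
# L(C) = L_inf + a * C^(-b); the coefficients are illustrative placeholders.
L_INF, A, B = 1.7, 2.6, 0.05

def predicted_loss(compute_flops: float) -> float:
    """Predict pretraining loss from training compute (FLOPs) under a power law."""
    return L_INF + A * compute_flops ** (-B)

# Extrapolating to 10x more compute shows the smooth, diminishing gains that
# scaling alone predicts; algorithmic advances add improvement on top of this.
for flops in (3e23, 3e24):
    print(f"{flops:.0e} FLOPs -> predicted loss {predicted_loss(flops):.3f}")
```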

While the average capabilities of language models tend to scale smoothly with more resources, specific capabilities can increase abruptly because of emergence. A model that predicts linear improvement on a particular capability in the short term may therefore just be a short tangent to a more complex non-linear curve, which makes predicting specific capabilities over the long term significantly more difficult.
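A toy example (with purely invented numbers) of why a short-run linear fit can badly miss an emergent capability:

```python
import math

def toy_capability(log10_compute: float) -> float:
    """Invented emergent capability: near zero for a long stretch, then a sharp
    logistic rise once a compute threshold is crossed."""
    return 1.0 / (1.0 + math.exp(-3.0 * (log10_compute - 24.0)))

# Fit a straight line through two early observations and extrapolate forward.
x1, x2, x_future = 22.0, 23.0, 25.0
slope = (toy_capability(x2) - toy_capability(x1)) / (x2 - x1)
linear_forecast = toy_capability(x2) + slope * (x_future - x2)

print(f"linear extrapolation at log10(C)={x_future}: {linear_forecast:.2f}")
print(f"toy 'true' capability at log10(C)={x_future}: {toy_capability(x_future):.2f}")
# The tangent-line forecast stays small while the true curve jumps past 0.9.
```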

Algorithmic advances should have an even larger effect on machine-learning capabilities over the long term. More money and more cleverness from developers could overcome any plateauing of improvements from scaling alone.

Given the breadth and depth of GPT-4’s capabilities, it can reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.

Microsoft Research has published a 154-page analysis of GPT-4, Sparks of Artificial General Intelligence: Early Experiments with GPT-4.

The researchers focused on the surprising things that GPT-4 can do, but acknowledge that they do not address the fundamental questions of why and how it achieves such remarkable intelligence. How does it reason, plan, and create? Why does it exhibit such general and flexible intelligence when it is, at its core, merely a combination of simple algorithmic components: gradient descent and large-scale transformers trained on extremely large amounts of data? These questions are part of the mystery and fascination of LLMs, which challenge our understanding of learning and cognition, fuel our curiosity, and motivate deeper research. Key directions include ongoing research on the phenomenon of emergence in LLMs.

The central claim of their work is that GPT-4 attains a form of general intelligence, indeed showing sparks of artificial general intelligence. This is demonstrated by its core mental capabilities (such as reasoning, creativity, and deduction), the range of topics on which it has gained expertise (such as literature, medicine, and coding), and the variety of tasks it is able to perform (e.g., playing games, using tools, explaining itself, …). A lot remains to be done to create a system that could qualify as a complete AGI. The paper concludes by discussing several immediate next steps: defining AGI itself, building some of the missing components that LLMs need for AGI, and gaining a better understanding of the origin of the intelligence displayed by recent LLMs.

On the path to more general artificial intelligence

Some of the areas where GPT-4 (and LLMs more generally) should be improved to achieve more general intelligence include (note that many of them are interconnected):

• Confidence calibration: The model has trouble knowing when it should be confident and when it is just guessing. It both makes up facts that have not appeared in its training data and exhibits inconsistencies between the generated content and the prompt, which the paper refers to as open-domain and closed-domain hallucination, respectively (Figure 1.8 of the paper).

These hallucinations can be stated in a confident and persuasive manner that is difficult to detect, so they can lead to errors as well as to confusion and mistrust. While hallucination can be tolerable, or even useful, when generating creative content, reliance on factual claims made by a model that hallucinates can be costly, especially in high-stakes domains such as healthcare. There are several complementary ways to attempt to address hallucinations. One is to improve the calibration of the model (via prompting or fine-tuning) so that it either abstains from answering when it is unlikely to be correct or provides some other indicator of confidence that can be used downstream; a minimal abstention sketch appears after this list. Another approach, suitable for mitigating open-domain hallucination, is to insert information the model lacks into the prompt, for example by allowing the model to call external sources of information such as a search engine (Section 5.1 of the paper). For closed-domain hallucination, the use of additional model computation through post-hoc checks is also promising (the paper gives an example in Figure 1.8). Finally, building the user experience of an application with the possibility of hallucinations in mind can also be part of an effective mitigation strategy.


• Long-term memory: The model’s context is very limited (currently 8,000 tokens, and not scalable in terms of computation), it operates in a “stateless” fashion, and there is no obvious way to teach the model new facts. In fact, it is not even clear whether the model can perform tasks that require an evolving memory and context, such as reading a book while following the plot and understanding references to prior chapters over the course of reading. A common rolling-summary workaround is sketched after this list.


• Continual learning: The model lacks the ability to update itself or adapt to a changing environment. The model is fixed once it is trained, and there is no mechanism for incorporating new information or feedback from the user or the world. One can fine-tune the model on new data, but this can cause degradation of performance or overfitting. Given the potential lag between cycles of training, the system will often be out of date when it comes to events, information, and knowledge that came into being after the latest cycle of training.


• Personalization: Some applications require the model to be tailored to a specific organization or end user. The system may need to acquire knowledge about the workings of an organization or the preferences of an individual. In many cases, the system would need to adapt in a personalized manner over time, tracking changes in the people and organizations involved. For example, in an educational setting, the system would be expected to understand a student’s particular learning style and to adapt over time to the student’s progress in comprehension and skill. The model has no way to incorporate such personalized information into its responses except via meta-prompts, which are both limited and inefficient; a minimal meta-prompt sketch appears after this list.


• Planning and conceptual leaps: The model exhibits difficulties with tasks that require planning ahead or a “Eureka idea”, that is, a discontinuous conceptual leap in the progress toward completing a task. In other words, the model does not perform well on tasks that require the sort of conceptual leaps that often typify human genius.


• Transparency, interpretability and consistency: Not only does the model hallucinate, make up facts, and produce inconsistent content, but it seems to have no way of verifying whether the content it produces is consistent with its training data or is self-consistent. While the model is often able to provide high-quality post-hoc explanations for its decisions, using explanations to verify the process that led to a decision or conclusion only works when that process is accurately modeled and a sufficiently powerful explanation process is also accurately modeled. Both of these conditions are hard to verify, and when they fail there are inconsistencies between the model’s decisions and its explanations. Since the model does not have a clear sense of its own limitations, it is hard to establish trust or collaboration with the user without extensive experimentation in a narrow domain. A simple self-consistency check is sketched after this list.


• Cognitive fallacies and irrationality: The model seems to exhibit some of the limitations of human knowledge and reasoning, such as cognitive biases and irrationality (for example confirmation bias, anchoring, and base-rate neglect) and statistical fallacies. The model may inherit some of the biases, prejudices, or errors present in its training data, which may reflect the distribution of opinions or perspectives linked to subsets of the population or to broader common views and assessments.


• Challenges with sensitivity to inputs: The model’s responses can be very sensitive to details of the framing or wording of prompts and to their sequencing within a session. Such non-robustness suggests that significant effort and experimentation are often required to engineer prompts and their sequencing, and that use without such investment of time and effort can lead to suboptimal and misaligned inferences and results.
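A minimal sketch of the calibration-style mitigation mentioned under confidence calibration: abstain when the model’s own token probabilities suggest low confidence. The `generate_with_logprobs` callable and the threshold are assumptions, standing in for whatever LLM client and tuning you actually use.

```python
import math

CONFIDENCE_THRESHOLD = 0.75  # illustrative cutoff; would need tuning per task

def answer_or_abstain(prompt: str, generate_with_logprobs) -> str:
    """Return the model's answer, or abstain when confidence looks low.

    `generate_with_logprobs(prompt)` is a placeholder assumed to return
    (answer_text, per_token_logprobs) from some LLM API.
    """
    text, token_logprobs = generate_with_logprobs(prompt)
    avg_logprob = sum(token_logprobs) / max(len(token_logprobs), 1)
    confidence = math.exp(avg_logprob)  # geometric-mean token probability
    if confidence < CONFIDENCE_THRESHOLD:
        return "I'm not confident enough to answer this reliably."
    return text
```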
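For the long-term memory limitation, a common workaround (not something the paper prescribes) is to read a long text in chunks while carrying a compressed rolling summary forward. `summarize` is a placeholder for a call to the language model.

```python
def read_with_rolling_summary(chapters, summarize, max_summary_tokens=1000):
    """Process a book chunk by chunk, maintaining an evolving 'memory'.

    `summarize(prompt)` is a placeholder for an LLM call; `chapters` is any
    iterable of text chunks that each fit in the context window.
    """
    running_summary = ""
    for chapter in chapters:
        prompt = (
            f"Story so far (keep under {max_summary_tokens} tokens):\n"
            f"{running_summary}\n\n"
            f"New chapter:\n{chapter}\n\n"
            "Update the summary, preserving plot points and references needed "
            "to follow later chapters."
        )
        running_summary = summarize(prompt)
    return running_summary
```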
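The meta-prompt workaround mentioned under personalization amounts to injecting a user profile into the system message on every request; the profile fields here are illustrative.

```python
def build_personalized_messages(user_profile: dict, question: str) -> list:
    """Prepend a per-user system message (the 'meta-prompt' workaround).

    This is limited and inefficient, as noted above: the profile consumes
    context tokens on every call and the model itself learns nothing.
    """
    system_message = (
        "You are a tutor for the following student.\n"
        f"Learning style: {user_profile.get('learning_style', 'unspecified')}\n"
        f"Current level: {user_profile.get('level', 'unspecified')}\n"
        f"Topics mastered: {', '.join(user_profile.get('mastered', [])) or 'none yet'}"
    )
    return [
        {"role": "system", "content": system_message},
        {"role": "user", "content": question},
    ]
```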
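Finally, one crude check for the consistency problem is to sample the same question several times and measure agreement. `ask_model` is a placeholder for a sampled (temperature > 0) LLM call, and agreement is only a weak proxy for correctness.

```python
from collections import Counter

def self_consistency_vote(question: str, ask_model, n_samples: int = 5):
    """Sample the model repeatedly; return the majority answer and agreement rate.

    Low agreement flags answers that deserve extra scrutiny; high agreement
    does not guarantee correctness.
    """
    answers = [ask_model(question).strip().lower() for _ in range(n_samples)]
    majority, votes = Counter(answers).most_common(1)[0]
    return majority, votes / n_samples
```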
