Ben Evans on What Exactly is ‘AGI’
Welcome to the fifty-eighth edition of ‘3-2-1 by Story Rules’.
A newsletter recommending good examples of storytelling across:
- 3 tweets
- 2 articles, and
- 1 long-form content piece
Let’s dive in.
𝕏 3 Tweets of the week
Source: X
This is an excellent example of data + anecdote. The data gives you the big picture, but it’s the concrete anecdote (by ‘Air Katakana’, admittedly unverified) that you would perhaps remember and share with others.
Source: X
Interesting to see that most of the growth has come at the expense of foreign university grads.
Source: X
I normally don’t share tweets which require you to click to view them – but this one is absolute gold! Such high-quality humour while selling an old car.
📄 2 Articles of the week
a. ‘What can and can’t be learnt from Singapore’ by Janan Ganesh
The ever-reliable Janan Ganesh of the FT explores an intriguing idea – what can the world learn from Singapore?
Singapore’s success in building one of the world’s richest societies (in terms of per capita income) in just a few decades is well known.
But what can we learn from its growth? Here’s Ganesh, tongue firmly in cheek:
So, what can other countries learn from Singapore? Be small. If the US could hive off 320mn people and 99 per cent of its land mass, it would be an easier nation to mould. Second, have a maritime rather than continental setting. The likes of Bolivia are missing a trick there. Third, and foremost, get an individual of the calibre of Lee Kuan Yew as founder-leader. I presume there are headhunters for these things.
And so on and so unhelpfully on. In the end, Singapore is too particular, too sui generis in both its assets and liabilities, to constitute a template. It has but one universal lesson: the importance of an open mind.
He then lists some of its contradictions, showcasing how deliberate (and flexible) the country has been in making its choices:
If the mark of a thinking person is having a weird mix of beliefs, the island has had a few among its policymakers. This is a high-income nation where most people live in public housing. It is a private-sector paradise where civil servants can earn a fortune. It has an acute sense of independence from the west but uses English as the main language of instruction.
Good use of contrast in that paragraph.
b. ‘Group chats rule the world’ by Sriram Krishnan
Sriram Krishnan throws a spotlight on the unsung platform for the most interesting conversations in tech – group chats.
Most of the interesting conversations in tech now happen in private group chats: Whatsapp, Telegram, Signal, small invite-only Discord groups.
Being part of the right group chat can feel like having a peek at the kitchen of a restaurant but instead of food, messy ideas and gossip fly about in real time, get mixed, remixed, discarded, polished before they show up in a prepared fashion in public.
I liked the analogy of cooling rods and nuclear reactors to describe two typical characters in every group:
Cooling rods and nuclear reactors: Cooling rods are used in nuclear reactors to control the rate of the reaction. When they pull back, the rate increases and when they go in, the reaction slows down.
Every group chat usually has one or two people that like to talk… a lot. They are critical: you need the provocateurs who inject new ideas consistently. However, almost all of them have a tendency to dominate these groups.
This is where the cooling rods come in. This is usually the BDFL or some trusted member who can judge the state of the group. Conversation slowing down? Get some of these spicy provocative takes going. Conversation getting heated/dominated? Take someone aside and calm them down. No different from a friend of mine who tries to get everyone’s glasses filled again and again if he feels the dinner getting boring.
It’s like Sriram K has a window into all the large WhatsApp groups of the world!
📄 1 long-form read of the week
a. ‘Ways to think about AGI’ by Benedict Evans
(Hat-tip: Shyam Ramakrishnan)
Tech writer Ben Evans wrote a fascinating long-form piece on AGI (Artificial General Intelligence, as contrasted with ‘narrow’ intelligence; for a primer, read this old Tim Urban post).
AGI has been a Holy Grail for leaders of the AI community for several decades now. And recently, with every new breakthrough, it seems like we are getting closer to that exciting and scary milestone.
But what exactly is AGI? And when can we say we have reached it?
Ben Evans gives some historical context to this idea, sharing instances from the past when serious scientists thought we were close to achieving AGI, only for that optimism to dissolve into an ‘AI winter’:
Every few decades since 1946, there’s been a wave of excitement that something like this might be close, each time followed by disappointment and an ‘AI Winter’, as the technology approach of the day slowed down and we realised that we needed an unknown number of unknown further breakthroughs. In 1970 the AI pioneer Marvin Minsky claimed that in “from three to eight years we will have a machine with the general intelligence of an average human being”, but each time we thought we had an approach that would produce that, it turned out that it was just more software (or just didn’t work).
But this time, is it different? And should we be worried?
As we all know, the Large Language Models (LLMs) that took off 18 months ago have driven another such wave. Serious AI scientists who previously thought AGI was probably decades away now suggest that it might be much closer. At the extreme, the so-called ‘doomers’ argue there is a real risk of AGI emerging spontaneously from current research and that this could be a threat to humanity, and call for urgent government action. Some of this comes from self-interested companies seeking barriers to competition (‘This is very dangerous and we are building it as fast as possible, but don’t let anyone else do it’), but plenty of it is sincere.
He then shares an interesting idea – we keep shifting the goalposts of what it means to be AGI (emphasis mine):
Taking this one step further, as I listened to Mark Zuckerberg talking about Llama 3, it struck me that he talks about ‘general intelligence’ as something that will arrive in stages, with different modalities a little at a time. Maybe people will point at the ‘general intelligence’ of Llama 6 or ChatGPT 7 and say “That’s not AGI, it’s just software!” We created the term AGI because AI came just to mean software, and perhaps ‘AGI’ will be the same, and we’ll need to invent another term.
It’s like we are saying: “I don’t know what I mean by AGI, but I’ll know it when I see it”!
I liked this frame of practice vs. theory:
On this theme, some people suggest that we are in the empirical stage of AI or AGI: we are building things and making observations without knowing why they work, and the theory can come later, a little as Galileo came before Newton (there’s an old English joke about a Frenchman who says ‘that’s all very well in practice, but does it work in theory’)
Evans ends with a shrug:
By default, though, this will follow all the other waves of AI, and become ‘just’ more software and more automation. Automation has always produced frictional pain, back to the Luddites, and the UK’s Post Office scandal reminds us that you don’t need AGI for software to ruin people’s lives. LLMs will produce more pain and more scandals, but life will go on. At least, that’s the answer I prefer myself.
That’s all from this week’s edition.