A Flight Crash due to Over-reliance on AI?


Last week I attended a cool event – ‘Science on Tap‘, where a scientist talks about his or her work over beers. This talk was by Ganesh Bagler of IIIT Delhi, who spoke about ‘Computational Gastronomy’ (essentially the use of AI for analysing and recommending food recipes). It was fascinating.

For someone like me, events don’t get much better. A thought-provoking topic, good storytelling, a lovely setting, delicious food, and most of all, some great beers (specifically, IPAs). Can’t ask for more.

Try and attend one of these if you get the chance.

And now, on to the newsletter.

Welcome to the eighty-third edition of ‘3-2-1 by Story Rules‘.

A newsletter recommending good examples of storytelling across:

  • 3 tweets
  • 2 articles, and
  • 1 long-form content piece

Let’s dive in.


𝕏 3 Tweets of the week

[Embedded tweet – Source: X]

I like the data stories put out by Roshan Kishore of HT – clear messages, and each piece reads like a story.


[Embedded tweet – Source: X]

One more data point on the positive impact of immigration. In India, Mumbai, Bangalore and, to some extent, Delhi have benefitted from this. I wonder if we might become an immigration magnet for the world as a whole? We will definitely need to massively improve our cities for that to happen…


[Embedded tweet – Source: X]

This also applies to storytelling. AI might be good at generating ideas, but human judgement and taste are key in converging on the most relevant and appropriate option/s.


📄 2 Articles of the week

a. ‘Are We Too Impatient to Be Intelligent?’ by Rory Sutherland

I know, I know. We looked at a podcast featuring Rory Sutherland just last week. And this article (actually his speech from an event called Nudgestock) has very similar ideas… But today I am highlighting some different ones from those I covered last week. And yes, given it’s Rory, it still makes for thought-provoking reading.

A great example of problem (re)definition

What happens when you give a load of engineers a brief? I always ask the question: What would’ve happened if you hadn’t given the brief for High Speed 2 (a high-speed railway project in the UK) to a load of engineering firms who immediately focused on speed, time, distance, capacity. What if you’d given the brief to Disney instead?

They would’ve said, “First of all, we’re going to rewrite the question. The right question for High Speed 2 is: How do we make the train journey between London and Manchester so enjoyable that people feel stupid going by car?” That’s the right question. It’s not about time and speed and distance. Those things only obliquely correlate with human behavior, with human preference.

The paradox of choice:

…dating back to the Industrial Revolution, the acceleration of things has made us miserable because our choices are no longer sufficiently limited that we feel we can accomplish everything we want. We’ve created an acceleration and an explosion of choice, which will permanently leave us feeling fundamentally unsatisfied or under-optimized.

Sometimes, slow is good:

I think there are things in life that you want to telescope and compress and accelerate and streamline and make more efficient. And there are things where the value is precisely in the inefficiency, in the time spent, in the pain endured, in the effort you have to invest. And I don’t think we’re going to differentiate between those things. Because I think, like my friend at Transport for London, the automatic assumption is going to be that faster is better. We need to understand when we need to go slow.

b. ‘My biggest productivity mistake’ by Tim Harford for The Financial Times

Productivity is a never-ending pursuit. Even the most ‘productive’ of people seem dissatisfied with their inefficiencies. In this article, author and podcaster Tim Harford (of Undercover Economist fame) acknowledges that when his editor asks him to write about productivity, he feels like an imposter:

From time to time, my editor will suggest that I write a column about how to be more productive. It’s a sure way to trigger imposter syndrome because, whether or not I appear productive from the outside, I certainly don’t feel productive on the inside. In fairness to myself, and to anyone who worries that they should be getting more done, personal productivity is a fiendishly hard problem. It demands willpower, since there is usually some pleasant distraction available.

Trying to achieve 100% productivity is a fool’s errand – your time is finite, while the list of things you want to (or have to) do is never-ending:

And a final challenge to anyone trying to get everything done: that goal is simply beyond us all. As Oliver Burkeman explains in his new book ‘Meditations for Mortals’, “the incoming supply of things that feel as though they genuinely need doing isn’t merely large, but to all intents and purposes infinite. So getting through them all isn’t just very difficult. It’s impossible.”

Having established that one shouldn’t aim for perfection, Harford shares a counter-intuitive, relatively low-hanging-fruit way to become more productive at work: become less productive with email.

Email is so easy to deal with that it’s tempting to let email replace hard work. Faced with a genuinely difficult task, it’s the path of least resistance to open up my inbox instead. It doesn’t feel like I’m ducking the real work — what could be more professional than dealing promptly with email? But ducking the real work is exactly what I’m doing. For me, the most dangerous distraction is not YouTube or Instagram: it’s the things such as email, which are nearly, but not quite, the work that needs to be done.

His recommendation: Have a clear to-do list before you open your computer and don’t let email consume your time:

The solution is childishly simple. I should ensure that whenever I switch on my computer I have in front of me a good list of what I need to do.


🎧 1 long-form listen of the week

a. ‘Flying Too High: AI and Air France Flight 447’ on Cautionary Tales with Tim Harford

More Tim Harford. (Sutherland, Harford… let’s call this edition the Brit-special!)

This episode features a worrying story about the 2009 crash of an Air France passenger jet into the Atlantic, in which all 228 people on board died. Harford frames it as a story of over-reliance on AI.

He begins the episode on a cliffhanger – with the resting senior pilot being woken up because the plane was in crisis:

When you are asleep and the alarm goes off, how quickly do you wake up? Captain Dubois took 98 seconds to get out of bed and into the flight deck. Not exactly slow, but not quick enough. By the time he arrived on the flight deck of his own airplane, he was confronted with a scene of confusion. The plane was shaking so violently that it was hard to read the instruments. An alarm was alternating between a chirruping trill and an automated voice going, “Stall, stall, stall, stall”.

Despite their efforts, the pilots could not recover the plane, and it crashed into the cold depths of the Atlantic.

Harford then segues into a fascinating Harvard Business School experiment on the impact of AI on decision-making. In the study, professional recruiters were asked to evaluate real resumes. The recruiters were divided into three groups:

Some of the recruiters were given software that was designed to operate at a very high standard. For simplicity, Dell’Acqua (the study’s lead) calls that ‘good AI’. Other recruiters, chosen at random, were given an algorithm which didn’t work quite as well – ‘bad AI’. They were told that the algorithm was patchy: it would give good advice, but it would also make mistakes. Then there was a third group, also chosen at random, who got no AI support at all.

Dell’Acqua found that the computer assistance was very helpful. Whether recruiters were given good AI or bad AI, they made more accurate recruitment choices than the recruiters with no AI at all.

But here’s the surprise: the recruiters with good AI did worse than those with bad AI.

Why? Because they switched off. The group who had the good AI spent less time analyzing each application. They more or less left the decision to the computer. The group who knew they had a less reliable AI tool spent more effort and paid closer attention to the applications. They used the AI, but they also used their own judgment. And despite having a worse tool, they made more accurate decisions.

Dell’Acqua called that study ‘Falling Asleep at the Wheel’.

Harford connects this study with the Air France crash. The problem for the pilots was not bad AI. Their AI (the autopilot and the fly-by-wire system) was excellent. But it was so good that the pilots probably relied on it too much and could not fly the plane without its assistance. (The autopilot had briefly disengaged because the airspeed indicators temporarily stopped working when the plane flew into a storm over the equator.)

As William Langewiesche, a pilot-turned-writer who analysed the crash, assessed:

Langewiesche argued that the pilots simply weren’t used to flying their own airplane at altitude without the help of the computer. Even Captain Dubois had spent only four hours in the last six months actually flying the plane rather than supervising the autopilot, and had had the help of the full assistive fly-by-wire system, which lets the plane fly itself.

Scary! You’d think that the pilots would have been trained to handle situations like this.

This brings Harford to a fascinating concept – the paradox of automation:

When manual takeover is necessary, something has usually gone wrong. This means that operators need to be more rather than less skilled in order to cope with these atypical conditions. This is called the paradox of automation. Unreliable automation keeps the operators sharp and well-practiced. But the better the automated system gets, the less experience the operators will have in doing things for themselves and cruelly, the weirder the situations will be when the computer gives up.

Unsettling, right? I hope that airlines globally have learnt their lesson from this…!


That’s all from this week’s edition.

Photo by Diana Polekhina on Unsplash

Decode Leadership Storytelling

Join the ‘3-2-1 by Story Rules’ newsletter and get an e-book that decodes the hidden storytelling structure used by Jeff Bezos, Bill Gates and Warren Buffett.
