Living with Artificial Intelligence | Part 2

BBC Reith Lectures, 2021

Elemento
8 min read · Feb 22, 2022

Here I am with the continuation of my previous blog post. If you haven’t read it, I would urge you to check it out here. In the previous blog, we explored the insights from the first and second lectures of the BBC Reith Lecture Series, and in this one, we will move on to the third and fourth lectures. So, without any further ado, let’s dive in!

AI and the Economy

Photo by Sebastian Herrmann on Unsplash

The Third Reith Lecture of 2021 took place at the University of Edinburgh, Scotland. In this lecture, Stuart explores the future of work and one of the most concerning issues raised by Artificial Intelligence: the threat to jobs. He also tries to answer a very important question: “How will the economy adapt as work is increasingly done by machines?”

Key Insights of the Third Reith Lecture

  • Stuart sets the stage with the predictions of Leon Bagrit, John Maynard Keynes, and Aristotle, all of which essentially boil down to a single notion: that the advance of technology would lead to the end of employment.
  • The discussion presented in this lecture considers not just AI as it exists today, but AI in the form of general-purpose AI (i.e., machines that can quickly learn to perform well across the full range of tasks that humans can perform), since this has been the goal of AI since its inception.
  • Stuart introduces the audience to two terms, the Luddite fallacy (the belief that machines taking people’s jobs leads to lasting unemployment) and the lump-of-labour fallacy (the belief that the amount of work to be done is fixed, so if machines do more, people do less), and describes how, at some point, it became common to dismiss concerns about technological unemployment by invoking these two notions.
  • Despite the fact that low-skilled workers’ real earnings have declined substantially in many developed countries over the last 50 years, Stuart highlights that there is still great reluctance among economists to admit that any group could be harmed in absolute terms.
  • The direct effects of technology on employment can be represented by the notion of the inverted-U curve (proposed by the economist James Bessen). Consider a new technology X. At first, X increases employment by reducing costs and increasing demand; subsequently, further increases in X mean that fewer and fewer humans are required once demand saturates. Bessen catalogues several major industries showing exactly this pattern (a toy numeric sketch of the curve follows this list). The indirect effects of technology are, essentially, the jobs of the people employed in developing X; however, this number has to be far smaller than the number of people X puts out of work, for only then do costs actually go down.
  • Here, Stuart presents his idea of “the wealth effect”: since we pay less for a certain service or product, we have more money to spend on other things, thereby increasing demand, and hence employment, in other sectors. Economists have tried to measure the relative sizes of all these effects, but the results are inconclusive.
  • Stuart gives examples of a number of technologies that could cause a substantial shift in income share from labour to capital and to the topmost echelons. These include self-driving taxis and goods vehicles; language-understanding machines, which will automate many short-interaction tasks; robotic process automation, which could eliminate low-level programming jobs and computer-based clerical tasks; and so on.
  • In several Zoom workshops hosted by the World Economic Forum, two opposing camps formed around the question “Would the end of work be a good thing?”. One camp agreed with Keynes’ vision and favoured a Universal Basic Income (UBI), which provides a reasonable income, derived from tax revenues, to every adult, regardless of circumstance, allowing people to spend their time as they see fit. The second camp believed the opposite: according to them, UBI represents merely an admission of failure, as it assumes that most people will have nothing of economic value to contribute to society.
  • According to Stuart, the inevitable answer seems to be that people will be engaged in supplying interpersonal services that can be provided, or that we prefer to be provided, only by humans. That is, if we can no longer supply routine physical labour and routine mental labour, we can still supply our humanity; we will need to become good at being human. Some examples of current interpersonal professions include psychotherapists, executive coaches, tutors, counselors, social workers, companions, and those who care for children and the elderly.
  • In a broader sense, Stuart is talking about “perfecting the art of life itself.” The capacity to inspire others and to confer the ability to appreciate and to create, be it in art, music, literature, conversation, gardening, baking, or video games, is likely to be needed more than ever.
  • Stuart concludes the third lecture by stating that we don’t yet know how to add value to each other’s lives in consistent, predictable ways, partly because individuals are all so different. This suggests a need to re-target our education system and our scientific enterprise to focus not on the physical world but on the human world. The final result, if it works, would be a world worth living in, and without such a rethinking, we risk an unsustainable level of socioeconomic dislocation.
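
To make the inverted-U pattern concrete, here is a minimal sketch in Python. This is my own toy model for illustration, not Bessen’s actual data or equations: demand grows as the technology cuts costs but eventually saturates, while output per worker keeps rising, so the number of workers needed first rises and then falls.

```python
# Toy inverted-U employment curve (illustrative only, not Bessen's model):
# demand rises as technology cuts prices but eventually saturates, while
# output per worker keeps climbing, so employment first rises, then falls.

def employment(tech_level: float) -> float:
    """Workers needed at a given level of technology adoption (arbitrary units)."""
    max_demand = 1000.0                                    # demand saturates here
    demand = max_demand * tech_level / (0.5 + tech_level)  # saturating demand curve
    output_per_worker = 1.0 + 4.0 * tech_level             # rising productivity
    return demand / output_per_worker

for t in [0.1, 0.3, 0.5, 1.0, 2.0, 5.0, 10.0]:
    print(f"tech level {t:5.1f} -> employment {employment(t):6.1f}")
```

Running this prints employment climbing from roughly 119 to about 170 workers and then sliding down toward 23 as adoption grows, which is exactly the rise-then-fall shape Bessen describes.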

Beneficial AI and a Future for Humans

Photo by Lucrezia Carnelos on Unsplash

The Fourth Reith Lecture of 2021 took place at the National Innovation Centre for Data, in Newcastle, England. In this lecture, Stuart suggests a way forward for human control over super-powerful AI. He argues for the abandonment of the current “standard model” of AI, proposing instead a new model based on three principles, chief among them the idea that machines should know that they don’t know what humans’ true objectives are. Stuart points out that echoes of the new model are already found in phenomena as diverse as menus, market research, and democracy. According to him, machines designed according to the new model would be deferential to humans, cautious and minimally invasive in their behavior and, crucially, willing to be switched off.

Key Insights of the Fourth Reith Lecture

  • It has been pointed out to Stuart that there’s too much doominess these days, in climate, in politics, and particularly in predictions about AI. Stuart therefore sets the goal of the fourth lecture: to eliminate some of the doominess by explaining how to retain power, forever, over entities far more powerful than ourselves, entities that we cannot outwit. He calls this the “control problem”.
  • To solve this problem, Stuart takes us back to the core of how AI is defined: machines are intelligent to the extent that their actions can be expected to achieve their objectives. In simple words, we specify a fixed objective for the machine to achieve or optimize. Now here’s the difficulty: if we put the wrong objective into a super-intelligent machine, we create a conflict that we are bound to lose, because the machine stops at nothing to achieve the specified objective.
  • During a sabbatical in Paris, it occurred to Stuart that we should build AI systems that know that they don’t know the true objective, even though it’s what they must pursue. Over the next few days, he wrote these ideas down in the form of three principles, partly in homage to the Three Laws of Robotics proposed by the great science-fiction writer Isaac Asimov.
  • The first principle is that the machine’s only objective is to maximize the realization of human preferences. This means that the machine will be purely unselfish towards humans, with no objectives of its own, not even the self-preservation commanded by Asimov’s Third Law.
  • The second principle is that the machine is initially uncertain about what those preferences are. This is the core of the new approach. We remove the false assumption that the machine is pursuing a fixed objective that is perfectly known. This principle is what gives us control over the super-intelligent AI.
  • The third principle is that the ultimate source of information about human preferences is human behavior. Here “behavior” means everything we do, which includes everything we say, as well as everything we don’t do. It also includes the entire written record because most of what we write is about humans doing things.
  • Unlike Asimov’s Laws, these three principles are not laws built into the AI system that it consults for guidance. They are guides to AI researchers in setting up the formal mathematical problem that their AI system is supposed to solve, and that formal problem should have the property that if the AI system solves it, the results will be provably beneficial to humans.
  • The most important result that we want out of these three principles is that the machine will always allow us to switch it off, and this is the key to the control problem. The idea is to formulate a mathematical theorem that links the robot’s incentive to allow itself to be switched off directly to its uncertainty about human preferences. This theorem, according to Stuart, seems to be robust to all sorts of complications in the basic scenario (a toy numeric illustration of this incentive follows this list).
  • The insights so far derive from the basic two-player assistance game, in which there are two decision-making entities involved: a human being and a robot. As we move beyond this basic game, we are immediately faced with the question, “How should the machine decide when its actions affect more than one person?”. The short answer is that, with more than one person, the machine needs to make trade-offs.
  • Stuart provides numerous examples to support all of his statements. He concludes the final lecture by bringing up the nature of our co-existence with AI, assuming we have solved the control problem and developed general-purpose, provably beneficial AI. He indicates that one of the possibilities is that an increasing dependence on AI leads us to become weak and infantilized, like the humans in the film WALL-E.
  • Autonomy is a fundamental human value, which means that beneficial AI systems cannot ensure the best possible future if ensuring it means a loss of autonomy for humans. It may be that machines must refrain from using their powers to predict how we will behave, in order for us to retain the necessary illusion of free will.
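
To get a feel for why uncertainty is what makes the off-switch acceptable to the machine, here is a minimal numeric sketch in Python. This is my own simplified illustration, loosely inspired by the assistance-game setting; the payoff numbers and the assumption of a rational human are mine, and this is not the formal theorem from the lecture.

```python
import random

# Toy off-switch scenario (illustrative only, not the formal theorem).
# A robot's proposed action has an unknown utility u for the human.
# ACT:   perform the action immediately, receiving u.
# DEFER: let the human inspect the action; a rational human permits it
#        when u >= 0 (payoff u) and switches the robot off when u < 0 (payoff 0).

def expected_payoffs(utility_samples):
    """Monte Carlo estimate of the robot's expected payoff for ACT vs DEFER."""
    act = sum(utility_samples) / len(utility_samples)
    defer = sum(max(u, 0.0) for u in utility_samples) / len(utility_samples)
    return act, defer

random.seed(0)

# Case 1: the robot is uncertain whether the action helps or harms the human.
uncertain = [random.gauss(0.2, 1.0) for _ in range(100_000)]
# Case 2: the robot is (perhaps wrongly) certain the action is mildly good.
certain = [0.2] * 100_000

for name, samples in [("uncertain robot", uncertain), ("certain robot", certain)]:
    act, defer = expected_payoffs(samples)
    print(f"{name}: E[act] = {act:+.3f}  E[defer] = {defer:+.3f}")
```

For the uncertain robot, E[defer] comes out around +0.5 versus +0.2 for acting immediately, so leaving the off-switch in human hands is strictly better for the robot itself; for the certain robot the two payoffs coincide, and it gains nothing from deferring. This mirrors the claim above that the incentive to allow shutdown comes directly from the machine’s uncertainty about our preferences, as stated in the second principle.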

I would like to express my gratitude to all of the readers who took the time to read my two-part blog. I really hope you have found answers to some of your questions about how the future with AI is going to look. Once again, if you haven’t checked out the first half of this two-part blog, you can find it here.

A little about ME 👋

You can safely skip this section if you have no interest in knowing the author, or if you already know me. I promise that there is no hidden treasure in this section 😆.

I am an Artificial Intelligence Enthusiast. If you liked this blog, do put your hands together 👏, and if you would like to read more blogs on Artificial Intelligence, #StayTuned.
