Human Compatible - Artificial Intelligence and the Problem of Control
How much do I want to read more? 7/10
Hmmm, in a sense it's quite fascinating: thinking about the future of AI and what machines could do. What if we could put real intelligence in a machine that can "think" by itself?
On the other hand, the book is only suppositions, and it might become irrelevant very soon.
Still, I think it's a good read for thinking about the future. It gives ideas.
This book is about the past, present, and future of our attempt to understand and create intelligence.
This matters, not because AI is rapidly becoming a pervasive aspect of the present but because it is the dominant technology of the future.
Gaining access to considerably greater intelligence would be the biggest event in human history.
- The first part (Chapters 1 to 3) explores intelligence in humans and in machines.
- The second part (Chapters 4 to 6) examines the problems arising from imbuing machines with intelligence, chiefly the problem of control: retaining absolute power over machines that are more powerful than us.
- The third part (Chapters 7 to 10) proposes how to ensure that machines remain beneficial to humans.
1- IF WE SUCCEED
What will be the biggest event in the future of humanity? Candidates include:
- We all die (asteroid impact, climate catastrophe, pandemic, etc.).
- We all live forever (medical solution to aging).
- We invent faster-than-light travel and conquer the universe.
- We are visited by a superior alien civilization.
- We invent superintelligent AI.
I suggested that the fifth candidate, superintelligent AI, would be the winner, because it would help us avoid physical catastrophes and achieve eternal life and faster-than-light travel, if those were indeed possible.
The arrival of superintelligent AI is in many ways analogous to the arrival of a superior alien civilization but much more likely to occur.
How Did We Get Here?
It began in 1956, when two young mathematicians, John McCarthy and Marvin Minsky, persuaded Claude Shannon and Nathaniel Rochester, the designer of IBM's first commercial computer, to join them in organizing a summer workshop at Dartmouth. Their proposal read:
The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
Beginning around 2011, deep learning techniques began to produce dramatic advances in speech recognition, visual object recognition, and machine translation—three of the most important open problems in the field.
In 2016 and 2017, DeepMind’s AlphaGo defeated Lee Sedol, former world Go champion, and Ke Jie, the current champion—events that some experts predicted wouldn’t happen until 2097, if ever.
Now AI generates front-page media coverage almost every day.
Millions of students have taken online AI and machine learning courses.
Self-driving cars and intelligent personal assistants are likely to have a substantial impact on the world over the next decade or so.
What Happens Next?
There are several breakthroughs that have to happen before we have anything resembling machines with superhuman intelligence.
Yet history warns against confident timelines: the day after Rutherford declared extracting energy from atoms to be moonshine, Szilard conceived the nuclear chain reaction. The problem of liberating nuclear energy went from impossible to essentially solved in less than twenty-four hours.
The moral of this story is that betting against human ingenuity is foolhardy.
we have to face the fact that we are planning to make entities that are far more powerful than humans. How do we ensure that they never, ever have power over us?
Consider how content-selection algorithms function on social media. They aren't particularly intelligent, but they are in a position to affect the entire world because they directly influence billions of people.
Typically, such algorithms are designed to maximize click-through, that is, the probability that the user clicks on presented items. The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user’s preferences so that they become more predictable.
Like any rational entity, the algorithm learns how to modify the state of its environment—in this case, the user’s mind—in order to maximize its own reward.
The consequences include the resurgence of fascism, the dissolution of the social contract that underpins democracies around the world, and potentially the end of the European Union and NATO. Not bad for a few lines of code.
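To make that feedback loop concrete, here is a toy simulation (the click model, the drift rate, and both policies are invented for illustration; this is not any real platform's algorithm). A myopic policy serves users exactly what they already like; a policy that nudges users toward more extreme content earns fewer clicks at first but, by making the user more extreme and therefore more predictable, collects more clicks overall:

```python
def click_prob(item, pref):
    # Toy click model: users click items near their current preference,
    # and more extreme users (pref far from 0.5) click more reliably.
    match = 1.0 - abs(item - pref)
    extremity_bonus = 0.5 * abs(pref - 0.5)
    return max(0.0, min(1.0, 0.5 * match + extremity_bonus))

def drift(item, pref, rate=0.1):
    # Exposure nudges the user's preference toward the shown item.
    return pref + rate * (item - pref)

def run(policy, pref=0.5, steps=50):
    total_clicks = 0.0
    for _ in range(steps):
        item = policy(pref)
        total_clicks += click_prob(item, pref)
        pref = drift(item, pref)
    return total_clicks, pref

myopic = lambda pref: pref                      # serve the user's current taste
radicalize = lambda pref: min(1.0, pref + 0.3)  # push a bit toward the extreme

clicks_m, pref_m = run(myopic)      # user unchanged, moderate click total
clicks_r, pref_r = run(radicalize)  # user driven to the extreme, higher total
```

In this toy world the radicalizing policy wins on the algorithm's own metric: the user ends up near the extreme (pref close to 1.0) and total clicks exceed the myopic policy's, which is exactly the perverse incentive described above.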
What Went Wrong?
Humans are intelligent to the extent that our actions can be expected to achieve our objectives.
Because machines, unlike humans, have no objectives of their own, we give them objectives to achieve. In other words, we build optimizing machines, we feed objectives into them, and off they go.
Back in 1960, Norbert Wiener had just seen Arthur Samuel's checker-playing program learn to play far better than its creator, and he warned:
"If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively . . . we had better be quite sure that the purpose put into the machine is the purpose which we really desire."
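Wiener's point can be sketched in a few lines (a hypothetical thermostat example; the hill-climbing optimizer and both objective functions are invented for illustration). The machine optimizes exactly the purpose put into it, not the purpose we really desire:

```python
def optimize(objective, x, step=0.01, iters=1000):
    # A minimal "standard model" machine: hill-climb whatever objective
    # it is handed. It has no notion of what we actually meant.
    for _ in range(iters):
        if objective(x + step) >= objective(x):
            x += step
        elif objective(x - step) > objective(x):
            x -= step
    return x

# What we really desire: room temperature near 21 degrees C.
true_desire = lambda temp: -abs(temp - 21.0)

# The purpose we actually put into the machine: "warmer is better".
stated_objective = lambda temp: temp

temp = optimize(stated_objective, x=18.0)
# The machine dutifully maximizes the stated objective and overshoots,
# leaving the room far hotter than anyone wanted.
```

The bug is not in the optimizer, which works exactly as built; it is in the gap between the objective we stated and the objective we meant.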
Can We Fix It?
Machines are beneficial to the extent that their actions can be expected to achieve our objectives.
The difficult part, of course, is that our objectives are in us (all eight billion of us, in all our glorious variety) and not in the machines.