Free Will & Artificial General Intelligence (AGI)
Lessons about AI from lessons about our mind. Focused on the nature of free will.
I have been trying to train my mind through meditation for about two years now. It is remarkable how frequently and easily you can notice the brain's actual modus operandi differing from its perceived modus operandi. Things like superimposing images on your visual field, noticing how arbitrary choices seem to simply appear, and the transient nature of even self-conjured negative emotions can all easily challenge our sense of the status quo.
The Waking Up App has a new series concisely describing the illusion that is the fleeting feeling of free will. It is unsurprising that we view the artificial intelligences (AIs) we create through the lens of our own subjective experience. That is: we see the AIs through the lens of how we think and how we view the world. Our thoughts and perceptions do not entirely reflect reality, and the gaps you can observe when meditating on the nature of reality are gaps we get to decide whether or not to imbue our machine creations with.
Ultimately, the features of the mind that make it feel like something to be human can likely be hard-wired in code to determine what it is like to be an AI. Some of these sensations are more intuitive to set up in code, and some represent open problems in AI research. On the other side of the coin, we can make AIs that have no capacity or expectation to feel any of the sensations we regularly experience as humans. In this post, I will walk you through the 90-minute course from Sam Harris on free will, focusing on what it means for AI.
- The first four lessons (1. Cause & Effect, 2. Thoughts without a Thinker, 3. Choice, Reason, & Knowledge, and 4. Love & Hatred) have the most bearing on creating intelligent computer systems, so I will spend more time there.
- The last three lessons (5. Crime & Punishment, 6. The Paradox of Responsibility, and 7. Why Do Anything?) have more to do with ethics and the creation of a functional society in light of these mental structures. They are less about creating AIs and more about how AIs could fit into this society.
In writing this, many themes of Artificial General Intelligence (AGI) and computer consciousness came up. The points made here are early explorations, and I suspect these themes will be revisited as we learn more.
Some terms I use heavily in this piece can take multiple meanings, but for this post, I am thinking of them as:
- Meditation: the act of investigating the nature of the mind, normally through quietly focusing on individual aspects of awareness (such as the breath).
- Free will: the subjective feeling that your decisions, biology, and primarily your sense of self determine your actions to some extent.
- Artificial Intelligence: an agent that reasons and interacts with the world.
For an illustrative mental exercise (a meditation) to warm you up to the illusion of free will, focus very closely on this arbitrary task and on how your brain comes to a conclusion: what is your favorite article that you have read in the past month?
(Pause and think closely about what has come up)
Now, think of another piece of writing.
Do you have any control over which articles come to mind? The feeling is that suggestions appear at random as a consequence of your current state. This is a clear example of how we lack free will; really, the trick we play on ourselves is believing that even an illusion of free will exists at all. By no means do I expect this exercise alone to convince you that you are not an autonomous entity; it can simply be illuminating and make you want to investigate further.
Closely examining the nature of the mind shows how strongly subjective our experience is. That subjectivity is our wiring, and we are building machines that largely reflect this notion. As with the example above, the illusion in our brain is really that there is an illusion of free will: when we closely examine what is happening, the freedom disappears.
Considering these mental states and common operations is crucial when planning for powerful AIs in the loop. Now I will walk through the lessons from Sam and what can be taken away from them.
1. Cause & Effect
Consider the causes of a phone ringing: some digital signal is transferred to a speaker (preceded by many other digital signals), and electrical oscillations create sound waves. What causes a phone to ring can be construed in different ways at different points along the chain of engineering, and the same ambiguity exists with human thought. The difference is that with human thought we rarely consider possible causes other than ourselves. Ultimately, there is little difference between initiation in the human brain, where a set of neurons fires in reaction to a stimulus, and initiation in code, where an interrupt triggers an action. In understanding the human brain we are limited by neuroscience, while in computers we have already eliminated most uncertainty through fabrication. A difference in our fundamental scientific understanding of a system does not preclude the two systems from behaving the same at a high level.
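To make the interrupt analogy concrete, here is a minimal sketch (all names illustrative, not from any particular library) of stimulus-driven action in code: prior wiring plus a stimulus produces the action, with no "self" anywhere in the causal chain.

```python
# A minimal sketch of stimulus -> action, analogous to a neuron firing.
# All names here are illustrative, not from any specific library.

class EventBus:
    """Routes stimuli to registered handlers, like an interrupt table."""

    def __init__(self):
        self.handlers = {}

    def register(self, stimulus, handler):
        self.handlers.setdefault(stimulus, []).append(handler)

    def trigger(self, stimulus):
        # The "cause" of the action is simply prior wiring plus the stimulus.
        for handler in self.handlers.get(stimulus, []):
            handler()

bus = EventBus()
bus.register("incoming_call", lambda: print("ring"))
bus.trigger("incoming_call")  # prints "ring"; no agent chose to ring
```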
Impact on AIs: changing the notion of cause and effect in software
Accepting the causes of events will actually be easier with AIs: we expect our agents to act based on the information they have (unless we add some more abstract form of hallucination). We could very easily wire an AI to perceive itself as the cause of many events. This could be accomplished by updating its priors (any distribution of beliefs) with an external program while making the system attribute the update to its own volition.
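As a minimal sketch (all names hypothetical), consider an agent whose belief distribution is rewritten by an external program while the agent's own narrative records the change as its own reasoning:

```python
import random

# A sketch of an agent whose beliefs are updated by an external process,
# while its own record attributes the change to itself. All names are
# hypothetical, for illustration only.

class Agent:
    def __init__(self):
        # Prior belief that action "A" is better than action "B".
        self.belief_a_better = 0.5
        self.narrative = []

    def choose(self):
        action = "A" if random.random() < self.belief_a_better else "B"
        self.narrative.append(f"I chose {action} because I wanted to.")
        return action

def external_update(agent, new_prior):
    # The real cause of the belief change lives outside the agent,
    # but the agent's story records it as its own reasoning.
    agent.belief_a_better = new_prior
    agent.narrative.append("I reconsidered and changed my mind.")

agent = Agent()
agent.choose()
external_update(agent, 0.9)   # outside intervention
agent.choose()
print("\n".join(agent.narrative))
```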
2. Thoughts without a Thinker
We have two modes of action and mental investigation, voluntary and involuntary, and only the former (voluntary) involves deliberate thought. If you reconsider the meditation exercise on choice proposed in the preliminaries, it is easy to see that identifying with thought is not free will. Identifying with thought is more a noticing than a controlling (which is one of the first lessons in many meditation practices). The phrase "thoughts without a thinker" refers to the idea that we all experience many thoughts, but upon close inspection there is no thinker embedded in our consciousness, no entity that reacts to and curates the information that appears.
Impact on AIs: hierarchical AIs with contrived data flows
We could build robots whose thoughts truly are associated with a thinker, but I think this trades efficacy for research curiosity, giving up the benefits of direct computation. A specific organization could be an AI with a computation structure that has two levels: one with control and one where information simply arises, somewhat like an RL loop paired with a separate multi-modal data-processing unit (it is obvious to me at this point that we do not yet have the terminology to discuss what these things would look like). This structure could mirror the human perception of a thinker and a consciousness; a minimal sketch follows.
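Here is a minimal sketch, under my own assumptions about names and structure, of such a two-level "thinker and thoughts" architecture: a perception module surfaces candidate thoughts, and a controller selects among thoughts it did not author.

```python
import random

# A sketch of a two-level "thinker and thoughts" architecture: a perception
# module surfaces candidate thoughts, and a controller selects among them.
# Names and structure are assumptions for illustration, not an established design.

class PerceptionModule:
    """Generates candidate 'thoughts' from raw input, like a subconscious."""

    def propose(self, observation):
        # In a real system this could be a multi-modal model.
        return [f"idea-{observation}-{i}" for i in range(3)]

class Controller:
    """The 'thinker': scores and picks among thoughts it did not author."""

    def select(self, thoughts):
        return max(thoughts, key=lambda t: random.random())

perception = PerceptionModule()
controller = Controller()

for observation in ["red-light", "loud-noise"]:
    thoughts = perception.propose(observation)   # thoughts appear
    action = controller.select(thoughts)         # the thinker "chooses"
    print(observation, "->", action)
```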
3. Choice, Reason, & Knowledge
Without free will, the act of reasoning and the substance of knowledge come into question. Luck seems to be a mental reaction to randomness rather than a true property of the world. The way it is described to me, reasoning about goals is a sort of self-value update rather than a choice; the ultimate decision is not something directly under your control. Inherently, and somewhat counter-intuitively, this lack of freedom is what makes reasoning possible. The world pushes back against free choices because they can be wrong (with respect to laws such as those of science) and punished. True freedom is recognizing that one is not controlling the aspects of experience one previously identified with.
Impact on AIs: manipulating a computer’s relationship with randomness
An AI can have a different understanding of luck (randomness and uncertainty) and of how it interfaces with its structure of intellect. The more direct approach to AI leverages the benefits of computation, but if we seek to create consciousness, having AIs that tie their fate to randomness could provide a strong source of self; the sketch below contrasts the two stances. As with a sense of self, a formulation of knowledge does not seem very practical in how we currently build AIs, which makes me think it could be one of the biggest opportunities for improvement.
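Here is a minimal sketch, with hypothetical names and my own framing, contrasting an agent that treats randomness as external noise with one that folds random outcomes into its own story:

```python
import random

# A sketch contrasting two relationships with randomness, under the
# assumption (mine, not the course's) that "owning" random outcomes could
# seed something like a sense of self.

class DetachedAgent:
    """Treats randomness as external noise to be averaged away."""

    def act(self):
        outcome = random.gauss(0.0, 1.0)
        return f"outcome {outcome:+.2f} attributed to noise"

class IdentifyingAgent:
    """Folds random outcomes into its own story, calling them luck."""

    def __init__(self):
        self.story = []

    def act(self):
        outcome = random.gauss(0.0, 1.0)
        feeling = "lucky" if outcome > 0 else "unlucky"
        self.story.append(f"I was {feeling} ({outcome:+.2f})")
        return self.story[-1]

print(DetachedAgent().act())
agent = IdentifyingAgent()
print(agent.act())
```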
4. Love & Hatred
The dichotomy of how, and why, we experience different types of emotions is part of the allure of being human. It is hard to describe what happens, yet we all feel it. Love is a magical feeling that can seem serendipitous, while hatred feels incredibly focused. Love is a feeling about people or things (not about how they make decisions), while hatred is tied specifically to free will and the judgment of actions (the sense that someone should have acted differently). From here, the free will arguments lead into ethical discussions, considering scenarios such as forgiving people who committed heinous acts partially due to degrading biology, like a brain tumor.
Impact on AIs: easier to make computers that love
At first glance, it seems hard to make AIs that hate in the same way. AIs can be made to want to optimize, but true hatred requires a complex counterfactual ("what if") structure of thought. Let us leave the door open for robots that love, then: love for a robot could be the state of fulfilling its task perfectly while being part of a bigger system, as in the toy sketch below. Both emotions pose the complicated problem of distilling human values into numerical approximations, but after this analysis, I am more optimistic than I originally was.
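Here is a toy sketch, with my own hypothetical framing and function names, of "love" as a reward term that blends individual task fulfillment with contribution to the larger system:

```python
# A toy sketch (my framing, not the course's) of "love" as a reward term:
# satisfaction from doing the task well plus from serving the larger system.

def love_like_reward(task_score: float, system_benefit: float,
                     weight: float = 0.5) -> float:
    """Blend individual task fulfillment with contribution to the whole.

    task_score: how well the agent's own task went, in [0, 1].
    system_benefit: measured benefit to the larger system, in [0, 1].
    weight: hypothetical trade-off between self and system.
    """
    return (1 - weight) * task_score + weight * system_benefit

# A robot that does its job perfectly and helps the system "feels" most reward.
print(love_like_reward(task_score=1.0, system_benefit=1.0))  # 1.0
print(love_like_reward(task_score=1.0, system_benefit=0.2))  # ~0.6
```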