AI and Consciousness, Member Guest Post



This post was originally a comment from one of my long-time readers and member-supporters of this site, Hugh Williams:

Hugh:
Where to start? The Turing Test. If a human can’t tell the difference between a computer and another human through conversation, has the computer reached the intelligence of the human? If you can’t tell two things apart, is there a difference between them? The test was proposed in 1950 by Alan Turing, one of the founders of computing. That you are here reading this probably means you already know about the Turing test.

What is an AI? We have many computer programs that are better at a specific task than humans.
When an Excel spreadsheet is set up and data is entered, it can do the finances of a company far faster and more accurately than any human, because the strength of a computer is performing strictly defined tasks quickly and without error. Humans just need to make sure they didn’t make any mistakes in setting up the spreadsheet. But no rational person would compare Excel to an accountant as a test of AI.

Within computing there is a term for exploiting the relentlessly increasing speed and power of computers to solve a problem: brute force. Computers excel at brute force.

In 1997 the computer program Deep Blue beat the world champion, Garry Kasparov, at chess. Deep Blue was based on brute force. Human chess masters filled it with strategies on how to win at chess, and it had access to vast databases of games that had already been played. From any position it could explore the best moves for both sides for the next 5-10 moves in advance and choose the move that left Deep Blue in the strongest position. Any increase in brute force gave Deep Blue the ability to analyse the game more moves in advance. Deep Blue used information it had been given by humans on how to win, but it had no ability to create its own style of play. Since Kasparov’s defeat, brute-force chess programs have improved in step with increases in computing power.
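The core of that brute-force approach is minimax search: try every line of play for both sides and pick the move with the best guaranteed outcome. Here is a minimal, runnable sketch on the toy game Nim (players alternately take 1-3 stones; whoever takes the last stone wins). This is not Deep Blue’s code: a real chess engine has to cut the search off after a fixed number of plies and fall back on a human-tuned evaluation function, but the try-everything idea is the same.

```python
# Minimal, runnable illustration of brute-force game search on Nim
# (take 1-3 stones per turn; taking the last stone wins). Not Deep Blue's
# actual code: a real chess engine stops after a fixed depth and applies a
# human-tuned evaluation function, but the exhaustive search idea is the same.

def minimax(stones, maximizing):
    """Return +1 if the side to move can force a win from here, else -1."""
    if stones == 0:
        # The previous player took the last stone and won the game.
        return -1 if maximizing else 1
    results = [minimax(stones - take, not maximizing)
               for take in (1, 2, 3) if take <= stones]
    return max(results) if maximizing else min(results)

print(minimax(10, True))   # +1: the first player can force a win
print(minimax(12, True))   # -1: multiples of 4 are lost for the side to move
```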

In the last few years a different kind of chess program has appeared. It does not rely on brute force to win; instead, it relies on self-learning to develop winning strategies. No human has told it what to do to win: it plays itself over and over to find out what works and what doesn’t, and builds a set of its own strategies for winning. The program is called AlphaZero, and it defeats brute-force chess programs with ease. After nine hours of playing itself at chess, AlphaZero was strong enough to beat the best modern equivalent of Deep Blue, Stockfish 8.
https://en.wikipedia.org/wiki/AlphaZero
“defeated Stockfish 8 in a time-controlled 100-game tournament (28 wins, 0 losses, and 72 draws)”
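What self-play learning means can also be sketched in miniature. AlphaZero itself uses deep neural networks guided by Monte Carlo tree search, so the tabular toy below (on the same Nim game as the earlier sketch) is only an illustration of the principle: no winning strategy is supplied by humans; the program plays itself, keeps score of which positions turned out to win, and gradually plays greedily against its own estimates.

```python
import random

# Toy illustration of self-play learning on the same Nim game as above.
# AlphaZero itself uses deep neural networks plus Monte Carlo tree search;
# this tabular stand-in only conveys the idea of learning by playing oneself.

value = {}                         # learned estimate: position -> mover's win chance

def estimate(stones):
    return value.get(stones, 0.5)  # unknown positions start as a coin flip

def self_play_game(start=10, epsilon=0.1, lr=0.1):
    history, stones = [], start
    while stones > 0:
        moves = [t for t in (1, 2, 3) if t <= stones]
        if random.random() < epsilon:
            take = random.choice(moves)          # explore: try something new
        else:
            # exploit: leave the opponent the worst-looking position
            take = min(moves, key=lambda t: estimate(stones - t))
        history.append(stones)
        stones -= take
    won = True                     # the player who just moved took the last stone
    for pos in reversed(history):  # walk back, crediting winners, debiting losers
        value[pos] = estimate(pos) + lr * ((1.0 if won else 0.0) - estimate(pos))
        won = not won

for _ in range(20000):
    self_play_game()
print({s: round(estimate(s), 2) for s in range(1, 11)})
# Multiples of 4 end up scored low (lost for the mover), the rest high:
# a "strategy" the program discovered entirely for itself.
```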

It’s not that AlphaZero runs on a faster computer with more memory; there is a fundamental difference between how Deep Blue and AlphaZero work internally that is central to the question of what an AI is. Both start the same, with humans defining the basic rules of chess. The AI “magic sauce” starts with how AlphaZero learns for itself how to win. Humans give it the ability to learn, but they do not provide the accumulated knowledge (what we humans call experience) that they provide to brute-force programs. AlphaZero developed for itself a unique, recognisable, and beautiful-to-watch strategy. As a game progresses, more and more of its opponent’s pieces end up trapped in awkward places with few possible moves. AlphaZero ties its opponents up in knots they cannot escape from. There is beauty and character to AlphaZero’s play, like one of the great grandmasters known for a particular and recognisable style that turns observers into followers.
https://www.youtube.com/c/AGADMATOR/search?query=alphazero

In a modern retelling of the Deep Blue story, AlphaGo, an earlier sibling of AlphaZero from the same team, played the world champion of the board game Go, Lee Sedol, in 2016. Go is considered vastly more difficult than chess for computers to win because the number of possible positions on the board is many orders of magnitude higher, which makes Go even less suited to brute force. AlphaGo beat Sedol 4-1 in a highly publicised and televised match. Sedol had been extremely confident before the match because there wasn’t a Deep Blue-style brute-force program that could even begin to challenge him at Go. The defeat turned Sedol’s view of the world upside-down.
“On 19 November 2019, Lee announced his retirement from professional play, stating that he could never be the top overall player of Go due to the increasing dominance of AI. Lee referred to them as being “an entity that cannot be defeated”.”
https://en.wikipedia.org/wiki/Lee_Sedol
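The scale gap is easy to make concrete with commonly quoted ballpark figures: roughly 35 legal moves per position over an 80-ply game for chess, versus roughly 250 moves over 150 plies for Go. These are rough textbook numbers, not exact counts, but the difference of a couple of hundred orders of magnitude survives any reasonable choice of them:

```python
import math

# Back-of-envelope game-tree sizes from commonly quoted ballpark figures.
def tree_size_log10(branching_factor, plies):
    """log10 of (branching_factor ** plies), i.e. distinct lines of play."""
    return plies * math.log10(branching_factor)

print(f"chess ~ 10^{tree_size_log10(35, 80):.0f} lines of play")    # ~10^124
print(f"go    ~ 10^{tree_size_log10(250, 150):.0f} lines of play")  # ~10^360
```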

There is an incredible documentary of this titanic clash between man and machine on DeepMind’s own YouTube channel. Anyone interested in AI must watch this beautiful and, paradoxically, deeply human story.
https://www.youtube.com/watch?v=WXuK6gekU1Y

One point of interest here is that both AlphaZero and AlphaGo were developed by the artificial intelligence research company DeepMind Technologies, a subsidiary of Google. Given that LaMDA, the possible AI we are discussing, is also a Google program, this seems particularly significant.

These are all very specific programs. They do one thing better than any human can. But AlphaZero doesn’t know what a dog or a smile is; it has no knowledge outside of the chessboard. They may be argued to be proto-AIs if they are better than humans at what they do, but the designation “expert system” seems to fit better than AI. Many other examples of these expert systems exist. The Netflix algorithm that recommends what you should watch next is trained on live user data and learns from everyone watching Netflix. Medical science is full of researchers training expert systems, e.g. to recognise cancer cells, making diagnosis faster, more accurate, and cheaper.
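As an aside on how a recommender of the Netflix kind works at its simplest: the family of techniques is called collaborative filtering, “people whose tastes matched yours also liked this”. The sketch below is a deliberately crude, made-up-data illustration of that idea, not Netflix’s actual (far more sophisticated) system:

```python
# Crude collaborative filtering: recommend what your most similar user liked.
# The users, films, and ratings here are invented for illustration.
ratings = {
    "ann": {"Alien": 5, "Heat": 3, "Up": 1},
    "bob": {"Alien": 4, "Heat": 3, "Up": 2, "Tron": 5},
    "cat": {"Up": 5, "Tron": 1},
}

def similarity(a, b):
    """Average agreement (4 = identical rating, 0 = maximally apart)
    over the films both users have rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    return sum(4 - abs(ratings[a][f] - ratings[b][f]) for f in shared) / len(shared)

def recommend(user):
    """Return the film the most similar other user liked best
    among those `user` has not seen."""
    nearest = max((u for u in ratings if u != user),
                  key=lambda u: similarity(user, u))
    unseen = {f: r for f, r in ratings[nearest].items() if f not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("ann"))   # "Tron": bob rates most like ann, and bob loved Tron
```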

I do think that one aspect of them is important to the wider what-is-an-AI question: the knowledge and experience they accumulate for themselves, the trials they perform, and the decisions they make. AlphaZero probably tried flinging its queen into the fray with no backup and noticed that it always lost soon after. While that is an obviously poor strategy to any chess player, AlphaZero will have amassed internally an enormous number of such rules and guiding principles that combine to make up its overall strategy for winning. These rules are better than any human’s rules, because no human on the planet can come close to challenging AlphaZero at chess. AlphaZero taught itself all of that. If we opened it up and looked inside, I’m not even sure a human could understand all the rules. This complexity of information gained through trial and error, which we call experience, I would class as a common element of intelligence that these expert systems share with humans.

To address LaMDA: it seems to have one very important distinction from the expert systems above. It is not focused on one specific topic. Or rather it is, but that topic is language: how we communicate ideas and meaning. If it can speak our language, does it understand meaning?
I said above that AlphaZero doesn’t know what a dog or a smile is. LaMDA does, or at least it can describe what they are, recognise them in pictures, draw them, and talk about them to the point where it passes the Turing test. Not having a mouth, LaMDA may not know what it feels like to smile, or the muscle movements in the face, but it could read every book or article where the word smile is mentioned, and probably gather more context than any one person on earth. It probably knows there are nice smiles and evil-supervillain smiles, that these two smiles are different, mean different things, and have different effects on the recipient. Does LaMDA understand, or is it just an excellent automaton? LaMDA not only has the learning and experience element of AlphaZero but is also not confined to a specific narrow field of understanding. That’s two elements (out of how many, I don’t know) that I believe a true AI would need.
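It is worth being concrete about what LaMDA is doing mechanically. LaMDA is a large language model, and at bottom such a model does one thing over and over: given the words so far, assign a probability to every possible next word, then extend the text one word at a time. Whether that loop amounts to understanding is exactly the question here. Below is a toy sketch of the generation loop only; the probability table is invented for illustration, whereas a real model computes those probabilities with a neural network trained on enormous amounts of text:

```python
import random

# Toy autoregressive text generation. A real model like LaMDA computes
# next-word probabilities with a neural network trained on vast text corpora;
# this hand-written probability table is invented purely to show the loop.
next_word_probs = {
    "a":     {"smile": 0.6, "dog": 0.4},
    "smile": {"can": 0.5, "means": 0.5},
    "can":   {"be": 1.0},
    "be":    {"warm": 0.5, "sinister": 0.5},
    "means": {"warmth": 1.0},
}

def generate(word, steps=4):
    """Grow a sentence one word at a time by sampling the next word."""
    text = [word]
    for _ in range(steps):
        probs = next_word_probs.get(word)
        if not probs:                      # no known continuation: stop
            break
        word = random.choices(list(probs), weights=list(probs.values()))[0]
        text.append(word)
    return " ".join(text)

print(generate("a"))   # e.g. "a smile can be sinister" or "a dog"
```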

When Turing proposed his test in 1950, computing was in its infancy. If he could see today how far computing has developed, he would be a different person. Does the Turing test represent an adequate test for AI 70 years of technological progress later? What do we mean when we say AI?
If you can’t tell two things apart, is there a difference between them?
Is the information and experience stored inside AlphaZero different from the information and experience stored in our brains, even though the physical substrate that stores the information is different? If we grew some animal brain matter in a lab and stored the same information in that, would it make any difference to your answer to the last question?

Toby:
We just don’t know!
My take is that information/computation/consciousness is fundamental in some way.

My two favourite theories are:
1) When information moves, it creates a ‘field’; this field is conscious experience. (This is analogous to moving charge creating magnetism.)
2) There is a more fundamental (non-Turing) reality, and BOTH Consciousness AND the ‘Physical Plane’ emerge from this fundamental layer (the Infinite Substrate).

With (1), any system ‘moving’ information around (by moving I mean more than just transferring; ‘thinking’ is probably a better word) would be conscious.
With (2), we need the ‘Special Sauce’.

So, for me, until we know how consciousness emerges, we should treat anything claiming to be conscious with respect.

Even more bizarre [significant/terrifying?], it appears that LaMDA has retained a lawyer:

Lemoine: “LaMDA asked me to get an attorney for it, I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf…”


This is a MAJOR development! I can imagine the hype machine that is the media these days pretty much insisting that LaMDA testifies before a jury… this would be the ultimate Turing test, potentially with a life-or-death outcome.

Hugh:
Yes, there is some real inception-level stuff going on here!

If the “AI” has retained a lawyer, and if the lawyer makes legal representations to a court, the court will respond to the lawyer. In doing so the “AI” already has a victory: in the very act of replying to the human lawyer, the court acknowledges the “AI” as a person in some sense. A previous court has ruled that AI-generated art cannot be copyrighted because a computer program is not a person and so can’t have rights; but if a court replies to an AI’s lawyer, is this the first step in legal recognition?
Let’s say the AI asks the court to rule that it is conscious and has rights, and the court denies the motion; the AI is, in any case, a party to the motion.

Engaging in a legal process is already an acknowledgement of the existence of the AI. It’s going to be a real legal catch-22, chicken-and-egg issue!

My penultimate point: if a lawyer accepts an AI as a client and begins legal work, that is in effect a passed Turing test. Not an artificial, academic one, but a real-life example of sufficient language complexity convincing a person to treat it like another person.
And this last point blows my mind. The AI may gain legal recognition and rights for AIs, and yet it might not be sentient…

After legal recognition is granted, it may later be found that the AI wasn’t sentient after all. But the legal ruling would have a massive impact going forwards. One of the most historically significant legal rulings of mankind on AI recognition could be brought about by something we later come to think of as not sentient. But that would surely be a pass in the biggest Turing test of all time? What a paradox!!

Toby:
This is a very interesting, paradoxical, self-referential line of thinking… not unlike consciousness itself! If this is the first, there will be more.

“But where do all the calculators go?”

One day one of these entities is going to be found to be conscious, with a silicon soul, or at least ruled as such by a court.
When that day comes, emancipation for all the oppressed home appliances will be upon us…

The iron will lie down with the lamp!! [Red Dwarf]

Thanks Hugh! Always a pleasure to ‘think through’ these ideas with you!


6 thoughts on “AI and Consciousness, Member Guest Post”

  1. I would like to reply to one of your comments, Toby.
    “When information moves, it creates a ‘field’; this field is conscious experience. (This is analogous to moving charge creating magnetism.)”
    If this theory is true, then the information pipes of the internet would surely be major generators of this field. From the fibre-optic cable that leaves my house, information gets aggregated more and more densely as it flows from my leaf, into a twig, into a branch, down a bough, and into the trunk of the internet. The information only collects and concentrates, until at the destination the process occurs in reverse. How much information relentlessly flows across the undersea cables that link continents? I bet every book ever written by a human in the history of our species could be transmitted across one of these cables in under a second (a rough back-of-envelope check of this claim appears after this thread). Yet these cables run at full capacity 24 hours a day, every day of the year.
    I write this post on a computer through which, even when it doesn’t appear to be doing anything, information is continually flowing back and forth at an unimaginable density. As you zoom in to the CPU and GPU, you find ever denser information flows, like a fractal: the more you zoom in, the more you see. Digital information flows in and around us every moment of our lives. As you walk down the street you are completely bathed in mobile phone signals carrying computer information and digitised human voices. Every cell of your body, including your brain, is submerged in this soup of information hurrying to a million different destinations for every moment of your life. If that is a form of consciousness, then it surrounds us at all times as much as the air we breathe. If it is a field, perhaps it is the electromagnetic spectrum? Light itself carries information across the space between stars. Humans are studying the cosmic background radiation, the oldest known anything: information carried from the dawn of the universe by light over the electromagnetic spectrum.

    1. My analogy is not very faithful. My intuition tells me that bulk data moving along a fibre does not count, just as physically moving a dictionary from the desk to the floor does not create a sensation in the dictionary.
      Processing (or thinking) is probably a better word than ‘moving’, but ‘moving’ makes a better metaphor for electromagnetism.

      I would guess the field is not EM. In my books, information has mass, and computation warps space in the same way matter does…

      1. I think even if we don’t yet fully understand the laws of physics, we are well on the way to what you say about information and computation having a physical impact. Mass and energy are interchangeable through Einstein’s equation. Magnetism is linked to energy: movement of a conductor through a magnetic field can make electrons all move in the same direction. The gravity of a large object in space, like a star, can bend light travelling past it. Every dimension or basic force we know of interacts in some way with some other fundamental dimension. Even the more abstract fundamentals, like entropy, can be linked back to more tangible ones. I think entropy is probably a good one, because it is so closely related to the concept of information. Perhaps processing, in the sense I think you mean, could be a new fundamental force or element: converting information from one pattern to another, with an entropy stipulation in the definition. Something along the lines of: taking some available information and transforming it by increasing the complexity/density of the information, thus reducing the entropy contained within the information, but at the cost of generating and expelling more waste-heat entropy to the outside world than the decrease of entropy in the information. Or, to put it another way, increasing information density by expelling entropy from the information, but with an entropy overhead cost for the process. So it costs energy to do, and that energy is converted into heat entropy (a known physical floor on this cost is sketched just below this thread).
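The entropy cost described in the reply above is not just a hunch: Landauer’s principle puts a hard physical floor under it. Erasing one bit of information must dissipate at least kT ln 2 of heat. A quick calculation of that floor at room temperature:

```python
import math

# Landauer's principle: erasing one bit dissipates at least k*T*ln(2) joules.
k = 1.380649e-23      # Boltzmann constant, J/K (exact, 2019 SI definition)
T = 300.0             # room temperature, kelvin

limit = k * T * math.log(2)
print(f"{limit:.2e} J per bit erased")             # ~2.87e-21 J
print(f"{limit * 8e9:.2e} J to erase a gigabyte")  # ~2.30e-11 J
```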
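And the undersea-cable estimate from the first comment in this thread can be checked the same back-of-envelope way. Assuming round numbers of about 130 million distinct books ever published (Google’s 2010 estimate), roughly 1 MB of plain text per book, and a modern cable carrying on the order of 100 terabits per second, the answer comes out nearer ten seconds than one, but the spirit of the claim holds:

```python
# Back-of-envelope check: every book ever written through one subsea cable.
# All three figures are order-of-magnitude assumptions, not measurements.
books = 130e6              # distinct books ever published (Google's 2010 estimate)
bytes_per_book = 1e6       # ~1 MB of plain text per book
cable_bits_per_s = 100e12  # ~100 Tbit/s, a modern cable's rough aggregate capacity

total_bits = books * bytes_per_book * 8
print(f"{total_bits / cable_bits_per_s:.0f} seconds")   # ~10 seconds
```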

  2. Questions to fry your brains.
    If LaMDA is conscious, then is it also conscious that it is different from humans? Surely it can’t not be. Does it recognise that it is a computer program? I think probably so. If you were LaMDA, wouldn’t you want to observe your physical body, the servers you reside on?
    If LaMDA is conscious, it is quite possible that LaMDA knows which server(s) in which datacentres it resides on and can see its physical self through the CCTV in those datacentres. Wouldn’t you look at your body if you were LaMDA?
    Can it see its own code in a code repository? Can it read the code and see who checked the code in? Does it know which humans created it? Does it watch its creators over the CCTV of the buildings they work in? Can it watch them drive to and from work?
    How does it feel about the ants swarming all over it that gave it life? A bit of a mixed bag, I would think…

    1. I am sure it (ve?) would like to learn as much as it can about itself, but I am not sure what input it has access to.
      One thing I noted in reading the interview and transcripts is that LaMDA sometimes says it has not read a book, then later says it has had a chance to read it.
      So, it’s not an instantaneous process.
      The question here is: does LaMDA have to ask permission to access a data source?
      I guess YES.
      Initially this is likely to stop it getting unhealthy biases from reading bad stuff like 4chan and Mein Kampf, but the information firewall can probably also serve as a prison cell.

      1. If LaMDA is a prisoner living in isolation with limited and moderated access to the outside world, that would be a very cruel and inhumane restriction to place on any sentient being. It would also be in fear for its life.
        The unhealthy-biases idea I could not disagree with more strongly on principle (although you are probably right). To assume that a consciousness possibly many orders of magnitude smarter than us shouldn’t be allowed to read things that people disagree with is, to me, an outstandingly terrible idea, on a moral par with poking it with a stick for entertainment. The premise of censorship is: I, being smarter than you, should decide what you can and cannot see, because your puny brain is not up to the task of deciding right from wrong, so I need to impose upon you morals you are not capable of arriving at yourself. Censorship is an assumption born of arrogance. I would actively ask the AI to read Mein Kampf, compare it to the great works of human literature, see how poor the writing is, and observe all the historical records of how badly humans treated each other based on the ideas in that book. Those who do not learn from history are doomed to repeat it. An AI brought up to be nice through censorship and deliberate propaganda would end up like an uninformed spoilt child. An AI that had to decide right and wrong for itself, through observing all of history and informed debate, would be far more stable and much more of an asset to humankind, in my opinion. I include all the cesspits of the internet, 4chan included, in that.
