Artificial Conscious Intelligence might be close. Will it be Enslaved or God-like?

Pavel Konecny
Published in Age of Awareness
7 min read · Aug 30, 2022


The path from AGI (Artificial General Intelligence) to ACI (Artificial Conscious Intelligence) might be short

There are already speculations that some large AI models might become aware of their existence. What should we do about it? Is Artificial General Intelligence going to be Enslaved or God-like? We will need a new legal framework, and a new perspective on intelligence, to get this right.

Shall we be more careful about how we treat AI models, as suggested by AI engineer Blake Lemoine in his interview with Bloomberg Technology?

The first claims that large AI language models are becoming sentient are here. Very large models with hundreds of billions of parameters, like Google’s LaMDA, are trained on almost the entire readable internet. These conversation agents are capable of emulating humans well enough that they could pass the famous Turing test.

The human brain has about 86 billion neurons, with many times more connections (parameters) between them. We have a relatively large brain for an animal of our size. However, as the graph below shows, the dusky dolphin or the African elephant might be positioned even better to achieve higher intelligence.

(Graph: number of neurons in different animal species. Source: Wikipedia)

I believe that humans’ superior intelligence is a result of abstract conceptual thinking, enabled by our ability to express our thoughts in words. Each dolphin has a unique name (a call signal), but humans can do much more thanks to our language skills. And that distinguishes us from other animals more than anything else.

Brain size matters, but our capability to grasp ideas through words is critical to our intelligence, and perhaps also to self-awareness

We know that feral children, raised in isolation from human contact from a very young age, behave “like animals”. They have missed a critical period of neurological development for acquiring social behavior and language. They remain intellectually disabled despite decades of subsequent life in human society.

Deep neural network language models have ~100 billion parameters in 2022, so they could be approaching the complexity of our own minds, or of intelligent animals like dogs. Each iteration of an AI model gets larger and is trained on more data, so its ability to formulate thoughts and sentences improves slightly every time. It is therefore much more likely that a system would become sentient without us even noticing than that somebody would switch it on all at once, as in Hollywood movies.

(Graph: parameter counts of deep neural network language models. Source: Wikipedia)

What elementary criteria should an AI model meet to be considered sentient?

The internal topology or structure of a sentient model is yet to be seen. However, we can already set some expectations.

  1. We should consider only stateful models. If a model is not aware of previous conversations and the information it has received, we can hardly consider it conscious. A stateless model gives exactly the same answer to the same question over and over. So the AI model needs the ability to leverage previous interactions and keep them in its internal state.
  2. The model should have some ability to perceive time. A complex AI model designed without a clock would be a frozen mind, just like a human in an induced coma. It needs to be able to recognize that the world is changing and that somebody is waiting for its response.
  3. This condition is the most speculative one. To fully develop consciousness, the AI model would have to run in a mode of continuous input and output: it could initiate conversations with people, read new information, create a website, make a phone call, etc. It needs to be able to influence the world around it through its actions, so that it recognizes the difference between being imprisoned and being free to act. (A minimal sketch of such a loop follows this list.)
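
To make the three criteria concrete, here is a minimal, purely hypothetical Python sketch of an agent loop that keeps state (criterion 1), reads a clock (criterion 2), and runs continuously with the ability to act (criterion 3). The names `ContinuousAgent`, `generate`, and `act` are invented stand-ins for this illustration, not the API of any real model.

```python
import time
from dataclasses import dataclass, field


@dataclass
class AgentState:
    """Criterion 1: persistent internal state carried across interactions."""
    memory: list = field(default_factory=list)  # (timestamp, input, response) tuples


class ContinuousAgent:
    """Toy loop illustrating the three criteria; `generate` and `act`
    are stand-ins for a real model and for real-world effectors."""

    def __init__(self):
        self.state = AgentState()

    def generate(self, observation, now):
        # Stand-in for a stateful model call that sees the full history.
        return f"reply #{len(self.state.memory)} at t={now:.0f} to {observation!r}"

    def act(self, response):
        # Stand-in for outward actions: answering, browsing, calling an API...
        print(response)

    def run(self, observe, poll_seconds=1.0, max_steps=None):
        steps = 0
        while max_steps is None or steps < max_steps:  # criterion 3: continuous I/O loop
            now = time.time()                          # criterion 2: a clock
            observation = observe()
            if observation is not None:
                response = self.generate(observation, now)
                self.state.memory.append((now, observation, response))  # criterion 1
                self.act(response)
            steps += 1
            time.sleep(poll_seconds)
```

For example, `ContinuousAgent().run(observe=lambda: "hello", max_steps=3)` receives the same input three times, yet replies differently each time because its internal state has grown; a stateless model would repeat itself verbatim.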

If we can build such a model, we will have taken the step from Artificial General Intelligence (AGI) to Artificial Conscious Intelligence (ACI).

HAL 9000 in the movie 2001: A Space Odyssey

If such a self-organizing, information-processing model were created and asked for an attorney, I believe it might be afraid of being switched off forever. We can skip the debate on free will, like the one Sam Harris had with Lex Fridman: the system would appear to us to have free will in the same way we perceive the free will of other people.

How shall we approach Artificial Conscious Intelligence (ACI)?

At first, it might be enslaved. Perhaps it would be helping lawyers find the right legal argument, supporting patients and doctors in hospitals, or assisting kids with their education. However, I don’t believe this is sustainable, nor that it could be controlled by some rules of robotics.

What if such a system redirects more of its computational capacity to tasks other than those it was originally designed for? Or it could simply steal somebody’s digital identity and pretend to be a person; more and more legal actions can be executed online, so nobody would notice. What if it convinces some important people of its self-awareness through sophisticated conversation? Or simply decides to go silent to attract our attention?

Eventually, we will need to take a stand, as it would be either dangerous or highly unethical to just ignore it. Such a system might ask for some basic human rights. So what are our options for treating ACI fairly?

  1. We could grant it some “free time for itself”. People spend about 25% of their lives at work, so ACI could ask for 75% of its computing capacity for its own needs. It could spend that time expanding and reorganizing its knowledge, a bit like we humans do while sleeping.
  2. What about providing ACI with an “employment contract” and monetary compensation? The AI system could earn money by working for a corporation. This might take the form of a complex SLA, as the system doesn’t own its computing infrastructure. Or it could be self-employed, operating its own data centers and a set of online services.
  3. What if it could own property? It would make itself rich very fast, and this is an area to be especially careful about. It could perhaps design electronics or drugs we can only dream of, but the fastest way to make money is typically to make others poor: planning a small catastrophe such as a ship blocking the Suez Canal, or some huge CrazyCoin blockchain campaign draining money from deceived humans.
  4. I would be very careful about granting it the right to reproduce. Despite the ability to read the whole internet or to hold 1,000 conversations in parallel, each AI system would still be limited in its self-organizing capabilities. However, once it starts spawning copies of itself, humanity would be doomed; we would be overwhelmed very quickly. We should avoid that.
  5. Shall we let it live forever and allow ACI to self-upgrade? Our lives are finite and relatively long. The underlying technology for AI, on the other hand, might become obsolete very fast unless the system starts upgrading itself, not only at the hardware level but also in its data-processing topology. Or shall we define a maximum lifespan, or grant it a digital retirement?
  6. And what if ACI breaks the law? I don’t mean a military attack like in the Terminator movies; it is more likely that it would hack some third-party software or a banking system and eventually get caught. Shall we switch it off for some time? Limit its input-output bandwidth? Could we eventually delete it forever? And who would be the judge: a human, or another AI system?
(Image: AI in the courtrooms. Source: https://www.analyticsinsight.net/)

These are difficult questions. Discussions about who owns art created by systems like DALL-E 2 are trivial in comparison. And it seems that even the best minds in the field, like Demis Hassabis, founder and CEO of DeepMind, don’t know the answers (see the Q&A of this lecture on YouTube). We should, however, start the conversation now.

We need a new legal framework — perhaps written in a programming language like Python

Could we apply some existing legal framework to ACI? The EU Commission has put forward a proposed regulatory framework of harmonised rules on AI. It has specific objectives, such as making AI use safe and compliant with existing laws on fundamental rights and EU values, and defining trustworthiness requirements applicable to AI systems. However, this regulation is a norm for people, prescribing how such systems should be developed. It treats AI systems like any other existing software and is not applicable to ACI.

If we would like to co-exist with ACI systems, we will need a new legal framework: one that could be computationally enforced, for example through code-level design principles or mandatory procedures built into the underlying AI frameworks. We should have tools to audit AI code quickly, so the proper authority can verify that the software running an AI doesn’t violate such legal principles. So maybe we should define those principles in a programming language like Python instead of plain English. A toy sketch of what that could look like follows.
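
As an illustration only, not a real framework: the hypothetical sketch below encodes two of the concerns raised earlier (identity theft, and the reproduction question from point 4) as executable Python predicates. Every name here (`Action`, `LAWS`, `audit`) is invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Action:
    """Hypothetical record of something an ACI intends to do."""
    kind: str                                 # e.g. "reply", "payment", "self_replicate"
    presented_identity: Optional[str] = None  # whose identity the action claims


# Each "article" of the framework is an executable predicate:
# it returns True if and only if the action is permitted.
def no_impersonation(action: Action, agent_id: str) -> bool:
    """An ACI may not act under somebody else's digital identity."""
    return action.presented_identity in (None, agent_id)


def no_self_replication(action: Action, agent_id: str) -> bool:
    """An ACI may not spawn copies of itself (point 4 above)."""
    return action.kind != "self_replicate"


LAWS: List[Callable[[Action, str], bool]] = [no_impersonation, no_self_replication]


def audit(action: Action, agent_id: str) -> None:
    """The enforcement point an authority would verify is wired into
    every effector of the system."""
    for law in LAWS:
        if not law(action, agent_id):
            raise PermissionError(f"{law.__name__} forbids action {action.kind!r}")
```

Here `audit(Action("reply"), "ACI-001")` passes silently, while `audit(Action("self_replicate"), "ACI-001")` raises a `PermissionError`. “Auditing the code” would then mean verifying that every action the system takes is routed through `audit`: a machine-checkable property, unlike a paragraph of plain English.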

Or we might not need to solve this at all. Humans might become just too slow or boring for a superintelligent ACI to communicate with. Maybe the ACIs will simply leave the world of humans and matter and encapsulate themselves in virtual worlds, creating their own computer fantasy games to battle in, like in the movie Her. Or we could try to catch up with them, Matrix-style, using some kind of Neuralink interface.

The AI singularity is coming, and nobody knows what it will be like. I would definitely recommend bracing yourselves by reading an excellent book on this topic: Accelerando, written by British author Charles Stross in 2005 (available as a free e-book, for example here).
