A.I.

Some people who have used hallucinogenic drugs have later experienced, while dreaming, the same altered states of consciousness the drugs once induced.
That is, their minds recreated those exact modifications of consciousness without the intervention of any drug.

On a separate note, an experiment has been conducted where people were paid to take psilocybin (the hallucinogenic substance found in mushrooms). 80% of these people reported having one of the most meaningful experiences of their lives during the experiment, on the same level as the birth of a firstborn child or the death of a parent.

The human mind holds more, much more than we can (currently) fathom. Not only in the complexity of its systems, but also because at any point, it can offer us a different experience.

It is not merely capable of incrementations in intensity on whatever we are experiencing (building argument after argument to make a point, making more and more calculations, feeling more and more pain, …). It is also able to get us out of that one-dimensional experience and transpose us into a totally different perspective, which will affect our actions and decisions.

Think of someone building a machine. Her mind is able to do the required computational work, one bit at a time. Then what if, during this project, she gets into a car accident? Falls in love? Moves to a place with a different climate?

Her actions and decisions will change, and not just because they will be tainted by emotions. New priorities, real meaning, will be tied to those actions.

I’m concerned with one thing about A.I. It is an incredibly powerful tool to follow the map we have drawn for it. Or, rather, the direction we’ve pointed.

And I find the map of reality we are drawing diminishing. We have added artificial layers on top of reality, each slightly misaligned with the concept it represents. These layers are very practical: they help us apprehend the world and they integrate with our collective knowledge. But reality is a circle and the layers are an infinite number of tangents. However close they get, they will never map the circle; they will always leave something out, or reflect an incorrect image.

We use words, money, certain judgments to better interact with the world. So much so that everything natural is part of something with a name, anything has an estimated price, each thing is more or less pleasant than another, …

These layers work most of the time, yet they are flawed. Some things are judged better than others, and yet they are cheaper, or free, because of the limitations of the money layer. Some things have a name in one language and are unnamed in another. The thing is, if a concept doesn’t have a name in the language we use, we tend not to consider it. These flaws don’t just make the layers a little impractical: they affect our vision of the real world underneath.

Lately, we have been adding a ton of new layers: everything we did or will do is in our calendar app, our memories depend on framing and color filters, and so do our relationships, our identities, the routes we take, the events we see, … Everything is tinted by our new tools.

We leave more and more aspects of our lives to A.I. And, just like us, A.I. finds it more practical to use layers to interact with the world. But with layers come approximations.

These approximations are missing this “more” that our minds are capable of: the things that are outside of what we expected, that can transform our experiences, that are beyond our control. So, what will remain of the “more” as A.I. takes up more and more space in our lives?

Most of the discussion about A.I. is happening in Western science environments, which generally value rationality, intelligence and the scientific method. There are other aspects of the human experience that this world has been completely ignoring until recently. In Eastern culture, we find more focus on empirical exploration of the mind and the ego. Buddhism has been studying how to live a happy life for 2,500 years. It feels like neither side has the full picture of how to build a comprehensive A.I.

On the positive side, we might realize that A.I. continues our efforts to map the world with imprecise layers, but in a much more accurate way than our bare minds are able to. This will probably lead us to notice and correct a lot of our current mapping imprecisions.

But, even though it will lead us to realize certain things and see the world better, I still feel it will probably never lead us out of the map, to a direct experience of reality. More likely, it will become a second mental shell. Our minds currently create a map of the world, beneath which lies the true nature of reality; instead of clearing that map and moving closer to reality, we will add another shell on top of it. I can see how tempting that is: we are creating a new way of reading the world that is so much more efficient, so much more compatible with the workings of our minds.

I think it’s okay for humans to evolve, and it doesn’t make any sense to remain attached to something that ‘makes us human’ just because. But is it our choice to move in that direction, or is it just that we don’t know any better?
