Curious Cognitome
July 18, 2018

Of self-fulfilling prophecy and A.I.

Even a blind person nowadays can see that our world is becoming more and more polarized. A multi-polar world demands that every you and every me be a member of one of its numerous groups, or at least create your own if you can. Different groups, no surprise, proclaim different opinions; some of them are strong and go viral, while others are weak and get lost in the ocean of noise.

Usually, polarization is most obvious when something appears to undermine stability and the current status quo. Over the last few decades there have been lots of such shaky points, from designer babies to the future of A.I. If suddenly all people on Earth assumed that there are no supernatural powers, only people, and that it is we who are responsible for everything we create or destroy — we might make our planet a much better place to live. When talking about super-hard questions, it is crucial to see what kind of picture we are all painting, what directions we are choosing to take and why. We have to know our whys.

My interest in the topic of artificial intelligence began in my teenage years, when I first discovered an anthology of Isaac Asimov's books on my grandpa's bookshelf. I read everything I could grab from there, later everything in the local library, and later still everything on the web. I was amazed, and I could not clearly distinguish the words "natural" and "artificial" in regard to anything with a spark of consciousness. I still can't.

Nice memories from my past self, and, indirectly, the reason for my worries. Here's why.

There is a concept in psychology called the self-fulfilling prophecy. To keep it simple: it is a sort of prediction that makes itself come true, mainly because it creates a dependence between our beliefs and our behavior. If a kid believes in a monster under the bed, she will keep a light on during the night or cover herself with a blanket even if it is +40 degrees inside the room (hardly true unless the room is somewhere in the tropics, but it proves the point).

Anyway, the behavior we see here looks, let's say, a bit odd: unnecessary, dummy actions driven by a strong belief that is valuable only to the kid, yet real in its consequences. We are all such kids.

The sociologist Robert K. Merton, who coined the expression "self-fulfilling prophecy", defines it this way:

The self-fulfilling prophecy is, in the beginning, a false definition of the situation evoking a new behavior which makes the original false conception come true. This specious validity of the self-fulfilling prophecy perpetuates a reign of error. For the prophet will cite the actual course of events as proof that he was right from the very beginning.

We could consider the Thomas theorem the source of Merton's insight; it states that:

"If men define situations as real, they are real in their consequences".

That's a really interesting [scary] concept: it tells us that our behavior depends on our perception of a situation and the meaning we give to it, rather than on the situation itself.

Regarding one of the most popular questions about A.I. — how will it affect our lives, and what are the consequences of this process — we can see several opinion camps. Each camp is a group of people who share a similar perception of "what is going on there" and give the same meaning to it when answering the questions. The bigger the group, the more attention it gathers, the more influential it becomes, and the more people assume the described situation is real. Its chances of establishing a specific behavior grow as well.

In the end, the whole process produces the phenomenon of inter-subjectivity. That has happened to us throughout the whole of human history: we can't live without make-believe stories. As Yuval Noah Harari describes it, it is neither objective experience nor completely subjective; it is the ability to believe in the most widespread web of meaning. If everyone around me suddenly started to believe in a pink unicorn, my mind would ask me questions like: "Am I crazy, or has the whole world gone crazy? Perhaps it is me, and the pink unicorn really exists."

You can replace the pink unicorn with anything you want; it does not really matter. What matters is how many people around you agree with this thought/opinion/meme/etc., and whether they behave accordingly.

You may replace the pink unicorn with baaad bad robots, evil AI, helpful AI, monstrous designer babies (those little Frankensteins) or modified geniuses, "all forms of life are sacred" or "aliens will kill us all, thank you Ripley", a merciful god, a cruel god, etc. You can follow this pattern as far as your fantasy goes...

Just follow the logic: certain opinion — certain number of people — chosen narrative — certain behavior — certain consequences.

What particular scenario would you prefer to create in the near future? What meaning will you give to A.I. and to the whole arising technology? What kind of chain would you like to launch? How often will you use critical thinking when deciding about "good things" or "bad things"?

All these answers are individual, yet together they create the web of meaning we all live in.


P.S. When I read about an evil AI for the 100th time, I come close to the conclusion that human beings will never think in any way other than the anthropocentric one. No wonder: anything we create is a reflection of our kind, and we will never get rid of that until the day we become a different kind ourselves.

Reflections differ, but almost every time we talk about embodied images of an omnipotent us, with unlimited abilities and power, we tend to make them as nasty as possible [gods, AI, androids, robots, etc.]. Is human nature itself something rotten to the core?

Just love this pic. Thank you, Asch Paradigm.