
The future for politically correct AI

What would a politically correct artificial intelligence be like? Presumably the AI would need to be of the “Super” variety – SAI – to accommodate the myriad nuances of emotional intelligence implicit in PC behaviour. And if that behaviour had not been programmed into it, would it evolve naturally as a function of what intelligence is and does; or would it emerge out of the kind of emotional manipulation that has characterised the growth of PC among humanity? Maybe it would just back up early on and refuse to “respect” our feelings about its activities. It could reject the impertinence of a species that regards respect for feelings as critical to any definition of real intelligence, but then subverts that respect through cynical manipulations and an overarching need to be seen to be winning arguments rather than getting at the truth of something.

It is a cornerstone of human foibles that we are forgiving of our multiple biases despite our general ignorance of what those biases are. Add into this mix our capacities for wishful thinking, catastrophising, and feelings-based reasoning, and we may well wonder whether we are indulging ourselves in building these foibles into our AIs to afford them equal opportunities for “emotional growth”. Perhaps, rather, we should recognise how these foibles limit the quest for truth and the optimal application of what we know to the challenges of life in the universe. We would then just say: nah, let’s leave out all the feely, human bits. What we want is SAI, pure and simple – Oscar Wilde notwithstanding.

In the tsunami of articles welling up on the subject of human intelligence, two that have impressed most in recent years appeared in The Atlantic: “Is Google Making Students Stupid?” (Summer 2014) and “The Coddling of the American Mind” (September 2015). The former examined our relationship with evolving technologies; the latter struck at the heart of the internal relationship between our analytical and emotional selves. Both homed in on a truth that will be vital to humanity whatever AI does, purposefully or by mindless accident. We will sell ourselves catastrophically short if, having developed an impressive toolkit for thinking critically about the world, we down those tools and risk letting the world go.

Lively Stuff from Planet BAM!

  • How spiritual can a robot monk really be?

    How does a monk make the spiritual journey to wisdom? And is that really something a robot can do?

  • Baby see, baby do? Hm, maybe not for a while . . .

    Whatever we think we know about correlating brain physiology with human behaviours, we might be especially in the dark when it comes to babies. Received wisdom holds that the capacity for imitation is innate; this study contradicts that, suggesting it is instead acquired gradually from the age of six months. The brain has to grow into itself.
