
Algorithms cannot really know you if you don’t know yourself

Count the number of times you notice stories about how the various engines of AI – the algorithms, the machine learning software burrowing ever deeper into the foibles of human behaviour – are getting “to know you better than you know yourself.” What started as variations on “You liked pepperoni pizza, here’s some more” evolved quickly into “People into pizza also love our chocolate ice cream” and on to special deals for discerning consumers of jazz, discounted National Trust membership, and deals on car insurance.

Emboldened as the human animal always is by early success, it was bound to be a small leap for the algo authors to progress beyond bemusing purchasing correlations to more immodest claims. Hence the boast of the data scientist cited in the online journal Real Life, claiming that the consumer is now being captured “as a location in taste space”.
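For the curious, here is a minimal sketch of what being captured “as a location in taste space” might amount to in practice. Everything in it is invented for illustration; the point is simply that each consumer becomes a vector of preference scores, and “people like you” are whoever sits nearest to your point in that space.

    # A minimal, illustrative "taste space": every dimension and number here is
    # made up. Each consumer is a point whose coordinates are preference scores
    # for, say, pizza, jazz, heritage outings and graphic novels.
    from math import sqrt

    def cosine_similarity(a, b):
        """Closeness of two taste vectors; 1.0 means identical tastes."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    # Hypothetical consumers located in the space above.
    tastes = {
        "you":       [0.9, 0.7, 0.1, 0.0],
        "neighbour": [0.8, 0.6, 0.2, 0.0],
        "stranger":  [0.0, 0.1, 0.9, 0.8],
    }

    # "People into pizza also love our chocolate ice cream" reduces to finding
    # whoever sits nearest to you and offering you whatever they bought.
    you = tastes["you"]
    nearest = max((name for name in tastes if name != "you"),
                  key=lambda name: cosine_similarity(you, tastes[name]))
    print(nearest)  # -> neighbour

On this picture, a single imprudent purchase is just a nudge along one axis of the vector, which goes some way to explaining why the machine in the anecdote below finds it so hard to let go.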

Advocates for algorithms will smile with the rest of us at the anecdote about how the writer’s single imprudent purchase of a graphic novel inspired Amazon to morph from the whimsy of a one-night stand into the sweating, nightmarish stalker from hell; of course, they will claim that algorithms will only get better at inferring desires from behaviours, however seemingly complex.

The writer makes a very good case for doubting this, however, going into some detail on how the various dark promptings of his delight in Soviet cinema of the 1970s are unlikely to excite an algorithmic odyssey to the comedic treatments of pathological sadness in the original Bob Newhart Show.

And yet: the witty sadsack being as likely to emerge in Manhattan as in Moscow, it is not inconceivable that algorithms might evolve to the point of sifting through the frilly flotsam and whimsical whatevers of daily life in multiple dimensions of time and space, to home in on the essentially miserable git who is Everyman. But this is to assume some consistency of purpose to miserable gitness (indeed to any manifestation of human behaviour), reckoning that there is no git so miserable that he ceases to know his own mind. And here, Aeon Magazine weighs in to good effect.

There are so many layerings of self and sub-self that implicit bias runs amok even when we run our self-perceptions through the interpretive filters of constantly morphing wishful thinking. So know yourself? It may be that only The Shadow Knows.

Lively Stuff from Planet BAM!

  • Measurement of AI’s impact must keep pace with speed of implementation

    Far more interesting than the headline observation that artificial intelligence is “hard to see” is the Alan Turing quotation buried in the text: “If a machine is expected to be infallible, it cannot also be intelligent.” Microsoft researcher Kate Crawford considers the pace at which AI is being integrated into social institutions without adequate reflection upon the social and economic impact.

  • He hallucinates, you dream, I have a vision

    This articulation of the relationship between neurochemistry and what goes on in human dream states is fascinating but could go further with an acknowledgement that not all dreams are produced in the same way, with the same intensity, or with the same relationship to the waking world. It could go further still with acknowledging that what gives many dreams their potency is not so much the dream itself (whatever that might be) but the conclusions inferred upon reflection and the actions that result.

    And we can go further yet with an admission that humanity’s cunning awareness of the point about conclusions and the value of behavioural manipulation has found any number of charlatans happy to share dreams they never actually had, precisely because they can foresee the impact of artifice upon credulity...

  • Experiential empathy: Star Trek founder’s prize addresses our key deficit

    The Roddenberry Foundation’s newly launched XPrize seeks to boldly go beyond the frontiers of neuroscience to explore how advances in Artificial Intelligence, Virtual and Augmented Reality can address what President Obama calls a “deep empathy deficit”. Can we reach the United Nations’ Sustainable Development Goals unless we answer the question: “If humanity is so smart, why ain’t it more kind?”

  • Unlocking the Human Code at the Royal Institution in London

    This Intelligence Squared event on 5 October will see physician, author and researcher Siddhartha Mukherjee consider the implications and potential of the revolution in genomics. If we are indeed the first organism to have “learned to read its own instructions”, how will we change the game when we progress from reading the rules to writing them?
