There are lots of smart philosophers around, and they don’t agree on a definition of consciousness. Daniel Dennett thinks it is an illusion: “our brains are actively fooling us”. It is nothing like an illusion for David Chalmers, who sees it as so impenetrable to our understanding as to be THE Hard Problem – although not so hard as to be beyond joking about: he has also characterised it as “that annoying time between naps”. More recently, in The New York Times, Galen Strawson was similarly light-hearted, invoking Louis Armstrong’s response to a request for a definition of jazz: “If you gotta ask, you’ll never know.” More seriously, he offers the thought that we know what consciousness is because “the having is the knowing”. While that leaves open whether the same holds equally for cats and people, the musings of the Dennetts, Chalmerses and Strawsons of the world make it clear that anyone who thinks philosophy is dead is certainly insensible, if not downright nuts.
Perhaps what makes the problem hard is the attempt to define it from within it. To borrow from Strawson, can we understand consciousness better by examining the border between having it and not having it? What is the catalyst that tips it from not being into being? Research out of Yale and the University of Copenhagen may have nudged us closer to an answer. Using PET scanners to measure the metabolising of glucose in the brains of comatose patients, researchers were able to predict with 94% accuracy which ones would recover consciousness. It appears that the “level of cerebral energy necessary for a person to return to consciousness [is] 42%.” Amused readers of The Hitchhiker’s Guide to the Galaxy will grasp the significance of the number 42 as the “Answer to the Ultimate Question of Life, the Universe, and Everything.”
Lively Stuff from Planet BAM!
- VCLA at UCLA: moving robot cognition into the realms of inference
The latest evolution of deep learning in robotics distinguishes between reality that a robot perceives and understands at some basic level, and phenomena whose existence and properties it must infer from outcomes and from the behaviour of related phenomena. So: from what a robot learns about folding clothes, it infers folding principles that it can apply to items it has never seen before.
- Machine and Deep Learning: terms not interchangeable
This article in Forbes explains that machine learning is a supervised approach to binary classification – cat or non-cat, in a squillion labelled iterations. Deep learning involves a more nuanced examination of the principles of shapes and shadings, from which, in unsupervised settings, a programme can derive its own conclusions about the essence of cat, or of coat, or of cancer.
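The supervised/unsupervised contrast the article draws can be caricatured in a few lines of Python. This is a deliberately toy sketch: the “whisker length” feature, the data, and both routines are invented for illustration (real deep learning uses neural networks, not the two-means clustering shown here) – the point is only that one routine is handed labels while the other must find structure on its own.

```python
# Supervised ("machine learning" in the article's sense):
# labelled examples, (whisker_length, label) with 1 = cat, 0 = non-cat.
labeled = [(4.0, 1), (5.0, 1), (1.0, 0), (0.5, 0)]

def train_threshold(data):
    """Learn a decision boundary midway between the two class means."""
    cats = [x for x, y in data if y == 1]
    others = [x for x, y in data if y == 0]
    return (sum(cats) / len(cats) + sum(others) / len(others)) / 2

threshold = train_threshold(labeled)

def predict(x):
    """Binary verdict for a new measurement: cat or non-cat."""
    return 1 if x > threshold else 0

# Unsupervised (the flavour the article ascribes to deep learning):
# no labels at all – the programme must group the data itself.
unlabeled = [4.2, 4.8, 0.7, 1.1]

def two_means(xs, iters=10):
    """Split one-dimensional data into two clusters by iterated means."""
    a, b = min(xs), max(xs)  # initial cluster centres
    for _ in range(iters):
        ga = [x for x in xs if abs(x - a) <= abs(x - b)]
        gb = [x for x in xs if abs(x - a) > abs(x - b)]
        a = sum(ga) / len(ga)
        b = sum(gb) / len(gb)
    return a, b

print(predict(4.5), predict(0.8))  # classify two new measurements
print(two_means(unlabeled))        # cluster centres found without labels
```

The supervised routine is told what a cat is; the unsupervised one merely discovers that the data falls into two groups, and it is left to us to decide which group is “cat”.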
- But does alcohol make you smarter – or just seem so . . .
An amusing reflection from the “Turing Test” School of Intelligence: does a loss of inhibition prompt thoughts that lead to greater creativity, and thence to wisdom? Or is reality merely dulled as perceptions get rosier, as in “I drink to make other people sound more interesting”?