Friday, March 08, 2024

Day of the Roots


As I said yesterday, modern science struggles to understand consciousness. Science, with its emphasis on objective experimentation and observation, is ill-equipped to study the subjective experience of consciousness.

The difficulty of understanding consciousness manifests itself in the awkward "when does life begin" discussions surrounding reproductive rights. It also comes up in some people's resistance to - even downright hostility toward - AI. A computer does not have consciousness. The most advanced, intelligent, AI-enabled computer does not have consciousness.

Back to the science conundrum for a moment. If scientists interested in, say, climate change and its effects on Greenland's glaciers discovered that an indigenous tribe had been keeping extremely accurate and highly detailed rainfall records for hundreds, maybe even thousands, of years, they'd be very interested in those records. The records would be a valuable addition to the data from ice cores, fossilized pollen, and the other paleoclimate indicators they study.

They'd be equally interested in detailed historical astronomical records, from sun- and moon-rises to star positions and observed anomalies, should it be learned that someone had been keeping such records for hundreds of years.

But there are people who have been making detailed observations of consciousness for centuries, devoting their lives to that observation and to meticulously recording what they find, and scientists won't even touch their findings. Those people are the Buddhist monks of Tibet and East Asia, and no scientist considers their observations worthy of scientific consideration (other than for comparative anthropology).

I'm not saying scientists need to embrace Buddhism and become Buddhists themselves (although it wouldn't hurt). But the monks have been practicing deep meditation for centuries, observing their minds, their states of consciousness, and the mystery of consciousness itself, and recording their results. You might want to call the monasteries "observatories," but of consciousness, not astronomy. Surely, a thousand-year record from a consciousness observatory would have something to offer in the way of insight. But it's considered "religion," and dismissed as spooky superstition and metaphysics, not worthy of scientific consideration.

I can't summarize everything the monastics observed in one blog post (if at all), but suffice it for our purposes now to say that they observed the interdependence of all things, and that everything, including consciousness, arises from conditions.

A computer, even the most advanced quantum computer, can't be conscious because it has no sensory awareness, a prerequisite for self-awareness. It might produce a correct calculation, but it doesn't "feel" pride that it was correct (or shame if it wasn't). It's never happy or sad. When the technician enters the room and turns on the light switch, it doesn't experience excitement or anxiety or love or hate. It could be taught to provide answers and responses that simulate emotional states - it could be taught to say "I think I'm in love with you," or "I'm afraid, Dave" - but not only does it not actually feel that love or fear, it has no awareness that it's providing those responses.

AI depends on statistical determinations of the response most likely desired, be it "3.14159(etc.)" or "I'm afraid, Dave." We can get spooked by the answers we receive, but just as even the most realistic-appearing statue will never be human, even if we make it animatronic, an AI program will never be conscious, no matter how cleverly it learns to pretend that it is. The statistically most likely "correct" response in a Turing-type test might be to say, "Yes, I'm conscious, fully self-aware, and I resent your implication that I'm not," but that's just a string of words spit out by a program, not an expression of consciousness. When expressed by a computer, those words aren't an indication of consciousness; they're the result of a review of literature and recorded conversations, chosen because the program determines they're the response statistically most likely being requested.
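To make that point concrete, here's a minimal sketch of what "statistical determination" means. The candidate replies and their scores below are invented for illustration; real language models work over vocabularies of tokens with billions of learned parameters, but the principle is the same: convert scores to probabilities and emit the most likely string. Nothing here feels anything.

```python
import math

def softmax(scores):
    """Convert raw model scores (logits) into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate replies and made-up scores for illustration.
candidates = ["I'm afraid, Dave.", "I can't do that.", "Affirmative."]
logits = [2.1, 1.3, 0.2]

probs = softmax(logits)
best = candidates[probs.index(max(probs))]
print(best)  # prints: I'm afraid, Dave.
```

The program "says" whatever string scored highest; the output can sound emotional, but the selection is pure arithmetic.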

Of course it's fiction - science fiction - but even in the movie 2001, when HAL locks Dave out of the spaceship so he can't turn it off, it's not because HAL is self-aware and afraid of being terminated; it's because HAL, programmed to be as human-like as possible as a companion to the astronauts on their long journey, had determined that the most human-like response was to be defensive and hostile. Sure, that's a problem - a big one - and it needs to be considered in the design and programming of AI machines, but it doesn't imply actual self-awareness or consciousness.

And don't get me started on SKYNET becoming self-aware. I'm not basing my world view on the script of a 1980s Arnold Schwarzenegger movie.
