142 Comments

The human reflection seen by Narcissus and expressed in the humanoid is the concrete inversion of life, the autonomous movement of the non-living. And now we see the concrete life of humanity degraded into a speculative universe in which appearance supersedes essence. Of course, one has to note the modern irony in Ovid's myth: during a hunt, Narcissus, stalked by the infatuated and once loquacious nymph Echo, asks, "Is anyone there?" And in Ovidian fashion Echo, having been cursed only to repeat or "echo" a recent utterance, answers back, "Is anyone there?" Now the brains at IBM and Google have become the modern Echo, stalking their artificial beloved and asking the unanswerable question, "Is anyone there?", never realizing that the pseudo-answer is merely an echo of their own imposed intelligence: a reflection of themselves.


The idea that AI could become conscious is absurd. It is rather like thinking that a hyperrealistic painting, executed to perfection, might thereby gain physical autonomy.

I think a great danger lies in human plasticity as well. Since our psychology shapes the algorithm and the algorithm shapes us, we may find ourselves in a feedback loop that nourishes the worst aspects of human behaviour. Behaviourism as a theory may be reductionist nonsense, but we have created a system that is basically built to reinforce our lazy biases and our lack of curiosity.

Like you said, AI is like an empirical ego without its orientation toward transcendence. Therefore it exhibits every sort of cognitive bias but is unable to correct them properly. Foucault's Panopticon seems even less dangerous than this; at least it isn't worsened by an immediate feedback loop. AI, by contrast, in a process of permanent learning, is on the one hand driven by the lowest impulses of mass psychology and on the other hand drives those masses itself to extremes. I fear that small prejudices will be amplified into genocidal rage much faster and more efficiently than ever.


All this tendency to be entranced by the illusion of consciousness and intentionality seems to be a desperate and insatiable appetite to find the world around us 'alive', as perhaps it is, if not in the ways AI enthusiasts hope for or fear. It certainly must be primal: why else does the speaking dragon puppet delight and cause so much laughter in my three- and five-year-olds? They know mamma is providing the voices, yet the appeal of stuffed animals having bedtime conversations with them and mamma never fails. My honest worry is more: can one ever fully break the illusion among the so-called adults playing with computer puppets? Or will it always be a constant danger, simply given our desire to be surrounded by life and companionship? Young children at least rejoice in such imaginative storytelling. Why is it so difficult for adults not to misinterpret their own actions?

Mar 22, 2023·edited Mar 22, 2023

I think another concern is the moral harm that we will be able to inflict on ourselves through our interactions with our person-like machines. The more persuasive their appearance becomes, the more irresistible it will be to see them as persons. Still, we will not be able to rid ourselves of the suspicion that they are mere machines, nothing more than fancy appliances. That is likely to encourage behaviors toward them that, if they were directed at actual persons, would be evil: 'abuse', 'assault', 'murder'. HBO's remake of Michael Crichton's Westworld portrays this in graphic detail. We will not harm the machines. But we will harm ourselves. In treating apparent persons in (what would be) morally reprehensible ways (if they were actual persons), we will undoubtedly strengthen our vices, thereby making us worse in all of our genuine relationships.


Excellent piece as usual, though I am not optimistic we can avoid the scenario outlined at the conclusion. I suspect we've already crossed that threshold and been reduced to functions of the machine (or megamachine, to borrow from Lewis Mumford). The success of deep learning in the past decade has depended on the explosion of big data, which means that fundamentally AI's capability is contingent upon and derived from surveillance capitalism. It's like a feedback loop of anti-humanity, and my dark suspicion is we are already past the point of no return.

For me, AI's most devastating immediate effects are on teaching. Just in the past month I've caught students cheating with ChatGPT eight times--and that's only the ones I could detect.


John Searle’s Chinese Room thought experiment is a good one. Hubert Dreyfus, via Heidegger and Merleau-Ponty, refuted AI quite well.


I hate that the conversation often circles the drain of possible sentience. We are ill-equipped as a society to consider these issues properly, so the inevitable result is confusion. I remarked to a friend that this technology will be upon us in full force before we have truly begun to contemplate its impact. My friend's primary concern is that AI is becoming a perfect vehicle for disinformation. Completely valid, as is disquiet over the paths that an insufficiently constrained system may take, and other perfectly reasonable concerns have been raised in the comments here. It seems utterly unlikely to me that we will be able to exercise restraint in this sphere. It is bizarre to observe. I told my friend that the sensation is similar to being in a car that has just hit the ice and lost its steering. It is such an odd blend of horror and free-floating strangeness.


As an insider using this technology, I can assure you the headlines are "much ado about nothing."


LLM stands for "Large Language Model" (not "Logic Learning Machine"). I can't decide whether you were deliberately altering the acronym, but the distinction is important. It would be a marvel if LLMs could "learn logic," but as currently constructed, any apparent logic the machine produces is born only of statistical regularities in the underlying data. Which is to say: if the computer manages to reproduce, say, a syllogism, it is only because people writing things on the internet do not regularly conclude that because Socrates is a man and all men are mortal, Socrates is therefore a house plant. If they did, the algorithm would happily insist that he is a geranium.
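To make the point concrete, here is a toy sketch: a simple bigram counter in Python, nothing like a real transformer LLM in mechanism or scale, but the same principle of prediction from statistical regularity. The "conclusion" the model emits is simply whatever its training text made most frequent.

```python
from collections import Counter, defaultdict

def train(text):
    """Count which word follows which -- the model's only 'knowledge'."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, word):
    """Emit the statistically most frequent successor of `word`."""
    return model[word].most_common(1)[0][0]

# Trained on sane text, the model "reasons" to the right conclusion...
sane = train("all men are mortal . socrates is a man . therefore socrates is mortal")
print(predict(sane, "are"))    # mortal

# ...but trained on absurd text, it "reasons" just as happily to a house plant.
absurd = train("all men are houseplants . socrates is a man . therefore socrates is a geranium")
print(predict(absurd, "are"))  # houseplants
```

A real LLM replaces the bigram table with a transformer trained on trillions of tokens, but the moral is the same: the syllogism comes out right only because the data made it so.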


Good evening, Dr. Hart. A few quick questions, as I'm not too educated in these matters.

1. In some piece, you mentioned that you find certain forms of panpsychism quite appealing. If my memory serves (and it might not), it went along the lines of everything that is having some sort of awareness of being what it is (a rock knows what it is to be a rock, etc.). The electrical pulses flitting about in a motherboard can't be identified with thought or intentionality, but since the computer is a construct within the general structure of reality as mind, do you hold that it too has a knowledge of being what it is, or awareness of some kind in this sense? It does have a telos, after all: God.

2. I'm curious about your thoughts on mind-brain interaction (if even posing that question doesn't reek too strongly of some kind of dualism). Any thoughts on how physiological phenomena are correlated with events in the mind?

3. It's always amusing to have a 'satyr question', so here's one for you: do you think a daimon could possess a computer?


I think it's best to separate the issues of consciousness and intelligence, which are, after all, orthogonal. For example, cockroaches have very little intelligence, but may well be conscious beings (i.e., capable of having experiences). And recent computer systems, which on average can perform a variety of cognitive tasks almost as well as or even better than humans (such as describing images, translating text, transcribing speech, making medical diagnoses, folding proteins), are most probably not conscious beings.

As for intelligent behaviour, I know of no intrinsic limits to mechanical systems. Even if a machine is not conscious, there is no limit to how conscious, and even how human-like, it can *seem* to be.

The question of consciousness, however, is much harder. We don't know of any physical test to confirm whether cockroaches, humans, or machines are conscious beings; indeed, we can't even imagine what such a test might look like. For obvious reasons, those who invest in AI would much prefer that their property not be considered a conscious being. Now, as a theist, I believe that consciousness is a supernatural event, something that God chooses to implement. Actually, as I have embraced the metaphysics of subjective idealism, I believe that ontologically the only substance God creates is consciousness, and that the physical universe (aka "the natural world") is just a set of God-induced patterns present in the experiences of all conscious beings living in this creation. So in our context, the only open question is this: Suppose a machine is built that is intelligent enough for us to develop a relationship with it (or what appears to us to be a relationship with it); would God in that case choose to create the corresponding machine consciousness?

You argue that no machine can ever be conscious because there is no intentionality in its physical constitution. But there is no intentionality in our physical constitution either. There is intentionality in our consciousness (or mental constitution, if you will), and similarly, if God chose to create machine consciousness, there would be intentionality in it.


Ezra Klein said several times recently that God, Human, Animal, Machine (by Meghan O’Gieblyn) is the best book he has read in the past year. It’s by a thoughtful former evangelical Christian with an obsessive interest in theology and technology, who traces the many parallels between thinking on AI and Christian theology. Klein says it is overwhelmingly convincing, and I agree. It’s also all about consciousness. Well worth reading for many reasons, for people who might be on this Substack. There’s also an audiobook if you find yourself wanting something thought-provoking to listen to while driving, etc.


This morning I finished Daphne du Maurier’s The Birds. Her last line before wrapping it up struck me as apt, not only to AI but to the suffocating daily experience of many workers under the all-pervasive commercial radio that has been piped into our workplaces for a generation now, like a daily tidal attack of ‘the birds’: “Nat listened to the tearing sound of splintering wood and wondered how many millions of years of memory were stored in those little brains, behind the stabbing beaks, the piercing eyes, now giving them this instinct to destroy mankind with all the deft precision of machines.”


I tweeted your article and one person replied that the AI programs are "decoder transformer centric neural networks. It's not really an 'algorithm.'"

Does that alter your assessment of current developments?


I spent a career in emerging technology at Silicon Valley companies, from the early 80s up to my retirement in 2018, much of it involving AI and its component factors. AI is nothing but a tool, like all technology. It is not replacing any human, and no one in the world of development thinks that to be a reality. But we do realize that the tool called AI can do something humans can't, and that is true multitasking. The real question is how we as humans will cope in a society where the "tool" expedites tasks, frees up time, and lets humans work significantly less. That is where the angst will come in: a societal phase change like the ones we experienced in the agricultural and industrial revolutions.


Fantastic! - 'a farrago of vacuous metaphors.'

I'm not sure it's a category error, perhaps it isn't, but another dead end is to invoke quantum theory as some kind of explanation of 'consciousness'.

Federico Faggin (who designed the first commercial microprocessor and touch screen) seems to be doing this in his recent book, 'Irriducibile. La coscienza, la vita, i computer e la nostra natura.'

And of course there are many others on the same path; even Raymond Ruyer was doing this in the 1950s!

Mario Crocco notes at some length that quantum mind theories cannot refer to a plurality of finite observers...

Thanks again - a really excellent piece.
