Very timely. I watched a YouTube video yesterday in which an NBC journalist was showcasing a lab that showed a stimulus (four sequential images of a girl knocked down by a dragon's tail) to a person. An AI then, somewhat successfully, processed the brain's electrical responses to the stimulus and spat out something about seeing a girl be hit and knocked down. I, being the meat machine that I am, was quick to allow myself, once again, to be fed a large spoonful of meaninglessness and to wonder whether the computers have us all figured out and whether that's all there is to humanity. Your recent Narcissus letter and this piece are invigorating reminders that experience is irreducible, that imitations are not the same as the thing itself, and that it is precisely because one holds to reductive presuppositions that one comes to think of oneself as nothing more than a machine.
Bought The Master and His Emissary after your conversation with Iain McGilchrist mainly because my theologian wife does not want me reading her books at lunch and getting them stained with salsa thumbprints or whatnot. Very good, though I am getting a bit too hyperaware of my brain.
I continue to feel that if the planets ever align for you to be able to speak at that festival it would justify the pilgrimage on my end.
As a computer science researcher, I never use the phrase "Artificial Intelligence". In the field we often prefer the more accurate "machine learning", because that is what these models do: they learn patterns in large datasets by looking at billions of examples and make inferences based on those learned patterns. There is nothing like mind in these models, and I currently see no path to mind, perhaps because, as you point out, no such path exists. The dangers these models pose to society (increasing inequality, etc.) are nevertheless very real.
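The pattern-learning the commenter describes can be illustrated with a minimal sketch (the data, model, and learning rate here are illustrative assumptions, not any actual production system): fitting a line to examples by gradient descent recovers the rule that generated the data, without anything resembling understanding.

```python
# Illustrative sketch: "learning" a pattern from examples by gradient
# descent. We fit y = w*x + b to sample points; the data, learning rate,
# and epoch count are arbitrary choices for demonstration.

def fit_line(examples, lr=0.01, epochs=2000):
    """Learn slope w and intercept b by minimizing mean squared error."""
    w, b = 0.0, 0.0
    n = len(examples)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in examples) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in examples) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Examples generated by the hidden rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = fit_line(data)
# w converges to about 2 and b to about 1: the model recovers the
# pattern purely from examples, with no notion of what a line "is".
```

The point mirrors the commenter's: the procedure is nothing but repeated numerical adjustment toward lower error, and larger models differ in scale, not in kind.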
Have you read "The Origins of Early Christian Literature" by Robyn Faith Walsh, or have you heard of its arguments? If so, what are your thoughts on it?
Not to be a nag, but I think this is a typo that might be worth fixing online: "While, at first, many of the thinkers of early modernity were content to draw brackets around physical nature, and to allow for the existence of realities beyond the physical, namely mind, soul, disembodied spirits, and God. They necessarily imagined the latter as being essentially extrinsic to the purely mechanical order that they animated, inhabited, or created."
"But consciousness simply cannot be explained by the mechanics of sensory stimulus and neurological response, because neither stimulus nor response is, by itself, a mental phenomenon; neither, as a purely physical reality, possesses conceptual content, intentional meaning, or personal awareness. The two sides of the correlation simply cannot be collapsed into a single observable datum, or even connected to one another in a clear causal sequence, and so not only can the precise relation between them not be defined; it cannot even be isolated as an object of scientific scrutiny."
This sounds very similar to Dharmakīrti's arguments. He was, of course, arguing for the existence of reincarnation, but he made a similar argument to do that.
Hidden behind a paywall, unfortunately.
As for the interesting metaphysical question of the relationship between mind and matter, I agree that "reason abhors dualism", but I would point out that this leaves us with three alternatives: that matter is reduced to or is an aspect of mind (the theistic understanding, seen most clearly in subjective idealism), that mind is reduced to or is an aspect of matter (the common but excessively problematic and therefore weak naturalistic understanding), or that both mind and matter are reduced to or are aspects of what is ontologically fundamental. If I were a metaphysical naturalist, I would choose the third option (called property dualism by people like David Chalmers): reality is constituted by a single substance that has both physical/material and conscious/mental manifestations.
One can define the concept of 'intelligence' any way one likes, but in common usage, any computer that discovers the cure for cancer or finds a method of generating energy from cold fusion would be considered intelligent. Similarly, if a technologically advanced alien civilisation were to contact us, we would label them as intelligent. We wouldn't demand proof that they were biological organisms, or get into discussions about whether they had consciousness, before attributing intelligence to them.
Thus, in the ordinary usage of the term, intelligence and consciousness are orthogonal. For example, a cockroach is certainly not intelligent, but it may well be a conscious being that feels pain when one of its legs is torn off. On the other hand, an incredibly intelligent computer may not be conscious and not feel any sensation whatsoever when one of its hard drives is removed.
The field of AI is about constructing artificial intelligence, not artificial consciousness. For obvious reasons, AI companies would happily agree that their systems are not conscious beings.
I’m still much more on the fence about whether it is possible for machines to emerge into consciousness. This isn’t so much because I think programmers can create this state in machines as because a sufficiently dense and reflexive information state in the form of a machine could be more receptive to consciousness. Perhaps this is due to some of my sympathy for Integrated Information Theory (which at least takes consciousness as a given).
The purposive element, minded intention, that is used to develop these technologies is something I have a tough time getting around when it comes to the question of whether consciousness is compatible with machines, especially given the cognitive framework within which AI is being developed. It could be a defect in my own thinking. While I grant that cognition and consciousness are distinct, I am not sure how to separate them. Am I missing something here?