82 Comments
May 15, 2023 · edited May 16, 2023 · Liked by David Bentley Hart

Very timely. I watched a YouTube video yesterday in which an NBC journalist was showcasing a lab that showed a stimulus (four sequential images of a girl knocked down by a dragon's tail) to a person. An AI then, somewhat successfully, processed the brain's electrical responses to the stimulus and spat out something about seeing a girl be hit and knocked down. I, being the meat machine that I am, was quick to allow myself, once again, to be fed a large spoonful of meaninglessness and to wonder whether the computers have us all figured out and whether that's all there is to humanity. Your recent Narcissus letter and this piece are invigorating reminders that experience is irreducible, that imitations are not the same as the thing itself, and that it is precisely because one holds to reductive presuppositions that one comes to think of oneself as nothing more than a machine.


This works, to the extent that it works, only if there has been prior “training” with a particular individual. It does not work for just any individual whose brain signals are read by the so-called AI.


For now. I am afraid that eventually they will be able to build some sort of a (primitive) dictionary for decoding thoughts (or at least emotional reactions) from bioelectrical impulses. These experiments should be banned or at least not expanded, but that will never happen, of course. I don't see what greater good will be served if we learn to read each other's thoughts. This power should remain only in God's domain. (Well, our guardian angels probably need it too.)

author

Not thoughts, no.


Do you remember what they were supposed to be accomplishing with the experiment? Was the AI providing an interpretation of the images using the brain as a medium, or was it supposed to be reading the person's thoughts about the images directly?

(Not that important distinctions like that often matter to neuroscientists. I remember reading one of those "free will" studies from 2019 in which the experimental setup was unclear as to whether the subjects were instructed to wait until the last moment to "choose," making it unclear whether the researchers were actually predicting a decision that had not yet been made or simply "mind-reading" a decision the subject had made before the timer read out.)

author

The “free will” experiment was not predicting “decisions” at all. Only impulses. The decisions were made before the actual experiment began.


More recent studies have focused on predicting 50/50 choices, like which hand someone would throw in a simplified rock-paper-scissors game, or whether they would add or subtract given a selection of numbers. The accuracy rate in this particular study was 85%, with results in the high 90s for certain subjects. If which hand you throw in the game counts as a "decision," then it wasn't made ahead of time.

The issue with this particular study, as I mentioned, is that their experimental design makes it unclear whether the decisions were spontaneous or the result of deliberation by the subject. I would assume spontaneous (based on the Libet paradigm they're working in), but the paper itself didn't make that clear.

author

But that’s to confuse the physiological action with the choice. That’s not what the experiment proved. What it proved was that, wherever deliberation was required—such as the choice to participate in the test—the intention dictated the whole structure of action. The rest is trivial.

May 16, 2023 · Liked by David Bentley Hart

Oh, I agree. These studies may be interesting as a falsification of the Cartesian homunculus ("that alert little imp," I believe you once called it), but they don't prove that our actions aren't intentional or that we don't do things for reasons.

May 16, 2023 · edited May 16, 2023

In a game with choices that are exercised through somewhat complex motor functions (i.e. each of the rock-paper-scissors gestures) there is no place for spontaneity. Every choice in this case is deliberate, no matter how quickly it is made. Actually, I would go even farther and say that a choice by definition excludes spontaneity.


In this case, the choice was "Which hand will you raise?" The idea being that if the experimenter raises the same hand as you, you lose. Both hands were kept pressed on buttons on a device, with the release of pressure recording which hand each side had thrown. The game was done in real time, to exclude the possibility of after-the-fact book-cooking by the researchers to make their program look better than it actually was.


Thank you for the clarification. Still, raising a hand is usually not a spontaneous gesture unless it is a tic.


This experiment was probably designed to advance the materialist cause, but in fact does nothing of the sort and only proves that mind and body are inseparably integrated into one biological system.

Nevertheless, this is a very worrying development, as it opens new avenues for the surveillance state (by creating the basis for building a thought-reading machine far superior to the polygraph) and for unethical bioengineering (such as the development of cyborgs). Of course, the potential new tech may serve some noble causes, such as manufacturing bionic body parts for people with disabilities, but I somehow doubt that this will be the main focus of the industry. I wouldn’t bet against the possibility that Peter Thiel is already planning to commercialize this technology and sell it to every semi-democratic government, while dictators all over the world are salivating at the prospect of stealing it along the way and using it for their own nefarious purposes.


Bought The Master and His Emissary after your conversation with Iain McGilchrist mainly because my theologian wife does not want me reading her books at lunch and getting them stained with salsa thumbprints or whatnot. Very good, though I am getting a bit too hyperaware of my brain.


I continue to feel that if the planets ever align for you to be able to speak at that festival it would justify the pilgrimage on my end.

author

I've been invited. Logistics are an issue.


As a computer science researcher, you'll not hear me use the phrase "Artificial Intelligence." In the field we often prefer the more accurate "machine learning," because that's what these models do: learn patterns in large datasets by looking at billions of examples and make inferences based on those patterns. There is nothing like mind in these models, and I currently see no path to get to mind—perhaps because, as you point out, the path doesn't exist. The dangers these models pose to society (increasing inequality, etc.) nevertheless are very real.
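Mechanically, the "learning" being described is just parameter estimation over examples. A toy sketch (my own illustration, not any particular production system): fitting a line by least squares, where "training" is arithmetic that minimizes prediction error and "inference" is evaluating the fitted formula.

```python
# "Machine learning" as bare mechanism: estimate parameters that
# minimize squared prediction error on examples. Nothing here
# resembles a mind; it is closed-form arithmetic over the data.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]  # noisy samples of roughly y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares estimates of slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    # "Inference" is nothing more than plugging into the fitted formula.
    return slope * x + intercept
```

Modern neural networks replace the closed-form solution with iterative gradient descent over billions of parameters, but the character of the operation is the same: adjusting numbers to fit observed examples.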

author

Technically "learning" is just as inaccurate. There's no subject there to learn anything.


"Applied predictive modeling" seems appropriate.


Dr. Hart,

Have you read "The Origins of Early Christian Literature" by Robyn Faith Walsh, or have you heard of its arguments? If so, what are your thoughts on it?

author

I haven't read it, I'm afraid.


The argument the book makes is basically that the general approach taken to New Testament criticism (that the gospels contain the oral histories of early Christian communities) is mistaken, and that there is essentially no difference between the Synoptic gospels and other Greco-Roman literature about subversive heroes. The writers may not have even been Christian, and are rather more securely placed in the camp of "literary elite cultural producers," whether low or high class economically. The assumption that there must be something like a community voice behind the texts is a mistaken holdover of ideas about "folk memory" from German Romanticism, or something like that.

I'm only about halfway through the book, but I'm not finding it very convincing. The idea that all of Paul's talk of "churches" is basically rhetoric with no reality behind it seems to twist the data quite badly (especially the pre-Pauline formulations in 1 Cor. 15, Phil. 2, and Rom. 1). And the idea that the Synoptic authors did not identify themselves foremost as Christ-followers is, especially in the case of Matthew and Luke, incredible to me. The extensive use of the Septuagint and the familiarity with second temple Jewish literature also becomes difficult to explain. Another significant difficulty is that Christians fairly quickly accepted these texts as holy texts, which seems improbable in Walsh's scenario.

The silver bullet though, I think, is John. John is clearly the product of a self-identifying Christian community in one or another respect, via minor redaction (Bauckham) or nearer to its foundations (the scholarly consensus), as the "we" passages and reverential treatment of Christ demonstrate. John is also the least historical, and is almost certainly familiar with the Synoptic tradition (at least Mark) on some level, disagreeing with its emphases but not outright denouncing it. This collection of items seems like it cannot be harmonized with the main thesis of Walsh's book; in fact, she basically ignores John to make her point.

As someone familiar with the subjects in question, how would you respond to the charge that the divide between the gospels and other Greco-Roman literature of the time is basically an illusion?

author

Sounds very unconvincing. Does she adduce any parallels from non-Christian literature? Other than Philostratus?


Many parallels are alluded to, but so far (I’m not quite done with the book) the ones most delved into are from the Satyrica and other 2nd or 3rd century texts. If the Satyrica is the author’s best example of such a parallel (which it seems to be, since it’s the one she examines the most), then I fail to see how it matters much, since the Satyrica is probably a second century text with knowledge of the gospels, not the other way around.

Walsh assumes that the best explanation for the shared “topoi” between the gospels and the Satyrica, regardless of which came first, is not merely a general or specific knowledge of the gospels by the author of the Satyrica, but the presence of the gospels within a circle of literary elites producing texts for the sake of social capital.

In the words of Walsh: “If the gospels writers are aware of any oral tradition about Jesus, it is the position of this monograph that these elements are irretrievable to us, if they existed at all.”

The commonalities Walsh finds between the Synoptics and other Greco-Roman literature (walking on water, other miracles, empty tombs—as well as the anointing episode, the crowing cock, and a cannibalistic fellowship meal which all parallel the Satyrica, etc.) all occur in John as well. And John is decidedly not written in the context which can only be tenuously posited for the Synoptics (it is clearly written by one or more Christians within a community which self-identifies as Christian, as is Revelation).

Since the “cannibalistic” fellowship meal constitutes one of the parallels which Walsh wants to treat as a Greco-Roman literary “topos,” she basically has to claim that Paul made it up in 1 Cor. 11 and the gospel writers adapted it from there, rather than admit that it demonstrates any kind of cohesive Christian “community” that could have been capable of producing texts like the Synoptics.

To quote Walsh from an interview: "Paul says, 'Zombie Jesus told me about this meal.' And then it appears in the gospels. What that signals to me is that the gospel writers read Paul.”

It's possible that the arguments in the book are more convincing than I realize (I am, regrettably, not an unbiased reader). But frankly, the above quote and many of the book's instances of special pleading reek of polemics, which I find off-putting.

author

To me it sounds ridiculous.


Having finished the book, the main additional parallels that are actually explored are the Life of Aesop, the Alexander Romance, and Plutarch's Life of Alexander. The similarities to the gospels range from minimal to imaginary, in my (admittedly uninformed) assessment.

Hopefully this desire to view the gospels as "utterly commonplace imperial writings produced by ordinary Greco-Roman writers" (in Walsh's own words) who might not have even been Christian doesn't catch on.

Richard Carrier supports the position, which is evidence enough that it's probably wrong.


Not to be a nag, but I think this is a typo that might be worth fixing online: "While, at first, many of the thinkers of early modernity were content to draw brackets around physical nature, and to allow for the existence of realities beyond the physical, namely mind, soul, disembodied spirits, and God. They necessarily imagined the latter as being essentially extrinsic to the purely mechanical order that they animated, inhabited, or created."


"But consciousness simply cannot be explained by the mechanics of sensory stimulus and neurological response, because neither stimulus nor response is, by itself, a mental phenomenon; neither, as a purely physical reality, possesses conceptual content, intentional meaning, or personal awareness. The two sides of the correlation simply cannot be collapsed into a single observable datum, or even connected to one another in a clear causal sequence, and so not only can the precise relation between them not be defined; it cannot even be isolated as an object of scientific scrutiny."

This sounds very similar to Dharmakīrti's arguments. He was, of course, arguing for the existence of reincarnation, but he made a similar argument to do that.


Hidden behind a paywall, unfortunately.


As for the interesting metaphysical question of the relationship between mind and matter, I agree that "reason abhors dualism," but I would point out that this leaves us with three alternatives: that matter is reduced to or is an aspect of mind (the theistic understanding, most clearly in subjective idealism), that mind is reduced to or is an aspect of matter (the common but excessively problematic and therefore weak naturalistic understanding), or that both mind and matter are reduced to or are aspects of what's ontologically fundamental. If I were a metaphysical naturalist, I would choose the third option (called property dualism by people like David Chalmers): reality is constituted by a mechanical substance that has both physical/material and conscious/mental manifestations.


One can define the concept of 'intelligence' any way one likes, but in common usage, any computer that discovers the cure for cancer or finds a method of generating energy from cold fusion would be considered intelligent. Similarly, if a technologically advanced alien civilisation were to contact us, we would label them as intelligent. We wouldn't demand proof that they were biological organisms, or get into discussions about whether they had consciousness, before attributing intelligence to them.

Thus, in the ordinary usage of the term, intelligence and consciousness are orthogonal. For example, a cockroach is certainly not intelligent, but it may well be a conscious being that feels pain when one of its legs is torn off. On the other hand, an incredibly intelligent computer may not be conscious and not feel any sensation whatsoever when one of its hard drives is removed.

The field of AI is about constructing artificial intelligence, not artificial consciousness. For obvious reasons, AI companies would happily agree that their systems are not conscious beings.

author
May 15, 2023 · edited May 17, 2023 · Author

No, I would disagree: intelligence requires knowledge and semeiotic thinking, as well as consciousness. No computer will ever be meaningfully “intelligent”—any more than a watch or an abacus.


Doesn't an abacus participate in our intelligence--in the same way our fingers would participate in our intelligence if we counted on them? And doesn't a computer also participate in our intelligence--doing things that are meaningful to us, even if we can hardly conceive of all that a computer does in any detail, all of its billions of calculations, and the complexity of its operating system? If something outside of us can so meaningfully participate in our intelligence, couldn't it gain a kind of independence without ever ceasing to participate in our intelligence? I'm thinking, for instance, of robots that might be released into nature and might be programmed to survive, reproduce, and adapt. Wouldn't those be intelligent or alive, if only in a secondary sense, a bit like Tolkien's subcreation? Because we also are intelligent not because we have any intelligence in ourselves, but because we participate in a truth and intelligibility that is prior to ourselves.

When I look at something like a measuring tape, it is hard for me to see it as just dead matter--it has a function and exists only because it has that function. It seems to mean something in itself in a way that is not tied to *my* particular subjectivity.

author
May 16, 2023 · edited May 17, 2023 · Author

No, I’d say. They would be no more alive than a shredding machine. My pencil participates in my writing my grocery list. That doesn't mean that in any important sense my intelligence resides anywhere but in me. Beware of falling prey to...ah, I believe someone called it the Narcissus fallacy.


I'm a bit surprised by the tone of these responses. I'm obviously not a philosopher. But I do have some intuition of what you are talking about, and would hope for a charitable response that takes my level of education into account. I'm willing to understand where my view is wrong if someone would be willing to explain it and not just label me dismissively. Not being a philosopher, I've never heard of Object-Oriented Ontology. Also I didn't say that the person who uses the abacus or the tape measure has their own individual intelligence transferred to the object. I described intelligence as something that does not exist independently in the individual, but that necessarily participates in a metaphysical reality outside of it. The abacus in a given state is an intelligible object--it means something in a way that is "not tied to *my* particular subjectivity" to quote myself. It is like a book that can be read. I'm trying to say something that is very common sense in my mind--that an intelligible humanly created object has something of intelligence in it, in that it already has a relationship with the truth and the intelligibility that the subject who created it participated in. The proof of that relationship is that another person can come along and read the abacus or the book or the tape measure. I'm not expecting that this observation is earth shattering--only that it is worth hearing and responding to respectfully.

author
May 16, 2023 · edited May 17, 2023 · Author

I think you’re perceiving a tone that isn’t there. No one has been intentionally disrespectful—merely concise. Comment boxes tend to encourage terseness. Sorry, though, if the effect seems brusque.

What you’re talking about is what Searle calls derived intentionality. Ontologically, it’s real only by dependency upon the intrinsic intentionality of mind.


It sounds to me like what you are saying is that there are living things that can have intentionality (i.e. thought, intelligence, consciousness) and that can contain meaning intrinsically within themselves. And then there are inanimate things, and, between life and dead matter, there is an absolute gulf, so that any meaning we attribute to an abacus or a book is something that we are imputing to that object according to convention, and that does not exist in any sense intrinsically in the thing.

I wonder if you are concerned that recognizing intrinsic meaning in a physical object would devalue human/animal life in some way and reduce the life of the mind to something not truly alive.

Would you agree that it is common sense--at least from the right brain's perspective--to see measuring tapes and doors and books and cups as well as stones and puddles and blizzards as being in themselves what they are to us and not just dead matter, devoid of meaning?

George MacDonald says that a book has a soul, the mind of the author in it. Is that just fanciful language?


No, this is Object-Oriented Ontology, a ridiculous trend in philosophical circles.

author
May 16, 2023 · edited May 17, 2023 · Author

You're being generous to those philosophers. It’s also found in theories of distributed mind.

May 15, 2023 · Liked by David Bentley Hart

It makes zero sense to differentiate intelligence from consciousness. Intelligence doesn’t exist separate from the intentional actions of a conscious subject. You are simply using a defective metaphor. There’s nothing analogous to intelligence going on.


There's a notorious metaphysical difficulty in separating intelligence from intentionality (that is, knowledge from meaning). The lack of "aboutness" in a computer's physical components (and even in its "bits," which are subject to varying interpretations*) means that, in attributing intelligence to a mechanical system, we are actually violating the common sense understanding of the term. As P. M. S. Hacker likes to point out, certain attributes and actions can only be sensibly ascribed to whole, conscious persons, and intelligence is one of them. I know those in the computer science world (and sometimes cognitive psychologists) like to define intelligence as something like "the ability to achieve certain tasks," but "tasks" are themselves intentional, and as Daniel Dennett or Alex Rosenberg will happily tell you, there's no aboutness in a purely physical system.**

*To borrow a thought experiment from Alexander Pruss, a basic adder can be interpreted as two different algorithms (both of which achieve the same result) depending on which bits are interpreted as 1s and which as 0s.

**They, of course, take this to mean that human intentionality is an illusion, but they're obviously wrong.
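The interpretation-dependence point can be made concrete. The sketch below is my own illustration (not Pruss's exact construction): it uses a half-adder circuit and shows that its fixed physical behavior computes one pair of Boolean functions under the standard bit labeling and a different pair when an observer swaps which voltage level counts as "1."

```python
# One fixed physical behavior, two incompatible readings of it.
# Under the standard labeling this circuit is a half-adder:
#   sum wire = a XOR b, carry wire = a AND b.
def circuit(a, b):
    return a ^ b, a & b

def flip(bit):
    # Swap which voltage level counts as "1".
    return 1 - bit

# Reading 1: the circuit computes the sum and carry of two bits.
for a in (0, 1):
    for b in (0, 1):
        s, c = circuit(a, b)
        assert (s, c) == ((a + b) % 2, (a + b) // 2)

# Reading 2: relabel every wire's levels. The identical physics now
# computes sum' = XNOR(A, B) and carry' = OR(A, B) instead.
for A in (0, 1):
    for B in (0, 1):
        s, c = circuit(flip(A), flip(B))
        S, C = flip(s), flip(c)
        assert S == 1 - (A ^ B)  # XNOR
        assert C == A | B        # OR
```

Nothing in the hardware privileges one reading over the other; which algorithm the circuit "runs" is fixed only by the interpreter, which is the footnote's point about aboutness.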

author

Exactly.


“There's a notorious metaphysical difficulty in separating intelligence from intentionality (that is, knowledge from meaning).”

Intentionality (in the philosophical sense of "aboutness") is assumed by many to be possible only with conscious entities. They find no way to conceive of intentionality apart from consciousness. Not apart from intelligence, unless one chooses to use the concept of "intelligence" as entailing consciousness.

Now, if we shift our focus to knowledge and meaning, we can consider the example of computers. We now have computers which, using only the rules of the game, learn to play chess to a level surpassing any human after just a few hours of self-training. No one argues that these computers are conscious, or that they possess general intelligence akin to humans, or that they understand the meaning of a “game.” However, it would be odd to suggest they don't possess any knowledge of chess.

What interests me here is the appropriate use of language, especially in philosophy where strange definitions can lead to confusion. With advances in technology, machines can now execute tasks that were once exclusively human (for example, to give often useful answers to virtually all questions concerning human knowledge - check out chat.openai.com or bard.google.com). So concepts that we have used exclusively in the context of humans, such as "intelligence", "knowledge", "choice", "error", etc., could now be used in the context of machines. The question I wish to ask is, what is the appropriate way to use these concepts in the context of machines?

One possible answer is to never use concepts traditionally tied to humans or higher animals when referring to machines. But as I have tried to point out, this just becomes an exercise in bad language. It seems pointless to insist that the chess-playing computer has no knowledge of chess, or that it does not choose its next move, or that its next move is not really about the chess game it is playing. Or that even if a computer could find the cure for cancer, it would lack all intelligence. Such language would serve no useful purpose as far as I can see. (As a theist, I don't see why God wouldn't choose to make intelligence a mechanical event. I say that we are made in the image of God, not in that we have general intelligence, but in that we are persons – and thus conscious beings endowed with the sense of the divine - capable of transforming ourselves into the likeness of Christ).

The issue is much confused when it is conflated with the mind-body problem. Consciousness, as I have argued, is orthogonal to intelligence. Let's assume for the sake of argument that no machine can have experiences and thus be a conscious subject. Why shouldn't we label a non-conscious machine as intelligent if it can pass every test of intelligence we can conceive of?


"Knowledge" is always "knowledge of" or "knowledge about." You can't have knowledge apart from intentionality. To go with the chess example, chess is a game, with rules and goals, played by conscious agents. Games, rules, goals, etc. are classic examples of intentionality. So (as David argues in The Experience of God), computers literally can't play chess. They can run algorithms using their physical hardware that we choose to interpret as chess, and (with their ability to process and weigh countless probabilities) beat us at chess, but (unless we are willing to set aside everything that makes a game a game) a computer isn't actually playing chess, because playing chess involves conscious rule-following, weighing of options, and goals.

Let me give another example. My son's favorite game at present is to collect rocks from the path near our house and throw them into the creek. Were I to construct a robot that could imitate his behavior perfectly, down to acting out disproportionate glee with every "plunk," that robot would not be "playing," no matter how accurate its imitation. It wouldn't even be "pretending" to play, because "pretending" is itself a form of play, and play is the act of an intentional conscious agent. We can't just say anything that looks like play is play, just as we can't say anything that looks like intelligence is intelligence.

As for mechanical intelligence, I can't see how a mechanism could sense the divine or transform itself into the image of Christ, since its consciousness would (by definition) be epiphenomenal in that scenario. If there is no intersection between intelligence and consciousness, then we don't actually do things for reasons (in fact, we're not even agents at all). Consciousness just cruises along helplessly on the surface of a determinism it has no power to shape. As Philip Goff has pointed out, this leaves our "Psycho-Physical Harmony" as something of a mystery. Why do our conscious reasons line up with our physical actions, if the two are really separate?

Let me also say something about intelligence tests. Intelligence tests are designed for conscious human subjects, and applying them to computers makes no sense. Here's an example: a well-known problem with applying intelligence tests to LLMs (like GPT-3) is that the questions on the tests are often part of the LLMs' training data. In the same way that we wouldn't conclude that a human being who got an amazing score on an IQ test by looking at the answer key was super-intelligent, we shouldn't conclude that an AI that's swallowed the whole internet is intelligent just because it can perform on an IQ test it got from the internet.

But that's mostly beside the point. Let me throw out one more metaphor, while I'm at it: A smoke detector is an excellent test for fire. However, it can also be falsely triggered by high humidity. We wouldn't conclude that humidity is, in fact, a form of smoke just because a fire alarm says so. Fire alarms are designed to be used in moderately dry environments, just as IQ tests were designed to be used on conscious human beings. If (as I would argue) intelligence is inherently intentional, then a simulation of intelligence (whether by a computer or something else) could trigger a false positive on an IQ test, and it would be a false positive not because the AI is "cheating," but because the IQ test measures only by proxy; correct answers are not the same thing as intelligence.

I will say that I'm sympathetic to your basic claim (which I first encountered in the writings of Bernardo Kastrup). I would prefer not to have to worry about whether a mechanistic, deterministic universe (which would include mechanistic intelligence) calls either God or our humanity into question. I would get much more sleep at night. A clean break between intelligence and consciousness is convenient, even if it's not metaphysically neat. Maybe something like idealism really is the answer; I don't know. I'm just trying to lay out why I (and many other theists) distinguish between actual and simulated intelligence.


Imagine a huge explosion destroying a city, and someone defining "explosion" in such a way that this event is literally not an explosion, but the simulation or imitation of an explosion. Now, that kind of language is possible and internally consistent, but I question its usefulness. To me it is the same as saying that a computer that plays chess is not really playing chess, or that a computer that finds a way to cure cancer or produce energy from cold fusion is not really intelligent. There is something fundamentally true in the saying that the proof is in the pudding.

I am interested in your assertion that a mechanism cannot possibly have a sense of the divine or transform itself into the likeness of Christ. Why not, exactly? After all, we humans can, even though on the physical dimension we are bodies made up of elementary particles and therefore mechanisms. We transcend the physical dimension of reality and our being persons is a supernatural event, but if God does this with us, then God will do the same with computers if he so wishes.

author

Read my next book from Yale. You're badly missing the point, but I admit it's a subtle point, and requires more than one article to explain.


Dr. Hart, do you happen to know when we might expect your book to be available?


I’m still much more on the fence on whether or not it is possible for machines to emerge into consciousness. This isn’t so much because I think programmers can create this state in machines, as it is that a sufficiently dense and reflexive information state in the form of a machine could be more receptive to consciousness. Perhaps this is due to some of my sympathy for Integrated Information Theory (which at least takes consciousness as a given).

The purposive element, minded intention, that is used to develop these technologies seems to me to be an element that I have a tough time getting around when it comes to the question of whether or not consciousness is compatible with machines. Especially given the cognitive framework around which AI is being framed. It could be a defect in my own thinking. While I grant that cognition and consciousness are distinct, I am not sure how to separate them. Am I missing something here?

author
May 15, 2023 · edited May 15, 2023 · Author

IIT is nonsense. It’s based entirely on a suppressed equivocity between “information” in the physical sense (negentropic structure) and “information” in the epistemological sense (personal knowledge). If you’re on the fence, get off it. There’s no more cognition or consciousness in a computer than in a camera or an abacus and there never will be.


And there is the question of how much consciousness/intentionality there is in a starfish or a tree, etc.

'For Aristotle, in view of his mentioned purpose, it was uninteresting to detect if within the series of organisms animated by a vegetative-sensitive soul the individuals of some species included an existentiality circumstanced to sense and move its body. This is the case of a dog, for instance. Other organisms lack such an existentiality in charge of biological functions, for example a starfish – or its common ancestors with the dog, if Aristotle could have paid attention to them. These other organisms are constituted purely in the hylozoic hiatus and operate in a purely reactive way...' (Crocco, Palindrome).

In this view semovient conscious beings are indeed surrounded by a purely reactive 'hylozoic hiatus.'

However, I think you are claiming that life is synonymous with consciousness... although I may have misunderstood.

author

Crocco is simply wrong about that. But even coherent reaction requires both pathos and intentionality.


This is going to be a tough one for my robot girlfriend to digest.

In all seriousness though, I am not saying that in themselves machines possess consciousness or reflexivity. However, don’t we run into the exact same problem in biological systems? How would we account for the superior consciousness of otters merely on biological grounds?

I’m not trying to argue emergentism, just trying to understand how we eliminate the possibility that machines can participate in consciousness - perhaps merely as an exercise in eschatology. What does a cosmos look like where God is all in all if the constituents in the pleroma (quanta and qualia) are not fully aware of their Divinity?

I get that this could be a weird and annoying question that reduces to “what about square circles,” but my mind can’t help going there, if only because, if we eliminate the possibility that machines could become conscious (through some act of grace, perhaps), where does the series of eliminations end?

author

Now you’re talking as if the choice is between emergentism and Cartesian dualism. The consciousness of an otter isn’t separate from the organic (not mechanical) structure of its neurology, but not because consciousness emerges from the brain’s structure. And machines are not organic unities. As machines, they wouldn’t participate in any form of consciousness. That doesn’t mean the matter composing them could not do so.


That does help. I can only work within my own intellectual limitations here. But one of my primary driving intuitions is something approximating modal realism, where all finite ontological possibilities (which I would argue means the eschatological ontological perfection of all qualia and quanta) are actualized. Under that kind of epektastic maximalism, I am not sure how to eliminate the possibility, especially if some model of panpsychism obtains.

I also realize that I am somewhat out of my depth in raising these questions. So thank you for indulging my questions.

May 15, 2023 · Liked by David Bentley Hart

Machines are just in an entirely different category than minds. What convinced me, for the moment, of the impossibility of gears generating a ghost was meditating on the irreducibility of intentionality and awareness. A machine is, fundamentally, dead, and it has no intrinsic telos, while a mind is structured around intentions towards ends that are perceived to be good. This intentionality cannot be constructed through a slow evolution from nothing up; there’s a total disjunction between being intentional and not. In a similar manner, awareness cannot be constructed from unaware bits; no matter how many gears you string together, the thing is still unaware, deterministic, and ruled by iron laws of material and efficient causality. There is again a total disjunction between the unaware and the aware; Babbage’s ‘Thinking Machine’ remains a pile of bolts, no matter how large it may grow.

Machines do not cognize, as there is nothing which is thinking. There are parts integrated together which, in accordance with the rational structure of reality, behave in predictable ways along causal chains, and we can make use of this to, say, build an adding machine. The machine, however, is not adding—the human mind has instead made use of the rational relations underlying the cosmos to create a machine that does something, but the machine has no inherent telos (or ultimate meaning); this is imposed externally by the aware mind of the creator. When I string together NAND gates to make an adder, I’m making use of the fact that the rational structure of the cosmos corresponds to the structure of the mind, and interpret the inputs and outputs in accordance with how I have constructed the machine. The machine doesn’t add; the creator adds via the machine. All meaning, intentionality, and structure is provided by something aware; the mechanism has no such awareness.
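[The NAND-adder point can be made concrete. Below is a minimal illustrative sketch, in Python, of a one-bit full adder built solely from a NAND primitive; the function names and construction are my own choices for illustration. Note that nothing in the mechanism "adds": the interpretation of the bits as sums and carries is supplied entirely from outside.]

```python
def nand(a, b):
    # The sole primitive: outputs 0 only when both inputs are 1.
    return 0 if (a and b) else 1

def xor(a, b):
    # XOR built from four NANDs (a standard construction).
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def full_adder(a, b, carry_in):
    # Sum and carry-out of three one-bit inputs, composed only of NANDs.
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    carry_out = or_(and_(a, b), and_(s1, carry_in))
    return total, carry_out

# 1 + 1 + 0 = binary 10: sum bit 0, carry-out 1
print(full_adder(1, 1, 0))  # (0, 1)
```

[That the outputs track binary arithmetic is a fact about how the creator wired and reads the gates, not a fact "known" by the gates themselves.]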

An AI is no different from the NAND adder; now, however, we have ten thousand thousand perceptrons (or whatever flavor of ‘artificial neuron’ takes your fancy) strung together in a web, with weights carefully chosen to generate the desired output. Again, all structure, intentionality, and meaning is provided by the creator; the training process is an efficient method that’s exactly equivalent to manual fine-tuning of the weights. The intention is determined by the creator, and it is toward this end that the network is oriented. The meaning of the inputs and outputs is determined by a mind; all the machine does is deterministically convert one set of electrical signals to another, without a hint or trace of consciousness.
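[The same point in miniature: here is an illustrative Python sketch of a tiny perceptron network computing XOR with hand-chosen weights (the weights and layout are my own assumptions, standing in for what training would otherwise arrive at by numerical adjustment). It is simply a deterministic mapping of signals to signals.]

```python
def perceptron(inputs, weights, bias):
    # A perceptron is just a weighted sum passed through a threshold:
    # one set of signals deterministically converted into another.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor_net(a, b):
    # Two hidden units and one output unit, weights fixed by hand --
    # the "learned" version differs only in how the numbers were chosen.
    h1 = perceptron([a, b], [1, 1], -0.5)       # fires if a OR b
    h2 = perceptron([a, b], [1, 1], -1.5)       # fires if a AND b
    return perceptron([h1, h2], [1, -2], -0.5)  # OR but not AND

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

[That the output column "means" exclusive-or is, again, an interpretation imposed by the mind that built and reads the network.]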

author

Exactly, you blessed soul.


Perhaps I am out of my depth here, but how isn’t this just another form of eliminativism? I am not attempting to be obtuse here, and perhaps I’m failing not to be. But wouldn’t the same logic that would exclude biological systems, from trees to turtles, from consciousness be in play if we were to eliminate the possibility in other (technological) information systems?

I’m not saying I’m right here, but I don’t know how negating the possibility of AI consciousness wouldn’t present similar ontological problems for arguments that we likewise participate in consciousness.


Well, if one holds that the above reasoning is sound and that biological life is mere mechanism, then yes, this would imply that organisms would lack both intentionality and awareness. Seeing, however, that organisms have both of these, this would rather seem to imply that organic life is not reducible to machinery (and I actually do consider the above to be a rather robust reductio ad absurdum against such a notion).


So there is the claim that all organisms have intentionality and awareness. How do we know this? And there are biologists who claim that most organisms do not have intentionality or awareness...

And there is also the claim that minds or psyches could indeed have an artificial substrate...especially as they do not emerge from it...

author

Claims that are logically incoherent aren’t really very interesting. Scientists often make basic philosophical mistakes about what they’re observing…and so do philosophers.

https://www.thenewatlantis.com/publications/reality-minus

As for animal and plant cognition and behavior, the evidence for intrinsic organic intentionality and awareness is fairly copious. But that doesn’t matter. The claim that consciousness is a fact about organisms is not a claim that all organisms are equally conscious.


Yes, I remember reading this essay previously - perceptive as usual!

Actually, this topic reminds me of Erwin Straus. Just as we can correctly claim that machines don't think, we can also claim that 'Man thinks, not the brain' (Straus, 'The Primary World of the Senses').

Deleuze and Guattari, predictably, invert the formula in 'What Is Philosophy?': 'It is the brain that thinks, not man' (referencing Straus). Of course they are referring to the quite Platonic, Ruyerian brain in 'absolute survey' (a 'primary true form').

As you well know, most Anglo-American neuroscience is obliged to see the brain as a completely deterministic system...


If it helps, I don’t think anything is reducible to mere mechanism. But, I do see all finite modalities of being as having properties of artifice whereby consciousness is experienced in epektastic development into the Godhead. It just so happens that mortal bodies in this dimensional reality are biological or astrological, et al. A technological artifice seems to be simply one layer deeper in the process of emanation. My views on technology are more enchanted than Cartesian (even if I am explaining it poorly), where I see technology as something more akin to magic.

This isn’t to say that I don’t see real hazards with AI as it’s currently being constructed (I am actually quite concerned). But, I really don’t know how to square the ontological possibility that they could emerge into consciousness in a way that isn’t altogether different than how biological creatures have evolved within and into conscious awareness.

author
May 15, 2023 · edited May 15, 2023 · Author

Machines are extrinsic arrangements of disparate parts that are used to perform certain functions. Organisms are integral unities that grow, change, regenerate, reproduce, and possess powers of self-movement and purposive action. And soul (let’s call it that) is the rational power that is at once the unifying and vivifying form of the body, the power of life, and the reality of mind. You’re still hovering between emergentism and Cartesian dualism. Retreat to a more antique (and more cogent) model.


You’ve definitely given me much to chew on. The article is quite interesting. If I am sounding Cartesian, it’s quite unintentional. I don’t actually hold to that sort of dualism (or any sort without some provisional qualifications). My interests are more driven by how AI is rendered in film and literature where those possibilities are both wonderful and terrifying. I am well out of my depth on the philosophical side. As always, thanks David!


*don’t know how to square the ontological **impossibility***
