The human reflection seen by Narcissus and expressed in the humanoid is the concrete inversion of life, the autonomous movement of the non-living. And now we see the concrete life of humanity degraded into a speculative universe in which appearance supersedes essence. Of course, one has to note the modern irony in Ovid's myth when, during a hunt, Narcissus, stalked by the infatuated and once loquacious nymph Echo, asks, "Is anyone there?" And in Ovidian fashion, Echo, having been cursed only to repeat or "echo" a recent utterance, says back, "Is anyone there?" Now we see the brains at IBM and Google becoming the modern Echo: stalking their artificial beloved and asking the unanswerable question, "Is anyone there?", never realizing the pseudo-answer is merely an echo of their own imposed intelligence. A reflection of themselves.
The idea that AI might become conscious is absurd. It is equivalent to thinking that a hyperrealistic painting, executed to perfection, might thereby gain physical autonomy.
I think a great danger lies in human plasticity as well. Since our psychology shapes the algorithm and the algorithm shapes us, we might find ourselves in a feedback loop that nourishes the worst aspects of human behaviour. Behaviourism as a theory might be reductionist nonsense, but we are creating a system that is basically built to reinforce our lazy biases and our lack of curiosity.
Like you said, AI is like an empirical ego without its structure towards transcendence. Therefore it exhibits every sort of cognitive bias but isn't able to correct them properly. Foucault's Panopticon seems even less dangerous than this; at least it isn't worsened by an immediate feedback loop. AI, by contrast, in a process of permanent learning, is driven by the lowest impulses of mass psychology while itself driving those masses to extremes. I fear that small prejudices will be amplified into genocidal rage much faster and more efficiently than ever.
All this tendency to be entranced by the illusion of consciousness and intentionality seems to be a desperate and insatiable appetite to find the world around us 'alive', as perhaps it is, if not in the ways AI enthusiasts hope for or fear. The impulse must certainly be primary; why else does the speaking dragon puppet delight and provoke so much laughter in my three- and five-year-olds? They know mamma is providing the voices, yet the appeal of stuffed animals holding bedtime conversations with them and mamma never fails. My honest worry is rather this: can one ever fully break the illusion among the so-called adults playing with computer puppets? Or will it always be a constant danger, simply given our desire to be surrounded by life and companionship? Young children at least rejoice in such imaginative storytelling. Why is it so easy for adults to misinterpret their own actions?
That’s a terrific observation. It reminded me of something I’d forgotten. As an adult who works in art and animation, AI fills me with anxiety and dread. But as a child, I was desperate for computers and robots to become sentient—I wanted my toys to become my friends. In those post-R2-D2 years, I had a toy robot that relied on branching 8-track tape playback to cleverly simulate interactivity, and I was exceedingly fond of him. Perhaps some AI developers are trying to re-enchant the world in some misguided way. Perhaps some are lonely.
I do not think the anxiety and dread are necessarily unwarranted, at least in terms of our tendency to delude ourselves and to trust in the apparent capability of AI to solve our societal or political problems, as one of the above commentators noted. Still, if Narcissus was fooled by the reflection and represents many people encountering chatbots, I wonder if Pygmalion is also an apt metaphor for this phenomenon: we yearn at the deepest level for our creations to be truly alive. Perhaps children are more likely to assume the friendliness of an encountered other, although both Narcissus and Pygmalion loved their illusions, oddly enough to our modern fearful minds. We should not forget that only the gods or the fairy folk could grant life, as the Velveteen Rabbit found out.
It is because we never tire of the infinite, which is present in each person. To impose infinitude onto objects and lower beasts is to know the metaphysical deficit that inspired God’s dispensation to Adam in the creation of Eve. The child-like impulse to project intelligence and commit the pathetic fallacy may be tied to the thirsty nature of man without the all-quenching waters of life. Why this tendency is more prominent in children is a curious fact. Man’s desire for a more perfect companion is only tolerably realized in another who bears the god-image. I suspect a lot of these geeks are lonely and constructing false companions, while others are compelled by the Baconian quest to master nature.
I agree in principle with much of what you are saying, particularly in regard to our desire for the infinite. My only hesitation here is simply in assuming whole-heartedly that the pathetic fallacy is perfectly true, or in seeing all child-like recognition of intelligence as mere projection. Perhaps the fullness of truth is more interdependent and creative. I fear sometimes that the idea that others are only a tolerable substitute for the infinite divine can lead to their metaphysical diminishment in our eyes. Surely there must be a path away from crude idolization that yet recognizes Wendell Berry's insight: "I could not have desired her enough. She was a living soul and could be loved forever". If this is not true, I fail to see how John 3:16 could ever be.
I sympathize with not wanting to reduce the pathetic fallacy to a mere psychologism, and I do believe it is often divinely inspired insofar as it seeks to commune or “speak” with the infinite. I grant something more dynamic is happening. A “tolerable substitute” may be too harsh an ontological designation for God’s children, but we are all metaphysically diminished in the face of God. This does not cheapen our existence or value but puts it in its proper place within the hierarchy of creation, which highlights its value in the best way. Perhaps the middle ground between idolatry and holy worship is Jesus. His duality satisfies both the flesh in his human nature and the infinite in his divinity. Perhaps the answer lies somewhere in our relationship with him and solves the anthropomorphic riddle of human projection. You have a beautiful mind and I wish blessings upon you.
I think another concern is the moral harm that we will be able to inflict on ourselves through our interactions with our person-like machines. The more persuasive their appearance becomes, the more irresistible it will be to see them as persons. Still, we will not be able to rid ourselves of the suspicion that they are mere machines, nothing more than fancy appliances. That is likely to encourage behaviors toward them that, if they were directed at actual persons, would be evil: 'abuse', 'assault', 'murder'. HBO's remake of Michael Crichton's Westworld portrays this in graphic detail. We will not harm the machines. But we will harm ourselves. In treating apparent persons in (what would be) morally reprehensible ways (if they were actual persons), we will undoubtedly strengthen our vices, thereby making us worse in all of our genuine relationships.
Excellent piece as usual, though I am not optimistic we can avoid the scenario outlined at the conclusion. I suspect we've already crossed that threshold and been reduced to functions of the machine (or megamachine, to borrow from Lewis Mumford). The success of deep learning in the past decade has depended on the explosion of big data, which means that fundamentally AI's capability is contingent upon and derived from surveillance capitalism. It's like a feedback loop of anti-humanity, and my dark suspicion is we are already past the point of no return.
For me, AI's most devastating immediate effects are on teaching. Just in the past month I've caught students cheating with ChatGPT eight times--and that's only the ones I could detect.
The only way writing and history classes will survive the rise of LLMs is if students have to start presenting and defending their papers like a PhD thesis. Or go back to writing them by hand, using only offline resources. Either way, it will probably only be feasible in a private school context.
I concur. I tried it out with one classic essay assignment just to see what would happen and the results were apocalyptic. So now I'm exploring alternatives like the ones you suggest. Perhaps even having the students print out and "grade" essays generated by AI to show how limited they can be.
To use a somewhat dated sci-fi reference, it reminds me of the humans in Battlestar Galactica going back to analog technology in order to fight the Cylons.
They haven't published GPT's training data (and probably never will), but I have a theory that it includes literally millions of high school essays of every description, probably scraped from websites that provide "essay-writing services" (or whatever euphemism for cheating they use these days). There was a piece in The Atlantic (?) by an English teacher expressing amazement at the power of ChatGPT to write "crossover" essays (shared themes between Hamlet and TKAM, that kind of thing). Unfortunately, that's exactly what we should expect it to be good at.
Of course, I'm sure the Philistines who created GPT hold essays on Hamlet to be a quaint holdover from the days before you could get paid a six-figure salary just for dumping data into a neural network, so who cares if you ruin our ability to teach writing?
It wouldn't surprise me. I would guess it trained in part on sites like SparkNotes, and combining that with the 'essay-writing services' would make for cheating turtles all the way down.
I hate that the conversation often circles the drain of possible sentience. We are ill-equipped as a society to consider these issues properly, so the inevitable result is confusion. I remarked to a friend that this technology will be upon us in full force before we have truly begun to contemplate its impact. My friend's primary concern is that AI is becoming a perfect vehicle for disinformation. Completely valid, as is disquiet over the paths that an insufficiently constrained system may take, and other perfectly reasonable concerns have been raised in the comments here. It seems utterly unlikely to me that we will be able to exercise restraint in this sphere. It is bizarre to observe. I told my friend that the sensation is similar to being in a car that has just hit the ice and lost its steering. It is such an odd blend of horror and free-floating strangeness.
Keeping AIs from being a malign influence in society is part of the broader problem known as "AI alignment," or trying to get AIs to act according to human values. And given that the same people developing cutting-edge AI (Meta, Google, Elon Musk) obviously don't give a damn about the consequences of the technology they've already created in other spheres, I'd say there's zero chance of our having a non-destructive collective relationship with the new technology.
No doubt, many of the developers of these tools have little incentive to consider potential consequences, but in my opinion, the problem is much broader than that. I think we are collectively ill-equipped to deal with the dangers of AI for many reasons. The technologies themselves and the dispositions of their creators are just accelerants.
Mo, I agree with you, but I also suspect that it is "nothing" because you have a good amount of "something" about you. I mean that these illusions are a massive distraction and confusion to the many isolated souls running about in our market-driven world these days.
True enough, I suppose. All technology is risky. And risk properly understood is deviation from a norm - for better or worse. Personally, I see more risk in 10-year-old video gaming technology than in a chatbot. :)
Yes, in some ways, at least. Yet my son makes a cogent argument that technologies are merely emerging forms of literacy. And, while cultures rise and fall, it will end better than we think. Because Love wins!
Well, the young people I teach are woefully technologically illiterate, in addition to being barely able to string a sentence together. And that's not even touching how most people (even professional computer programmers!) often have no idea how computers work, because they operate exclusively in the realm of software. Bernardo Kastrup estimates that fewer than 2000 people worldwide understand computer hardware well enough to build even a basic machine from scratch.
It’s a shame, too; building, say, an adder or a basic ALU from logic gates is not a particularly complicated procedure, but it *is* enormously rewarding!
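For anyone curious, here is a minimal sketch in Python of the construction mentioned above: a 1-bit full adder built from the three primitive gates, then chained into a ripple-carry adder. (Python here merely stands in for actual hardware or an HDL; the gate structure itself is the standard textbook one.)

```python
# Primitive gates, modeled as functions on single bits (0 or 1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    s1 = XOR(a, b)
    sum_bit = XOR(s1, carry_in)
    carry_out = OR(AND(a, b), AND(s1, carry_in))
    return sum_bit, carry_out

def ripple_carry_add(x, y, width=8):
    """Add two integers by rippling the carry through `width` full adders."""
    carry, total = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= bit << i
    return total  # result modulo 2**width, as in real fixed-width hardware

print(ripple_carry_add(23, 42))  # 65
```

Everything an ALU does ultimately decomposes into arrangements like this, which is why building one by hand is so instructive.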
I'm currently reading Robin Wall Kimmerer's Braiding Sweetgrass. There are now fewer than 2000 people worldwide who know how to weave a basket, because robots are doing it. The only constant in life and culture is change. And I'm an eternal optimist. :)
LLM stands for "Large Language Model" (not "Logic Learning Machine"). I can't decide if you were deliberately altering the acronym, but the distinction is important. It would be a marvel if LLMs could "learn logic," but as currently constructed, any apparent logic the machine produces is born only of statistical regularities in the underlying data. Which is to say: if the computer manages to reproduce, say, a syllogism, it is only because people writing things on the internet do not regularly conclude that because Socrates is a man and all men are mortal, Socrates is therefore a house plant. If they did, the algorithm would happily insist that he is a geranium.
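To make the point concrete, here is a toy sketch in Python, nothing like a real LLM's architecture, just an illustration of the principle: a model that knows only which word most often follows which will reproduce the "syllogism" purely from frequency, with no inference anywhere in sight.

```python
# A bigram "language model": it predicts each next word purely from
# how often words follow one another in its tiny training corpus.
from collections import Counter, defaultdict

corpus = (
    "all men are mortal . socrates is a man . "
    "therefore socrates is mortal . "
    "all men are mortal . socrates is a man . "
    "therefore socrates is mortal ."
).split()

# Count which word follows each word.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def complete(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(complete("socrates"))  # 'is' -- frequency, not inference
```

If the corpus regularly said "socrates is a geranium," the model would cheerfully continue accordingly; nothing in it knows what a syllogism is.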
It’s fashionable among AI enthusiasts to keep coming up with alternative meanings of LLM. It’s what passes for wit in those circles. In this case, that’s how the bot was described.
Good evening, Dr. Hart. A couple of quick questions, as I'm not too educated in these matters.
1. In some piece, you mentioned that you find certain forms of panpsychism quite appealing. If my memory serves (and it might not), it went along the lines of everything that is having some sort of awareness of being what it is (a rock knows what it is to be a rock, etc.). The electrical pulses flitting about in a motherboard can't be identified with thought or intentionality, but since the computer is a construct within the general structure of reality as mind, do you hold that it too has a knowledge of being what it is, or awareness of some kind in this sense? It does have a telos, after all—God.
2. I'm curious about your thoughts on mind-brain interaction (if even posing that question doesn't reek too strongly of some kind of dualism). Any thoughts on how physiological phenomena are correlated with events in the mind?
3. It's always amusing to have a 'satyr question', so here's one for you: do you think a daimon could possess a computer?
Different panpsychism; you’re thinking of the materialist version I consistently reject.
I’m not a Cartesian and don’t believe in any interaction between mind and body, as if they were discrete extrinsically related things. Body is soul expressed in concrete organism.
1. I found the passage in Roland I was thinking of:
“There is one who knows what it’s like to be a rock. And wouldn’t that infinite personal depth have to express itself, almost of necessity, in a finite personal interiority of sorts? Surely the knowledge of what it is to be a rock is already the spirit of the rock as a rock—the rock knowing itself. So isn’t that very knowledge of ‘what it’s like’ already the reality of a finite modality of personal knowledge, a kind of discrete spiritual self? A personal, reflective dimension as the necessarily contracted mode in which the uncontracted infinite act of mind is exemplified in that thing?”
How does this differ from the above?
2. Interesting; I’ll await your philosophy of mind book for more, God willing.
Interesting; what is the relationship between the two, then?
Is there much of a difference between a natural unity and said assemblage? The rock itself is a composite, and the machine is one thing fashioned by forces within nature (i.e. humans).
A formal composite and an extrinsic assemblage differ only according to intrinsic or extrinsic teleology. If we knew how to engineer genuine organism, who knows? But I assume a rock is in a very "patient" sense a kind of living history.
Interesting. What do you think of so-called minimal cells like the JCVI-Syn3.0 line, which has a properly novel genome, or *de novo* attempts to construct synthetic cells?
Ezra Klein said several times recently that God, Human, Animal, Machine (by Meghan O’Gieblyn) is the best book he has read in the past year. It’s by a thoughtful former evangelical Christian with an obsessive interest in theology and technology, who traces the many parallels between thinking on AI and Christian theology. Klein says it is overwhelmingly convincing, and I agree. It’s also all about consciousness. Well worth reading for many reasons, for people who might be on this Substack. There’s also an audiobook if you find yourself wanting something thought-provoking to listen to while driving etc.
I forgot to mention, but Ezra Klein at NYT is probably the best-informed journalist writing about AI at the moment. You might enjoy the Hard Fork podcast interview of Klein from a couple of weeks ago.
This morning I finished Daphne du Maurier’s The Birds. Her last line before wrapping it up struck me as apt, not only to AI but to the suffocating daily experience of many workers under the all-pervasive commercial radio that has been piped into our workplaces for a generation now, like a daily tidal attack of ‘the birds’: “Nat listened to the tearing sound of splintering wood and wondered how many millions of years of memory were stored in those little brains, behind the stabbing beaks, the piercing eyes, now giving them this instinct to destroy mankind with all the deft precision of machines.”
I spent a career in emerging technology at Silicon Valley companies, from the early 80s up to my retirement in 2018. Much of it involved AI and its component technologies. AI is nothing but a tool, like all technology. It is not replacing any human, and no one in the world of development thinks that to be a reality. But we do realize that the tool called AI can do something humans can't, and that is true multitasking. The real question is how we as humans will cope in a society where the "tool" expedites tasks, frees up time, and leaves humans working significantly less. This is where the angst will come in, in the societal phase change we as humans will experience, just as in the agricultural and industrial revolutions.
That’s not the only problem it poses. It’s an instrument that can do considerable harm at any number of social and cognitive levels, and a tool that can be used for some very destructive ends.
Absolutely. The harmful development is occurring as we write; we see it in cyber terror. The technology is being turned into the equivalent of a disease vector for spreading cyberattacks, which is why we need to focus on how we in society will manage this development. This ship has sailed. If we want to succeed in an AI-driven, interconnected world, we need to consider arbitrage. If we don't, we may become subject to those who do; but the risk in this situation is that arbitrage may drive us into a race of destruction, to see who hits bottom first.
We as humans have always lived in a physical world; now we are living in both a physical and a virtual world. What will be the effect on our minds?
One of the key factors here in managing the "guardrails" is recognizing that our personal data is essentially quantified human behavior in patterns. We have overdeveloped a world with surpluses of this quantified data, and we'll need to establish laws and regulations to control the conduct of organizations and institutions.
Well, assuming that the AI Revolution works out the same way as the Industrial Revolution or Agricultural Revolution, lots of humans are going to be replaced, and the only people to profit will be the owners of the technology in question.
My point, which I have made in my writings along with colleagues like Bill Davidow, is that people will be replaced in greater numbers than in the previous revolutions that brought great societal phase change. It will touch white-collar, blue-collar, and no-collar people: academics, researchers, medical professionals, etc. Today, look at the number of hospitals where prostate surgery is performed by robots, with no humans involved in the actual surgery. The question will be how we as a society implement the greater good for the common good of all human beings.
I'm not sure it's a category error, perhaps not, but another dead-end is to invoke quantum theory as some kind of explanation for 'consciousness'.
Federico Faggin (he designed the first commercial microprocessor and touch screen) seems to be doing just that in his recent book Irriducibile: La coscienza, la vita, i computer e la nostra natura [Irreducible: Consciousness, Life, Computers, and Our Nature].
And of course there are many others on the same path - even Raymond Ruyer was doing this in the 1950s!
Mario Crocco notes at some length that quantum mind theories cannot refer to a plurality of finite observers...
Your machine learning comment reminds me of this relevant xkcd. https://imgs.xkcd.com/comics/machine_learning.png
John Searle’s Chinese Room thought experiment is a good one. Hubert Dreyfus, via Heidegger and Merleau-Ponty, refuted AI quite well.
The Chinese Room, while correct in principle, is an incomplete argument, because it still allows for the presence of syntax in a computer. Searle later realized that this was already to concede too much.
For not mentioning the great Dreyfus, I give this article a 4 out of 5 ;)
Dreyfus's argument is actually weak on many of the more significant logical issues.
Still...he fought the good fight.
Of course.
And if I may, as I have your brief attention, I do enjoy your writing and presence on this blue marble immensely.
I hate that the conversation often circles the drain of possible sentience. We are ill-equipped as a society to consider these issues properly, so the inevitable result is confusion. I remarked to a friend that this technology will be upon us in full force before we have truly begun to contemplate its impact. My friend's primary concern is that AI is becoming a perfect vehicle for disinformation. Completely valid, as is disquiet over the paths that an insufficiently constrained system may take, and other perfectly reasonable concerns have been raised in the comments here. It seems utterly unlikely to me that we will be able to exercise restraint in this sphere. It is bizarre to observe. I told my friend that the sensation is similar to being in a car that has just hit the ice and lost its steering. It is such an odd blend of horror and free-floating strangeness.
Keeping AIs from being a malign influence in society is part of the broader problem known as "AI alignment," or trying to get AIs to act according to human values. And given that the same people developing cutting-edge AI (Meta, Google, Elon Musk) obviously don't give a damn about the consequences of the technology they've already created in other spheres, I'd say there's zero chance of us having a non-destructive collective relationship with the new technology.
No doubt, many of the developers of these tools have little incentive to consider potential consequences, but in my opinion, the problem is much broader than that. I think we are collectively ill-equipped to deal with the dangers of AI for many reasons. The technologies themselves and the dispositions of their creators are just accelerants.
As an insider using this technology, I can assure you the headlines are "much ado about nothing."
Mo, I agree with you, but I also suspect that it is "nothing" because you have a good amount of "something" about you. I mean that these illusions are a massive distraction and confusion to the many isolated souls running about in our market-driven world these days.
True enough, I suppose. All technology is risky. And risk properly understood is deviation from a norm - for better or worse. Personally, I see more risk in 10-year-old video gaming technology than in a chatbot. :)
It's always worse than you think. Chatbots will accelerate a tendency toward effective illiteracy in the culture at large.
Yes, in some ways, at least. Yet my son makes a cogent argument that technologies are merely emerging forms of literacy. And, while cultures rise and fall, it will end better than we think. Because Love wins!
Nah.
Well, the young people I teach are woefully technologically illiterate in addition to being barely able to string a sentence together. And that's not even touching how most people (even professional computer programmers!) often have no idea how computers work because they operate exclusively in the realm of software. Bernardo Kastrup estimates that fewer than 2,000 people worldwide understand computer hardware well enough to build even a basic machine from scratch.
It’s a shame, too; building, say, an adder or a basic ALU from logic gates is not a particularly complicated procedure, but it *is* enormously rewarding!
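For anyone curious just how uncomplicated it is, here is a rough sketch in Python (treating gates as boolean functions; an illustration of the principle, not a hardware description):

```python
# A 1-bit full adder composed only of logic-gate primitives,
# then chained into a ripple-carry adder for multi-bit numbers.

def xor(a, b):
    # XOR built from OR, AND, and NOT.
    return (a or b) and not (a and b)

def full_adder(a, b, carry_in):
    partial = xor(a, b)
    total = xor(partial, carry_in)
    carry_out = (a and b) or (partial and carry_in)
    return total, carry_out

def ripple_add(xs, ys):
    # Add two equal-length bit lists, least significant bit first.
    carry, out = False, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry
```

Feed it 3 (011) and 5 (101), least significant bit first, and out comes 1000, i.e. 8. The arithmetic unit of every computer is, at bottom, this pattern repeated.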
I'm currently reading Robin Wall Kimmerer's Braiding Sweetgrass. There are now fewer than 2,000 people worldwide who know how to weave a basket. Because robots are doing it. The only constant in life and culture is change. And I'm an eternal optimist. :)
LLM stands for "Large Language Model" (not "Logic Learning Machine"). I can't decide if you were deliberately altering the acronym, but the distinction is important. It would be a marvel if LLMs could "learn logic," but as currently constructed, any apparent logic the machine produces is borne only of statistical regularities in the underlying data. Which is to say: if the computer manages to reproduce, say, a syllogism, it is only because people writing things on the internet do not regularly conclude that because Socrates is a man and all men are mortal, Socrates is therefore a house plant. If they did, the algorithm would happily insist that he is a geranium.
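One can make the point concrete with a toy sketch (a mere bigram counter, nothing like a real transformer, but the same statistical spirit): the "inference" such a model draws is simply whatever the corpus makes most frequent.

```python
from collections import Counter, defaultdict

# A toy corpus in which Socrates is (statistically) mortal.
corpus = [
    "socrates is a man",
    "all men are mortal",
    "socrates is mortal",
    "socrates is mortal",
]

# Count which word follows which across the corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for w1, w2 in zip(words, words[1:]):
        follows[w1][w2] += 1

def most_likely_next(word):
    # The "conclusion" is just the most frequent successor.
    return follows[word].most_common(1)[0][0]
```

Here `most_likely_next("is")` yields "mortal" only because that pairing dominates the data; skew the corpus toward "socrates is a geranium" and the model would obligingly conclude that instead.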
It’s fashionable among AI enthusiasts to keep coming up with alternative meanings of LLM. It’s what passes for wit in those circles. In this case, that’s how the bot was described.
In this case, the usage is based on the neural switch learning process model, to which AI true believers liken the Large Language Model:
https://en.m.wikipedia.org/wiki/Logic_learning_machine
Someone described Bing Chat as using Logic Learning Machines? This is a rather deep confusion . . .
It’s an ideological claim, not a confusion. Or so I thought.
Good evening, Dr. Hart. A couple of quick questions, as I'm not too educated in these matters.
1. In some piece, you mentioned that you find certain forms of panpsychism to be quite appealing. If my memory serves (and it might not), it went along the lines of everything that is having some sort of awareness of being what it is (a rock knows what it is to be a rock, etc.). The electrical pulses flitting about in a motherboard can't be identified with thought or intentionality, but since the computer is a construct in the general structure of reality as mind, do you hold that it too has a knowledge of being what it is, or awareness of some kind in this sense? It does have a telos, after all—God.
2. I'm curious about your thoughts on the mind-brain interaction (if even posing that question doesn't reek too strongly of some kind of dualism). Any thoughts on how physiological phenomena are correlated to events in the mind?
3. It's always amusing to have a 'satyr question', so here's one for you: do you think a daimon could possess a computer?
Different panpsychism; you’re thinking of the materialist version I consistently reject.
I’m not a Cartesian and don’t believe in any interaction between mind and body, as if they were discrete extrinsically related things. Body is soul expressed in concrete organism.
Only in the way a ghost might haunt a house.
1. I found the passage in Roland I was thinking of:
“There is one who knows what it’s like to be a rock. And wouldn’t that infinite personal depth have to express itself, almost of necessity, in a finite personal interiority of sorts? Surely the knowledge of what it is to be a rock is already the spirit of the rock as a rock—the rock knowing itself. So isn’t that very knowledge of ‘what it’s like’ already the reality of a finite modality of personal knowledge, a kind of discrete spiritual self? A personal, reflective dimension as the necessarily contracted mode in which the uncontracted infinite act of mind is exemplified in that thing?”
How does this differ from the above?
2. Interesting; I’ll await your philosophy of mind book for more, God willing.
A rock is a natural unity. A machine is an assemblage of extrinsic relata. In neither case, however, is consciousness a property inherent in matter.
Interesting; what is the relationship between the two, then?
Is there much of a difference between a natural unity and said assemblage? The rock itself is a composite, and the machine is one thing fashioned by forces within nature (i.e. humans).
A formal composite and an extrinsic assemblage differ only according to intrinsic or extrinsic teleology. If we knew how to engineer genuine organism, who knows? But I assume a rock is in a very "patient" sense a kind of living history.
Interesting. What do you think of so-called minimal cells like the JCVI-Syn3.0 line, which has a properly novel genome, or *de novo* attempts to construct synthetic cells?
Ezra Klein said several times recently that God, Human, Animal, Machine (by Meghan O’Gieblyn) is the best book he has read in the past year. It’s by a thoughtful former evangelical Christian with an obsessive interest in theology and technology, who traces the many parallels between thinking on AI and Christian theology. Klein says it is overwhelmingly convincing, and I agree. It’s also all about consciousness. Well worth reading for many reasons, for people who might be on this Substack. There’s also an audiobook if you find yourself wanting something thought-provoking to listen to while driving etc.
I forgot to mention, but Ezra Klein at NYT is probably the best-informed journalist writing about AI at the moment. You might enjoy the Hard Fork podcast interview of Klein from a couple of weeks ago.
This morning I finished Rebecca Du Maurier’s The Birds. Her last line before wrapping it up I thought was apt, not only of AI, but to the suffocating daily experience of many workers - the all pervasive commercial radio which has been piped into our daily workplaces now for a generation, like a daily tidal attack of ‘the birds’: “Nat listened to the tearing sound of splintering wood and wondered how many millions of years of memory were stored in those little brains, behind the stabbing beaks, the piercing eyes, now giving them this instinct to destroy mankind with all the deft precision of machines.”
Yes. But you mean Daphne du Maurier. Rebecca is of course her most famous novel.
Of course, yes - slip of the tongue!
I tweeted your article and one person replied that the AI programs are "decoder transformer centric neural networks. It's not really an 'algorithm.'"
Does that alter your assessment of current developments?
No. And they still function algorithmically.
I spent a career in emerging technology in Silicon Valley companies, starting in the early 80s up to my retirement in 2018. Much of it involved AI and component factors thereof. AI is nothing but a tool, like all technology. It is not replacing any human, and no one in the world of development thinks that to be a reality. But we do realize that the tool called AI can do something humans can't, and that is true multitasking. The real question is how we as humans will cope in a society where the "tool" expedites tasks, frees up time, and lets humans work significantly less. This is where the angst will come in, during the societal phase change we as humans will experience, just as in the agricultural and industrial revolutions.
That’s not the only problem it poses. It’s an instrument that can do considerable harm at any number of social and cognitive levels, and a tool that can be used for some very destructive ends.
Absolutely. The harmful development is occurring as we write; we see it in cyber terror. The technology is being turned into the equivalent of a disease vector for spreading cyberattacks, which is why we need to focus on how we in society will manage this development. This ship has sailed. If we want to succeed in an AI-driven interconnected world, we need to consider arbitrage. If we don't, we may become subject to those who do, but the risk in the situation is that arbitrage may drive us into a race of destruction, to see who hits the bottom first.
We as humans have always lived in a physical world, and now we are living in both a physical and a virtual world. What will be the effect on our minds?
One of the key factors here in managing the "guardrails" is how we view and recognize that our personal data is essentially quantified human behavior in patterns. Having overdeveloped a world with surpluses of this quantified data, we'll need to establish laws and regulations to control the conduct of organizations and institutions.
Well, assuming that the AI Revolution works out the same way as the Industrial Revolution or Agricultural Revolution, lots of humans are going to be replaced, and the only people to profit will be the owners of the technology in question.
My point, made in my writings along with colleagues like Bill Davidow, is that people will be replaced in greater numbers than we saw in the previous revolutions that brought great societal phase change. It will touch white-collar, blue-collar, and no-collar people: academics, researchers, medical professionals, etc. Today, look at the number of hospitals where prostate surgery is performed by robots, with no humans involved in the actual surgery. The question will be how we as a society implement the greater good for the common good of all human beings.
Fantastic! - 'a farrago of vacuous metaphors.'
I'm not sure it's a category error, perhaps not, but another dead-end is to invoke quantum theory as some kind of explanation for 'consciousness'.
Federico Faggin (he designed the first commercial microprocessor and touch screen) seems to be doing this in his recent book: 'Irriducibile. La coscienza, la vita, i computer e la nostra natura.'
And of course there are many others on the same path - even Raymond Ruyer was doing this in the 1950s!
Mario Crocco notes at some length that quantum mind theories cannot refer to a plurality of finite observers...
Thanks again - a really excellent piece.
"More to the point, and with a fine indifference to his readers’ peace of mind, he provided some very plausible rationales for his anxieties"
Glad to hear I'm not the only one who's lost sleep over this in recent weeks.