This framing feels incomplete to me. The choice between "barrier" and "no barrier" misses something. The limit isn't in God's willingness to give; it's in the creature's capacity to receive. And even that isn't a hard ceiling; it's more that the experience is enabled through the Son, who makes approachable what otherwise simply couldn't be approached. Not a wall. Not no wall. More like: the way is Jesus. And if anything, my experience is irreducibly Trinitarian: each person distinct but not separable.
Well, this points me in the direction of a partial answer to my question about your epistemology.
Thank you.
-mb
Maybe I've just never forgiven Fr. Kimel for those ChatGPT-generated articles he posted a few months ago, but this piece smells strongly of AI to me. It doesn't help that it also doesn't look much like Fortuin's other published writing. I could be wrong, but there's a lot that makes me suspicious.
No, that’s silly.
Now you are sounding like my daughter's literature teacher :) (although she has a good reason, as cheating with ChatGPT in her class is rampant). I think you may be somewhat misled by the frequent use of em dashes and italics, but Fortuin has often employed them before on the same website. Would an AI use expressions such as “Heave Ho” (it would hyphenate the phrase, use lower case, and, in any case, wouldn't prioritize such a colloquial expression in this type of article) or “nothin new under David’s sun” (it would certainly add a "g" or an apostrophe at the end of the first word and, most likely, wouldn't come up with this exact phrase)?
I didn't think it was exclusively AI-generated. I figured it was probably a summary of a human/AI "conversation" with the DBH excerpt as a prompt, or maybe a human-composed text fed through an AI and then edited into its final form by the author. That would account for the "human-isms" (typos, unusual turns of phrase) while also explaining the presence of stereotypically AI characteristics (short paragraphs, bolded headings, italics, lots of bulleted lists). Maybe that's not much of a case, but I've developed pretty serious trust issues toward text on the Internet over the past few years, and it's easy to set off my AI-dar.
I'm writing a thesis right now, and the "AI-isms" you just described (except for bulleted lists) are things I've naturally found myself doing as I go to keep things organized and intelligible to the reader. Also, as my past professors are well aware, I was (over)using em-dashes long before the current AI craze. If someone wants to critique these choices, they are welcome to. If someone accuses me of using AI, that's an entirely different story, and unfortunately I have no way of definitively proving them wrong (the extensive analysis which anything even approaching such a disproof would take is too time-consuming for most people to bother with).
If these really were "AI-isms" in any reliable sense, people would (or at least soon will) simply edit text after it gets generated to remove them. It's incredibly easy to remove italics, to unbold text, and so on. If those characteristics are or were semi-reliable AI markers, they won't be for long, assuming they aren't already useless.
Of course, sometimes it is obvious. I correctly accused a friend of mine of using AI fairly recently on a short paper, and he didn't even deny it. But that was because I was 100% sure: the information in the paper was flatly wrong in a specific way no human writer could have possibly misconstrued.
But generally, the risk of damage to a person's self-esteem or reputation by a false accusation is so high that 95% of the time I see accusations as pointless. Those who think they have some kind of infallible clairvoyance are going to harm relationships and social trust more often than they actually expose anyone. Plus, a person who actually uses AI can just deny it, so the "exposing" isn't all it's cracked up to be even if one happens to be right. If one happens to be wrong, they've potentially made someone into a pariah for the crime of basically nothing (at most, being a bit of a boring writer, but sometimes not even that, just including too many italicizations or whatever).
Meanwhile, the accuser has risked nothing. If they're right, then congratulations! The closer the text was to something a human might have written, the more risky and reckless the accusation, but also the more impressive if they're right! If they're wrong, well, it's just because their expectations were "so high" and the piece they were accusing was (apparently) impersonal/shallow/bad, and either way, no one will ever know for sure. It's practically a win-win, especially since any potentially bad motives can be conveniently reduced to anxiety about AI. I'm not accusing you personally of any of this, but obviously, the context is not ideal for promoting sobriety and restraint.
The only way to tell consistently going forward will be a legal requirement to disclose AI use and/or an irremovable in-text watermark of some kind (I'm not a computer guy, so I have no idea whether the latter is possible, but surely much could be overhauled to make it so). But we need reliable proofs, not "-isms" that will probably keep changing. Relying on those is a recipe for disaster, no matter how high one thinks one's success rate is.
Well, as you'll notice from my original comment, my suspicions are not based purely on the presence of "AI-isms." Rather, there are two other circumstantial factors that I'm taking into account: 1) the fact that Fr. Kimel has published articles written entirely by AI in the past, so it's reasonable to assume that he has no principled objection to its use on Eclectic Orthodoxy, and 2) the fact that the "AI-isms" I mentioned represent a departure from Fortuin's style in his earlier published works. It's true that some people just write short paragraphs with lots of subheaders, em dashes, and bullet points (nothing wrong with that), but it's also true that Fortuin's writing usually features extremely long paragraphs (his essay "Sola Scriptura, Holy Tradition, and the Hermeneutics of Christ" has multiple 12-sentencers) and makes use of the AI-isms in question only rarely. It was, I confess, the resemblance of this article to the earlier ones Fr. Kimel wrote with ChatGPT that initially raised my suspicions, but had Fortuin's earlier pieces on the site followed the same format (they're all from the pre-AI days), that would obviously have settled things in his favor. So, incidentally, would a statement about the article's composition process from Fortuin or Fr. Kimel; I'm not accusing either of them of being liars. Fortuin is a professor; he can obviously write well without help. But lots of people who know how to write still use AI, and this is especially true among Christians. Again, there's a specific history and context to my suspicions; I'm not the teacher giving a student an F just because something "looks AI." In fact, I'm not very confident at all in my ability to identify AI writing on purely stylistic grounds, so I'm not accusing Fortuin of writing like a bot.
I'm just on my guard because I know how easy it is to be burned.
Everyone I know in real life is more sanguine than I am about their ability to spot AI, and all of them have been taken in multiple times. My wife just had to remove a bunch of AI songs from our kid's morning playlist because I pointed out that 1) the artists had no documented existence prior to 2025, and 2) they were releasing more than a hundred songs a month. Sure, technically neither of those things proves they were AI, but a presumption of innocence in our technological and cultural moment would simply be credulous. Imagine living in a country where most of the currency was counterfeit, but then being reluctant to subject any individual transaction to scrutiny. That's the situation we're in right now, only with culture instead of money. Why should we hesitate to hold the money up to the light and look for the watermark? This has nothing to do with scoring points; it's just due diligence. If we aren't doing that, we are being had.
I was talking about public accusations (or even public-forum speculations visible to the author), not private judgements about what not to read or listen to. Obviously the songs you mentioned are almost certainly made by AI and should indeed be avoided (a hundred songs a month is a hilariously obvious tell). Private skepticism is perfectly understandable and justified.
But in the public arena we should, I think, hesitate to "hold the money up to the light" for everyone to see for the reasons I mentioned: the possibility of error is high, the risks to the person's reputation and self-esteem are significant, and it erodes what social trust might otherwise remain.
Just because someone has hosted AI-generated articles on their website before (which I only just heard about in your first comment and which is, of course, disappointing) does not mean that a specific author not otherwise known for using AI has used it. A few minor style changes shouldn't be sufficient to level such an accusation publicly.
I, for instance, actually laughed to myself after noticing the differences between two sets of notes I had written to myself while trying to solve one or another theological problem. Despite the similarity of the two topics and the fact that I had written both sets of notes to the same person (myself) mere hours apart and in more or less the same mood, they were entirely stylistically distinct (both in formatting and in the kind of language employed) to the point where they seemed to have been written by two different people. Even some of the words used to represent the same ideas were consistently different. None of this was intentional.
What amused me at the time was that I realized that if my notes had been subject to the analysis of historical/literary scholars (e.g. Biblical scholars) living a thousand years later, they would have assumed either years of development or two different authors, whereas both notes were written in the same afternoon, to and by the same person, about similar topics.
The same problems apply when trying to determine if someone is using AI: "style" is not (usually) a reliable metric. If we are going to accuse someone publicly, we need something more concrete.
But that's just my view. I can understand if someone disagrees.
Anyway, something in your reply caught my eye:
". . . lots of people who know how to write still use AI, and this is especially true among Christians."
I was not aware of this greater tendency amongst Christians. Assuming that it's real, why do you think it might be the case?
Well, that's the thing: for me, that "social trust" ship has already sailed. Everything on the Internet is AI until proven otherwise; my good opinion, once lost, is lost forever. I have my own theories about the "Christians and AI" thing (it may be unique to the United States), but it's definitely real. It was actually picked up on by sociologists pretty early: there was a Barna study in 2023 showing that Christians were far more likely to be using AI than the general populace, and I've heard plenty of horror stories from employees at Christian colleges and NGOs about being pushed into using it by the higher-ups (the ministry where my in-laws work is discussing creating an AI simulacrum of their deceased founder to answer theological questions, to give just one example). If you ask me, it comes down to the fact that most American Christians are political conservatives, and American conservatives tend to conceive of human flourishing in purely moral or spiritual terms, while treating economic and technological conditions as a simple matter of linear "progress." ("Guns don't kill people, people kill people" isn't just a slogan; it's an entire ethos.) Catholics and Orthodox tend to be somewhat less naive on that score, but the need to be culturally "relevant" can also lead them astray. That's my take, at least.
I'll leave it at that, since I know our genial host doesn't like it when a thread gets too off-topic, and he is just as unpersuaded as you are by my skepticism about Fortuin's article.
I think this exchange between you and James is exactly why we have to be quick to thicken our skins when we're the ones poked, and instead support the newly natural suspicion that "all is AI" now, rising above this as one more reason (and perhaps one of the greatest) to recognize there's something more than just a "new tool" in the emergence of AI.
À la Kingsnorth's "Against the Machine" thesis.