There’s been so much news about developments in AI that I thought I’d pose a query to our field: Is AI in a position to contribute anything to the study of medieval philosophy?
I’m not interested in hand-wringing about how our students might use AI to cheat. That strikes me as their problem, not mine. I’m interested in how we might use AI for our own benefit, in our research.
- Has anyone experimented with ChatGPT?
- Has anyone set up a program that would translate scholastic Latin into English? This seems like it should be relatively easy to do, but so far as I know it hasn’t yet been done.
- What else might modern technology be doing for us?
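On the Latin-translation question, one low-effort prototype would simply wrap a chat model in a fixed prompt. The sketch below (Python) only builds the request; the actual API call, shown in a comment, is an untested assumption that would require the `openai` package and an API key, and the prompt wording is purely illustrative:

```python
# Sketch: framing a scholastic-Latin translation request for a chat model.
# Only the prompt-building step is shown live; the API call itself
# (commented out) would need the `openai` package and an API key.

def build_translation_prompt(latin: str) -> list[dict]:
    """Return a chat-style message list asking for a careful English rendering."""
    system = (
        "You are a translator of scholastic Latin. Render the passage into "
        "clear English, preserving technical vocabulary (e.g. keep "
        "'quidditas' as 'quiddity'), and flag any readings you are unsure of."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": latin},
    ]

messages = build_translation_prompt(
    "Materia prima est pura potentia, nullam habens formam."
)

# To actually translate, one would send `messages` to a chat-completion
# endpoint, e.g. (untested sketch):
#
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(model="gpt-4", messages=messages)
#   print(reply.choices[0].message.content)

print(messages[1]["content"])
```

Whether such a pipeline is "relatively easy" in the sense that matters — reliable on genuinely difficult scholastic prose — is exactly the open question.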
Send in your replies, and maybe we can all take an extra week’s vacation this summer.
I’m somewhat surprised that there isn’t a reliable transcription algorithm yet (there have been attempts, but none very successful so far). Given that Siri can be trained to read even my crappy handwriting, Gothic bookhand doesn’t seem like such a huge challenge. Transkribus is the main attempt I know of in this direction, but it is not quite there yet.
Not really AI, but the classics folks have been doing some cool stuff that makes working with large texts easier (enabling text comparisons, lemmatizing, etc.; cf. cltk.org).
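For anyone curious what lemmatizing buys you in text comparison, here is a toy Python sketch. The mini lemma table is hand-made purely for illustration (real work would use a Latin NLP library such as CLTK); the point is that two passages can match on dictionary forms even when their inflections differ:

```python
# Toy sketch of lemma-based text comparison. A real project would use a
# Latin NLP library (e.g. CLTK, cltk.org); this tiny hand-made lemma
# table exists only to illustrate the idea.

TOY_LEMMAS = {
    "materiam": "materia",
    "materia": "materia",
    "formam": "forma",
    "forma": "forma",
    "habet": "habeo",
    "habens": "habeo",
}

def lemmatize(text: str) -> set[str]:
    """Map each word to its lemma, falling back to the surface form."""
    words = text.lower().replace(",", "").replace(".", "").split()
    return {TOY_LEMMAS.get(w, w) for w in words}

a = lemmatize("Materia prima nullam habet formam.")
b = lemmatize("Forma materiam perficit.")

# Shared vocabulary despite different inflections:
print(sorted(a & b))  # → ['forma', 'materia']
```

A plain word-for-word comparison of those two sentences would find no overlap at all, which is why lemmatization matters for this kind of work.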
Nicola Polloni (Leuven) sent me this long and thoughtful reply via email:
I’ve read your latest post on ChatGPT and was intrigued. Indeed, I’ve been quite impressed by ChatGPT since its release and have tried to check its usability in different circumstances and ways (not as many as I would have liked: it’s a horribly packed period!). Here are my short notes about my personal experience with it.
1) Defining hylomorphism and basic concepts of medieval philosophy:
I have asked ChatGPT to provide a reliable, concise definition of hylomorphism and related notions (form, matter). As you know, the interface has been designed to give feedback to the algorithm, helping it devise better answers to queries. The definition of hylomorphism given at the end of this quick “training” was quite good and surely reusable in a course. However, no real insights were given: no ideas or new points of view that might facilitate research on the topic. The same happened with other concepts: prime matter, matter, Aquinas’ theology, etc.
2) Talking about minor authors and confronting positions:
I have asked ChatGPT about some medieval authors, with diverging results. For instance, the algorithm is able to say something interesting about Aquinas, something vaguer about Roger Bacon, and nothing at all about minor philosophers like Francisco de Toledo, Dominicus Gundissalinus, etc. The algorithm is able to compare some claims made by major philosophers, yet without giving much insight and, again, nothing valuable for research and interpretation (best-case scenario: it’s useful to students, with many caveats). In addition, there are some simplifications that sometimes become mistakes. For instance, ChatGPT is “convinced” that Aristotle himself claimed that prime matter is a pure potency (it seems to share the old-fashioned tendency to merge Aquinas and Aristotle).
3) Translating from Latin (and Chinese) into English:
ChatGPT is able to translate Latin into English. I’ve checked this function (or ability?!) with a few samples of text because I was very curious. In my opinion, it works far better than other tools (e.g., Google Translate), at least in some circumstances, but not too well. When the Latin becomes more difficult (not syntactically, but semantically: it seems to work fairly well with convoluted constructions), the rendering is far less reliable. I have also asked the algorithm to (re)translate a technical Chinese text on hylomorphism (taken from an article that I am publishing in China and that was translated by some colleagues). The outcome was quite impressive, especially considering that Chinese is a complicated language, with sinograms having a plurality of different, sometimes unrelated meanings, and that many concepts of Western philosophy are still new in Chinese. However, my knowledge of Chinese is still far from good. Were it reliable, ChatGPT would dramatically facilitate the communication of research across distant languages, fostering accessibility (but it’s too soon to say how reliable it actually is).
4) ChatGPT as a career consultant:
In Leuven, I have been tasked with teaching graduate students how to strategically design their academic careers (I know, it’s sort of hilarious considering that I don’t have a permanent position, but at least they know what may await them). In this context, I was curious to check whether ChatGPT was able to give advice on more practical matters like career opportunities for philosophers outside academia, strategic choices concerning the job market, etc. I have to confess that I was positively surprised by its replies. When tested on matters outside the philosophical “niche”, the algorithm truly rocks. For instance, its replies to the question about non-academic career opportunities for philosophers were very detailed and expanded on the main solutions usually offered by books, articles, and real consultants. Yet again, this is not related to our philosophical work.
5) ChatGPT as a PR consultant:
ChatGPT is very good at generating catchy titles for books, events, and other items. The outcome depends on the description of the item you want a title for. Let me explain. When I asked for a catchy title for a YouTube channel on medieval philosophy, the options that the algorithm offered were remarkable and catchy indeed. I assume it has data related to all YouTube channels and is able to assess which terms lead to more views, applying that knowledge to a broad concept like medieval philosophy. Something similar happened when I asked for titles for a workshop on medieval science and quantification: it worked fairly well because these are broad topics. Yet, when I requested options for the title of a volume about Ibn Gabirol (a minor author, as you know), its replies were not as good, and in some cases they included mistaken information.
6) ChatGPT as an artist:
The final set of queries I asked ChatGPT to perform is far more bizarre. Since I love to surprise my students with something they are not expecting, especially when it is related to something they can hear on the news, I asked the algorithm to produce poems in quatrains and tercets on hylomorphism and prime matter. It was hilarious! Yet, in this case too, the prioritization of rhymes sometimes led to evident mistakes in the content of the poem.
To wrap it up: my general impression is that ChatGPT is a marvelous device with much potential. I caught glimpses of that potential when dealing with questions that are more related to the “real world” (i.e., not research) and practical problems. Naturally, this is a result of the algorithm’s training, which is based on philosophy and the history of philosophy only marginally. Consequently, its use for academic research purposes is limited, at least currently. Students may benefit from it, although there are some simplifications and mistakes that may hinder their learning process (but in my opinion, this is not our problem once they have been informed about the ethical issues and the partial reliability of the algorithm). We may benefit from it, at least in part (e.g., with translations from Asian languages), particularly when and if its reliability has been proven.
All this will evidently change many things and pose some fascinating challenges for all of us. As you’ve seen, there’s already some controversy about articles co-authored by ChatGPT (but this looks to me like a PR operation more than a real issue). What if one were to quote a definition given by the algorithm? That can probably go in a footnote, with no issues whatsoever. But what about a translation? That would probably be messier, since translating and interpreting a medieval text is a fundamental part of our interpretative job. Finally, as far as I have seen, there is no problem (yet) with the generation of new ideas and interpretations. When asked to comment on texts, ChatGPT mostly produces a summary of the text with a few general remarks on the context of the theory in question (case study: hylomorphism, once again). However, this limitation is due to the relatively small amount of data on the topic currently available to the algorithm, and this may change with the new version of the algorithm, its current release being a sort of global training.