five top things i’ve been reading (thirty-fourth edition)
the latest in a regular 'top 5' series
Don’t Outsource Your Thinking to ChatGPT, Victor Kumar
The Trolley Problem, Judith Jarvis Thomson
Philosophical Malpractice, Daniel Kodsi and John Maier
A mysterious LLC is using a 300-year-old law to target D.C. sports betting, Meagan Flynn
Eschaton, Darlingside
This is the thirty-fourth in a weekly series. As with previous editions, I’ll move beyond things I’ve been reading, toward the end.
1) This week I enjoyed reading Don’t Outsource Your Thinking to ChatGPT. This is a new article by Victor Kumar, published on his Substack, in which he makes two main arguments about the instrumental value of LLMs. First, Kumar argues that just because LLMs can be used for bad ends, this doesn’t mean we shouldn’t acknowledge and exploit their vast application value. “Wanting to ban LLMs because students use them to cheat”, he tells us, “is like wanting to outlaw cars because criminals use them as getaway vehicles”.
Second, Kumar argues that just because part of LLMs’ vast application value pertains to “specialized cognitive tasks”, this doesn’t mean we should fully outsource our intellectual endeavours to them. Here, he makes the following analogy:
“An LLM is actually like a Swiss army knife. It can help you brainstorm, generate examples, and copy-edit. It’s also great at drafting generic emails. But when it matters, the last thing you want to use an LLM for is to produce copy for others to consume—the copy is bad and outsourcing writing is bad for you. Their real value lies, rather, in generating inputs to your own thoughts.”
Using another couple of neat analogies to home in further, Kumar argues that when we’re using LLMs for research purposes, we should approach them like ‘choose your own adventure’ books, rather than player pianos. In other words, don’t get LLMs to create output that imitates what you might’ve thought up. Instead, use them selectively and iteratively to “deepen your understanding” and “spark [your] curiosity”.
To return to the title of Kumar’s piece, “outsourc[ing] your thinking to ChatGPT” is easy to avoid, on a superficial level. I mean, you literally cannot get someone else to do your thinking for you: only you can think your thoughts! And LLMs — which, as I’ve argued before, aren’t ‘someones’ — cannot even think for themselves, never mind for you. But none of this, of course, is what Kumar means. Rather, he means that we should protect our reasoning skills against LLM overreach.
Beyond those two main arguments about LLMs’ instrumental value — both of which I buy — I therefore read Kumar’s piece as raising one of the most important questions of the current AI moment: as a writer, where should you draw the line on your LLM usage? This is a question I think about a lot, mainly because it has particular relevance for us philosophers, who use writing as a central means of both doing and sharing our philosophical work.
I use GPT (and sometimes DeepSeek) for two main purposes. First, I use 4o (and sometimes o3) in the way I once used Google. That is, if I need or want answers to practical questions about the external world at very short notice — is this noisy white box in my freezer an ice machine? what are my options for dumplings within the radius of a ten-minute walk? which paintings should I definitely not miss at this gallery when taking my tastes into account? why do Americans spell this word in such a weird way? — GPT is now my standard first port of call.
Second, I use o3 (and sometimes o3-pro) to check my work. This second usage provokes serious complications, however, because there are certain kinds of questions that I definitely don’t want GPT to answer for me, even accidentally. Most of all, if I’m writing something in order to work out and share my views on some particular philosophical problem, then the last thing I want is for GPT, while ‘checking my work’, to tell me my eventual conclusion in advance. Indeed, if GPT did this, then it wouldn’t be my conclusion! And in a metaphysical sense, which happens to be really important to me, it would be precluded from ever becoming my conclusion.
The deepest concerns I have about involving GPT in my work practices therefore don’t just relate to my desire to act with integrity, or my desire to develop my reasoning skills. They also relate to my love of solving important philosophical problems for myself — and the life-purpose-related sense of achievement I derive from feeling as if I’ve done so. I want this for every philosophical problem I seriously address. If I didn’t, then I wouldn’t be working hard on them in the first place!
To my mind, this is one of the great things about doing philosophy. That even though many of the problems you work on — as a philosopher — are problems that many philosophers have worked on before you, nonetheless they are your problems, while you’re working on them. And even though your eventual solution may well match other philosophers’ solutions in some broad or even quite narrow sense, your solution will be valuably different from theirs, because your approach and reasoning will have been different. Minimally, this is because you’re the one who did it! But also, not least, because I believe that two otherwise identical conclusions differ — in various important senses — when the reasoning behind them differs.
It’s already the case, therefore, that I guard hard against the influence of other people’s thinking when I really care about solving a philosophical problem for myself. I don’t read directly relevant stuff about the problem until I’ve worked out my own position, for instance. Even though I’m well aware this is inefficient! And I tend not to talk about these problems, during the ‘working out’ stage, with other people who might accidentally ‘help’ me! Nonetheless, generally, I’m happy to feed drafts of my philosophical writing through GPT a lot sooner than I’d share them with humans.
This is partly because I can constrain GPT in a much more hardcore manner than I’d ever want to try to constrain my fellow humans. I mean, you can hardly say to your philosophical friend, “Hey, I want you to read my draft to me without commenting on it at all, just so I can hear it out loud!”, or “Hey, I want you to read and think about my draft, but then restrict your comments to whether any of my premises contain empirical factual errors!” Whereas I have no concerns about constraining GPT in these ways, for these ends.
This is because, as Kumar’s piece does a great job of reminding us, GPT is — among other things — a tool. It’s an astonishing human-built tool! So it’s ok to treat GPT as a means to these ends. Whereas it’s not ok to treat humans in this way, because humans — as living things with interests and rights — are not, and should not ever be conceived as, or used as, tools.
Of course, GPT is not a perfect tool for these work purposes of mine. I never rely solely on its claims about my factual errors, for instance — particularly claims regarding the content of the writing of other philosophers. As I’ve written here before:
“there’s no way I’d trust GPT for important exegetical purposes. But I wouldn’t trust human-written secondary literature for that, either. If you want to be sure what some particular philosopher wrote in some particular text, then go and read it.”
And, on the more general ‘constraints on comments’ matter, I’m constantly reminding GPT that I never want any rewrites or new ideas. I’m pretty sure it’s got this by now, and thankfully — so far — it’s never wrecked my day (or more likely week, month, or year!) by solving an important philosophical problem for me. That is, it’s never put together the problem I’m working on, with its ‘knowledge’ of my approaches, values, intuitions, red lines, style, and so on, to tell me what I ‘would’ or ‘should’ conclude.
But I still worry about blurred lines, here. What does it really mean to say that I don’t want ‘new ideas’? Without sounding like Locke, what counts as an ‘idea’, for these purposes? I mean, I’d be ok with GPT rescuing the integrity of my argument by pointing out that I’d accidentally included a stray ‘not’, somewhere. But this kind of ‘coherence check’ can quickly slip into something more substantive — if it hasn’t already!
And in the past, I used to ask GPT to tell me if I’d missed any obvious objections, in my final drafts. But the relief I felt, each time, at concluding that I didn’t need to integrate responses to any of GPT’s objections — because I’d already covered these objections, or because they were insufficiently relevant — was surely in part related to worrying that I’d asked GPT to do something I should’ve done for myself. So I’ve stopped asking for that.
Anyway, one point I want to make by going into this level of detail about my own practices is that I think Kumar’s lines on all this are interestingly different from mine. Ok, these differences probably largely depend on what Kumar counts as an LLM “generating inputs to your own thoughts”. But unlike him, I’m not so keen on the idea of GPT “spark[ing] my curiosity”, and I definitely don’t want it to generate examples for me. I’m probably being over-precious here, however!
Moreover, if GPT came up with a great example in a chat I was having with it about some other, seemingly unrelated philosophical matter, and then this example later became relevant to some philosophical problem I was working on, then I’m sure I’d be happy to use the example with attribution to GPT. So I’m probably being inconsistent, too! After all, philosophy is philosophy: it’s really hard to separate out topics and domains and, yes, ideas. I wonder where other philosophers currently stand on these matters.
One final thought on LLM usage. I watched this podcast clip yesterday, in which Sam Altman discusses the risks involved in using GPT qua therapist or lawyer. These risks include those relating to the fact that human therapists and lawyers are typically bound by confidentiality, whereas, as Altman openly admits, companies like his could be legally required to reveal the content of their users’ intimate LLM chats.
I found Altman’s stance on this, in the clip, surprisingly passive. We urgently need policy people to take responsibility here, he says! But is there an explicit warning on the GPT site, I wondered. I’ve never noticed one, and I use it many times every day. Has Altman talked much, publicly, about these concerns, before?
I never use GPT for such personal purposes. I’ve always assumed that writing intimate facts about my own, or anyone else’s, life into the GPT text box is totally crazy. Relatedly, I’ve assumed for a while that demand for psychologists (albeit not most kinds of lawyers) will persist and almost certainly boom as AI becomes better and better. But apparently, many other people do use GPT to this end...
2) As part of my ongoing practice of reading classic twentieth-century philosophy papers, I returned this week to Judith Jarvis Thomson’s The Trolley Problem (1985). As I wrote in a recent Eclectic Letters piece:
“I really like the way Judith Jarvis Thomson does philosophy. I often disagree with her conclusions; I think she has odd intuitions; and it annoys me that she ignores premises in obvious need of consideration. But I don’t think there’s anyone better at combining smartness, sincerity, and rigour, in this way that makes you want to stop reading and go do some philosophy yourself. Nozick’s a competitor, but he tries too hard. The two of them are also bound together by their excellent use of the thought experiment — something that gets an unfair rap these days. What a surprise that a technical mechanism, designed to isolate and test abstract ideas, cannot perfectly direct our practical reasoning!”
The trolley problem is probably Thomson’s second most famous thought experiment, after the ‘violinist’ thought experiment, which she uses in her A Defense of Abortion (1971). Technically, the trolley problem is really Philippa Foot’s thought experiment, because Foot wrote about it first, even if she didn’t name it as such. But Thomson’s trolley-problem paper is extremely well known, and in it she invents some of the best-known trolley-problem variants, including the famous ‘fat man on a bridge’ variant.
I’m afraid all this lead-in is just a teaser, however. I was intending to write at length about the Thomson paper, today. And in particular, I was intending to relate my thoughts on it to some thoughts I’ve been having lately about what I see as the infinite value of each individual human life, and why I think this infinite value makes it impossible to weigh up different numbers of lives against each other.
But I want to spend what’s left of this evening thinking about democracy, in preparation for the second episode of my new philosophy podcast, which I’m recording on Tuesday morning. So you’ll have to wait until next weekend for my further thoughts on Thomson…
3) I really enjoyed this recent Washington Post article about the most fantastic rule-of-law case. Put simply, a “mysterious Delaware-based LLC” is currently appealing to the ‘Statute of Anne’ (an eighteenth-century British law, which provides some protection against gambling losses, and which somehow “found its way to the District of Columbia”) to try to recover hundreds of millions of dollars. So far, so good.
But the article tells us that “[on] Monday, D.C. lawmakers may vote to change the Statute of Anne for the first time in decades by clarifying that the 18th-century law does not apply to legalized modern sports betting”. Retroactive law-making, you yell! It seems, from the article, that your opponents have already started pushing back, yelling intention, jurisdictional changes, temporal relevance, coherence, you name it. Definitely one to watch and think about…
4) In this recent Philosophers Magazine article, Daniel Kodsi and John Maier raise strong objections to arguments presented in an amicus brief signed by a load of Yale philosophers. The brief (which I haven’t yet read in full) was written in relation to a recent Supreme Court case, United States v. Skrmetti, in which it was eventually held that “Tennessee’s law prohibiting certain medical treatments for transgender minors is not subject to heightened scrutiny under the Equal Protection Clause of the Fourteenth Amendment and satisfies rational basis review”. Or, as Kodsi and Maier summarise, “banning medical interventions on minors that disrupt their normal sexual development does not constitute invidious discrimination”. The Yale philosophers’ brief argued that discrimination did obtain.
Again, I haven’t time now to go into detail about my views on the Kodsi-Maier piece, except to say that it should, sadly, be required reading for many people in philosophy departments. It’s a piece that attempts too much, but many of its arguments are, to my mind, totally spot on.
In the past, I used to write about these matters a lot — particularly in relation to my concerns about what I believe amounts to the mutilation of children’s bodies, and also about the ways in which philosophers have responded to the topic of gender identity more broadly. For various reasons, I don’t feel much need or desire to write about these things anymore. But I’m really glad that Kodsi and Maier wrote this piece.
5) A couple of weeks ago, a friend introduced me to Eschaton (2018), an electronic-sounding pop song by a contemporary American group called Darlingside. I’ve listened to it multiple times every day since. One of the many things I love about this song is that when I listen to it, I in some sense enact its central line: “I hear the eschaton”. The BU Bridge also now stands high on my growing mental list of American bridges I’m planning to visit.