I'd rather read the prompt
Thanks to Jin and Wayne for reviewing this piece to stop me from publishing anything that is overtly, outrageously ostentatious ostrich drivel. Normal drivel is my specialty.
Clayton’s Website - Clayton Ramsey
I like Ramsey’s take on the whole. I’m not an educator, so I can’t share the context, but I’m not surprised by how Ramsey describes certain students’ answers.
I write this article as a plea to everyone: not just my students, but the blog posters and Reddit commenters and weak-accept paper authors and Reviewer 2. Don’t let a computer write for you! I say this not for reasons of intellectual honesty, or for the spirit of fairness. I say this because I believe that your original thoughts are far more interesting, meaningful, and valuable than whatever a large language model can transform them into.
I will point you to How to live an intellectually rich life for comments on the import of writing, but needless to say, I am a huge proponent of writing–both as a creative exercise and as a way to sharpen thoughts and the mind. A similar argument holds for reading, fiction and non-fiction both–fiction gets a bad rap from certain people, which I don’t understand. The best prose is in fiction: the prettiest words, the best sentences, the most ticklish and lithe phrases, which all but force you to speak them and dance lightly as they alight upon your tongue. So write! And read!
Ramsey argues against using LLMs for creative expression, a stance I agree with. I am hesitant to say, however, that I’ve noticed much opposition to this. Are there truly people who use LLMs for tasks that aren’t mere drudgery (whatever meniality means to you; chacun à son goût)?
I should hope that the purpose of a class writing exercise is not to create an artifact of text but force the student to think; a language model produces the former, not the latter.
I think an educator’s perspective is colouring Ramsey’s statement here. I do not condone mindless LLMing of assignments–my own usage of LLMs has been restricted, and I’ve the wherewithal to do HW and whatever else school demands without relying exclusively on some machine to aid me (excepting all the computing work I need those machines for). I do use LLMs, though, and the key to my usage is deliberateness. I’ve been meaning to write about deliberateness for a while, but let me sketch a thesis: ideally, all actions are taken after careful, necessary deliberation, after consuming all possible information pertinent to the question at hand. This is, of course, somewhat worthless, describing things in some Platonic ideal where the action can be treated as convex and so can be optimized with enough information and time.
In reality, not everything is knowable, there isn’t always an optimum, and you have very little time. So people take sub-optimal actions–i.e., take actions without bearing the full brunt of reasoning them through. I do not condone this either; however, I am far softer on those very same actions given an understanding of that sacrifice of deliberateness: of deliberately taking an action while bearing in mind the knowledge of a lack of knowledge. Deliberateness doesn’t balance out ignorance, but being aware of ignorance is itself an important first step. This idea extends far beyond this context, but I will leave that to its own piece, eventually.
Vibe coding, that is, writing programs almost exclusively by language-model generation, produces an artifact with no theory behind it. The result is simple: with no theory, the produced code is practically useless.
Looping back to LLMs: their usage must be deliberate. You must be aware that the software you write, if trusted at face value, will be crusty and sub-optimal and strange and inelegant. But it is so fast to write, so if you decide that is worth the tradeoff, using LLMs like this makes sense. Again, though, the key is that awareness, the deliberate rejection of quality for speed–recognizing the compromise. And to be clear, that quality cost is quite large: pure vibe coding is famously insecure, and surely not something you should do for any software of value. See Andriy Burkov’s posts (LinkedIn, Twitter too maybe?) for some good takes on vibe coding and “AI”.
Touching on the writing prompt again, then: even if the purpose of an exercise holds some lofty ideals of educating the to-be-educated (aka students), they may have other ideas, classifying the exercise as a waste of time, worthless, what have you. Again, I’m not saying I condone this, but it is understandable. Students stepping down this path could very reasonably do so deliberately, conceding the knowledge and self-reflection potentially begotten by the exercise in order to further some other goals (spending time with friends, playing games, reading books, calling family, etc.)–and while I’d hesitate to call that right, I’d also hesitate to call it wrong. It is not ideal, of course, but the awareness of the loss is itself enough to convince me that it is not wholly a loss. They have done the economics, so to speak, performing some kind of fuzzy cost-benefit analysis, and have struck out upon this path with that (business) calculus in mind.
I still think that to be a loss for the student, so I’d think Ramsey and I are in accord in that regard–but I don’t know if I can condemn the action either, under the (strong) assumption that the student is taking it deliberately.
So, in short, a language model is great for making nonsense, and not so great for anything else.
This is a forceful statement, used more to close out the piece than spoken from the heart, I’d hazard. LLMs are not intelligent–hence my discomfort with the continued abuse of the term “AI” to refer to an artificial, certainly-not-intelligent-but-not-unintelligent thing. However, they are very useful, given the right task and the right context. There is a time and a place, and they may be narrow, but LLMs are revolutionary. Just not for everything, and there again I think Ramsey and I would agree.
It is hard to overstate how monumental a shift modern ML allows. Tech fads crash in and then fade out like clockwork (the internet itself, the cloud, IoT, crypto). There were bubbles associated with each, but the underlying tech is fundamentally good and slowly became a silent mainstay (crypto’s role is giving reprieve to the venerable Ponzi). The Internet itself was once a fad, and it certainly didn’t fade away. Snake oil salesmen are selling “AI” like every new technology before it: it will “solve” the world, so long as they get their money. It’s all unintelligent, artificial hype. Businesspeople don’t care about the underlying value of this modern tech–but there is value. Computers can understand your speech (mostly), plain and simple. That is a big deal: a barrier that had never been surmounted has now, certainly, been mounted. This is new. And that is exciting. That is the revolution.
The piece was pretty good overall. I think I may have wrongly attempted to extend some of its arguments beyond the context in which they were presented, but it is interesting nonetheless to see how far these ideas hold.