Updates, mainly, ...
or, an annoyed old philosopher shouting at the world
It’s been a while since I sat down to write something with Substack in mind. I’ve been working on a load of different things, but in terms of writing the main task has been an essay for the next issue of Guillotines that’s proving trickier than I anticipated. The focus of the issue is on the concept of caring militancy that we’ve been developing in Plan C, and I am writing a piece from the perspective of being a carer as well as a care worker. This is a subtle but important distinction, as being a carer involves a degree of intimacy and emotional connection that makes the whole caring practice more complex. So perhaps more revealing, more personal. And I’ve had to trash two whole versions of the piece so far…
I didn’t have to trash them, of course, but I did. Deleted. Quite aggressively in fact - deleted, purged, removed from backups and clouds, totally trashed. That itself was a strange thing, but the pieces just felt so wrong. I think part of this was the heightened sense of being true to the subject, but there was also the anger that they contained. A kind of inchoate, fuming, spiteful anger, directed mainly at those who don’t seem to have to care. The care free. Or the care light brigade. It’s been a curious process in which I encountered something unexpected - the unconscious expressing itself in the text before it could be thought through.
I’ve been stepping back a little and had some of those strange conversations about trying to write something that isn’t there yet to read, strange because these conversations aren’t about an object, or even an idea, but about a kind of sensation on the tip of one’s tongue. A new title for the piece, a new line, a small crystal that can begin to accrete perceptions and propositions, all this for something so ephemeral and transitory.
In other news I’ve been digging deep into the whole LLM and AI thing. Unlike so many on the left who seem quick to settle for a generally hostile approach - megacorps bad = everything bad, a position with about as much critical function as Trump on a roll - I find the production of an LLM able to pass the Turing test via a methodology based on calculation to be a truly fascinating moment. Scientifically and technologically it is a remarkable breakthrough, and one that has only been so rapidly picked up by capital precisely because it is a breakthrough.
Some of this response is due to my own background interest in the philosophical work around AI and consciousness, an interest that was there early on in my philosophical studies (mid-90s). The whole Chomsky/Quine debate, for example, was a standard teaching module, useful for thinking about the structure and nature of language, albeit usually with third-year students. I never much liked Chomsky, although I loved his poverty of stimulus argument, so it struck me as curious that the whole debate now seems pretty redundant, solved in favour of Quine.
Or the phenomenologists, or Wittgenstein, to both of whom I was a close fellow-traveller, and their whole emphasis on understanding and its difference from calculation, on how the very idea of a ‘calculating’ mind was simply a category mistake. Or Ryle himself and the concept of a category mistake… the whole philosophy of mind and consciousness studies offered a rich, interesting and ‘real world’ set of problems and arguments, concepts and distinctions.
So it’s disappointing to read 98% of the guff and bluster around this whole technical and analytical development. The extinction narrative is little more than a scam line set up by Oxford no-marks, the desperate pleas to a lost humanity sound like liberal corpses spouting their usual shit, and the moral claims to ownership and copyright feel at best disingenuous and at worst downright obnoxious. Nothing analogue gets scraped by a fucking bot, so if you put it online I expect it to be free; and if I can get it free I will, and I’m not about to morally condemn anyone else for doing so.
My own approach has been to see what the tech can do.
I rapidly realised it was of little use in strict humanities work. It can provide some very useful research-assistant tools, and things like NotebookLM and other RAG systems can be fun at first, but they effectively do little more than an RA helping out with a literature review. Cheaper than an RA of course, but then I never had an RA in my University days, always worked in poor places that weren’t graced with such things. It can be useful pedagogically, I suspect, enabling rapid distilling of material for things like quizzes, short writing exercises and the like. But University-level lecturers are almost universally crap teachers, barely competent in most cases and actively incompetent far too often. They usually have no active pedagogy other than the awful habits they picked up as they worked through the layers of their own university education, and mostly they depend on the capacity to blame others for anything that goes wrong.
Generally speaking I’m not a fan of academics working in the modern University - lazy, dumb, and their very presence in the University shows a kind of inherent conservatism that belies any radical pretences. They are often the most virulently and morally against AI, primarily out of a real fear that the shitty way they do their jobs can be easily replicated by an LLM and an active student body of peer-learning. Quite rightly they recognise their redundancy, but this is not a result of AI or LLMs; it’s a result of the fact that they’ve sat and participated in the slow collapse of the Uni over the last twenty years, its transformation into a debt factory disguised as learning, one that they willingly support in their actions whilst cleansing their self-worth with meaningless moralistic outrage.
Still, the basic criticism of AI and LLM use in the humanities is correct. The very concept of ‘slow reading’, critical time, absorption, deep context, struggle - all this self-transformative work of the humanities is lost if you try to use AI to mediate your interaction. Philosophy is notorious for saying that it’s proud to be useless. Philosophers often hold a kind of ethical stance derived from a wounded pride, one that rests on something like this idea of philosophy being outside of the standard knowledge practices established by science. One way of putting this is to say that ‘philosophy (or whichever of the humanities you like - history, literature, poetry, even politics) does not contribute to human knowledge but instead must be seen as contributing to human understanding’. This is true, I think, although such understanding is also understood to be the work of an individual, for the most part, and so is often pacifying and quietist in its results.
So generally speaking, the use of an LLM in the humanities is pretty limited, no matter how fucking ‘smart’ (wide-ranging, comprehensive) they might become. Our literature reviews might get better, although I doubt it. The trouble is that if the lit review looks like it might suggest that the drivel you need to get published in order to fulfil some research metric is actually unnecessary, the response is more likely to be ‘publish anyway’ than ‘back to the drawing board’. Why do I suggest this? Because this is already the practice of so much of the humanities. The LLMs won’t corrupt the University humanities; they simply add fuel to the fire the academics set themselves. To be honest, ‘burn the university down’ is my go-to slogan here.
Outside of this, the capacity to actually study - a capacity that is already so degraded as to rightly be called a ‘lost art’ - is neither helped nor hindered by AI. Personally I think screens themselves have already shifted the study space so greatly that a return to books is more important than anything to do with AI. To the actual practice of sitting, quietly, with a book and a pencil and a cup of tea, reading. That’s not to suggest that screens and PDF search strings (part of my practice for at least twenty years, I think) don’t have their place - again, they make literature reviews, peer knowledge investigation and state-of-play assessments a lot easier, and I’ve found the capacity to track concepts through texts with PDF search strings to be something that can occasionally prove vital. None of that depends on anything other than the shift from pages to screens, from analogue to digital text.
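For what it’s worth, the concept tracking I mean needs nothing smarter than string matching. A rough sketch in Python - the pypdf library, the folder name and the search term are just my choices for illustration, any text extractor would do:

```python
# Track a concept across a folder of PDFs with plain string search.
# Assumes the pypdf library (pip install pypdf); any extractor would do.
from pathlib import Path

from pypdf import PdfReader

def track_concept(folder: str, term: str) -> None:
    """Print every page on which `term` appears, per PDF."""
    for pdf_path in sorted(Path(folder).glob("*.pdf")):
        reader = PdfReader(pdf_path)
        for page_number, page in enumerate(reader.pages, start=1):
            text = page.extract_text() or ""
            if term.lower() in text.lower():
                print(f"{pdf_path.name}: p.{page_number}")

track_concept("texts", "category mistake")  # folder and term are invented examples
```

Deterministic, dumb, and nothing an LLM improves on: the string is either on the page or it isn’t.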
The real threat to the humanities appeared long before LLMs. The loss of the study function in favour of the exam-and-assessment function, necessary in an institution that has moved to ‘measurables’, is the real culprit here. So worry less about the AI in your students’ work than about the fact that you’re selecting for power through the exam questions you’re setting and the ‘impact’ assessments in your research applications. These are the real killers of thought. The moral corruption of the academic is the more devastating social phenomenon.
In terms of scientific work I can’t speak, other than to say that this whole domain is worse than the humanities, so corrupted by capital flow, power and ego, human failure and the ability to mimic knowledge as to render huge swathes of it garbage. This is without even getting into the replication crisis, the fictitious publishing and peer-review processes and the simple corruption of results.
The real danger has been academic slop, not AI slop. The future collapse of the University in the face of the LLM and student self-learning is the result of the abandonment of the ‘slow time’ necessary for study. New tech simply speeds up the collapse of a dying monster as it reveals its internal contradictions, grounded in the incapacity of the University to exclude the simulacra of learning. Of course the collapse won’t remove Universities from the world; rather it will transform them into the zombie institutions of power and selection they are already making themselves into.
There is one area where some of this might be different, and that is with regard to the idea of semantic search. The LLM brings with it a fundamental shift from string search to semantic search - so asking about ‘blue’, for example, would previously bring back results based on finding the string ‘blue’ - with hugely complex modifications for sure, but always string based. PDF string search is a very simplistic form of this, but string search has the ‘virtue’ of being deterministic, pretty much. Semantic search, on the other hand, could easily find connections to colour talk, or to blue things, or a whole range of potential connections based on the proximity of vectors within the semantic field. The technical details of constructing a RAG, from choices around chunking to the selection of a corpus, enable a high degree of experimentation here, but in the end all of it results in a chatbot of some kind. And the real AI is not in a chatbot… that’s just the fluff at the consumer level, the pretty new thing to sell a sucker. The really interesting AI is not in the chatbot but in the API.
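To make the contrast concrete, here’s a rough sketch - the three-line corpus is invented and `embed` is a stand-in for whatever sentence-embedding model you fancy, so treat it as illustration rather than working kit:

```python
# String search vs semantic search: a minimal sketch.
# `embed` is a stand-in for any sentence-embedding model
# (e.g. via sentence-transformers); the corpus is invented for illustration.
import numpy as np

corpus = [
    "The sky deepened to an ultramarine haze.",
    "He wore a blue coat to the funeral.",
    "Colour terms vary wildly across languages.",
]

def string_search(query: str, docs: list[str]) -> list[str]:
    # Deterministic: only literal substring matches come back.
    return [d for d in docs if query.lower() in d.lower()]

def semantic_search(query: str, docs: list[str], embed) -> list[tuple[float, str]]:
    # Rank by cosine proximity in the vector space, so 'blue' can surface
    # ultramarine skies and colour talk alike, nearest vectors first.
    q = embed(query)
    scored = []
    for d in docs:
        v = embed(d)
        score = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        scored.append((score, d))
    return sorted(scored, reverse=True)

print(string_search("blue", corpus))   # finds only the literal 'blue' coat
# semantic_search("blue", corpus, embed) would rank all three by proximity
```

Chunking, corpus selection and the rest of the RAG plumbing are just variations on which strings get embedded and compared.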
We don’t yet know what an API can do.
Engineering is undergoing a profound change. Here the change is not about the voice of the interlocutor, no affective, psychological or moral question predominates. The change is in the tooling. Tools offer techniques and techniques offer time. The fierce competition that actually exists within the worker expresses itself in trying not to work as much, doing almost anything to avoid, obstruct, delegate or sabotage the need to actually do some work. The very concept of ‘management’ is premised on the idea that people won’t just get on and do their job (quite rightly). So instead we have to construct ways and means of guard-railing the worker to prevent time-leakage. Inevitably workers will use any means possible to do as little as possible. (And if you don’t, you’re a sucker.) A tool that enables this will be picked up and developed whatever moral outrage occurs.
The difference between engineering and the humanities, for example, comes down to the fact that the tools are not at their most potent when thinking about human meaning but when they are addressed to the code connections that exist throughout any digital space. The capacity to build and manipulate those code connections doesn’t need ‘semantic’ structures in the human sense; rather it needs capacities to act, and act they do. The engineer who uses Claude to code is not doing the same thing as the student who asks ChatGPT to write their essay. That coding tool is not enacting a meaning or understanding; it is acting, doing and learning from failure. This is not yet what we would - with any meaningfulness - call autonomy; rather it is automation. The history of automation doesn’t start with LLMs, but sure as shit the next stage of it will involve them.
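Something like this loop, roughly - `ask_model` is a hypothetical stand-in for whichever API you’re wired into, and the point is that failure feeds back in as context while nothing in the loop ‘understands’ anything:

```python
# A minimal sketch of the act-and-learn-from-failure loop behind coding tools.
# `ask_model` is a hypothetical stand-in for a call to whichever LLM API you use;
# everything else is ordinary automation, no understanding required.
import subprocess

def ask_model(prompt: str) -> str:
    raise NotImplementedError("swap in a real API call here")

def write_until_it_passes(task: str, test_cmd: list[str], max_tries: int = 5) -> bool:
    feedback = ""
    for _ in range(max_tries):
        code = ask_model(f"Task: {task}\nPrevious failure, if any: {feedback}")
        with open("solution.py", "w") as f:
            f.write(code)
        result = subprocess.run(test_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True          # tests pass: done, no meaning enacted
        feedback = result.stderr  # the failure becomes the next prompt's context
    return False

# write_until_it_passes("parse the log file", ["python", "-m", "pytest"])
```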
Engineers themselves are grappling with the reformation of their whole work structure around this new automation. Mostly the intelligent ones recognise the inevitability of any automation tool overwhelming the hand-crafted, a process tied so intimately into the birth of capitalism (looms) as to be easily thought of as part of it, although that would be a huge conceptual conflation. Rather, automation is something that capital can use because it can be put to work. Here ‘work’ means the production cycle, and automation is a lever to increase the speed of that cycle and thus its vigour. So automation is not a choice in any meaningful sense of the word.
Refusing automation is like ‘refusing society’: it can be an individualistic solution to construct a sense of self that can be maintained in the face of capitalism, but it’s effectively an opt-out option for the privileged. Workers often rightly refuse automation, as they know that the automated are unemployed or sped up, and resistance to this is part of the key dynamics of the time war that lies at the heart of capitalism. But refusing or resisting automation is at best a defensive struggle, one with little sense of a future and so almost inevitably set up for defeat in the long run.
Rather, the option is always control struggle - who controls the tools controls the flow[1]. And the most interesting development here is the clear dynamic to find ways to break the token sellers’ monopoly. The strengths that engineers have are likely to be all the more useful in their attempts to maintain or wrest control because they are grounded in abstract layers of architectural and systems thinking that take the human-machine interface as the success criterion. It has to not only work, but also work for the human, or else the app, site, function or flow will not move. And the engineer is the intimate knowledge holder in this position, above and beyond anyone else - reminding me a little of the role of tool-makers in production factories.
I need to spend some more time tracking these developments, and so I have been ‘learning to code’ and doing something akin to a workers’ enquiry via YouTube. I’ve always ‘coded’ a bit, as I was online at the origin of the WWW and so quickly built websites, then learnt CSS and basic JavaScript, but only ever enough to make my buttons have rounded corners or my mouse-overs blur the background, or whatever the current fashionable bullshit was. Eventually this morphed into Wordpress usage and I effectively delegated that limited skill base onto an app. It was only about two or three years ago that I finally broke from Wordpress as it became increasingly difficult to maintain on a self-hosted VPS at Bluehost. I dumped self-hosting, switched to just letting the company take responsibility for the crazy hackable state of the app, then found out how much they wanted to charge and puked. So my whole infrastructure of a decade or more had to be dumped as capitalism gradually withered its potential, or captured its flows.
So I was already thinking of moving onto a home-lab setup when Claude Code came along, moving past the chatbot onto the command line. It proved useful as a Linux tutor and taught me the power of things like sed, grep and the bash fork bomb before I turned to exploring JS and CSS in more detail. This tool was like an active manual that could be interrogated, but it was also a dumb fuck that never knew when it was wrong (in that sense reminding me of the academic voice).
What was crucial to the whole experience, however, is that this tool transformed the command line from an arcane instrument of confusion into a means of rapid connection and iteration. It opened up the primary tool connection of the engineer, the one that always lies behind any visual interface in the world of the digital. This is a remarkable and radically transformative shift, and one that is not so visible if you only encounter AI and LLMs via a visual interface of one kind or another, whether that be chatbots or pretty app-building websites. And the command line combined with API tooling - machine-to-machine connections to data flows - produced a doer rather than a talker. This, I think, is the real impact that’s likely to be the core of the automation that will take place.
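What I mean by machine-to-machine, roughly: a script that pipes text through a model endpoint and straight back to disk, no chat window anywhere. The endpoint, headers and payload shape below are generic placeholders rather than any particular vendor’s API:

```python
# Machine-to-machine: pipe text through a model endpoint and back to disk.
# The URL, headers and payload shape are generic placeholders for illustration;
# check your provider's docs for the real ones.
import json
import sys
import urllib.request

def run_through_model(text: str, instruction: str) -> str:
    payload = json.dumps({"instruction": instruction, "input": text}).encode()
    req = urllib.request.Request(
        "https://api.example.com/v1/complete",   # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer YOUR_KEY"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["output"]

if __name__ == "__main__":
    # Usage: cat notes.txt | python doer.py "summarise" > summary.txt
    sys.stdout.write(run_through_model(sys.stdin.read(), sys.argv[1]))
```

Sat in a pipeline like that, the model is just another command-line filter, which is exactly the point.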
And finally… I’ve got another zine (or perhaps pamphlet…) made up - a couple, perhaps, since I last posted. Razorsmile #5 is out and you can grab a copy here.
If you’re a paid subscriber I will happily send you my printed material for free if you DM me an address, and you too can have a taste of the reading experience. Grab a cuppa and sit down for a little while.
[1] I’ve written about the obfuscation and arcane knowledge held by workers with ‘skill’ before, in an essay called Cunning technologies – hypnotizing chickens, horse whispering and the sorcery of social subjectivities. You can find a digital version (ironically) on my website here: https://razorsmile.org/publications/