Column

AI and academic model collapse


AI in academia

or, why we should *not* stop worrying and learn to love the bomb

I was recently involved in an online discussion about how the NWO, our Dutch national research funder, has announced that researchers are welcome to use genAI to write research proposals. Predictably, there was a difference between the responses of researchers in the exact sciences and those working in more qualitative fields. This is worth examining because it’s more than a difference of opinion or experience: it’s a difference in worldviews that has implications for the future of research.

First, we are all living in a moment that is dominated by the rhetorical imaginaries of the tech industry. From the rise of effective altruism to most of G20 policy, we are living inside a narrative and ideological box that authorities, much of the media, and many researchers are having trouble distinguishing from the whole picture. This has led to a very real ‘alignment problem’, but in reverse: the needs and perceptions of both people and policymakers are being forcefully aligned with the needs of the AI industry. Artificial general intelligence is not even here yet, but we are pre-emptively aligning people with AI rather than the other way around.

If GenAI is applied in the research fields whose job it is to slow things down, we will see academic model collapse

To examine for a moment where this ideology comes from: over the last four or so decades, the computing sciences and the computing industry have in effect become social sciences, working on policy and societal questions. They do so, however, with a modular approach that is not suited to problems of sociality or politics (for instance labour policy, education or welfare provision, law enforcement or, in fact, much of public policy). This approach breaks such problems down into units that can be worked on in isolation, and in doing so it by definition loses sight of the actual problems.

The discussions we are hearing about GenAI come from this radically stripped-down worldview. The story is that GenAI can speed up progress and we have to get with the program. But ‘progress’, in this case, is defined as advancements in productivity and efficiency. This is great when it comes to analysing protein folding or finding new combinations of drugs to address serious diseases. It is not, however, how we define progress with regard to a million other fundamentally important things. Think of caring for people with Alzheimer’s, judging criminal or asylum cases, addressing climate change or resolving conflict. Think – and here we come back to research – of any question that requires independent, original approaches based on understanding people’s differing experiences of complex social or historical processes.

We are all living in a moment that is dominated by the rhetorical imaginaries of the tech industry

In relation to any complex social or political question, progress can’t be equated with efficiency or productivity. Very often, progress comes from finding a way to slow things down so that we can understand what is going on. We already have highly developed, distributed systems for this – one example is democratic deliberation. Another is the whole apparatus of academic interpretive research, which has taken us around 2,000 years to build.

GenAI is already creating model collapse in search engines, where elaborate rankings of botshit are taking over from actual information, and the use of word-prediction software (LLMs) is taking the place of the critical assessment of information. If GenAI is applied in the research fields whose job it is to slow things down in order for everyone to get a grip on them and make good decisions, we will see academic model collapse, all along the line from primary schools to professors.

It doesn’t matter if GenAI can write us a poem, do our homework or write us a grant application on political philosophy. Those things are for *us* to do, individually and laboriously, not because they are ‘productive’ activities but because they are how we learn to think critically about the world and produce useful new ideas.

Thinking inside the box of GenAI, business and profit is the opposite of what social sciences and the humanities are for

This is not a problem of alignment, where we have to accept that AI can ultimately solve a problem if we just use it right. It’s one of knowing when to use AI and when not to. In relation to research grant applications for relevant and innovative qualitative work, then, GenAI is fundamentally problematic. Here, the argument that we must accept something fundamentally undesirable – people using a purée of other people’s work to win public money meant to fund original thinking – just because we cannot easily stop them is not a good one. It’s an argument that has been made about most of the big problems in the world, problems that can’t be solved but only continually addressed – think of the slave trade, child labour, political reform, peacemaking.

In relation to those big problems, the ones that social science and the humanities are here to address, we can’t afford model collapse. Thinking inside the box of GenAI, business and profit is the opposite of what this portion of academia is for, but it does clarify where the problem of toxic productivity is coming from. The main reason we can’t collectively imagine preventing inappropriate uses of GenAI is because we have applied business principles of productivity and efficiency to a domain – academic research – that is nothing like a business. If we can imagine our way from ‘more is better’ to ‘better is better’, we may find some answers that the fog of AI is currently hiding from us.

Linnet Taylor is Associate Professor at the Tilburg Institute for Law, Technology, and Society (TILT). Her research focuses on data justice.
