I regard myself as fairly tech-proficient and often an early adopter. Where many in my field do their analyses in SPSS and write their articles in Word (which is fine), I prefer a workflow with, say, #RStudio and #Quarto, writing analysis and text as a single reproducible document and collaborating via #GitHub. Still, the whole generative AI thing has always... repelled me, all the more so for any kind of work within #academia. But I don’t find it easy to explain exactly what bothers me.
An important aspect of it is the «black box» thing. Scientific work should be transparent and reproducible. Output from an #LLM is anything but.
Another thing is watching colleagues get «coding advice» from an LLM for their statistical analyses, advice I can immediately see will not run: where, say, #lme4 and #lavaan syntax are mixed up.
Third, I’ve seen horrendous examples of students asking LLMs to find research for them, with the LLM «digging up» one fictitious article after another, complete with fictitious results, sometimes attributed to actual researchers by name, all delivered with confidence. Admittedly, that was a while ago.
2/3