I've noticed a big shift toward, "We should be teaching kids how to cite AI."
That's antithetical to citations. A citation points you to the original source. Citing AI is effectively erasing original sources in a misguided effort to "teach students how to responsibly use AI tools."
Responsible use is teaching people that AI tools are plagiarism machines. Period.
So, this got kind of big.
I wrote a more detailed and thought out post about this topic on my blog. Comments are open and you're welcome to add your thoughts there.
Also, if your kids' teachers do something like this, be kind. Take time to talk with them. Don't berate. We all work hard to make sure kids are taught well.
@brianb The correct action would be to teach AI and the companies that run them to cite their sources
@brianb we should be teaching kids how to avoid AI.
@brianb Sorry, I'm deeply confused.
This is to say, 'Please tell us what the machine either made up or stole from somewhere (and it certainly won't tell you which) that you used in your work'?
If we could get these LLMs to source their data, that would be lovely (but they literally can't do so reliably, so that will never happen). As it stands, citing an LLM carries about as much weight as saying 'I heard it from Bob'.
@brianb oh dear. Honestly, I think knowledge about AI and its ethical considerations is so limited because technologists' voices, and big tech voices, dominate the discourse so much.
It doesn't seem surprising that a teacher or administrator would struggle to fit it into the framework they have.
We should be teaching kids NOT to cite AI.
@brianb Citing an AI would be like citing "a bunch of Google searches my friend did."
@brianb Have a like for plagiarism machines.
Ok, likely unproductive response, but what the actual fuck!
Is this the education system still getting to grips with Wikipedia and the internet maybe being not entirely irrelevant and so applying the same degree of acceptance of Wikipedia to AI?
Or is this corporate propaganda?
Or just desperation at not knowing what to do anymore as tech disrupts traditional education structures (a real problem, I'd wager)?
Either way, citing AI is just dumb and betrays someone who's confused about what AI is.
Makes sense. But still, seems like a quick capitulation which implies there’s more perceived heft behind AI. In part I’d say because it actually looks convincing and that traditional assessment is actually disrupted by it.
My bias here being that I suspect it's largely a good thing that AI disrupts traditional educational assessment, as it likely reveals the superficiality of what that assessment measures.
@SmartmanApps @maegul @brianb “Use of Wikipedia”—you discovered (and penalised) students who went to the Wikipedia article for a quick overview of a topic and to find a primary source? Or students who copied chunks from Wikipedia and either cited Wikipedia or didn’t cite anything? Because the first one seems like a way to start (about as useful as asking a friend about the topic) and I’d say the second one is *abuse* of Wikipedia.
@johnaldis @maegul @brianb
Note I said "gave them examples of why not". Not only did some of them plagiarise, some even copied an EXACT thing I'd told them was wrong! (hence the equivalence to using AI) They'd decided to not pay attention in class, cos just use Wikipedia for the assessment, then failed to notice I had SPECIFICALLY pointed out examples where it was wrong, then thought I wouldn't notice! I always say Wikipedia is "like an encyclopedia" in the same way that Madonna is like a virgin
@maegul @brianb If people knew how to cite things they find on Wikipedia, this wouldn’t be an issue. It’s not particularly unreliable, but in academic terms it’s a way of finding an original source to cite. Applying “the same degree of acceptance” to AI would then be fine—ask the tool (AI or Wikipedia) for an answer, ask it for its primary source, look up that primary source and verify. The fact that AI frequently fails at step 3 isn’t a problem with this process.
The correct citation style feels like it would follow the rules for citing a personal conversation.
You could cite a random granny about spectroscopy. It doesn't mean what she says about it is accurate.
@brianb I wrote this in response to your toot and the majority of replies you've been getting, but made it is own post because I will be referring back to it often: https://infosec.exchange/@magnesium/112467935216416521
I'm happy to have a conversation about it, where we can get into specifics and compromise our opinions, or not and just find our lines in the sand
@magnesium Ok, I'm with you on the specificity problem. LLMs are different than ML and other neural training AI that _do_ have promise. My post was coming from a position of generalized "generative AI" for knowledge because that's what we (teachers) see happening in classrooms.
Either way, it's a move that looks proactive (let's teach them how to use the stuff they're going to use anyways) without dealing with the root issue. I suspect it's because most don't understand the root problem.
@brianb I suppose it depends on what level you're teaching, and maybe even the course you're teaching. My kids in high school think of LLMs as cheating, especially in English class. I expect when kid#1 goes to university an LLM will be useful to get informational framework examples to build around. I'd say, just like any tool, knowing how and when to use it is a smart idea
@brianb Yeah, my MSc course handbook says that I should reference any conversations I have with GenAI. Nope.
@dajb@social.coop @brianb@fosstodon.org I mean, if it's okay to make shit up as long as it sounds plausible, I can do that myself and skip the AI.
@brianb I've been thinking lately that the world needs robust reference management tools a hell of a lot more than it needs these "AI" gadgets.
Suggestions that LLM output be citable on a par with works of intellect just might be the Big Dumb that shocks me out of my doldrums, to put fresh energy into #Jurism maintenance.
@brianb Citing the output of AI is like saying in which pub you overheard something.
@brianb The only good way to cite AI is with: “Caution: may contain traces of plagiarism.”
@brianb Part of why I quit dealing with an education/pedagogy group was because they were trying to propose this. They were mostly decent, but on this, they really would not listen.
The *final* reason was really being given the sense that there was a culture of elitism that wasn't challenged.
@brianb how to cite an LLM should include chasing down the original sources. That’s actually an extremely useful skill, one that will only become more valuable.
@brianb true. It makes me cringe that capitalists so bent on making the future have convinced a generation to not miss out on it. For the low monthly fee of…
I still wonder, if someone couldn’t be bothered to make the effort of writing it, why on Earth would I make the effort to read it? Is the point of education to bloviate into the void and get a good number?
@brianb
Citing AI is like saying "I found this on Google".
Perhaps that is why AI might become popular. People now use Google to find stuff and pay no attention to the source Google sent them to, they just found it "on Google". AI is meant for those folks.
@brianb it also misses the point. I tell a lot of post-grad students that citations do a lot more than get you out of plagiarism issues. They form the scaffolding for your paper.
I tell them that citations demonstrate you have consulted expert sources and established facts that are agreed upon by others in the field.
Especially for students, when YOU state something as fact, remember, you are not an expert (yet), you don’t have your masters yet, so why should the reader believe you?
Instead, I want students to show interesting, well formed analysis and synthesis built on established facts which are cited from reputable and preferably peer reviewed sources.
@brianb @me_ I remember when wikipedia was gaining prominence folks were real worked up about students using it as though it was a primary source, and getting very frustrated that people talked about the Internet/online part as if that was the problem. Like, I was taught in school in the early ‘90s that no encyclopedia was an appropriate primary source. That just isn’t what they are.
Citing AI is like citing an encyclopedia written by people who don’t care about correctness.
@brianb And on top of everything else, they don't even reliably give the same answer every time!
From my recent experience in searching for things and not using AI, who's to say that some craptastic AI didn't write the piece you just found?
Simply frustrating that none of your first-page results ever look to be written by an intelligent human being, or come without a video ad between every paragraph.
My wife had to listen to an idiot pushing the whole "let AI write for your students" nonsense some months ago.
Society would be better served without it.
@MyWoolyMastadon Yup, searching has gotten much, much worse. I'm limited to Google at school (domain-wide policy), but this tweak helped to default to web-only results. Pairing it with the before:2023 flag also helps.
https://tedium.co/2024/05/17/google-web-search-make-default/
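For anyone curious what that linked tweak actually does: it forces Google's plain "Web" results tab via a URL parameter. A minimal sketch of building such a URL (the `udm=14` parameter matches what the article describes; the helper name is my own illustration, not from the thread):

```python
from urllib.parse import urlencode

def web_only_search_url(query, before_year=2023):
    """Build a Google search URL that requests the plain 'Web' results
    tab (udm=14, no AI overview) and uses the before: operator to skip
    pages indexed after the given year."""
    params = {
        "q": f"{query} before:{before_year}",
        "udm": "14",  # 'Web' filter: text-only results
    }
    return "https://www.google.com/search?" + urlencode(params)

print(web_only_search_url("spectroscopy basics"))
```

Setting that URL (or a bookmark keyword built from it) as the browser's default search engine is the "make it default" part of the tweak.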
@brianb
citing AI is like citing
"well, everyone knows this, or at least my cousin Bill said that at the family barbecue, or come to think of it, he said his room-mate believed that"
@brianb @Brad_Rosenheim oh no I hate this
@brianb There is an assumption built into that of Intellectual Property not being theft (from the intellectual giants who came before us, on whose shoulders we stand.) Are we saying mankind should NEVER share knowledge freely; that it is not only "soft power" but "capital"? Has 'The AI' already decided the Knowledge Economy is capitalist? I don't remember being asked for my vote. What do The Billionaires have to say about their data theft and hoarding?
@brianb LLMs are not AI in the first place.
I would be in favor of teaching kids how to bootstrap research using LLMs, so long as it never, ever stops there, and instead follows up with finding the original citation the LLM was trained on and verifying its contents as accurately represented in the LLM response.
LLMs can be used like any other aggregator or indexing program: they provide a reference to other work.
But that's it. They need to be treated as the limited tool they are.
I absolutely hate that perfectly legitimate tech is being ruined by overzealous early-adopters and grifters.
Just like crypto-bros ruined the value of blockchain by reducing it to NFT and cryptocurrency bullshit, we're seeing the same thing happen with LLMs.
@brianb Others have made vague allusions to it, but there's a fundamental mistake here: the assumption that AI gives real information rather than information-shaped sentences.
It doesn't actually understand the question, because it doesn't understand anything. It's a machine that takes the words you throw at it and dumps them into a big bucket that knocks loose the series of words that generally come next to fall out the bottom.
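That "big bucket" can be illustrated with a toy next-word sampler (a deliberately tiny, hypothetical sketch; real LLMs use neural networks over subword tokens, but the "words that generally come next" idea is the same):

```python
import random
from collections import defaultdict

# Toy "bucket": record which word follows each word in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def babble(start, length=5, seed=0):
    """Emit whatever words 'generally come next' -- no understanding,
    just sampling from observed continuations."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(babble("the"))
```

The output is always locally plausible (every two-word pair occurred somewhere in the corpus) without being grounded in anything: information-shaped sentences, at miniature scale.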