fosstodon.org is one of the many independent Mastodon servers you can use to participate in the fediverse.
if the models have been trained on copyleft code, mustn’t all the code they generate also be copyleft?

@interfluidity Well, that's in effect the claim of stablediffusionlitigation.com, regarding pictures -- that the output from these systems is derivative of every bit of training data that went into them, with all the legal implications. Not every lawyer agrees; some would say that to determine whether one thing is a copy of another, for copyright purposes, you look at the things, not the process which generated them.

@interfluidity (In software, companies sometimes use "clean room" development to prove that new software they're writing *can't* be a copy of an older program they're trying to emulate. But the requirements of the law are not that strict; if a programmer *does* read the GPLed code for the game Quake, that doesn't make any first-person shooter they later write a GPLed derivative work, just because they'd seen the Quake release.)

Steve Randy Waldman

@rst yes. for humans we presume there is a ghost in the machine: what we create is "ours", not merely derivative of what we may have seen. so far, though, for generative AI courts have said there is no ghost, no author beyond transformation of the inputs. copyright can't be enforced on generative AI images, because there is no author. there is no one whose "fair use" can disencumber inputs, perhaps even very glancing inputs.