I've just released Mastodon De-Mob, an anti-harassment tool that lets users block everyone who favorited or boosted a toot calling for harassment. https://mastodon-de-mob.codesections.com/
Please feel free to test it out with the example URL I provided; that will only block a test account I set up for this purpose. Hopefully, you won't ever need it for real.
It also reports the harassing toot, which I hope will discourage abuse of the tool against non-harassing toots.
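For the curious, here's a rough sketch of the flow the announcement describes, using the documented Mastodon REST endpoints. The instance URL and token are placeholders, and the tool's actual implementation may well differ from this:

```python
# Sketch of a "de-mob" flow against the public Mastodon REST API.
# INSTANCE and the access token are hypothetical placeholders.
import json
import urllib.request

INSTANCE = "https://example.social"  # hypothetical home instance

def interactor_endpoints(status_id):
    """Endpoints listing the accounts that favorited or boosted a toot."""
    return [
        f"/api/v1/statuses/{status_id}/favourited_by",
        f"/api/v1/statuses/{status_id}/reblogged_by",
    ]

def block_endpoint(account_id):
    """Endpoint that blocks a single account."""
    return f"/api/v1/accounts/{account_id}/block"

def api_call(path, token, method="GET"):
    """Minimal authenticated call against the instance's API."""
    req = urllib.request.Request(
        INSTANCE + path,
        method=method,
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def de_mob(status_id, token):
    """Block everyone who favorited or boosted the toot in question."""
    account_ids = set()
    for path in interactor_endpoints(status_id):
        for account in api_call(path, token):
            account_ids.add(account["id"])
    for account_id in account_ids:
        api_call(block_endpoint(account_id), token, method="POST")
    # Reporting the toot itself goes through POST /api/v1/reports.
```

The same endpoints are available to any OAuth-authorized client, which is what makes both the web-app and CLI versions possible.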
My tool does *basically* the same thing but from a web app instead of a Python CLI (even though I'm personally partial to CLI apps, I thought a web app might be a bit more accessible, especially for those who don't currently have Python installed).
@codesections I have a question. How does this tool handle abuse? Brigading an opinion I do not share is becoming a lot easier with such a tool. Does the tool prevent brigading, or are only the mods of an instance responsible for making sure this doesn't happen?
I'm not sure I understand what you mean by brigading in this context. I talk in the readme about my hopes that Mastodon De-Mob won't be used for guilt-by-association and the steps I took to avoid that. Is that also what you're concerned about, or is the "brigading" concern separate?
@codesections I am sorry. Brigading is a term used on reddit to describe a coordinated effort to suppress (mostly) a post or comment that does not fit into a specific narrative. I thought this term was applicable to Mastodon as well. Thank you for clarifying your concerns in the readme of your tool. Let's hope that it will get used in a responsible way.
Oh, OK, I understand the concern now. I think one difference between Mastodon and reddit comes from the difference between blocking someone and downvoting them. When a large group of people downvote a reddit post, that means that *other* people are less likely to see the post. But if a group of people block someone, then *that* group won't see their toots, but others still will.
There can still be related harms (especially with blocklists) but the risk seems different
@codesections My concerns are not about the blocking itself. More about the, let's call it, "automated mod message" I can send with it. There was a case where people, in a coordinated effort, spammed the mods of an instance to ban a certain user. And without any proof at all, the admin banned the user because people were flooding his mod queue.
Yeah, that is a concern, and something I worry about with the open API—it would make it very easy to automate that sort of thing, if a programmer were so inclined.
I don't think *this* tool makes that issue worse: it takes a fair bit of clicking and typing to use this tool to send a single message to one instance's mods—it'd be faster to just copy/paste manual reports. But I agree it's a risk with the API.
My hope is that the moderation API (next version) will help
@codesections I want to state a thing. I think your tool is a great way for a person without much technical understanding to deal with a situation of harassment. My post was in no way criticising the existence of such a tool. I think it is some kind of ethical dilemma. Should we, as developers, write tools to help people while knowing that they can be abused to harm people instead? That's a question I have no easy answer for.
Yeah, I 100% agree that the potential for abuse presents a dilemma. There are absolutely software projects out there that have made the world a worse place, and it's worth putting thought into how/if that can be avoided.
I don't think there's any sweeping answer—I think it's something we just have to wrestle with on the individual merits of each project as the issue presents itself. It's definitely something I put a lot of thought into, though.
> Should we, as developers, write tools to help people while knowing that they can be abused to harm people instead?
The truism applies: Anything can be used for abuse.
So write tools, but THEN make them as safe as you can (the important step a lot of toolmakers just shrug off, IMO), and then accept that you may have to add some guards and safeties later.
The first cars didn't have windshields, let alone crumple zones. It's not on/off, but a moving target.
Agreed with those thoughts re: ethical software dev.
Another important side to the issue is that you're (almost) never the only software developer working in a particular space. It can be tempting to think that the choice is between "build it with safeguards" and "don't build it at all", but really the choice is more often between "build it with the safeguards I'd put in" or "let someone else build it with the (lack of) safeguards *they'd* put in".
and while there's every chance both will end up getting built eventually regardless, that's not on you
what's on you is what your code is capable of -- and if users can shoot themselves in the foot with it trivially, you add a guard and/or a safety, you don't just lol about people shooting themselves in the foot and press on
(tho I mean AFTER you add the guard you can lol about people shooting themselves in the foot, I'm not anybody's mom, you do you)
While anything can be used for abuse, being conscious of those potential pitfalls while designing a product will make for a better project. Sure, cars eventually got windshields, seatbelts, and turn signals, but it was a long and expensive struggle, and much harm could have been prevented. As we have seen with Twitter and Reddit, retrofitting protections onto a system can be harder than making them part of the design at the start.
> being conscious
yeah, by 'write tools' I really meant more 'design and build working prototypes that are unreleased until the next step', but it was late for me and I was running out of room and such
> harm could have been prevented
cars for a long while were new enough that the problems took more time to figure out, and they were slowed down by lack of our advanced tools, and such
preventives can't be defined until the problems are defined, IMO
just to be crystal: I didn't mean "release, then fix abuse vectors", more "test the living fuck out of it, puzzle out ways to break it and use it improperly, and then design safeguards"
letting all your abuse testing happen via user death/harassment is a mistake in the extreme, it treats people like a disposable resource instead of people
which is the Twitter mistake, IMO
(FB's was to under-moderate and pick mods badly, but that's arguable, I suppose)
but DEFINING problems is hard, due to a human problem: people do not want to consider the flaws of their work, ever. yes, impostor syndrome exists, and all that -- but despite it, there is still the urge in humanity, all of us to some degree, to pretend that if we just ignore a problem it won't really be a problem
see Meltdown+Spectre, the hotel keycard exploit (which still works in many smaller hotels b/c the 'fix' costs money, IIRC?), etc.
even the people I know who worked QA hated testing *their own tools*, developed in the QA section
the lie of "smile and it'll turn out all right" only works if you're one of the luckier people in this world, where minor problems aren't gonna snowball into catastrophes
like how someone with a six-figure income doesn't sweat an overdraw, but someone with a low five-figure one might well end up in a payday loan hole because of *one* fuckup with their bank
but the survivor fallacy holds here too, so humans *around* successful humans see a 'positive attitude' and then imitate it in the hopes that'll work for them
anyway, yeah, that wasn't well worded, sorry about that. I don't think any software should go into the wild untested, and abuse vector analysis is vital testing for anything not-totally-embedded. (and for some totally-embedded things too. stuff can always be opened up somehow.)
But there is a difference between creating a community tool with little or no input from people experienced with communities (like twitter) versus discussion, contemplation, and seeking advice that you saw in the creation and evolution of Stack Overflow.
Even in the early days Twitter was having the same problems that you saw in The Well and Usenet. It wasn't just that they failed to foresee potential pitfalls; they weren't even thinking about them, focusing instead on the technology (because it was really flaky those first few years). Listen back to those first dozen or so Stack Overflow podcasts https://stackoverflow.blog/podcasts/page/22/. It was almost all about how to encourage and moderate user behavior in a community.
@codesections good work you're doing.
@codesections "Get Toot" does nothing for me. I tried disabling uBlock but that only reset the whole page. Is there something I'm missing?
I'm sorry to hear that it didn't work. Let me make sure I understand:
- You put in your instance, clicked "Authorize", and authorized the app through your server
- You put in the URL for a toot and clicked "Get Toot"
- Nothing happened (no error on screen?)
Thanks for the bug report!
On a hunch, I disabled Privacy Badger as well as uBlock; didn't make a difference either. It's not even that nothing happens, but I don't even get visual feedback from clicking the Get Toot button, as if the button is only a picture. The cursor does change into a hand as if there's something to click, though.
Second, the "Network" tab of the same web inspector. (Depending on screen layout, you might need to click the ">>" button to select that tab). Everything there *should* have a 302 or 200 in the "status" column—we'd be looking for anything that doesn't.
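To make the rule in that check concrete, here's a tiny (hypothetical) predicate capturing which rows in the Network tab are worth chasing—anything outside the "fine" range of 200s and 300s:

```python
def is_suspect(status: int) -> bool:
    """True for any HTTP status outside the range the thread calls fine
    (200 OK, 302 redirect, and their neighbors)."""
    return not (200 <= status < 400)

# The rows to report back are the ones this flags:
assert is_suspect(404) and is_suspect(500)
assert not is_suspect(200) and not is_suspect(302)
```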