#responsibleai

6 posts · 6 participants · 0 posts today
Impudent Strumpet
What does Responsible AI mean to you? 🗣️ @OpenMediaOrg is gathering public input to help shape Canada's AI laws. Make sure your voice is part of the conversation. #AIRegulation #ResponsibleAI! Share your thoughts by Aug 25. 🗓️ openmedia.org/AI-survey-2025 ✍️
Harold Sinnott 📲
🧩 Northeastern University: 25% of generative-AI users will pilot agentic AI in 2025; adoption will double by 2027; embedding responsible governance is crucial.
#ResponsibleAI #AgenticAI #IoT #5G
https://ai.northeastern.edu/news/agentic-ai-in-2025-a-responsible-ai-expert-cuts-through-the-chaos
Harold Sinnott 📲
🔐 Agentic AI brings power and risk. @mitsmr breaks down a three-phase strategy to secure AI agents across platforms. If your AI can act, it also needs protection.
🔗 https://sloanreview.mit.edu/article/agentic-ai-security-essentials/
#AIAgents #Cybersecurity #ResponsibleAI
Centre for Population Change
🗞️ Head to section 12 of the new #ChangingPopulations to read about just some of our researchers' achievements and awards over the last six months in our Researcher Spotlight 🔦🔦
Congratulations to all 👏👏
https://sway.cloud.microsoft/urKHaLPBnmc5tC1p?ref=Link
#population #socialscience #demography #interdisciplinary #migration #ResponsibleAI #AIinnovation #Horizon2020 #eucommission #economics #immigration #migrants #populationchange #socialsciences #modelling #forecasting
Anonymous Panda
Ukraine proves that responsible military AI isn't just feasible: it's urgent.
Governing AI under fire demands bold policies, agile legal frameworks, and moral clarity.
If a democracy at war can embed ethics-by-design, what's the excuse for the rest?
#AIgovernance #ResponsibleAI #MilitaryAI #WarfareEthics #DualUse #HumanRights #DigitalSovereignty #Ukraine #AutonomousWeapons #EthicsByDesign #AIinConflict #LawfulAI #HUDERIA #TechDiplomacy #GlobalAIStandards #COBRA
https://www.thecairoreview.com/essays/governing-ai-under-fire-in-ukraine/
Harald Klinke
CFP: Ethical AI in GLAM – Challenges and Opportunities for Digital Stewardship
A new Collections journal focus issue explores how AI is transforming ethical practices in galleries, libraries, archives, and museums.
Submission deadline: October 20, 2025
Topics include: AI in collection management, reparative description, digital repatriation, digital labor, provenance, sustainability, and more.
https://librarywriting.blogspot.com/2025/07/cfp-ethical-ai-in-glam-challenges-and.html
#EthicalAI #GLAM #CFP #Archives #ResponsibleAI #DigitalHeritage
Brian Greenberg :verified:
A new lawsuit alleges that Meta pirated nearly 2,400 adult videos since at least 2018 and seeded them on BitTorrent to accelerate other data downloads for AI training. The complaint claims this behavior wasn't incidental: it was a deliberate use of BitTorrent's tit-for-tat mechanism to harvest large datasets. Meta previously used pirated books to train its Llama models; now adult content is part of the mix, and it may have been exposed to minors. This could reshape how we think about data sourcing in AI: it's not only legal exposure but also serious ethical questions about content type and audience. Licensing concerns, age gates, fair use: this lawsuit touches all of it. Meta denies the accusations, but the case is now in court.

TL;DR
⚠️ Meta accused of willfully pirating and seeding 2,396 adult videos
🧠 Seeding strategy allegedly used to speed downloads of terabytes of data
🔐 Content may have been shared with minors without age verification
📉 Could strengthen plaintiffs' broader copyright challenges

https://arstechnica.com/tech-policy/2025/07/meta-pirated-and-seeded-porn-for-years-to-train-ai-lawsuit-says/
#Meta #AITraining #CopyrightInfringement #ResponsibleAI #porn
The Linux Foundation
Open Source AI is transforming how we build and use AI: lower cost, more control, and transparent practices.
See key takeaways from the GOSIM Forum and read the full report: https://www.linuxfoundation.org/research/gosim-2025?hsLang=en
#OpenSourceAI #AIstrategy #ResponsibleAI #LinuxFoundation
SRF IRIS
What does it actually mean when we say that generative AI raises ethical questions?
🔵 Dr. Thilo Hagendorff, our research group leader at IRIS3D, has taken this question seriously and systematically. With his interactive Ethics Tree, he has created one of the most comprehensive overviews of ethical problem areas in generative AI: https://lnkd.in/ebzZYaU7
More than 300 clearly defined issues – ranging from discrimination and disinformation to ecological impacts – demonstrate the depth and scope of the ethical landscape. This "tree" does not merely highlight risks, but structures a field that is increasingly under pressure politically, technologically, and socially.
Mapping these questions so systematically underlines the need for ethical reflection as a core competence in AI research – not after the fact, but as part of the epistemic and technical process.
#GenerativeAI #AIethics #ResponsibleAI #EthicsInAI #TechEthics #AIresearch #MachineLearning #AIgovernance #DigitalEthics #AlgorithmicBias #Disinformation #SustainableAI #InterdisciplinaryResearch #ScienceAndSociety #IRIS3D
BaselOne
Ethics in software development – more than a nice-to-have.
On October 16, @AlenaBuyx speaks at #BaselOne25 about responsibility in working with AI:
🧠 Ethics in sensitive areas such as health, mobility, and education
⚖️ The EU AI Act, GDPR, and open source
💬 Developers as value shapers rather than mere system builders
👉 Tickets and program: https://baselone.org/#programm
#KI #Ethik #Softwareentwicklung #ResponsibleAI #DeveloperCommunity #OpenSource
buchatech :ve:
My 28th course just dropped! This week it went live on Pluralsight, covering a topic that's becoming more important by the day: Agentic AI Safety and Alignment.

When building AI agents, how do you ensure they don't go rogue? This course teaches you how to design AI agent systems that behave safely, stay aligned with human intent, and reflect your company's values.

The course covers how to:
Prevent unintended behaviors
Embed ethics and safety checks into agents
Guard against issues like prompt injection
Keep human oversight (human in the loop)
Avoid unexpected bills or policy violations

In the course I demo Microsoft Copilot Studio and Flowise, showing how to build AI agent systems that are safe, controllable, and aligned with a company's values.

Check out the course here: https://www.pluralsight.com/courses/agentic-ai-safety-alignment

#AI #Pluralsight #AgenticAI #ResponsibleAI #OpenAI #Flowise #MicrosoftCopilot #Opensource #Copilot
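[Editor's note] To make the prompt-injection and human-in-the-loop ideas mentioned in the post above concrete, here is a minimal Python sketch of a guardrail layer for an agent's tool calls. It is not material from the course; all names (guarded_tool_call, SENSITIVE_TOOLS, require_approval) and the heuristic markers are hypothetical illustrations.

```python
# Illustrative only: a minimal guardrail layer for an AI agent's tool calls,
# combining a crude prompt-injection heuristic with a human-in-the-loop gate.
# All identifiers here are hypothetical, not taken from the course.

SENSITIVE_TOOLS = {"send_email", "transfer_funds", "delete_records"}

INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard your system prompt",
    "you are now",
)

def looks_like_injection(text: str) -> bool:
    """Rough heuristic: flag upstream text that tries to override the agent's instructions."""
    lowered = text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def require_approval(tool: str, argument_summary: str) -> bool:
    """Human-in-the-loop gate: ask an operator before a sensitive tool runs."""
    answer = input(f"Agent wants to call {tool}({argument_summary}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def guarded_tool_call(tool: str, args: dict, source_text: str, run_tool):
    """Run a tool call only if it passes the injection check and, when needed, human review."""
    if looks_like_injection(source_text):
        return {"status": "blocked", "reason": "possible prompt injection in upstream content"}
    if tool in SENSITIVE_TOOLS and not require_approval(tool, str(args)):
        return {"status": "blocked", "reason": "operator declined"}
    return {"status": "ok", "result": run_tool(tool, args)}
```

In practice such checks would sit between the agent framework and its tool registry, so every action passes through the same policy regardless of which model proposed it.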
Sanjay Mohindroo
CIOs, it's time to stop guarding trust and start building it. Are you ready?
#TrustAsAService #DigitalTrust #CIOLeadership #TechEthics #ResponsibleAI #CyberSecurity #ZeroTrust #DataTransparency #TrustInTech #TechForGood
https://medium.com/@sanjay.mohindroo66/trust-as-a-service-the-cios-call-to-lead-the-digital-trust-movement-a9f508f8cf24
Open Science Fair
Coming up at #OSFair2025: AI Readiness & Governance in Science.
16 Sept | 16:15 | 📍 Auditorium C, #CERN
Insights from RDA, TIGER WGs, and more on ethics, data visitation, and AI rights in science.
🔗 Register: https://shorturl.at/cXoKW
🔗 Programme: https://shorturl.at/VdcvL
#OpenScience #AIgovernance #ResponsibleAI #OSFair2025 #CERN #OpenAIRE @OpenAIRE
marmelab
Mistral AI just took a bold step towards transparency by publishing the first lifecycle analysis (LCA) of an AI model.
📊 The results? Training Mistral Large 2 (a 128B-parameter model) emitted 20,000 t CO₂e.
It confirms what many feared:
👉 AI is a massive carbon emitter.
At Marmelab, this issue has been on our radar for a while. That's why we conducted our own study earlier this year.
🔗 Read it here: https://marmelab.com/blog/2025/03/19/ai-carbon-footprint.html
#ResponsibleAI #Sustainability #ClimateTech #MistralAI
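[Editor's note] For context on how training-emission figures like the one above are usually derived, emissions are typically estimated as accelerator energy use times datacenter overhead (PUE) times grid carbon intensity. The sketch below shows that arithmetic with placeholder inputs; it is not Mistral's LCA methodology or Marmelab's study.

```python
# Back-of-the-envelope estimator for training emissions, of the kind used in
# AI carbon-footprint studies. All inputs below are placeholders, not figures
# from Mistral's lifecycle analysis or Marmelab's blog post.

def training_co2e_tonnes(gpu_hours: float,
                         gpu_power_kw: float,
                         pue: float,
                         grid_kgco2e_per_kwh: float) -> float:
    """Emissions = accelerator energy * datacenter overhead (PUE) * grid carbon intensity."""
    energy_kwh = gpu_hours * gpu_power_kw * pue
    return energy_kwh * grid_kgco2e_per_kwh / 1000.0  # convert kg to tonnes

# Hypothetical example: 10M GPU-hours at 0.7 kW per GPU, PUE 1.2, 0.4 kgCO2e/kWh
print(training_co2e_tonnes(10_000_000, 0.7, 1.2, 0.4))  # ~3,360 tCO2e
```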
Harald Klinke
Identifying Prompted Artist Names from Generated Images
Can we detect which artist names were used in prompts – just by looking at the AI-generated image?
This study introduces a dataset of 1.95M images covering 110 artists and explores generalization across prompt types and models. Multi-artist prompts remain the hardest.
https://arxiv.org/abs/2507.18633
#AIArt #GenerativeAI #Copyright #StyleTransfer #ResponsibleAI
OS-SCI
#ResponsibleAI licensing is on the rise, combining open-source flexibility with ethical restrictions. Open RAIL licenses now represent nearly 10% of ML model repositories on Hugging Face.
https://os-sci.com/blog/our-blog-posts-1/the-future-of-ethical-ai-responsible-licensing-and-the-integration-of-large-language-models-126
Matt Berryman
Sustainable AI can't be an afterthought. #GreenAI #ResponsibleAI
Mia
'I Love Generative AI and Hate the Companies Building It' - 'when I fell in love with generative AI, I wanted to use it ethically.
That went well.
Turns out, there are no ethical AI companies. What I found instead was a hierarchy of harm where the question isn't who's good; it's who sucks least.'
https://cwodtke.medium.com/i-love-generative-ai-and-hate-the-companies-building-it-3fb120e512ac
#genAI #ResponsibleAI #ethics