fosstodon.org is one of the many independent Mastodon servers you can use to participate in the fediverse.
Fosstodon is an invite only Mastodon instance that is open to those who are interested in technology; particularly free & open source software. If you wish to join, contact us for an invite.

Administered by:

Server stats:

8.8K
active users

#ChatGPT

228 posts198 participants6 posts today

[Перевод] Обнаружение уязвимостей агентов ИИ. Часть III: Утечка данных

Как риск усиливается в мультимодальных AI-агентах, когда скрытые инструкции, встроенные в безобидно выглядящие изображения или документы, могут инициировать утечку конфиденциальных данных без какого-либо взаимодействия с пользователем. Давайте разбираться

habr.com/ru/articles/922742/

ХабрОбнаружение уязвимостей агентов ИИ. Часть III: Утечка данныхВ третьей части серии демонстрируется, как риск усиливается в мультимодальных AI-агентах, когда скрытые инструкции, встроенные в безобидно выглядящие изображения или документы, могут инициировать...

If you need therapy, a therapist is legally bound to keep what you say confidential. And if you use ChatGPT for mental health care? “… we haven’t figured that out yet …” CEO Sam Altman recently said on the This Past Weekend with Theo Von podcast. @Techcrunch has the story and link to their conversation covering a broad range of topics:

flip.it/2I7xJn

TechCrunch · Sam Altman warns there's no legal confidentiality when using ChatGPT as a therapist | TechCrunch
More from Sarah Perez 💙

"As chatbots grow more powerful, so does the potential for harm. OpenAI recently debuted “ChatGPT agent,” an upgraded version of the bot that can complete much more complex tasks, such as purchasing groceries and booking a hotel. “Although the utility is significant,” OpenAI CEO Sam Altman posted on X after the product launched, “so are the potential risks.” Bad actors may design scams to specifically target AI agents, he explained, tricking bots into giving away personal information or taking “actions they shouldn’t, in ways we can’t predict.” Still, he shared, “we think it’s important to begin learning from contact with reality.” In other words, the public will learn how dangerous the product can be when it hurts people."

theatlantic.com/technology/arc

The Atlantic · ChatGPT Gave Instructions for Murder, Self-Mutilation, and Devil WorshipBy Lila Shroff

Twice now, I’ve been reading an insightful article and then they write, “So I asked ChatGPT to analyze…”

And I stop reading the article. It just makes me so mad. AI is not intelligent. It’s garbage and it’s a crutch and it’s a dangerous trend.

#NoAI
#AI_slop
#ChatGPT