fosstodon.org is one of the many independent Mastodon servers you can use to participate in the fediverse.

#sprachmodellen

Philo Sophies<p><a href="https://earthstream.social/tags/Zoomposium" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Zoomposium</span></a> with Dr. <a href="https://earthstream.social/tags/Gabriele" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Gabriele</span></a> <a href="https://earthstream.social/tags/Scheler" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Scheler</span></a>: "The <a href="https://earthstream.social/tags/Sprache" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Sprache</span></a> of the <a href="https://earthstream.social/tags/Gehirns" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Gehirns</span></a> (the language of the brain) - or how <a href="https://earthstream.social/tags/KI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>KI</span></a> can learn from <a href="https://earthstream.social/tags/biologischen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>biologischen</span></a> <a href="https://earthstream.social/tags/Sprachmodellen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Sprachmodellen</span></a> (biological language models)" </p><p>There is a <a href="https://earthstream.social/tags/Paradigmenwechsel" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Paradigmenwechsel</span></a> (paradigm shift) away from the purely information-technological, mechanistic, purely data-driven <a href="https://earthstream.social/tags/Big" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Big</span></a> <a href="https://earthstream.social/tags/Data" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Data</span></a> concept of <a href="https://earthstream.social/tags/LLMs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLMs</span></a> towards increasingly information-biological, polycontextural, structure-driven concepts of artificial neural networks (<a href="https://earthstream.social/tags/K%C3%BCnstliche" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Künstliche</span></a> <a href="https://earthstream.social/tags/Neuronale" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Neuronale</span></a> <a href="https://earthstream.social/tags/Netzwerke" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Netzwerke</span></a>, <a href="https://earthstream.social/tags/KNN" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>KNN</span></a>). </p><p>More at: <a href="https://philosophies.de/index.php/2024/11/18/sprache-des-gehirns/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">philosophies.de/index.php/2024</span><span class="invisible">/11/18/sprache-des-gehirns/</span></a></p><p>or: <a href="https://youtu.be/forOGk8k0W8" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">youtu.be/forOGk8k0W8</span><span class="invisible"></span></a></p>
Verfassungklage@troet.cafe<p><a href="https://troet.cafe/tags/Unicode" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Unicode</span></a>: </p><p><a href="https://troet.cafe/tags/ChatGPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ChatGPT</span></a> leaves <a href="https://troet.cafe/tags/unsichtbare" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>unsichtbare</span></a> <a href="https://troet.cafe/tags/Zeichen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Zeichen</span></a> (invisible characters) in <a href="https://troet.cafe/tags/Text" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Text</span></a></p><p>In the newer <a href="https://troet.cafe/tags/Sprachmodellen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Sprachmodellen</span></a> from <a href="https://troet.cafe/tags/OpenAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenAI</span></a>, the AI apparently leaves invisible characters on purpose. </p><p>According to the report, the models <a href="https://troet.cafe/tags/GPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPT</span></a>-<a href="https://troet.cafe/tags/o3" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>o3</span></a> and <a href="https://troet.cafe/tags/o4" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>o4</span></a>-mini leave invisible <a href="https://troet.cafe/tags/Unicode" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Unicode</span></a> characters in the generated text. <a href="https://troet.cafe/tags/OpenAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenAI</span></a> has so far not announced this measure. </p><p><a href="https://www.golem.de/news/wasserzeichen-chatgpt-hinterlaesst-unsichtbare-zeichen-im-text-2504-195509.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">golem.de/news/wasserzeichen-ch</span><span class="invisible">atgpt-hinterlaesst-unsichtbare-zeichen-im-text-2504-195509.html</span></a></p>
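Invisible Unicode characters like the ones described in the report above can be detected mechanically. The following sketch is illustrative only: it is not OpenAI's mechanism, and which code points any given model actually emits is an assumption here. It scans a string for characters in Unicode category Cf (format characters, e.g. the zero-width space) plus the narrow no-break space:

```python
import unicodedata

def find_invisible(text):
    """Return (index, codepoint, name) tuples for invisible characters.

    Flags Unicode category Cf (format characters such as U+200B
    ZERO WIDTH SPACE) and U+202F NARROW NO-BREAK SPACE. Which
    characters a given model emits is an assumption, not verified here.
    """
    hits = []
    for i, ch in enumerate(text):
        if unicodedata.category(ch) == "Cf" or ch == "\u202f":
            hits.append((i, f"U+{ord(ch):04X}",
                         unicodedata.name(ch, "UNKNOWN")))
    return hits

# Example: a zero-width space hidden between "A" and "B"
print(find_invisible("A\u200bB"))
```

Stripping such characters (e.g. `"".join(ch for ch in text if unicodedata.category(ch) != "Cf")`) would defeat this kind of marking, which is one of the limitations discussed around watermarking schemes.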
RHET AI Center<p><span class="h-card" translate="no"><a href="https://mastodon.social/@KIT_Karlsruhe" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>KIT_Karlsruhe</span></a></span> </p><p>We are also looking forward to the evening lecture by Katharina Zweig on <a href="https://mstdn.social/tags/Sprachmodellen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Sprachmodellen</span></a> (language models).</p><p>The lectures are open to anyone interested; Katharina Zweig's lecture takes place in the ground-floor foyer of the InformatiKOM (KIT).</p><p>All further information on the lectures is available here: <a href="https://rhet.ai/2025/02/20/gal-research-school-27-28-02-2025-vortraege/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">rhet.ai/2025/02/20/gal-researc</span><span class="invisible">h-school-27-28-02-2025-vortraege/</span></a></p>
Philo Sophies<p><a href="https://techhub.social/tags/News" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>News</span></a> 📢 from <a href="https://techhub.social/tags/KI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>KI</span></a>-<a href="https://techhub.social/tags/Forschung" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Forschung</span></a> 🤖: "How <a href="https://techhub.social/tags/ChatGPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ChatGPT</span></a> 📱 can profit from human <a href="https://techhub.social/tags/Gehirnen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Gehirnen</span></a> 🧠 - or how we teach <a href="https://techhub.social/tags/Maschinen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Maschinen</span></a> 💻 to <a href="https://techhub.social/tags/Denken" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Denken</span></a> 🤓 (think)"</p><p>The current AI research of Mr <a href="https://techhub.social/tags/Krau%C3%9F" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Krauß</span></a> is thus a genuine "joint venture" between <a href="https://techhub.social/tags/KI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>KI</span></a> and <a href="https://techhub.social/tags/Neurowissenschaft" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Neurowissenschaft</span></a> (neuroscience): the data and methods can contribute directly to improving large <a href="https://techhub.social/tags/Sprachmodellen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Sprachmodellen</span></a> (<a href="https://techhub.social/tags/Large" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Large</span></a> <a href="https://techhub.social/tags/Language" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Language</span></a> <a href="https://techhub.social/tags/Model" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Model</span></a>, <a href="https://techhub.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a>) such as <a href="https://techhub.social/tags/ChatGPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ChatGPT</span></a>, while in return the <a href="https://techhub.social/tags/kognitiven" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>kognitiven</span></a> <a href="https://techhub.social/tags/Neurowissenschaften" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Neurowissenschaften</span></a> can learn something about the <a href="https://techhub.social/tags/Verwendung" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Verwendung</span></a> and <a href="https://techhub.social/tags/Bildung" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Bildung</span></a> (use and formation) of <a href="https://techhub.social/tags/Sprache" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Sprache</span></a> in the <a href="https://techhub.social/tags/Gehirn" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Gehirn</span></a> from the <a href="https://techhub.social/tags/Implementierung" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Implementierung</span></a> and <a href="https://techhub.social/tags/Simulation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Simulation</span></a> of <a href="https://techhub.social/tags/kognitiven" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>kognitiven</span></a> <a href="https://techhub.social/tags/Prozessen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Prozessen</span></a> (cognitive processes) on <a href="https://techhub.social/tags/Maschinen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Maschinen</span></a>. </p><p>Anyone who wants to learn more about <a href="https://techhub.social/tags/Patrick" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Patrick</span></a> <a href="https://techhub.social/tags/Krau%C3%9F" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Krauß</span></a>'s very interesting <a href="https://techhub.social/tags/Forschungsergebnissen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Forschungsergebnissen</span></a> (research results) can find more information here: </p><p><a href="https://www.ai.fau.digital/speakers/dr-patrick-kraus/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">ai.fau.digital/speakers/dr-pat</span><span class="invisible">rick-kraus/</span></a></p><p>or at: <a href="https://philosophies.de/index.php/2023/10/24/bauanleitung-kuenstliches-bewusstsein/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">philosophies.de/index.php/2023</span><span class="invisible">/10/24/bauanleitung-kuenstliches-bewusstsein/</span></a></p>
Science Media Center Germany<p>A study in Nature presents a method for reducing certain <a href="https://sciencemediacenter.social/tags/Halluzinationen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Halluzinationen</span></a> (hallucinations) in <a href="https://sciencemediacenter.social/tags/Sprachmodellen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Sprachmodellen</span></a>. Two independent researchers assessed the study: the method can be helpful, but it has limitations and is no cure-all. <a href="https://www.sciencemediacenter.de/alle-angebote/research-in-context/details/news/halluzinationen-in-sprachmodellen-reduzieren/?mtm_campaign=mastodon&amp;mtm_kwd=halluzinationen-in-sprachmodellen-reduzieren" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">sciencemediacenter.de/alle-ang</span><span class="invisible">ebote/research-in-context/details/news/halluzinationen-in-sprachmodellen-reduzieren/?mtm_campaign=mastodon&amp;mtm_kwd=halluzinationen-in-sprachmodellen-reduzieren</span></a></p>
Rob Tranquillo<p><a href="https://social.tchncs.de/tags/VonGoom" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>VonGoom</span></a>: a novel approach to biasing the training data of large <a href="https://social.tchncs.de/tags/Sprachmodellen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Sprachmodellen</span></a>.</p><p>"VonGoom: A Novel Approach for Data Poisoning in Large Language Models."</p><p><a href="https://delcomplex.com/vonGoom" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">delcomplex.com/vonGoom</span><span class="invisible"></span></a></p><p><a href="https://social.tchncs.de/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a></p>
Weiterbildung Digital<p>On <a href="https://colearn.social/tags/chatGPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>chatGPT</span></a> and "large <a href="https://colearn.social/tags/Sprachmodellen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Sprachmodellen</span></a>":<br>Shanahan 2022, Talking about Large Language Models<br><a href="https://arxiv.org/pdf/2212.03551.pdf" rel="nofollow noopener" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/pdf/2212.03551.pdf</span><span class="invisible"></span></a><br>(via @vboykis@twitter.com)</p>
Weiterbildung Digital<p>On <a href="https://colearn.social/tags/chatGPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>chatGPT</span></a> and "large <a href="https://colearn.social/tags/Sprachmodellen" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Sprachmodellen</span></a>": <br>Piantadosi/Hill 2022, Meaning without reference in large language models<br><a href="https://arxiv.org/pdf/2208.02957.pdf" rel="nofollow noopener" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/pdf/2208.02957.pdf</span><span class="invisible"></span></a></p>