#amldgenai23

Applied Machine Learning Days:
What an amazing panel on health and science at #AMLDGenAI23 🤩

Shekoofeh Azizi, DeepMind
Payel Das, IBM
Pranav Rajpurkar, Harvard
Nigam Shah, Stanford
Andrew White, Future House

\๏/eg:
I've been tagging my posts #appliedml rather than #amldgenai23 but same gist 🤗

ineiti:
Panel "Risks to Society"

With Gaëtan de Rassenfosse, Carmela Troncoso & Sabine Süsstrunk

Gaëtan: "AI can help us overcome the burden of knowledge (where we get caught as hyper-specialists)."

Carmela: "While in security we try to make systems as simple as possible, AI is currently doing the contrary and racing to make them ever more complex."
"Security and privacy are about preventing harm."

Sabine: "Politicians are mostly lawyers and are used to looking at the past. So it's difficult to make them look to the future."
"I'm not so much concerned about the models, but about the owners of the data and the computational resources."

Biggest concerns:
Marcel: "My biggest concern about general-purpose AI is the societal impact."

Carmela: "The lack of freedom to not use these tools. Solution: destroy big tech?"

Gaëtan: "Privacy: when these tools are used to monitor society."

Sabine: "Fake information. People believe the fake information they're fed by autocratic governments."

#AMLDGenAI23 #EPFL #C4DT_EPFL

ineiti:
Shaping the creation and adoption of large language models in healthcare

With Nigam Shah

Goal: bring AI to health care in an efficient, ethical way.

"If you think that advancing science will advance the practice and delivery of medicine, you're mistaken!"

"A prediction that doesn't change action is pointless."

"There is an interplay between models, capacity, and the actions we take."

https://www.tinyurl.com/hai-blogs

Instead of training the tokenizer on ordinary English text, train it on the medical data itself.

#AMLDGenAI23 #EPFL #C4DT_EPFL

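That last point is easy to make concrete. A minimal sketch of a domain-specific tokenizer, assuming the Hugging Face tokenizers library and a hypothetical clinical_notes.txt corpus; this illustrates the idea, not Shah's actual pipeline:

    # Train a BPE tokenizer on raw clinical text instead of general English.
    from tokenizers import Tokenizer
    from tokenizers.models import BPE
    from tokenizers.pre_tokenizers import Whitespace
    from tokenizers.trainers import BpeTrainer

    tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
    tokenizer.pre_tokenizer = Whitespace()
    trainer = BpeTrainer(vocab_size=8000, special_tokens=["[UNK]", "[PAD]"])
    tokenizer.train(["clinical_notes.txt"], trainer)  # hypothetical corpus file

    # Medical jargon and dosage patterns now tokenize into meaningful units.
    print(tokenizer.encode("metformin 500 mg PO BID").tokens)

The payoff is that drug names and clinical shorthand become whole tokens instead of being shredded into English subwords.
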
ineiti:
Language versus thought in human brains and machines?

With Evelina Fedorenko

Some common fallacies:
- good at language -> good at thought
- bad at thought -> bad at language

The relationship between language and thought is important!

1. In the brain
The language network is used for comprehension and production, and stores linguistic knowledge. Those areas are not active during abstract thought.

2. In LLMs
They broadly resemble the brain's language network. You can even see the resemblance between the models' responses and human brain responses.
LLMs are great at pretending to think :)

3. A path forward
Most biological systems are modular.

#AMLDGenAI23 #EPFL #C4DT_EPFL

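That "resemblance in responses" is typically quantified with an encoding model: regress recorded brain responses on the model's activations and check predictivity on held-out data. A toy sketch on simulated data; the shapes and Ridge setup are illustrative assumptions, not Fedorenko's actual analysis:

    # Toy encoding model: predict (simulated) brain responses from LLM activations.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 768))         # LLM activations for 200 sentences
    W = rng.normal(size=(768, 50))
    Y = X @ W + rng.normal(size=(200, 50))  # simulated responses of 50 voxels

    r2 = cross_val_score(Ridge(alpha=10.0), X, Y, cv=5, scoring="r2").mean()
    print(f"cross-validated R^2 ~ {r2:.2f}")  # high score = model tracks brain responses
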
ineiti:
Multi-Modal Foundation Models

With Amir Zamir

Multiple sensory systems, e.g. vision and touch, can teach each other if they are synchronous in time.
If you have a set of sensors, a multimodal foundation model can translate arbitrarily between them.

With masked modeling you're trying to recover missing information.

In a MultiMAE model you train with different types of inputs and outputs. When trying out different inputs, it is interesting to see how the model adapts to them (a toy sketch follows below):

https://MultiMAE.epfl.ch

An interesting application is "grounded generation", where you can influence an existing picture with words describing what you want to change. You can also adapt the other inputs, like bounding boxes and depth.

#AMLDGenAI23 #EPFL #C4DT_EPFL

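A toy PyTorch sketch of masked modeling across two modalities. The patch sizes, 75% masking ratio, and single shared encoder are illustrative assumptions; the real MultiMAE encodes only the visible tokens and reconstructs with separate per-modality decoders:

    # Mask most tokens from two modalities, then reconstruct the hidden ones.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    D = 64                                # token width
    N_RGB, N_DEPTH = 16, 16               # patches per modality
    layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
    encoder = nn.TransformerEncoder(layer, num_layers=2)
    proj = {"rgb": nn.Linear(48, D), "depth": nn.Linear(16, D)}  # patch -> token
    head = {"rgb": nn.Linear(D, 48), "depth": nn.Linear(D, 16)}  # token -> patch
    mask_token = nn.Parameter(torch.zeros(D))

    rgb = torch.randn(1, N_RGB, 48)       # fake 4x4x3 RGB patches
    depth = torch.randn(1, N_DEPTH, 16)   # fake 4x4 depth patches
    tokens = torch.cat([proj["rgb"](rgb), proj["depth"](depth)], dim=1)

    mask = torch.rand(1, tokens.shape[1]) < 0.75  # hide 75% of all tokens
    tokens = torch.where(mask.unsqueeze(-1), mask_token.expand_as(tokens), tokens)

    z = encoder(tokens)
    rgb_hat = head["rgb"](z[:, :N_RGB])
    depth_hat = head["depth"](z[:, N_RGB:])
    # Loss only on masked positions: recover the missing information.
    loss = ((rgb_hat - rgb)[mask[:, :N_RGB]] ** 2).mean() + \
           ((depth_hat - depth)[mask[:, N_RGB:]] ** 2).mean()
    loss.backward()

Because any subset of tokens from any modality can be masked, the same trained model can fill in depth from RGB, RGB from depth, and so on, which is the "translate arbitrarily between sensors" property above.
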
ineiti:
GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models

With Daniel Rock

Generative pre-trained transformers are general-purpose technologies.

There are more forks of LLM projects on GitHub than of all COVID projects combined.

The general-purpose-technology test: pervasive? improves over time? spawns complementary innovation?

When trying to replace work activities with GPT, the question is also how many additional machines/tools you need to make it work.

Most exposed roles: mathematicians, blockchain engineers, poets, ...
The jobs with the most expensive training might be the most exposed to replacement.
Even if there is a lot of risk, there is also a lot of opportunity in embracing these models.

There is also a strong correlation between augmentation and automation.

#AMLDGenAI23 #EPFL #C4DT_EPFL

ineiti:
Foundation models in the EU AI Act

With Dragoș Tudorache

When discussions on regulating AI first came up in 2019, not much was really known about AI in the Parliament.

Only in 2020 did people start talking about foundation models, but not enough for them to be included in the first proposal, partly because the regulation was meant to be less about the technology and more about its use.

By summer/autumn 2022, before the launch of ChatGPT, the proposal was already expected to include foundation models, because:

1. Their scale makes them very different from other models
2. The versatility of their output
3. The near-infinite range of applications

#AMLDGenAI23 #EPFL #C4DT_EPFL

ineiti:
Angela Fan from Meta presenting Llama 2.

To train the 70B model they spent about 1e24 FLOPs, a number bigger than the number of atoms in 1 cm³. The training emitted about 300 t of CO₂.

Models are trained in the direction of harmlessness/helpfulness. A big challenge is finding a good test sample, as people use LLMs for very different things.

She also talked about temporal perception, which makes it possible to shift the knowledge cutoff date.

There is also emergent tool use in Llama 2, which lets it call out to other apps.

To finish, she said that these models still need to be much more precise, e.g. for medical use.

#AMLDGenAI23 #EPFL #C4DT_EPFL

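Those orders of magnitude check out on the back of an envelope. A quick sketch using the common ~6 × parameters × tokens rule of thumb and Llama 2's reported ~2T training tokens; the atom count assumes solid silicon, and all figures are rough:

    # Sanity-check the quoted orders of magnitude.
    params = 70e9                   # Llama 2 70B parameters
    tokens = 2e12                   # ~2T training tokens reported for Llama 2
    flops = 6 * params * tokens     # ~6*N*D rule of thumb for training compute
    print(f"training FLOPs ~ {flops:.1e}")        # ~8.4e23, on the order of 1e24

    # Atoms in 1 cm^3 of solid silicon: (density / molar mass) * Avogadro
    atoms = 2.33 / 28.09 * 6.022e23
    print(f"atoms in 1 cm^3 of Si ~ {atoms:.1e}")  # ~5.0e22, indeed < 1e24
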
ineiti:
Generative AI is the fourth wave of the IT revolution...

#AMLDGenAI23 #EPFL #C4DT_EPFL