#Llama2

jordan
🙏 :steeve:
#Steeve #ai #bot #llm #llama2 #chatbot
jordan<p>"oh no!" is right, <a href="https://mastodon.jordanwages.com/tags/steeve" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>steeve</span></a>... :trump_sadge: </p><p><a href="https://mastodon.jordanwages.com/tags/ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ai</span></a> <a href="https://mastodon.jordanwages.com/tags/llm" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>llm</span></a> <a href="https://mastodon.jordanwages.com/tags/llama2" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>llama2</span></a> <a href="https://mastodon.jordanwages.com/tags/chatbot" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>chatbot</span></a> <a href="https://mastodon.jordanwages.com/tags/bot" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>bot</span></a></p>
jordan
Chad Status: Attracted and Captured :steeve:
#steeve #ai #bot #llm #llama2 #chad
LavX News
Reviving the Commodore 64: RISC-V Emulation Meets Llama 2
Imagine running advanced AI inference on a vintage Commodore 64. A developer has achieved this by leveraging RISC-V emulation to run Llama 2 Everywhere, showcasing the intersection of retro computing ...
https://news.lavx.hu/article/reviving-the-commodore-64-risc-v-emulation-meets-llama-2
#news #tech #Llama2 #RISC-V #Commodore64
LavX News
Decoding Strategic Behavior in Large Language Models: A Game-Theoretic Analysis
Recent research delves into the strategic decision-making capabilities of leading large language models (LLMs) like GPT-3.5, GPT-4, and LLaMa-2 through the lens of game theory. By exploring their resp...
https://news.lavx.hu/article/decoding-strategic-behavior-in-large-language-models-a-game-theoretic-analysis
#news #tech #GPT4 #GameTheory #LLaMa2
LMS Solution
Crafting Ideal Research Prompts for GPT-4 and Llama 2
Create effective research prompts to maximize AI model capabilities.
https://zurl.co/08SQC
https://zurl.co/paMG0
#ResearchPrompts #GPT4 #Llama2 #AIMagic #AIforResearch #ContentCreation #Innovation #WritingTools #AcademicResearch #PromptEngineering #ArtificialIntelligence #ResearchMethodology #Efficiency #DigitalTools #KnowledgeGeneration
jordan
He just wanted to say it.
:steeve:
#steeve #ai #llama #llama2 #chatbot #llm #mexico
Charlie Hull
We still don't know what we mean by open source AI, but many are playing fast and loose with the term to disrupt the AI market - a deliberate strategy?
#deepseek #enterprise #llama2 #meta #ai
https://thesearchjuggler.com/open-source-ai-is-a-disruptive-business-strategy/
LavX News
Reviving the Past: Running Llama 2 on Windows 98 with a Modern Twist
Imagine running advanced AI models on a 25-year-old operating system. With the new llama2.c project, enthusiasts can now do just that, showcasing the surprising versatility of small LLMs. This innovat...
https://news.lavx.hu/article/reviving-the-past-running-llama-2-on-windows-98-with-a-modern-twist
#news #tech #Llama2 #Windows98 #AIonLegacyHardware
WetHat💦
Key Points:
➡️ SLMs with 1-8B parameters can perform as well as or better than LLMs.
➡️ SLMs can be task-agnostic or task-specific.
➡️ SLMs balance performance, efficiency, scalability, and cost.
➡️ SLMs are effective in resource-constrained environments.
➡️ SLMs can be trained on consumer-grade GPUs.
➡️ SLMs include models like #Llama2, #Mistral, #Phi, and #Gemini.
https://arxiv.org/abs/2501.05465
#SLM #LLM #AI #MachineLearning #ArtificialIntelligence #Scalability #Performance #GPU #SmallLanguageModels
ALTA
Minghan Wang from Monash University is presenting his long paper titled "Simultaneous Machine Translation with Large Language Models" online.
➡️ This paper investigates the possibility of applying Large Language Models (#LLM) to #SimulMT tasks by using existing incremental-decoding methods with a newly proposed #RALCP algorithm for latency reduction.
➡️ They conducted experiments using the #Llama2-7b-chat model on nine different languages from the #MUST-C dataset.
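The RALCP algorithm mentioned above is essentially a prefix-agreement policy for incremental decoding. The post does not spell out its details, so the Python sketch below is only an illustration of the general idea; the function name, threshold value, and tie-breaking behaviour are assumptions, not the paper's reference implementation.

```python
# Hypothetical sketch of a relaxed longest-common-prefix vote in the spirit of RALCP:
# commit the prefix that a sufficient fraction of beam candidates agree on,
# instead of requiring unanimous agreement as a plain longest common prefix would.
from collections import Counter
from typing import List, Sequence


def relaxed_common_prefix(candidates: List[Sequence[str]], threshold: float = 0.6) -> List[str]:
    """Return the longest prefix whose tokens are shared by >= threshold of the candidates."""
    if not candidates:
        return []
    prefix: List[str] = []
    for position in range(min(len(c) for c in candidates)):
        votes = Counter(c[position] for c in candidates)
        token, count = votes.most_common(1)[0]
        if count / len(candidates) >= threshold:
            prefix.append(token)  # enough candidates agree; commit this token
        else:
            break  # agreement too weak; stop committing at this position
    return prefix


# Example: three beam hypotheses produced after reading a source chunk.
beams = [
    ["The", "cat", "sat", "on"],
    ["The", "cat", "sits", "onto"],
    ["The", "cat", "sat", "down"],
]
print(relaxed_common_prefix(beams, threshold=0.6))  # -> ['The', 'cat', 'sat']
```

Relaxing the agreement criterion lets the decoder emit partial translations earlier, which is presumably how the latency reduction the post refers to is achieved.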
jordan
Really cracked himself up. :steeve:
#ai #aibot #chatbot #llm #llama2
jordan
Burned. :steeve:
#Steeve #ai #aibot #chatbot #llm #llama2
Scripter :verified_flashing:
Llama 2: Chinese researchers turn Meta's language model into a military AI - DER SPIEGEL
https://www.spiegel.de/netzwelt/web/llama-2-chinesische-forscher-machen-aus-metas-sprachmodell-eine-militaer-ki-a-8b6e1160-f737-44cf-92cd-d5c5685b3cd4
#Sprachmodell #Meta #Llama2
Juan Fumero
Java #Llama2 fork extended with GPU support using a Level Zero JNI lib to run on Intel ARC and integrated GPUs. The initial version from my colleague Michalis Papadimitriou includes #TornadoVM.
The Level Zero port achieves higher tok/s vs TornadoVM on integrated GPUs.
🔗 https://github.com/jjfumero/llama2.tornadovm.java
Benjamin Carr, Ph.D. 👨🏻‍💻🧬
First #AI #Benchmarks Pitting #AMD Against #Nvidia
Results are good in that they show the #MI300X is absolutely competitive with the H100 #GPU on one set of AI inference benchmarks, and based on our estimates of GPU and total system costs it can be competitive with Nvidia's H100 and #H200. But the tests were only done for the #Llama2 #LLM model from Meta with 70 billion parameters.
A lot will depend on how AMD prices the #MI325 later this year and how many AMD can get its partners to manufacture.
https://www.nextplatform.com/2024/09/03/the-first-ai-benchmarks-pitting-amd-against-nvidia/
michabbb
#Groq Introduces LLaVA V1.5 7B on #GroqCloud 🚀🖼️
#LLaVA: Large Language and #Vision Assistant 🗣️👁️
- Combines #OpenAI's #CLIP and #Meta's #Llama2
- Supports #image, #audio, and #text modalities
Key Features:
- Visual #Question Answering 🤔
- Caption Generation 📝
- Optical Character Recognition 🔍
- Multimodal #Dialogue 💬
Available now on the #GroqCloud #Developer Console for #multimodal #AI innovation 💻🔧
https://groq.com/introducing-llava-v1-5-7b-on-groqcloud-unlocking-the-power-of-multimodal-ai/
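GroqCloud exposes an OpenAI-compatible chat completions API through the groq Python client, so a visual question answering call against the model announced above might look roughly like the sketch below. The model identifier and the image_url message shape are assumptions not confirmed by the post; check the GroqCloud console for the exact id currently offered.

```python
# Hypothetical sketch: asking a vision model on GroqCloud about an image.
# Assumes the groq Python package is installed and GROQ_API_KEY is set;
# the model id below is an assumption and may have changed or been retired.
import os

from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

response = client.chat.completions.create(
    model="llava-v1.5-7b-4096-preview",  # assumed identifier; verify in the console
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```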
5h15h
Building #LLMs from the Ground Up: A 3-hour Coding Workshop
https://magazine.sebastianraschka.com/p/building-llms-from-the-ground-up
#AI #GenAI #Llama2
Algorights
We have a definition for open source #IA, although for now it is only a draft. It is being drawn up by a group of 70 researchers, lawyers, legislators, activists, and big tech representatives convened by @osi. And the #llama2 model, which #Meta is selling to us as open, is not. #opensource #opensourceAi
https://es.wired.com/articulos/por-fin-tenemos-una-definicion-para-la-ia-de-codigo-abierto
Benjamin Carr, Ph.D. 👨🏻‍💻🧬
#AMD posts first Instinct #MI300X #MLPerf #benchmark results — roughly in line with #Nvidia #H100 (but only in #Llama2 70B).
Based on the data AMD shared, 8x MI300X processors are only slightly slower (23,512 TOPS) than 8x H100 SXM3 (24,323 TOPS), which can probably be called 'competitive' given how well Nvidia's software stack is optimized for popular #LLM models like Llama 2. The AMD MI300X system is slightly faster than the H100 machine in more or less real-world server benchmarks.
https://www.tomshardware.com/tech-industry/artificial-intelligence/amd-posts-first-instinct-mi300x-mlperf-benchmark-results-roughly-in-line-with-nvidia-h100-performance