fosstodon.org is one of the many independent Mastodon servers you can use to participate in the fediverse.
Fosstodon is an invite-only Mastodon instance open to those interested in technology, particularly free and open source software. If you wish to join, contact us for an invite.

#cuda

3 posts · 3 participants · 0 posts today
Habr

[Translation] A CUDA overview: performance surprises

I'm probably very late to learning CUDA. Until recently I didn't even know that CUDA is just C++ with a few additions. Had I known the learning would go this smoothly, I wouldn't have put it off for so long. But if you arrive with a baggage of C++ habits, your CUDA code will come out poor. So here are some lessons learned in practice; perhaps my experience will help you speed up your code.

https://habr.com/ru/articles/901750/

#CUDA #parallelism #GPUs #optimization

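The article's own examples are behind the link; as a stand-in, here is a minimal sketch of the single most common lesson in this genre (my illustration, not the author's code): on a GPU, the memory access pattern often matters more than the arithmetic. Threads in a warp should touch consecutive addresses; the natural C++ habit of giving each thread its own contiguous chunk produces strided, uncoalesced access and can cost an order of magnitude in bandwidth.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Coalesced: consecutive threads read consecutive floats.
__global__ void scale_coalesced(const float* in, float* out, int n, float s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * s;
}

// Strided: each thread walks its own contiguous chunk, so neighbouring
// threads touch addresses `chunk` floats apart. Same arithmetic, far
// worse memory throughput on most GPUs.
__global__ void scale_strided(const float* in, float* out, int n, float s, int chunk) {
    int base = (blockIdx.x * blockDim.x + threadIdx.x) * chunk;
    for (int j = 0; j < chunk && base + j < n; ++j)
        out[base + j] = in[base + j] * s;
}

int main() {
    const int n = 1 << 24;
    float *in, *out;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(in, 0, n * sizeof(float));
    scale_coalesced<<<(n + 255) / 256, 256>>>(in, out, n, 2.0f);          // fast pattern
    scale_strided<<<(n / 64 + 255) / 256, 256>>>(in, out, n, 2.0f, 64);   // slow pattern
    cudaDeviceSynchronize();
    printf("kernels finished: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(in); cudaFree(out);
    return 0;
}
```

Timing the two launches under Nsight typically shows the strided version several times slower, despite identical FLOPs.
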
Habr

Three X's: taking large convolutions in PyTorch to the next level for model training

Hi, Habr! Let's continue the conversation about convolutions in ML training in C++. We have already covered the possible approaches to implementing convolutions; the link to part one is at the end of the article. Now let's talk about how, in one of my projects, I needed to extend PyTorch to handle convolutions of dimension greater than three and then use them for training models. First we'll look at the constraints that trainability places on the choice of algorithm, and then we'll examine two approaches to implementing the convolution and adapt them to our task.

https://habr.com/ru/companies/yadro/articles/899612/

#machine_learning #cuda #convolution #convolutional_neural_networks

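The teaser doesn't name the article's two approaches, but one standard candidate for this job is direct convolution, which generalizes mechanically to any rank: every output element is an inner product over a kernel-sized window. A minimal 1-D sketch of that idea (illustrative only, not the article's code):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Direct ("naive") valid 1-D convolution: out[i] = sum_j in[i + j] * w[j].
// Higher-rank (2-D, 3-D, N-D) variants walk an N-dimensional window with
// the same pattern; only the index arithmetic grows. Direct schemes are
// straightforward to differentiate, which matters once the op must also
// support training (i.e. backward passes).
__global__ void conv1d_direct(const float* in, const float* w,
                              float* out, int n_in, int k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_in - k + 1) return;
    float acc = 0.0f;
    for (int j = 0; j < k; ++j)
        acc += in[i + j] * w[j];
    out[i] = acc;
}

int main() {
    const int n = 8, k = 3, n_out = n - k + 1;
    float h_in[n] = {1, 2, 3, 4, 5, 6, 7, 8}, h_w[k] = {1, 0, -1}, h_out[n_out];
    float *d_in, *d_w, *d_out;
    cudaMalloc(&d_in, sizeof h_in);
    cudaMalloc(&d_w, sizeof h_w);
    cudaMalloc(&d_out, sizeof h_out);
    cudaMemcpy(d_in, h_in, sizeof h_in, cudaMemcpyHostToDevice);
    cudaMemcpy(d_w, h_w, sizeof h_w, cudaMemcpyHostToDevice);
    conv1d_direct<<<1, 32>>>(d_in, d_w, d_out, n, k);
    cudaMemcpy(h_out, d_out, sizeof h_out, cudaMemcpyDeviceToHost);
    for (int i = 0; i < n_out; ++i) printf("%g ", h_out[i]);  // prints: -2 -2 -2 -2 -2 -2
    printf("\n");
    cudaFree(d_in); cudaFree(d_w); cudaFree(d_out);
    return 0;
}
```
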
Maquinari.cat

China's Moore Threads wants to port Nvidia's CUDA software to its own GPUs under its own stack, called MUSA.

https://www.tomshardware.com/pc-components/gpus/chinas-moore-threads-polishes-homegrown-cuda-alternative-musa-supports-porting-cuda-code-using-musify-toolkit

#Nvidia #MooreThreads #CUDA #MUSA

HGPU group

Large Language Model Powered C-to-CUDA Code Translation: A Novel Auto-Parallelization Framework

#CUDA #CodeGeneration #LLM #Package

https://hgpu.org/?p=29864

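For context on what such a framework produces, here is a hand-made illustration of the canonical C-to-CUDA rewrite (mine, not taken from the paper): a serial loop whose iterations are independent becomes a kernel with one thread per iteration.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Serial C input, the kind of loop the translator consumes:
//   for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];
//
// Parallel CUDA output: the loop body becomes the kernel body, and the
// loop bound becomes a bounds guard. The rewrite is only valid because
// the iterations are independent, which is exactly the property any
// auto-parallelizer, LLM-driven or classical, must establish.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1024;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory keeps the demo short
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();
    printf("y[0] = %g (expected 5)\n", y[0]);
    cudaFree(x); cudaFree(y);
    return 0;
}
```
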
HGPU group

Scalability Evaluation of HPC Multi-GPU Training for ECG-based LLMs

#CUDA #PTX #HPC #LLM #PyTorch #DeepLearning #DL

https://hgpu.org/?p=29863

HGPU group

GigaAPI for GPU Parallelization

#CUDA #ImageProcessing #Package

https://hgpu.org/?p=29860

♡ Eva Winterschön ♡

💻 FreeBSD CUDA drm-61-kmod 💻

"Just going to test the current pkg driver, this will only take a second...", the old refrain goes. Surely it will not punt away an hour or so of messing about in loader.conf on this EPYC system...

- Here are some notes to back-track a botched/crashing driver kernel panic situation.
- Standard stuff, nothing new over the years here with the loader prompt.
- A few directives are specific to this system, though they may provide a useful general reference.
- The server has an integrated GPU in addition to the Nvidia PCIe card, so a module blacklist for the "amdgpu" driver is necessary (EPYC 4564P).

Step 1: during boot-up, "exit to loader prompt"
Step 2: set/unset the values as needed at the loader prompt

unset nvidia_load
unset nvidia_modeset_load
unset hw.nvidiadrm.modeset
set module_blacklist=amdgpu,nvidia,nvidia_modeset
set machdep.hyperthreading_intr_allowed=0
set verbose_loading=YES
set boot_verbose=YES
set acpi_dsdt_load=YES
set audit_event_load=YES
set kern.consmsgbuf_size=1048576
set loader_menu_title=waffenschwester
boot

Step 3: log in to a standard tty shell
Step 4: edit /boot/loader.conf (and maybe .local)
Step 5: edit /etc/rc.conf (and maybe .local)
Step 6: debug the vast output from the kern.consmsgbuf logs

#freebsd #nvidia #cuda #gpu #engineering #terminal #saturday

GripNews

🌕 GitHub - Rust-GPU/Rust-CUDA: an ecosystem for writing and running fast GPU code in Rust
➤ Carving out Rust's place in GPU computing
✤ https://github.com/Rust-GPU/Rust-CUDA

Rust-CUDA is a project that aims to make Rust a first-class language for high-performance GPU computing with the CUDA toolkit. It provides a set of libraries and tools for compiling Rust to fast PTX code and for integrating with existing CUDA libraries. The project comprises several crates, including `rustc_codegen_nvvm` (a Rust compiler backend), `cuda_std` (GPU-side functionality), `cudnn` (deep neural network acceleration), `cust` (CPU-side CUDA functionality), `gpu_rand` (GPU random number generation), and `optix` (ray tracing), aiming to cover the entire CUDA ecosystem. Although still in early development, Rust-CUDA seeks to overcome the past difficulties of integrating Rust with CUDA and to take full advantage of Rust's strengths, such as performance.

#DevTools #GPU #Rust #CUDA

Hacker News 50

Rust CUDA Project

Link: https://github.com/Rust-GPU/Rust-CUDA
Discussion: https://news.ycombinator.com/item?id=43654881

#rust #cuda

Denzil Ferreira :fedora:

Been fighting the whole day trying to get ROCm to play nice with the 780M and PyTorch. Using the latest #rocm, and my laptop just freezes with gfx1103, whether I use the HSA override set to 11.0.0 or to 10.3.0 :blobcatknife:

#amd really needs to fix this crap for their GPUs. I'm using Docker and their provided ROCm images. I know, the 780M is not supported. But c'mon, ALL Nvidia cards can run #CUDA just fine. #rant

Hacker News

Rust CUDA Project

https://github.com/Rust-GPU/Rust-CUDA

#HackerNews #Rust #CUDA #Project #GPU #Programming #Development #Tech #Innovation

Habr

The lead developer of ChatGPT and his new project: Safe Superintelligence

Many people know only this about Ilya Sutskever: he is an outstanding scientist and programmer, he was born in the USSR, he co-founded OpenAI, and he was among those who, in 2023, ousted the manager Sam Altman from the company. When Altman was brought back, Sutskever resigned of his own accord and founded a new startup, Safe Superintelligence. Sutskever did indeed organize OpenAI together with Musk, Brockman, Altman, and other like-minded people, and he was the chief technical genius at the company. As OpenAI's lead scientist, he played a key role in the development of ChatGPT and other products. Ilya is only 38, quite young for a star of world renown.

https://habr.com/ru/companies/ruvds/articles/892646/

#Ilya_Sutskever #OpenAI #10x_engineer #AlexNet #Safe_Superintelligence #ImageNet #neocognitron #GPU #GPGPU #CUDA #computer_vision #LeNet #Nvidia_GTX_580 #DNNResearch #Google_Brain #Alex_Krizhevsky #Geoffrey_Hinton #Seq2seq #TensorFlow #AlphaGo #Tomas_Mikolov #Word2vec #fewshot_learning #Boltzmann_machine #superintelligence #GPT #ChatGPT #ruvds_articles

Hacker News 50

Nvidia adds native Python support to CUDA

Link: https://thenewstack.io/nvidia-finally-adds-native-python-support-to-cuda/
Discussion: https://news.ycombinator.com/item?id=43581584

#cuda #python #nvidia

N-gated Hacker News

NVIDIA finally joins the 21st century by adding #Python support to #CUDA, because who needs cutting-edge tech when you can just catch up with 2006? 🕰️ Meanwhile, The New Stack is begging you to re-subscribe like a clingy ex who just can't take a hint. 📧💔
https://thenewstack.io/nvidia-finally-adds-native-python-support-to-cuda/

#NVIDIA #TheNewStack #TechNews #Subscribe #HackerNews #ngated

Hacker News

Nvidia adds native Python support to CUDA

https://thenewstack.io/nvidia-finally-adds-native-python-support-to-cuda/

#HackerNews #Nvidia #Python #CUDA #MachineLearning #TechNews

Amartya

My brain is absolutely fried.
Today is the last day of coursework submissions for this semester. What a hectic month: DNNs with PyTorch, brain-model parallelisation with MPI, SYCL and OpenMP offloading of percolation models, hand-optimizing serial codes for performance.
Two submissions due today. Submitted one and finalising my report for the second one.
Definitely having a pint after this.

#sycl #hpc #msc #epcc #cuda #pytorch #mpi #openmp #hectic #programming #parallelprogramming #latex

Amartya

Started SYCL this semester in my MSc, and I have a coursework on it.
I have never been more frustrated in my life.
I am not saying SYCL is bad; I might just be too dumb to master it in one semester, well enough to port an existing CPU code to use MPI & SYCL together.
CUDA was much easier for me for the same task.

#sycl #hpc #parallelprogramming #gpu #nvidia #cuda #msc #scientificcomputing #amd #mpi #epcc

Habr

[Translation] "I hate C++, but I admire its masters": Jensen Huang (Nvidia) on how AI came home

Nvidia long ago outgrew gaming worlds; today its technology shapes the future of AI, scientific research, communications, and much more. But how did a company that started with graphics become the flagship of artificial intelligence? In an interview for Computerphile (25.03.2025), Huang explains how Amdahl's law coexists with tensor cores, and how CUDA grew from a developer tool into a foundation for transforming whole industries. It is an interview about the process by which technologies evolve, intersect, and return to where they began.

https://habr.com/ru/companies/bothub/articles/895682/

#ai #Jensen_Huang #nvidia #cuda #transformer #Amdahls_law #5g

Hacker News 50

Ask HN: Why hasn't AMD made a viable CUDA alternative?

Discussion: https://news.ycombinator.com/item?id=43547309

#cuda

pafurijaz

It seems that #Vulkan could be the real alternative for running #AI on GPUs or CPUs of any brand, without necessarily having to rely on #CUDA or #AMD's #ROCm. I had thought #SYCL was the alternative. This might finally free us from #Nvidia's monopoly.
#Khronos