Want to run #RamaLama AI on OpenShift DevSpaces? Rohan Kumar has you covered:
https://developers.redhat.com/articles/2025/06/13/how-run-ai-models-cloud-development-environments
While everyone's talking about Apple's Containerization framework announcement at WWDC, just a few days ago #krunkit quietly hit a major milestone: GPU passthrough in VMs, with up to 80% of native LLM performance.
#krunkit and #podman are still the best hypervisor and container combination on macOS. #libkrun is also a great option for AI microVMs. We are thankful to be able to take advantage of these features in #RamaLama #AI
https://github.com/containers/ramalama
But I'm still looking forward to seeing where Apple's variants go.
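For anyone who wants to try it on a Mac, switching podman to the libkrun provider is only a couple of commands. A rough sketch, assuming podman 5.x and RamaLama installed (the model name is just an example):

  # use libkrun instead of the default applehv provider on macOS
  CONTAINERS_MACHINE_PROVIDER=libkrun podman machine init
  podman machine start
  # RamaLama then runs the model in a container inside that VM
  ramalama run tinyllama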
Local AI just got simpler!
Podman AI Lab now uses RamaLama’s GPU-ready containers—unifying efforts to streamline model deployment on your machine.
- Faster setup
- GPU acceleration
- Consistent container experience
Learn more: https://developers.redhat.com/articles/2025/06/03/podman-ai-lab-and-ramalama-unite-easier-local-ai#
#AIDev #Podman #RamaLama #Containers #podmandesktop
Lukáš Růžička has prepared an article for you on how to use #AI locally on #Fedora with #ramalama.
https://mojefedora.cz/ramalama-aneb-vyhanime-lamy-na-vlastni-louku/
En route to #redhatsummit, look out for: "AI inferencing for developers and administrators", "Securing AI workloads with RamaLama", and "RamaLama: Making Developing AI Boring". We may even see a VLM demo with very accurate models, as you can see here #ramalama #llamacpp
Build your own local RAG with Ramalama and granite model https://medium.com/@nicolabertoli92/build-your-own-local-rag-with-ramalama-and-granite-model-d5d89e612114
#Ramalama #aiml #rag
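If you want the short version of that article, the flow is roughly the one below. The image name is a placeholder and the flag names are from memory, so check ramalama rag --help before copying:

  # pack local documents into a RAG vector-store image (placeholder image name)
  ramalama rag ./docs/ quay.io/yourname/docs-rag
  # run a Granite model with that RAG data attached (model name is an example)
  ramalama run --rag quay.io/yourname/docs-rag granite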
Simplify AI data integration with RamaLama and RAG
https://developers.redhat.com/articles/2025/04/03/simplify-ai-data-integration-ramalama-and-rag#
#Docling #Ramalama #podman #aiml
How RamaLama helps make AI model testing safer https://www.infoworld.com/article/3853769/how-ramalama-helps-make-ai-model-testing-safer.html
#aiml #Ramalama #Container
@TheNewStack interviews Eric and Dan, maintainers of RamaLama, about containerizing #AI development. If you haven't heard of the #RamaLama project before, this is a quick intro:
https://thenewstack.io/ramalama-project-brings-containers-and-ai-together/
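The elevator-pitch version, if you just want to see what the workflow feels like (model names below are only examples):

  ramalama pull granite    # fetch a model much like pulling an image
  ramalama list            # see what's cached locally
  ramalama run granite     # chat with it inside a container
  ramalama serve granite   # or expose it as a local REST endpoint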
RAG isn't proving as useful for working on a codebase with an #LLM. At least not as useful as I expected.
What does work for you? Which model works best? Code Llama? DeepSeek? Mistral? Qwen?
Exciting news! https://ramalama.ai is officially live!
RamaLama makes AI inferencing boring. Just OCI containers handling AI models seamlessly.
Huge thanks to Jessica Chitas & Cara Delia for making this happen!
How RamaLama runs AI models in isolation by default
https://developers.redhat.com/articles/2025/02/20/how-ramalama-runs-ai-models-isolation-default
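You don't have to take the article's word for it either: assuming a recent RamaLama, the --dryrun option prints the container command instead of executing it, so you can inspect the isolation flags (no network, dropped privileges, and so on) yourself:

  # print the podman invocation RamaLama would use, without running it
  ramalama --dryrun run granite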
I find it baffling how many people are outraged by this blog post:
https://blogs.gnome.org/uraeus/2025/02/03/looking-ahead-at-2025-and-fedora-workstation-and-jobs-on-offer/
1. It's a personal blog, not even an official #Fedora / #RedHat blog ("official Red Hat communication happens on Redhat.com")
2. Yes, the blog post talks about AI. ABOUT #GRANITE AND #RAMALAMA, which are frameworks that make it easy to set up hardware-accelerated machine learning on Fedora #Workstation. THIS DOESN'T MEAN WE'LL HAVE #CHATGPT IN FEDORA
Guys, reading comprehension is a thing...
Meet llama-run, the newest tool in the llama.cpp ecosystem! Simplify running LLMs with one command, flexible configs, and seamless integration into OCI environments. Focus on outcomes, not infrastructure.
https://developers.redhat.com/blog/2024/12/17/simplifying-ai-ramalama-and-llama-run
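"One command" is barely an exaggeration. Something along these lines, where the model and prompt are just examples (per the docs, models can also come from local GGUF files or hf:// and ollama:// sources):

  # pull and chat with a model in a single step
  llama-run granite-code
  # or ask a one-shot question
  llama-run granite-code "Write a haiku about containers"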
Tired of the AI hype? I am too!
RamaLama makes working with AI models boring (and that's a GOOD thing!).
Check out the latest #redhat #developers article to learn how RamaLama simplifies AI workflows:
- Streamline model deployment
- Reduce boilerplate code
- Focus on results, not infrastructure
#AI #MachineLearning #RamaLama
https://developers.redhat.com/articles/2024/11/22/how-ramalama-makes-working-ai-models-boring
The goal of #RamaLama is to make working with #AI boring.
https://github.com/containers/ramalama
#containers #sysadmin #tips