Erik Jonker<p>Interesting. I am still evaluating this concept in my head, but it is definitely worth reading.<br>"Photon: Federated Pre-training of Large Language Models"<br><a href="https://flower.ai/blog/2025-05-09-photon/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">flower.ai/blog/2025-05-09-phot</span><span class="invisible">on/</span></a><br><a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.social/tags/federated" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>federated</span></a> <a href="https://mastodon.social/tags/pretraining" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>pretraining</span></a> <a href="https://mastodon.social/tags/photon" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>photon</span></a> <a href="https://mastodon.social/tags/flowerai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>flowerai</span></a></p>