#arca380

Mika:
Done this with the #AMD 5600G's iGPU; now testing #Jellyfin hardware transcoding with my #Intel #ArcA380 GPU. One question though: is it normal for CPU usage to be rather high, at least in the early part of a stream (while transcoding), even with hardware transcoding?

I'm just trying to figure out whether it actually is hardware transcoding. I assume it is, because the admin dashboard shows it transcoding to AV1, and I'm sure my Ryzen 7 1700 could not handle that, especially with 4 streams playing concurrently. But CPU usage is rather high for the first minute or two (or more) after a stream starts, and only settles down afterwards; video playback is perfectly fine, no stuttering or anything like that.

My passthrough method is the same as with the 5600G: a simple passthrough to the #LXC container, then to the #Docker container running Jellyfin. I don't think I noticed this high CPU usage when testing the 5600G, and the only minor configuration difference between the two is that I used #VAAPI with the 5600G and disabled #AV1 encoding (since I don't think it supports it), while on the Arc A380 I'm using Intel's #QSV with AV1 encoding enabled.

Am I correct to assume hardware transcoding is indeed working? Again, I'm quite certain my Ryzen 7 1700 would definitely NOT be able to handle this, especially since I only give the LXC container 2 cores. (The sanity checks I'm running are sketched below.)
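To sanity-check that the A380 is really doing the encode, these are the checks I'd run (a sketch; the device path and package name are the usual defaults, adjust to your setup):

```
# On the Proxmox host: watch the Arc's engines while a transcode runs.
# Busy Video/VideoEnhance engines mean QSV is doing the work.
apt install intel-gpu-tools
intel_gpu_top

# Inside the container: confirm the render node is visible and usable.
ls -l /dev/dri/
vainfo --display drm --device /dev/dri/renderD128

# Jellyfin's ffmpeg transcode log (Dashboard -> Logs) should list a
# *_qsv decoder/encoder, e.g. av1_qsv, in the ffmpeg command line.
```

Note that audio transcoding still runs on the CPU even when the video encode is on the GPU, which could account for part of the initial spike.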
Mika:
OK, I have to go now, but the plan once I return ~2 hours later is to dig through my pendrives and use one to back up my current BIOS settings, use another to load the latest stable BIOS version, take pics of my current BIOS settings, upgrade to the latest BIOS version, and then, lastly, re-configure the BIOS as close as possible to my current settings.

Fingers crossed, my #Proxmox node will still be fine and, hopefully, the ReBAR option will show up in my BIOS, because right now on BIOS v4.6 (2020) it doesn't (even with CSM disabled and Above 4G Decoding enabled). My server hardware: #AMD Ryzen 7 1700, #ASRock B450M Pro4 #motherboard, and #Intel #ArcA380 #GPU.
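Once the node is back up, whether ReBAR actually took effect can be checked from Linux without rebooting into the BIOS again (a hedged sketch; the PCI address is a placeholder, and exact BAR sizes vary by card and pciutils version):

```
# Find the Arc's PCI address first.
lspci | grep -iE 'vga|display'

# Inspect its memory BARs; with ReBAR active, the prefetchable BAR
# should show a multi-gigabyte size rather than the legacy 256 MB window.
lspci -vv -s 03:00.0 | grep -i 'memory at'

# Newer pciutils also print a Resizable BAR capability with the
# current size when run as root with -vvv.
lspci -vvv -s 03:00.0 | grep -iA2 'resizable'
```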
Mika:
I've installed my #Intel #ArcA380 GPU in my #Proxmox node and... I'm having a hard time configuring the #ASRock #BIOS because, for whatever reason, it's not outputting the full screen to my test monitor, so I can't see the full list of options and menus I need in order to enable ReBAR and so on. I've never experienced this before, so... what gives?

---

Edit: Just switched to another, more 'modern' monitor and it works fine.
Mika:
I have finally caved in and dove down the rabbit hole of #Linux Containers (#LXC) on #Proxmox while exploring how to split a GPU across multiple servers and... I now totally understand the people whose Proxmox setups are made up exclusively of LXCs rather than VMs lol. They're just so pleasant to set up and use, and, superficially at least, very efficient.

I now have #Jellyfin and #ErsatzTV running in LXCs with working iGPU passthrough of my server's #AMD Ryzen 5600G APU. My #Intel #ArcA380 GPU has also arrived, but I'm probably gonna hold off on adding it until I decide which node to put it in and schedule the shutdown, etc. In the future, I might even explore (re)building a #Kubernetes (#RKE2) cluster on LXC nodes instead of VMs, and find out whether that's viable or perhaps even better.

Anyway, I've updated my #Homelab Wiki with guides on LXCs, including creating one, passing a GPU through to multiple unprivileged LXCs, and adding an #SMB share for the entire cluster and mounting it, likewise, on unprivileged LXC containers. (The gist of the GPU part is sketched below.)

🔗 https://github.com/irfanhakim-as/homelab-wiki/blob/master/topics/proxmox.md#linux-containers-lxc
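For reference, the classic manual way to share a host GPU with an unprivileged LXC looks roughly like this (a hedged sketch, not a copy of the wiki; major/minor numbers and device names vary per host, check yours with ls -l /dev/dri):

```
# /etc/pve/lxc/<CTID>.conf -- on the Proxmox host
# 226 is the DRM major; card0/renderD128 are this host's device nodes.
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
```

Inside the container, the service user then just needs membership in the video/render groups that own those device nodes (which is where the GID-mapping business below comes in).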
Mika:
I... actually managed to do this. It was somewhat messy to get through, but I did it. My initial 'stoppers' were simply needing to update some of #Jellyfin's xml configs for any wrong/old paths and values, and lastly the #SQLite DBs themselves, which had old paths as well. Most of those were easy to fix since they're text values (see the sketch below), but some were (JSON) blobs; using the SQLite extension in #VSCode, those weren't hard either: simply export the blob, edit its JSON text value, and reimport the blob into the column. Oh, I also had to update the meta.json files of all the plugins I've installed to point to the new path to their logos.

Now my Jellyfin #LinuxServer.io container, sitting in an unprivileged (#Debian #Linux) #LXC container on #Proxmox, is set up with hardware transcoding using the #AMD Ryzen 5 5600G's onboard iGPU (because I'm getting impatient waiting for my #Intel #ArcA380 to arrive). I'll update my #ErsatzTV container to do the same. Everything's perfect now, except that I still wouldn't recommend users stream Jellyfin on the web or a web-based client with transcoding: while the transcoding itself is perfect, Jellyfin seems to have an issue (one I never had on #Plex) whereby subtitles desync pretty consistently when not direct playing, regardless of whether the subs are external or embedded. I don't know if that will ever be fixed, considering the issue has been open since 2023 with no fix whatsoever.

There's also a separate issue I'm having where Jellyfin does not seem to support discovering/serving media files that live in a symlinked directory (even though some people on their forums have reported in the past that it should). I reported it last week, but it's not going anywhere for now. Regardless, I'm absolutely loving Jellyfin despite some of its rough edges, and my users are loving it too. I consider myself 'migrated' from Plex to Jellyfin, but I'll still keep Plex around as a backup for these two issues, for now.

🔗 https://github.com/jellyfin/jellyfin-web/issues/4346

🔗 https://github.com/jellyfin/jellyfin/issues/13858

RE: https://sakurajima.social/notes/a6j9bhrbtq
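For the text-valued paths, the fixes can be done straight from the sqlite3 CLI (a hedged sketch: TypedBaseItems/Path is the main items table and column in Jellyfin's library.db as far as I know, the old/new paths are placeholders, and Jellyfin should be stopped with the DB backed up first):

```
# While Jellyfin is stopped, back up first.
cp library.db library.db.bak

# Rewrite old absolute paths on every item.
sqlite3 library.db "UPDATE TypedBaseItems
  SET Path = REPLACE(Path, '/old/media/path', '/new/media/path')
  WHERE Path LIKE '/old/media/path%';"

# Sanity check: should return 0 afterwards.
sqlite3 library.db "SELECT COUNT(*) FROM TypedBaseItems
  WHERE Path LIKE '/old/media/path%';"
```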
Mika:
Bruh, I might've wasted my time learning how to pass a GPU through to an #LXC container on #Proxmox (as well as mount an SMB/CIFS share) and writing up a guide (which I haven't been able to test yet, except for the latter), all by doing some seemingly magic #Linux fu with user/group mappings and custom configs, if it turns out you can achieve the same result just as easily, graphically, with a standard wizard in PVE.

It's 4am. I'll probably find time later in the day, or rather the evening (open house to attend at noon), to try the wizard to: 1) add a device passthrough on an LXC container for my #AMD iGPU (until my #Intel #ArcA380 GPU arrives) and see whether the root user + service user in the container can access it and use it for transcoding in #Jellyfin/#ErsatzTV, and 2) add SMB/CIFS storage at the Proxmox Datacenter level (though my #NAS is itself just a Proxmox VM in the same cluster; not sure if that's a bad idea?) and see if I can mount that storage in the LXC container that way. (Rough equivalents of both are sketched below.)

#Homelab folks, feel free to share tips if you've been down this road before!
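If it helps anyone compare: from what I can tell, the PVE passthrough wizard just writes a devN entry into the container config, and the Datacenter storage step has a CLI equivalent too (a hedged sketch; the storage name, GID, addresses, and <CTID> are placeholders):

```
# 1) Wizard-style device passthrough -- one line in /etc/pve/lxc/<CTID>.conf.
#    gid= sets the owning group of the node inside the container, so a
#    non-root service user in that group can use it.
dev0: /dev/dri/renderD128,gid=104

# 2) Datacenter-level CIFS storage, then exposing it to the container
#    as a mount point.
pvesm add cifs nas-media --server 192.168.1.50 --share media --username jelly
pct set <CTID> -mp0 /mnt/pve/nas-media,mp=/mnt/media
```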
Mika:
I'm writing a guide on splitting a GPU passthrough across multiple #Proxmox #LXC containers, based on a few resources, including the amazing Jim's Garage video.

Does anyone know the answer to this question of mine, though: why might he have chosen to map a seemingly arbitrary GID 107 in the LXC container to the Proxmox host's render group GID of 104, instead of mapping 104 -> 104 as he did with the video group, where he mapped 44 -> 44 (which seems to make sense to me)?

I've watched his video seemingly a million times, and referred to his greatly simplified guide on GitHub (which is mostly meant for copy-pasting), and I still can't quite work out why. I'm not sure if it really is arbitrary and 107 in the LXC container could be anything, including 104 if we wanted... or if it (i.e. 107) should have been the LXC container's actual render group GID, in which case it should also have been 104 on his Debian LXC container, as it is on mine. (My current reading of the idmap syntax is sketched below.)

Anyway, super excited to test this out once my #Intel #ArcA380 arrives. I could probably already test it by passing through one of my nodes' Ryzen 5 5600G iGPUs, but I worry I'd screw something up, seeing as it's the only graphics on board that node.

🔗 https://github.com/JamesTurland/JimsGarage/issues/141
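For anyone reading along, my current understanding of the idmap syntax: each line maps container IDs (left) to host IDs (middle) for a run of IDs (count on the right), so the left-hand GID must be the container's actual group GID, and the runs must tile 0-65535 with no gaps. A hedged sketch for a container whose render group is GID 104, matching the host:

```
# /etc/pve/lxc/<CTID>.conf -- unprivileged container idmap
lxc.idmap: u 0 100000 65536     # all container UIDs -> high host range
lxc.idmap: g 0 100000 44        # container GIDs 0-43 -> host 100000-100043
lxc.idmap: g 44 44 1            # container video (44) -> host video (44)
lxc.idmap: g 45 100045 59       # container GIDs 45-103 -> host 100045-100103
lxc.idmap: g 104 104 1          # container render (104) -> host render (104)
lxc.idmap: g 105 100105 65431   # remaining GIDs -> high host range
```

On this reading, 107 would only be right if render really is GID 107 inside that particular container. The host side also needs /etc/subgid to permit the pass-through mappings (e.g. root:44:1 and root:104:1), or the container won't start.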
heise online (unofficial):
Intel has revealed the final specifications of its Arc A(lchemist) gaming graphics cards. On paper, the fastest model competes with Nvidia's GeForce RTX 3060 Ti.

🔗 Gaming graphics cards: final specifications of Intel's Arc A series announced
https://www.heise.de/news/Spieler-Grafikkarten-Finale-Spezifiaktionen-fuer-Intels-Arc-A-Reihe-bekannt-7259747.html