#raid10

Habr

Why we moved to RAID 10

Recently one of our RAID 5 arrays fell apart. A disk died of natural causes in its first year of life. That can happen even within the three-year warranty period: not often, but it happens. We pulled it out, put a hot-spare disk in its place, and during the rebuild a second disk in the array died. The data died with it. One of the users whose data was on that array took a very keen interest in the exact configuration we had been running, down to the disk models, manufacturing dates and serial numbers. He apparently assumed we were running ancient hardware and refused to believe, right up to the end, that this can happen with new drives. Then he laughed very sincerely at the statement that no RAID redundancy scheme gives a hundred-percent guarantee of data safety. It is true: no redundancy scheme ever guarantees 100%. All sorts of things happen. Disks from the same batch can die on the same day; that has happened to us only once, several years ago, but it happened. A loose fan can set off resonant vibrations that kill two whole arrays; that happened more than five years ago, and we spent a long time investigating it. Anything can happen. In Russia it is not very common to pay compensation for downtime and data loss. Last year we decided it was important to do so and added such clauses to our agreement. That set off a whole chain of consequences, one of which is that we moved to RAID 10 as our new standard for data storage.

https://habr.com/ru/companies/ruvds/articles/881290/

#ruvds_статьи #raid10 #хостинг #хранение_данных #сервер

Asta [AMP]

Question for tech-y storage people: I just nabbed 4 extra (used) 6 TB disks from a place that re-sells electronics (oregonrecycles.com). I think they did some testing, but obviously I don't know to what extent, and if it was just limited to stuff like SMART data, well...

Anyway, I want to RAID them, which is fine, but I don't know how much lifetime they have left. For four disks with unknown usage, should I use RAID6 or RAID10? It's not job-critical data I'll be storing on these (mostly media and such). They'll eventually be migrated into a larger RAID array, but that won't happen until I'm stable and can afford to rebuild my server, so this is fine for now.

I wouldn't mind the better read/write performance that comes with RAID10, even though it has less redundancy. I suspect these disks were all used together, so they might have similar wear patterns; in that case, I'm wondering whether RAID6's double parity actually buys me any extra life. Given 4 disks with the same history and a roughly known failure rate, I'm not clear on whether double parity makes much of a difference (and if one goes down, the others probably aren't far behind).

#techPosting #raid #raid6 #raid10 #storage #nas
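
To frame the trade-off, here is a minimal mdadm sketch of the two options being weighed, assuming the four disks appear as /dev/sdb through /dev/sde (hypothetical device names). With four drives both layouts give the same usable capacity, so the real choice is tolerance of any two failures versus faster I/O and lighter rebuilds.

```bash
# Option 1: RAID6 on 4 disks - usable capacity of 2 disks, survives ANY two failures.
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Option 2: RAID10 on 4 disks - also usable capacity of 2 disks, but two failures are
# only survivable if they land in different mirror pairs; reads/writes are faster and
# a rebuild is a plain copy instead of parity reconstruction.
mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Either way, watch resync/rebuild progress with:
cat /proc/mdstat
```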

Leszek Ciesielski

If I were to set up a #DYI #NAS server in 2025, what should it be? I've got an HP ProLiant N40L G7 with 5 drives (a mix of 750 GB and 1 TB ones) that used to run #Gentoo and had the drives in #RAID10 over #LVM volumes. This worked, but was quite a faff to manage.

What's the modern equivalent? I need it to run Docker containers and serve files over LAN, that's it.

tootbrute

Is there an authoritative #mdadm reference like https://openzfs.readthedocs.io/en/latest/ is for ZFS?

Stack Overflow Q&As are not filling me with confidence.

#raid10 #raid

tootbrute

OK, the more I read about this the more confused I'm getting.

Is A+B a full set of the data, or is A+A the whole set of the data?

#raid10 #mdadm
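
For what it's worth, in mdadm's default RAID10 "near=2" layout every chunk (A1, A2, A3, ...) is written to two adjacent devices, so the full data set is the sequence of distinct chunks, and each chunk has exactly one mirror copy. A minimal sketch of how that lands on a hypothetical 4-disk array:

```bash
# Default near=2 layout on 4 disks (each chunk is stored on two neighbouring devices):
#
#   disk0  disk1  disk2  disk3
#    A1     A1     A2     A2      <- stripe 0: chunks A1 and A2, each mirrored once
#    A3     A3     A4     A4      <- stripe 1: chunks A3 and A4
#    ...
#
# So A1+A2+A3+... together make up the data; the repeated letters are the mirror copies.
# Inspect the layout of an existing array (hypothetical device name):
mdadm --detail /dev/md0        # look for "Layout : near=2"
cat /proc/mdstat
```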

scs | scnr

Questions you ask yourself in the middle of the night before going to sleep: a #ZFS data pool for VMs/LXCs with (for now) five (soon six) equally sized SSDs, so #RAID10, #RAIDZ or #RAIDZ2? I'll probably only have to (get to) decide that tomorrow (as in, after sleeping). #HomeLab
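
For the six-SSD case, the two layouts under discussion look roughly like this in zpool terms. This is only a sketch of the alternatives, assuming the SSDs show up as sda through sdf (hypothetical names) and a pool called tank:

```bash
# RAID10 equivalent: three striped two-way mirrors.
# Usable space of 3 SSDs, fast small-block I/O (good for VM/LXC workloads),
# survives one failure per mirror pair.
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf

# RAIDZ2: usable space of 4 SSDs, survives any two failures,
# but resilvers touch every device and random-write performance is lower.
zpool create tank raidz2 sda sdb sdc sdd sde sdf

zpool status tank
```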

Vito Botta

Can anyone confirm whether with #softraid #raid10 you can safely remove a faulty drive from an array even if it contains the root partition of the live system? I found mixed information on this. #linux
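
In case it helps, with Linux md the usual sequence for pulling a failed member looks like the sketch below. The array name /dev/md0 and member partitions /dev/sdb2 and /dev/sda are hypothetical; and if the system boots from the array, you also want a bootloader installed on a surviving disk before physically removing anything.

```bash
# Mark the member as failed (if the kernel hasn't already) and remove it from the array.
mdadm --manage /dev/md0 --fail /dev/sdb2
mdadm --manage /dev/md0 --remove /dev/sdb2

# The degraded array keeps serving the root filesystem from the surviving copies.
cat /proc/mdstat

# After swapping the physical disk, copy the partition table from a healthy disk
# and re-add the new member so the array rebuilds onto it.
sfdisk --dump /dev/sda | sfdisk /dev/sdb
mdadm --manage /dev/md0 --add /dev/sdb2
```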

keadamander

#raid5 #raid10

Final hardware for the NAS: VisionFive2 with 4x 512 GB USB drives installed. I had also added a 1 TB M.2 drive earlier; maybe I will use it for non-critical data storage.

keadamander

#raid5 #raid10

I am setting up a Samba NAS with RAID ... using USB flash drives. "Why? WTF?" you might ask ... because I can ;-)

A so-called crazy idea that wants to be realized.

So I bought the cheapest USB flash drives I could get from an electronics retailer nearby: around 45€ per stick, four times over.

I had a Raspberry Pi 4 and a VisionFive2 to choose from. Since the RPi4 only has 2x USB 2.0 and 2x USB 3.0, I will go with the VisionFive2 and its 4x USB 3.0 ports.
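
A rough outline of that kind of build, as a sketch only: the device names /dev/sd[b-e], the mount point, the share name and the smbd service name are all assumptions, and the posts leave open whether RAID5 or RAID10 was chosen in the end.

```bash
# Identify the four USB sticks first (the wrong name here destroys data).
lsblk -o NAME,SIZE,TRAN,MODEL

# Build the array (RAID5 shown; swap in --level=10 for a RAID10 layout on the same 4 sticks).
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Filesystem and mount point.
mkfs.ext4 /dev/md0
mkdir -p /srv/nas
mount /dev/md0 /srv/nas

# Minimal Samba share: append a section to /etc/samba/smb.conf and restart the daemon
# (service name varies by distro).
cat >> /etc/samba/smb.conf <<'EOF'
[nas]
   path = /srv/nas
   read only = no
EOF
systemctl restart smbd
```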

Rudi

My #NFS server might have met its death, but I'm not sure how yet... So far from the autopsy: two physical disks in a #RAID10 virtual disk have failed. It seems they may be in the same stripe, but I'm unsure how to confirm that. I've ordered two new disks to try to rebuild the RAID, but I think any of the non-archived data will be lost 😭 If I have to restore data from my archives, I may push Mastodon to use a smaller disk for media instead of NFS.

Any #Dell #PowerEdge #Debian or other relevant perspective welcome!!
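
If the virtual disk is actually exposed to Linux as an md array (rather than assembled purely by the PERC controller, whose own management tool would be needed in that case), the member grouping can be read from the array details. A minimal sketch with hypothetical device names:

```bash
# Show members and their RaidDevice slots; with the default near=2 layout,
# slots {0,1}, {2,3}, {4,5}, ... form the mirror pairs, so two failures in the
# same pair means that copy of the data is gone.
mdadm --detail /dev/md0

# Per-member metadata (role, event count) read straight from the disks themselves.
mdadm --examine /dev/sd[b-e]1

cat /proc/mdstat
```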

Christiaan :nixos: :flag_nl:

So my first #Btrfs migration seemed to have gone smoothly. After installing #Debian 12 on the system drive I could just mount my old Btrfs #RAID10 and the file/directory structure I had put on it was still intact ✌️🤠

#HomeLab
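
For anyone repeating this: a multi-device Btrfs only needs its members registered before the mount, and mounting any one member brings in the whole filesystem. A minimal sketch, with /dev/sdb and /mnt/data as hypothetical names:

```bash
# Make the kernel aware of all members of multi-device Btrfs filesystems.
btrfs device scan

# Mounting any one member mounts the whole RAID10 filesystem.
mount /dev/sdb /mnt/data

# Sanity checks: device membership and how data/metadata are allocated.
btrfs filesystem show /mnt/data
btrfs filesystem df /mnt/data
```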

YoSiJo :anxde: :debian: :tor:

```bash
while true; do
  sleep 1
  # Crude low-load check: only continue while the 1-minute load average
  # in /proc/loadavg is small (matches e.g. 0.xx through 19.xx).
  grep --perl-regexp --quiet '^1*\d\.' /proc/loadavg &&
  # ...and only if no balance is currently running on this filesystem.
  btrfs balance status /mnt/btrfs/… | grep --quiet 'No balance found on' &&
  # Convert one more data chunk to RAID10, then go around again.
  btrfs balance start -dconvert=raid10 -dlimit=1 /mnt/btrfs/…
  sleep 1
done
```

Let the games begin: converting 300 TB of #btrfs from #raid1c3 to #raid10. I reckon it will take a few months before this is done.

Karanbir Singh

My old #qnap 4-disk #NAS has given up and decided to call it a day. Anyone want to recommend a replacement?

I want to retain the 4 3.5" SATA disks and add another 4 2.5" SSDs. Also looking for a #10gig link and enough performance to do it justice. I run #raid10 and #ipv6.

Christiaan :nixos: :flag_nl:

Just out of pure curiosity, does anyone have a #StorageAreaNetwork, or #SAN for short, in their #HomeLab? I am contemplating setting one up with a few #SBCs or low-wattage machines, all set up as #RAID1, instead of #RAID10 on a single machine.

What are your thoughts on that scenario, community? #100DaysOfHomeLab #SelfHosted #SelfHosting

Christiaan :nixos: :flag_nl:

#100DaysOfHomeLab #Day4of100

So another update from me, because sometimes things go slowly. I've booted a #NixOS live image and the #badblocks program is running a check of my four 4 TB drives (it had been running for 86 hrs and was about 84% done the last time I checked).

I did change my decision to use #NILFS2 & #RAID6. After deliberate consideration I've switched my choice to #ZFS with #RAID10.
- no longer NILFS2, because it doesn't have compression while ZFS includes even the kitchen sink
- 1/2
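
For reference, the kind of destructive pre-deployment surface test being described can be started like this. A minimal sketch, assuming a drive at the hypothetical name /dev/sdb that holds no data yet:

```bash
# Destructive write-mode test (-w) of the whole drive: writes and verifies four
# patterns across every block. -s shows progress, -v is verbose, -b sets the
# test block size to 4 KiB. Expect this to take many hours per 4 TB disk.
badblocks -wsv -b 4096 /dev/sdb
```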

Dr. Roy Schestowitz (罗伊)

#Linux #Kernel Space: #Trenchboot, #RAID10, Spelling Mistakes and #Initcalls
http://www.tuxmachines.org/node/142561

Dr. Roy Schestowitz (罗伊)

Kernel: i.MX8, #Rockchip, RGB LED, KUnit, #RAID10
http://www.tuxmachines.org/node/119198
#kernel #linux

Dr. Roy Schestowitz (罗伊)

Too much disk IO on sda in #RAID10 setup
https://blog.windfluechter.net/content/blog/2019/01/05/1750-too-much-disk-io-sda-in-raid10-setup
#debian #linux #kernel