Well, it happened folks! I just lost the other disk in the span while rebuilding the earlier failed disk. All VM data is poofted! I have to make a new array, spin up a TrueNAS VM, and restore the config. That'll get me access to whatever local backups are in my storage box. RAID is not a backup!

Side note: I don't think I'll be able to recover completely, but lately I've been thinking about a nuke & pave and redoing everything now that I have more of an idea of how I want things.

@unicornfarts damn. Indeed, Raid is not a backup.

Running with a single spare is always dicey as well. In my boxes I either have two spares for a set (Raid 6) or an enclosure spare. That's on faster controllers though; I know the Perc is really slow on anything above 0 and 1.

@fedops @unicornfarts I'm sorry to hear about this data-loss. It always stings.

I saw this a lot in a past life. With Raid1, if all reads (for a given LBA) go to the same disk then the controller won't notice if the corresponding blocks are unreadable on the other disk.

For Raid5, it's similar: the controller doesn't read parity blocks until another disk fails, and only then does it discover that parity blocks are bad.

Work-around: weekly consistency checks if they're supported.
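A minimal sketch of that work-around, using Linux software RAID (md) as a stand-in since that's the interface I can show without guessing at controller tooling; hardware controllers like the PERC expose the same idea as a consistency check or patrol read in their own management tools. The point is just to force every member disk to be read on a schedule so latent bad blocks surface before a rebuild depends on them.

```python
#!/usr/bin/env python3
# Weekly "read everything" pass on Linux md arrays -- the software-RAID
# analogue of a controller consistency check. Run from a weekly cron job
# or systemd timer.
import glob
from pathlib import Path

def start_checks():
    for action in glob.glob("/sys/block/md*/md/sync_action"):
        path = Path(action)
        array = path.parent.parent.name        # e.g. "md0"
        state = path.read_text().strip()
        if state == "idle":                    # don't interrupt a running resync/rebuild
            path.write_text("check\n")         # read and compare all members
            print(f"started check on {array}")
        else:
            print(f"skipping {array}: currently {state}")

if __name__ == "__main__":
    start_checks()
```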

@markusl @fedops
Yeah, I don't think I'm a fan of this PERC, or RAID for that matter. I've been very happy with zfs in my storage box, so I think I'll try to set up an iSCSI share between ESXi and TrueNAS.
A) So I can have my VMs backed by zfs instead, and
B) To take advantage of the automated VMware snapshots feature in TrueNAS (see the sketch below).
A setup like that is probably more reliable than RAID on old spinning rust.
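For reference, a minimal sketch of what that TrueNAS feature coordinates, assuming pyVmomi and hypothetical names (ESXi host/credentials, a dataset called tank/vmstore backing the iSCSI datastore). The real feature only touches VMs on the mapped datastore; this grabs every powered-on VM for brevity. The key is the ordering: quiesce via ESXi snapshots, snapshot the ZFS dataset, then drop the ESXi snapshots again.

```python
#!/usr/bin/env python3
# Sketch of coordinated snapshots: quiesce VMs via ESXi, snapshot the ZFS
# dataset backing the datastore, then remove the temporary ESXi snapshots.
import subprocess
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

DATASET = "tank/vmstore"        # assumption: dataset/zvol backing the iSCSI datastore
SNAPNAME = "vmware-coordinated"

def coordinated_snapshot(host, user, pwd):
    si = SmartConnect(host=host, user=user, pwd=pwd)   # certificate handling omitted
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vms = [vm for vm in view.view
               if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn]
        view.Destroy()

        # 1. ESXi-level snapshots so guest filesystems are quiesced
        for vm in vms:
            WaitForTask(vm.CreateSnapshot_Task(
                name="pre-zfs", description="", memory=False, quiesce=True))

        # 2. the ZFS snapshot that actually gets kept
        subprocess.run(["zfs", "snapshot", f"{DATASET}@{SNAPNAME}"], check=True)

        # 3. drop the temporary ESXi snapshots; the ZFS snapshot holds the quiesced state
        for vm in vms:
            for tree in vm.snapshot.rootSnapshotList:
                if tree.name == "pre-zfs":
                    WaitForTask(tree.snapshot.RemoveSnapshot_Task(removeChildren=False))
    finally:
        Disconnect(si)
```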

@unicornfarts yeah that sounds good. My dev ESXi server currently only has a single-disk datastore; it does have 12 spindles, but just a jbod controller. It was originally a TrueNAS box from iXsystems.

I've been waiting for TrueNAS Scale to come out of beta to run it in a VM with SAS passthrough on that box. I want two pools: one as a datastore and the other as a backup for itself and a couple of other machines. Those automatic VM snapshots would be nice.
@markusl
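A rough sketch of that backup-pool idea with incremental zfs send/receive, assuming hypothetical dataset names fast/vmstore and backup/vmstore. TrueNAS has replication tasks for exactly this; the sketch just shows the mechanism underneath, and it assumes the previous snapshot has already been replicated to the destination (the first run falls back to a full send).

```python
#!/usr/bin/env python3
# Pool-to-pool backup via incremental zfs send/receive.
import subprocess
from datetime import datetime, timezone

SRC = "fast/vmstore"     # assumption: dataset on the datastore pool
DST = "backup/vmstore"   # assumption: target dataset on the backup pool

def latest_snapshot(dataset):
    out = subprocess.run(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
         "-s", "creation", "-d", "1", dataset],
        check=True, capture_output=True, text=True).stdout.split()
    return out[-1] if out else None

def replicate():
    prev = latest_snapshot(SRC)   # must already exist on DST for an incremental send
    snap = f"{SRC}@auto-{datetime.now(timezone.utc):%Y%m%d-%H%M}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    send = ["zfs", "send", "-i", prev, snap] if prev else ["zfs", "send", snap]
    sender = subprocess.Popen(send, stdout=subprocess.PIPE)
    subprocess.run(["zfs", "receive", "-F", DST], stdin=sender.stdout, check=True)
    sender.stdout.close()
    if sender.wait() != 0:
        raise RuntimeError("zfs send failed")

if __name__ == "__main__":
    replicate()
```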

@fedops I was just planning out something similar pool-wise. My current setup is 6 old sas disks on the perc controller, as well as a jbod controller linked up externally to a sas expander in the storage box with 12 more disks. If I ditch the PERC I may be able to connect its 6 disks in the server to the HBA as well, but I'll have to see how many internal connectors are left. I may have to install another expander in the server itself 🤷‍♂️
@markusl

@fedops Then I can have one large pool for esx with its local disks, and back that up to the other pool with my other backups and data.
@markusl

@unicornfarts I think you can just use the Perc as a jbod controller as well. If not, you could maybe make six 1-disk spans. Not great, but since you have it you may as well use it.
@markusl

@fedops found a couple of perc 6/i controllers lying around. Think I'll test flashing one instead of the h700.

@unicornfarts good you keep a stash of stuff! Remember, it's not hoarding if it's electronics. 😁

@fedops looks like there is no IT/hba mode for the perc 6. Also there's a HW limitation -- disks can't exceed 2 TB. So these are no good for my use case. Looks like the perc h700 isn't ideal for jbod either?

My options are either to go the long, ugly SAS cable route or to order another suitable hba/jbod card dedicated to the internal disks. Unless I'm missing something.

@unicornfarts my Percs are series 9 730s; they support jbod. Seems like the series 6 indeed doesn't.

I'd go the single-disk array route: create six spans with one disk each. It seems to be the normal route Perc 6 owners take for zfs (there's a sketch of the zfs side below).

If you want larger disks a different non-Raid controller would probably be better. I'd assume they can be had from recyclers on eBay pretty cheaply.
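For that single-disk-span route, a sketch of the ZFS side only, assuming the six virtual disks enumerate under hypothetical /dev/disk/by-id names and that raidz2 is the layout wanted; check how the VDs actually show up before running anything.

```python
#!/usr/bin/env python3
# ZFS on top of six single-disk PERC spans: each span appears to the OS as
# its own block device, and ZFS provides the redundancy and checksumming.
import subprocess

# assumption: how the six virtual disks enumerate -- check /dev/disk/by-id/ first
DISKS = [f"/dev/disk/by-id/scsi-PERC_VD{i}" for i in range(6)]

def create_pool():
    # raidz2 over six devices: two-disk redundancy, and scrubs will surface
    # latent bad blocks instead of a rebuild finding them the hard way
    subprocess.run(
        ["zpool", "create", "-o", "ashift=12", "tank", "raidz2", *DISKS],
        check=True)
    subprocess.run(["zpool", "status", "tank"], check=True)

if __name__ == "__main__":
    create_pool()
```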

@fedops I'm looking at ebay for options. I did discover that when I upgraded from the perc 6 to the 7 (for drive size reasons), I needed to get new sas cables because the 7 uses 8087 connectors and not the 8484 connectors like on the 6. Turns out these cables I got are plenty long enough to reach the back of the chassis. What I could do is install an 8088ext/8087int bracket like I have in my storage box, then connect the hba to the bracket with a really short 8088 cable.

@fedops bracket: servethehome.com/external-sass

I wonder if there are benefits to only using one hba for all 18 disks, or if two would be better. I had planned on installing a graphics card in the server at some point for transcoding/streaming, and I probably can't fit two controllers AND a video card.

@unicornfarts I guess the main advantage of two controllers would be the additional theoretical PCIe throughput. Question is if your disks are fast enough to max one out.

@fedops If I were using SSDs, certainly. But I don't think I can saturate even one cable with mechanical drives unless I had like 18+ drives connected to one port via an expander.

@fedops You bring up a good point though. I wonder what the PCIe bus on the server is doing, like if it's actually an x8 slot... *googles*

@fedops so the two PCIe slots on riser 1 are x4 (plus the integrated storage slot, which runs at x4/x8 depending on the card), and the slots on riser 2 are x8. Good to know.
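Some back-of-the-envelope numbers on the one-HBA-vs-two question, assuming roughly 175 MB/s of sequential throughput per spinning disk (an assumption; random I/O is far lower) and PCIe 2.0 / 6 Gb/s SAS era hardware, with per-lane figures after 8b/10b encoding:

```python
# assumption: ~175 MB/s sequential per drive; PCIe 2.0 and 6 Gb/s SAS
drives = 18
per_drive = 175                   # MB/s, large sequential reads
pcie2_lane = 500                  # MB/s usable per PCIe 2.0 lane
sas2_lane = 600                   # MB/s usable per 6 Gb/s SAS lane

print(f"18 HDDs, sequential:  ~{drives * per_drive} MB/s")   # ~3150
print(f"PCIe 2.0 x4 slot:     ~{4 * pcie2_lane} MB/s")       # riser 1
print(f"PCIe 2.0 x8 slot:     ~{8 * pcie2_lane} MB/s")       # riser 2
print(f"SAS x4 wide port:     ~{4 * sas2_lane} MB/s")
```

So on paper an x8 slot could carry all 18 disks at full sequential tilt, while an x4 slot or a single x4 wide port would cap out first; in practice, random I/O on spinning rust never gets anywhere near those numbers anyway.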
