mdadm: previously working; after "failure", cannot join array due to disk size

Abstract

I had a working RAID 5 array. I rebooted the machine, and afterwards mdadm could not re-assemble one component.

Since it was only one component, I assumed it would be easy to simply re-sync it. But that turned out not to work, because apparently the device is now not large enough to join the array!?

Original RAID setup

Unfortunately, it is rather complicated. I have a RAID 5 combining two 3 TB disks with two linear arrays (consisting of 1 TB + 2 TB disks). I did not partition the disks, that is, the RAID spans the raw physical disks. In hindsight this is probably what caused the initial failure.

After the eventful reboot

mdadm refused to assemble one of the linear arrays, claiming that there was no superblock (checking with mdadm --examine on both devices did not return anything). Stranger yet, they apparently still had some partition table remnants on them.
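
For completeness, checks along these lines show what I mean (mdadm --examine for the superblock, fdisk for the leftover partition table; illustrative, not a verbatim transcript, and the device names are the ones used later in this post):

# mdadm --examine /dev/sda
# mdadm --examine /dev/sdc
# fdisk -l /dev/sda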

Now I figured the quickest fix would be to simply re-create the linear array, add it to the larger RAID 5 array and then let it re-sync. So I decided to just delete those partition table entries, that is: partition them back to free space. Then I created a linear array spanning both disks.

# mdadm --create /dev/md2 --level=linear --raid-devices=2 /dev/sda /dev/sdc

However, when attempting to add it back to the array, I get

# mdadm --add /dev/md0 /dev/md2        
mdadm: /dev/md2 not large enough to join array
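
To quantify the mismatch, a comparison along these lines should do (note that Used Dev Size is reported in 1 KiB blocks, while blockdev --getsize64 reports bytes; again only an illustration):

# mdadm --detail /dev/md0 | grep 'Used Dev Size'
# blockdev --getsize64 /dev/md2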

So am I right in assuming that the disks shrank?

Counting blocks

I guess it is time for some block counting!

The two members of the linear array:

RO    RA   SSZ   BSZ   StartSec            Size   Device
rw   256   512  4096          0   1000204886016   /dev/sda
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw   256   512  4096          0   2000398934016   /dev/sdc
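
A table in that format can be obtained with blockdev --report; the Size column is in bytes:

# blockdev --report /dev/sda /dev/sdc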

If mdadm's linear mode had no overhead, the sum of the two sizes would be larger than one of the 3 TB drives (3000592982016 bytes). But that is not the case:

/proc/mdstat reports that the linear array has a size of 2930015024 (in 1 KiB blocks), which is 120016 less than the required Used Dev Size:

# mdadm --detail /dev/md0 | grep Dev\ Size
Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
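
Spelling the arithmetic out (shell arithmetic only; the blockdev sizes are in bytes, the md sizes in 1 KiB blocks):

# echo $(( 1000204886016 + 2000398934016 ))   # 3000603820032 bytes, the raw sum of both members
# echo $(( 3000603820032 / 1024 ))            # 2930277168 KiB, what a zero-overhead linear array could offer
# echo $(( 2930135040 - 2930015024 ))         # 120016 KiB, how far /dev/md2 falls short of the required size

The gap between the zero-overhead figure and what /proc/mdstat reports is 2930277168 - 2930015024 = 262144 KiB, which happens to be exactly 256 MiB.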

But that is terribly suspicious! Before the reboot, an (albeit earlier) incarnation of this linear array was part of the larger array!

What I think happened

After the reboot, mdadm detected that a component of the array was missing. Since it was the smallest member, the array's device size was automagically grown to fill the next smallest device.

But that does not sound like, uh, sane behaviour, does it?

An alternative explanation would be that for some reason I am no longer creating the maximum-size linear RAID, but that seems kind of absurd, too.

What I have been considering doing

Shrink the degraded array to exclude the "broken" linear array and then try to --add and --grow again. But I am afraid that this does not actually change the device size.
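
Roughly, the idea would be something like the following; I have not run this, the size value is only a placeholder, and the LV / filesystem on top would of course have to be shrunk first:

# mdadm --grow /dev/md0 --size=<something smaller, in KiB>
# mdadm --add /dev/md0 /dev/md2
# mdadm --grow /dev/md0 --size=max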

Since I do not know exactly what went wrong, I would prefer to first understand what caused this problem in the first place before doing anything rash.

5
2022-07-25 20:44:06
Answers: 1

So, uh, I guess the disks shrank?

The space mdadm reserves for metadata by default has probably grown; I have had some cases recently where mdadm wasted a whopping 128 MiB for no apparent reason. You want to check mdadm --examine /dev/device* for the Data Offset entries. Ideally it should be no more than 2048 sectors.
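
For example (device names taken from the question; the field is labelled "Data Offset" with 1.x metadata):

# mdadm --examine /dev/sda /dev/sdc | grep -i 'data offset'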

If that is indeed the problem, you can use mdadm --create together with the --data-offset= parameter to make mdadm waste less space on metadata.
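
A sketch of what that could look like for the linear array from the question; the value is meant as 1024 KiB (2048 sectors), but check the unit syntax your mdadm version expects before running it:

# mdadm --create /dev/md2 --level=linear --raid-devices=2 --data-offset=1024 /dev/sda /dev/sdc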

If that is still not enough, you would have to either try your luck with the old 0.90 metadata (which is probably the most space-efficient, as it uses no such offset), or shrink the other side of the RAID a little (remember to shrink the LV / filesystem first).
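
The 0.90 re-create would look roughly like this (again only a sketch; 0.90 metadata has its own limitations, so check mdadm(8) before relying on it):

# mdadm --create /dev/md2 --level=linear --raid-devices=2 --metadata=0.90 /dev/sda /dev/sdc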

4
2022-07-25 22:02:02