This article is about implementing a Redundant Array of Independent Disks,
or RAID for short, in software on a Linux system.
If you are unfamiliar with RAID, please review an introductory article on the subject first.
For the rest of this article, I will assume the reader
is familiar with the various RAID levels and fault-tolerance
techniques used in a RAID environment.
However, for those who might be just a little rusty on the subject,
here is a quick review of the most important concepts:
There are pros and cons to both hardware- and software-based RAID.
The main benefits of software-based RAID are economy and simplicity.
You don't need to purchase any additional hardware such as an
external RAID cabinet, and you are not introducing a new potential
point of failure in your system.
However, since software-based RAID is typically used for internal
devices, the drawback is that you are limited to the number of
devices you can physically fit and hook up inside the computer case.
Most commodity computers will come with 2, 3 or 4 SATA connectors
and perhaps one IDE connector, so the maximum number of drives you
can hook up inside the computer case is five, assuming you have the
space. If you account for
one DVD or CD drive in the system, you are left with no more than 4
disk drives. If you were planning to include a hot spare as part
of your configuration, you are down to 3 active drives as a maximum.
While you can still support a RAID 5 configuration
with 3 or 4 drives, this is really not the typical configuration
for a small system. In addition, although spare
drives can be configured in software-based RAID, I have found they do not
consistently behave as you would expect
(i.e. they don't dependably kick in immediately when a fault is detected).
The bottom line is that while software RAID is an excellent choice for
mirroring a pair of disk drives inside the computer case,
a large-scale RAID 5 configuration with hot spares is best
implemented using a dedicated external RAID controller and cabinet.
Having said that, let me stress that software-based RAID remains
the ideal solution for protecting your data in a small or
medium-size Linux system.
It's easy to implement, economical,
and very dependable. Over the past decade,
I have configured dozens, perhaps hundreds of
Linux systems for my clients with software-based disk mirrors,
and they have worked flawlessly and have spared my clients from
costly catastrophes on several occasions.
For the sake of this tutorial (and to keep code samples short),
I will assume the most common
scenario, which is to configure a pair of mirrored disk drives in a
standard commodity Linux system. However, if you are more
interested in a RAID 5 configuration, you will be pleased to
see that the command syntax is virtually the same as for disk
mirroring and that all the
concepts and tools we cover in this tutorial apply to your
chosen configuration as well.
The first mainstream implementation of software-based RAID
on Linux was the raidtools package, which came
with most Linux distributions based on kernel 2.4.
This package included utilities such as mkraid,
raiddev, raidstart and raidstop.
This software package was later superseded by mdadm,
which is the standard RAID utility on Linux as of this
writing (2011) and comes standard with most current Linux
distributions based on the 2.6 kernel.
This tutorial will not cover the raidtools suite of
utilities since they are now considered obsolete. If you
encounter any of the above-mentioned utilities on your Linux
system, you are probably using a very old version of Linux.
Fortunately, these old versions probably also include the
mdadm utility, so you may be able to use this tutorial after
all. To find out, enter this command at the shell prompt when
logged in as root:
# type mdadm
mdadm is /sbin/mdadm
(Note: the "#" sign represents your command prompt; you do not
type in this character.)
If you get a pathname in return as in the above example, the
utility is already installed on your system. If you get something
like "mdadm: not found," then you don't have this program on your
system at the moment. Still, you may be able to install it using
the standard software installation method appropriate to your
system, or you may be able to find a copy of this utility suitable
for your system with an Internet search.
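For instance, on a package-based distribution, one of the following
commands will usually do the trick (use whichever matches your
distribution's package manager):
# apt-get install mdadm
# yum install mdadm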
The standard utility to manage software-based RAID on
current Linux distributions (i.e. based on kernel 2.6) is
mdadm (formerly called mdctl).
Unlike the raidtools package, mdadm is a single utility
rather than a suite of programs.
The process to set up a disk array using mdadm can be summarized
in a few simple steps: create RAID partitions on the disks, create
the array, create a filesystem on the resulting device, and mount it.
In the next sections, we will examine each step in greater detail,
and will also learn how to recover from a failed array member
and rebuild the array after a disk drive has been replaced.
The mdadm utility features a multitude of options, parameters and
modes, allowing you to do just about anything with arrays.
In this tutorial, we will only cover a very small subset of these
options — the celebrated 20% that lets you do 80% of your work.
As mentioned in the overview, before you can construct a disk array,
you need to create appropriate RAID partitions on the disks that you
intend to use.
If you are reading this tutorial on creating and managing software-based
RAID on Linux, it's safe to assume
you don't need an introduction to disk partitioning,
so I will assume you already know how to create and manage disk
partitions using fdisk or similar partitioning software.
Whatever partitioning tool you choose, simply create partitions of
type "fd" on the disks that you intend to use as part of your array.
are planning to mirror two disk drives, make sure the two partitions are
of equal size. The same advice applies to RAID 5 configurations.
If the partitions are of different sizes, your array will
have the capacity of the smallest partition, which means you will
have wasted the extra space on the larger partition(s). For this reason,
you might as well make sure that all your RAID partitions are
the same size. If you have disk drives of different sizes, simply
allocate the extra space on the larger disks to your swap area or other non-RAID partitions.
Note that if you are setting up a RAID 0 configuration (striping, no
fault tolerance), the partitions may be of different sizes.
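If you need a refresher on the exact keystrokes, here is a minimal
fdisk session sketch, assuming you are adding a new partition to a
hypothetical second disk, /dev/sdb, which still has free space:
# fdisk /dev/sdb
Command (m for help): n          (create a new partition; accept or adjust the defaults)
Command (m for help): t          (change the partition type)
Hex code (type L to list codes): fd
Command (m for help): w          (write the partition table and exit)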
Once you have partitioned your disk drives as described above,
you can create an array with mdadm using either --create
or --build, as in:
# mdadm --create /dev/md0 -n 2 -l raid1 /dev/sda3 /dev/sdb3
Let's examine each parameter in detail.
The --create option will populate the partition superblocks
with information about the array so that it may be
re-assembled automatically later, without having to specify
all the above parameters. If you were to use --build
instead, no information at all would be stored on the superblocks,
which means all the above parameters would have to be specified
each time the array had to be re-assembled.
For this reason, you should normally
use --create instead of --build unless you know exactly
what you're doing and why you're doing it.
The next parameter, /dev/md0, is the name of the device node
that we are creating through this process. This node will be
treated by the operating system as if it were a single disk volume.
You can format it, partition it, and create filesystems on it, just
like you would a standard disk.
Normally, on Unix-type operating system such as Linux, device nodes
can be given just about any name. However, mdadm insists on a
certain naming convention for array names, so you cannot just call
your array /dev/my_cool_raid, for instance. Specifically, the
"standard" names that will be accepted are /dev/mdX or
/dev/md/X, where X is a number. For instance,
/dev/md0 and /dev/md/0 are both acceptable.
Something else to be aware of: if you partition an array, the
partitions will be automatically named according to a specific convention as
well. For instance, the first partition you create on array
/dev/md2 will be called /dev/md2p1. You won't get to assign this
name; the partitioning utility will do this automatically.
Note that arrays created on a 2.4 kernel are not partitionable.
Arrays created on a 2.6 kernel or newer can be partitioned.
Getting back to our example, we chose /dev/md0 as the name
of our array. If we were to create a second array on this system,
we would probably call it /dev/md1, although any number other than
zero would be acceptable.
The next parameter, -n 2, simply indicates the
number of devices in our array. Since we are
mirroring two disk drives, this is the number we indicated here.
Next, the -l option (that's a lower-case L)
specifies the RAID level we wish
to implement. For disk mirroring, the RAID level is 1. The mdadm
utility will accept either "1" or "raid1" interchangeably.
This is about the simplest example we can use for creating a new array.
If we were to create a more complex RAID 5 array instead, we
would have to specify 3 or more disk partitions, like this:
# mdadm --create /dev/md1 -n 3 -l raid5 \
/dev/sda2 /dev/sdb2 /dev/sdc1
This command would create a RAID 5 array named /dev/md1
from the 3 disk partitions /dev/sda2, /dev/sdb2 and /dev/sdc1.
Note that there is no requirement to use the same partition number
on each disk. For instance, in the above example, we used the
second partition of disks /dev/sda and /dev/sdb,
and the first partition of /dev/sdc.
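Incidentally, if you do want to include a hot spare despite the
caveats mentioned earlier, mdadm provides the -x (or --spare-devices)
option for that purpose. Here is a sketch, with purely illustrative
device names:
# mdadm --create /dev/md1 -n 3 -l raid5 -x 1 \
        /dev/sda2 /dev/sdb2 /dev/sdc1 /dev/sdd1
The first three partitions become active members of the RAID 5 array
and the fourth is kept as a spare.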
By now, you have created the disk array of your choice,
such as a 2-disk mirror or perhaps a more elaborate
RAID 5 array or a large striping volume (RAID 0).
Regardless of the type you chose, the result is a
device node (such as /dev/md0) that looks to the
operating system as a single disk volume.
Now, what do you do with it?
Well, you can do with it just about anything you would do with a
standard disk drive. You can partition it (or not), you can use it
for swap, you can use it for a filesystem of any type, and you can
even encrypt it the same way you would a standard filesystem.
In most cases, however, you will probably just want to create a
filesystem on this new volume so you can store the valuable data
you wanted to safeguard by using RAID.
For instance, assuming you have created a new array named /dev/md0
and want to create a Reiser filesystem on it and mount it as /data, you would use:
# mkfs -t reiserfs /dev/md0
# mount /dev/md0 /data
Simple as that!
From this point on, whatever files you put in /data will be stored
on the array.
Of course, if you want this RAID partition to be
automatically mounted at boot-time, you would simply create an
entry in your /etc/fstab file (or whatever is
appropriate to your Linux distribution). This part is outside the
scope of this RAID tutorial, so we will assume the reader is
already familiar with basic system administration principles on the Linux platform.
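For reference, here is a sketch of what such an fstab entry might
look like, assuming the array /dev/md0 holds a Reiser filesystem
mounted at /data:
/dev/md0    /data    reiserfs    defaults    0  2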
We are now using the array as a standard filesystem, so
we know the array is working.
But is it healthy? After all, if one of the disks in a
RAID 1 or 5
array were faulty, you would not notice since that's the whole
point of fault tolerance.
So, how can you tell if any of the disks in your array are faulty?
Simple. There is a pseudo-file named
/proc/mdstat which contains current statistics on the array.
Simply cat this file to see what's going on, like this:
# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md1 : active raid1 sda3[0](F) sdb3[1]
7285376 blocks [2/1] [_U]
md0 : active raid1 sda1[0] sdb1[1]
4024128 blocks [2/2] [UU]
The above example shows a RAID 1 configuration (disk
mirroring) consisting of two disk drives, /dev/sda and /dev/sdb.
We have two separate arrays using different partitions on these
two drives. Specifically, /dev/md1 mirrors the third
partition of disks /dev/sda and /dev/sdb, while
/dev/md0 mirrors the first partition of each disk.
Next to each array name (md1 and md0 in this example), you get a
lot of information packed on a single line. First, we can see
that each array is active, which is good news so far.
We also see that each array is a RAID 1 configuration,
i.e. a disk mirror.
Finally, we get the list of all the physical
disks or disk partitions that make up each array. Next to each
device name, there is a number in square brackets (as in sda3[0]
or sdb3[1]). This number indicates the sequential order of each
member of the array. In this case, sda3 is the first member of the
array and sdb3 is the second member.
The order is useful to determine which member of an array is at
fault when something goes wrong.
In our example above, we can see that md0 consists of 2 disk
partitions and that both are active. That's what the "[2/2]"
represents. However, for md1, we see that while it is also made up
of 2 disks, only 1 is currently active, as indicated by the "[2/1]"
flag. This array has one faulty drive, but which one?
This is where the second indicator comes in, the one with the U's.
A "U" means "up" while an underscore means down, or faulty. The
string of characters is shown in the sequential order of the array
members, so disk 0 is first, disk 1 is next, and so on. Let's
examine this output again:
md1 : active raid1 sda3[0](F) sdb3[1]
7285376 blocks [2/1] [_U]
The "[_U]" indicator tells us the first member of the array is
down, while the second member is up and running. Looking at the
first line, we see that sda3 was the first member (as indicated by
the zero in square brackets), so that's the faulty device.
The "(F)" next to the device name also indicates this is the faulty
We will see in a minute how to fix this type of problem, but before
we do, let me say a few words about monitoring the array on a
real-life production machine.
We have just seen how to manually check the array by examining the
contents of /proc/mdstat using cat. This works fine, but
you would have to get in a habit of doing it every day to detect
failures or problems in a timely manner. This, of course, is not very practical.
Fortunately, there is a "--monitor" option to mdadm
which will cause the program
to periodically check the array for any change in status. When
this happens, the program will either
create an entry in the system log, email a notification to someone,
or invoke a program of your choice.
Of course, if you wish to be alerted immediately whenever
an array member becomes faulty, a log entry won't do the trick. An
email message would be great, but your system must be configured
with a mail server such as Postfix or Sendmail, which is rarely the
case on small office machines.
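If your system does have a working mail setup, the -m (or --mail)
option to mdadm specifies the destination address, as in this sketch
(the address shown is, of course, only an example):
# mdadm --monitor -m admin@example.com /dev/md0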
In my opinion, the best option is to get mdadm to invoke a program
of your choice. That program can be a simple shell script that
will print a notification on your default printer or display an
alert on your screen.
For instance, let's say you have created a short shell script named
/usr/bin/alert_me that will ensure you get the message. You could
invoke mdadm with the --monitor option like this:
# mdadm --monitor -p /usr/bin/alert_me /dev/md0
In this example, we are using the -p option (for "program")
to cause mdadm to invoke /usr/bin/alert_me whenever a change of
status is detected in the array /dev/md0.
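To be clear, /usr/bin/alert_me is not a standard utility; it is simply
a script you would write yourself. Here is a minimal sketch of what
it might look like, assuming a printer is configured on the system
(mdadm passes the event name and the array device as the first two
arguments to the program):
#!/bin/sh
# Print a warning to the default printer whenever mdadm reports an event.
echo "`date`: mdadm reported event $1 on array $2" | lpr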
Note that, except for testing purposes, you will usually want this
process to run in the background to constantly monitor your array.
To this end, there is the -f option, which daemonizes the monitoring process.
If you do not specify an array to monitor, mdadm will monitor all
arrays defined in the mdadm.conf configuration file, which is
probably what you want. Consequently,
the most common usage would be to run mdadm with the -f option and
without specifying any arrays, like this:
# mdadm --monitor -f -p /usr/bin/alert_me
Of course, if you want this process to be invoked automatically at
boot-time, you would include it in your boot-time scripts.
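The details vary from one distribution to another, but on many systems
of that era a one-line addition to /etc/rc.d/rc.local (or /etc/rc.local;
adjust the path for your distribution) is all it takes:
/sbin/mdadm --monitor -f -p /usr/bin/alert_me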
If you are not particularly fond of modifying boot-time scripts,
you can use a simple crontab job to periodically check the contents
of /proc/mdstat, looking for any underscores ("down" disks)
in the "[UU]" indicator. A small
shell script like this one is all it takes:
if grep "blocks.*_" /proc/mdstat
echo -e "`date`: Problem with the array!" | lpr
Note that we specify a default value for PATH at the top of the
script as a precaution to ensure that standard Linux commands
such as grep and lpr will be found even if the
program is invoked by cron.
You can surely make this script more sophisticated, but those few
lines will print a warning to the default printer if an underscore
appears in the contents of /proc/mdstat on the same line as the
number of blocks.
Assuming you have named this script /usr/bin/chk_arrays, you can
then run it once a day with a crontab entry like this:
10 4 * * * /usr/bin/chk_arrays
This example would run your chk_arrays script at 4:10 AM every day.
Again, I am assuming the reader is a qualified system administrator
familiar with using the crontab service to execute periodic
tasks, so I will not delve into the mechanics of using cron here.
In the example above, we determined that one member of our array
was faulty. What can possibly cause that?
A defective disk drive would certainly explain the problem, but in
many cases, it's just a glitch due to a timing issue at boot-time
or perhaps a power spike or some static electricity on a sensitive component.
So, the first time you detect a problem with your array, you should
probably not jump to the conclusion that your drive is defective.
Instead, I suggest adding the faulty disk back to the array and
see if it gets re-integrated successfully. If the disk continues
to work without incident for days or weeks afterwards, then it was
probably just a one-time glitch. However, if the same disk fails
again, then it's probably getting flaky and you should replace it
at the first convenient opportunity.
IMPORTANT: If you are going to physically remove a faulty disk from the
array while the array is operating (i.e. a "hot-swap"), you must
first mark that disk as "removed." We will see how to do this in
the next section.
Whether you are re-adding the same disk to the array or replacing
it with a new one, the procedure to add a disk to an array
is the same: you would use mdadm with the -a
option, as in this example:
# mdadm /dev/md1 -a /dev/sda3
In this example, we are adding the partition /dev/sda3 to the array /dev/md1.
(Note the unusual syntax: the array name is given first, followed by
the -a option and the device name.)
If this command fails initially, you may have to remove the disk from
the array before you can add it back. See the next section on how
to do this.
Note that we have covered in the previous section how to determine
which member of the array was at fault.
If you have jumped directly to this section
and missed this part, check out the previous section to see how
we identified /dev/sda3 as the failing drive.
A disk in an array can be either active, failed or removed.
"Active" is the normal state of a functioning disk; "failed" means
the disk is still part of the array but is not functioning
properly; and "removed" means the disk is no longer a member of the
When a disk becomes faulty, the array will automatically mark
it as "failed" and will no longer attempt to read from it or write to it.
If you are going to replace this faulty disk
while the array is operating, you
must first remove it as a member of the array using the -r option, like this:
# mdadm /dev/md1 -r /dev/sda3
mdadm: hot removed /dev/sda3
Then, it is safe to physically swap out the drive and insert a new
one, which you will then partition and add to the array using
the -a option to mdadm as described earlier.
Important: You should only hot-swap a drive if you are using
hot-swappable hardware, which usually involves a special cabinet.
If you are using standard commodity disk drives not specifically
designed for hot-swapping, then shut your system down and power it off
before replacing any components.
Note that you can manually flag a disk as "failed" using the -f
option, like this:
# mdadm /dev/md1 -f /dev/sda3
mdadm: set /dev/sda3 faulty in /dev/md1
This may be useful when you intend to remove a healthy drive from
the array. You would first mark it as failed using -f, and then
remove it using -r.
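As a convenience, both steps can be combined into a single command,
like this:
# mdadm /dev/md1 -f /dev/sda3 -r /dev/sda3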
Manually "failing" a disk may also be useful for testing purposes.
Let's assume you have detected a failed drive in the array
and have used the -a option to mdadm to add it back. For the next
few minutes, and possibly a few hours if the array is large, the
RAID driver in your kernel will laboriously re-construct the data
on the new disk to bring it back in sync with the other(s).
While this is happening, users can continue to use the system
normally, although they might notice a small degradation in
performance while the array is being rebuilt.
So, how can you tell when this process is over? How can you
monitor how well it's progressing? Again, the answer is to examine
the contents of /proc/mdstat. If rebuilding is in progress, you
will get something like this when you cat this pseudo-file:
# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md1 : active raid1 hda6[0] hdb6[1]
7285376 blocks [2/1] [_U]
[====>................] recovery = 20.0% (1464688/7285376) finish=8.6min
md0 : active raid1 hda1[0] hdb1[1]
4024128 blocks [2/2] [UU]
The above example was run on a 2.4 kernel, where IDE disk
drives are represented with names starting with /dev/hd (for "hard disk").
Here, we see that array /dev/md1 is being rebuilt; a progress
bar shows us how far along this process has come.
In this case, the process is 20% done and the system estimates it
will be completed in 8.6 minutes at the current rate of progress.
When the rebuilding process has completed, the progress bar will
no longer show up and the healthy "[2/2]" and "[UU]" indicators will return.
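If you prefer not to type the cat command over and over, the watch
utility (available on most distributions) will refresh the display at
regular intervals for you, for example every 5 seconds:
# watch -n 5 cat /proc/mdstat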
So far, we have created an array, created a filesystem on it,
mounted it and used it.
Now, what if you wanted to de-activate the array for some reason?
Easy: just invoke mdadm with the -S option (or its long synonym, --stop) to stop the array.
Of course, if you had mounted the array as a filesystem, you must first
un-mount that filesystem (using umount) before you
can de-activate the array.
For instance, to de-activate the array /dev/md0 which we might have
mounted as a local filesystem, we would first un-mount the
filesystem, like this:
# umount /dev/md0
Then, we could safely stop the array with this:
# mdadm -S /dev/md0
mdadm: stopped /dev/md0
At this point, you would no longer see /dev/md0 in the contents of
/proc/mdstat; that array would no longer exist on the system. That
doesn't mean all the data on the array has been destroyed; on the
contrary, the data is still intact on the array members — it
is simply no longer accessible while the array is stopped.
To re-activate the array, you must use either the --build option or the
--assemble option, depending on whether you had originally
created the array with the --build or --create option, respectively.
We have seen earlier that --create will populate the partitions'
superblocks with information about the array. The --assemble
option simply uses this information to determine the array's RAID level,
the number of disks it comprises,
the number of spares, if any, and so on. As
a result, --assemble is the easiest and safest option to
re-activate an array, but you must have used --create initially
to construct this array (which is the recommended way).
If you did, you can re-activate /dev/md0 using this simple syntax:
# mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
On the other hand, if you have used --build to construct your array
in the first place, then there is no information regarding that
array in the superblocks of its members and you will need to use
the same syntax again to re-create it, specifying all the original
details. For instance, to re-activate a 2-disk RAID 1 array,
named /dev/md1 and consisting of partitions /dev/sda3 and /dev/sdb3,
you would need something like this:
# mdadm --build /dev/md1 -n 2 -l raid1 /dev/sda3 /dev/sdb3
Again, let me stress that reassembling or rebuilding an
array does not zero-out or damage its contents in any way;
it just makes them accessible again through the appropriate
/dev/mdX device node.
Finally, here is a handy short-cut: To re-assemble all arrays on
the system that are not currently active,
use the --scan parameter in conjunction with --assemble, like this:
# mdadm --assemble --scan
mdadm: /dev/md1 has been started with 2 drives
For convenience, mdadm supports a configuration file named
mdadm.conf containing various default values and custom
configuration settings for your system.
Typically, that file will be located in /etc or in /etc/mdadm.
As usual, we will only examine a small but useful subset of all
the available configuration options for that file.
Probably the most important set of defaults stored in mdadm.conf is
the list of array members that should be automatically assembled at
boot-time to reconstruct your array(s).
For instance, if you have configured a 2-disk mirror on your
system using /dev/sda3 and /dev/sdb3 and would like this array to
be automatically reconstructed at boot-time, you would create the
following entry in your mdadm.conf file:
ARRAY /dev/md0 devices=/dev/sda3,/dev/sdb3
If you have created the array using the --create option, all the
information related to the RAID level, number of devices, and so
on is already stored in the devices' superblocks, so the above is enough
information for mdadm to figure out how to reassemble the array.
However, mdadm is flexible enough to scan a set of devices and
figure out what needs to be reassembled without any specific
instructions. To make this work, you would simply specify a set of
devices to scan and allow mdadm to figure things out on its own.
That set of devices to scan is specified with the DEVICE
keyword, like this:
DEVICE /dev/sda* /dev/sdb*
ARRAY /dev/md0 UUID=9187a482:5dde19d9:eea3cc4a:d646ab8b
In this example, mdadm will scan all partitions on /dev/sda and
/dev/sdb and look for members of the array known by the specified
UUID ("Universally Unique IDentifier").
The resulting array will be named /dev/md0.
(We will see shortly how to determine the UUID of an array.)
To make things even more flexible, you can tell mdadm to scan
all disk partitions on the system by specifying the keyword
partitions instead of listing the devices to scan. When
that keyword is used, mdadm will examine the contents of
/proc/partitions and will scan the superblocks of all
the partitions in that list to locate array members, like this:
DEVICE partitions
ARRAY /dev/md0 UUID=9187a482:5dde19d9:eea3cc4a:d646ab8b
A UUID, or Universally Unique Identifier,
is a string of 32 hexadecimal digits which is assigned to a
device to uniquely identify it. This is a more stable way to
identify a disk or array than using standard Linux names such as
/dev/sda or /dev/md1, for instance, since these names can sometimes
be automatically reassigned due to hardware changes on the system.
For instance, an external USB disk drive will be given the next
SCSI disk identifier under Linux, such as /dev/sdc. If another
external disk is plugged into the system, it will be given the name
/dev/sdd, and so on. At boot-time, if one disk initializes more
quickly than the other, that disk will be detected as /dev/sdc
and the slower disk will then become /dev/sdd, which may be the
reverse of what they were named at the last system boot. Using
UUIDs to refer to these disks eliminates this potential confusion.
The -D (or --detail) option to mdadm gives you a lot of information
about the specified array, including the UUID, as in this example:
# mdadm -D /dev/md2
Version : 0.90
Creation Time : Wed Dec 30 20:20:42 2009
Raid Level : raid1
Array Size : 250003456 (238.42 GiB 256.00 GB)
Used Dev Size : 250003456 (238.42 GiB 256.00 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Wed Aug 3 08:13:09 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : c5b70045:0e67d425:382b02f3:7cfbedcb
Events : 0.560929
Number Major Minor RaidDevice State
0 8 20 0 active sync /dev/sdb4
1 8 4 1 active sync /dev/sda4
Note: Since mdadm needs to read from
device superblocks to obtain much of its information, some options
require you to have superuser privileges, i.e. to be logged in as
root or to use sudo to successfully execute the command.
Another convenient option is to invoke mdadm with the --examine
(or -E) option in conjunction with the --scan option to
cause the utility to scan the entire system for all arrays and generate
output lines that can be used directly in your mdadm.conf file, like this:
# mdadm --examine --scan
ARRAY /dev/md3 UUID=2c4a0ac9:34a7e9c4:c125a9e4:b033b3aa
ARRAY /dev/md4 UUID=a63e5dce:5f539d3d:e4a565b5:56b2fcf2
ARRAY /dev/md2 UUID=c5b70045:0e67d425:382b02f3:7cfbedcb
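Since this output is already in the correct format, a common approach
is simply to append it to the configuration file after reviewing it
to avoid duplicate entries (the exact path may be /etc/mdadm.conf or
/etc/mdadm/mdadm.conf depending on your distribution):
# mdadm --examine --scan >> /etc/mdadm.conf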
For most people, the most difficult thing to figure out is how
to delete an unwanted array. This is because information in
the array is stored in the superblock, so the mdadm command
appears to never forget entirely about an array that you
created in the past, even after you think you have removed
all visible traces of it.
As we have seen before, you can use the --stop option (or -S)
to stop (de-activate) an array, like this:
# mdadm --stop /dev/md1
mdadm: stopped /dev/md1
That might certainly get rid of the array from the /proc/mdstat
list, but you can still "see" it when using the --examine option, like this:
# mdadm --examine --scan
ARRAY /dev/md0 UUID=fc4ae565:a6507797:96d73425:077260fe
ARRAY /dev/md1 UUID=01b7d97b:8272a525:b9b64416:664988b1
And of course, using "mdadm --assemble --scan" will magically
bring it back.
So, what if you want to get rid of this array completely, including
all its contents?
You might try removing each device from the array
(using mdadm -r), but you will
find that you cannot remove the last one since it will always
be active as long as the array is running, and you cannot
remove a device from an array unless it is running.
The fact is, the only way to cause mdadm to forever forget
about an array you have created with the --create option is to
zero-out the superblock of each partition which is a member of the array.
To do this, make sure the array is not mounted (use umount to
unmount it if appropriate), then stop the array as in this example:
# mdadm --stop /dev/md1
mdadm: stopped /dev/md1
Then, invoke mdadm with the --zero-superblock option to zero-out the
superblocks of each member of the array, like this:
# mdadm --zero-superblock /dev/sda3
# mdadm --zero-superblock /dev/sdb3
That's it, you're done! Now, there is no way that mdadm will
automatically find this array since the information it was
using to put it back together is now gone.
Keep in mind that since you have not overwritten the data
portions of the two partitions which were making up this array,
your data is still
available if you wished to manually re-create the array. For
instance, this command would reconstruct the array:
# mdadm --build /dev/md1 -n 2 -l raid1 /dev/sda3 /dev/sdb3
If you want to categorically eliminate this data, you should
change the partition type of each member to a standard Linux
partition (83) instead of RAID (fd), and then create a
filesystem on each partition or overwrite each partition with zeroes or
garbage using the method of your choice.
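For instance, here is a sketch using dd to overwrite one member
partition with zeroes; triple-check the device name before running
it, since this destroys the data irrevocably:
# dd if=/dev/zero of=/dev/sda3 bs=1M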
Keep in mind that there are usually several ways to get something
done — we have shown here only the simplest or most common
methods to get you started quickly.