When down-converting a RAID1 device, it is the last device that is extracted
and removed when the user does not specify a particular device. However,
when a device is specified (and it is not the last), the device is removed and
the remaining sub-LVs are "shifted down" to fill the hole. This causes
problems when resuming the LV: if the shifted devices were resumed (and
thus renamed) before the sub-LV being extracted, there would be a name
conflict. The solution is to resume the extracted sub-LVs first so that
they are properly renamed, preventing a possible conflict.
This addresses bug 801967.
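The rename-ordering issue can be illustrated with a toy model. This is not LVM code: the sub-LV names (`lv_rimage_N`), the `_extracted` suffix, and the `downconvert` helper are all hypothetical, chosen only to show why renaming the extracted sub-LV out of the way before shifting the remaining ones avoids the collision.

```python
# Toy model of the sub-LV name conflict (hypothetical names, not LVM code).
# A 3-way RAID1 has sub-LVs lv_rimage_0..2. Extracting lv_rimage_1 shifts
# lv_rimage_2 down into the lv_rimage_1 slot. If the shift happens while
# the extracted sub-LV still holds its old name, the rename collides.

def rename(table, old, new):
    """Rename a sub-LV; fail if the target name is already taken."""
    if new in table:
        raise ValueError(f"name conflict: {new} already exists")
    table.remove(old)
    table.add(new)

def downconvert(sublvs, extracted_index, extracted_first):
    table = set(sublvs)
    extracted = sublvs[extracted_index]
    if extracted_first:
        # Rename (resume) the extracted sub-LV first, moving it out of the way.
        rename(table, extracted, extracted + "_extracted")
    # Shift the remaining sub-LVs down to fill the hole.
    for i, name in enumerate(sublvs[extracted_index + 1:], extracted_index):
        rename(table, name, f"lv_rimage_{i}")
    if not extracted_first:
        rename(table, extracted, extracted + "_extracted")
    return sorted(table)

sublvs = ["lv_rimage_0", "lv_rimage_1", "lv_rimage_2"]
print(downconvert(sublvs, 1, extracted_first=True))
# ['lv_rimage_0', 'lv_rimage_1', 'lv_rimage_1_extracted']

try:
    # Shifting first collides: lv_rimage_2 -> lv_rimage_1 while it exists.
    downconvert(sublvs, 1, extracted_first=False)
except ValueError as e:
    print(e)
# name conflict: lv_rimage_1 already exists
```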
Version 2.02.96 -
================================
+ Fix name conflicts that prevent down-converting RAID1 when specifying a device
Improve thin_check option passing and use configured path.
Add --with-thin-check configure option for path to thin_check.
Detect lvm binary path in lvmetad udev rules.
return 0;
}
+ /*
+ * We resume the extracted sub-LVs first so they are renamed
+ * and won't conflict with the remaining (possibly shifted)
+ * sub-LVs.
+ */
+ dm_list_iterate_items(lvl, &removal_list) {
+ if (!resume_lv(lv->vg->cmd, lvl->lv)) {
+ log_error("Failed to resume extracted LVs");
+ return 0;
+ }
+ }
+
/*
* Resume original LV
- * This also resumes all other sub-lvs (including the extracted)
+ * This also resumes all other sub-LVs
*/
if (!resume_lv(lv->vg->cmd, lv)) {
log_error("Failed to resume %s/%s after committing changes",
done
done
done
+
+# 3-way to 2-way convert while specifying devices
+lvcreate --type raid1 -m 2 -l 2 -n $lv1 $vg $dev1 $dev2 $dev3
+wait_for_sync $vg/$lv1
+lvconvert -m1 $vg/$lv1 $dev2
+lvremove -ff $vg
+
#
# FIXME: Add tests that specify particular devices to be removed
#