Zdenek Kabelac [Mon, 26 Jun 2017 16:22:40 +0000 (18:22 +0200)]
libdm: fix initialization of head for reused structure
Dmeventd reuses the 'dm_task' struct for some STATUS operations, but due to
missing reinitialization of the dm_task target list, received events were
misprocessed: the parsed target was simply appended to the target list left
over from the previous status, causing multiple actions to be called for a
single event.
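For illustration, a minimal stand-alone sketch of the failure mode and the
fix - the struct and function names here are toy stand-ins, not the actual
libdm internals:

    #include <stddef.h>

    struct target {
            struct target *next;
    };

    struct task {
            struct target *head, *tail;     /* toy target list */
    };

    /* Without this reset before reuse, targets parsed from the next
     * STATUS ioctl get appended after the stale entries left over
     * from the previous one. */
    static void task_reinit_targets(struct task *t)
    {
            t->head = t->tail = NULL;
    }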
Zdenek Kabelac [Sat, 24 Jun 2017 20:28:25 +0000 (22:28 +0200)]
raid: allow more sync actions for extraction
Since status reporting from 'md' can pass through a large set of
weird transient states, we can't decide based on this word alone.
So let it pass for 'rebuild' and 'idle' as well,
and check for healthy devices afterwards.
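A sketch of the relaxed check, assuming the sync action word is taken
verbatim from the dm-raid status output:

    #include <string.h>

    /* Let 'idle' and 'rebuild' pass as well; the real decision is
     * made by the follow-up check of the per-device health chars. */
    static int sync_action_may_extract(const char *sync_action)
    {
            return !strcmp(sync_action, "idle") ||
                   !strcmp(sync_action, "rebuild");
    }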
Zdenek Kabelac [Sat, 24 Jun 2017 11:24:26 +0000 (13:24 +0200)]
libdm: implement dm_percent_to_round_float
Add a function to adjust the printing of percent values in a better way.
Rounding follows these rules:
0% & 100% are always reported exactly, with .0 decimal points.
For values slightly above 0% we make sure the nearest bigger
non-zero value at the given precision is printed
(i.e. 0.01 is shown for %.2f).
For values closely approaching 100% we likewise detect and adjust the
value so that it is less than 100 when printed
(i.e. 99.99 is shown for %.2f).
Other values are left to the standard rounding mechanism,
since we mainly care about the corner values 0 & 100, which need to be
printed precisely.
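A stand-alone sketch of these rules (not the verbatim libdm
implementation); 'digits' is the number of decimal places that will be
printed:

    #include <math.h>

    static float round_percent(float percent, unsigned digits)
    {
            float f = powf(10.0f, (float) digits);
            float r;

            if (percent == 0.0f || percent == 100.0f)
                    return percent;            /* endpoints stay exact */

            r = roundf(percent * f) / f;

            if (r == 0.0f)
                    return 1.0f / f;           /* 0.004% -> 0.01 for %.2f */

            if (r == 100.0f)
                    return 100.0f - 1.0f / f;  /* 99.996% -> 99.99 for %.2f */

            return r;                          /* standard rounding */
    }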
Zdenek Kabelac [Fri, 23 Jun 2017 20:50:19 +0000 (22:50 +0200)]
raid: recognize transient failed raid leg
When a raid leg rimage device is marked as 'D'ead by the md core,
lvm2 was not able to replace such a device with the allocate policy,
as the device had not appeared as missing.
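For illustration, dm-raid status reports one health character per leg,
so a transiently dead leg can be spotted like this (a sketch, not the
lvm2 code path):

    #include <string.h>

    /* 'D' marks a dead (failed) device that is still present in the
     * array rather than reported as missing. */
    static int raid_has_dead_leg(const char *health_chars)
    {
            return strchr(health_chars, 'D') != NULL;
    }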
Zdenek Kabelac [Fri, 23 Jun 2017 12:40:10 +0000 (14:40 +0200)]
raid: improve messages for regionsize change
Handle a change of 'region size' better and also follow the standard rule:
if the command can't succeed (i.e. the size is already the same), we return
an error in all such cases.
Also log_print more info about the adjusted value (just like we
do for rounding).
Also avoid keeping pointers to 'display_*' values - they live in a
ring buffer for immediate use and are not to be kept across multiple calls
(they could already be overwritten by later calls) - so
seg_region_size_str has been dropped.
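A toy illustration of the ring-buffer pitfall (lvm2's display_* helpers
behave similarly; this stand-alone version only shows the idea):

    #include <stdio.h>

    static char _ring[4][64];
    static unsigned _slot;

    static const char *format_size(unsigned long long sectors)
    {
            char *buf = _ring[_slot++ % 4];

            snprintf(buf, sizeof(_ring[0]), "%llu sectors", sectors);
            return buf;     /* valid only until the slot is reused */
    }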
Zdenek Kabelac [Thu, 22 Jun 2017 17:53:27 +0000 (19:53 +0200)]
cache: fix lvdisplay --maps
'lvdisplay -m' tried to walk through NULL policy settings
when no such policy was defined for a CachedLV.
This patch fixes the display of a cache-pool without defined settings,
as such a pool is now valid and we mostly want users to define
these settings when actually caching an LV.
Zdenek Kabelac [Thu, 22 Jun 2017 15:12:27 +0000 (17:12 +0200)]
cache: drop usage of origin_only
Since a cache LV can be a stacked device, there is no real reason
to try to use the slightly optimised tree for an origin_only cache reload
(it could even be wrongly implemented in this case).
Zdenek Kabelac [Thu, 22 Jun 2017 15:08:47 +0000 (17:08 +0200)]
cache: make syncing abortable by user
When the user runs a command like 'lvconvert --splitcache', the operation
might actually be slow or make no progress in the kernel,
so let's give the user a chance to abort such an operation.
When the user presses Ctrl+C, the device table is restored to its
pre-flushing state.
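A minimal sketch of such an abortable wait, assuming a hypothetical
flush_in_progress() progress check; on abort the caller would reload the
table that was active before flushing:

    #include <signal.h>
    #include <unistd.h>

    static volatile sig_atomic_t _sigint_caught;

    static void _catch_sigint(int sig)
    {
            (void) sig;
            _sigint_caught = 1;
    }

    static int wait_for_flush_abortable(int (*flush_in_progress)(void))
    {
            struct sigaction sa = { .sa_handler = _catch_sigint };

            sigaction(SIGINT, &sa, NULL);

            while (flush_in_progress()) {
                    if (_sigint_caught)
                            return 0;  /* aborted: restore pre-flush table */
                    sleep(1);
            }

            return 1;                  /* flush completed */
    }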
raid: avoid explicit activation of SubLVs on reshape/takeover
Remove explicit activation of SubLVs and let lv_update_and_reload()
perform the proper (pre-)loading sequencing of tables.
This makes the related callback functions unnecessary, so they are removed.
Zdenek Kabelac [Tue, 20 Jun 2017 15:11:42 +0000 (17:11 +0200)]
activation: fix usage of origin_only
When the lock-holding LV differs from the LV actually requested for
locking, we drop the origin_only flag as it has no use - it would be
applied to a completely different LV.
Example of the problem:
A raid LV is a thin-pool _tdata LV.
The raid runs origin_only locking on the stacked device.
The thinLV is discovered as the lock holder.
The whole origin_only operation is then applied only to the thinLV,
changing the meaning of the whole operation.
NOTE: this patch does not change anything for LVs that are
already top-level lock-holding LVs (i.e. thinLVs, snapshots/origins).
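A toy model of the rule (the real code uses lvm2's internal lock-holder
lookup): origin_only is only honoured when the lock is taken on the LV
itself:

    #include <stdbool.h>

    struct lv {
            struct lv *lock_holder;  /* NULL if the LV holds its own lock */
    };

    static bool effective_origin_only(const struct lv *lv, bool origin_only)
    {
            const struct lv *holder = lv->lock_holder ? lv->lock_holder : lv;

            return (holder == lv) && origin_only;
    }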
Zdenek Kabelac [Mon, 19 Jun 2017 10:37:40 +0000 (12:37 +0200)]
fsadm: restore no answer
Commit 1fe4f80e45a6bfcceed5aaab97fc0e27dfcf2b88 in the current version
introduced a regression for terminal users, who could not enter 'n'
as an answer. Add the missing break for this case (No whats_new).
Jonathan Brassow [Fri, 16 Jun 2017 15:11:58 +0000 (10:11 -0500)]
test: New test file for validating kernel status during sync ops
New tests check for '100%' in-sync at the start of the "recover"
process (it shouldn't happen, but I've seen it before). Also,
check status over the whole cycle of various sync processes ("resync"
and "recover").
Zdenek Kabelac [Fri, 16 Jun 2017 11:20:25 +0000 (13:20 +0200)]
raid: report percent with segtype info
Enhance the reporting code so that it does not need an extra ioctl to
get the 'status' of a normal raid and can provide the percentage directly.
When we have a snapshot 'merging' into a raid origin, we still need to get
this secondary number with an extra status call - however, since a 'raid'
is always a single-segment LV, we may skip the 'copy_percent' call as
we know the percent directly, and with better precision.
NOTE: for mirror we still base the reported number on the percentage of
transferred extents, which might get quite imprecise if a big extent
size is used while the volume itself is small, as the reporting jumps
in steps much bigger than the precision the reported number suggests.
2nd NOTE: a raid lvs line report already requires quite a few extra status
calls for the same device - but that fix will need a slight code improvement.
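dm-raid status reports the sync ratio as "in_sync/total" sectors, so the
percentage can be derived directly from that field instead of issuing a
second status ioctl - a sketch:

    #include <stdio.h>

    static int raid_percent_from_ratio(const char *ratio, float *percent)
    {
            unsigned long long in_sync, total;

            if (sscanf(ratio, "%llu/%llu", &in_sync, &total) != 2 || !total)
                    return 0;

            *percent = (float) in_sync * 100.0f / (float) total;
            return 1;
    }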
Jonathan Brassow [Thu, 15 Jun 2017 16:06:08 +0000 (11:06 -0500)]
test: New test file for validating kernel status during sync ops
First test in this file checks whether 'aa' is ever spotted during
a "recover" operation (it should not be). More tests should follow
in this file to look for oddities in status output - especially as
it relates to the sync_ratio, dev_health, and sync_action fields.
Jonathan Brassow [Wed, 14 Jun 2017 19:49:42 +0000 (14:49 -0500)]
clean-ups: remove unused var, add 'static' for local fn, adjust test
For the test clean-up, I was providing too many devices to the first
command - possibly allowing it to allocate in the wrong place. I was
also not providing a device for the second command - virtually ensuring
the test was not performing correctly at times.
Jonathan Brassow [Wed, 14 Jun 2017 13:41:05 +0000 (08:41 -0500)]
lvconvert: Disallow removal of primary when up-converting (recovering)
This patch ensures that under normal conditions (i.e. not during repair
operations) users are prevented from removing devices that would
cause data loss.
When a RAID1 is undergoing its initial sync, it is ok to remove all but
one of the images because they have all existed since creation and
contain all the data written since the array was created. OTOH, if the
RAID1 was created as a result of an up-convert from linear, it is very
important not to let the user remove the primary image (the source of
all the data). They should be allowed to remove any devices they want
and as many as they want as long as one original (primary) device is left
during a "recover" (aka up-convert).
This fixes bug 1461187 and includes the necessary regression tests.
Jonathan Brassow [Wed, 14 Jun 2017 13:39:50 +0000 (08:39 -0500)]
RAID (lvconvert/dmeventd): Cleanly handle primary failure during 'recover' op
Add the checks necessary to distinguish the state of a RAID when the primary
source for syncing fails during the "recover" process.
It has been possible to hit this condition before (like when converting from
2-way RAID1 to 3-way and having the first two devices die during the "recover"
process). However, this condition is now more likely since we treat linear ->
RAID1 conversions as "recover" - so it is especially important that we
handle this condition cleanly.
Jonathan Brassow [Wed, 14 Jun 2017 13:39:07 +0000 (08:39 -0500)]
lvconvert: Don't require a 'force' option during RAID repair.
Previously, we were treating non-RAID to RAID up-converts as a "resync"
operation. (The most common example being 'linear -> RAID1'.) RAID to
RAID up-converts or rebuilds of specific RAID images are properly treated
as a "recover" operation.
Since we were treating some up-convert operations as "resync", it was
possible to have scenarios where data corruption or data loss were
possibilities if the RAID hadn't been able to sync completely before a
loss of the primary source devices. In order to ensure that the user took
the proper precautions in such scenarios, we required a '--force' option
to be present. Unfortunately, the force option was rendered useless
because there was no way to distinguish the failure state of a potentially
destructive repair from a nominal one - making the '--force' option a
requirement for any RAID1 repair!
We now treat non-RAID to RAID up-converts properly as "recover" operations.
This eliminates the scenarios that can potentially cause data loss or
data corruption; and this eliminates the need for the '--force' requirement.
This patch removes the requirement to specify '--force' for RAID repairs.
Jonathan Brassow [Wed, 14 Jun 2017 13:33:42 +0000 (08:33 -0500)]
lvconvert: linear -> raid1 upconvert should cause "recover" not "resync"
Two of the sync actions performed by the kernel (aka MD runtime) are
"resync" and "recover". The "resync" refers to when an entirely new array
is going through the process of initializing (or resynchronizing after an
unexpected shutdown). "Recover" is the process of initializing a new
member device into the array. So, a brand new array with all new devices
will undergo "resync". An array with replaced or added sub-LVs will undergo
"recover".
These two states are treated very differently when failures happen. If any
device is lost or replaced during a "resync", there are no worries. This is
because any writes created from the inception of the array have occurred to
all the devices and can be safely recovered. Even though non-initialized
portions will still be resync'ed with uninitialized data, it is ok. However,
if a pre-existing device is lost (aka, the original linear device in a
linear -> raid1 convert) during a "recover", data loss can be the result.
Thus, writes are errored by the kernel and recovery is halted. The failed
device must be restored or removed. This is the correct behavior.
Unfortunately, we were treating an up-convert from linear as a "resync"
when we should have been treating it as a "recover". This patch
removes the special case for linear upconvert. It allows each new image
sub-LV to be marked with a rebuild flag and treats the array as 'in-sync'.
This has the correct effect of causing the upconvert to be treated as a
"recover" rather than a "resync". There is no need to flag these two states
differently in LVM metadata, because they are already considered differently
by the kernel RAID metadata. (Any activation/deactivation will properly
resume the "recover" process and not a "resync" process.)
We make this behavior change based on the presence of dm-raid target
version 1.9.0+.
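For illustration, a hypothetical dm-raid table line for such an upconvert
(device numbers made up) - the 'rebuild 1' parameter marks the new image,
so the kernel runs "recover" rather than "resync":

    0 2097152 raid raid1 5 0 region_size 1024 rebuild 1 2 254:3 254:4 254:5 254:6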
lvconvert: fix detached SubLV deactivation in cluster
On conversion from raid10 to raid0 (takeover), all rmeta
devices and the rimage devices of mirrored stripes are
detached from the raid10 LV. The remaining rimage areas
are being shifted down into the slots of the detached
ones hence requiring renames to show proper _N suffix
sequences (e.g. 0,1,2,3 instead of 0,2,4,6). Only the
top-level raid10 LV has a cluster lock, not the detached
SubLVs, thus their deactivation is impossible and e.g. the
rename from *_rimage_6 to *_rimage_3 will fail. Fix by
activating exclusively before deactivating and removing.
Bryn M. Reeves [Wed, 7 Jun 2017 18:26:09 +0000 (19:26 +0100)]
dmfilemapd: update file block count at daemon start
The file block count stored in the filemap_monitor was lazily
initialised at the time of the first check. This causes problems
when the file has been truncated between the time the daemon started
and this first check: the initial block count and current
block count match, and the daemon fails to detect a change.
Separate the setting of the block count from the check and make a
call to update the value at the start of _dmfilemapd().
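A sketch of such an explicit refresh (names are illustrative, not the
dmfilemapd internals):

    #include <stdint.h>
    #include <sys/stat.h>

    /* Refresh the cached block count at daemon start, so a truncation
     * that happened before startup is noticed. */
    static int update_monitored_block_count(int fd, uint64_t *nr_blocks)
    {
            struct stat st;

            if (fstat(fd, &st))
                    return 0;

            *nr_blocks = (uint64_t) st.st_blocks;  /* 512-byte units */
            return 1;
    }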
Bryn M. Reeves [Wed, 7 Jun 2017 18:21:10 +0000 (19:21 +0100)]
libdm: allow truncated files in dm_stats_update_regions_from_fd()
It's not an error to attempt to update regions from an fd that has
been truncated (or otherwise no longer has any allocated extents):
in this case, the call should remove all regions corresponding to
the group, and return an empty region table.
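A usage sketch (dms, fd and group_id are assumed to be set up already);
a file that has lost all of its extents yields an empty region table
rather than an error:

    #include <libdevmapper.h>

    static uint64_t *update_file_regions(struct dm_stats *dms, int fd,
                                         uint64_t group_id)
    {
            return dm_stats_update_regions_from_fd(dms, fd, group_id);
    }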
Prohibit activation of reshaping RaidLVs on incompatible
lvm2 runtime by storing e.g. 'raid5+RESHAPE' segment type
strings in the lvm2 metadata. An incompatible runtime that does not
support reshaping won't be able to activate those, thus avoiding
potential data corruption.
Any new non-reshaping lvconvert command will reset the
segment type string from 'raid5+RESHAPE' to 'raid5'.