Zdenek Kabelac [Thu, 22 Jun 2017 17:53:27 +0000 (19:53 +0200)]
cache: fix lvdisplay --maps
'lvdisplay -m' tried to go through NULL policy settings
when no such policy was defined for the cached LV.
The patch fixes the display of a cache pool without defined settings,
as such a pool is now valid and we mostly want users to define
these settings only when actually caching an LV.
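A minimal illustrative reproducer (VG, LV and pool names are made up):
cache an LV without defining any policy settings, then display its mapping:
  lvcreate --type cache-pool -L 1G -n cpool vg
  lvcreate -L 10G -n main vg
  lvconvert --type cache --cachepool vg/cpool vg/main   # no policy settings defined
  lvdisplay -m vg/main     # previously tripped over the NULL settings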
Zdenek Kabelac [Thu, 22 Jun 2017 15:12:27 +0000 (17:12 +0200)]
cache: drop usage of origin_only
Since a cache LV can be a stacked device, there is no real reason
to try the slightly optimised tree for an origin_only cache reload
(it could even be implemented wrongly in this case).
Zdenek Kabelac [Thu, 22 Jun 2017 15:08:47 +0000 (17:08 +0200)]
cache: make syncing abortable by user
When the user runs a command like 'lvconvert --splitcache', the operation
might be slow or make no progress in the kernel,
so give the user a chance to abort such an operation.
When the user presses Ctrl+C, the device table is restored to its pre-flushing state.
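Illustrative example (names are made up):
  lvconvert --splitcache vg/main   # flushes the cache pool before detaching it
  # Ctrl+C during a slow or stalled flush now aborts and restores the original table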
raid: avoid explicit activation of SubLVs on reshape/takeover
Remove explicit activation of SubLVs and let lv_update_and_reload()
perform the proper (pre-)loading sequencing of tables.
This makes the related callback functions unnecessary, so they are removed.
Zdenek Kabelac [Tue, 20 Jun 2017 15:11:42 +0000 (17:11 +0200)]
activation: fix usage of origin_only
When the lock-holding LV differs from the LV actually requested to be locked,
we drop the origin_only flag as it has no use - it would be applied
to a completely different LV.
Example of the problem:
A raid LV is the thin-pool _tdata LV.
The raid LV runs origin_only locking on the stacked device.
The thin LV is discovered as the lock holder.
The whole origin_only operation is then applied only to the thin LV,
changing the meaning of the whole operation.
NOTE: this patch does not change anything for LVs that are
already top-level lock-holding LVs (i.e. thin LVs, snapshots/origins).
Zdenek Kabelac [Mon, 19 Jun 2017 10:37:40 +0000 (12:37 +0200)]
fsadm: restore no answer
Commit 1fe4f80e45a6bfcceed5aaab97fc0e27dfcf2b88 introduced a regression
for terminal users, who could not enter 'n' as an answer.
Add the missing break for this case. (No whats_new.)
Jonathan Brassow [Fri, 16 Jun 2017 15:11:58 +0000 (10:11 -0500)]
test: New test file for validating kernel status during sync ops
New tests check for '100%' in-sync at the start of the "recover"
process (it shouldn't happen, but I've seen it before). Also,
check status over the whole cycle of various sync processes ("resync"
and "recover").
Zdenek Kabelac [Fri, 16 Jun 2017 11:20:25 +0000 (13:20 +0200)]
raid: report percent with segtype info
Enhance the reporting code so it does not need an extra ioctl to
get the status of a normal raid LV and can provide the percentage directly.
When a snapshot is merging into a raid origin, we still need to get
this secondary number with an extra status call - however, since a raid LV
is always a single-segment LV, we may skip the 'copy_percent' call, as
we already know the percentage, and with better precision.
NOTE: for mirrors we still base the reported number on the percentage of
transferred extents, which can get quite imprecise if a large extent
size is used while the volume itself is small, as the reporting steps
are much coarser than the precision the reported number suggests.
2nd NOTE: the raid lvs line report already requires quite a few extra status
calls for the same device - but that fix will need a slight code improvement.
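The percentage is visible through the usual report, e.g. (illustrative;
copy_percent is the long-standing field name):
  lvs -a -o name,segtype,copy_percent vg   # percentage now obtained together with the segtype status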
Jonathan Brassow [Thu, 15 Jun 2017 16:06:08 +0000 (11:06 -0500)]
test: New test file for validating kernel status during sync ops
First test in this file checks whether 'aa' is ever spotted during
a "recover" operation (it should not be). More tests should follow
in this file to look for oddities in status output - especially as
it relates to the sync_ratio, dev_health, and sync_action fields.
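The fields under test come from the kernel raid status line, which can be
inspected with dmsetup; the output below is approximate and abbreviated:
  dmsetup status vg-lv
  # ... raid raid1 2 Aa 1048576/2097152 recover 0 ...
  # dev_health: 'A' = alive and in-sync, 'a' = alive but not yet in-sync, 'D' = dead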
Jonathan Brassow [Wed, 14 Jun 2017 19:49:42 +0000 (14:49 -0500)]
clean-ups: remove unused var, add 'static' for local fn, adjust test
For the test clean-up, I was providing too many devices to the first
command - possibly allowing it to allocate in the wrong place. I was
also not providing a device for the second command - virtually ensuring
the test was not performing correctly at times.
Jonathan Brassow [Wed, 14 Jun 2017 13:41:05 +0000 (08:41 -0500)]
lvconvert: Disallow removal of primary when up-converting (recovering)
This patch ensures that under normal conditions (i.e. not during repair
operations) users are prevented from removing devices that would
cause data loss.
When a RAID1 is undergoing its initial sync, it is ok to remove all but
one of the images because they have all existed since creation and
contain all the data written since the array was created. OTOH, if the
RAID1 was created as a result of an up-convert from linear, it is very
important not to let the user remove the primary image (the source of
all the data). They should be allowed to remove any devices they want
and as many as they want as long as one original (primary) device is left
during a "recover" (aka up-convert).
This fixes bug 1461187 and includes the necessary regression tests.
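An illustrative sequence (device names are made up); the image holding the
original data can no longer be removed while the "recover" is in flight:
  lvconvert -m 1 vg/lv /dev/sdb   # linear -> raid1 up-convert, starts a "recover"
  lvconvert -m 0 vg/lv /dev/sda   # rejected: /dev/sda holds the primary (original) image
  lvconvert -m 0 vg/lv /dev/sdb   # allowed: removes only the still-syncing new image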
Jonathan Brassow [Wed, 14 Jun 2017 13:39:50 +0000 (08:39 -0500)]
RAID (lvconvert/dmeventd): Cleanly handle primary failure during 'recover' op
Add the checks necessary to distinguish the state of a RAID when the primary
source for syncing fails during the "recover" process.
It has been possible to hit this condition before (like when converting from
2-way RAID1 to 3-way and having the first two devices die during the "recover"
process). However, this condition is now more likely since we treat linear ->
RAID1 conversions as "recover" now - so it is especially important we cleanly
handle this condition.
Jonathan Brassow [Wed, 14 Jun 2017 13:39:07 +0000 (08:39 -0500)]
lvconvert: Don't require a 'force' option during RAID repair.
Previously, we were treating non-RAID to RAID up-converts as a "resync"
operation. (The most common example being 'linear -> RAID1'.) RAID to
RAID up-converts or rebuilds of specific RAID images are properly treated
as a "recover" operation.
Since we were treating some up-convert operations as "resync", it was
possible to have scenarios where data corruption or data loss were
possibilities if the RAID hadn't been able to sync completely before a
loss of the primary source devices. In order to ensure that the user took
the proper precautions in such scenarios, we required a '--force' option
to be present. Unfortunately, the force option was rendered useless
because there was no way to distinguish the failure state of a potentially
destructive repair from a nominal one - making the '--force' option a
requirement for any RAID1 repair!
We now treat non-RAID to RAID up-converts properly as "recover" operations.
This eliminates the scenarios that can potentially cause data loss or
data corruption; and this eliminates the need for the '--force' requirement.
This patch removes the requirement to specify '--force' for RAID repairs.
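So a plain repair now suffices, e.g. (illustrative name):
  lvconvert --repair vg/raid1_lv   # no --force needed for a nominal repair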
Jonathan Brassow [Wed, 14 Jun 2017 13:33:42 +0000 (08:33 -0500)]
lvconvert: linear -> raid1 upconvert should cause "recover" not "resync"
Two of the sync actions performed by the kernel (aka MD runtime) are
"resync" and "recover". The "resync" refers to when an entirely new array
is going through the process of initializing (or resynchronizing after an
unexpected shutdown). The "recover" is the process of initializing a new
member device to the array. So, a brand new array with all new devices
will undergo "resync". An array with replaced or added sub-LVs will undergo
"recover".
These two states are treated very differently when failures happen. If any
device is lost or replaced during a "resync", there are no worries. This is
because any writes created from the inception of the array have occurred to
all the devices and can be safely recovered. Even though non-initialized
portions will still be resync'ed with uninitialized data, it is ok. However,
if a pre-existing device is lost (aka, the original linear device in a
linear -> raid1 convert) during a "recover", data loss can be the result.
Thus, writes are errored by the kernel and recovery is halted. The failed
device must be restored or removed. This is the correct behavior.
Unfortunately, we were treating an up-convert from linear as a "resync"
when we should have been treating it as a "recover". This patch
removes the special case for linear upconvert. It allows each new image
sub-LV to be marked with a rebuild flag and treats the array as 'in-sync'.
This has the correct effect of causing the upconvert to be treated as a
"recover" rather than a "resync". There is no need to flag these two states
differently in LVM metadata, because they are already considered differently
by the kernel RAID metadata. (Any activation/deactivation will properly
resume the "recover" process and not a "resync" process.)
We make this behavior change based on the presence of dm-raid target
version 1.9.0+.
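The resulting sync action can be observed via the reporting fields, e.g.
(illustrative names):
  lvconvert -m 1 vg/linear_lv
  lvs -a -o name,copy_percent,raid_sync_action vg   # now shows "recover" instead of "resync"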
lvconvert: fix detached SubLV deactivation in cluster
On conversion from raid10 to raid0 (takeover), all rmeta
devices and the rimage devices of mirrored stripes are
detached from the raid10 LV. The remaining rimage areas
are shifted down into the slots of the detached
ones, hence requiring renames to show proper _N suffix
sequences (e.g. 0,1,2,3 instead of 0,2,4,6). Only the
top-level raid10 LV has a cluster lock, not the detached
SubLVs, so their deactivation is impossible and e.g. the
rename from *_rimage_6 to *_rimage_3 will fail. Fix by
activating them exclusively before deactivating and removing.
Bryn M. Reeves [Wed, 7 Jun 2017 18:26:09 +0000 (19:26 +0100)]
dmfilemapd: update file block count at daemon start
The file block count stored in the filemap_monitor was lazily
initialised at the time of the first check. This causes problems
in the case that the file has been truncated between this time and
the time the daemon started: the initial block count and current
block count match and the daemon fails to detect a change.
Separate the setting of the block count from the check and make a
call to update the value at the start of _dmfilemapd().
Bryn M. Reeves [Wed, 7 Jun 2017 18:21:10 +0000 (19:21 +0100)]
libdm: allow truncated files in dm_stats_update_regions_from_fd()
It's not an error to attempt to update regions from an fd that has
been truncated (or otherwise no longer has any allocated extents):
in this case, the call should remove all regions corresponding to
the group, and return an empty region table.
Prohibit activation of reshaping RaidLVs on incompatible
lvm2 runtime by storing e.g. 'raid5+RESHAPE' segment type
strings in the lvm2 metadata. An incompatible runtime that does not
support reshaping won't be able to activate those LVs, thus
avoiding potential data corruption.
Any new non-reshaping lvconvert command will reset the
segment type string from 'raid5+RESHAPE' to 'raid5'.
Zdenek Kabelac [Fri, 9 Jun 2017 10:56:55 +0000 (12:56 +0200)]
configure: report yes or no for wiping and systemd
Avoid reporting a 'checking' result as 'maybe' - it should
clearly say 'yes' or 'no'.
Just move the printed message to the place where the
'maybe' answer has already been resolved.
So instead of printing the unclear:
checking whether to enable libblkid detection of signatures when wiping... maybe
checking for BLKID... yes
checking whether to use udev-systemd protocol for jobs in background... maybe
checking for SYSTEMD... yes
show this:
checking for BLKID... yes
checking whether to enable libblkid detection of signatures when wiping... yes
checking for SYSTEMD... yes
checking whether to use udev-systemd protocol for jobs in background... yes
Zdenek Kabelac [Fri, 9 Jun 2017 08:59:37 +0000 (10:59 +0200)]
cache: lvcreate --cachepool checks for cache pool
The code path missed validation of the lvcreate --cachepool argument.
If a non-cache-pool LV was passed in, the code still continued
further work and failed later with an internal error. Validate this
condition in the right place now.
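Illustrative example (names are made up): passing an ordinary LV as the
pool is now rejected up front:
  lvcreate -L 1G -n not_a_pool vg
  lvcreate --type cache -L 10G --cachepool vg/not_a_pool -n fast vg   # now fails with a clear error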
Zdenek Kabelac [Thu, 8 Jun 2017 08:46:22 +0000 (10:46 +0200)]
thin: disallow creation of too big thin pools
When the combination of thin-pool chunk size and thin-pool data size
goes beyond the addressable limit, such volume creation is directly
prohibited.
The maximum usable thin-pool size is calculated using the maximum supported
metadata size (even when the metadata is created smaller) and the given chunk size.
If the requested data size is found to be too big, the command reports an
error and the operation fails.
Previously the thin-pool was created anyway, but much of the thin-pool data LV
was not usable, and that space in the VG was wasted.
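For example (illustrative, deliberately oversized numbers): a data size that
the maximum metadata size cannot address with the requested chunk size is
refused at creation time:
  lvcreate --type thin-pool -L 10P --chunksize 64k -n pool vg   # such an over-the-limit request is now rejected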
lvconvert: reject RAID conversions on inactive LVs
Only support RAID conversions on active LVs.
If we accepted e.g. upconverting linear -> raid1 on inactive
linear LVs, any LV flags passed to the kernel would not be properly
cleared, and would thus erroneously be passed on every activation.
Add the respective check to lv_raid_change_image_count() and
move the existing one in lv_raid_convert() for better messages.
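So the LV has to be active before converting, e.g. (illustrative name):
  lvchange -ay vg/lv                  # conversion is only supported on an active LV
  lvconvert --type raid1 -m 1 vg/lv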
Tony Asleson [Fri, 2 Jun 2017 17:29:27 +0000 (12:29 -0500)]
lvmdbusd: Prevent stall when update thread gets exception
If we experience an exception while fetching the current lvm state,
we fail to call set_result on the queued_requests we were
processing. When this happens, those threads block forever, which causes
the service to stall indefinitely. Only clear the queued_requests after
we have called set_result.
Tony Asleson [Fri, 2 Jun 2017 17:25:01 +0000 (12:25 -0500)]
lvmdbusd: Add background command to flight recorder
We were not adding background tasks to the flight recorder. Add the
metadata to the flight recorder when we start the command and update the
metadata when the command is finished. Locking was added to the metadata to
prevent concurrent updates while returning the string representation, as these
can happen in two different threads.
vgreduce previously allowed --all and --removemissing together even though
it only actually did the remove-missing. The lvm dbus daemon was passing
--all any time there were no entries in pv_object_paths. This change supplies
--all if and only if we are not removing missing devices and pv_object_paths
is empty.
vgreduce has rejected, and continues to reject, the invalid combination of
supplying a device list when you specify --all or --removemissing, so we do not
need to check for that invalid combination explicitly in the lvm dbus service as
it's already covered.
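For reference, the two underlying vgreduce operations are distinct
(illustrative VG name):
  vgreduce --removemissing vg   # drop PVs that have gone missing
  vgreduce --all vg             # drop all empty PVs from the VG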
David Teigland [Thu, 1 Jun 2017 16:10:09 +0000 (11:10 -0500)]
print warning about in-use orphans
Warn about a PV that has the in-use flag set, but appears in
the orphan VG (no VG was found referencing it.)
There are a number of conditions that could lead to this:
. The PV was created with no mdas and is used in a VG with
other PVs (with metadata) that have not yet appeared on
the system. So, no VG metadata is found by lvm which
references the in-use PV with no mdas.
. vgremove could have failed after clearing mdas but
before clearing the in-use flag. In this case, the
in-use flag needs to be manually cleared on the PV.
. The PV may have damaged/unrecognized VG metadata
that lvm could not read.
. The PV may have no mdas, and the PVs with the metadata
may have damaged/unrecognized metadata.
David Teigland [Fri, 26 May 2017 18:26:09 +0000 (13:26 -0500)]
disable repairing in-use flag on orphan PVs
A PV holding VG metadata that lvm can't understand
(e.g. damaged, checksum error, unrecognized flag)
will appear as an in-use orphan, and will be cleared
by this repair code. Disable this repair until the
code can keep track of these problematic PVs, and
distinguish them from actual in-use orphans.
lvconvert: reject changing number of stripes on single core
Reject any stripe adding/removing reshape on raid4/5/6/10 because
of a related MD kernel deadlock on single-core systems until
we get a proper fix in MD.
Zdenek Kabelac [Mon, 29 May 2017 12:20:38 +0000 (14:20 +0200)]
flags: add segtype flag support
Switch METADATA_FORMAT flag usage to be stored via segtype
instead of 'status' flag which appeared to cause major
incompatibility troubles.
For backward compatibility, segtype flags are still accepted also
via the 'status' bits that were used from version 2.02.169, so metadata
saved by that lvm2 version should still work nicely, although metadata
newly saved by this version will no longer work on that older lvm2 version.
Zdenek Kabelac [Mon, 29 May 2017 12:20:05 +0000 (14:20 +0200)]
flags: add read and print of segtype flag
Allow storing LV status bits within the segment type name field.
Switching to this since this field has better compatibility
with older versions of lvm2 - an unknown segtype will not make the
metadata completely invisible to older lvm2 code; only the
particular LV with the unknown segment type becomes unusable.
Marian Csontos [Fri, 26 May 2017 13:34:47 +0000 (15:34 +0200)]
test: Fix dbus testing using testsuite
- Must reread all objects as PVs might be removed.
- Never consider testsuite-provided PVs nested, or tearDown fails to
remove any outstanding VGs on them.