David Teigland [Tue, 13 Sep 2016 21:50:47 +0000 (16:50 -0500)]
lvmlockd: fix metadata validation when rescanning VG
When rescanning a VG from disk, the metadata read from
each PV was compared as a sanity check. The comparison
is done by exporting the vg metadata from each dev to
a config tree, and then comparing the config trees.
The function to create the config tree inserts
extraneous information along with the actual VG metadata.
This extra info includes creation_time. The config
trees for two devs can easily be created one second apart,
in which case the different creation_times would
cause the metadata comparison to fail. The fix is to
exclude the extraneous info from the metadata comparison.
Use issue_discard to lower memory demands on discardable test devices
Use large devices directly through prepare_pvs
I'm still observing more than 0.5G of data usage, though.
In particular:
'lvcreate' followed by 'lvconvert' (which doesn't yet support the
--nosync option) is quite demanding, and resume returns quite 'late',
when a lot of data has already been written to the PV.
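The knob in question is presumably devices/issue_discards in the
test's lvm.conf (the subject's 'issue_discard' refers to it); a
minimal fragment:

    # lvm.conf: issue discards to a PV's freed space when an LV is
    # removed or reduced (off by default)
    devices {
        issue_discards = 1
    }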
Reinstate reporting of metadata percent usage for cache volumes.
Also show the same percentage for the hidden cache-pool LV.
This regression was caused by the single-ioctl optimization in
2.02.155.
cache: scrubbing for cache origin LV - Bug 1169495
Allow RAID scrubbing on cache origin sub-LV
This patch adds the ability to perform RAID scrubbing on the cache
origin sub-LV (https://bugzilla.redhat.com/1169495). Cache origin
operations are restricted to non-clustered RAID LVs until there can
be further testing in a cluster (even for exclusive activation).
The user can either specify the _corig LV directly, or specify
the cache LV, in which case the --syncaction operation is
passed ONLY to the _corig LV.
If the user wants to manipulate the cache-pool devices, they
need to specify that object's name.
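For illustration (VG/LV names hypothetical), scrubbing can be
requested either way:

    # Scrub via the cache LV; --syncaction is passed only to the
    # _corig sub-LV
    lvchange --syncaction check vg/cachedlv
    # Or name the hidden origin sub-LV directly
    lvchange --syncaction repair vg/cachedlv_corig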
Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
Peter Rajnoha [Tue, 6 Sep 2016 11:12:02 +0000 (13:12 +0200)]
dev-type: check for DEVLINKS udev db variable existence if udev_device_get_is_initialized fn is not present
Older udev versions (udev < v165) don't have the official
udev_device_get_is_initialized function available to query for
device initialization state in the udev database. Also, devices don't
have the USEC_INITIALIZED udev db variable set - this is tied to the
udev_device_get_is_initialized functionality.
In this case, check for the "DEVLINKS" variable instead - all block
devices have at least one symlink set for the node (the
"/dev/block/<major:minor>" link). This symlink is set by the basic
udev rules provided directly by udev.
We'll use this as an alternative for the check that initial udev
processing for a device has already finished.
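For illustration, the same fallback check can be reproduced by hand
with udevadm (device name hypothetical):

    # An initialized udev db record for a block device carries
    # DEVLINKS, including at least the /dev/block/<major:minor> link
    udevadm info --query=property --name=/dev/sda | grep '^DEVLINKS'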
Peter Rajnoha [Mon, 5 Sep 2016 09:33:08 +0000 (11:33 +0200)]
lvmetad: check udev for mpath component several times if udev record not initialized yet
It's possible (mainly during boot) that udev has not finished
processing the device, and hence the udev database record for that
device is still marked as uninitialized when we try to look at it
as part of the multipath component check in the pvscan --cache code.
So check several times with a short delay to wait for the udev db
record to be initialized before giving up completely.
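A rough shell sketch of the retry logic (the delay and retry count
here are illustrative, not the values used in the code):

    # Poll until udev marks the db record initialized, then give up
    for i in 1 2 3 4 5; do
        udevadm info --query=property --name=/dev/sda 2>/dev/null \
            | grep -q '^USEC_INITIALIZED' && break
        sleep 0.1
    done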
David Teigland [Wed, 31 Aug 2016 18:05:53 +0000 (13:05 -0500)]
lvmetad: use udev to ignore multipath components during scan
When scanning devs to populate lvmetad during system startup,
filter-mpath with native sysfs multipath component detection
may not detect that a dev is a multipath component, because
the multipath devices may not be set up yet.
Because of this, pvscan will scan multipath components during
startup, will see them as duplicate PVs, and will disable
lvmetad. This will leave lvmetad disabled on systems using
multipath, unless something or someone runs pvscan --cache
to rescan.
To avoid this problem, the code that is scanning devices to
populate lvmetad will now check the udev db to see if a
dev is a multipath component that should be skipped.
(This may not be perfect due to inherent udev races, but will
cover most cases and will be at least as good as it's ever
been.)
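As an illustration, a multipath component typically shows up in the
udev db with properties like ID_FS_TYPE=mpath_member (the exact
variables depend on the blkid/multipath rules installed):

    # Check the udev db record of a suspected multipath component
    udevadm info --query=property --name=/dev/sdb | grep 'mpath'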
Peter Rajnoha [Tue, 30 Aug 2016 13:33:50 +0000 (15:33 +0200)]
lvmdump: use lsblk -s and lsblk -O in lvmdump only if these options are supported
The lsblk is just a nice helper here - it's not crucial for lvmdump,
so make a best effort and use as much as we can from the version of
lsblk currently installed on the system. The lsblk -s option was added
some time after lsblk's introduction, and lsblk -O support even later -
so if these are not available, use only plain lsblk output without any
extras.
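A sketch of the resulting best-effort probing (assuming option
support is detected by exit status):

    # Use the richest lsblk invocation the installed version supports
    if lsblk -O >/dev/null 2>&1; then
        lsblk -O
    elif lsblk -s >/dev/null 2>&1; then
        lsblk -s
    else
        lsblk
    fi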
Tony Asleson [Mon, 29 Aug 2016 19:26:16 +0000 (14:26 -0500)]
lvmdbusd: Handle different daemon options
- Prevent --lvmshell with --nojson: not a valid combination
- If the user disables JSON, the lvm shell cannot be used
- Return a boolean from Manager.UseLvmShell
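For example (daemon flags as named above):

    # Rejected: the lvm shell requires JSON output
    lvmdbusd --lvmshell --nojson
    # Fine: lvm shell with JSON left enabled
    lvmdbusd --lvmshell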
Tony Asleson [Wed, 24 Aug 2016 23:29:35 +0000 (18:29 -0500)]
lvmdbusd: Use udev until ExternalEvent occurs
The normal mode of operation will be to monitor for udev events until an
ExternalEvent occurs. In that case the service will disable monitoring
for udev events and use ExternalEvent exclusively.
Note: if the user specifies --udev, the service will always monitor
udev, regardless of whether ExternalEvent is being called too.
Tony Asleson [Fri, 12 Aug 2016 20:23:05 +0000 (15:23 -0500)]
lvmdbusd: Add support for using lvm shell
With the addition of JSON and the ability to get output which is known
not to contain any extraneous text, we can now leverage the lvm shell
so that we don't fork and exec the lvm command line repeatedly.
Tony Asleson [Fri, 12 Aug 2016 20:20:49 +0000 (15:20 -0500)]
lvmdbusd: Add date & ts if running in a terminal
When we are running in a terminal it's useful to have a date &
timestamp on log output, like you get when output goes to the journal.
Check if we are running on a tty and, if we are, add it in.
Zdenek Kabelac [Wed, 24 Aug 2016 08:05:09 +0000 (10:05 +0200)]
cache: do not monitor cache-pool
Avoid monitoring of an activated cache-pool - where the only purpose ATM
is to clear the metadata volume, which is actually activated in place
of the cache-pool name (using a public LV name).
Since the VG lock is held across the whole clear operation, dmeventd
cannot be used anyway - however, in case of an application crash we may
leave an unmonitored device.
In the future we may provide a better mechanism, as the current name
replacement creates 'uncommon' table setups in case the metadata
LV is a more complex type like raid (this needs some further thinking
about error-path results).
Another point to think about is the fact that we should not clear a
device while holding the lock (i.e. dmeventd mirror repair cannot work
in cases like this).
Peter Rajnoha [Thu, 25 Aug 2016 12:53:32 +0000 (14:53 +0200)]
conf: fix typo in report/columns_as_rows config option name recognition
Commit e947c362dd0ae1da2d76925fe6b38244c5f46d25 introduced the
config_settings.h file as a central place to store all definitions of
config options. By mistake, it used report/colums_as_rows instead
of report/columns_as_rows (missing "n" in "columns").
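The affected setting, under its corrected name:

    # lvm.conf: previously only the misspelled "colums_as_rows"
    # was recognized here
    report {
        columns_as_rows = 1
    }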
a579ba2ac27d fixed a regression causing a segfault if no external
origin existed but broke the logic leading to erroneous error
messages and creations of split off exported VGs in case the
external origin and the pool LVs were allocated on different PVs.
Creating a RaidLV in VGs with very small extent sizes caused a
late failure in the kernel with a not very informative error
message. Catch the attempt early and display the failure message
'Unable to create RAID LV: requires minimum VG extent size 4.00 KiB'.
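For illustration (VG/PV names hypothetical):

    # 1 KiB extents are below the 4 KiB minimum required for RAID LVs
    vgcreate -s 1k vg1 /dev/sda /dev/sdb
    lvcreate --type raid1 -m 1 -L 100m -n rlv vg1
    # -> Unable to create RAID LV: requires minimum VG extent size
    #    4.00 KiB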
pvmove: prohibit non-resilient collocation of RAID SubLVs
'pvmove -n name pv1 pv2' allows collocating multiple RAID SubLVs
on pv2 (e.g. it results in collocated raidlv_rimage_0 and
raidlv_rimage_1), thus causing loss of resilience and/or performance
of the RaidLV.
Fix this pvmove flaw, which can lead to data loss in case of PV
failure, by preventing any SubLVs of the RaidLV from being collocated
on the same PV.
Still allow any DataLVs of a RaidLV to be collocated with their sibling
MetaLVs, and vice versa (e.g. raidlv_rmeta_0 on pv1 may still be moved
to pv2 already holding raidlv_rimage_0); see the example below.
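For example, with a two-leg raid1 LV whose images sit on pv1 and pv2
(names hypothetical):

    # Rejected now: moving raidlv's extents from pv1 to pv2 would
    # collocate raidlv_rimage_1 with raidlv_rimage_0 on pv2
    pvmove -n raidlv pv1 pv2
    # Still allowed: raidlv_rmeta_0 may land on pv2, which already
    # holds its sibling raidlv_rimage_0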
Because access to the top-level RaidLV name is needed,
promote local _top_level_lv_name() from raid_manip.c
to global top_level_lv_name().