Zdenek Kabelac [Mon, 5 Dec 2016 14:23:18 +0000 (15:23 +0100)]
activation: optimize away lv_has_target_type
There is actually no need for an extra lv_has_target_type() call to detect
that a snapshot merge is in progress - decode this directly during status
capturing and save a few extra ioctl calls.
Zdenek Kabelac [Mon, 5 Dec 2016 13:31:25 +0000 (14:31 +0100)]
activation: lv_info_with_seg_status API change
Drop the LV from the passed API args - it's always the segment being checked.
Also, use_layer is now fully under the control of lv_info_with_seg_status(),
which decides which device needs to be checked to get 'the most info'.
TODO: a future version should be able to expose status from
both the LV and the layered LV.
Zdenek Kabelac [Mon, 5 Dec 2016 09:20:42 +0000 (10:20 +0100)]
activation: lv_info_with_seg_status unify status selection
Start moving the selection of which status is taken for an LV into a single place.
The logic for showing info & status had been spread over multiple
places and was making overly complex decisions that worked against each other.
Unify the selection of the status for the scanned origin & cow devices.
TODO: in the future we want to grab status for both the LV and the layered LV
and have both statuses present for display - e.g. when an 'old snapshot'
of a thinLV is taken and there is an ongoing merge, at some moment
we are unable to show all the needed info.
Zdenek Kabelac [Thu, 1 Dec 2016 09:37:03 +0000 (10:37 +0100)]
activation: improve error handling for status reading
When lvm2 wants to see a status, it needs to validate that the
segment used for status reading matches what lvm2 expects in the
metadata.
Also ensure a status failure will not cause info reading to return '0'
when the actual info was collected properly.
A failure in 'status' reading is considered to be
a 'log_warn()' event only.
Zdenek Kabelac [Fri, 2 Dec 2016 12:57:52 +0000 (13:57 +0100)]
activation: status check switch to warn
When we can't parse the status, switch to a warning, as this is not
considered an erroneous case. lvs is not supposed to return an
error status code when a device is not what it was expected to
be - but it should WARN the user that there is something unexpected.
Zdenek Kabelac [Thu, 1 Dec 2016 16:58:06 +0000 (17:58 +0100)]
snapshot: reporting uses statusinfo
Convert 'lvs -o lv_merge_failed,lv_snapshot_invalid' to use the
lv_info_and_status function.
This makes it consistent with the attr value showing the same info
(they used to differ, since they were derived from different
data sets with different logic as well).
Also saves a couple of extra ioctl calls that were needed to obtain this info.
Peter Rajnoha [Thu, 1 Dec 2016 13:39:21 +0000 (14:39 +0100)]
report: order fields by type for field definitions in columns.h
When displaying '<reporting_command> -o help', we'd like to have the fields
grouped nicely, not with groups starting to interleave as they did before.
The code that displays the help output for fields takes the order as
written in the columns.h file - this caused groups of fields to appear
interleaved in the help output.
Tony Asleson [Wed, 30 Nov 2016 20:58:29 +0000 (14:58 -0600)]
lvmdbustest.py: Rename env test variable
Use the LVM_DBUSD_TEST_MODE env variable to customize what we test.
The default is the same as before: we try to test all combinations of all
modes. Renamed to make it consistent with the other env variables
that are used in the unit test.
Tony Asleson [Wed, 30 Nov 2016 19:39:48 +0000 (13:39 -0600)]
lvmdbusd: Emit signal on Job completion
Added a properties-changed signal on the job dbus object so that clients
can wait for a signal that the job is complete instead of polling or
blocking on the wait method.
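As an illustration, a client could subscribe to the standard
org.freedesktop.DBus.Properties.PropertiesChanged signal on the job object;
this sketch uses dbus-python with a GLib main loop, and the job object path
and 'Complete' property name are assumptions about the lvmdbusd API:

    import dbus
    from dbus.mainloop.glib import DBusGMainLoop
    from gi.repository import GLib

    DBusGMainLoop(set_as_default=True)
    bus = dbus.SystemBus()
    loop = GLib.MainLoop()

    def on_props_changed(interface, changed, invalidated):
        # Quit the main loop once the job object reports completion.
        if changed.get('Complete'):
            loop.quit()

    # '/com/redhat/lvmdbus1/Job/0' is a hypothetical job object path.
    bus.add_signal_receiver(on_props_changed,
                            signal_name='PropertiesChanged',
                            dbus_interface='org.freedesktop.DBus.Properties',
                            path='/com/redhat/lvmdbus1/Job/0')
    loop.run()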
Tony Asleson [Wed, 30 Nov 2016 00:01:56 +0000 (18:01 -0600)]
lvmdbusd: Add --blackboxsize command line argument
Allows the user to override the number of commands that get dumped
to the log when we encounter an lvm error. Also useful during
development when you don't want to see the blackbox output.
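As a rough sketch of how such an option is typically wired up with argparse
(the default value and help text here are placeholders, not necessarily what
lvmdbusd actually ships with):

    import argparse

    parser = argparse.ArgumentParser(description='lvm dbus daemon')
    # The default shown here is a placeholder, not lvmdbusd's
    # actual default.
    parser.add_argument('--blackboxsize', type=int, default=16,
                        help='number of commands to dump to the log '
                             'when an lvm error is hit')
    args = parser.parse_args()
    print(args.blackboxsize)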
lvchange: allow a transiently failed RaidLV to be refreshed
In case any SubLV of a RaidLV transiently fails, it takes
two "lvchange --refresh RaidLV" runs to get it fully
operational again. The reason is that lvm reloads all
targets for the RaidLV tree but doesn't resume the SubLVs
until after the whole tree has been reloaded in the first
refresh run. Thus the live mapping tables of the SubLVs
still point to an "error" mapping, and the dm-raid target
can't retrieve any superblock from the MetaLV(s) while processing
the constructor during this preload, so it does not discover the
again-accessible SubLVs. In the second run, the SubLV targets
map proper (meta)data, hence the constructor now discovers those
fine.
Solve this by resuming the SubLVs of the RaidLV before
preloading the respective top-level RaidLV target, as sketched below.
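A minimal sketch of the new ordering, in Python purely for illustration;
resume() and preload() are stand-ins for the real device-mapper operations,
not lvm2 functions:

    def resume(dev):
        # Stand-in for resuming a dm device so its live table
        # maps real (meta)data again.
        print('resume', dev)

    def preload(dev):
        # Stand-in for preloading the dm table of a device.
        print('preload', dev)

    def refresh_raid_lv(raid_lv, sub_lvs):
        # Resume the SubLVs first, so the dm-raid constructor can
        # read the superblocks from the MetaLVs once the top-level
        # RaidLV table is preloaded and resumed.
        for sub in sub_lvs:
            resume(sub)
        preload(raid_lv)
        resume(raid_lv)

    refresh_raid_lv('raid_lv', ['rimage_0', 'rmeta_0', 'rimage_1', 'rmeta_1'])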
Tony Asleson [Wed, 23 Nov 2016 20:49:23 +0000 (14:49 -0600)]
lvmdbusd: Only read what's buffered
When reading data from stdout & stderr we were reading until we got
None back, which really isn't needed, as the read will return
everything that is available.
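As a minimal sketch of the idea (assuming a non-blocking file descriptor;
this is not the daemon's actual code):

    import os

    def read_available(fd, bufsize=65536):
        # A single read on a non-blocking fd returns all currently
        # buffered data; no need to loop until nothing comes back.
        try:
            return os.read(fd, bufsize)
        except BlockingIOError:
            return b''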
Zdenek Kabelac [Wed, 23 Nov 2016 16:11:26 +0000 (17:11 +0100)]
mirror: preserve MIRRORED status for temporary image
When lvconvert adds a new leg, it does so via a 'temporary' image
layer - however, this temporary 'internal' mirror is also a MIRRORED LV.
But the status bit was not properly transferred through the layer.
Tony Asleson [Wed, 16 Nov 2016 23:29:23 +0000 (17:29 -0600)]
lvmdbusd: Place Manager.UseLvmShell request on queue
We need to acquire a lock, which can block us and in turn causes
the dbus request handling to block as well. Place the request on
the work queue instead.
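The general pattern, as a minimal sketch; the queue, handler, and process()
function here are illustrative, not lvmdbusd's actual internals:

    import queue
    import threading

    work_q = queue.Queue()

    def on_use_lvm_shell(value):
        # dbus handler: queue the request instead of taking a
        # potentially blocking lock on the dispatch thread.
        work_q.put(value)

    def process(value):
        # Stand-in for the actual mode switch.
        print('switching lvm shell mode to', value)

    def worker():
        while True:
            # Any blocking lock acquisition happens here, in the
            # worker thread, without stalling dbus dispatch.
            process(work_q.get())

    threading.Thread(target=worker, daemon=True).start()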
Tony Asleson [Wed, 16 Nov 2016 21:57:04 +0000 (15:57 -0600)]
lvmdbusd: Extra report FD read on no data
Our expectation was that when using the lvm shell, once the lvm prompt
was read from stdout, all other output had been written and flushed.
However, this doesn't appear to be the case. Add extra read passes to
retrieve delayed report data.
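Illustratively, the extra passes might look like the following sketch; the
pass count and delay are arbitrary placeholders:

    import os
    import time

    def drain_report_fd(fd, passes=3, delay=0.05):
        # Report data can land after the shell prompt is seen, so
        # make a few extra non-blocking read passes before giving up.
        data = b''
        for _ in range(passes):
            try:
                chunk = os.read(fd, 65536)
            except BlockingIOError:
                chunk = b''
            if chunk:
                data += chunk
            else:
                time.sleep(delay)
        return data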
Tony Asleson [Wed, 16 Nov 2016 17:39:57 +0000 (11:39 -0600)]
lvmdbustest.py: Reduce test client introspection calls
The dbus python library's default mode of operation is to leverage
introspection. However, this introspection data isn't accessible
to users of the library, and they have to specifically retrieve
the introspection data themselves as well. This resulted in many
introspection calls being made. This change eliminates introspection
calls when we are testing multiple concurrent test clients. If it's a
single client, we leverage a reduced amount of introspection data to
verify that the introspection data is correct. Typically clients don't
leverage introspection data nearly as much as this test client does.
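For reference, dbus-python can skip introspection when building a proxy; in
this sketch the bus name, object path, and interface follow the lvmdbusd
D-Bus API, but treat them as assumptions:

    import dbus

    bus = dbus.SystemBus()
    # introspect=False avoids the extra Introspect() round trip
    # that dbus-python performs by default for every proxy object.
    obj = bus.get_object('com.redhat.lvmdbus1',
                         '/com/redhat/lvmdbus1/Manager',
                         introspect=False)
    manager = dbus.Interface(obj, 'com.redhat.lvmdbus1.Manager')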
Tony Asleson [Fri, 11 Nov 2016 18:34:38 +0000 (12:34 -0600)]
lvmdbustest.py: Support concurrent test runs
When the env variable LVM_DBUSD_PV_DEVICE_LIST is present and filled in
with at least 4 physical devices, the test can run concurrently with other
instances, as long as each instance specifies different devices in its
env variable.
When the env variable is not present, the test runs as it did before.
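Purely as an illustration, a test could pick the devices up roughly like
this; the comma delimiter shown is an assumption, not necessarily the format
lvmdbustest.py actually expects:

    import os

    # Hypothetical parsing; e.g.
    #   LVM_DBUSD_PV_DEVICE_LIST="/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde"
    pv_list = os.environ.get('LVM_DBUSD_PV_DEVICE_LIST')
    if pv_list:
        devices = [d for d in pv_list.split(',') if d]
        assert len(devices) >= 4, 'need at least 4 physical devices'
    else:
        devices = None  # fall back to the pre-existing behavior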
Tony Asleson [Thu, 10 Nov 2016 18:19:48 +0000 (12:19 -0600)]
lvmdbusd: Use one thread to fetch state updates
In preparation for having more than one thread issuing commands to lvm
at the same time, we need to serialize updates to the dbus state and
the retrieval of the global lvm state. To achieve this we have one thread
handling both, with a thread-safe queue taking and coalescing requests.
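A minimal sketch of the coalescing-queue pattern; refresh_state(), the queue,
and the thread layout are stand-ins, not the daemon's actual code:

    import queue
    import threading

    _requests = queue.Queue()

    def refresh_state():
        # Stand-in for fetching lvm state and updating the dbus
        # object representation.
        print('state refreshed')

    def _updater():
        while True:
            _requests.get()
            # Coalesce: any requests that piled up while we were
            # busy are satisfied by this single refresh.
            while True:
                try:
                    _requests.get_nowait()
                except queue.Empty:
                    break
            refresh_state()

    threading.Thread(target=_updater, daemon=True).start()

    def request_update():
        # Callable from any thread; the updater serializes the work.
        _requests.put(None)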
Tony Asleson [Fri, 4 Nov 2016 18:16:24 +0000 (13:16 -0500)]
lvmdbustest.py: Remove raid4 use
Looks like this isn't supported across versions. We need to add functionality
to the service to return the supported segment types, so that we only use
the supported ones.
Bryn M. Reeves [Thu, 17 Nov 2016 11:39:43 +0000 (11:39 +0000)]
libdm: separate dm_stats_populate() error cases
There are two possible errors in _dm_stats_populate_region():
* No region struct in dms->regions[region_id]
* Failure to parse data from @stats_print
These have very different causes: the first occurs when a client
program is populating one region at a time (region_id is a single
region identifier) and has not previously called dm_stats_list()
to dimension the region tables; this is an API usage error.
The second occurs when we either read unparseable data from the
kernel (a kernel bug) or various resource allocations fail.
Separate these two cases out and log a separate message for each
(allocation failures in the path already have their own distinct
message), since the "failed to parse.." message in the un-listed
handle case is confusing and misleading.
Peter Rajnoha [Mon, 14 Nov 2016 13:46:44 +0000 (14:46 +0100)]
dbus: only log msg as debug if lvm2-lvmdbusd unit missing for D-Bus notification
Do not emit a warning message, but only log a debug message, if the
lvm2-lvmdbusd.service unit is missing and at the same time
we have global/notify_dbus=1 (which is used by default if the
sources were configured with "--enable-notify-dbus"). We don't want a
hard dependency between LVM2 and lvmdbusd, so it's enough to log
only a debug message in this case.
Zdenek Kabelac [Tue, 18 Oct 2016 10:58:22 +0000 (12:58 +0200)]
conf: support zero for missing_stripe_filler
Make it easier to replace missing segments with a zero-returning
target - otherwise the user would have to create some extra target
to provide zeros, as /dev/zero can't be used (it's not a block device).
Also break the code loop once the segment is found, and make it an
INTERNAL_ERROR where it's missing.
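With this change, a minimal lvm.conf snippet could select the zero filler;
placement in the 'activation' section follows the stock example config, so
verify against your version:

    activation {
        # With "zero", missing stripes read back zeros without
        # needing a user-created zero-providing target.
        missing_stripe_filler = "zero"
    }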
Zdenek Kabelac [Tue, 8 Nov 2016 10:54:28 +0000 (11:54 +0100)]
raid: faster rmeta clearing
Instead of clearing multiple rmeta devices with a sequential activation
process and waiting for udev for every _rmeta device separately,
activate all _rmeta devices first, then clear them, and deactivate
afterwards.
Also update some tracing messages.
When anything goes wrong during the clearing process, always try to
deactivate as many _rmeta devices as possible before failing.
Tony Asleson [Thu, 3 Nov 2016 23:27:22 +0000 (18:27 -0500)]
lvmdbusd: Remove the periodic timer task
This code is no longer needed because the background task has been
removed. We will add it back if we change the design and end up utilizing
multiple worker threads.
Tony Asleson [Thu, 3 Nov 2016 23:25:12 +0000 (18:25 -0500)]
lvmdbusd: Take out background thread
There is no reason to create another background task when the task that
created it is going to block waiting for it to finish. Instead we will
just execute the logic in the worker thread that is servicing the worker
queue.