Zdenek Kabelac [Sun, 2 Jun 2013 19:59:57 +0000 (21:59 +0200)]
filters: update composable filter
The last commit made the dump filter only partially composable.
Add the remaining functionality and also support a composable wipe,
which is needed when e.g. vgscan needs to remove the cache.
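A minimal sketch of the composable-wipe idea (hypothetical layout,
not the exact lvm2 structures): the composite filter keeps a
NULL-terminated array of components in its private area and forwards
wipe to each of them, so e.g. vgscan can ask the whole stack to drop
its cache.

  struct device;

  struct dev_filter {
          int (*passes_filter)(struct dev_filter *f, struct device *dev);
          void (*wipe)(struct dev_filter *f); /* drop cached results */
          void *private;                      /* component array */
  };

  static void _composite_wipe(struct dev_filter *f)
  {
          struct dev_filter **sub;

          for (sub = f->private; *sub; sub++)
                  if ((*sub)->wipe)
                          (*sub)->wipe(*sub);
  }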
Petr Rockai [Mon, 27 May 2013 01:12:03 +0000 (03:12 +0200)]
tests: run all the test flavours in a single batch
This means that a test failure in one flavour no longer prevents all the
subsequent flavours from running. We also get an aggregate summary at
the end of the entire batch, instead of summaries interspersed with
progress output. Also, do not fail when flavour overrides are empty.
Petr Rockai [Sun, 26 May 2013 22:49:40 +0000 (00:49 +0200)]
filters: toplevel filter not persistent
Add a generic dump operation to filters and make the composite filter call
through to its components. Previously, when a global filter was set,
the code would treat the toplevel composite filter's private area as if
it belonged to a persistent filter, trying to write nonsense into a
nonsensical file.
Also deal with NULL cmd->filter gracefully.
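Sketched in the same hypothetical style as the wipe above (assuming
dev_filter gained an int (*dump)(struct dev_filter *) member; the
signature is an assumption), the composite dump calls through to the
components that implement it and tolerates a missing toplevel filter:

  /* dump: persist cached results; only some components (such as a
   * persistent filter) actually implement it */
  static int _composite_dump(struct dev_filter *f)
  {
          struct dev_filter **sub;

          if (!f)         /* handle NULL cmd->filter gracefully */
                  return 1;

          for (sub = f->private; *sub; sub++)
                  if ((*sub)->dump && !(*sub)->dump(*sub))
                          return 0;

          return 1;
  }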
Petr Rockai [Sun, 28 Apr 2013 20:38:22 +0000 (22:38 +0200)]
vgimportclone: override global_filter in lvm.conf
The global filter in the system's lvm.conf may conflict with the custom
filter we set up in vgimportclone (they can easily fail to intersect).
Since we explicitly avoid talking to lvmetad in vgimportclone, it is
safe and reasonable to override the global filter there.
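For illustration only, the temporary lvm.conf override that
vgimportclone generates could be of roughly this shape (device path
hypothetical), so the system-wide global_filter can no longer mask the
cloned PV:

  devices {
      filter = [ "a|^/dev/sdb1$|", "r|.*|" ]
      global_filter = [ "a|^/dev/sdb1$|", "r|.*|" ]
  }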
Jonathan Brassow [Fri, 31 May 2013 16:25:52 +0000 (11:25 -0500)]
DM RAID: Add ability to throttle sync operations for RAID LVs.
This patch adds the ability to set the minimum and maximum I/O rate for
sync operations in RAID LVs. The options are available for 'lvcreate' and
'lvchange' and are as follows:
--minrecoveryrate <Rate> [bBsSkKmMgG]
--maxrecoveryrate <Rate> [bBsSkKmMgG]
The rate is specified in size/sec/device. If a suffix is not given,
kiB/sec/device is assumed. Setting the rate to 0 removes the preference.
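For example (hypothetical VG/LV names):

  lvcreate --type raid1 -m 1 -L 10G -n lv --maxrecoveryrate 128K vg
  lvchange --minrecoveryrate 1M vg/lv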
Zdenek Kabelac [Thu, 30 May 2013 09:19:28 +0000 (11:19 +0200)]
libdm: compensate suspend counter for live table
This patch may not be fully correct. It tries to solve
the imbalanced suspend counter.
The problem starts when some LV is created and fails in the resume path
(e.g. an enforced resume of a too-large PV over small loop devices).
This fails in _resume_node() after dm_task_run(), and while the
existing device is left with only an empty inactive table, further
calls report this device as being in the suspended state.
When lvm2 later tries to roll back the created device and deactivate
it, it ends with an internal error when trying to decrement a suspend
counter that was never incremented.
As an 'easy fix' for now, update the suspend counter only for live nodes.
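A rough illustration of the fix, with hypothetical bookkeeping rather
than the actual libdm node code: the suspend counter is only touched
for nodes that have a live table, since a node left with just an
inactive table was never really suspended.

  struct node_info {
          unsigned live_table;    /* device has a live table */
          unsigned suspended;     /* tracked suspend state */
  };

  static void _track_suspend(struct node_info *n, int *suspend_count)
  {
          /* only live-table nodes were actually suspended by the
           * kernel, so only they may bump the counter */
          if (n->live_table && !n->suspended) {
                  n->suspended = 1;
                  (*suspend_count)++;
          }
  }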
Zdenek Kabelac [Wed, 29 May 2013 10:39:33 +0000 (12:39 +0200)]
snapshot: require 3 chunks for creation
There is no point in creating a 2-chunk snapshot, since the snapshot is
invalidated immediately by the first write, as there is no free chunk
for COW blocks (2 chunks are used by the snapshot header and the 1st
metadata chunk).
Enhance the error message about the lowest usable size.
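For example, with the default 4KiB chunk size the floor is 3 x 4KiB =
12KiB of COW space; anything smaller is invalidated by the very first
write.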
Zdenek Kabelac [Wed, 29 May 2013 10:42:09 +0000 (12:42 +0200)]
fid: fix reset of PV fid
Avoid memory corruption (a double free) in the code path where the PV
FID has already been destroyed but the released pointer was left in the
PV structure, from where it could be released a 2nd time during final
context destruction.
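The usual shape of such a fix, sketched with hypothetical names:

  /* clear the stale pointer when the FID is released, so the final
   * context destruction cannot free it a 2nd time */
  if (pv->fid) {
          fid_destroy(pv->fid);   /* hypothetical destructor */
          pv->fid = NULL;
  }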
Zdenek Kabelac [Sun, 26 May 2013 15:07:08 +0000 (17:07 +0200)]
dmeventd: use dm_get_status_snapshot()
Switch to the libdm dm_get_status_snapshot() function for reading
status info.
This fixes a bug where the code used 32-bit integers while the snapshot
target can return 64-bit sizes. However, hitting it also means someone
is using >1TB snapshot COW devices, which is actually a very bad idea
anyway, since performance and memory usage are very bad in that case.
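A hedged sketch of the call, assuming the libdevmapper signature
dm_get_status_snapshot(mem, params, &status) with 64-bit fields in
struct dm_status_snapshot:

  #include <stdint.h>
  #include <libdevmapper.h>

  /* parse the snapshot target status line via libdm instead of
   * scanning it into 32-bit integers */
  static int _get_snapshot_usage(struct dm_pool *mem, const char *params,
                                 uint64_t *used, uint64_t *total)
  {
          struct dm_status_snapshot *s;

          if (!dm_get_status_snapshot(mem, params, &s))
                  return 0;

          *used = s->used_sectors;        /* 64-bit: >1TB COW is safe */
          *total = s->total_sectors;

          return 1;
  }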
Zdenek Kabelac [Sun, 26 May 2013 15:00:14 +0000 (17:00 +0200)]
libdm: free mem pool on err path
Since get_status is also used in dmeventd, which may use a single pool
per device, repeatedly returning an error without freeing the pool
would cause slow but steady memory growth.
To stay safe, release any allocated memory in the error path.
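The pattern, sketched (dm_pool_zalloc()/dm_pool_free() are the real
libdm pool calls; the parser is hypothetical):

  #include <libdevmapper.h>

  static int _parse_params(const char *params,
                           struct dm_status_snapshot *s); /* hypothetical */

  static int _status_alloc(struct dm_pool *mem, const char *params,
                           struct dm_status_snapshot **result)
  {
          struct dm_status_snapshot *s;

          if (!(s = dm_pool_zalloc(mem, sizeof(*s))))
                  return 0;

          if (!_parse_params(params, s)) {
                  dm_pool_free(mem, s);   /* give memory back on error */
                  return 0;
          }

          *result = s;
          return 1;
  }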
Jonathan Brassow [Thu, 16 May 2013 15:36:56 +0000 (10:36 -0500)]
Clean-up: Replace 'lv_is_active' with more correct/specific variants
There are places where 'lv_is_active' was being used where it was
more correct to use 'lv_is_active_locally'. For example, when checking
for the existence of a kernel instance before asking for its status.
Most of the time these would work correctly. (RAID is only allowed on
non-clustered VGs at the moment, which means that 'lv_is_active' and
'lv_is_active_locally' would give the same result.) However, it is
more correct to use the proper variant and it helps with future
scenarios where targets might be allowed exclusively (or clustered) in
a cluster VG.
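For instance (a sketch; the helper is hypothetical):

  /* only ask the kernel for status if an instance exists locally */
  if (lv_is_active_locally(lv))
          status = _get_kernel_status(lv);  /* hypothetical helper */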
Peter Rajnoha [Thu, 16 May 2013 06:12:37 +0000 (08:12 +0200)]
snapshot: fix check for snapshot-merge target presence
When _snap_target_present is called a 2nd or later time for a segment
with the MERGING flag set, we must return the presence of both the
snapshot and the snapshot-merge targets, not just the snapshot one.
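In other words (illustrative, not the exact code):

  /* while merging, the cached answer must cover both targets */
  if (seg->status & MERGING)
          return _snap_present && _snap_merge_present;

  return _snap_present;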
Accept --yes on all commands, even ones that don't today have prompts,
so that test scripts that don't care about interactive prompts no
longer need to deal with them.
But continue to mention --yes only in the command prototypes that
actually use it.
Mike Snitzer [Mon, 13 May 2013 19:56:47 +0000 (15:56 -0400)]
Fix alignment of PV data area if detected alignment less than 1 MB
This fixes a long-standing regression since LVM2 2.02.74 (commit 4efb1d9c,
"Update heuristic used for default and detected data alignment.")
The default PE alignment could be used (via MAX()) even if it was
determined that the device's MD stripe width, or minimal_io_size or
optimal_io_size were not factors of the default PE alignment (either 64K
or the newer default of 1MB, etc). This bug would manifest if the
default PE alignment was larger than the overriding hint that the
device provided (e.g. default of 1MB vs optimal_io_size of 768K).
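The gist of the bug and the fix, sketched with hypothetical names:

  static unsigned long _data_alignment(unsigned long default_pe_align,
                                       unsigned long device_hint)
  {
          /* buggy behaviour: MAX() let the 1MiB default win even when
           * the hint (e.g. a 768KiB optimal_io_size) was not its factor */
          if (!device_hint || !(default_pe_align % device_hint))
                  return default_pe_align > device_hint
                          ? default_pe_align : device_hint;

          /* fixed (sketch): a hint that is not a factor must override */
          return device_hint;
  }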
Zdenek Kabelac [Mon, 13 May 2013 10:59:38 +0000 (12:59 +0200)]
mm: fix leak in fail path
If dm_realloc failed, the already allocated _maps_buffer memory would
have been lost (overwritten with NULL).
Fix this by using a temporary line buffer.
Also add a minor cleanup: set the end of the buffer to '\0' only when
we really know the file size fits the preallocated buffer.
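The classic shape of the fix (dm_realloc() is the real libdm allocator;
the helper around it is illustrative):

  #include <stddef.h>
  #include <libdevmapper.h>

  static int _grow_buffer(char **bufp, size_t *lenp, size_t extra)
  {
          char *new_buf;

          /* keep realloc's result in a temporary so the original
           * buffer survives (and can still be freed) on failure */
          if (!(new_buf = dm_realloc(*bufp, *lenp + extra)))
                  return 0;

          *bufp = new_buf;
          *lenp += extra;

          return 1;
  }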
Peter Rajnoha [Mon, 13 May 2013 09:46:24 +0000 (11:46 +0200)]
toolcontext: check dm version lazily for udev_fallback setting
Setting cmd->default_settings.udev_fallback also requires a DM driver
version check. However, this caused needless ioctl access to
mapper/control even when it was not actually needed: if we're not using
activation code, we don't need to know udev_fallback, as there's no
node and symlink processing.
For example, this premature mapper/control access caused problems
when using lvm2app even when no activation happens - there are
situations in which we don't need to use mapper/control, but still
need some of the lvm2app functionality. This is also the case for
lvm2-activation systemd generator which just needs to look at the
lvm2 configuration, but it shouldn't touch mapper/control.
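A generic lazy-initialization sketch of the idea (hypothetical names):

  static int _udev_fallback = -1;       /* -1: not decided yet */

  static int _check_dm_driver_version(void);   /* hypothetical */

  static int _get_udev_fallback(void)
  {
          /* open mapper/control only on first real use, so
           * config-only users never touch it */
          if (_udev_fallback < 0)
                  _udev_fallback = _check_dm_driver_version();

          return _udev_fallback;
  }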
Zdenek Kabelac [Thu, 2 May 2013 16:06:50 +0000 (18:06 +0200)]
report: improve reporting of active state
To report stacked or joined devices properly in a cluster, we need to
report their activation state according to the lock which activated
the device tree.
This is getting a bit complex, so the current code tries a simple approach:
For a snapshot - return the status of its origin.
For a thin pool - return the status of the first known active thin volume.
For the rest - try to use the dependency list of LVs and skip known
exceptions. This should be able to recursively deduce the top-level
device for a given LV.
Peter Rajnoha [Fri, 3 May 2013 11:20:07 +0000 (13:20 +0200)]
udev: fire pvscan --cache properly on CHANGE event for MD devices
Commit 756bcabbfe297688ba240a880bc2b55265ad33f0 restricted the
situations in which LVM autoactivation fires - only on an ADD event
for devices other than DM. However, this caused a problem for MD
devices...
MD devices are activated in a very similar way to DM devices:
the MD dev is created on the first appearance of an MD array member
(ADD event) and stays *inactive* until the array is complete.
Only then does the MD dev switch to the active state, which is
reported to userspace by a CHANGE event.
Unfortunately, we can't differentiate between a CHANGE event coming
from the udev trigger/WATCH rule and a CHANGE event coming from the
transition to the active state - MD would need to add logic similar to
what we already use to detect this in the DM world. For now, we just
have to enable pvscan --cache on *all* CHANGE events for MD so that
autoactivation of LVM volumes on top of MD works.
A downside of this is that a spurious CHANGE event for MD dev
can cause the LVM volumes on top of it to be automatically activated.
However, one should not open/change the device underneath until
the device above in the stack is removed! So this situation should
only happen if one opens the MD dev for read-write by mistake
(and hence fires the CHANGE event because of the WATCH udev rule),
or if udev trigger is called manually for the MD dev.
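The udev rule doing this could look roughly like the following (a
sketch, not the exact rule shipped with lvm2):

  # fire pvscan --cache on any CHANGE event for an MD device that
  # is an LVM2 member
  KERNEL=="md[0-9]*", ACTION=="change", ENV{ID_FS_TYPE}=="LVM2_member", \
      RUN+="/sbin/lvm pvscan --cache --activate ay --major $major --minor $minor"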
(No WHATS_NEW here as this fixes the commit mentioned above, which has
not been released yet.)
Support for exclusive activation of snapshots revealed some problems.
When a snapshot is created, the COW LV is activated first (for
clearing) and then transformed into the snapshot's COW LV, but this
left the lock for that LV active in the cluster, and the lock could not
be removed from dlm unless the snapshot was removed within the same dlm
session.
If the user tried to remove the snapshot after rebooting the node, the
lock was missing and the COW LV could not be detached.
The patch modifies the approach in this way:
Always deactivate the COW LV for a clustered VG after clearing (so it
is activated again via the implicit snapshot activation rule when the
snapshot is activated).
When the snapshot is removed, activate the COW LV as an independent LV,
so the lock will exist for it, but only when the snapshot is active.
Also add a test case for testing snapshot removal after a cluster reboot.
The patch fixes a hidden problem with lvm metadata caching.
When the pretest was made, only the committed data was cached back,
since the lv_info_by_lvid() call triggers an mda read operation.
However, the lv_suspend_if_active() call also reads precommitted metadata.
The problem is visible in the vg_write() -> suspend -> vg_commit() ->
resume sequence of calls, which may end with leaving an outdated mda in
the lvm cache: vg_write() drops cached metadata and vg_commit() only
transforms precommitted into committed metadata, but in the case of
pretesting we have no precommitted mda available, so the cache will
continue to use the old metadata. This happens when the LV being
suspended is inactive.