Always store discard setting in LV metadata. (Note that lvcreate_params
doesn't yet use --discard to set the initial value.)
Remove undocumented env var LVM_THIN_VERSION_MIN that has no use on a
live system.
Change verbose 'feature not found' messages to debug.
Use discard_str for string value of discard.
I think it's better not to abbreviate human-readable fields like
'discard' to a single character. Users can truncate it to the
first character themselves if they wish.
It's confusing to use the variable name discard for different things in
different places - use discard_str when it's a string not the enum.
Respond with "unknown" rather than a NULL pointer if there's an
internal error and the discard value is invalid.
Don't accept 'no_passdown' or 'no-passdown' variants in the LVM
metadata: this is written by the program so should only ever contain
"nopassdown" and should be validated strictly against that.
Remove the limit for the major and minor number arguments used while
specifying persistent numbers via the -My --major <major> --minor <minor>
options, which was previously set to 255. Follow the kernel limit instead,
which is 12 bits for the major and 20 bits for the minor number (kernel >= 2.6
and LVM formats that do not have FMT_RESTRICTED_LVIDS - so the old limit of
255 is still kept for the lvm1 format).
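For example, a persistent device number within the new kernel limits can now
be requested like this (the values are illustrative only; the device-mapper
major number may differ on a given system):
# lvcreate -L1G -n lv0 -My --major 253 --minor 1042 vg0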
Peter Rajnoha [Tue, 31 Jul 2012 14:20:24 +0000 (16:20 +0200)]
systemd: add lvm2 activation generator
The lvm2 activation generator generates systemd units conditionally
based on the global/use_lvmetad lvm.conf setting.
If use_lvmetad=0, the lvm2-activation-early.service and lvm2-activation.service
units will be generated. These units are responsible for direct volume activation
by calling "vgchange -aay --sysinit" (this is actually the original on-boot
activation as it was used before). If use_lvmetad=1, no units will be generated
as we're relying on autoactivation.
An important thing to note is that the lvm2-activation units normally bring
in the udev-settle ("storage-wait") service that waits for udev to settle
(with block devices). We don't need this if lvmetad is used in conjunction
with the autoactivation feature... but systemd units can't be enabled or
disabled (or have dependencies added/removed) dynamically based on external
configuration. Therefore, we need the unit generator, which adds support for
such situations: the units as a whole either exist or not, based on the
external configuration.
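The generator keys off the global/use_lvmetad lvm.conf setting mentioned
above, e.g.:

  global {
      # 0: the lvm2-activation-early/lvm2-activation units are generated
      # 1: no units are generated, autoactivation is used instead
      use_lvmetad = 0
  }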
Jonathan Brassow [Thu, 26 Jul 2012 22:06:06 +0000 (17:06 -0500)]
vgextend: Allow PVs to be added to VGs that have PVs missing
Allowing people to add devices to a VG that has PVs missing makes it
possible to repair RAID LVs in cases that would otherwise be unrepairable.
For example, if a user creates a RAID 4/5/6 LV using all of the
available devices in a VG, there will be no spare devices to
repair the LV with if a device should fail. Further, because the
VG is missing a device, new devices cannot be added to allow the
repair. If 'vgreduce --removemissing' were attempted, the
"MISSING" PV could not be removed without also destroying the RAID
LV.
Allowing vgextend to operate solves the circular dependency.
When the PV is added by a vgextend operation, the sequence number is
incremented and the 'MISSING' flag is put on the PVs which are missing.
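For example, a failed device in an otherwise fully allocated VG can now be
handled roughly like this (hypothetical device, VG and LV names):
# vgextend vg00 /dev/sdf
# lvconvert --repair vg00/raidlv
# vgreduce --removemissing vg00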
Peter Rajnoha [Thu, 26 Jul 2012 09:30:27 +0000 (11:30 +0200)]
systemd: ensure monitoring is handled after lvmetad
Monitoring is handled using a "vgchange --monitor" call. Ensure that lvmetad
is up and running at the time of this call to prevent any fallback to a
direct scan within the vgchange. The same applies to the shutdown sequence,
but the other way round: switch monitoring off first and stop lvmetad
afterwards.
Jonathan Brassow [Wed, 25 Jul 2012 00:02:06 +0000 (19:02 -0500)]
RAID: Fix segfault when attempting to replace RAID 4/5/6 device
Commit 8767435ef847831455fadc1f7e8f4d2d94aef0d5 allowed RAID 4/5/6
LVs to be extended properly, but introduced a regression in device
replacement - a critical component of fault tolerance.
When only 1 or 2 drives are being replaced, the 'area_count' needed
can be equal to the parity_count. The 'area_multiple' for RAID 4/5/6
was computed as 'area_count - parity_devs', which could result in
'area_multiple' being 0. This would ultimately lead to a division by
zero error. Therefore, in calc_area_multiple, it is important to take
into account the number of areas that are being requested - just as
we already do in _alloc_init.
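A simplified sketch of the guard being described - not the actual LVM2
implementation, just the shape of the fix:

  #include <stdint.h>

  /* When a specific (small) number of areas is requested, as happens during
   * device replacement, do not let the multiple collapse to zero. */
  static uint32_t calc_area_multiple(uint32_t area_count, uint32_t parity_devs,
                                     uint32_t requested_areas)
  {
          if (requested_areas && requested_areas <= parity_devs)
                  return 1;
          return area_count - parity_devs;
  }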
A regression introduced in 2.02.89 (11e520256b3005ed813ce83f8770aaab74edef3f)
caused 'lvm dumpconfig <node>' to print out
the node as well as its subsequent siblings.
The information about "only_one" mode got lost.
Before this patch (just an example node):
# lvm dumpconfig global/use_lvmetad
use_lvmetad=1
thin_check_executable="/usr/sbin/thin_check"
thin_check_options="-q"
(...all nodes to the end of the section)
With this patch applied:
# lvm dumpconfig global/use_lvmetad
use_lvmetad=1
Peter Rajnoha [Thu, 19 Jul 2012 14:45:08 +0000 (16:45 +0200)]
daemon-server: fix error message on daemon shutdown
If a daemon (like lvmetad, which uses the common daemon-server code)
received a kill signal that was supposed to shut the daemon down, a spurious
message was issued: "Failed to handle a client connection".
This happened if the kill signal arrived right in the middle of waiting for a
client request in "select" - the request that was supposed to be handled was,
of course, blank at that moment.
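One way to avoid the spurious message is to distinguish a signal-interrupted
select() from a real failure; a sketch under assumed names (not the
daemon-server code itself):

  #include <errno.h>
  #include <signal.h>
  #include <sys/select.h>

  extern volatile sig_atomic_t _shutdown_requested; /* set by the signal handler */

  static int wait_for_client(int socket_fd)
  {
          fd_set rfds;
          FD_ZERO(&rfds);
          FD_SET(socket_fd, &rfds);
          if (select(socket_fd + 1, &rfds, NULL, NULL, NULL) < 0) {
                  if (errno == EINTR && _shutdown_requested)
                          return 0;  /* clean shutdown - no error message */
                  return -1;         /* genuine failure worth reporting */
          }
          return 1;                  /* a client request is ready */
  }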
Peter Rajnoha [Tue, 10 Jul 2012 11:49:46 +0000 (13:49 +0200)]
activate: skip manual activation for --sysinit -aay
When --sysinit -a ay is used with vg/lvchange and lvmetad is up and running,
we should skip manual activation as that would be a useless step - all volumes
are autoactivated once all the PVs for a VG are present.
If lvmetad is not active at the time of the vgchange --sysinit -a ay call,
the activation proceeds in the standard 'manual' way.
This way, we can still have vg/lvchange --sysinit -a ay called
unconditionally in system initialization scripts, regardless of whether
lvmetad is used or not.
Jonathan Brassow [Tue, 26 Jun 2012 14:44:54 +0000 (09:44 -0500)]
RAID: Fix extending size of RAID 4/5/6 logical volumes.
Reducing a RAID 4/5/6 LV or extending it with a different number of
stripes is still not implemented. This patch covers the "simple" case
where the LV is extended with the same number of stripes as the original.
In process_each_pv() if we haven't yet scanned and the PV appears
to be an orphan, we must scan the other PVs looking for mdas that
reference it to find out what VG it is in.
1. If the PV has no mdas, we must scan.
2. If the PV has an mda that is not ignored we do not need to scan.
3. If the PV has an mda that is ignored, we do need to scan.
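A sketch of that decision with assumed helper names (not the actual
process_each_pv() code):

  /* Return 1 if the other PVs must be scanned to find this apparent
   * orphan's VG, 0 if its own metadata areas are sufficient. */
  static int orphan_needs_scan(unsigned mda_count, unsigned ignored_mda_count)
  {
          if (!mda_count)
                  return 1;                /* case 1: no mdas at all */
          if (ignored_mda_count < mda_count)
                  return 0;                /* case 2: a usable mda exists */
          return 1;                        /* case 3: all mdas are ignored */
  }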
Change the 'lv_passes_volumes_filter' function back to static as it's not
actually needed in the other code (a remnant from the development version).
Fix the lvm.conf comment referencing '--autoactivate', which was finally
decided to be '--activate ay'.
If _alloc_parallel_area for raid devices chooses an area already used
up, it doesn't notice that it has no space left in it and leaves
later code trying to place a zero-length area into the LV.
Peter Rajnoha [Thu, 28 Jun 2012 10:49:33 +0000 (12:49 +0200)]
initscript: call vgchange -aay instead of -aly
The clvmd init script previously called "vgchange -aly" to activate all VGs
in a cluster environment. This activated all VGs, no matter whether they
were clustered or not.
Autoactivation for clustered VGs is not supported yet, so the behaviour of
-aay is still the same as before for clustered VGs. However, for
non-clustered VGs, we need to check against the
activation/auto_activation_volume_list whether the VG/LV should be activated
on boot or not.
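The list itself is defined in lvm.conf, for example (illustrative VG/LV
names and tag):

  activation {
      auto_activation_volume_list = [ "vg00", "vg01/lvhome", "@boottag" ]
  }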
Peter Rajnoha [Thu, 28 Jun 2012 08:15:07 +0000 (04:15 -0400)]
lvcreate: add --activate ay (autoactivate)
One can use "lvcreate --aay" to have the newly created volume
activated or not activated based on the activation/auto_activation_volume_list
this way.
Note: -Z/--zero is not compatible with -aay, zeroing is not used in this case!
When using lvcreate -aay, a default warning message is also issued that zeroing
is not done.
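For example (illustrative names and size):
# lvcreate -aay -L1G -n lvdata vg00
The new LV then ends up active only if "vg00" or "vg00/lvdata" passes the
activation/auto_activation_volume_list filter (or if that list is not
defined at all).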
Peter Rajnoha [Wed, 27 Jun 2012 13:35:11 +0000 (09:35 -0400)]
pvscan: add --activate ay option (autoactivate)
Define an auto_activation_handler that activates VGs/LVs automatically
based on the activation/auto_activation_volume_list (activating all
volumes by default if the list is not defined).
The autoactivation is done within the pvscan call in 69-dm-lvmetad.rules
that watches for udev events (device appearance/removal).
For now, this works for non-clustered and complete VGs only.
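The call made from the udev rule roughly amounts to the following
(illustrative invocation, not the exact rule text):
# pvscan --cache --activate ay --major 8 --minor 16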
Peter Rajnoha [Wed, 27 Jun 2012 14:21:15 +0000 (10:21 -0400)]
vgchange: add --activate ay option (autoactivate)
Normally, 'vgchange -ay' activates all volume groups (that pass the
activation/volume_list filter, if set).
This call can appear in two scenarios:
- system boot (so activation within a script in general)
- manual call on the command line (so activation on the user's direct request)
For the former one, we would like to select which VGs should be actually
activated. One can define the list of VGs directly to do that. But that
would require the same list to be provided in all the scripts.
The 'vgchange -aay' call will additionally check the
activation/auto_activation_volume_list and will activate only those VGs/LVs
that pass this filter (assuming all are to be activated if the list is not
defined - the same logic we already have for activation/volume_list).
Init/boot scripts should primarily use this form of activation (which,
anyway, becomes only a fallback now with autoactivation done on PV
appearance in tandem with lvmetad in place).
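In short (the VG name is illustrative):
# vgchange -ay vg00    (checks activation/volume_list only)
# vgchange -aay vg00   (additionally checks activation/auto_activation_volume_list)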
Peter Rajnoha [Wed, 27 Jun 2012 12:59:34 +0000 (08:59 -0400)]
activate: add autoactivation hooks
Define an 'activation_handler' that gets called automatically on
PV appearance/disappearance while processing the lvmetad_pv_found
and lvmetad_pv_gone functions that are supposed to update the
lvmetad state based on PV availability state. For now, the actual
support is for PV appearance only, leaving room for PV disappearance
support as well (which is a more complex problem to solve as it needs to
take a possible device stack into account).
Add a new activation change mode - CHANGE_AAY exposed as
'--activate ay/-aay' argument ('activate automatically').
Factor out the vgchange activation functionality for use in other
tools (like pvscan...).
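The shape of such a hook might look roughly like this - an illustrative
sketch only, not the real LVM2 prototype:

  /* Invoked from lvmetad_pv_found() once a VG becomes complete; a future
   * counterpart in lvmetad_pv_gone() could trigger deactivation. */
  typedef int (*activation_handler_fn)(const char *vg_name, int partial,
                                       int change_mode);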
Peter Rajnoha [Wed, 27 Jun 2012 11:48:31 +0000 (07:48 -0400)]
args: add --activate synonym for --available arg
We're referring to 'activation' all over the code and talking about 'LVs
being activated' all the time, so let's use 'activation/activate'
everywhere for clarity and consistency (still providing the old
'available' keyword as a synonym for backward compatibility with
existing environments).
Update release_lv_segment_area not to discard any PV extents,
as it also gets used when moving extents between LVs.
Instead, call a new function release_and_discard_lv_segment_area() in
the two places where data should be discarded - lv_reduce() and
remove_mirrors_from_segments().
Peter Rajnoha [Fri, 22 Jun 2012 09:50:02 +0000 (05:50 -0400)]
udev: udev rules cleanup
Remove executable path detection in udev rules and use the configured
sbindir instead, but still provide the original functionality by means of
'configure --enable-udev-rule-exec-detection'.
Normally, the exec path for the tools called in udev rules should not differ
from the sbindir used; however, there are cases where this is necessary. For
example, different environments could be assembled in a way that these paths
differ for some reason (distribution installer, initrd ...).
This functionality is kept for compatibility only. Any environment
moving the binaries around and using different paths should be fixed
eventually!
Peter Rajnoha [Thu, 21 Jun 2012 12:41:52 +0000 (08:41 -0400)]
configure: run directory configuration cleanup
There were several hard-coded values for the run directory around the code.
Also, some tools are DM-specific only, others are LVM-specific, and no
distinction was made here before. With this patch applied, we have this
cleaned up a bit (subsystem in brackets, defaults in parentheses):
- [common] configurable PID_DIR (/var/run)
- [lvm] configurable RUN_DIR (/var/run/lvm)
- [lvm] configurable locking dir (/var/lock/lvm)
The changes briefly:
- added configure --with-default-pid-dir
- added configure --with-default-dm-run-dir
- added configure --with-lvmetad-pidfile
- by default, using one common pid directory for everything
(only lvmetad was not following this before)
Peter Rajnoha [Mon, 25 Jun 2012 09:34:21 +0000 (11:34 +0200)]
dev-io: open device read-only to obtain readahead value
There's no need to have the device open RW while obtaining the readahead value.
The RW open used before caused the CHANGE udev event to be generated if the
WATCH udev rule was set for the underlying device (and that is normally the
case both for non-dm and dm devices by default).
This did not cause any problems before since we were not interested in
*underlying* devices. However, with upcoming changes (autoactivation), we're
watching for events on underlying devices marked as PVs and such a spurious
event could cause the autoactivation code to be triggered. So when trying
to deactivate the volume, we could end up with immediate activation just
after that because of the CHANGE event originating from the WATCH udev rule,
since the underlying device was open RW during the deactivation process.
Though maybe a better solution would be to filter such spurious events out
of the autoactivation process completely, it's still useful to generate as
few spurious events as possible in the system itself.
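A minimal sketch of the read-only approach, assuming the standard BLKRAGET
ioctl (this is not the actual dev-io code):

  #include <fcntl.h>
  #include <linux/fs.h>   /* BLKRAGET */
  #include <sys/ioctl.h>
  #include <unistd.h>

  /* Open read-only so the udev WATCH rule, which reacts to the close of a
   * descriptor opened for writing, does not fire a spurious CHANGE event. */
  static long get_dev_readahead(const char *dev_path)
  {
          long ra;
          int fd = open(dev_path, O_RDONLY);
          if (fd < 0)
                  return -1;
          if (ioctl(fd, BLKRAGET, &ra) < 0)
                  ra = -1;
          close(fd);
          return ra;
  }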
Zdenek Kabelac [Fri, 22 Jun 2012 09:15:14 +0000 (11:15 +0200)]
fix: limit preallocate stack size
If the user set a reserved stack size bigger than what is allowed by the
resource limits (ulimit -s), they would get a core dump.
So avoid the core dump and ignore the creation of such a large stack
reservation (lvm should work properly with just 64KB, so the option could
be eliminated).
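A sketch of the kind of check involved (illustrative only), comparing the
requested reservation against the soft RLIMIT_STACK limit:

  #include <stddef.h>
  #include <sys/resource.h>

  /* Return the reserved stack size to actually use: ignore a request that
   * does not fit below the current 'ulimit -s' soft limit. */
  static size_t clamp_reserved_stack(size_t requested_bytes)
  {
          struct rlimit rlim;
          if (getrlimit(RLIMIT_STACK, &rlim) == 0 &&
              rlim.rlim_cur != RLIM_INFINITY &&
              requested_bytes >= (size_t) rlim.rlim_cur)
                  return 0;   /* oversized request - skip the preallocation */
          return requested_bytes;
  }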