Zdenek Kabelac [Tue, 27 Feb 2024 16:16:55 +0000 (17:16 +0100)]
man: lvmthin update for external origin usage
Document usage of chaining external origins, as it is now possible to
create a thinA LV in poolA which uses a thinB LV in poolB as its external
origin, and a user could create a chain of such LVs.
Zdenek Kabelac [Tue, 27 Feb 2024 16:23:01 +0000 (17:23 +0100)]
thin: support chaining of external origins
Improve support for building the DM tree when there is a chain
of external origins used for an LV.
For this we cannot use track_external_lv_deps, as it works
only for an LV with just one external origin in its device tree.
Instead, add the 'dev' directly to the DM tree instead of adding the whole LV.
This avoids a possible recursive endless loop; however, we may eventually
have some problems with undiscovered/missing devices in the DM tree.
Zdenek Kabelac [Mon, 4 Mar 2024 10:45:35 +0000 (11:45 +0100)]
thin: delayed resume for LV conversion
When an existing LV is being converted to an external origin, there is
a short moment during the DM table manipulation when the LV is being
'resumed' as a 'read-only' volume while still being live as an 'rw'
volume, i.e. we could have had a single thin LV active twice.
To avoid such weird scenarios of dual access to the same volume, we
just postpone the resume until the moment when the existing volume
is already suspended, so no I/O can be in flight to such a device.
Note: there is a slight catch - we now basically have a different
'risk' case where a resume of such a new external origin LV might
fail while we are already in the suspended tree state - resolving
the error path in this situation is non-trivial as well...
Zdenek Kabelac [Fri, 23 Feb 2024 19:52:51 +0000 (20:52 +0100)]
thin: external origins across thin-pool
Fix/support creation and usage of an external origin
across thin pools - so a thin LV can use a thin LV from
some other thin pool as its external origin (read-only).
Zdenek Kabelac [Sun, 3 Mar 2024 22:23:04 +0000 (23:23 +0100)]
thin: validate usable volume for external origin
When creating an external origin via 'lvcreate --type thin',
add validation for the LV being usable as an external origin,
since certain LVs cannot really be used this way.
Also call this function early during lvcreate cmdline arg
validation so we do not need to do unnecessary operations.
Zdenek Kabelac [Tue, 27 Feb 2024 16:19:30 +0000 (17:19 +0100)]
activation: reduce table preloading
Over time the code for preloading detached LVs got unnecessarily
complicated. But actually we need to preload only LVs that
were previously non-toplevel (invisible) LVs and became visible
toplevel LVs in the precommitted metadata.
If some other rule were needed, it would likely be a bug in the
conversion code forgetting to set the visibility flag on the detached LV.
This reduces the number of unnecessary repeated DM tree preloads.
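A minimal sketch of that rule (the structure and field names are illustrative, not lvm2's internal API):

    /* Preload a detached LV only when it was an invisible sub-LV in the
     * committed metadata and becomes a visible top-level LV in the
     * precommitted metadata. */
    struct lv_visibility {
        int visible_committed;      /* top-level in committed metadata */
        int visible_precommitted;   /* top-level in precommitted metadata */
    };

    int lv_needs_preload(const struct lv_visibility *lv)
    {
        return !lv->visible_committed && lv->visible_precommitted;
    }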
Zdenek Kabelac [Fri, 23 Feb 2024 11:54:54 +0000 (12:54 +0100)]
snapshots: avoid monitoring of inactive origins
External origins for thin volumes can also be used at the same time
as old (thick) snapshot origins. However, in this case it's possible
the LV is only active as an 'external' origin, while the old snapshot LVs
are not active. For this case, before handling these
LVs for (un)monitoring, check the active state of the origin LV.
This should prevent warnings about monitoring failures.
Zdenek Kabelac [Thu, 8 Feb 2024 13:58:32 +0000 (14:58 +0100)]
cache: check module in modules builtin
Instead of parsing the whole /proc/kallsyms, use a faster variant
based on the modprobe tool logic.
lvm2 here wants to know whether a particular DM cache policy is
present in the kernel - however, since the cache policy does not have
any kernel module parameters and it can be built into the kernel,
there is no /sys/module directory in such a case and we would need to call
modprobe every time we want to detect this case.
The old solution tried to look for a particular kernel symbol
(and likely not the right way, as smq_exit might actually be omitted).
The new version checks MODULES_PATH/`uname -r`/modules.builtin for
the presence of the cache policy module instead of the CPU-expensive parsing
of kallsyms.
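A rough illustration of such a check, assuming the usual /lib/modules/`uname -r`/modules.builtin layout and '-'/'_' name normalization (a sketch, not the lvm2 code):

    #include <limits.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/utsname.h>

    /* Return 1 if modname (e.g. "dm_cache_smq") is listed in the running
     * kernel's modules.builtin, 0 otherwise. */
    int module_is_builtin(const char *modname)
    {
        struct utsname uts;
        char path[PATH_MAX], line[PATH_MAX];
        int found = 0;
        FILE *fp;

        if (uname(&uts))
            return 0;
        snprintf(path, sizeof(path), "/lib/modules/%s/modules.builtin", uts.release);
        if (!(fp = fopen(path, "r")))
            return 0;

        while (!found && fgets(line, sizeof(line), fp)) {
            /* Lines look like "kernel/drivers/md/dm-cache-smq.ko". */
            char *base = strrchr(line, '/');
            char *dot;

            base = base ? base + 1 : line;
            if ((dot = strstr(base, ".ko")))
                *dot = '\0';
            /* File names use '-' where module names use '_'. */
            for (char *p = base; *p; p++)
                if (*p == '-')
                    *p = '_';
            found = !strcmp(base, modname);
        }
        fclose(fp);
        return found;
    }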
David Teigland [Mon, 5 Feb 2024 19:15:37 +0000 (13:15 -0600)]
devices file: rename unused system.devices
If lvm.conf has use_devicesfile=0 and /etc/lvm/devices/system.devices
exists, then rename it to system.devices-unused.YYYYMMDD.HHMMSS.
This prevents an old, incorrect system.devices from being used in
the future if lvm.conf is changed to use_devicesfile=1.
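A minimal sketch of the rename, assuming the standard devices file path and a local-time timestamp (the real handling lives in lvm's device_id code):

    #include <stdio.h>
    #include <time.h>

    /* Move an unused system.devices out of the way, keeping it around
     * under a timestamped name for reference. */
    int rename_unused_devices_file(void)
    {
        const char *path = "/etc/lvm/devices/system.devices";
        char newpath[128], stamp[32];
        time_t t = time(NULL);
        struct tm tm;

        if (!localtime_r(&t, &tm))
            return -1;
        strftime(stamp, sizeof(stamp), "%Y%m%d.%H%M%S", &tm);
        snprintf(newpath, sizeof(newpath), "%s-unused.%s", path, stamp);
        return rename(path, newpath);
    }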
David Teigland [Wed, 31 Jan 2024 18:14:02 +0000 (12:14 -0600)]
devices file: back up each version
Create backup copies of system.devices in /etc/lvm/devices/backup
named system.devices-YYYYMMDD.HHMMSS.NNNN. NNNN is the version
counter from the file.
Each time that an lvm command writes a new system.devices file,
it also writes the same file in the backup directory.
A new comment line is added to system.devices with HASH=<num>
where <num> is a crc calculated from the uncommented lines in
system.devices. This lets lvm detect if the file has been
modified outside of lvm itself.
If system.devices is edited directly, the next time a command
reads the file, the crc will not match the HASH value. The
command will then rewrite system.devices with the correct HASH
value, and create a backup reflecting the edits.
A default limit of 50 backup files is kept, configurable by
lvm.conf devicesfile_backup_limit (set to 0 to disable backups).
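An illustrative sketch of a checksum over the uncommented lines of system.devices, which is how out-of-band edits can be detected; the CRC-32 below is a stand-in, and the hash lvm actually stores after HASH= may be computed and formatted differently:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Plain reflected CRC-32, usable incrementally (start with crc = 0). */
    uint32_t crc32_update(uint32_t crc, const void *buf, size_t len)
    {
        const uint8_t *p = buf;

        crc = ~crc;
        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0);
        }
        return ~crc;
    }

    /* Hash only the uncommented lines; comment lines (including the
     * HASH= line itself) do not contribute. */
    uint32_t devices_file_hash(FILE *fp)
    {
        char line[4096];
        uint32_t crc = 0;

        while (fgets(line, sizeof(line), fp))
            if (line[0] != '#')
                crc = crc32_update(crc, line, strlen(line));
        return crc;
    }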
David Teigland [Thu, 8 Feb 2024 20:51:46 +0000 (14:51 -0600)]
udev: ignore LVs containing PVs
If LVM LVs happen to contain PVs, they are passed to the lvm udev
rule for processing, where they should be ignored. PVs on LVs
most likely belong to VM images, and don't belong to the host
which sees the LV. It's unsafe for the host to use these PVs.
Without this change, the LV would be processed by pvscan which
would generally ignore it, either because of the devices file,
or because of the default lvm policy to not consider LVs as
potential PVs. This change makes the udev rule consistent
with that policy and avoids the unnecessary system messages
produced when pvscan ignores the LV.
Andre Klärner [Mon, 5 Feb 2024 19:57:30 +0000 (13:57 -0600)]
system_id: explain the reason for choosing appmachineid over machineid
Since the reason for choosing the appmachineid over the
direct use of machineid is not easily found, I extended the help text to
clarify this a bit.
Zdenek Kabelac [Wed, 17 Jan 2024 16:25:34 +0000 (17:25 +0100)]
thin_pool: correct refactoring chunk_size
The function that recalculates chunk_size according to device hints needs to be
called after chunk_size has been set on the thin pool segment - correct
this ordering mistake introduced in a previous refactoring commit.
Zdenek Kabelac [Wed, 17 Jan 2024 16:13:26 +0000 (17:13 +0100)]
vdo: correct vdo header size
The previous patch that introduced support for thin pool with VDO
did not handle the header size correctly, as this part is not fully usable
yet. We are going to try to use 0, but the current state of the code is not
yet compliant with this logic, so keep vdo_header_size during conversion
and also correctly pass through virtual_extents to VDO formatting.
Zdenek Kabelac [Thu, 14 Dec 2023 13:06:54 +0000 (14:06 +0100)]
vdo: refactor conversion to vdo lv
Introduce struct vdo_convert_params {} to pass in all the parameters
needed for the conversion of an LV to a vdopool + vdo LV.
Function convert_vdo_lv() is also able to create a new LV and swap
segments, so the passed-in LV can later be used for further
conversion; this refactoring makes it ready for more enhanced
usage.
raid: add messages to lvs command output in case RaidLVs require a refresh
If a RaidLV mapping needs to be refreshed as a result of temporarily failed
and recovered RAID leg devices (pairs) caused by writes to the LV during the failure,
the requirement is reported by the volume health character 'r' in position 9 of the
LV's attribute field (see 'man lvs' about additional volume health characters).
As this character can be overlooked, this patch adds messages to the top
of the lvs command output informing the user explicitly about the fact.
raid: lvcreate and lvchange fail if --min_recovery_rate is defined
Both commands default [raid_](min|max)recoveryrate to 0 but ensure
min_recovery_rate is not larger than max_recovery_rate. This results
in command failure without requesting the user to also define
max_recovery_rate >= min_recovery_rate.
Fix both commands by setting max_recovery_rate = min_recovery_rate
in case "lvcreate/lvchange --minrecoveryrate Size ..." requests a
larger value than the current max_recovery_rate without also giving option
"--maxrecoveryrate Size ..." with a size greater than or equal to the minimum.
David Teigland [Fri, 17 Nov 2023 17:04:22 +0000 (11:04 -0600)]
lvs: set first attr flag for raid integrity images
The first lv_attr flag is 'i' or 'I' for a raid image.
(i: raid image, I: out of sync raid image)
For integrity raid images (_iorig), the flag was not being set.
David Teigland [Fri, 10 Nov 2023 22:24:45 +0000 (16:24 -0600)]
pvs, pvscan: new option -A to show PVs outside the devices file
pvs -A|--allpvs
Show PVs that would otherwise be excluded by the devices file.
pvscan -A|--allpvs
Show PVs that would otherwise be excluded by the devices file.
For those devices that are included by the devices file,
their device ID is displayed in place of the usual "lvm2"
format and size.
(pvs -a|--all is unchanged, and shows devices not formatted as PVs.)
David Teigland [Wed, 8 Nov 2023 17:46:38 +0000 (11:46 -0600)]
device_id: ensure pvid buffers are ID_LEN+1
A pvid string read from system.devices could be less
than ID_LEN since system.devices fields can be edited.
Ensure the pvid buffer is ID_LEN+1 even if the string
read from the file is shorter.
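A minimal sketch of the defensive copy, assuming ID_LEN is 32 as in lvm2 (the helper below is illustrative rather than the actual code):

    #include <string.h>

    #define ID_LEN 32

    /* Always work with a zero-filled ID_LEN+1 buffer so a short string
     * read from system.devices is still NUL-terminated and fully padded. */
    void copy_pvid(char dst[ID_LEN + 1], const char *src)
    {
        memset(dst, 0, ID_LEN + 1);     /* zero padding + NUL termination */
        strncpy(dst, src, ID_LEN);      /* never writes past ID_LEN chars */
    }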
David Teigland [Thu, 2 Nov 2023 19:54:31 +0000 (14:54 -0500)]
device_id: improve searched_devnames temp file
Include info in the temp file to confirm that it should be used.
The temp file is meant to suppress repeated, identical searches
for the same PVIDs on the same set of devices. Write to the file
a count and hash of the missing PVIDs and a count and hash of the
devices to search. A subsequent command will ignore and remove
the temp file if any of these values differ. We don't want to
suppress a search if a change has occurred, and a missing PV could
be found by scanning devices.
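An illustrative sketch of the fingerprint comparison: any difference between the recorded and current values means the temp file is stale and the search should run. The field names and file format here are assumptions, not lvm's actual temp file layout:

    #include <stdio.h>

    struct search_info {
        unsigned pvid_count, pvid_hash;  /* missing PVIDs being searched for */
        unsigned dev_count, dev_hash;    /* devices that would be scanned */
    };

    /* Return 1 if the previous search recorded in the temp file matches the
     * current one (so it can be skipped), 0 otherwise. */
    int search_already_done(FILE *fp, const struct search_info *now)
    {
        struct search_info prev;

        if (fscanf(fp, "pvids %u %x devs %u %x",
                   &prev.pvid_count, &prev.pvid_hash,
                   &prev.dev_count, &prev.dev_hash) != 4)
            return 0;                    /* unreadable: ignore and remove the file */

        return prev.pvid_count == now->pvid_count &&
               prev.pvid_hash == now->pvid_hash &&
               prev.dev_count == now->dev_count &&
               prev.dev_hash == now->dev_hash;
    }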
David Teigland [Thu, 2 Nov 2023 17:10:56 +0000 (12:10 -0500)]
device_id: change default search_for_devnames to all
Problematic scenario:
. the device for a PV has no wwid, so it's identified in system.devices
with IDTYPE=devname IDNAME=/dev/foo
. user adds/enables a wwid for the device
. on reboot, the device name changes, e.g. now /dev/bar
. the code that searches for the new device name includes an
optimization to skip looking on devs that have a wwid, on
the basis that a device with a wwid won't have IDTYPE=devname
. this optimization causes lvm to not look for the PV on /dev/bar
since that device now has a wwid, so the PV is not found
. the optimization is enabled by search_for_devnames="auto"
. change the default to search_for_devnames="all" which does not
use the problematic optimization
David Teigland [Fri, 27 Oct 2023 22:39:32 +0000 (17:39 -0500)]
lvmdevices: new output and options for check and update
- add new comparison between old and new entries, and use this
as the basis for new dedicated output for check and update
- add new --refresh option to search for missing PVIDs on all
devices, and possibly update the device ID
- internally, only use the term "refresh" for cases where a
new device ID may be found and assigned for a missing PVID
Tony Asleson [Wed, 25 Oct 2023 20:08:12 +0000 (15:08 -0500)]
tests: lvmdbusd handles empty LvCommon.Devices
During vdo testing with smaller block devices the test needs to determine
which PVs in a VG are unused. This was leveraging LvCommon.Devices, but
that isn't populated when an LV resides on an LV pool instead of a PV.
This is a known limitation of the code at this time. For now we will walk
all the PVs in the VG looking for ones that don't have any associated LVs
and use them instead.
Zdenek Kabelac [Thu, 26 Oct 2023 19:57:59 +0000 (21:57 +0200)]
device_mapper: raid status handle all a chars
When getting raid status from some older kernels, we may get an 'a'
instead of 'A' for a leg when doing the initial synchronization.
This may prohibit removal of a newly synchronized leg until synchronization
is finished.
So in this case change the status to look like it is being reported
by a newer kernel version.
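A sketch of the idea (not the libdm source): during the initial synchronization, map the lowercase 'a' reported by older kernels to the 'A' a newer kernel would report before acting on the status.

    /* health_chars is the per-leg health string from the raid status line. */
    void normalize_raid_health_chars(char *health_chars)
    {
        for (char *p = health_chars; *p; p++)
            if (*p == 'a')
                *p = 'A';
    }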
David Teigland [Tue, 17 Oct 2023 18:54:31 +0000 (13:54 -0500)]
device_id: first match non-devname device ids
Incorrectly matching a dev to a devname id (due to changing devnames)
before matching the dev to a proper device id can result in the
dev not being matched to the real id.
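A conceptual sketch of the two-pass matching order; the structures and id type strings below are simplified assumptions, not lvm's device_id code:

    #include <stddef.h>
    #include <string.h>

    struct dev_use {                 /* one entry from system.devices */
        const char *idtype;          /* e.g. "sys_wwid" or "devname" */
        const char *idname;
    };

    struct dev {                     /* a block device seen by the command */
        const char *wwid;            /* stable id, may be NULL */
        const char *name;            /* /dev path, can change across boots */
    };

    static int entry_matches(const struct dev_use *du, const struct dev *dev)
    {
        if (!strcmp(du->idtype, "devname"))
            return dev->name && !strcmp(du->idname, dev->name);
        return dev->wwid && !strcmp(du->idname, dev->wwid);
    }

    const struct dev_use *match_dev(const struct dev_use *du, int n,
                                    const struct dev *dev)
    {
        /* Pass 1: stable device ids only. */
        for (int i = 0; i < n; i++)
            if (strcmp(du[i].idtype, "devname") && entry_matches(&du[i], dev))
                return &du[i];
        /* Pass 2: fall back to devname ids, which can be stale. */
        for (int i = 0; i < n; i++)
            if (!strcmp(du[i].idtype, "devname") && entry_matches(&du[i], dev))
                return &du[i];
        return NULL;
    }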
Peter Rajnoha [Mon, 23 Oct 2023 11:56:55 +0000 (13:56 +0200)]
libdm: report: fix invalid JSON if using DM_REPORT_OUTPUT_MULTIPLE_TIMES and selection
When reporting in JSON format, we need to be able to find the 'last
displayed row', not just 'last row' as we did before. This is used
to decide whether to put the JSON_SEPARATOR (the ',' character)
between the lines when reporting in JSON format.
This is mainly important in case we use a combination of JSON format
and a report marked with DM_REPORT_OUTPUT_MULTIPLE_TIMES flag.
Such report may be reused several times with different selection
criteria each time. In that case, the report always contains all lines
in memory, even though some of them do not need to pass the selection
criteria that are currently used.
Without the DM_REPORT_OUTPUT_MULTIPLE_TIMES flag, the report only contains
the lines that have passed the selection criteria, so this wasn't
an issue in that case.
Fix suggested by Lars Ellenberg and reported here: https://github.com/lvmteam/lvm2/issues/130
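A conceptual sketch of the idea, not libdm's implementation: when all rows stay in memory and selection is applied at output time, the ',' separator has to be decided against the last row actually displayed rather than the last row stored.

    #include <stdbool.h>
    #include <stdio.h>

    struct row {
        const char *json;            /* formatted JSON object for this row */
        bool selected;               /* passes the current selection criteria */
    };

    void output_json_rows(const struct row *rows, int count)
    {
        bool printed_one = false;

        printf("[");
        for (int i = 0; i < count; i++) {
            if (!rows[i].selected)
                continue;            /* held in memory but not displayed */
            if (printed_one)
                printf(",");         /* separator only between displayed rows */
            printf("%s", rows[i].json);
            printed_one = true;
        }
        printf("]\n");
    }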