module build fix.
Other flags let you change the installation & working directories.
Example:
- ./configure --with-kernel-dir=/usr/src/linux-2.4.16
+ ./configure --with-kernel-dir=/usr/src/linux-2.4.19-rc1
2) Patch, configure and build a new kernel containing device-mapper.
If there is already a patch for your kernel and you gave 'configure'
appropriate parameters in step 1, you can just run 'make apply-patches'
- from the top directory.
+ from the top directory and go on to step 3.
- If you are using an old version of User Mode Linux, you may also
- need to apply the patch patches/misc/uml_config.patch.
-
Configure, build and install your kernel in the normal way, selecting
'Device mapper support' from the 'Multiple devices driver support' menu.
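   In shell terms, step 2 amounts to something like the following (the
   kernel path and the exact 2.4-era build targets are illustrative -
   adapt them to your own tree):

```shell
# from the device-mapper top directory, after step 1's ./configure
make apply-patches

# then configure and build the kernel in the usual 2.4 way
cd /usr/src/linux-2.4.19-rc1
make menuconfig   # enable 'Device mapper support' under 'Multiple devices driver support'
make dep bzImage modules modules_install
```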
If you are patching by hand, the patches are stored in the
'patches' subdirectory. The name of each patch contains the kernel
version it was generated against and whether it is for the 'fs' or
- 'ioctl' interface. Only one interface is supported at once - don't
- apply both patches. Current development effort is concentrated
- on the 'ioctl' interface, so choose that version.
-
- The patches were generated by running 'make patches' from the 'kernel'
- subdirectory. Constituent patches are kept in patches/common, patches/fs
- and patches/ioctl. Source files are kept in kernel/common, kernel/fs
- and kernel/ioctl. Running 'make symlinks' from the 'kernel' subdirectory
- will put symbolic links into your kernel tree pointing back at the
- source files.
+ 'ioctl' interface. Current development effort is concentrated
+ on the 'ioctl' interface. (Use CVS to get the older 'fs' patches if
+ you want.)
+
+ patches/common holds the constituent patches - see patches/common/README.
+
+ Running 'make symlinks' from the 'kernel' subdirectory will put symbolic
+ links into your kernel tree pointing back at the source files.
3) Build and install the shared library (libdevmapper.so) that
The script creates the /dev/device-mapper/control device for the ioctl
interface using the major and minor numbers that have been allocated
dynamically. It prints a message if it works or else it fails silently
- with a non-zero status code.
+ with a non-zero status code (which you probably want to test for).
If you want the block devices created by the device-mapper to have a
   specific major number, then specify this when loading the module.
This major number is visible in /proc/devices.
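   For example (the module filename 'dm-mod.o' and the 'major='
   parameter name are assumptions, not confirmed here - check the
   module source for the exact spelling):

```shell
insmod dm-mod.o major=254        # hypothetical parameter name
grep device-mapper /proc/devices
```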
-5) You can now use 'dmsetup' to test the API.
+5) You can now boot your new kernel and use 'dmsetup' to test the API.
Read the dmsetup man page for more information.
+ Or proceed to install a beta version of the new LVM2 tools.
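   A quick smoke test with dmsetup might look like this (run as root;
   the device name and sector counts are only examples):

```shell
# map the first 1024 sectors of /dev/hda6 to a new logical device 'test0'
echo "0 1024 linear /dev/hda6 0" > table
dmsetup create test0 table
dmsetup info test0
dmsetup remove test0
```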
-Filesystem interface
-====================
-There is also an experimental filesystem interface, but it currently
-supports fewer features than the ioctl interface does.
+Notes about the 'proof of concept' filesystem interface
+=======================================================
+The filesystem interface has not been updated for several months,
+requires an old kernel, and is missing lots of features.
-If you wish to experiment with it you should configure --with-interface=fs
-and you may need to update some of the code to get it to compile again.
-You will need to mount dmfs.
-e.g. mount -t dmfs dmfs /dmfs (or add it to /etc/fstab)
+To experiment with it you should check out an old version from
+CVS and then 'configure --with-interface=fs'.
+You will need to mount dmfs. e.g. mount -t dmfs dmfs /dmfs
component can support user-space tools for logical volume management.
The driver maps ranges of sectors for the new logical device onto
-'targets' according to a mapping table. Currently the mapping table
-can be supplied to the driver through either an ioctl interface or a
-custom file system interface (dmfs).
+'mapping targets' according to a mapping table. Currently the mapping
+table must be supplied to the driver through an ioctl interface.
+Earlier versions of the driver also had a custom file system interface
+(dmfs), but we stopped work on it because of time pressure.
The mapping table consists of an ordered list of rules of the form:
<start> <length> <target> [<target args> ...]
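For example, a table file that concatenates two hypothetical
partitions into a single logical device could read:

```
0 1024 linear /dev/hda6 0
1024 2048 linear /dev/hdb3 0
```

Sectors 0-1023 of the new device then come from /dev/hda6 and
sectors 1024-3071 from /dev/hdb3.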
function looks up the correct target and then passes the request on to
the target to perform the remapping according to its arguments.
-So far the following targets are available:
+The following targets are available:
linear
striped
error
+ snapshot
+ mirror
The 'linear' target takes as arguments a target device name (e.g.
/dev/hda6) and a start sector and maps the range of sectors linearly
The 'error' target causes any I/O to the mapped sectors to fail. This
is useful for defining gaps in the new logical device.
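The remapping a 'linear' rule performs is plain offset arithmetic;
the sketch below (example numbers only, shell arithmetic standing in
for the kernel code) shows the calculation:

```shell
# rule: <start> <length> linear <dev> <dev_start>
start=0; length=1024; dev=/dev/hda6; dev_start=2048
logical=100   # sector of the logical device being accessed

# a sector inside [start, start+length) maps to dev_start + (logical - start)
if [ "$logical" -ge "$start" ] && [ "$logical" -lt "$((start + length))" ]; then
    echo "$dev $((dev_start + logical - start))"   # -> /dev/hda6 2148
fi
```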
+The 'snapshot' target supports asynchronous snapshots.
+See http://people.sistina.com/~thornber/snap_performance.html.
+
+The 'mirror' target will be used to implement a new pvmove.
+
In normal scenarios the mapping tables will remain small.
A btree structure is used to hold the sector range -> target mapping.
Since we know all the entries in the btree in advance we can make a
than current LVM.
-08/12/2001 Sistina UK
+Sistina UK
+Revised 26/06/2002
-This directory contains a beta release of the proposed device mapper
+This directory contains a beta release of the proposed device-mapper
for the linux kernel.
For more information about the device mapper read the INTRO file.
-0.96.01-cvs (2002-06-19)
+0.96.02-cvs (2002-06-26)
if (alloc_kiovec(1, &ps->iobuf))
goto bad;
- if (alloc_kiobuf_bhs(ps->iobuf))
- goto bad;
-
nr_pages = ps->chunk_size / (PAGE_SIZE / SECTOR_SIZE);
r = expand_kiobuf(ps->iobuf, nr_pages);
if (r)
#define DM_VERSION_MAJOR 1
#define DM_VERSION_MINOR 0
-#define DM_VERSION_PATCHLEVEL 0
-#define DM_VERSION_EXTRA "-ioctl-cvs (2002-06-19)"
+#define DM_VERSION_PATCHLEVEL 1
+#define DM_VERSION_EXTRA "-ioctl-cvs (2002-06-26)"
/* Status bits */
#define DM_READONLY_FLAG 0x00000001
.SH SYNOPSIS
.ad l
.B dmsetup create
-.I device_name table_file
+.I device_name table_file [uuid]
.br
.B dmsetup remove
.I device_name
.br
+.B dmsetup rename
+.I device_name new_name
+.br
.B dmsetup suspend
.I device_name
.br
.br
.B dmsetup info
.I device_name
+.br
+.B dmsetup deps
+.I device_name
+.br
+.B dmsetup status
+.I device_name
+.br
+.B dmsetup table
+.I device_name
+.br
+.B dmsetup wait
+.I device_name
+.br
+.B dmsetup remove_all
+.br
+.B dmsetup version
.ad b
.SH DESCRIPTION
dmsetup manages logical devices that use the device-mapper driver.
each sector in the logical device.
The first argument to dmsetup is a command.
-The second argument is the logical device name.
+The second argument is the logical device name or uuid.
.SH COMMANDS
.IP \fBcreate
-.I device_name table_file
+.I device_name table_file [uuid]
.br
-Attempts to create a device using the table file given. If
+Attempts to create a device using the table file given.
+The optional uuid can be used in place of
+device_name in subsequent dmsetup commands. If
successful a device will appear as
/dev/device-mapper/<device-name>. See below for information
on the table file format.
.IP \fBremove
.I device_name
.br
-Removes the device
+Removes a device
+.IP \fBrename
+.I device_name new_name
+.br
+Renames a device
.IP \fBsuspend
.I device_name
.br
major,minor
.br
target_count
+.IP \fBdeps
+.I device_name
+.br
+Outputs a list of (major, minor) pairs for devices referenced by the
+specified device.
+.IP \fBstatus
+.I device_name
+.br
+Outputs status information for each of the device's targets.
+.IP \fBtable
+.I device_name
+.br
+Outputs the current table for the device in a format that can be fed
+back in using the create or reload commands.
+.IP \fBwait
+.I device_name
+.br
+Sleeps until an event is triggered against a device.
+.IP \fBremove_all
+.br
+Attempts to remove all device definitions, i.e. reset the driver.
+Use with care!
+.IP \fBversion
+.br
+Outputs version information.
.SH TABLE FORMAT
Each line of the table specifies a single target and is of the form:
.br
logical_start_sector num_sectors target_type target_args
.br
.br
-At the moment there are 3 simple target types available - though your
-system might have more in the form of modules.
+There are currently three simple target types available together
+with more complex optional ones that implement snapshots and mirrors.
.IP \fBlinear
.I destination_device start_sector
--- /dev/null
+Apply all of these to the core kernel:
+ -mempool.patch - sct's backport
+ -mempool_slab.patch - a couple more functions
+ -vcalloc.patch - a calloc implementation (with overflow check)
+
+And this one too:
+ -b_bdev_private.patch - add a private b_private (avoids ext3 conflict)
+
+These patches provide the core driver and implement basic mapping functions:
+ -config.patch - add device-mapper option (tagged experimental)
+ -devmapper_1_core.patch - the core driver
+ -devmapper_2_ioctl.patch - ioctl interface to driver
+ -devmapper_3_basic_mappings.patch - linear and striped mappings
+
+Optional asynchronous snapshot implementation:
+ -devmapper_4_snapshots.patch - snapshot implementation
+
+Optional mirror implementation (requires snapshots patch):
+ -devmapper_5_mirror.patch - mirror implementation (for pvmove)
+
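Assuming the patch files sit in patches/common and are applied from
the root of the kernel tree (the path and -p strip level here are
guesses - adjust them as needed), the order above can be scripted:

```shell
cd /usr/src/linux-2.4.19-rc1
for p in mempool mempool_slab vcalloc b_bdev_private config \
         devmapper_1_core devmapper_2_ioctl devmapper_3_basic_mappings; do
    patch -p1 < /path/to/device-mapper/patches/common/$p.patch
done
# optionally:
# patch -p1 < /path/to/device-mapper/patches/common/devmapper_4_snapshots.patch
# patch -p1 < /path/to/device-mapper/patches/common/devmapper_5_mirror.patch
```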
+++ /dev/null
-diff -ruN linux-2.4.18/drivers/md/Config.in linux/drivers/md/Config.in
---- linux-2.4.18/drivers/md/Config.in Fri Sep 14 22:22:18 2001
-+++ linux/drivers/md/Config.in Wed Jan 2 19:23:58 2002
-@@ -14,5 +14,6 @@
- dep_tristate ' Multipath I/O support' CONFIG_MD_MULTIPATH $CONFIG_BLK_DEV_MD
-
- dep_tristate ' Logical volume manager (LVM) support' CONFIG_BLK_DEV_LVM $CONFIG_MD
-+dep_tristate ' Device mapper support' CONFIG_BLK_DEV_DM $CONFIG_MD
-
- endmenu
+++ /dev/null
-diff -Nru a/include/linux/mempool.h b/include/linux/mempool.h
---- /dev/null Wed Dec 31 16:00:00 1969
-+++ b/include/linux/mempool.h Tue Apr 23 20:55:52 2002
-@@ -0,0 +1,41 @@
-+/*
-+ * memory buffer pool support
-+ */
-+#ifndef _LINUX_MEMPOOL_H
-+#define _LINUX_MEMPOOL_H
-+
-+#include <linux/list.h>
-+#include <linux/wait.h>
-+
-+struct mempool_s;
-+typedef struct mempool_s mempool_t;
-+
-+typedef void * (mempool_alloc_t)(int gfp_mask, void *pool_data);
-+typedef void (mempool_free_t)(void *element, void *pool_data);
-+
-+struct mempool_s {
-+ spinlock_t lock;
-+ int min_nr, curr_nr;
-+ struct list_head elements;
-+
-+ void *pool_data;
-+ mempool_alloc_t *alloc;
-+ mempool_free_t *free;
-+ wait_queue_head_t wait;
-+};
-+extern mempool_t * mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
-+ mempool_free_t *free_fn, void *pool_data);
-+extern void mempool_resize(mempool_t *pool, int new_min_nr, int gfp_mask);
-+extern void mempool_destroy(mempool_t *pool);
-+extern void * mempool_alloc(mempool_t *pool, int gfp_mask);
-+extern void mempool_free(void *element, mempool_t *pool);
-+
-+
-+/*
-+ * A mempool_alloc_t and mempool_free_t that get the memory from
-+ * a slab that is passed in through pool_data.
-+ */
-+void *mempool_alloc_slab(int gfp_mask, void *pool_data);
-+void mempool_free_slab(void *element, void *pool_data);
-+
-+#endif /* _LINUX_MEMPOOL_H */
-diff -Nru a/mm/Makefile b/mm/Makefile
---- a/mm/Makefile Mon Mar 25 14:40:15 2002
-+++ b/mm/Makefile Mon Mar 25 14:40:15 2002
-@@ -9,12 +9,12 @@
-
- O_TARGET := mm.o
-
--export-objs := shmem.o filemap.o
-+export-objs := shmem.o filemap.o memory.o page_alloc.o mempool.o
-
- obj-y := memory.o mmap.o filemap.o mprotect.o mlock.o mremap.o \
- vmalloc.o slab.o bootmem.o swap.o vmscan.o page_io.o \
- page_alloc.o swap_state.o swapfile.o numa.o oom_kill.o \
-- shmem.o
-+ shmem.o mempool.o
-
- obj-$(CONFIG_HIGHMEM) += highmem.o
-
-diff -Nru a/mm/mempool.c b/mm/mempool.c
---- /dev/null Wed Dec 31 16:00:00 1969
-+++ b/mm/mempool.c Tue Apr 23 20:55:52 2002
-@@ -0,0 +1,295 @@
-+/*
-+ * linux/mm/mempool.c
-+ *
-+ * memory buffer pool support. Such pools are mostly used
-+ * for guaranteed, deadlock-free memory allocations during
-+ * extreme VM load.
-+ *
-+ * started by Ingo Molnar, Copyright (C) 2001
-+ */
-+
-+#include <linux/mm.h>
-+#include <linux/slab.h>
-+#include <linux/module.h>
-+#include <linux/mempool.h>
-+#include <linux/compiler.h>
-+
-+/**
-+ * mempool_create - create a memory pool
-+ * @min_nr: the minimum number of elements guaranteed to be
-+ * allocated for this pool.
-+ * @alloc_fn: user-defined element-allocation function.
-+ * @free_fn: user-defined element-freeing function.
-+ * @pool_data: optional private data available to the user-defined functions.
-+ *
-+ * this function creates and allocates a guaranteed size, preallocated
-+ * memory pool. The pool can be used from the mempool_alloc and mempool_free
-+ * functions. This function might sleep. Both the alloc_fn() and the free_fn()
-+ * functions might sleep - as long as the mempool_alloc function is not called
-+ * from IRQ contexts. The element allocated by alloc_fn() must be able to
-+ * hold a struct list_head. (8 bytes on x86.)
-+ */
-+mempool_t * mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
-+ mempool_free_t *free_fn, void *pool_data)
-+{
-+ mempool_t *pool;
-+ int i;
-+
-+ pool = kmalloc(sizeof(*pool), GFP_KERNEL);
-+ if (!pool)
-+ return NULL;
-+ memset(pool, 0, sizeof(*pool));
-+
-+ spin_lock_init(&pool->lock);
-+ pool->min_nr = min_nr;
-+ pool->pool_data = pool_data;
-+ INIT_LIST_HEAD(&pool->elements);
-+ init_waitqueue_head(&pool->wait);
-+ pool->alloc = alloc_fn;
-+ pool->free = free_fn;
-+
-+ /*
-+ * First pre-allocate the guaranteed number of buffers.
-+ */
-+ for (i = 0; i < min_nr; i++) {
-+ void *element;
-+ struct list_head *tmp;
-+ element = pool->alloc(GFP_KERNEL, pool->pool_data);
-+
-+ if (unlikely(!element)) {
-+ /*
-+ * Not enough memory - free the allocated ones
-+ * and return:
-+ */
-+ list_for_each(tmp, &pool->elements) {
-+ element = tmp;
-+ pool->free(element, pool->pool_data);
-+ }
-+ kfree(pool);
-+
-+ return NULL;
-+ }
-+ tmp = element;
-+ list_add(tmp, &pool->elements);
-+ pool->curr_nr++;
-+ }
-+ return pool;
-+}
-+
-+/**
-+ * mempool_resize - resize an existing memory pool
-+ * @pool: pointer to the memory pool which was allocated via
-+ * mempool_create().
-+ * @new_min_nr: the new minimum number of elements guaranteed to be
-+ * allocated for this pool.
-+ * @gfp_mask: the usual allocation bitmask.
-+ *
-+ * This function shrinks/grows the pool. In the case of growing,
-+ * it cannot be guaranteed that the pool will be grown to the new
-+ * size immediately, but new mempool_free() calls will refill it.
-+ *
-+ * Note, the caller must guarantee that no mempool_destroy is called
-+ * while this function is running. mempool_alloc() & mempool_free()
-+ * might be called (eg. from IRQ contexts) while this function executes.
-+ */
-+void mempool_resize(mempool_t *pool, int new_min_nr, int gfp_mask)
-+{
-+ int delta;
-+ void *element;
-+ unsigned long flags;
-+ struct list_head *tmp;
-+
-+ if (new_min_nr <= 0)
-+ BUG();
-+
-+ spin_lock_irqsave(&pool->lock, flags);
-+ if (new_min_nr < pool->min_nr) {
-+ pool->min_nr = new_min_nr;
-+ /*
-+ * Free possible excess elements.
-+ */
-+ while (pool->curr_nr > pool->min_nr) {
-+ tmp = pool->elements.next;
-+ if (tmp == &pool->elements)
-+ BUG();
-+ list_del(tmp);
-+ element = tmp;
-+ pool->curr_nr--;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+
-+ pool->free(element, pool->pool_data);
-+
-+ spin_lock_irqsave(&pool->lock, flags);
-+ }
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+ return;
-+ }
-+ delta = new_min_nr - pool->min_nr;
-+ pool->min_nr = new_min_nr;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+
-+ /*
-+ * We refill the pool up to the new treshold - but we dont
-+ * (cannot) guarantee that the refill succeeds.
-+ */
-+ while (delta) {
-+ element = pool->alloc(gfp_mask, pool->pool_data);
-+ if (!element)
-+ break;
-+ mempool_free(element, pool);
-+ delta--;
-+ }
-+}
-+
-+/**
-+ * mempool_destroy - deallocate a memory pool
-+ * @pool: pointer to the memory pool which was allocated via
-+ * mempool_create().
-+ *
-+ * this function only sleeps if the free_fn() function sleeps. The caller
-+ * has to guarantee that no mempool_alloc() nor mempool_free() happens in
-+ * this pool when calling this function.
-+ */
-+void mempool_destroy(mempool_t *pool)
-+{
-+ void *element;
-+ struct list_head *head, *tmp;
-+
-+ if (!pool)
-+ return;
-+
-+ head = &pool->elements;
-+ for (tmp = head->next; tmp != head; ) {
-+ element = tmp;
-+ tmp = tmp->next;
-+ pool->free(element, pool->pool_data);
-+ pool->curr_nr--;
-+ }
-+ if (pool->curr_nr)
-+ BUG();
-+ kfree(pool);
-+}
-+
-+/**
-+ * mempool_alloc - allocate an element from a specific memory pool
-+ * @pool: pointer to the memory pool which was allocated via
-+ * mempool_create().
-+ * @gfp_mask: the usual allocation bitmask.
-+ *
-+ * this function only sleeps if the alloc_fn function sleeps or
-+ * returns NULL. Note that due to preallocation, this function
-+ * *never* fails when called from process contexts. (it might
-+ * fail if called from an IRQ context.)
-+ */
-+void * mempool_alloc(mempool_t *pool, int gfp_mask)
-+{
-+ void *element;
-+ unsigned long flags;
-+ struct list_head *tmp;
-+ int curr_nr;
-+ DECLARE_WAITQUEUE(wait, current);
-+ int gfp_nowait = gfp_mask & ~(__GFP_WAIT | __GFP_IO);
-+
-+repeat_alloc:
-+ element = pool->alloc(gfp_nowait, pool->pool_data);
-+ if (likely(element != NULL))
-+ return element;
-+
-+ /*
-+ * If the pool is less than 50% full then try harder
-+ * to allocate an element:
-+ */
-+ if ((gfp_mask != gfp_nowait) && (pool->curr_nr <= pool->min_nr/2)) {
-+ element = pool->alloc(gfp_mask, pool->pool_data);
-+ if (likely(element != NULL))
-+ return element;
-+ }
-+
-+ /*
-+ * Kick the VM at this point.
-+ */
-+ wakeup_bdflush();
-+
-+ spin_lock_irqsave(&pool->lock, flags);
-+ if (likely(pool->curr_nr)) {
-+ tmp = pool->elements.next;
-+ list_del(tmp);
-+ element = tmp;
-+ pool->curr_nr--;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+ return element;
-+ }
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+
-+ /* We must not sleep in the GFP_ATOMIC case */
-+ if (gfp_mask == gfp_nowait)
-+ return NULL;
-+
-+ run_task_queue(&tq_disk);
-+
-+ add_wait_queue_exclusive(&pool->wait, &wait);
-+ set_task_state(current, TASK_UNINTERRUPTIBLE);
-+
-+ spin_lock_irqsave(&pool->lock, flags);
-+ curr_nr = pool->curr_nr;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+
-+ if (!curr_nr)
-+ schedule();
-+
-+ current->state = TASK_RUNNING;
-+ remove_wait_queue(&pool->wait, &wait);
-+
-+ goto repeat_alloc;
-+}
-+
-+/**
-+ * mempool_free - return an element to the pool.
-+ * @element: pool element pointer.
-+ * @pool: pointer to the memory pool which was allocated via
-+ * mempool_create().
-+ *
-+ * this function only sleeps if the free_fn() function sleeps.
-+ */
-+void mempool_free(void *element, mempool_t *pool)
-+{
-+ unsigned long flags;
-+
-+ if (pool->curr_nr < pool->min_nr) {
-+ spin_lock_irqsave(&pool->lock, flags);
-+ if (pool->curr_nr < pool->min_nr) {
-+ list_add(element, &pool->elements);
-+ pool->curr_nr++;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+ wake_up(&pool->wait);
-+ return;
-+ }
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+ }
-+ pool->free(element, pool->pool_data);
-+}
-+
-+/*
-+ * A commonly used alloc and free fn.
-+ */
-+void *mempool_alloc_slab(int gfp_mask, void *pool_data)
-+{
-+ kmem_cache_t *mem = (kmem_cache_t *) pool_data;
-+ return kmem_cache_alloc(mem, gfp_mask);
-+}
-+
-+void mempool_free_slab(void *element, void *pool_data)
-+{
-+ kmem_cache_t *mem = (kmem_cache_t *) pool_data;
-+ kmem_cache_free(mem, element);
-+}
-+
-+
-+EXPORT_SYMBOL(mempool_create);
-+EXPORT_SYMBOL(mempool_resize);
-+EXPORT_SYMBOL(mempool_destroy);
-+EXPORT_SYMBOL(mempool_alloc);
-+EXPORT_SYMBOL(mempool_free);
-+EXPORT_SYMBOL(mempool_alloc_slab);
-+EXPORT_SYMBOL(mempool_free_slab);
-+
+++ /dev/null
-diff -ruN linux-2.4.18/drivers/md/Config.in linux/drivers/md/Config.in
---- linux-2.4.18/drivers/md/Config.in Fri Sep 14 22:22:18 2001
-+++ linux/drivers/md/Config.in Wed Jan 2 19:23:58 2002
-@@ -14,5 +14,6 @@
- dep_tristate ' Multipath I/O support' CONFIG_MD_MULTIPATH $CONFIG_BLK_DEV_MD
-
- dep_tristate ' Logical volume manager (LVM) support' CONFIG_BLK_DEV_LVM $CONFIG_MD
-+dep_tristate ' Device mapper support' CONFIG_BLK_DEV_DM $CONFIG_MD
-
- endmenu
+++ /dev/null
-diff -Nru a/include/linux/mempool.h b/include/linux/mempool.h
---- /dev/null Wed Dec 31 16:00:00 1969
-+++ b/include/linux/mempool.h Tue Apr 23 20:55:52 2002
-@@ -0,0 +1,41 @@
-+/*
-+ * memory buffer pool support
-+ */
-+#ifndef _LINUX_MEMPOOL_H
-+#define _LINUX_MEMPOOL_H
-+
-+#include <linux/list.h>
-+#include <linux/wait.h>
-+
-+struct mempool_s;
-+typedef struct mempool_s mempool_t;
-+
-+typedef void * (mempool_alloc_t)(int gfp_mask, void *pool_data);
-+typedef void (mempool_free_t)(void *element, void *pool_data);
-+
-+struct mempool_s {
-+ spinlock_t lock;
-+ int min_nr, curr_nr;
-+ struct list_head elements;
-+
-+ void *pool_data;
-+ mempool_alloc_t *alloc;
-+ mempool_free_t *free;
-+ wait_queue_head_t wait;
-+};
-+extern mempool_t * mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
-+ mempool_free_t *free_fn, void *pool_data);
-+extern void mempool_resize(mempool_t *pool, int new_min_nr, int gfp_mask);
-+extern void mempool_destroy(mempool_t *pool);
-+extern void * mempool_alloc(mempool_t *pool, int gfp_mask);
-+extern void mempool_free(void *element, mempool_t *pool);
-+
-+
-+/*
-+ * A mempool_alloc_t and mempool_free_t that get the memory from
-+ * a slab that is passed in through pool_data.
-+ */
-+void *mempool_alloc_slab(int gfp_mask, void *pool_data);
-+void mempool_free_slab(void *element, void *pool_data);
-+
-+#endif /* _LINUX_MEMPOOL_H */
-diff -Nru a/mm/Makefile b/mm/Makefile
---- a/mm/Makefile Mon Mar 25 14:40:15 2002
-+++ b/mm/Makefile Mon Mar 25 14:40:15 2002
-@@ -9,12 +9,12 @@
-
- O_TARGET := mm.o
-
--export-objs := shmem.o filemap.o memory.o page_alloc.o
-+export-objs := shmem.o filemap.o memory.o page_alloc.o mempool.o
-
- obj-y := memory.o mmap.o filemap.o mprotect.o mlock.o mremap.o \
- vmalloc.o slab.o bootmem.o swap.o vmscan.o page_io.o \
- page_alloc.o swap_state.o swapfile.o numa.o oom_kill.o \
-- shmem.o
-+ shmem.o mempool.o
-
- obj-$(CONFIG_HIGHMEM) += highmem.o
-
-diff -Nru a/mm/mempool.c b/mm/mempool.c
---- /dev/null Wed Dec 31 16:00:00 1969
-+++ b/mm/mempool.c Tue Apr 23 20:55:52 2002
-@@ -0,0 +1,295 @@
-+/*
-+ * linux/mm/mempool.c
-+ *
-+ * memory buffer pool support. Such pools are mostly used
-+ * for guaranteed, deadlock-free memory allocations during
-+ * extreme VM load.
-+ *
-+ * started by Ingo Molnar, Copyright (C) 2001
-+ */
-+
-+#include <linux/mm.h>
-+#include <linux/slab.h>
-+#include <linux/module.h>
-+#include <linux/mempool.h>
-+#include <linux/compiler.h>
-+
-+/**
-+ * mempool_create - create a memory pool
-+ * @min_nr: the minimum number of elements guaranteed to be
-+ * allocated for this pool.
-+ * @alloc_fn: user-defined element-allocation function.
-+ * @free_fn: user-defined element-freeing function.
-+ * @pool_data: optional private data available to the user-defined functions.
-+ *
-+ * this function creates and allocates a guaranteed size, preallocated
-+ * memory pool. The pool can be used from the mempool_alloc and mempool_free
-+ * functions. This function might sleep. Both the alloc_fn() and the free_fn()
-+ * functions might sleep - as long as the mempool_alloc function is not called
-+ * from IRQ contexts. The element allocated by alloc_fn() must be able to
-+ * hold a struct list_head. (8 bytes on x86.)
-+ */
-+mempool_t * mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
-+ mempool_free_t *free_fn, void *pool_data)
-+{
-+ mempool_t *pool;
-+ int i;
-+
-+ pool = kmalloc(sizeof(*pool), GFP_KERNEL);
-+ if (!pool)
-+ return NULL;
-+ memset(pool, 0, sizeof(*pool));
-+
-+ spin_lock_init(&pool->lock);
-+ pool->min_nr = min_nr;
-+ pool->pool_data = pool_data;
-+ INIT_LIST_HEAD(&pool->elements);
-+ init_waitqueue_head(&pool->wait);
-+ pool->alloc = alloc_fn;
-+ pool->free = free_fn;
-+
-+ /*
-+ * First pre-allocate the guaranteed number of buffers.
-+ */
-+ for (i = 0; i < min_nr; i++) {
-+ void *element;
-+ struct list_head *tmp;
-+ element = pool->alloc(GFP_KERNEL, pool->pool_data);
-+
-+ if (unlikely(!element)) {
-+ /*
-+ * Not enough memory - free the allocated ones
-+ * and return:
-+ */
-+ list_for_each(tmp, &pool->elements) {
-+ element = tmp;
-+ pool->free(element, pool->pool_data);
-+ }
-+ kfree(pool);
-+
-+ return NULL;
-+ }
-+ tmp = element;
-+ list_add(tmp, &pool->elements);
-+ pool->curr_nr++;
-+ }
-+ return pool;
-+}
-+
-+/**
-+ * mempool_resize - resize an existing memory pool
-+ * @pool: pointer to the memory pool which was allocated via
-+ * mempool_create().
-+ * @new_min_nr: the new minimum number of elements guaranteed to be
-+ * allocated for this pool.
-+ * @gfp_mask: the usual allocation bitmask.
-+ *
-+ * This function shrinks/grows the pool. In the case of growing,
-+ * it cannot be guaranteed that the pool will be grown to the new
-+ * size immediately, but new mempool_free() calls will refill it.
-+ *
-+ * Note, the caller must guarantee that no mempool_destroy is called
-+ * while this function is running. mempool_alloc() & mempool_free()
-+ * might be called (eg. from IRQ contexts) while this function executes.
-+ */
-+void mempool_resize(mempool_t *pool, int new_min_nr, int gfp_mask)
-+{
-+ int delta;
-+ void *element;
-+ unsigned long flags;
-+ struct list_head *tmp;
-+
-+ if (new_min_nr <= 0)
-+ BUG();
-+
-+ spin_lock_irqsave(&pool->lock, flags);
-+ if (new_min_nr < pool->min_nr) {
-+ pool->min_nr = new_min_nr;
-+ /*
-+ * Free possible excess elements.
-+ */
-+ while (pool->curr_nr > pool->min_nr) {
-+ tmp = pool->elements.next;
-+ if (tmp == &pool->elements)
-+ BUG();
-+ list_del(tmp);
-+ element = tmp;
-+ pool->curr_nr--;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+
-+ pool->free(element, pool->pool_data);
-+
-+ spin_lock_irqsave(&pool->lock, flags);
-+ }
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+ return;
-+ }
-+ delta = new_min_nr - pool->min_nr;
-+ pool->min_nr = new_min_nr;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+
-+ /*
-+ * We refill the pool up to the new treshold - but we dont
-+ * (cannot) guarantee that the refill succeeds.
-+ */
-+ while (delta) {
-+ element = pool->alloc(gfp_mask, pool->pool_data);
-+ if (!element)
-+ break;
-+ mempool_free(element, pool);
-+ delta--;
-+ }
-+}
-+
-+/**
-+ * mempool_destroy - deallocate a memory pool
-+ * @pool: pointer to the memory pool which was allocated via
-+ * mempool_create().
-+ *
-+ * this function only sleeps if the free_fn() function sleeps. The caller
-+ * has to guarantee that no mempool_alloc() nor mempool_free() happens in
-+ * this pool when calling this function.
-+ */
-+void mempool_destroy(mempool_t *pool)
-+{
-+ void *element;
-+ struct list_head *head, *tmp;
-+
-+ if (!pool)
-+ return;
-+
-+ head = &pool->elements;
-+ for (tmp = head->next; tmp != head; ) {
-+ element = tmp;
-+ tmp = tmp->next;
-+ pool->free(element, pool->pool_data);
-+ pool->curr_nr--;
-+ }
-+ if (pool->curr_nr)
-+ BUG();
-+ kfree(pool);
-+}
-+
-+/**
-+ * mempool_alloc - allocate an element from a specific memory pool
-+ * @pool: pointer to the memory pool which was allocated via
-+ * mempool_create().
-+ * @gfp_mask: the usual allocation bitmask.
-+ *
-+ * this function only sleeps if the alloc_fn function sleeps or
-+ * returns NULL. Note that due to preallocation, this function
-+ * *never* fails when called from process contexts. (it might
-+ * fail if called from an IRQ context.)
-+ */
-+void * mempool_alloc(mempool_t *pool, int gfp_mask)
-+{
-+ void *element;
-+ unsigned long flags;
-+ struct list_head *tmp;
-+ int curr_nr;
-+ DECLARE_WAITQUEUE(wait, current);
-+ int gfp_nowait = gfp_mask & ~(__GFP_WAIT | __GFP_IO);
-+
-+repeat_alloc:
-+ element = pool->alloc(gfp_nowait, pool->pool_data);
-+ if (likely(element != NULL))
-+ return element;
-+
-+ /*
-+ * If the pool is less than 50% full then try harder
-+ * to allocate an element:
-+ */
-+ if ((gfp_mask != gfp_nowait) && (pool->curr_nr <= pool->min_nr/2)) {
-+ element = pool->alloc(gfp_mask, pool->pool_data);
-+ if (likely(element != NULL))
-+ return element;
-+ }
-+
-+ /*
-+ * Kick the VM at this point.
-+ */
-+ wakeup_bdflush();
-+
-+ spin_lock_irqsave(&pool->lock, flags);
-+ if (likely(pool->curr_nr)) {
-+ tmp = pool->elements.next;
-+ list_del(tmp);
-+ element = tmp;
-+ pool->curr_nr--;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+ return element;
-+ }
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+
-+ /* We must not sleep in the GFP_ATOMIC case */
-+ if (gfp_mask == gfp_nowait)
-+ return NULL;
-+
-+ run_task_queue(&tq_disk);
-+
-+ add_wait_queue_exclusive(&pool->wait, &wait);
-+ set_task_state(current, TASK_UNINTERRUPTIBLE);
-+
-+ spin_lock_irqsave(&pool->lock, flags);
-+ curr_nr = pool->curr_nr;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+
-+ if (!curr_nr)
-+ schedule();
-+
-+ current->state = TASK_RUNNING;
-+ remove_wait_queue(&pool->wait, &wait);
-+
-+ goto repeat_alloc;
-+}
-+
-+/**
-+ * mempool_free - return an element to the pool.
-+ * @element: pool element pointer.
-+ * @pool: pointer to the memory pool which was allocated via
-+ * mempool_create().
-+ *
-+ * this function only sleeps if the free_fn() function sleeps.
-+ */
-+void mempool_free(void *element, mempool_t *pool)
-+{
-+ unsigned long flags;
-+
-+ if (pool->curr_nr < pool->min_nr) {
-+ spin_lock_irqsave(&pool->lock, flags);
-+ if (pool->curr_nr < pool->min_nr) {
-+ list_add(element, &pool->elements);
-+ pool->curr_nr++;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+ wake_up(&pool->wait);
-+ return;
-+ }
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+ }
-+ pool->free(element, pool->pool_data);
-+}
-+
-+/*
-+ * A commonly used alloc and free fn.
-+ */
-+void *mempool_alloc_slab(int gfp_mask, void *pool_data)
-+{
-+ kmem_cache_t *mem = (kmem_cache_t *) pool_data;
-+ return kmem_cache_alloc(mem, gfp_mask);
-+}
-+
-+void mempool_free_slab(void *element, void *pool_data)
-+{
-+ kmem_cache_t *mem = (kmem_cache_t *) pool_data;
-+ kmem_cache_free(mem, element);
-+}
-+
-+
-+EXPORT_SYMBOL(mempool_create);
-+EXPORT_SYMBOL(mempool_resize);
-+EXPORT_SYMBOL(mempool_destroy);
-+EXPORT_SYMBOL(mempool_alloc);
-+EXPORT_SYMBOL(mempool_free);
-+EXPORT_SYMBOL(mempool_alloc_slab);
-+EXPORT_SYMBOL(mempool_free_slab);
-+
+++ /dev/null
-diff -Nru a/mm/vmalloc.c b/mm/vmalloc.c
---- a/mm/vmalloc.c Wed Jun 12 12:04:44 2002
-+++ b/mm/vmalloc.c Thu Jun 13 13:13:44 2002
-@@ -321,3 +321,22 @@
- read_unlock(&vmlist_lock);
- return buf - buf_start;
- }
-+
-+void *vcalloc(unsigned long nmemb, unsigned long size)
-+{
-+ unsigned long len;
-+ void *mem;
-+
-+ /*
-+ * Check that we're not going to overflow.
-+ */
-+ if (nmemb > (ULONG_MAX / size))
-+ return NULL;
-+
-+ len = nmemb * size;
-+ mem = vmalloc(len);
-+ if (mem)
-+ memset(mem, 0, len);
-+
-+ return mem;
-+}
-diff -Nru a/include/linux/vmalloc.h b/include/linux/vmalloc.h
---- a/include/linux/vmalloc.h Wed Jun 12 12:35:58 2002
-+++ b/include/linux/vmalloc.h Thu Jun 13 13:13:39 2002
-@@ -25,6 +25,7 @@
- extern void vmfree_area_pages(unsigned long address, unsigned long size);
- extern int vmalloc_area_pages(unsigned long address, unsigned long size,
- int gfp_mask, pgprot_t prot);
-+extern void *vcalloc(unsigned long nmemb, unsigned long size);
-
- /*
- * Allocate any pages
+++ /dev/null
-diff -ruN linux-2.4.18/drivers/md/Config.in linux/drivers/md/Config.in
---- linux-2.4.18/drivers/md/Config.in Fri Sep 14 22:22:18 2001
-+++ linux/drivers/md/Config.in Wed Jan 2 19:23:58 2002
-@@ -14,5 +14,6 @@
- dep_tristate ' Multipath I/O support' CONFIG_MD_MULTIPATH $CONFIG_BLK_DEV_MD
-
- dep_tristate ' Logical volume manager (LVM) support' CONFIG_BLK_DEV_LVM $CONFIG_MD
-+dep_tristate ' Device mapper support' CONFIG_BLK_DEV_DM $CONFIG_MD
-
- endmenu
+++ /dev/null
-diff -Nru a/include/linux/mempool.h b/include/linux/mempool.h
---- /dev/null Wed Dec 31 16:00:00 1969
-+++ b/include/linux/mempool.h Tue Apr 23 20:55:52 2002
-@@ -0,0 +1,41 @@
-+/*
-+ * memory buffer pool support
-+ */
-+#ifndef _LINUX_MEMPOOL_H
-+#define _LINUX_MEMPOOL_H
-+
-+#include <linux/list.h>
-+#include <linux/wait.h>
-+
-+struct mempool_s;
-+typedef struct mempool_s mempool_t;
-+
-+typedef void * (mempool_alloc_t)(int gfp_mask, void *pool_data);
-+typedef void (mempool_free_t)(void *element, void *pool_data);
-+
-+struct mempool_s {
-+ spinlock_t lock;
-+ int min_nr, curr_nr;
-+ struct list_head elements;
-+
-+ void *pool_data;
-+ mempool_alloc_t *alloc;
-+ mempool_free_t *free;
-+ wait_queue_head_t wait;
-+};
-+extern mempool_t * mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
-+ mempool_free_t *free_fn, void *pool_data);
-+extern void mempool_resize(mempool_t *pool, int new_min_nr, int gfp_mask);
-+extern void mempool_destroy(mempool_t *pool);
-+extern void * mempool_alloc(mempool_t *pool, int gfp_mask);
-+extern void mempool_free(void *element, mempool_t *pool);
-+
-+
-+/*
-+ * A mempool_alloc_t and mempool_free_t that get the memory from
-+ * a slab that is passed in through pool_data.
-+ */
-+void *mempool_alloc_slab(int gfp_mask, void *pool_data);
-+void mempool_free_slab(void *element, void *pool_data);
-+
-+#endif /* _LINUX_MEMPOOL_H */
-diff -Nru a/mm/Makefile b/mm/Makefile
---- a/mm/Makefile Mon Mar 25 14:40:15 2002
-+++ b/mm/Makefile Mon Mar 25 14:40:15 2002
-@@ -9,12 +9,12 @@
-
- O_TARGET := mm.o
-
--export-objs := shmem.o filemap.o memory.o page_alloc.o
-+export-objs := shmem.o filemap.o memory.o page_alloc.o mempool.o
-
- obj-y := memory.o mmap.o filemap.o mprotect.o mlock.o mremap.o \
- vmalloc.o slab.o bootmem.o swap.o vmscan.o page_io.o \
- page_alloc.o swap_state.o swapfile.o numa.o oom_kill.o \
-- shmem.o
-+ shmem.o mempool.o
-
- obj-$(CONFIG_HIGHMEM) += highmem.o
-
-diff -Nru a/mm/mempool.c b/mm/mempool.c
---- /dev/null Wed Dec 31 16:00:00 1969
-+++ b/mm/mempool.c Tue Apr 23 20:55:52 2002
-@@ -0,0 +1,295 @@
-+/*
-+ * linux/mm/mempool.c
-+ *
-+ * memory buffer pool support. Such pools are mostly used
-+ * for guaranteed, deadlock-free memory allocations during
-+ * extreme VM load.
-+ *
-+ * started by Ingo Molnar, Copyright (C) 2001
-+ */
-+
-+#include <linux/mm.h>
-+#include <linux/slab.h>
-+#include <linux/module.h>
-+#include <linux/mempool.h>
-+#include <linux/compiler.h>
-+
-+/**
-+ * mempool_create - create a memory pool
-+ * @min_nr: the minimum number of elements guaranteed to be
-+ * allocated for this pool.
-+ * @alloc_fn: user-defined element-allocation function.
-+ * @free_fn: user-defined element-freeing function.
-+ * @pool_data: optional private data available to the user-defined functions.
-+ *
-+ * this function creates and allocates a guaranteed size, preallocated
-+ * memory pool. The pool can be used from the mempool_alloc and mempool_free
-+ * functions. This function might sleep. Both the alloc_fn() and the free_fn()
-+ * functions might sleep - as long as the mempool_alloc function is not called
-+ * from IRQ contexts. The element allocated by alloc_fn() must be able to
-+ * hold a struct list_head. (8 bytes on x86.)
-+ */
-+mempool_t * mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
-+ mempool_free_t *free_fn, void *pool_data)
-+{
-+ mempool_t *pool;
-+ int i;
-+
-+ pool = kmalloc(sizeof(*pool), GFP_KERNEL);
-+ if (!pool)
-+ return NULL;
-+ memset(pool, 0, sizeof(*pool));
-+
-+ spin_lock_init(&pool->lock);
-+ pool->min_nr = min_nr;
-+ pool->pool_data = pool_data;
-+ INIT_LIST_HEAD(&pool->elements);
-+ init_waitqueue_head(&pool->wait);
-+ pool->alloc = alloc_fn;
-+ pool->free = free_fn;
-+
-+ /*
-+ * First pre-allocate the guaranteed number of buffers.
-+ */
-+ for (i = 0; i < min_nr; i++) {
-+ void *element;
-+ struct list_head *tmp;
-+ element = pool->alloc(GFP_KERNEL, pool->pool_data);
-+
-+ if (unlikely(!element)) {
-+ /*
-+ * Not enough memory - free the allocated ones
-+ * and return:
-+ */
-+ list_for_each(tmp, &pool->elements) {
-+ element = tmp;
-+ pool->free(element, pool->pool_data);
-+ }
-+ kfree(pool);
-+
-+ return NULL;
-+ }
-+ tmp = element;
-+ list_add(tmp, &pool->elements);
-+ pool->curr_nr++;
-+ }
-+ return pool;
-+}
-+
-+/**
-+ * mempool_resize - resize an existing memory pool
-+ * @pool: pointer to the memory pool which was allocated via
-+ * mempool_create().
-+ * @new_min_nr: the new minimum number of elements guaranteed to be
-+ * allocated for this pool.
-+ * @gfp_mask: the usual allocation bitmask.
-+ *
-+ * This function shrinks/grows the pool. In the case of growing,
-+ * it cannot be guaranteed that the pool will be grown to the new
-+ * size immediately, but new mempool_free() calls will refill it.
-+ *
-+ * Note, the caller must guarantee that no mempool_destroy is called
-+ * while this function is running. mempool_alloc() & mempool_free()
-+ * might be called (eg. from IRQ contexts) while this function executes.
-+ */
-+void mempool_resize(mempool_t *pool, int new_min_nr, int gfp_mask)
-+{
-+ int delta;
-+ void *element;
-+ unsigned long flags;
-+ struct list_head *tmp;
-+
-+ if (new_min_nr <= 0)
-+ BUG();
-+
-+ spin_lock_irqsave(&pool->lock, flags);
-+ if (new_min_nr < pool->min_nr) {
-+ pool->min_nr = new_min_nr;
-+ /*
-+ * Free possible excess elements.
-+ */
-+ while (pool->curr_nr > pool->min_nr) {
-+ tmp = pool->elements.next;
-+ if (tmp == &pool->elements)
-+ BUG();
-+ list_del(tmp);
-+ element = tmp;
-+ pool->curr_nr--;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+
-+ pool->free(element, pool->pool_data);
-+
-+ spin_lock_irqsave(&pool->lock, flags);
-+ }
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+ return;
-+ }
-+ delta = new_min_nr - pool->min_nr;
-+ pool->min_nr = new_min_nr;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+
-+ /*
-+ * We refill the pool up to the new treshold - but we dont
-+ * (cannot) guarantee that the refill succeeds.
-+ */
-+ while (delta) {
-+ element = pool->alloc(gfp_mask, pool->pool_data);
-+ if (!element)
-+ break;
-+ mempool_free(element, pool);
-+ delta--;
-+ }
-+}
-+
-+/**
-+ * mempool_destroy - deallocate a memory pool
-+ * @pool: pointer to the memory pool which was allocated via
-+ * mempool_create().
-+ *
-+ * this function only sleeps if the free_fn() function sleeps. The caller
-+ * has to guarantee that no mempool_alloc() nor mempool_free() happens in
-+ * this pool when calling this function.
-+ */
-+void mempool_destroy(mempool_t *pool)
-+{
-+ void *element;
-+ struct list_head *head, *tmp;
-+
-+ if (!pool)
-+ return;
-+
-+ head = &pool->elements;
-+ for (tmp = head->next; tmp != head; ) {
-+ element = tmp;
-+ tmp = tmp->next;
-+ pool->free(element, pool->pool_data);
-+ pool->curr_nr--;
-+ }
-+ if (pool->curr_nr)
-+ BUG();
-+ kfree(pool);
-+}
-+
-+/**
-+ * mempool_alloc - allocate an element from a specific memory pool
-+ * @pool: pointer to the memory pool which was allocated via
-+ * mempool_create().
-+ * @gfp_mask: the usual allocation bitmask.
-+ *
-+ * this function only sleeps if the alloc_fn function sleeps or
-+ * returns NULL. Note that due to preallocation, this function
-+ * *never* fails when called from process contexts. (it might
-+ * fail if called from an IRQ context.)
-+ */
-+void * mempool_alloc(mempool_t *pool, int gfp_mask)
-+{
-+ void *element;
-+ unsigned long flags;
-+ struct list_head *tmp;
-+ int curr_nr;
-+ DECLARE_WAITQUEUE(wait, current);
-+ int gfp_nowait = gfp_mask & ~(__GFP_WAIT | __GFP_IO);
-+
-+repeat_alloc:
-+ element = pool->alloc(gfp_nowait, pool->pool_data);
-+ if (likely(element != NULL))
-+ return element;
-+
-+ /*
-+ * If the pool is less than 50% full then try harder
-+ * to allocate an element:
-+ */
-+ if ((gfp_mask != gfp_nowait) && (pool->curr_nr <= pool->min_nr/2)) {
-+ element = pool->alloc(gfp_mask, pool->pool_data);
-+ if (likely(element != NULL))
-+ return element;
-+ }
-+
-+ /*
-+ * Kick the VM at this point.
-+ */
-+ wakeup_bdflush();
-+
-+ spin_lock_irqsave(&pool->lock, flags);
-+ if (likely(pool->curr_nr)) {
-+ tmp = pool->elements.next;
-+ list_del(tmp);
-+ element = tmp;
-+ pool->curr_nr--;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+ return element;
-+ }
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+
-+ /* We must not sleep in the GFP_ATOMIC case */
-+ if (gfp_mask == gfp_nowait)
-+ return NULL;
-+
-+ run_task_queue(&tq_disk);
-+
-+ add_wait_queue_exclusive(&pool->wait, &wait);
-+ set_task_state(current, TASK_UNINTERRUPTIBLE);
-+
-+ spin_lock_irqsave(&pool->lock, flags);
-+ curr_nr = pool->curr_nr;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+
-+ if (!curr_nr)
-+ schedule();
-+
-+ current->state = TASK_RUNNING;
-+ remove_wait_queue(&pool->wait, &wait);
-+
-+ goto repeat_alloc;
-+}
-+
-+/**
-+ * mempool_free - return an element to the pool.
-+ * @element: pool element pointer.
-+ * @pool: pointer to the memory pool which was allocated via
-+ * mempool_create().
-+ *
-+ * this function only sleeps if the free_fn() function sleeps.
-+ */
-+void mempool_free(void *element, mempool_t *pool)
-+{
-+ unsigned long flags;
-+
-+ if (pool->curr_nr < pool->min_nr) {
-+ spin_lock_irqsave(&pool->lock, flags);
-+ if (pool->curr_nr < pool->min_nr) {
-+ list_add(element, &pool->elements);
-+ pool->curr_nr++;
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+ wake_up(&pool->wait);
-+ return;
-+ }
-+ spin_unlock_irqrestore(&pool->lock, flags);
-+ }
-+ pool->free(element, pool->pool_data);
-+}
-+
-+/*
-+ * A commonly used alloc and free fn.
-+ */
-+void *mempool_alloc_slab(int gfp_mask, void *pool_data)
-+{
-+ kmem_cache_t *mem = (kmem_cache_t *) pool_data;
-+ return kmem_cache_alloc(mem, gfp_mask);
-+}
-+
-+void mempool_free_slab(void *element, void *pool_data)
-+{
-+ kmem_cache_t *mem = (kmem_cache_t *) pool_data;
-+ kmem_cache_free(mem, element);
-+}
-+
-+
-+EXPORT_SYMBOL(mempool_create);
-+EXPORT_SYMBOL(mempool_resize);
-+EXPORT_SYMBOL(mempool_destroy);
-+EXPORT_SYMBOL(mempool_alloc);
-+EXPORT_SYMBOL(mempool_free);
-+EXPORT_SYMBOL(mempool_alloc_slab);
-+EXPORT_SYMBOL(mempool_free_slab);
-+
+
+#define DM_VERSION_MAJOR 1
+#define DM_VERSION_MINOR 0
-+#define DM_VERSION_PATCHLEVEL 0
-+#define DM_VERSION_EXTRA "-ioctl (2002-06-25)"
++#define DM_VERSION_PATCHLEVEL 1
++#define DM_VERSION_EXTRA "-ioctl (2002-06-26)"
+
+/* Status bits */
+#define DM_READONLY_FLAG 0x00000001
diff -ruN linux-2.4.19-rc1/drivers/md/dm-exception-store.c linux/drivers/md/dm-exception-store.c
--- linux-2.4.19-rc1/drivers/md/dm-exception-store.c Thu Jan 1 01:00:00 1970
+++ linux/drivers/md/dm-exception-store.c Tue Jun 25 22:31:08 2002
-@@ -0,0 +1,727 @@
+@@ -0,0 +1,698 @@
+/*
+ * dm-snapshot.c
+ *
+ return 0;
+}
+
-+#if LINUX_VERSION_CODE < KERNEL_VERSION ( 2, 4, 19)
-+/*
-+ * FIXME: Remove once 2.4.19 has been released.
-+ */
-+struct page *vmalloc_to_page(void *vmalloc_addr)
-+{
-+ unsigned long addr = (unsigned long) vmalloc_addr;
-+ struct page *page = NULL;
-+ pmd_t *pmd;
-+ pte_t *pte;
-+ pgd_t *pgd;
-+
-+ pgd = pgd_offset_k(addr);
-+ if (!pgd_none(*pgd)) {
-+ pmd = pmd_offset(pgd, addr);
-+ if (!pmd_none(*pmd)) {
-+ pte = pte_offset(pmd, addr);
-+ if (pte_present(*pte)) {
-+ page = pte_page(*pte);
-+ }
-+ }
-+ }
-+ return page;
-+}
-+#endif
-+
+static int allocate_iobuf(struct pstore *ps)
+{
+ size_t i, r = -ENOMEM, len, nr_pages;
+ if (alloc_kiovec(1, &ps->iobuf))
+ goto bad;
+
-+ if (alloc_kiobuf_bhs(ps->iobuf))
-+ goto bad;
-+
+ nr_pages = ps->chunk_size / (PAGE_SIZE / SECTOR_SIZE);
+ r = expand_kiobuf(ps->iobuf, nr_pages);
+ if (r)
+++ /dev/null
-diff -ruN linux-2.4.18/include/linux/fs.h linux/include/linux/fs.h
---- linux-2.4.18/include/linux/fs.h Tue Feb 19 15:24:57 2002
-+++ linux/include/linux/fs.h Thu Feb 21 12:34:42 2002
-@@ -258,7 +258,10 @@
- char * b_data; /* pointer to data block */
- struct page *b_page; /* the page this bh is mapped to */
- void (*b_end_io)(struct buffer_head *bh, int uptodate); /* I/O completion */
-- void *b_private; /* reserved for b_end_io */
-+ void *b_private; /* reserved for b_end_io, also used by ext3 */
-+ void *b_bdev_private; /* a hack to get around ext3 using b_private
-+ * after handing the buffer_head to the
-+ * block layer */
-
- unsigned long b_rsector; /* Real buffer location on disk */
- wait_queue_head_t b_wait;
+++ /dev/null
-diff -ruN linux-2.4.18/drivers/md/Makefile linux/drivers/md/Makefile
---- linux-2.4.18/drivers/md/Makefile Thu Dec 6 15:57:55 2001
-+++ linux/drivers/md/Makefile Wed Jan 2 19:25:16 2002
-@@ -4,9 +4,12 @@
-
- O_TARGET := mddev.o
-
--export-objs := md.o xor.o
-+export-objs := md.o xor.o dm-table.o dm-target.o kcopyd.o
- list-multi := lvm-mod.o
- lvm-mod-objs := lvm.o lvm-snap.o lvm-fs.o
-+dm-mod-objs := dm.o dm-table.o dm-target.o dm-ioctl.o \
-+ dm-linear.o dm-stripe.o dm-snapshot.o dm-exception-store.o \
-+ kcopyd.o
-
- # Note: link order is important. All raid personalities
- # and xor.o must come before md.o, as they each initialise
-@@ -20,8 +23,12 @@
- obj-$(CONFIG_MD_MULTIPATH) += multipath.o
- obj-$(CONFIG_BLK_DEV_MD) += md.o
- obj-$(CONFIG_BLK_DEV_LVM) += lvm-mod.o
-+obj-$(CONFIG_BLK_DEV_DM) += dm-mod.o
-
- include $(TOPDIR)/Rules.make
-
- lvm-mod.o: $(lvm-mod-objs)
- $(LD) -r -o $@ $(lvm-mod-objs)
-+
-+dm-mod.o: $(dm-mod-objs)
-+ $(LD) -r -o $@ $(dm-mod-objs)
+++ /dev/null
-diff -ruN linux-2.4.19-pre10/include/linux/fs.h linux/include/linux/fs.h
---- linux-2.4.19-pre10/include/linux/fs.h Tue Feb 19 15:24:57 2002
-+++ linux/include/linux/fs.h Thu Feb 21 12:34:42 2002
-@@ -260,7 +260,10 @@
- char * b_data; /* pointer to data block */
- struct page *b_page; /* the page this bh is mapped to */
- void (*b_end_io)(struct buffer_head *bh, int uptodate); /* I/O completion */
-- void *b_private; /* reserved for b_end_io */
-+ void *b_private; /* reserved for b_end_io, also used by ext3 */
-+ void *b_bdev_private; /* a hack to get around ext3 using b_private
-+ * after handing the buffer_head to the
-+ * block layer */
-
- unsigned long b_rsector; /* Real buffer location on disk */
- wait_queue_head_t b_wait;
+++ /dev/null
-diff -ruN linux-2.4.18/drivers/md/Makefile linux/drivers/md/Makefile
---- linux-2.4.18/drivers/md/Makefile Thu Dec 6 15:57:55 2001
-+++ linux/drivers/md/Makefile Wed Jan 2 19:25:16 2002
-@@ -4,9 +4,12 @@
-
- O_TARGET := mddev.o
-
--export-objs := md.o xor.o
-+export-objs := md.o xor.o dm-table.o dm-target.o kcopyd.o
- list-multi := lvm-mod.o
- lvm-mod-objs := lvm.o lvm-snap.o lvm-fs.o
-+dm-mod-objs := dm.o dm-table.o dm-target.o dm-ioctl.o \
-+ dm-linear.o dm-stripe.o dm-snapshot.o dm-exception-store.o \
-+ kcopyd.o dm-mirror.o
-
- # Note: link order is important. All raid personalities
- # and xor.o must come before md.o, as they each initialise
-@@ -20,8 +23,12 @@
- obj-$(CONFIG_MD_MULTIPATH) += multipath.o
- obj-$(CONFIG_BLK_DEV_MD) += md.o
- obj-$(CONFIG_BLK_DEV_LVM) += lvm-mod.o
-+obj-$(CONFIG_BLK_DEV_DM) += dm-mod.o
-
- include $(TOPDIR)/Rules.make
-
- lvm-mod.o: $(lvm-mod-objs)
- $(LD) -r -o $@ $(lvm-mod-objs)
-+
-+dm-mod.o: $(dm-mod-objs)
-+ $(LD) -r -o $@ $(dm-mod-objs)
+++ /dev/null
-diff -ruN linux-2.4.18/include/linux/fs.h linux/include/linux/fs.h
---- linux-2.4.18/include/linux/fs.h Tue Feb 19 15:24:57 2002
-+++ linux/include/linux/fs.h Thu Feb 21 12:34:42 2002
-@@ -258,7 +258,10 @@
- char * b_data; /* pointer to data block */
- struct page *b_page; /* the page this bh is mapped to */
- void (*b_end_io)(struct buffer_head *bh, int uptodate); /* I/O completion */
-- void *b_private; /* reserved for b_end_io */
-+ void *b_private; /* reserved for b_end_io, also used by ext3 */
-+ void *b_bdev_private; /* a hack to get around ext3 using b_private
-+ * after handing the buffer_head to the
-+ * block layer */
-
- unsigned long b_rsector; /* Real buffer location on disk */
- wait_queue_head_t b_wait;
+++ /dev/null
-diff -ruN linux-2.4.18/drivers/md/Makefile linux/drivers/md/Makefile
---- linux-2.4.18/drivers/md/Makefile Thu Dec 6 15:57:55 2001
-+++ linux/drivers/md/Makefile Wed Jan 2 19:25:16 2002
-@@ -4,9 +4,12 @@
-
- O_TARGET := mddev.o
-
--export-objs := md.o xor.o
-+export-objs := md.o xor.o dm-table.o dm-target.o kcopyd.o
- list-multi := lvm-mod.o
- lvm-mod-objs := lvm.o lvm-snap.o lvm-fs.o
-+dm-mod-objs := dm.o dm-table.o dm-target.o dm-ioctl.o \
-+ dm-linear.o dm-stripe.o dm-snapshot.o dm-exception-store.o \
-+ kcopyd.o
-
- # Note: link order is important. All raid personalities
- # and xor.o must come before md.o, as they each initialise
-@@ -20,8 +23,12 @@
- obj-$(CONFIG_MD_MULTIPATH) += multipath.o
- obj-$(CONFIG_BLK_DEV_MD) += md.o
- obj-$(CONFIG_BLK_DEV_LVM) += lvm-mod.o
-+obj-$(CONFIG_BLK_DEV_DM) += dm-mod.o
-
- include $(TOPDIR)/Rules.make
-
- lvm-mod.o: $(lvm-mod-objs)
- $(LD) -r -o $@ $(lvm-mod-objs)
-+
-+dm-mod.o: $(dm-mod-objs)
-+ $(LD) -r -o $@ $(dm-mod-objs)
diff -ruN linux-2.4.19-rc1/drivers/md/dm-exception-store.c linux/drivers/md/dm-exception-store.c
--- linux-2.4.19-rc1/drivers/md/dm-exception-store.c Thu Jan 1 01:00:00 1970
+++ linux/drivers/md/dm-exception-store.c Thu Jun 13 14:58:15 2002
-@@ -0,0 +1,727 @@
+@@ -0,0 +1,698 @@
+/*
+ * dm-snapshot.c
+ *
+ return 0;
+}
+
-+#if LINUX_VERSION_CODE < KERNEL_VERSION ( 2, 4, 19)
-+/*
-+ * FIXME: Remove once 2.4.19 has been released.
-+ */
-+struct page *vmalloc_to_page(void *vmalloc_addr)
-+{
-+ unsigned long addr = (unsigned long) vmalloc_addr;
-+ struct page *page = NULL;
-+ pmd_t *pmd;
-+ pte_t *pte;
-+ pgd_t *pgd;
-+
-+ pgd = pgd_offset_k(addr);
-+ if (!pgd_none(*pgd)) {
-+ pmd = pmd_offset(pgd, addr);
-+ if (!pmd_none(*pmd)) {
-+ pte = pte_offset(pmd, addr);
-+ if (pte_present(*pte)) {
-+ page = pte_page(*pte);
-+ }
-+ }
-+ }
-+ return page;
-+}
-+#endif
-+
+static int allocate_iobuf(struct pstore *ps)
+{
+ size_t i, r = -ENOMEM, len, nr_pages;
+ if (alloc_kiovec(1, &ps->iobuf))
+ goto bad;
+
-+ if (alloc_kiobuf_bhs(ps->iobuf))
-+ goto bad;
-+
+ nr_pages = ps->chunk_size / (PAGE_SIZE / SECTOR_SIZE);
+ r = expand_kiobuf(ps->iobuf, nr_pages);
+ if (r)
+
+#define DM_VERSION_MAJOR 1
+#define DM_VERSION_MINOR 0
-+#define DM_VERSION_PATCHLEVEL 0
-+#define DM_VERSION_EXTRA "-ioctl (2002-06-25)"
++#define DM_VERSION_PATCHLEVEL 1
++#define DM_VERSION_EXTRA "-ioctl (2002-06-26)"
+
+/* Status bits */
+#define DM_READONLY_FLAG 0x00000001
+++ /dev/null
-diff -ruN -X /home/thornber/packages/2.4/dontdiff linux/drivers/md/Config.in linux-dm/drivers/md/Config.in
---- linux/drivers/md/Config.in Fri Sep 14 22:22:18 2001
-+++ linux-dm/drivers/md/Config.in Wed Oct 31 18:08:58 2001
-@@ -14,5 +14,6 @@
- dep_tristate ' Multipath I/O support' CONFIG_MD_MULTIPATH $CONFIG_BLK_DEV_MD
-
- dep_tristate ' Logical volume manager (LVM) support' CONFIG_BLK_DEV_LVM $CONFIG_MD
-+dep_tristate ' Device mapper support' CONFIG_BLK_DEV_DM $CONFIG_MD
-
- endmenu
-diff -ruN -X /home/thornber/packages/2.4/dontdiff linux/drivers/md/Makefile linux-dm/drivers/md/Makefile
---- linux/drivers/md/Makefile Fri Sep 14 22:22:18 2001
-+++ linux-dm/drivers/md/Makefile Wed Oct 31 18:09:02 2001
-@@ -4,9 +4,10 @@
-
- O_TARGET := mddev.o
-
--export-objs := md.o xor.o
-+export-objs := md.o xor.o dm-table.o dm-target.o
- list-multi := lvm-mod.o
- lvm-mod-objs := lvm.o lvm-snap.o
-+dm-mod-objs := dm.o dm-table.o dm-target.o dm-ioctl.o dm-linear.o
-
- # Note: link order is important. All raid personalities
- # and xor.o must come before md.o, as they each initialise
-@@ -20,8 +21,12 @@
- obj-$(CONFIG_MD_MULTIPATH) += multipath.o
- obj-$(CONFIG_BLK_DEV_MD) += md.o
- obj-$(CONFIG_BLK_DEV_LVM) += lvm-mod.o
-+obj-$(CONFIG_BLK_DEV_DM) += dm-mod.o
-
- include $(TOPDIR)/Rules.make
-
- lvm-mod.o: $(lvm-mod-objs)
- $(LD) -r -o $@ $(lvm-mod-objs)
-+
-+dm-mod.o: $(dm-mod-objs)
-+ $(LD) -r -o $@ $(dm-mod-objs)
-diff -ruN -X /home/thornber/packages/2.4/dontdiff linux/drivers/md/dm-ioctl.c linux-dm/drivers/md/dm-ioctl.c
---- linux/drivers/md/dm-ioctl.c Thu Jan 1 01:00:00 1970
-+++ linux-dm/drivers/md/dm-ioctl.c Wed Oct 31 18:09:12 2001
-@@ -0,0 +1,292 @@
-+/*
-+ * Copyright (C) 2001 Sistina Software (UK) Limited.
-+ *
-+ * This file is released under the GPL.
-+ */
-+
-+#include <linux/fs.h>
-+#include <linux/dm-ioctl.h>
-+
-+#include "dm.h"
-+
-+static void free_params(struct dm_ioctl *p)
-+{
-+ vfree(p);
-+}
-+
-+static int copy_params(struct dm_ioctl *user, struct dm_ioctl **result)
-+{
-+ struct dm_ioctl tmp, *dmi;
-+
-+ if (copy_from_user(&tmp, user, sizeof(tmp)))
-+ return -EFAULT;
-+
-+ if (!(dmi = vmalloc(tmp.data_size)))
-+ return -ENOMEM;
-+
-+ if (copy_from_user(dmi, user, tmp.data_size))
-+ return -EFAULT;
-+
-+ *result = dmi;
-+ return 0;
-+}
-+
-+/*
-+ * check a string doesn't overrun the chunk of
-+ * memory we copied from userland.
-+ */
-+static int valid_str(char *str, void *end)
-+{
-+ while ((str != end) && *str)
-+ str++;
-+
-+ return *str ? 0 : 1;
-+}
-+
-+static int first_target(struct dm_ioctl *a, void *end,
-+ struct dm_target_spec **spec, char **params)
-+{
-+ *spec = (struct dm_target_spec *) ((unsigned char *) a) + a->data_size;
-+ *params = (char *) (*spec + 1);
-+
-+ return valid_str(*params, end);
-+}
-+
-+static int next_target(struct dm_target_spec *last, void *end,
-+ struct dm_target_spec **spec, char **params)
-+{
-+ *spec = (struct dm_target_spec *)
-+ (((unsigned char *) last) + last->next);
-+ *params = (char *) (*spec + 1);
-+
-+ return valid_str(*params, end);
-+}
-+
-+void err_fn(const char *message, void *private)
-+{
-+ printk(KERN_ERR "%s", message);
-+}
-+
-+/*
-+ * Checks to see if there's a gap in the table.
-+ * Returns true iff there is a gap.
-+ */
-+static int gap(struct dm_table *table, struct dm_target_spec *spec)
-+{
-+ if (!table->num_targets)
-+ return (spec->sector_start > 0) ? 1 : 0;
-+
-+ if (spec->sector_start != table->highs[table->num_targets - 1] + 1)
-+ return 1;
-+
-+ return 0;
-+}
-+
-+static int populate_table(struct dm_table *table, struct dm_ioctl *args)
-+{
-+ int i = 0, r, first = 1;
-+ struct dm_target_spec *spec;
-+ char *params;
-+ struct target_type *ttype;
-+ void *context, *end;
-+ offset_t high = 0;
-+
-+ if (!args->target_count) {
-+ WARN("No targets specified");
-+ return -EINVAL;
-+ }
-+
-+ end = ((void *) args) + args->data_size;
-+
-+#define PARSE_ERROR(msg) {err_fn(msg, NULL); return -EINVAL;}
-+
-+ for (i = 0; i < args->target_count; i++) {
-+
-+ r = first ? first_target(args, end, &spec, ¶ms) :
-+ next_target(spec, end, &spec, ¶ms);
-+
-+ if (!r)
-+ PARSE_ERROR("unable to find target");
-+
-+ /* lookup the target type */
-+ if (!(ttype = dm_get_target_type(spec->target_type)))
-+ PARSE_ERROR("unable to find target type");
-+
-+ if (gap(table, spec))
-+ PARSE_ERROR("gap in target ranges");
-+
-+ /* build the target */
-+ if (ttype->ctr(table, spec->sector_start, spec->length, params,
-+ &context, err_fn, NULL))
-+ PARSE_ERROR("target constructor failed");
-+
-+ /* add the target to the table */
-+ high = spec->sector_start + (spec->length - 1);
-+ if (dm_table_add_target(table, high, ttype, context))
-+ PARSE_ERROR("internal error adding target to table");
-+
-+ first = 0;
-+ }
-+
-+#undef PARSE_ERROR
-+
-+ r = dm_table_complete(table);
-+ return r;
-+}
-+
-+static int create(struct dm_ioctl *param)
-+{
-+ int r;
-+ struct mapped_device *md;
-+ struct dm_table *t;
-+
-+ if ((r = dm_create(param->name, param->minor, &md)))
-+ return r;
-+
-+ if ((r = dm_table_create(&t))) {
-+ dm_destroy(md);
-+ return r;
-+ }
-+
-+ if ((r = populate_table(t, param))) {
-+ dm_destroy(md);
-+ dm_table_destroy(t);
-+ return r;
-+ }
-+
-+ if ((r = dm_activate(md, t))) {
-+ dm_destroy(md);
-+ dm_table_destroy(t);
-+ return r;
-+ }
-+
-+ return 0;
-+}
-+
-+static int remove(struct dm_ioctl *param)
-+{
-+ int r;
-+ struct mapped_device *md = dm_get(param->name);
-+
-+ if (!md)
-+ return -ENODEV;
-+
-+ if ((r = dm_deactivate(md)))
-+ return r;
-+
-+ if (md->map)
-+ dm_table_destroy(md->map);
-+
-+ if (!dm_destroy(md))
-+ WARN("dm_ctl_ioctl: unable to remove device");
-+
-+ return 0;
-+}
-+
-+static int suspend(struct dm_ioctl *param)
-+{
-+ return -EINVAL;
-+}
-+
-+static int reload(struct dm_ioctl *param)
-+{
-+ return -EINVAL;
-+}
-+
-+static int info(struct dm_ioctl *param)
-+{
-+ return -EINVAL;
-+}
-+
-+static int ctl_open(struct inode *inode, struct file *file)
-+{
-+ /* only root can open this */
-+ if (!capable(CAP_SYS_ADMIN))
-+ return -EACCES;
-+
-+ MOD_INC_USE_COUNT;
-+ return 0;
-+}
-+
-+static int ctl_close(struct inode *inode, struct file *file)
-+{
-+ MOD_DEC_USE_COUNT;
-+ return 0;
-+}
-+
-+
-+static int ctl_ioctl(struct inode *inode, struct file *file,
-+ uint command, ulong a)
-+{
-+ int r = -EINVAL;
-+ struct dm_ioctl *p;
-+
-+ if ((r = copy_params((struct dm_ioctl *) a, &p)))
-+ return r;
-+
-+ switch (command) {
-+ case DM_CREATE:
-+ r = create(p);
-+ break;
-+
-+ case DM_REMOVE:
-+ r = remove(p);
-+ break;
-+
-+ case DM_SUSPEND:
-+ r = suspend(p);
-+ break;
-+
-+ case DM_RELOAD:
-+ r = reload(p);
-+ break;
-+
-+ case DM_INFO:
-+ r = info(p);
-+
-+ default:
-+ WARN("dm_ctl_ioctl: unknown command 0x%x\n", command);
-+ }
-+
-+ free_params(p);
-+ return r;
-+}
-+
-+
-+static struct file_operations _ctl_fops = {
-+ open: ctl_open,
-+ release: ctl_close,
-+ ioctl: ctl_ioctl,
-+};
-+
-+static int dm_ioctl_init(void)
-+{
-+ int r;
-+
-+ if ((r = devfs_register_chrdev(DM_CHAR_MAJOR, "device-mapper",
-+ &_ctl_fops)) < 0) {
-+ WARN("devfs_register_chrdev failed for dm control dev");
-+ return -EIO;
-+ }
-+
-+ return r;
-+}
-+
-+static void dm_ioctl_exit(void)
-+{
-+ if (devfs_unregister_chrdev(DM_CHAR_MAJOR, "device-mapper") < 0)
-+ WARN("devfs_unregister_chrdev failed for dm control device");
-+}
-+
-+/*
-+ * module hooks
-+ */
-+module_init(dm_ioctl_init);
-+module_exit(dm_ioctl_exit);
-+
-+MODULE_DESCRIPTION("device-mapper ioctl interface");
-+MODULE_AUTHOR("Joe Thornber <thornber@sistina.com>");
-+
-+#ifdef MODULE_LICENSE
-+MODULE_LICENSE("GPL");
-+#endif
-diff -ruN -X /home/thornber/packages/2.4/dontdiff linux/drivers/md/dm-linear.c linux-dm/drivers/md/dm-linear.c
---- linux/drivers/md/dm-linear.c Thu Jan 1 01:00:00 1970
-+++ linux-dm/drivers/md/dm-linear.c Wed Oct 31 18:09:12 2001
-@@ -0,0 +1,113 @@
-+/*
-+ * Copyright (C) 2001 Sistina Software (UK) Limited.
-+ *
-+ * This file is released under the GPL.
-+ */
-+
-+#include <linux/config.h>
-+#include <linux/module.h>
-+#include <linux/init.h>
-+#include <linux/slab.h>
-+#include <linux/fs.h>
-+#include <linux/blkdev.h>
-+#include <linux/device-mapper.h>
-+
-+#include "dm.h"
-+
-+/*
-+ * linear: maps a linear range of a device.
-+ */
-+struct linear_c {
-+ long delta; /* FIXME: we need a signed offset type */
-+ struct dm_dev *dev;
-+};
-+
-+/*
-+ * construct a linear mapping.
-+ * <dev_path> <offset>
-+ */
-+static int linear_ctr(struct dm_table *t, offset_t b, offset_t l,
-+ const char *args, void **context,
-+ dm_error_fn err, void *e_private)
-+{
-+ struct linear_c *lc;
-+ unsigned int start;
-+ char path[256]; /* FIXME: magic */
-+ int r = -EINVAL;
-+
-+ if (!(lc = kmalloc(sizeof(lc), GFP_KERNEL))) {
-+ err("couldn't allocate memory for linear context", e_private);
-+ return -ENOMEM;
-+ }
-+
-+ if (sscanf("%s %u", path, &start) != 2) {
-+ err("target params should be of the form <dev_path> <sector>",
-+ e_private);
-+ goto bad;
-+ }
-+
-+ if ((r = dm_table_get_device(t, path, &lc->dev))) {
-+ err("couldn't lookup device", e_private);
-+ r = -ENXIO;
-+ goto bad;
-+ }
-+
-+ lc->delta = (int) start - (int) b;
-+ *context = lc;
-+ return 0;
-+
-+ bad:
-+ kfree(lc);
-+ return r;
-+}
-+
-+static void linear_dtr(struct dm_table *t, void *c)
-+{
-+ struct linear_c *lc = (struct linear_c *) c;
-+ dm_table_put_device(t, lc->dev);
-+ kfree(c);
-+}
-+
-+static int linear_map(struct buffer_head *bh, int rw, void *context)
-+{
-+ struct linear_c *lc = (struct linear_c *) context;
-+
-+ bh->b_rdev = lc->dev->dev;
-+ bh->b_rsector = bh->b_rsector + lc->delta;
-+ return 1;
-+}
-+
-+static struct target_type linear_target = {
-+ name: "linear",
-+ module: THIS_MODULE,
-+ ctr: linear_ctr,
-+ dtr: linear_dtr,
-+ map: linear_map,
-+};
-+
-+static int __init linear_init(void)
-+{
-+ int r;
-+
-+ if ((r = dm_register_target(&linear_target)) < 0)
-+ printk(KERN_ERR "Device mapper: Linear: register failed\n");
-+
-+ return r;
-+}
-+
-+static void __exit linear_exit(void)
-+{
-+ if (dm_unregister_target(&linear_target) < 0)
-+ printk(KERN_ERR "Device mapper: Linear: unregister failed\n");
-+}
-+
-+module_init(linear_init);
-+module_exit(linear_exit);
-+
-+MODULE_AUTHOR("Joe Thornber <thornber@sistina.com>");
-+MODULE_DESCRIPTION("Device Mapper: Linear mapping");
-+
-+#ifdef MODULE_LICENSE
-+MODULE_LICENSE("GPL");
-+#endif
-+
-diff -ruN -X /home/thornber/packages/2.4/dontdiff linux/drivers/md/dm-table.c linux-dm/drivers/md/dm-table.c
---- linux/drivers/md/dm-table.c Thu Jan 1 01:00:00 1970
-+++ linux-dm/drivers/md/dm-table.c Wed Oct 31 18:09:12 2001
-@@ -0,0 +1,333 @@
-+/*
-+ * Copyright (C) 2001 Sistina Software (UK) Limited.
-+ *
-+ * This file is released under the GPL.
-+ */
-+
-+#include "dm.h"
-+
-+/* ceiling(n / size) * size */
-+static inline ulong round_up(ulong n, ulong size)
-+{
-+ ulong r = n % size;
-+ return n + (r ? (size - r) : 0);
-+}
-+
-+/* ceiling(n / size) */
-+static inline ulong div_up(ulong n, ulong size)
-+{
-+ return round_up(n, size) / size;
-+}
-+
-+/* similar to ceiling(log_size(n)) */
-+static uint int_log(ulong n, ulong base)
-+{
-+ int result = 0;
-+
-+ while (n > 1) {
-+ n = div_up(n, base);
-+ result++;
-+ }
-+
-+ return result;
-+}
-+
-+/*
-+ * return the highest key that you could lookup
-+ * from the n'th node on level l of the btree.
-+ */
-+static offset_t high(struct dm_table *t, int l, int n)
-+{
-+ for (; l < t->depth - 1; l++)
-+ n = get_child(n, CHILDREN_PER_NODE - 1);
-+
-+ if (n >= t->counts[l])
-+ return (offset_t) -1;
-+
-+ return get_node(t, l, n)[KEYS_PER_NODE - 1];
-+}
-+
-+/*
-+ * fills in a level of the btree based on the
-+ * highs of the level below it.
-+ */
-+static int setup_btree_index(int l, struct dm_table *t)
-+{
-+ int n, k;
-+ offset_t *node;
-+
-+ for (n = 0; n < t->counts[l]; n++) {
-+ node = get_node(t, l, n);
-+
-+ for (k = 0; k < KEYS_PER_NODE; k++)
-+ node[k] = high(t, l + 1, get_child(n, k));
-+ }
-+
-+ return 0;
-+}
-+
-+/*
-+ * highs, and targets are managed as dynamic
-+ * arrays during a table load.
-+ */
-+static int alloc_targets(struct dm_table *t, int num)
-+{
-+ offset_t *n_highs;
-+ struct target *n_targets;
-+ int n = t->num_targets;
-+ int size = (sizeof(struct target) + sizeof(offset_t)) * num;
-+
-+ n_highs = vmalloc(size);
-+ if (!n_highs)
-+ return -ENOMEM;
-+
-+ n_targets = (struct target *) (n_highs + num);
-+
-+ if (n) {
-+ memcpy(n_highs, t->highs, sizeof(*n_highs) * n);
-+ memcpy(n_targets, t->targets, sizeof(*n_targets) * n);
-+ }
-+
-+ vfree(t->highs);
-+
-+ t->num_allocated = num;
-+ t->highs = n_highs;
-+ t->targets = n_targets;
-+
-+ return 0;
-+}
-+
-+int dm_table_create(struct dm_table **result)
-+{
-+ struct dm_table *t = kmalloc(sizeof(struct dm_table), GFP_NOIO);
-+
-+ if (!t)
-+ return -ENOMEM;
-+
-+ memset(t, 0, sizeof(*t));
-+ INIT_LIST_HEAD(&t->devices);
-+
-+ /* allocate a single nodes worth of targets to
-+ begin with */
-+ if (alloc_targets(t, KEYS_PER_NODE)) {
-+ kfree(t);
-+ t = 0;
-+ }
-+
-+ *result = t;
-+ return 0;
-+}
-+
-+static void free_devices(struct list_head *devices)
-+{
-+ struct list_head *tmp, *next;
-+
-+ for (tmp = devices->next; tmp != devices; tmp = next) {
-+ struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
-+ next = tmp->next;
-+ kfree(dd);
-+ }
-+}
-+
-+void dm_table_destroy(struct dm_table *t)
-+{
-+ int i;
-+
-+ /* free the indexes (see dm_table_complete) */
-+ if (t->depth >= 2)
-+ vfree(t->index[t->depth - 2]);
-+
-+
-+ /* free the targets */
-+ for (i = 0; i < t->num_targets; i++) {
-+ struct target *tgt = &t->targets[i];
-+ if (tgt->type->dtr)
-+ tgt->type->dtr(t, tgt->private);
-+ }
-+
-+ vfree(t->highs);
-+
-+ /* free the device list */
-+ if (t->devices.next != &t->devices) {
-+ WARN("there are still devices present, someone isn't "
-+ "calling dm_table_remove_device");
-+
-+ free_devices(&t->devices);
-+ }
-+
-+ kfree(t);
-+}
-+
-+/*
-+ * Checks to see if we need to extend
-+ * highs or targets.
-+ */
-+static inline int check_space(struct dm_table *t)
-+{
-+ if (t->num_targets >= t->num_allocated)
-+ return alloc_targets(t, t->num_allocated * 2);
-+
-+ return 0;
-+}
-+
-+
-+/*
-+ * convert a device path to a kdev_t.
-+ */
-+int lookup_device(const char *path, kdev_t *dev)
-+{
-+ int r;
-+ struct nameidata nd;
-+ struct inode *inode;
-+
-+ if (!path_init(path, LOOKUP_FOLLOW, &nd))
-+ return 0;
-+
-+ if ((r = path_walk(path, &nd)))
-+ goto bad;
-+
-+ inode = nd.dentry->d_inode;
-+ if (!inode) {
-+ r = -ENOENT;
-+ goto bad;
-+ }
-+
-+ if (!S_ISBLK(inode->i_mode)) {
-+ r = -EINVAL;
-+ goto bad;
-+ }
-+
-+ *dev = inode->i_rdev;
-+
-+ bad:
-+ path_release(&nd);
-+ return r;
-+}
-+
-+/*
-+ * see if we've already got a device in the list.
-+ */
-+static struct dm_dev *find_device(struct list_head *l, kdev_t dev)
-+{
-+ struct list_head *tmp;
-+
-+ for (tmp = l->next; tmp != l; tmp = tmp->next) {
-+
-+ struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
-+ if (dd->dev == dev)
-+ return dd;
-+ }
-+
-+ return NULL;
-+}
-+
-+/*
-+ * add a device to the list, or just increment the
-+ * usage count if it's already present.
-+ */
-+int dm_table_get_device(struct dm_table *t, const char *path,
-+ struct dm_dev **result)
-+{
-+ int r;
-+ kdev_t dev;
-+ struct dm_dev *dd;
-+
-+ /* convert the path to a device */
-+ if ((r = lookup_device(path, &dev)))
-+ return r;
-+
-+ dd = find_device(&t->devices, dev);
-+ if (!dd) {
-+ dd = kmalloc(sizeof(*dd), GFP_KERNEL);
-+ if (!dd)
-+ return -ENOMEM;
-+
-+ dd->dev = dev;
-+ dd->bd = 0;
-+ atomic_set(&dd->count, 0);
-+ list_add(&dd->list, &t->devices);
-+ }
-+ atomic_inc(&dd->count);
-+ *result = dd;
-+
-+ return 0;
-+}
-+/*
-+ * decrement a device's use count and remove it if
-+ * necessary.
-+ */
-+void dm_table_put_device(struct dm_table *t, struct dm_dev *dd)
-+{
-+ if (atomic_dec_and_test(&dd->count)) {
-+ list_del(&dd->list);
-+ kfree(dd);
-+ }
-+}
-+
-+/*
-+ * adds a target to the map
-+ */
-+int dm_table_add_target(struct dm_table *t, offset_t high,
-+ struct target_type *type, void *private)
-+{
-+ int r, n;
-+
-+ if ((r = check_space(t)))
-+ return r;
-+
-+ n = t->num_targets++;
-+ t->highs[n] = high;
-+ t->targets[n].type = type;
-+ t->targets[n].private = private;
-+
-+ return 0;
-+}
-+
-+
-+static int setup_indexes(struct dm_table *t)
-+{
-+ int i, total = 0;
-+ offset_t *indexes;
-+
-+ /* allocate the space for *all* the indexes */
-+ for (i = t->depth - 2; i >= 0; i--) {
-+ t->counts[i] = div_up(t->counts[i + 1], CHILDREN_PER_NODE);
-+ total += t->counts[i];
-+ }
-+
-+ if (!(indexes = vmalloc(NODE_SIZE * total)))
-+ return -ENOMEM;
-+
-+ /* set up internal nodes, bottom-up */
-+ for (i = t->depth - 2, total = 0; i >= 0; i--) {
-+ t->index[i] = indexes + (KEYS_PER_NODE * t->counts[i]);
-+ setup_btree_index(i, t);
-+ }
-+
-+ return 0;
-+}
-+
-+
-+/*
-+ * builds the btree to index the map
-+ */
-+int dm_table_complete(struct dm_table *t)
-+{
-+ int leaf_nodes, r = 0;
-+
-+ /* how many indexes will the btree have ? */
-+ leaf_nodes = div_up(t->num_targets, KEYS_PER_NODE);
-+ t->depth = 1 + int_log(leaf_nodes, CHILDREN_PER_NODE);
-+
-+ /* leaf layer has already been set up */
-+ t->counts[t->depth - 1] = leaf_nodes;
-+ t->index[t->depth - 1] = t->highs;
-+
-+ if (t->depth >= 2)
-+ r = setup_indexes(t);
-+
-+ return r;
-+}
-+
-+EXPORT_SYMBOL(dm_table_get_device);
-+EXPORT_SYMBOL(dm_table_put_device);
-diff -ruN -X /home/thornber/packages/2.4/dontdiff linux/drivers/md/dm-target.c linux-dm/drivers/md/dm-target.c
---- linux/drivers/md/dm-target.c Thu Jan 1 01:00:00 1970
-+++ linux-dm/drivers/md/dm-target.c Wed Oct 31 18:09:12 2001
-@@ -0,0 +1,175 @@
-+/*
-+ * Copyright (C) 2001 Sistina Software (UK) Limited
-+ *
-+ * This file is released under the GPL.
-+ */
-+
-+#include "dm.h"
-+#include <linux/kmod.h>
-+
-+struct tt_internal {
-+ struct target_type tt;
-+
-+ struct list_head list;
-+ long use;
-+};
-+
-+static LIST_HEAD(_targets);
-+static rwlock_t _lock = RW_LOCK_UNLOCKED;
-+
-+#define DM_MOD_NAME_SIZE 32
-+
-+static inline struct tt_internal *__find_target_type(const char *name)
-+{
-+ struct list_head *tmp;
-+ struct tt_internal *ti;
-+
-+ for(tmp = _targets.next; tmp != &_targets; tmp = tmp->next) {
-+
-+ ti = list_entry(tmp, struct tt_internal, list);
-+ if (!strcmp(name, ti->tt.name))
-+ return ti;
-+ }
-+
-+ return NULL;
-+}
-+
-+static struct tt_internal *get_target_type(const char *name)
-+{
-+ struct tt_internal *ti;
-+
-+ read_lock(&_lock);
-+ ti = __find_target_type(name);
-+
-+ if (ti) {
-+ if (ti->use == 0 && ti->tt.module)
-+ __MOD_INC_USE_COUNT(ti->tt.module);
-+ ti->use++;
-+ }
-+ read_unlock(&_lock);
-+
-+ return ti;
-+}
-+
-+static void load_module(const char *name)
-+{
-+ char module_name[DM_MOD_NAME_SIZE] = "dm-";
-+
-+ /* Length check for strcat() below */
-+ if (strlen(name) > (DM_MOD_NAME_SIZE - 4))
-+ return;
-+
-+ strcat(module_name, name);
-+ request_module(module_name);
-+}
-+
-+struct target_type *dm_get_target_type(const char *name)
-+{
-+ struct tt_internal *ti = get_target_type(name);
-+
-+ if (!ti) {
-+ load_module(name);
-+ ti = get_target_type(name);
-+ }
-+
-+ return ti ? &ti->tt : 0;
-+}
-+
-+void dm_put_target_type(struct target_type *t)
-+{
-+ struct tt_internal *ti = (struct tt_internal *) t;
-+
-+ read_lock(&_lock);
-+ if (--ti->use == 0 && ti->tt.module)
-+ __MOD_DEC_USE_COUNT(ti->tt.module);
-+
-+ if (ti->use < 0)
-+ BUG();
-+ read_unlock(&_lock);
-+}
-+
-+static struct tt_internal *alloc_target(struct target_type *t)
-+{
-+ struct tt_internal *ti = kmalloc(sizeof(*ti), GFP_KERNEL);
-+
-+ if (ti) {
-+ memset(ti, 0, sizeof(*ti));
-+ ti->tt = *t;
-+ }
-+
-+ return ti;
-+}
-+
-+int dm_register_target(struct target_type *t)
-+{
-+ int rv = 0;
-+ struct tt_internal *ti = alloc_target(t);
-+
-+ if (!ti)
-+ return -ENOMEM;
-+
-+ write_lock(&_lock);
-+ if (__find_target_type(t->name))
-+ rv = -EEXIST;
-+ else
-+ list_add(&ti->list, &_targets);
-+
-+ write_unlock(&_lock);
-+ return rv;
-+}
-+
-+int dm_unregister_target(struct target_type *t)
-+{
-+ struct tt_internal *ti = (struct tt_internal *) t;
-+ int rv = -ETXTBSY;
-+
-+ write_lock(&_lock);
-+ if (ti->use == 0) {
-+ list_del(&ti->list);
-+ kfree(ti);
-+ rv = 0;
-+ }
-+ write_unlock(&_lock);
-+
-+ return rv;
-+}
-+
-+/*
-+ * io-err: always fails an io, useful for bringing
-+ * up LVs that have holes in them.
-+ */
-+static int io_err_ctr(struct dm_table *t, offset_t b, offset_t l,
-+ const char *args, void **context,
-+ dm_error_fn err, void *e_private)
-+{
-+ *context = 0;
-+ return 0;
-+}
-+
-+static void io_err_dtr(struct dm_table *t, void *c)
-+{
-+ /* empty */
-+}
-+
-+static int io_err_map(struct buffer_head *bh, int rw, void *context)
-+{
-+ buffer_IO_error(bh);
-+ return 0;
-+}
-+
-+static struct target_type error_target = {
-+ name: "error",
-+ ctr: io_err_ctr,
-+ dtr: io_err_dtr,
-+ map: io_err_map
-+};
-+
-+
-+int dm_target_init(void)
-+{
-+ return dm_register_target(&error_target);
-+}
-+
-+EXPORT_SYMBOL(dm_register_target);
-+EXPORT_SYMBOL(dm_unregister_target);
-+
-diff -ruN -X /home/thornber/packages/2.4/dontdiff linux/drivers/md/dm.c linux-dm/drivers/md/dm.c
---- linux/drivers/md/dm.c Thu Jan 1 01:00:00 1970
-+++ linux-dm/drivers/md/dm.c Wed Oct 31 18:09:12 2001
-@@ -0,0 +1,921 @@
-+/*
-+ * Copyright (C) 2001 Sistina Software
-+ *
-+ * This file is released under the GPL.
-+ */
-+
-+#include "dm.h"
-+
-+/* defines for blk.h */
-+#define MAJOR_NR DM_BLK_MAJOR
-+#define DEVICE_NR(device) MINOR(device) /* has no partition bits */
-+#define DEVICE_NAME "device-mapper" /* name for messaging */
-+#define DEVICE_NO_RANDOM /* no entropy to contribute */
-+#define DEVICE_OFF(d) /* do-nothing */
-+
-+#include <linux/blk.h>
-+#include <linux/blkpg.h>
-+
-+/* we only need this for the lv_bmap struct definition, not happy */
-+#include <linux/lvm.h>
-+
-+#define MAX_DEVICES 64
-+#define DEFAULT_READ_AHEAD 64
-+
-+const char *_name = "device-mapper";
-+int _version[3] = {0, 1, 0};
-+
-+struct io_hook {
-+ struct mapped_device *md;
-+ struct target *target;
-+ int rw;
-+
-+ void (*end_io)(struct buffer_head * bh, int uptodate);
-+ void *context;
-+};
-+
-+kmem_cache_t *_io_hook_cache;
-+
-+#define rl down_read(&_dev_lock)
-+#define ru up_read(&_dev_lock)
-+#define wl down_write(&_dev_lock)
-+#define wu up_write(&_dev_lock)
-+
-+struct rw_semaphore _dev_lock;
-+static struct mapped_device *_devs[MAX_DEVICES];
-+
-+/* block device arrays */
-+static int _block_size[MAX_DEVICES];
-+static int _blksize_size[MAX_DEVICES];
-+static int _hardsect_size[MAX_DEVICES];
-+
-+const char *_fs_dir = "device-mapper";
-+static devfs_handle_t _dev_dir;
-+
-+static int request(request_queue_t *q, int rw, struct buffer_head *bh);
-+static int dm_user_bmap(struct inode *inode, struct lv_bmap *lvb);
-+
-+/*
-+ * setup and teardown the driver
-+ */
-+static int dm_init(void)
-+{
-+ int ret;
-+
-+ init_rwsem(&_dev_lock);
-+
-+ if (!_io_hook_cache)
-+ _io_hook_cache = kmem_cache_create("dm io hooks",
-+ sizeof(struct io_hook),
-+ 0, 0, NULL, NULL);
-+
-+ if (!_io_hook_cache)
-+ return -ENOMEM;
-+
-+ if ((ret = dm_target_init()))
-+ return ret;
-+
-+ /* set up the arrays */
-+ read_ahead[MAJOR_NR] = DEFAULT_READ_AHEAD;
-+ blk_size[MAJOR_NR] = _block_size;
-+ blksize_size[MAJOR_NR] = _blksize_size;
-+ hardsect_size[MAJOR_NR] = _hardsect_size;
-+
-+ if (devfs_register_blkdev(MAJOR_NR, _name, &dm_blk_dops) < 0) {
-+ printk(KERN_ERR "%s -- register_blkdev failed\n", _name);
-+ return -EIO;
-+ }
-+
-+ blk_queue_make_request(BLK_DEFAULT_QUEUE(MAJOR_NR), request);
-+
-+ _dev_dir = devfs_mk_dir(0, _fs_dir, NULL);
-+
-+ printk(KERN_INFO "%s %d.%d.%d initialised\n", _name,
-+ _version[0], _version[1], _version[2]);
-+ return 0;
-+}
-+
-+static void dm_exit(void)
-+{
-+ if (kmem_cache_destroy(_io_hook_cache))
-+ WARN("it looks like there are still some io_hooks allocated");
-+ _io_hook_cache = 0;
-+
-+ if (devfs_unregister_blkdev(MAJOR_NR, _name) < 0)
-+ printk(KERN_ERR "%s -- unregister_blkdev failed\n", _name);
-+
-+ read_ahead[MAJOR_NR] = 0;
-+ blk_size[MAJOR_NR] = 0;
-+ blksize_size[MAJOR_NR] = 0;
-+ hardsect_size[MAJOR_NR] = 0;
-+
-+ printk(KERN_INFO "%s %d.%d.%d cleaned up\n", _name,
-+ _version[0], _version[1], _version[2]);
-+}
-+
-+/*
-+ * block device functions
-+ */
-+static int dm_blk_open(struct inode *inode, struct file *file)
-+{
-+ int minor = MINOR(inode->i_rdev);
-+ struct mapped_device *md;
-+
-+ if (minor >= MAX_DEVICES)
-+ return -ENXIO;
-+
-+ wl;
-+ md = _devs[minor];
-+
-+ if (!md || !is_active(md)) {
-+ wu;
-+ return -ENXIO;
-+ }
-+
-+ md->use_count++;
-+ wu;
-+
-+ MOD_INC_USE_COUNT;
-+ return 0;
-+}
-+
-+static int dm_blk_close(struct inode *inode, struct file *file)
-+{
-+ int minor = MINOR(inode->i_rdev);
-+ struct mapped_device *md;
-+
-+ if (minor >= MAX_DEVICES)
-+ return -ENXIO;
-+
-+ wl;
-+ md = _devs[minor];
-+ if (!md || md->use_count < 1) {
-+ WARN("reference count in mapped_device incorrect");
-+ wu;
-+ return -ENXIO;
-+ }
-+
-+ md->use_count--;
-+ wu;
-+
-+ MOD_DEC_USE_COUNT;
-+ return 0;
-+}
-+
-+/* In 512-byte units */
-+#define VOLUME_SIZE(minor) (_block_size[(minor)] >> 1)
-+
-+static int dm_blk_ioctl(struct inode *inode, struct file *file,
-+ uint command, ulong a)
-+{
-+ int minor = MINOR(inode->i_rdev);
-+ long size;
-+
-+ if (minor >= MAX_DEVICES)
-+ return -ENXIO;
-+
-+ switch (command) {
-+ case BLKSSZGET:
-+ case BLKROGET:
-+ case BLKROSET:
-+#if 0
-+ case BLKELVSET:
-+ case BLKELVGET:
-+#endif
-+ return blk_ioctl(inode->i_dev, command, a);
-+ break;
-+
-+ case BLKGETSIZE:
-+ size = VOLUME_SIZE(minor);
-+ if (copy_to_user((void *) a, &size, sizeof (long)))
-+ return -EFAULT;
-+ break;
-+
-+ case BLKFLSBUF:
-+ if (!capable(CAP_SYS_ADMIN))
-+ return -EACCES;
-+ fsync_dev(inode->i_rdev);
-+ invalidate_buffers(inode->i_rdev);
-+ return 0;
-+
-+ case BLKRAGET:
-+ if (copy_to_user
-+ ((void *) a, &read_ahead[MAJOR(inode->i_rdev)],
-+ sizeof (long)))
-+ return -EFAULT;
-+ return 0;
-+
-+ case BLKRASET:
-+ if (!capable(CAP_SYS_ADMIN))
-+ return -EACCES;
-+ read_ahead[MAJOR(inode->i_rdev)] = a;
-+ return 0;
-+
-+ case BLKRRPART:
-+ return -EINVAL;
-+
-+ case LV_BMAP:
-+ return dm_user_bmap(inode, (struct lv_bmap *) a);
-+
-+ default:
-+ WARN("unknown block ioctl %d", command);
-+ return -EINVAL;
-+ }
-+
-+ return 0;
-+}
-+
-+static inline struct io_hook *alloc_io_hook(void)
-+{
-+ return kmem_cache_alloc(_io_hook_cache, GFP_NOIO);
-+}
-+
-+static inline void free_io_hook(struct io_hook *ih)
-+{
-+ kmem_cache_free(_io_hook_cache, ih);
-+}
-+
-+/*
-+ * FIXME: need to decide if deferred_io's need
-+ * their own slab, I say no for now since they are
-+ * only used when the device is suspended.
-+ */
-+static inline struct deferred_io *alloc_deferred(void)
-+{
-+ return kmalloc(sizeof(struct deferred_io), GFP_NOIO);
-+}
-+
-+static inline void free_deferred(struct deferred_io *di)
-+{
-+ kfree(di);
-+}
-+
-+/*
-+ * call a target's optional error function if
-+ * an io failed.
-+ */
-+static inline int call_err_fn(struct io_hook *ih, struct buffer_head *bh)
-+{
-+ dm_err_fn err = ih->target->type->err;
-+ if (err)
-+ return err(bh, ih->rw, ih->target->private);
-+
-+ return 0;
-+}
-+
-+/*
-+ * bh->b_end_io routine that decrements the
-+ * pending count and then calls the original
-+ * bh->b_end_io fn.
-+ */
-+static void dec_pending(struct buffer_head *bh, int uptodate)
-+{
-+ struct io_hook *ih = bh->b_private;
-+
-+ if (!uptodate && call_err_fn(ih, bh))
-+ return;
-+
-+ if (atomic_dec_and_test(&ih->md->pending))
-+ /* nudge anyone waiting on suspend queue */
-+ wake_up(&ih->md->wait);
-+
-+ bh->b_end_io = ih->end_io;
-+ bh->b_private = ih->context;
-+ free_io_hook(ih);
-+
-+ bh->b_end_io(bh, uptodate);
-+}
-+
-+/*
-+ * add the bh to the list of deferred io.
-+ */
-+static int queue_io(struct mapped_device *md, struct buffer_head *bh, int rw)
-+{
-+ struct deferred_io *di = alloc_deferred();
-+
-+ if (!di)
-+ return -ENOMEM;
-+
-+ wl;
-+ if (test_bit(DM_ACTIVE, &md->state)) {
-+ wu;
-+ return 0;
-+ }
-+
-+ di->bh = bh;
-+ di->rw = rw;
-+ di->next = md->deferred;
-+ md->deferred = di;
-+ wu;
-+
-+ return 1;
-+}
-+
-+/*
-+ * do the bh mapping for a given leaf
-+ */
-+static inline int __map_buffer(struct mapped_device *md,
-+ struct buffer_head *bh, int rw, int leaf)
-+{
-+ int r;
-+ dm_map_fn fn;
-+ void *context;
-+ struct io_hook *ih = NULL;
-+ struct target *ti = md->map->targets + leaf;
-+
-+ fn = ti->type->map;
-+ context = ti->private;
-+
-+ ih = alloc_io_hook();
-+
-+ if (!ih)
-+ return 0;
-+
-+ ih->md = md;
-+ ih->rw = rw;
-+ ih->target = ti;
-+ ih->end_io = bh->b_end_io;
-+ ih->context = bh->b_private;
-+
-+ r = fn(bh, rw, context);
-+
-+ if (r > 0) {
-+ /* hook the end io request fn */
-+ atomic_inc(&md->pending);
-+ bh->b_end_io = dec_pending;
-+ bh->b_private = ih;
-+
-+ } else if (r == 0)
-+ /* we don't need to hook */
-+ free_io_hook(ih);
-+
-+ else if (r < 0) {
-+ free_io_hook(ih);
-+ return 0;
-+ }
-+
-+ return 1;
-+}
-+
-+/*
-+ * search the btree for the correct target.
-+ */
-+static inline int __find_node(struct dm_table *t, struct buffer_head *bh)
-+{
-+ int l, n = 0, k = 0;
-+ offset_t *node;
-+
-+ for (l = 0; l < t->depth; l++) {
-+ n = get_child(n, k);
-+ node = get_node(t, l, n);
-+
-+ for (k = 0; k < KEYS_PER_NODE; k++)
-+ if (node[k] >= bh->b_rsector)
-+ break;
-+ }
-+
-+ return (KEYS_PER_NODE * n) + k;
-+}
-+
-+static int request(request_queue_t *q, int rw, struct buffer_head *bh)
-+{
-+ struct mapped_device *md;
-+ int r, minor = MINOR(bh->b_rdev);
-+
-+ if (minor >= MAX_DEVICES)
-+ goto bad_no_lock;
-+
-+ rl;
-+ md = _devs[minor];
-+
-+ if (!md || !md->map)
-+ goto bad;
-+
-+ /* if we're suspended we have to queue this io for later */
-+ if (!test_bit(DM_ACTIVE, &md->state)) {
-+ ru;
-+ r = queue_io(md, bh, rw);
-+
-+ if (r < 0)
-+ goto bad_no_lock;
-+
-+ else if (r > 0)
-+ return 0; /* deferred successfully */
-+
-+ rl; /* FIXME: there's still a race here */
-+ }
-+
-+ if (!__map_buffer(md, bh, rw, __find_node(md->map, bh)))
-+ goto bad;
-+
-+ ru;
-+ return 1;
-+
-+ bad:
-+ ru;
-+
-+ bad_no_lock:
-+ buffer_IO_error(bh);
-+ return 0;
-+}
-+
-+static int check_dev_size(int minor, unsigned long block)
-+{
-+ /* FIXME: check this */
-+ unsigned long max_sector = (_block_size[minor] << 1) + 1;
-+ unsigned long sector = (block + 1) * (_blksize_size[minor] >> 9);
-+
-+ return (sector > max_sector) ? 0 : 1;
-+}
-+
-+/*
-+ * creates a dummy buffer head and maps it (for lilo).
-+ */
-+static int do_bmap(kdev_t dev, unsigned long block,
-+ kdev_t *r_dev, unsigned long *r_block)
-+{
-+ struct mapped_device *md;
-+ struct buffer_head bh;
-+ int minor = MINOR(dev), r;
-+ struct target *t;
-+
-+ rl;
-+ if ((minor >= MAX_DEVICES) || !(md = _devs[minor]) ||
-+ !test_bit(DM_ACTIVE, &md->state)) {
-+ r = -ENXIO;
-+ goto out;
-+ }
-+
-+ if (!check_dev_size(minor, block)) {
-+ r = -EINVAL;
-+ goto out;
-+ }
-+
-+ /* setup dummy bh */
-+ memset(&bh, 0, sizeof(bh));
-+ bh.b_blocknr = block;
-+ bh.b_dev = bh.b_rdev = dev;
-+ bh.b_size = _blksize_size[minor];
-+ bh.b_rsector = block * (bh.b_size >> 9);
-+
-+ /* find target */
-+ t = md->map->targets + __find_node(md->map, &bh);
-+
-+ /* do the mapping */
-+ r = t->type->map(&bh, READ, t->private);
-+
-+ *r_dev = bh.b_rdev;
-+ *r_block = bh.b_rsector / (bh.b_size >> 9);
-+
-+ out:
-+ ru;
-+ return r;
-+}
-+
-+/*
-+ * marshals arguments and results between user and
-+ * kernel space.
-+ */
-+static int dm_user_bmap(struct inode *inode, struct lv_bmap *lvb)
-+{
-+ unsigned long block, r_block;
-+ kdev_t r_dev;
-+ int r;
-+
-+ if (get_user(block, &lvb->lv_block))
-+ return -EFAULT;
-+
-+ if ((r = do_bmap(inode->i_rdev, block, &r_dev, &r_block)))
-+ return r;
-+
-+ if (put_user(kdev_t_to_nr(r_dev), &lvb->lv_dev) ||
-+ put_user(r_block, &lvb->lv_block))
-+ return -EFAULT;
-+
-+ return 0;
-+}
-+
-+/*
-+ * see if the device with a specific minor # is
-+ * free.
-+ */
-+static inline int __specific_dev(int minor)
-+{
-+ if (minor >= MAX_DEVICES) {
-+ WARN("request for a mapped_device >= MAX_DEVICES");
-+ return 0;
-+ }
-+
-+ if (!_devs[minor])
-+ return minor;
-+
-+ return -1;
-+}
-+
-+/*
-+ * find the first free device.
-+ */
-+static inline int __any_old_dev(void)
-+{
-+ int i;
-+
-+ for (i = 0; i < MAX_DEVICES; i++)
-+ if (!_devs[i])
-+ return i;
-+
-+ return -1;
-+}
-+
-+/*
-+ * allocate and initialise a blank device.
-+ */
-+static struct mapped_device *alloc_dev(int minor)
-+{
-+ struct mapped_device *md = kmalloc(sizeof(*md), GFP_KERNEL);
-+
-+ if (!md)
-+ return 0;
-+
-+ memset(md, 0, sizeof (*md));
-+
-+ wl;
-+ minor = (minor < 0) ? __any_old_dev() : __specific_dev(minor);
-+
-+ if (minor < 0) {
-+ WARN("no free devices available");
-+ wu;
-+ kfree(md);
-+ return 0;
-+ }
-+
-+ md->dev = MKDEV(DM_BLK_MAJOR, minor);
-+ md->name[0] = '\0';
-+ md->state = 0;
-+
-+ init_waitqueue_head(&md->wait);
-+
-+ _devs[minor] = md;
-+ wu;
-+
-+ return md;
-+}
-+
-+static void free_dev(struct mapped_device *md)
-+{
-+ kfree(md);
-+}
-+
-+/*
-+ * open a device so we can use it as a map
-+ * destination.
-+ */
-+static int open_dev(struct dm_dev *d)
-+{
-+ int err;
-+
-+ if (d->bd)
-+ BUG();
-+
-+ if (!(d->bd = bdget(kdev_t_to_nr(d->dev))))
-+ return -ENOMEM;
-+
-+ if ((err = blkdev_get(d->bd, FMODE_READ|FMODE_WRITE, 0, BDEV_FILE))) {
-+ bdput(d->bd);
-+ return err;
-+ }
-+
-+ return 0;
-+}
-+
-+/*
-+ * close a device that we've been using.
-+ */
-+static void close_dev(struct dm_dev *d)
-+{
-+ if (!d->bd)
-+ return;
-+
-+ blkdev_put(d->bd, BDEV_FILE);
-+ bdput(d->bd);
-+ d->bd = 0;
-+}
-+
-+/*
-+ * Close a list of devices.
-+ */
-+static void close_devices(struct list_head *devices)
-+{
-+ struct list_head *tmp;
-+
-+ for (tmp = devices->next; tmp != devices; tmp = tmp->next) {
-+ struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
-+ close_dev(dd);
-+ }
-+}
-+
-+/*
-+ * Open a list of devices.
-+ */
-+static int open_devices(struct list_head *devices)
-+{
-+ int r = 0;
-+ struct list_head *tmp;
-+
-+ for (tmp = devices->next; tmp != devices; tmp = tmp->next) {
-+ struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
-+ if ((r = open_dev(dd)))
-+ goto bad;
-+ }
-+ return 0;
-+
-+ bad:
-+ close_devices(devices);
-+ return r;
-+}
-+
-+
-+struct mapped_device *dm_find_by_minor(int minor)
-+{
-+ struct mapped_device *md;
-+
-+ rl;
-+ md = _devs[minor];
-+ ru;
-+
-+ return md;
-+}
-+
-+static int register_device(struct mapped_device *md)
-+{
-+ md->devfs_entry =
-+ devfs_register(_dev_dir, md->name, DEVFS_FL_CURRENT_OWNER,
-+ MAJOR(md->dev), MINOR(md->dev),
-+ S_IFBLK | S_IRUSR | S_IWUSR | S_IRGRP,
-+ &dm_blk_dops, NULL);
-+
-+ if (!md->devfs_entry)
-+ return -ENOMEM;
-+
-+ return 0;
-+}
-+
-+static int unregister_device(struct mapped_device *md)
-+{
-+ devfs_unregister(md->devfs_entry);
-+ return 0;
-+}
-+
-+/*
-+ * constructor for a new device
-+ */
-+int dm_create(const char *name, int minor, struct mapped_device **result)
-+{
-+ int r;
-+ struct mapped_device *md;
-+
-+ if (minor >= MAX_DEVICES)
-+ return -ENXIO;
-+
-+ if (!(md = alloc_dev(minor)))
-+ return -ENXIO;
-+
-+ wl;
-+ strcpy(md->name, name);
-+ _devs[minor] = md;
-+ if ((r = register_device(md))) {
-+ wu;
-+ free_dev(md);
-+ return r;
-+ }
-+ wu;
-+
-+ *result = md;
-+ return 0;
-+}
-+
-+/*
-+ * destructor for the device. md->map is
-+ * deliberately not destroyed; dm-fs/dm-ioctl
-+ * should manage table objects.
-+ */
-+int dm_destroy(struct mapped_device *md)
-+{
-+ int minor, r;
-+
-+ wl;
-+ if (md->use_count) {
-+ wu;
-+ return -EPERM;
-+ }
-+
-+ if ((r = unregister_device(md))) {
-+ wu;
-+ return r;
-+ }
-+
-+ minor = MINOR(md->dev);
-+ _devs[minor] = 0;
-+ wu;
-+
-+ kfree(md);
-+
-+ return 0;
-+}
-+
-+/*
-+ * the hardsect size for a mapped device is the
-+ * smallest hard sect size from the devices it
-+ * maps onto.
-+ */
-+static int __find_hardsect_size(struct list_head *devices)
-+{
-+ int result = INT_MAX, size;
-+ struct list_head *tmp;
-+
-+ for (tmp = devices->next; tmp != devices; tmp = tmp->next) {
-+ struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
-+ size = get_hardsect_size(dd->dev);
-+ if (size < result)
-+ result = size;
-+ }
-+ return result;
-+}
-+
-+/*
-+ * Bind a table to the device.
-+ */
-+void __bind(struct mapped_device *md, struct dm_table *t)
-+{
-+ int minor = MINOR(md->dev);
-+
-+ md->map = t;
-+
-+ /* in k */
-+ _block_size[minor] = (t->highs[t->num_targets - 1] + 1) >> 1;
-+
-+ _blksize_size[minor] = BLOCK_SIZE;
-+ _hardsect_size[minor] = __find_hardsect_size(&t->devices);
-+ register_disk(NULL, md->dev, 1, &dm_blk_dops, _block_size[minor]);
-+}
-+
-+/*
-+ * requeue the deferred buffer_heads by calling
-+ * generic_make_request.
-+ */
-+static void __flush_deferred_io(struct mapped_device *md)
-+{
-+ struct deferred_io *c, *n;
-+
-+ for (c = md->deferred, md->deferred = 0; c; c = n) {
-+ n = c->next;
-+ generic_make_request(c->rw, c->bh);
-+ free_deferred(c);
-+ }
-+}
-+
-+/*
-+ * make the device available for use.  If it was
-+ * previously suspended rather than newly created
-+ * then all queued io is flushed.
-+ */
-+int dm_activate(struct mapped_device *md, struct dm_table *table)
-+{
-+ int r;
-+
-+ /* check that the mapping has at least been loaded. */
-+ if (!table->num_targets)
-+ return -EINVAL;
-+
-+ wl;
-+
-+ /* you must be deactivated first */
-+ if (is_active(md)) {
-+ wu;
-+ return -EPERM;
-+ }
-+
-+ __bind(md, table);
-+
-+ if ((r = open_devices(&md->map->devices))) {
-+ wu;
-+ return r;
-+ }
-+
-+ set_bit(DM_ACTIVE, &md->state);
-+ __flush_deferred_io(md);
-+ wu;
-+
-+ return 0;
-+}
-+
-+/*
-+ * Deactivate the device, the device must not be
-+ * opened by anyone.
-+ */
-+int dm_deactivate(struct mapped_device *md)
-+{
-+ rl;
-+ if (md->use_count) {
-+ ru;
-+ return -EPERM;
-+ }
-+
-+ fsync_dev(md->dev);
-+
-+ ru;
-+
-+ wl;
-+ if (md->use_count) {
-+ /* drat, somebody got in quick ... */
-+ wu;
-+ return -EPERM;
-+ }
-+
-+ close_devices(&md->map->devices);
-+ md->map = 0;
-+ clear_bit(DM_ACTIVE, &md->state);
-+ wu;
-+
-+ return 0;
-+}
-+
-+/*
-+ * We need to be able to change a mapping table
-+ * under a mounted filesystem. for example we
-+ * might want to move some data in the background.
-+ * Before the table can be swapped with
-+ * dm_bind_table, dm_suspend must be called to
-+ * flush any in flight buffer_heads and ensure
-+ * that any further io gets deferred.
-+ */
-+void dm_suspend(struct mapped_device *md)
-+{
-+ DECLARE_WAITQUEUE(wait, current);
-+
-+ wl;
-+ if (!is_active(md)) {
-+ wu;
-+ return;
-+ }
-+
-+ clear_bit(DM_ACTIVE, &md->state);
-+ wu;
-+
-+ /* wait for all the pending io to flush */
-+ add_wait_queue(&md->wait, &wait);
-+ current->state = TASK_UNINTERRUPTIBLE;
-+ do {
-+ wl;
-+ if (!atomic_read(&md->pending))
-+ break;
-+
-+ wu;
-+ schedule();
-+
-+ } while (1);
-+
-+ current->state = TASK_RUNNING;
-+ remove_wait_queue(&md->wait, &wait);
-+ close_devices(&md->map->devices);
-+
-+ md->map = 0;
-+ wu;
-+}
-+
-+/*
-+ * Search for a device with a particular name.
-+ */
-+struct mapped_device *dm_get(const char *name)
-+{
-+ int i;
-+ struct mapped_device *md = NULL;
-+
-+ rl;
-+ for (i = 0; i < MAX_DEVICES; i++)
-+ if (_devs[i] && !strcmp(_devs[i]->name, name)) {
-+ md = _devs[i];
-+ break;
-+ }
-+ ru;
-+
-+ return md;
-+}
-+
-+struct block_device_operations dm_blk_dops = {
-+ open: dm_blk_open,
-+ release: dm_blk_close,
-+ ioctl: dm_blk_ioctl
-+};
-+
-+/*
-+ * module hooks
-+ */
-+module_init(dm_init);
-+module_exit(dm_exit);
-+
-+MODULE_DESCRIPTION("device-mapper driver");
-+MODULE_AUTHOR("Joe Thornber <thornber@sistina.com>");
-+
-+#ifdef MODULE_LICENSE
-+MODULE_LICENSE("GPL");
-+#endif
-diff -ruN -X /home/thornber/packages/2.4/dontdiff linux/drivers/md/dm.h linux-dm/drivers/md/dm.h
---- linux/drivers/md/dm.h Thu Jan 1 01:00:00 1970
-+++ linux-dm/drivers/md/dm.h Wed Oct 31 18:09:12 2001
-@@ -0,0 +1,165 @@
-+/*
-+ * Copyright (C) 2001 Sistina Software
-+ *
-+ * This file is released under the GPL.
-+ */
-+
-+#ifndef DM_INTERNAL_H
-+#define DM_INTERNAL_H
-+
-+#include <linux/version.h>
-+#include <linux/major.h>
-+#include <linux/iobuf.h>
-+#include <linux/module.h>
-+#include <linux/fs.h>
-+#include <linux/slab.h>
-+#include <linux/vmalloc.h>
-+#include <linux/compatmac.h>
-+#include <linux/cache.h>
-+#include <linux/devfs_fs_kernel.h>
-+#include <linux/ctype.h>
-+#include <linux/device-mapper.h>
-+#include <linux/list.h>
-+
-+#define MAX_DEPTH 16
-+#define NODE_SIZE L1_CACHE_BYTES
-+#define KEYS_PER_NODE (NODE_SIZE / sizeof(offset_t))
-+#define CHILDREN_PER_NODE (KEYS_PER_NODE + 1)
-+
-+enum {
-+ DM_BOUND = 0, /* device has been bound to a table */
-+ DM_ACTIVE, /* device is running */
-+};
-+
-+
-+/*
-+ * list of devices that a metadevice uses
-+ * and hence should open/close.
-+ */
-+struct dm_dev {
-+ atomic_t count;
-+ struct list_head list;
-+
-+ kdev_t dev;
-+ struct block_device *bd;
-+};
-+
-+/*
-+ * io that had to be deferred while we were
-+ * suspended
-+ */
-+struct deferred_io {
-+ int rw;
-+ struct buffer_head *bh;
-+ struct deferred_io *next;
-+};
-+
-+/*
-+ * btree leaf, these do the actual mapping
-+ */
-+struct target {
-+ struct target_type *type;
-+ void *private;
-+};
-+
-+/*
-+ * the btree
-+ */
-+struct dm_table {
-+ /* btree table */
-+ int depth;
-+ int counts[MAX_DEPTH]; /* in nodes */
-+ offset_t *index[MAX_DEPTH];
-+
-+ int num_targets;
-+ int num_allocated;
-+ offset_t *highs;
-+ struct target *targets;
-+
-+ /* a list of devices used by this table */
-+ struct list_head devices;
-+};
-+
-+/*
-+ * the actual device struct
-+ */
-+struct mapped_device {
-+ kdev_t dev;
-+ char name[DM_NAME_LEN];
-+
-+ int use_count;
-+ int state;
-+
-+ /* a list of io's that arrived while we were suspended */
-+ atomic_t pending;
-+ wait_queue_head_t wait;
-+ struct deferred_io *deferred;
-+
-+ struct dm_table *map;
-+
-+ /* used by dm-fs.c */
-+ devfs_handle_t devfs_entry;
-+};
-+
-+extern struct block_device_operations dm_blk_dops;
-+
-+
-+/* dm-target.c */
-+int dm_target_init(void);
-+struct target_type *dm_get_target_type(const char *name);
-+void dm_put_target_type(struct target_type *t);
-+
-+/* dm.c */
-+struct mapped_device *dm_find_by_minor(int minor);
-+
-+int dm_create(const char *name, int minor, struct mapped_device **result);
-+int dm_destroy(struct mapped_device *md);
-+
-+int dm_activate(struct mapped_device *md, struct dm_table *t);
-+int dm_deactivate(struct mapped_device *md);
-+
-+void dm_suspend(struct mapped_device *md);
-+
-+struct mapped_device *dm_get(const char *name);
-+
-+
-+/* dm-table.c */
-+int dm_table_create(struct dm_table **result);
-+void dm_table_destroy(struct dm_table *t);
-+
-+int dm_table_add_target(struct dm_table *t, offset_t high,
-+ struct target_type *type, void *private);
-+int dm_table_complete(struct dm_table *t);
-+
-+
-+/* dm-fs.c */
-+int dm_fs_init(void);
-+void dm_fs_exit(void);
-+
-+
-+
-+#define WARN(f, x...) printk(KERN_WARNING "device-mapper: " f "\n" , ## x)
-+
-+/*
-+ * calculate the index of the child node of the
-+ * n'th node k'th key.
-+ */
-+static inline int get_child(int n, int k)
-+{
-+ return (n * CHILDREN_PER_NODE) + k;
-+}
-+
-+/*
-+ * returns the n'th node of level l from table t.
-+ */
-+static inline offset_t *get_node(struct dm_table *t, int l, int n)
-+{
-+ return t->index[l] + (n * KEYS_PER_NODE);
-+}
-+
-+static inline int is_active(struct mapped_device *md)
-+{
-+ return test_bit(DM_ACTIVE, &md->state);
-+}
-+
-+#endif
-diff -ruN -X /home/thornber/packages/2.4/dontdiff linux/include/linux/device-mapper.h linux-dm/include/linux/device-mapper.h
---- linux/include/linux/device-mapper.h Thu Jan 1 01:00:00 1970
-+++ linux-dm/include/linux/device-mapper.h Wed Oct 31 18:08:44 2001
-@@ -0,0 +1,60 @@
-+/*
-+ * Copyright (C) 2001 Sistina Software (UK) Limited.
-+ *
-+ * This file is released under the GPL.
-+ */
-+
-+#ifndef DEVICE_MAPPER_H
-+#define DEVICE_MAPPER_H
-+
-+#include <linux/major.h>
-+
-+/* FIXME: Use value from local range for now, for co-existence with LVM 1 */
-+#define DM_BLK_MAJOR 124
-+#define DM_NAME_LEN 64
-+#define DM_MAX_TYPE_NAME 16
-+
-+
-+struct dm_table;
-+struct dm_dev;
-+typedef unsigned int offset_t;
-+
-+typedef void (*dm_error_fn)(const char *message, void *private);
-+
-+/*
-+ * constructor, destructor and map fn types
-+ */
-+typedef int (*dm_ctr_fn)(struct dm_table *t, offset_t b, offset_t l,
-+ const char *args, void **context,
-+ dm_error_fn err, void *e_private);
-+
-+typedef void (*dm_dtr_fn)(struct dm_table *t, void *c);
-+typedef int (*dm_map_fn)(struct buffer_head *bh, int rw, void *context);
-+typedef int (*dm_err_fn)(struct buffer_head *bh, int rw, void *context);
-+
-+
-+/*
-+ * Constructors should call this to make sure any
-+ * destination devices are handled correctly
-+ * (ie. opened/closed).
-+ */
-+int dm_table_get_device(struct dm_table *t, const char *path,
-+ struct dm_dev **result);
-+void dm_table_put_device(struct dm_table *table, struct dm_dev *d);
-+
-+/*
-+ * information about a target type
-+ */
-+struct target_type {
-+ const char *name;
-+ struct module *module;
-+ dm_ctr_fn ctr;
-+ dm_dtr_fn dtr;
-+ dm_map_fn map;
-+ dm_err_fn err;
-+};
-+
-+int dm_register_target(struct target_type *t);
-+int dm_unregister_target(struct target_type *t);
-+
-+#endif /* DEVICE_MAPPER_H */
-diff -ruN -X /home/thornber/packages/2.4/dontdiff linux/include/linux/dm-ioctl.h linux-dm/include/linux/dm-ioctl.h
---- linux/include/linux/dm-ioctl.h Thu Jan 1 01:00:00 1970
-+++ linux-dm/include/linux/dm-ioctl.h Wed Oct 31 18:08:40 2001
-@@ -0,0 +1,57 @@
-+/*
-+ * Copyright (C) 2001 Sistina Software (UK) Limited.
-+ *
-+ * This file is released under the GPL.
-+ */
-+
-+#ifndef _DM_IOCTL_H
-+#define _DM_IOCTL_H
-+
-+// FIXME: just for now to steal LVM_CHR_MAJOR
-+#include <linux/lvm.h>
-+
-+#include "device-mapper.h"
-+
-+/*
-+ * Implements a traditional ioctl interface to the
-+ * device mapper. Yuck.
-+ */
-+
-+struct dm_target_spec {
-+ int32_t status; /* used when reading from kernel only */
-+ uint64_t sector_start;
-+ uint64_t length;
-+
-+ char target_type[DM_MAX_TYPE_NAME];
-+
-+ uint32_t next; /* offset in bytes to next target_spec */
-+
-+ /*
-+ * Parameter string starts immediately
-+ * after this object. Be careful to add
-+ * padding after string to ensure correct
-+ * alignment of subsequent dm_target_spec.
-+ */
-+};
-+
-+struct dm_ioctl {
-+ uint32_t data_size; /* the size of this structure */
-+ char name[DM_NAME_LEN];
-+ int suspend;
-+ int open_count; /* out field */
-+ int minor;
-+
-+ int target_count;
-+};
-+
-+/* FIXME: find own # */
-+#define DM_IOCTL 0xfd
-+#define DM_CHAR_MAJOR LVM_CHAR_MAJOR
-+
-+#define DM_CREATE _IOW(DM_IOCTL, 0x00, struct dm_ioctl)
-+#define DM_REMOVE _IOW(DM_IOCTL, 0x01, struct dm_ioctl)
-+#define DM_SUSPEND _IOW(DM_IOCTL, 0x02, struct dm_ioctl)
-+#define DM_RELOAD _IOWR(DM_IOCTL, 0x03, struct dm_ioctl)
-+#define DM_INFO _IOWR(DM_IOCTL, 0x04, struct dm_ioctl)
-+
-+#endif
+++ /dev/null
---- linux/include/linux/device-mapper.h Thu Nov 1 12:25:55 2001
-+++ linux-dm/include/linux/device-mapper.h Thu Nov 1 11:46:57 2001
-@@ -8,12 +8,14 @@
- #define DEVICE_MAPPER_H
-
- #include <linux/major.h>
-+#include <linux/fs.h>
-
- /* FIXME: Use value from local range for now, for co-existence with LVM 1 */
- #define DM_BLK_MAJOR 124
- #define DM_NAME_LEN 64
- #define DM_MAX_TYPE_NAME 16
-
-+#ifdef __KERNEL__
-
- struct dm_table;
- struct dm_dev;
-@@ -56,5 +58,7 @@
-
- int dm_register_target(struct target_type *t);
- int dm_unregister_target(struct target_type *t);
-+
-+#endif /* __KERNEL__ */
-
- #endif /* DEVICE_MAPPER_H */
---- linux/include/linux/dm-ioctl.h Thu Nov 1 12:25:55 2001
-+++ linux-dm/include/linux/dm-ioctl.h Thu Nov 1 11:51:36 2001
-@@ -7,9 +7,6 @@
- #ifndef _DM_IOCTL_H
- #define _DM_IOCTL_H
-
--// FIXME: just for now to steal LVM_CHR_MAJOR
--#include <linux/lvm.h>
--
- #include "device-mapper.h"
-
- /*
-@@ -19,12 +16,12 @@
-
- struct dm_target_spec {
- int32_t status; /* used when reading from kernel only */
-- uint64_t sector_start;
-- uint64_t length;
-+ unsigned long long sector_start;
-+ unsigned long long length;
-
- char target_type[DM_MAX_TYPE_NAME];
-
-- uint32_t next; /* offset in bytes to next target_spec */
-+ unsigned long next; /* offset in bytes to next target_spec */
-
- /*
- * Parameter string starts immediately
-@@ -35,7 +32,7 @@
- };
-
- struct dm_ioctl {
-- uint32_t data_size; /* the size of this structure */
-+ unsigned long data_size; /* the size of this structure */
- char name[DM_NAME_LEN];
- int suspend;
- int open_count; /* out field */
-@@ -44,9 +41,9 @@
- int target_count;
- };
-
--/* FIXME: find own # */
-+/* FIXME: find own numbers, 109 is pinched from LVM */
- #define DM_IOCTL 0xfd
--#define DM_CHAR_MAJOR LVM_CHAR_MAJOR
-+#define DM_CHAR_MAJOR 109
-
- #define DM_CREATE _IOW(DM_IOCTL, 0x00, struct dm_ioctl)
- #define DM_REMOVE _IOW(DM_IOCTL, 0x01, struct dm_ioctl)
+++ /dev/null
---- linux-last/drivers/md/dm-ioctl.c Thu Nov 1 13:20:44 2001
-+++ linux/drivers/md/dm-ioctl.c Thu Nov 1 13:30:58 2001
-@@ -259,21 +259,32 @@
- ioctl: ctl_ioctl,
- };
-
-+
-+static devfs_handle_t _ctl_handle;
-+
- static int dm_ioctl_init(void)
- {
- int r;
-
-+
- if ((r = devfs_register_chrdev(DM_CHAR_MAJOR, "device-mapper",
- &_ctl_fops)) < 0) {
- WARN("devfs_register_chrdev failed for dm control dev");
- return -EIO;
- }
-
-+ _ctl_handle = devfs_register(0 , "device-mapper/control", 0,
-+ DM_CHAR_MAJOR, 0,
-+ S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP,
-+ &_ctl_fops, NULL);
-+
- return r;
- }
-
- static void dm_ioctl_exit(void)
- {
-+ // FIXME: remove control device
-+
- if (devfs_unregister_chrdev(DM_CHAR_MAJOR, "device-mapper") < 0)
- WARN("devfs_unregister_chrdev failed for dm control device");
- }
+++ /dev/null
---- linux-last/drivers/md/dm-ioctl.c Thu Nov 1 13:34:07 2001
-+++ linux/drivers/md/dm-ioctl.c Thu Nov 1 13:47:08 2001
-@@ -37,7 +37,7 @@
- */
- static int valid_str(char *str, void *end)
- {
-- while ((str != end) && *str)
-+ while ((str < end) && *str)
- str++;
-
- return *str ? 0 : 1;
-@@ -46,7 +46,7 @@
- static int first_target(struct dm_ioctl *a, void *end,
- struct dm_target_spec **spec, char **params)
- {
-- *spec = (struct dm_target_spec *) ((unsigned char *) a) + a->data_size;
-+ *spec = (struct dm_target_spec *) (a + 1);
- *params = (char *) (*spec + 1);
-
- return valid_str(*params, end);
+++ /dev/null
---- linux-last/drivers/md/dm-linear.c Thu Nov 1 13:20:44 2001
-+++ linux/drivers/md/dm-linear.c Thu Nov 1 15:01:28 2001
-@@ -31,7 +31,9 @@
- dm_error_fn err, void *e_private)
- {
- struct linear_c *lc;
-- unsigned int start;
-+ unsigned long start; /* FIXME: should be unsigned long long,
-+ need to fix sscanf */
-+
- char path[256]; /* FIXME: magic */
- int r = -EINVAL;
-
-@@ -40,7 +42,7 @@
- return -ENOMEM;
- }
-
-- if (sscanf("%s %u", path, &start) != 2) {
-+ if (sscanf(args, "%s %lu", path, &start) != 2) {
- err("target params should be of the form <dev_path> <sector>",
- e_private);
- goto bad;
+++ /dev/null
---- linux-last/drivers/md/dm-ioctl.c Thu Nov 1 14:47:50 2001
-+++ linux/drivers/md/dm-ioctl.c Thu Nov 1 15:14:02 2001
-@@ -64,7 +64,7 @@
-
- void err_fn(const char *message, void *private)
- {
-- printk(KERN_ERR "%s", message);
-+ printk(KERN_WARNING "%s\n", message);
- }
-
- /*
+++ /dev/null
---- linux-last/drivers/md/dm.c Thu Nov 1 13:20:46 2001
-+++ linux/drivers/md/dm.c Thu Nov 1 18:09:18 2001
-@@ -127,7 +127,7 @@
- wl;
- md = _devs[minor];
-
-- if (!md || !is_active(md)) {
-+ if (!md) {
- wu;
- return -ENXIO;
- }
-@@ -297,7 +297,7 @@
- return -ENOMEM;
-
- wl;
-- if (test_bit(DM_ACTIVE, &md->state)) {
-+ if (!md->suspended) {
- wu;
- return 0;
- }
-@@ -388,11 +388,14 @@
- rl;
- md = _devs[minor];
-
-- if (!md || !md->map)
-+ if (!md)
- goto bad;
-
-- /* if we're suspended we have to queue this io for later */
-- if (!test_bit(DM_ACTIVE, &md->state)) {
-+ /*
-+ * If we're suspended we have to queue
-+ * this io for later.
-+ */
-+ if (md->suspended) {
- ru;
- r = queue_io(md, bh, rw);
-
-@@ -440,8 +443,7 @@
- struct target *t;
-
- rl;
-- if ((minor >= MAX_DEVICES) || !(md = _devs[minor]) ||
-- !test_bit(DM_ACTIVE, &md->state)) {
-+ if ((minor >= MAX_DEVICES) || !(md = _devs[minor]) || md->suspended) {
- r = -ENXIO;
- goto out;
- }
-@@ -550,7 +552,7 @@
-
- md->dev = MKDEV(DM_BLK_MAJOR, minor);
- md->name[0] = '\0';
-- md->state = 0;
-+ md->suspended = 0;
-
- init_waitqueue_head(&md->wait);
-
-@@ -633,18 +635,6 @@
- return r;
- }
-
--
--struct mapped_device *dm_find_by_minor(int minor)
--{
-- struct mapped_device *md;
--
-- rl;
-- md = _devs[minor];
-- ru;
--
-- return md;
--}
--
- static int register_device(struct mapped_device *md)
- {
- md->devfs_entry =
-@@ -666,9 +656,58 @@
- }
-
- /*
-+ * the hardsect size for a mapped device is the
-+ * smallest hard sect size from the devices it
-+ * maps onto.
-+ */
-+static int __find_hardsect_size(struct list_head *devices)
-+{
-+ int result = INT_MAX, size;
-+ struct list_head *tmp;
-+
-+ for (tmp = devices->next; tmp != devices; tmp = tmp->next) {
-+ struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
-+ size = get_hardsect_size(dd->dev);
-+ if (size < result)
-+ result = size;
-+ }
-+ return result;
-+}
-+
-+/*
-+ * Bind a table to the device.
-+ */
-+int __bind(struct mapped_device *md, struct dm_table *t)
-+{
-+ int minor = MINOR(md->dev);
-+
-+ if (!t->num_targets)
-+ return -EINVAL;
-+
-+ md->map = t;
-+
-+ /* in k */
-+ _block_size[minor] = (t->highs[t->num_targets - 1] + 1) >> 1;
-+
-+ _blksize_size[minor] = BLOCK_SIZE;
-+ _hardsect_size[minor] = __find_hardsect_size(&t->devices);
-+ register_disk(NULL, md->dev, 1, &dm_blk_dops, _block_size[minor]);
-+
-+ return open_devices(&md->map->devices);
-+}
-+
-+void __unbind(struct mapped_device *md)
-+{
-+ close_devices(&md->map->devices);
-+ md->map = NULL;
-+}
-+
-+/*
- * constructor for a new device
- */
--int dm_create(const char *name, int minor, struct mapped_device **result)
-+int dm_create(const char *name, int minor,
-+ struct dm_table *table,
-+ struct mapped_device **result)
- {
- int r;
- struct mapped_device *md;
-@@ -687,6 +726,12 @@
- free_dev(md);
- return r;
- }
-+
-+ if ((r = __bind(md, table))) {
-+ wu;
-+ free_dev(md);
-+ return r;
-+ }
- wu;
-
- *result = md;
-@@ -696,12 +741,22 @@
- /*
- * destructor for the device. md->map is
- * deliberately not destroyed, dm-fs/dm-ioctl
-- * should manage table objects.
-+ * should manage table objects. You cannot
-+ * destroy a suspended device.
- */
- int dm_destroy(struct mapped_device *md)
- {
- int minor, r;
-
-+ rl;
-+ if (md->suspended || md->use_count) {
-+ ru;
-+ return -EPERM;
-+ }
-+
-+ fsync_dev(md->dev);
-+ ru;
-+
- wl;
- if (md->use_count) {
- wu;
-@@ -715,48 +770,15 @@
-
- minor = MINOR(md->dev);
- _devs[minor] = 0;
-+ __unbind(md);
-+
- wu;
-
-- kfree(md);
-+ free_dev(md);
-
- return 0;
- }
-
--/*
-- * the hardsect size for a mapped device is the
-- * smallest hard sect size from the devices it
-- * maps onto.
-- */
--static int __find_hardsect_size(struct list_head *devices)
--{
-- int result = INT_MAX, size;
-- struct list_head *tmp;
--
-- for (tmp = devices->next; tmp != devices; tmp = tmp->next) {
-- struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
-- size = get_hardsect_size(dd->dev);
-- if (size < result)
-- result = size;
-- }
-- return result;
--}
--
--/*
-- * Bind a table to the device.
-- */
--void __bind(struct mapped_device *md, struct dm_table *t)
--{
-- int minor = MINOR(md->dev);
--
-- md->map = t;
--
-- /* in k */
-- _block_size[minor] = (t->highs[t->num_targets - 1] + 1) >> 1;
--
-- _blksize_size[minor] = BLOCK_SIZE;
-- _hardsect_size[minor] = __find_hardsect_size(&t->devices);
-- register_disk(NULL, md->dev, 1, &dm_blk_dops, _block_size[minor]);
--}
-
- /*
- * requeue the deferred buffer_heads by calling
-@@ -774,70 +796,32 @@
- }
-
- /*
-- * make the device available for use, if was
-- * previously suspended rather than newly created
-- * then all queued io is flushed
-+ * Swap in a new table.
- */
--int dm_activate(struct mapped_device *md, struct dm_table *table)
-+int dm_swap_table(struct mapped_device *md, struct dm_table *table)
- {
- int r;
-
-- /* check that the mapping has at least been loaded. */
-- if (!table->num_targets)
-- return -EINVAL;
--
- wl;
-
-- /* you must be deactivated first */
-- if (is_active(md)) {
-+ /* device must be suspended */
-+ if (!md->suspended) {
- wu;
- return -EPERM;
- }
-
-- __bind(md, table);
-+ __unbind(md);
-
-- if ((r = open_devices(&md->map->devices))) {
-+ if ((r = __bind(md, table))) {
- wu;
- return r;
- }
-
-- set_bit(DM_ACTIVE, &md->state);
-- __flush_deferred_io(md);
- wu;
-
- return 0;
- }
-
--/*
-- * Deactivate the device, the device must not be
-- * opened by anyone.
-- */
--int dm_deactivate(struct mapped_device *md)
--{
-- rl;
-- if (md->use_count) {
-- ru;
-- return -EPERM;
-- }
--
-- fsync_dev(md->dev);
--
-- ru;
--
-- wl;
-- if (md->use_count) {
-- /* drat, somebody got in quick ... */
-- wu;
-- return -EPERM;
-- }
--
-- close_devices(&md->map->devices);
-- md->map = 0;
-- clear_bit(DM_ACTIVE, &md->state);
-- wu;
--
-- return 0;
--}
-
- /*
- * We need to be able to change a mapping table
-@@ -848,17 +832,17 @@
- * flush any in flight buffer_heads and ensure
- * that any further io gets deferred.
- */
--void dm_suspend(struct mapped_device *md)
-+int dm_suspend(struct mapped_device *md)
- {
- DECLARE_WAITQUEUE(wait, current);
-
- wl;
-- if (!is_active(md)) {
-+ if (md->suspended) {
- wu;
-- return;
-+ return -EINVAL;
- }
-
-- clear_bit(DM_ACTIVE, &md->state);
-+ md->suspended = 1;
- wu;
-
- /* wait for all the pending io to flush */
-@@ -876,10 +860,24 @@
-
- current->state = TASK_RUNNING;
- remove_wait_queue(&md->wait, &wait);
-- close_devices(&md->map->devices);
-+ wu;
-+
-+ return 0;
-+}
-
-- md->map = 0;
-+int dm_resume(struct mapped_device *md)
-+{
-+ wl;
-+ if (!md->suspended) {
-+ wu;
-+ return -EINVAL;
-+ }
-+
-+ md->suspended = 0;
-+ __flush_deferred_io(md);
- wu;
-+
-+ return 0;
- }
-
- /*
---- linux-last/drivers/md/dm.h Thu Nov 1 13:20:46 2001
-+++ linux/drivers/md/dm.h Thu Nov 1 17:45:14 2001
-@@ -26,11 +26,6 @@
- #define KEYS_PER_NODE (NODE_SIZE / sizeof(offset_t))
- #define CHILDREN_PER_NODE (KEYS_PER_NODE + 1)
-
--enum {
-- DM_BOUND = 0, /* device has been bound to a table */
-- DM_ACTIVE, /* device is running */
--};
--
-
- /*
- * list of devices that a metadevice uses
-@@ -88,7 +83,7 @@
- char name[DM_NAME_LEN];
-
- int use_count;
-- int state;
-+ int suspended;
-
- /* a list of io's that arrived while we were suspended */
- atomic_t pending;
-@@ -110,17 +105,27 @@
- void dm_put_target_type(struct target_type *t);
-
- /* dm.c */
--struct mapped_device *dm_find_by_minor(int minor);
-+struct mapped_device *dm_get(const char *name);
-+
-+int dm_create(const char *name, int minor,
-+ struct dm_table *table,
-+ struct mapped_device **result);
-
--int dm_create(const char *name, int minor, struct mapped_device **result);
- int dm_destroy(struct mapped_device *md);
-
--int dm_activate(struct mapped_device *md, struct dm_table *t);
--int dm_deactivate(struct mapped_device *md);
-
--void dm_suspend(struct mapped_device *md);
-+/*
-+ * The device must be suspended before calling
-+ * this method.
-+ */
-+int dm_swap_table(struct mapped_device *md, struct dm_table *t);
-+
-+/*
-+ * People can still use a suspended device.
-+ */
-+int dm_suspend(struct mapped_device *md);
-+int dm_resume(struct mapped_device *md);
-
--struct mapped_device *dm_get(const char *name);
-
-
- /* dm-table.c */
-@@ -155,11 +160,6 @@
- static inline offset_t *get_node(struct dm_table *t, int l, int n)
- {
- return t->index[l] + (n * KEYS_PER_NODE);
--}
--
--static inline int is_active(struct mapped_device *md)
--{
-- return test_bit(DM_ACTIVE, &md->state);
- }
-
- #endif
---- linux-last/drivers/md/dm-ioctl.c Thu Nov 1 14:47:50 2001
-+++ linux/drivers/md/dm-ioctl.c Thu Nov 1 17:53:16 2001
-@@ -37,7 +37,7 @@
- */
- static int valid_str(char *str, void *end)
- {
-- while ((str < end) && *str)
-+ while (((void *) str < end) && *str)
- str++;
-
- return *str ? 0 : 1;
-@@ -140,52 +140,41 @@
- struct mapped_device *md;
- struct dm_table *t;
-
-- if ((r = dm_create(param->name, param->minor, &md)))
-+ if ((r = dm_table_create(&t)))
- return r;
-
-- if ((r = dm_table_create(&t))) {
-- dm_destroy(md);
-- return r;
-- }
--
- if ((r = populate_table(t, param))) {
-- dm_destroy(md);
- dm_table_destroy(t);
- return r;
- }
-
-- if ((r = dm_activate(md, t))) {
-- dm_destroy(md);
-- dm_table_destroy(t);
-+ if ((r = dm_create(param->name, param->minor, t, &md)))
- return r;
-- }
-
- return 0;
- }
-
- static int remove(struct dm_ioctl *param)
- {
-- int r;
- struct mapped_device *md = dm_get(param->name);
-
- if (!md)
-- return -ENODEV;
--
-- if ((r = dm_deactivate(md)))
-- return r;
-+ return -ENXIO;
-
-- if (md->map)
-- dm_table_destroy(md->map);
--
-- if (!dm_destroy(md))
-- WARN("dm_ctl_ioctl: unable to remove device");
--
-- return 0;
-+ return dm_destroy(md);
- }
-
- static int suspend(struct dm_ioctl *param)
- {
-- return -EINVAL;
-+ struct mapped_device *md = dm_get(param->name);
-+
-+ if (!md)
-+ return -ENXIO;
-+
-+ if (param->suspend)
-+ return dm_suspend(md);
-+
-+ return dm_resume(md);
- }
-
- static int reload(struct dm_ioctl *param)
+++ /dev/null
---- linux-last/drivers/md/dm.h Thu Nov 1 18:13:33 2001
-+++ linux/drivers/md/dm.h Thu Nov 1 18:19:22 2001
-@@ -137,11 +137,6 @@
- int dm_table_complete(struct dm_table *t);
-
-
--/* dm-fs.c */
--int dm_fs_init(void);
--void dm_fs_exit(void);
--
--
-
- #define WARN(f, x...) printk(KERN_WARNING "device-mapper: " f "\n" , ## x)
-
+++ /dev/null
---- linux-last/drivers/md/dm.c Fri Nov 2 13:07:57 2001
-+++ linux/drivers/md/dm.c Fri Nov 2 13:05:39 2001
-@@ -702,6 +702,21 @@
- md->map = NULL;
- }
-
-+static int check_name(const char *name)
-+{
-+ if (strchr(name, '/')) {
-+ WARN("invalid device name");
-+ return 0;
-+ }
-+
-+ if (dm_get(name)) {
-+ WARN("device name already in use");
-+ return 0;
-+ }
-+
-+ return 1;
-+}
-+
- /*
- * constructor for a new device
- */
-@@ -719,6 +734,12 @@
- return -ENXIO;
-
- wl;
-+ if (!check_name(name)) {
-+ wu;
-+ free_dev(md);
-+ return -EINVAL;
-+ }
-+
- strcpy(md->name, name);
- _devs[minor] = md;
- if ((r = register_device(md))) {
---- linux-last/drivers/md/dm.c Fri Nov 2 14:14:27 2001
-+++ linux/drivers/md/dm.c Fri Nov 2 15:36:37 2001
-@@ -702,6 +702,18 @@
- md->map = NULL;
- }
-
-+
-+static struct mapped_device *__get_by_name(const char *name)
-+{
-+ int i;
-+
-+ for (i = 0; i < MAX_DEVICES; i++)
-+ if (_devs[i] && !strcmp(_devs[i]->name, name))
-+ return _devs[i];
-+
-+ return NULL;
-+}
-+
- static int check_name(const char *name)
- {
- if (strchr(name, '/')) {
-@@ -709,7 +721,7 @@
- return 0;
- }
-
-- if (dm_get(name)) {
-+ if (__get_by_name(name)) {
- WARN("device name already in use");
- return 0;
- }
-@@ -906,15 +918,10 @@
- */
- struct mapped_device *dm_get(const char *name)
- {
-- int i;
-- struct mapped_device *md = NULL;
-+ struct mapped_device *md;
-
- rl;
-- for (i = 0; i < MAX_DEVICES; i++)
-- if (_devs[i] && !strcmp(_devs[i]->name, name)) {
-- md = _devs[i];
-- break;
-- }
-+ md = __get_by_name(name);
- ru;
-
- return md;
+++ /dev/null
---- linux-last/drivers/md/dm.c Fri Nov 2 13:20:42 2001
-+++ linux/drivers/md/dm.c Fri Nov 2 14:03:15 2001
-@@ -609,7 +609,7 @@
- {
- struct list_head *tmp;
-
-- for (tmp = devices->next; tmp != devices; tmp = tmp->next) {
-+ list_for_each(tmp, devices) {
- struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
- close_dev(dd);
- }
-@@ -623,7 +623,7 @@
- int r = 0;
- struct list_head *tmp;
-
-- for (tmp = devices->next; tmp != devices; tmp = tmp->next) {
-+ list_for_each(tmp, devices) {
- struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
- if ((r = open_dev(dd)))
- goto bad;
-@@ -665,7 +665,7 @@
- int result = INT_MAX, size;
- struct list_head *tmp;
-
-- for (tmp = devices->next; tmp != devices; tmp = tmp->next) {
-+ list_for_each(tmp, devices) {
- struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
- size = get_hardsect_size(dd->dev);
- if (size < result)
---- linux-last/drivers/md/dm-table.c Fri Nov 2 13:07:57 2001
-+++ linux/drivers/md/dm-table.c Fri Nov 2 14:01:36 2001
-@@ -137,10 +137,12 @@
- if (t->depth >= 2)
- vfree(t->index[t->depth - 2]);
-
--
- /* free the targets */
- for (i = 0; i < t->num_targets; i++) {
- struct target *tgt = &t->targets[i];
-+
-+ dm_put_target_type(t->targets[i].type);
-+
- if (tgt->type->dtr)
- tgt->type->dtr(t, tgt->private);
- }
-@@ -211,8 +213,7 @@
- {
- struct list_head *tmp;
-
-- for (tmp = l->next; tmp != l; tmp = tmp->next) {
--
-+ list_for_each(tmp, l) {
- struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
- if (dd->dev == dev)
- return dd;
---- linux-last/drivers/md/dm-target.c Fri Nov 2 13:07:57 2001
-+++ linux/drivers/md/dm-target.c Fri Nov 2 13:52:32 2001
-@@ -24,9 +24,9 @@
- struct list_head *tmp;
- struct tt_internal *ti;
-
-- for(tmp = _targets.next; tmp != &_targets; tmp = tmp->next) {
--
-+ list_for_each(tmp, &_targets) {
- ti = list_entry(tmp, struct tt_internal, list);
-+
- if (!strcmp(name, ti->tt.name))
- return ti;
- }
-@@ -72,7 +72,7 @@
- ti = get_target_type(name);
- }
-
-- return ti ? &ti->tt : 0;
-+ return ti ? &ti->tt : NULL;
- }
-
- void dm_put_target_type(struct target_type *t)
---- linux-last/drivers/md/dm-linear.c Fri Nov 2 13:07:57 2001
-+++ linux/drivers/md/dm-linear.c Fri Nov 2 14:11:26 2001
-@@ -92,15 +92,15 @@
- int r;
-
- if ((r = dm_register_target(&linear_target)) < 0)
-- printk(KERN_ERR "Device mapper: Linear: register failed\n");
-+ WARN("linear target register failed");
-
- return r;
- }
-
- static void __exit linear_exit(void)
- {
-- if (dm_unregister_target(&linear_target) < 0)
-- printk(KERN_ERR "Device mapper: Linear: unregister failed\n");
-+ if (dm_unregister_target(&linear_target))
-+ WARN("linear target unregister failed");
- }
-
- module_init(linear_init);
+++ /dev/null
---- linux-last/include/linux/device-mapper.h Fri Nov 2 13:07:57 2001
-+++ linux/include/linux/device-mapper.h Fri Nov 2 14:20:50 2001
-@@ -57,7 +57,7 @@
- };
-
- int dm_register_target(struct target_type *t);
--int dm_unregister_target(struct target_type *t);
-+int dm_unregister_target(const char *name);
-
- #endif /* __KERNEL__ */
-
---- linux-last/drivers/md/dm-target.c Fri Nov 2 14:14:27 2001
-+++ linux/drivers/md/dm-target.c Fri Nov 2 14:20:11 2001
-@@ -118,20 +118,26 @@
- return rv;
- }
-
--int dm_unregister_target(struct target_type *t)
-+int dm_unregister_target(const char *name)
- {
-- struct tt_internal *ti = (struct tt_internal *) t;
-- int rv = -ETXTBSY;
-+ struct tt_internal *ti;
-
- write_lock(&_lock);
-- if (ti->use == 0) {
-- list_del(&ti->list);
-- kfree(ti);
-- rv = 0;
-+ if (!(ti = __find_target_type(name))) {
-+ write_unlock(&_lock);
-+ return -EINVAL;
-+ }
-+
-+ if (ti->use) {
-+ write_unlock(&_lock);
-+ return -ETXTBSY;
- }
-- write_unlock(&_lock);
-
-- return rv;
-+ list_del(&ti->list);
-+ kfree(ti);
-+
-+ write_unlock(&_lock);
-+ return 0;
- }
-
- /*
---- linux-last/drivers/md/dm-linear.c Fri Nov 2 14:14:27 2001
-+++ linux/drivers/md/dm-linear.c Fri Nov 2 14:17:12 2001
-@@ -99,7 +99,7 @@
-
- static void __exit linear_exit(void)
- {
-- if (dm_unregister_target(&linear_target))
-+ if (dm_unregister_target(linear_target.name))
- WARN("linear target unregister failed");
- }
-
+++ /dev/null
---- linux-last/drivers/md/dm-ioctl.c Tue Nov 6 14:44:22 2001
-+++ linux/drivers/md/dm-ioctl.c Tue Nov 6 14:50:58 2001
-@@ -179,7 +179,30 @@
-
- static int reload(struct dm_ioctl *param)
- {
-- return -EINVAL;
-+ int r;
-+ struct mapped_device *md = dm_get(param->name);
-+ struct dm_table *t, *old;
-+
-+ if (!md)
-+ return -ENXIO;
-+
-+ if ((r = dm_table_create(&t)))
-+ return r;
-+
-+ if ((r = populate_table(t, param))) {
-+ dm_table_destroy(t);
-+ return r;
-+ }
-+
-+ old = md->map;
-+
-+ if ((r = dm_swap_table(md, t))) {
-+ dm_table_destroy(t);
-+ return r;
-+ }
-+
-+ dm_table_destroy(old);
-+ return 0;
- }
-
- static int info(struct dm_ioctl *param)
+++ /dev/null
---- linux-last//drivers/md/dm.c Tue Nov 6 14:44:22 2001
-+++ linux/drivers/md/dm.c Tue Nov 6 16:22:10 2001
-@@ -395,7 +395,7 @@
- * If we're suspended we have to queue
- * this io for later.
- */
-- if (md->suspended) {
-+ while (md->suspended) {
- ru;
- r = queue_io(md, bh, rw);
-
-@@ -405,7 +405,13 @@
- else if (r > 0)
- return 0; /* deferred successfully */
-
-- rl; /* FIXME: there's still a race here */
-+ /*
-+ * We're in a while loop, because
-+ * someone could suspend before we
-+ * get to the following read
-+ * lock
-+ */
-+ rl;
- }
-
- if (!__map_buffer(md, bh, rw, __find_node(md->map, bh)))
-@@ -817,14 +823,15 @@
- * requeue the deferred buffer_heads by calling
- * generic_make_request.
- */
--static void __flush_deferred_io(struct mapped_device *md)
-+static void flush_deferred_io(struct deferred_io *c)
- {
-- struct deferred_io *c, *n;
-+ struct deferred_io *n;
-
-- for (c = md->deferred, md->deferred = 0; c; c = n) {
-+ while (c) {
- n = c->next;
- generic_make_request(c->rw, c->bh);
- free_deferred(c);
-+ c = n;
- }
- }
-
-@@ -900,6 +907,8 @@
-
- int dm_resume(struct mapped_device *md)
- {
-+ struct deferred_io *def;
-+
- wl;
- if (!md->suspended) {
- wu;
-@@ -907,8 +916,11 @@
- }
-
- md->suspended = 0;
-- __flush_deferred_io(md);
-+ def = md->deferred;
-+ md->deferred = NULL;
- wu;
-+
-+ flush_deferred_io(def);
-
- return 0;
- }
---- linux-last//drivers/md/dm-ioctl.c Tue Nov 6 15:00:39 2001
-+++ linux/drivers/md/dm-ioctl.c Tue Nov 6 15:01:11 2001
-@@ -171,10 +171,7 @@
- if (!md)
- return -ENXIO;
-
-- if (param->suspend)
-- return dm_suspend(md);
--
-- return dm_resume(md);
-+ return param->suspend ? dm_suspend(md) : dm_resume(md);
- }
-
- static int reload(struct dm_ioctl *param)
+++ /dev/null
---- linux-last/drivers/md/dm.c Tue Nov 6 16:29:32 2001
-+++ linux/drivers/md/dm.c Wed Nov 7 09:01:43 2001
-@@ -397,6 +397,10 @@
- */
- while (md->suspended) {
- ru;
-+
-+ if (rw == READA)
-+ goto bad_no_lock;
-+
- r = queue_io(md, bh, rw);
-
- if (r < 0)
+++ /dev/null
---- linux-last/include/linux/dm-ioctl.h Tue Nov 6 14:44:22 2001
-+++ linux/include/linux/dm-ioctl.h Wed Nov 7 09:36:30 2001
-@@ -34,16 +34,18 @@
- struct dm_ioctl {
- unsigned long data_size; /* the size of this structure */
- char name[DM_NAME_LEN];
-- int suspend;
-- int open_count; /* out field */
-- int minor;
-
-- int target_count;
-+ int exists; /* out */
-+ int suspend; /* in/out */
-+ int open_count; /* out */
-+ int minor; /* in/out */
-+
-+ int target_count; /* in/out */
- };
-
- /* FIXME: find own numbers, 109 is pinched from LVM */
- #define DM_IOCTL 0xfd
--#define DM_CHAR_MAJOR 109
-+#define DM_CHAR_MAJOR 124
-
- #define DM_CREATE _IOW(DM_IOCTL, 0x00, struct dm_ioctl)
- #define DM_REMOVE _IOW(DM_IOCTL, 0x01, struct dm_ioctl)
---- linux-last/drivers/md/dm-ioctl.c Tue Nov 6 16:29:32 2001
-+++ linux/drivers/md/dm-ioctl.c Wed Nov 7 10:13:23 2001
-@@ -202,9 +202,26 @@
- return 0;
- }
-
--static int info(struct dm_ioctl *param)
-+static int info(struct dm_ioctl *param, struct dm_ioctl *user)
- {
-- return -EINVAL;
-+ struct mapped_device *md = dm_get(param->name);
-+
-+ if (!md) {
-+ param->exists = 0;
-+ goto out;
-+ }
-+
-+ param->exists = 1;
-+ param->suspend = md->suspended;
-+ param->open_count = md->use_count;
-+ param->minor = MINOR(md->dev);
-+ param->target_count = md->map->num_targets;
-+
-+ out:
-+ if (copy_to_user(user, param, sizeof(*param)))
-+ return -EFAULT;
-+
-+ return 0;
- }
-
- static int ctl_open(struct inode *inode, struct file *file)
-@@ -251,7 +268,8 @@
- break;
-
- case DM_INFO:
-- r = info(p);
-+ r = info(p, (struct dm_ioctl *) a);
-+ break;
-
- default:
- WARN("dm_ctl_ioctl: unknown command 0x%x\n", command);
+++ /dev/null
---- linux-last/include/linux/device-mapper.h Tue Nov 13 14:33:39 2001
-+++ linux/include/linux/device-mapper.h Tue Nov 13 14:31:12 2001
-@@ -14,6 +14,7 @@
- #define DM_BLK_MAJOR 124
- #define DM_NAME_LEN 64
- #define DM_MAX_TYPE_NAME 16
-+#define DM_DIR "device-mapper"
-
- #ifdef __KERNEL__
-
---- linux-last/drivers/md/dm.c Tue Nov 13 14:33:39 2001
-+++ linux/drivers/md/dm.c Tue Nov 13 14:32:12 2001
-@@ -22,7 +22,7 @@
- #define MAX_DEVICES 64
- #define DEFAULT_READ_AHEAD 64
-
--const char *_name = "device-mapper";
-+const char *_name = DEVICE_NAME;
- int _version[3] = {0, 1, 0};
-
- struct io_hook {
-@@ -49,7 +49,6 @@
- static int _blksize_size[MAX_DEVICES];
- static int _hardsect_size[MAX_DEVICES];
-
--const char *_fs_dir = "device-mapper";
- static devfs_handle_t _dev_dir;
-
- static int request(request_queue_t *q, int rw, struct buffer_head *bh);
-@@ -88,7 +87,7 @@
-
- blk_queue_make_request(BLK_DEFAULT_QUEUE(MAJOR_NR), request);
-
-- _dev_dir = devfs_mk_dir(0, _fs_dir, NULL);
-+ _dev_dir = devfs_mk_dir(0, DM_DIR, NULL);
-
- printk(KERN_INFO "%s %d.%d.%d initialised\n", _name,
- _version[0], _version[1], _version[2]);
---- linux-last/drivers/md/dm-ioctl.c Tue Nov 13 14:33:39 2001
-+++ linux/drivers/md/dm-ioctl.c Tue Nov 13 14:31:12 2001
-@@ -293,14 +293,13 @@
- {
- int r;
-
--
-- if ((r = devfs_register_chrdev(DM_CHAR_MAJOR, "device-mapper",
-+ if ((r = devfs_register_chrdev(DM_CHAR_MAJOR, DM_DIR,
- &_ctl_fops)) < 0) {
- WARN("devfs_register_chrdev failed for dm control dev");
- return -EIO;
- }
-
-- _ctl_handle = devfs_register(0 , "device-mapper/control", 0,
-+ _ctl_handle = devfs_register(0 , DM_DIR "/control", 0,
- DM_CHAR_MAJOR, 0,
- S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP,
- &_ctl_fops, NULL);
-@@ -312,7 +311,7 @@
- {
- // FIXME: remove control device
-
-- if (devfs_unregister_chrdev(DM_CHAR_MAJOR, "device-mapper") < 0)
-+ if (devfs_unregister_chrdev(DM_CHAR_MAJOR, DM_DIR) < 0)
- WARN("devfs_unregister_chrdev failed for dm control device");
- }
-
+++ /dev/null
---- linux-last/drivers/md/dm.c Tue Nov 13 14:38:11 2001
-+++ linux/drivers/md/dm.c Tue Nov 13 14:38:30 2001
-@@ -162,7 +162,7 @@
- }
-
- /* In 512-byte units */
--#define VOLUME_SIZE(minor) (_block_size[(minor)] >> 1)
-+#define VOLUME_SIZE(minor) (_block_size[(minor)] << 1)
-
- static int dm_blk_ioctl(struct inode *inode, struct file *file,
- uint command, ulong a)
+++ /dev/null
---- linux-last/drivers/md/dm.c Tue Nov 13 14:39:36 2001
-+++ linux/drivers/md/dm.c Tue Nov 13 15:46:58 2001
-@@ -576,74 +576,6 @@
- kfree(md);
- }
-
--/*
-- * open a device so we can use it as a map
-- * destination.
-- */
--static int open_dev(struct dm_dev *d)
--{
-- int err;
--
-- if (d->bd)
-- BUG();
--
-- if (!(d->bd = bdget(kdev_t_to_nr(d->dev))))
-- return -ENOMEM;
--
-- if ((err = blkdev_get(d->bd, FMODE_READ|FMODE_WRITE, 0, BDEV_FILE))) {
-- bdput(d->bd);
-- return err;
-- }
--
-- return 0;
--}
--
--/*
-- * close a device that we've been using.
-- */
--static void close_dev(struct dm_dev *d)
--{
-- if (!d->bd)
-- return;
--
-- blkdev_put(d->bd, BDEV_FILE);
-- bdput(d->bd);
-- d->bd = 0;
--}
--
--/*
-- * Close a list of devices.
-- */
--static void close_devices(struct list_head *devices)
--{
-- struct list_head *tmp;
--
-- list_for_each(tmp, devices) {
-- struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
-- close_dev(dd);
-- }
--}
--
--/*
-- * Open a list of devices.
-- */
--static int open_devices(struct list_head *devices)
--{
-- int r = 0;
-- struct list_head *tmp;
--
-- list_for_each(tmp, devices) {
-- struct dm_dev *dd = list_entry(tmp, struct dm_dev, list);
-- if ((r = open_dev(dd)))
-- goto bad;
-- }
-- return 0;
--
-- bad:
-- close_devices(devices);
-- return r;
--}
--
- static int register_device(struct mapped_device *md)
- {
- md->devfs_entry =
-@@ -686,7 +618,7 @@
- /*
- * Bind a table to the device.
- */
--int __bind(struct mapped_device *md, struct dm_table *t)
-+static int __bind(struct mapped_device *md, struct dm_table *t)
- {
- int minor = MINOR(md->dev);
-
-@@ -702,12 +634,11 @@
- _hardsect_size[minor] = __find_hardsect_size(&t->devices);
- register_disk(NULL, md->dev, 1, &dm_blk_dops, _block_size[minor]);
-
-- return open_devices(&md->map->devices);
-+ return 0;
- }
-
--void __unbind(struct mapped_device *md)
-+static void __unbind(struct mapped_device *md)
- {
-- close_devices(&md->map->devices);
- md->map = NULL;
- }
-
---- linux-last/drivers/md/dm-table.c Tue Nov 13 14:24:30 2001
-+++ linux/drivers/md/dm-table.c Tue Nov 13 15:50:20 2001
-@@ -223,6 +223,41 @@
- }
-
- /*
-+ * open a device so we can use it as a map
-+ * destination.
-+ */
-+static int open_dev(struct dm_dev *d)
-+{
-+ int err;
-+
-+ if (d->bd)
-+ BUG();
-+
-+ if (!(d->bd = bdget(kdev_t_to_nr(d->dev))))
-+ return -ENOMEM;
-+
-+ if ((err = blkdev_get(d->bd, FMODE_READ|FMODE_WRITE, 0, BDEV_FILE))) {
-+ bdput(d->bd);
-+ return err;
-+ }
-+
-+ return 0;
-+}
-+
-+/*
-+ * close a device that we've been using.
-+ */
-+static void close_dev(struct dm_dev *d)
-+{
-+ if (!d->bd)
-+ return;
-+
-+ blkdev_put(d->bd, BDEV_FILE);
-+ bdput(d->bd);
-+ d->bd = 0;
-+}
-+
-+/*
- * add a device to the list, or just increment the
- * usage count if it's already present.
- */
-@@ -245,6 +280,12 @@
-
- dd->dev = dev;
- dd->bd = 0;
-+
-+ if ((r = open_dev(dd))) {
-+ kfree(dd);
-+ return r;
-+ }
-+
- atomic_set(&dd->count, 0);
- list_add(&dd->list, &t->devices);
- }
-@@ -253,6 +294,7 @@
-
- return 0;
- }
-+
- /*
- * decrement a devices use count and remove it if
- * neccessary.
-@@ -260,6 +302,7 @@
- void dm_table_put_device(struct dm_table *t, struct dm_dev *dd)
- {
- if (atomic_dec_and_test(&dd->count)) {
-+ close_dev(dd);
- list_del(&dd->list);
- kfree(dd);
- }
+++ /dev/null
---- linux-last/include/linux/device-mapper.h Tue Nov 13 14:38:11 2001
-+++ linux/include/linux/device-mapper.h Tue Nov 13 16:01:12 2001
-@@ -42,6 +42,7 @@
- * (ie. opened/closed).
- */
- int dm_table_get_device(struct dm_table *t, const char *path,
-+ offset_t start, offset_t len,
- struct dm_dev **result);
- void dm_table_put_device(struct dm_table *table, struct dm_dev *d);
-
---- linux-last/drivers/md/dm-table.c Tue Nov 13 15:56:39 2001
-+++ linux/drivers/md/dm-table.c Tue Nov 13 16:04:55 2001
-@@ -6,6 +6,9 @@
-
- #include "dm.h"
-
-+#include <linux/blkdev.h>
-+
-+
- /* ceiling(n / size) * size */
- static inline ulong round_up(ulong n, ulong size)
- {
-@@ -258,10 +261,32 @@
- }
-
- /*
-+ * If possible (ie. blk_size[major] is set), this
-+ * checks an area of a destination device is
-+ * valid.
-+ */
-+static int check_device_area(kdev_t dev, offset_t start, offset_t len)
-+{
-+ int *sizes;
-+ offset_t dev_size;
-+
-+ if (!(sizes = blk_size[MAJOR(dev)]) || !(dev_size = sizes[MINOR(dev)]))
-+ /* we don't know the device details,
-+ * so give the benefit of the doubt */
-+ return 1;
-+
-+ /* convert to 512-byte sectors */
-+ dev_size <<= 1;
-+
-+ return ((start < dev_size) && (len <= (dev_size - start)));
-+}
-+
-+/*
- * add a device to the list, or just increment the
- * usage count if it's already present.
- */
- int dm_table_get_device(struct dm_table *t, const char *path,
-+ offset_t start, offset_t len,
- struct dm_dev **result)
- {
- int r;
-@@ -290,6 +315,13 @@
- list_add(&dd->list, &t->devices);
- }
- atomic_inc(&dd->count);
-+
-+ if (!check_device_area(dd->dev, start, len)) {
-+ WARN("device '%s' not large enough for target", path);
-+ dm_table_put_device(t, dd);
-+ return -EINVAL;
-+ }
-+
- *result = dd;
-
- return 0;
---- linux-last/drivers/md/dm-linear.c Tue Nov 13 14:24:30 2001
-+++ linux/drivers/md/dm-linear.c Tue Nov 13 15:59:21 2001
-@@ -48,7 +48,7 @@
- goto bad;
- }
-
-- if ((r = dm_table_get_device(t, path, &lc->dev))) {
-+ if ((r = dm_table_get_device(t, path, start, l, &lc->dev))) {
- err("couldn't lookup device", e_private);
- r = -ENXIO;
- goto bad;
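`check_device_area()` above validates a target's span against the destination device's size when `blk_size[]` is populated (those entries are in 1 KiB units, hence the `<<= 1` to get 512-byte sectors). The two-comparison form matters: checked as `len <= dev_size - start` only after `start < dev_size`, it cannot wrap, whereas `start + len <= dev_size` could overflow. A user-space sketch of just that test:

```c
#include <assert.h>

typedef unsigned long offset_t; /* sectors, as in the patch */

/* Same bounds test as check_device_area(): the start must lie on
 * the device and the length must fit in what remains after it. */
int area_ok(offset_t dev_size, offset_t start, offset_t len)
{
	return (start < dev_size) && (len <= (dev_size - start));
}
```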
+++ /dev/null
---- linux-last/drivers/md/dm-table.c Wed Nov 14 14:42:24 2001
-+++ linux/drivers/md/dm-table.c Wed Nov 14 17:48:28 2001
-@@ -91,6 +91,7 @@
- memcpy(n_targets, t->targets, sizeof(*n_targets) * n);
- }
-
-+ memset(n_highs + n , -1, sizeof(*n_highs) * (num - n));
- vfree(t->highs);
-
- t->num_allocated = num;
-@@ -376,7 +377,8 @@
-
- /* set up internal nodes, bottom-up */
- for (i = t->depth - 2, total = 0; i >= 0; i--) {
-- t->index[i] = indexes + (KEYS_PER_NODE * t->counts[i]);
-+ t->index[i] = indexes;
-+ indexes += (KEYS_PER_NODE * t->counts[i]);
- setup_btree_index(i, t);
- }
-
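The second hunk above fixes how the flat `indexes` allocation is carved into per-level btree index arrays: the corrected loop assigns the running cursor to the level first, then advances the cursor by that level's size, instead of pre-adding the size as the buggy line did. A simplified sketch of the corrected idiom (`KEYS_PER_NODE` value is illustrative, and the loop covers all levels where the kernel walks only the internal ones):

```c
#include <assert.h>

#define KEYS_PER_NODE 4 /* illustrative; the kernel derives its own */

/* Carve one flat allocation into consecutive per-level arrays:
 * assign the cursor, then advance it past this level. */
void setup_indexes(int *flat, int *index[], int counts[], int depth)
{
	int *cursor = flat;
	int i;

	for (i = depth - 1; i >= 0; i--) {
		index[i] = cursor;                   /* level starts here */
		cursor += KEYS_PER_NODE * counts[i]; /* next one follows */
	}
}
```

The bug in the original line handed each level a pointer just past its own block rather than at its start.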
+++ /dev/null
---- linux-last/include/linux/dm-ioctl.h Wed Nov 14 14:42:24 2001
-+++ linux/include/linux/dm-ioctl.h Tue Nov 20 10:02:43 2001
-@@ -47,7 +47,7 @@
- #define DM_IOCTL 0xfd
- #define DM_CHAR_MAJOR 124
-
--#define DM_CREATE _IOW(DM_IOCTL, 0x00, struct dm_ioctl)
-+#define DM_CREATE _IOWR(DM_IOCTL, 0x00, struct dm_ioctl)
- #define DM_REMOVE _IOW(DM_IOCTL, 0x01, struct dm_ioctl)
- #define DM_SUSPEND _IOW(DM_IOCTL, 0x02, struct dm_ioctl)
- #define DM_RELOAD _IOWR(DM_IOCTL, 0x03, struct dm_ioctl)
---- linux-last/drivers/md/dm-linear.c Wed Nov 14 14:42:24 2001
-+++ linux/drivers/md/dm-linear.c Wed Nov 14 20:13:15 2001
-@@ -112,4 +112,3 @@
- #ifdef MODULE_LICENSE
- MODULE_LICENSE("GPL");
- #endif
--
---- linux-last/drivers/md/dm-ioctl.c Wed Nov 14 14:42:24 2001
-+++ linux/drivers/md/dm-ioctl.c Tue Nov 20 10:32:47 2001
-@@ -134,7 +134,33 @@
- return r;
- }
-
--static int create(struct dm_ioctl *param)
-+/*
-+ * Copies device info back to user space, used by
-+ * the create and info ioctls.
-+ */
-+static int info(const char *name, struct dm_ioctl *user)
-+{
-+ struct dm_ioctl param;
-+ struct mapped_device *md = dm_get(name);
-+
-+ if (!md) {
-+ param.exists = 0;
-+ goto out;
-+ }
-+
-+ param.data_size = 0;
-+ strncpy(param.name, md->name, sizeof(param.name));
-+ param.exists = 1;
-+ param.suspend = md->suspended;
-+ param.open_count = md->use_count;
-+ param.minor = MINOR(md->dev);
-+ param.target_count = md->map->num_targets;
-+
-+ out:
-+ return copy_to_user(user, ¶m, sizeof(param));
-+}
-+
-+static int create(struct dm_ioctl *param, struct dm_ioctl *user)
- {
- int r;
- struct mapped_device *md;
-@@ -143,15 +169,22 @@
- if ((r = dm_table_create(&t)))
- return r;
-
-- if ((r = populate_table(t, param))) {
-- dm_table_destroy(t);
-- return r;
-- }
-+ if ((r = populate_table(t, param)))
-+ goto bad;
-
- if ((r = dm_create(param->name, param->minor, t, &md)))
-- return r;
-+ goto bad;
-+
-+ if ((r = info(param->name, user))) {
-+ dm_destroy(md);
-+ goto bad;
-+ }
-
- return 0;
-+
-+ bad:
-+ dm_table_destroy(t);
-+ return r;
- }
-
- static int remove(struct dm_ioctl *param)
-@@ -202,28 +235,6 @@
- return 0;
- }
-
--static int info(struct dm_ioctl *param, struct dm_ioctl *user)
--{
-- struct mapped_device *md = dm_get(param->name);
--
-- if (!md) {
-- param->exists = 0;
-- goto out;
-- }
--
-- param->exists = 1;
-- param->suspend = md->suspended;
-- param->open_count = md->use_count;
-- param->minor = MINOR(md->dev);
-- param->target_count = md->map->num_targets;
--
-- out:
-- if (copy_to_user(user, param, sizeof(*param)))
-- return -EFAULT;
--
-- return 0;
--}
--
- static int ctl_open(struct inode *inode, struct file *file)
- {
- /* only root can open this */
-@@ -244,7 +255,7 @@
- static int ctl_ioctl(struct inode *inode, struct file *file,
- uint command, ulong a)
- {
-- int r = -EINVAL;
-+ int r;
- struct dm_ioctl *p;
-
- if ((r = copy_params((struct dm_ioctl *) a, &p)))
-@@ -252,7 +263,7 @@
-
- switch (command) {
- case DM_CREATE:
-- r = create(p);
-+ r = create(p, (struct dm_ioctl *) a);
- break;
-
- case DM_REMOVE:
-@@ -268,11 +279,12 @@
- break;
-
- case DM_INFO:
-- r = info(p, (struct dm_ioctl *) a);
-+ r = info(p->name, (struct dm_ioctl *) a);
- break;
-
- default:
- WARN("dm_ctl_ioctl: unknown command 0x%x\n", command);
-+ r = -EINVAL;
- }
-
- free_params(p);
+++ /dev/null
---- linux/drivers/md/dm.c Wed Nov 21 15:16:01 2001
-+++ linux-dm/drivers/md/dm.c Wed Nov 21 14:41:20 2001
-@@ -583,9 +587,6 @@
- MAJOR(md->dev), MINOR(md->dev),
- S_IFBLK | S_IRUSR | S_IWUSR | S_IRGRP,
- &dm_blk_dops, NULL);
--
-- if (!md->devfs_entry)
-- return -ENOMEM;
-
- return 0;
- }
+++ /dev/null
---- linux-last/include/linux/dm-ioctl.h Wed Nov 21 17:39:43 2001
-+++ linux/include/linux/dm-ioctl.h Wed Nov 21 17:41:50 2001
-@@ -38,6 +38,7 @@
- int exists; /* out */
- int suspend; /* in/out */
- int open_count; /* out */
-+ int major; /* out */
- int minor; /* in/out */
-
- int target_count; /* in/out */
---- linux-last/drivers/md/dm-ioctl.c Wed Nov 21 17:39:43 2001
-+++ linux/drivers/md/dm-ioctl.c Wed Nov 21 17:42:15 2001
-@@ -153,6 +153,7 @@
- param.exists = 1;
- param.suspend = md->suspended;
- param.open_count = md->use_count;
-+ param.major = DM_BLK_MAJOR;
- param.minor = MINOR(md->dev);
- param.target_count = md->map->num_targets;
-
+++ /dev/null
-diff -ruN -X /home/joe/packages/2.4/dontdiff linux-last/drivers/md/Makefile linux/drivers/md/Makefile
---- linux-last/drivers/md/Makefile Wed Nov 28 17:09:15 2001
-+++ linux/drivers/md/Makefile Wed Nov 28 17:09:49 2001
-@@ -7,7 +7,8 @@
- export-objs := md.o xor.o dm-table.o dm-target.o
- list-multi := lvm-mod.o
- lvm-mod-objs := lvm.o lvm-snap.o
--dm-mod-objs := dm.o dm-table.o dm-target.o dm-ioctl.o dm-linear.o
-+dm-mod-objs := dm.o dm-table.o dm-target.o dm-ioctl.o \
-+ dm-linear.o dm-stripe.o
-
- # Note: link order is important. All raid personalities
- # and xor.o must come before md.o, as they each initialise
-diff -ruN -X /home/joe/packages/2.4/dontdiff linux-last/drivers/md/dm-stripe.c linux/drivers/md/dm-stripe.c
---- linux-last/drivers/md/dm-stripe.c Thu Jan 1 01:00:00 1970
-+++ linux/drivers/md/dm-stripe.c Wed Nov 28 17:34:39 2001
-@@ -0,0 +1,190 @@
-+/*
-+ * Copyright (C) 2001 Sistina Software (UK) Limited.
-+ *
-+ * This file is released under the GPL.
-+ */
-+
-+#include <linux/config.h>
-+#include <linux/module.h>
-+#include <linux/init.h>
-+#include <linux/slab.h>
-+#include <linux/fs.h>
-+#include <linux/blkdev.h>
-+#include <linux/device-mapper.h>
-+
-+#include "dm.h"
-+
-+struct stripe {
-+ struct dm_dev *dev;
-+ offset_t physical_start;
-+};
-+
-+struct stripe_c {
-+ offset_t logical_start;
-+ uint32_t stripes;
-+
-+ /* The size of this target / num. stripes */
-+ uint32_t stripe_width;
-+
-+ /* eg, we stripe in 64k chunks */
-+ uint32_t chunk_shift;
-+ offset_t chunk_mask;
-+
-+ struct stripe stripe[0];
-+};
-+
-+
-+static inline struct stripe_c *alloc_context(int stripes)
-+{
-+ size_t len = sizeof(struct stripe_c) +
-+ (sizeof(struct stripe) * stripes);
-+ return kmalloc(len, GFP_KERNEL);
-+}
-+
-+/*
-+ * parses a single <dev> <sector> pair.
-+ */
-+static int get_stripe(struct dm_table *t, struct stripe_c *sc,
-+ int stripe, const char *args)
-+{
-+ int n, r;
-+ char path[256]; /* FIXME: buffer overrun risk */
-+ unsigned long start;
-+
-+ if (sscanf(args, "%s %lu %n", path, &start, &n) != 2)
-+ return -EINVAL;
-+
-+ if ((r = dm_table_get_device(t, path, start, sc->stripe_width,
-+ &sc->stripe[stripe].dev)))
-+ return -ENXIO;
-+
-+ sc->stripe[stripe].physical_start = start;
-+ return n;
-+}
-+
-+/*
-+ * construct a striped mapping.
-+ * <number of stripes> <chunk size (2^^n)> [<dev_path> <offset>]+
-+ */
-+static int stripe_ctr(struct dm_table *t, offset_t b, offset_t l,
-+ const char *args, void **context,
-+ dm_error_fn err, void *e_private)
-+{
-+ struct stripe_c *sc;
-+ uint32_t stripes;
-+ uint32_t chunk_size;
-+ int n, i;
-+
-+ if (sscanf(args, "%u %u %n", &stripes, &chunk_size, &n) != 2) {
-+ err("couldn't parse <stripes> <chunk size>", e_private);
-+ return -EINVAL;
-+ }
-+
-+ if (l % stripes) {
-+		err("target length is not divisible by the number of stripes",
-+ e_private);
-+ return -EINVAL;
-+ }
-+
-+ if (!(sc = alloc_context(stripes))) {
-+ err("couldn't allocate memory for striped context", e_private);
-+ return -ENOMEM;
-+ }
-+
-+ sc->logical_start = b;
-+ sc->stripes = stripes;
-+ sc->stripe_width = l / stripes;
-+
-+ /*
-+	 * chunk_size is a power of two.  We only need
-+	 * that power and the mask.
-+ */
-+ if (!chunk_size) {
-+ err("invalid chunk size", e_private);
-+ return -EINVAL;
-+ }
-+
-+ sc->chunk_mask = chunk_size - 1;
-+ for (sc->chunk_shift = 0; chunk_size; sc->chunk_shift++)
-+ chunk_size >>= 1;
-+ sc->chunk_shift--;
-+
-+ /*
-+ * Get the stripe destinations.
-+ */
-+ for (i = 0; i < stripes; i++) {
-+ args += n;
-+ n = get_stripe(t, sc, i, args);
-+
-+ if (n < 0) {
-+ err("couldn't parse stripe destination", e_private);
-+ kfree(sc);
-+ return n;
-+ }
-+ }
-+
-+
-+ *context = sc;
-+ return 0;
-+}
-+
-+static void stripe_dtr(struct dm_table *t, void *c)
-+{
-+ unsigned int i;
-+ struct stripe_c *sc = (struct stripe_c *) c;
-+
-+ for (i = 0; i < sc->stripes; i++)
-+ dm_table_put_device(t, sc->stripe[i].dev);
-+
-+ kfree(sc);
-+}
-+
-+static int stripe_map(struct buffer_head *bh, int rw, void *context)
-+{
-+ struct stripe_c *sc = (struct stripe_c *) context;
-+
-+ offset_t offset = bh->b_rsector - sc->logical_start;
-+ uint32_t chunk = (uint32_t) (offset >> sc->chunk_shift);
-+ uint32_t stripe = chunk % sc->stripes; /* 32bit modulus */
-+ chunk = chunk / sc->stripes;
-+
-+ bh->b_rdev = sc->stripe[stripe].dev->dev;
-+ bh->b_rsector = sc->stripe[stripe].physical_start +
-+ (chunk << sc->chunk_shift) +
-+ (offset & sc->chunk_mask);
-+ return 1;
-+}
-+
-+static struct target_type stripe_target = {
-+ name: "striped",
-+ module: THIS_MODULE,
-+ ctr: stripe_ctr,
-+ dtr: stripe_dtr,
-+ map: stripe_map,
-+};
-+
-+static int __init stripe_init(void)
-+{
-+ int r;
-+
-+ if ((r = dm_register_target(&stripe_target)) < 0)
-+		WARN("striped target register failed");
-+
-+ return r;
-+}
-+
-+static void __exit stripe_exit(void)
-+{
-+ if (dm_unregister_target(stripe_target.name))
-+ WARN("striped target unregister failed");
-+}
-+
-+module_init(stripe_init);
-+module_exit(stripe_exit);
-+
-+MODULE_AUTHOR("Joe Thornber <thornber@sistina.com>");
-+MODULE_DESCRIPTION("Device Mapper: Striped mapping");
-+
-+#ifdef MODULE_LICENSE
-+MODULE_LICENSE("GPL");
-+#endif
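`stripe_map()` in the new dm-stripe.c is pure address arithmetic: split the sector (relative to the target's start) into a chunk number and an offset within the chunk using a shift and mask (valid because the chunk size is a power of two), pick a stripe round-robin with a 32-bit modulus, and rebuild the sector on the chosen stripe. Extracted into a user-space sketch (struct and function names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

typedef unsigned long offset_t;

/* chunk_shift = log2(chunk size in sectors), chunk_mask = size - 1,
 * mirroring what stripe_ctr() derives from the chunk-size argument. */
struct stripe_geom {
	uint32_t stripes;
	uint32_t chunk_shift;
	offset_t chunk_mask;
};

void map_sector(const struct stripe_geom *g, offset_t offset,
		uint32_t *stripe, offset_t *sector)
{
	uint32_t chunk = (uint32_t) (offset >> g->chunk_shift);

	*stripe = chunk % g->stripes; /* round-robin over the stripes */
	chunk /= g->stripes;          /* chunk's index on that stripe */
	*sector = ((offset_t) chunk << g->chunk_shift) +
		  (offset & g->chunk_mask);
}
```

With 2 stripes and 8-sector chunks, sectors 0-7 land on stripe 0, 8-15 on stripe 1, 16-23 back on stripe 0 at its second chunk, and so on.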
+++ /dev/null
---- linux-last/drivers/md/dm-table.c Wed Nov 28 17:09:02 2001
-+++ linux/drivers/md/dm-table.c Thu Nov 29 10:05:45 2001
-@@ -240,10 +240,8 @@
- if (!(d->bd = bdget(kdev_t_to_nr(d->dev))))
- return -ENOMEM;
-
-- if ((err = blkdev_get(d->bd, FMODE_READ|FMODE_WRITE, 0, BDEV_FILE))) {
-- bdput(d->bd);
-+ if ((err = blkdev_get(d->bd, FMODE_READ|FMODE_WRITE, 0, BDEV_FILE)))
- return err;
-- }
-
- return 0;
- }
-@@ -257,8 +255,7 @@
- return;
-
- blkdev_put(d->bd, BDEV_FILE);
-- bdput(d->bd);
-- d->bd = 0;
-+ d->bd = NULL;
- }
-
- /*
+++ /dev/null
---- linux-last/drivers/md/dm.c Wed Nov 28 17:09:02 2001
-+++ linux/drivers/md/dm.c Thu Nov 29 11:37:13 2001
-@@ -636,7 +636,14 @@
-
- static void __unbind(struct mapped_device *md)
- {
-+ int minor = MINOR(md->dev);
-+
-+ dm_table_destroy(md->map);
- md->map = NULL;
-+
-+ _block_size[minor] = 0;
-+ _blksize_size[minor] = 0;
-+ _hardsect_size[minor] = 0;
- }
-
-
-@@ -709,10 +716,8 @@
- }
-
- /*
-- * destructor for the device. md->map is
-- * deliberately not destroyed, dm-fs/dm-ioctl
-- * should manage table objects. You cannot
-- * destroy a suspended device.
-+ * Destructor for the device. You cannot destroy
-+ * a suspended device.
- */
- int dm_destroy(struct mapped_device *md)
- {
-@@ -767,7 +772,7 @@
- }
-
- /*
-- * Swap in a new table.
-+ * Swap in a new table (destroying old one).
- */
- int dm_swap_table(struct mapped_device *md, struct dm_table *table)
- {
---- linux-last/drivers/md/dm-ioctl.c Wed Nov 28 17:09:02 2001
-+++ linux/drivers/md/dm-ioctl.c Thu Nov 29 11:36:27 2001
-@@ -212,7 +212,7 @@
- {
- int r;
- struct mapped_device *md = dm_get(param->name);
-- struct dm_table *t, *old;
-+ struct dm_table *t;
-
- if (!md)
- return -ENXIO;
-@@ -225,14 +225,11 @@
- return r;
- }
-
-- old = md->map;
--
- if ((r = dm_swap_table(md, t))) {
- dm_table_destroy(t);
- return r;
- }
-
-- dm_table_destroy(old);
- return 0;
- }
-
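The last pair of hunks shifts table ownership into the device: `__unbind()` and `dm_swap_table()` now destroy the outgoing table themselves, so the RELOAD ioctl no longer keeps an `old` pointer and only cleans up its new table on failure. A toy model of the resulting contract (types and names are illustrative only):

```c
#include <assert.h>
#include <stddef.h>

/* Toy ownership model: the device owns the table it is bound to,
 * so a swap implicitly destroys the previous one. */
struct table { int live; };
struct device { struct table *map; };

void table_destroy(struct table *t)
{
	t->live = 0; /* stands in for dm_table_destroy() */
}

void swap_table(struct device *dev, struct table *t)
{
	if (dev->map)
		table_destroy(dev->map); /* old table dies in the swap */
	dev->map = t;
}
```

Centralising the destroy in the swap removes the risk of a caller freeing a table the device still maps, or leaking the one it replaced.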