* [patch 1/6] Create and Use common mempool allocators
From: Matthew Dobson @ 2006-01-28 0:19 UTC (permalink / raw)
To: linux-kernel; +Cc: penberg, akpm
plain text document attachment (mempool-add_page_allocator.patch)
From: Matthew Dobson <colpatch@us.ibm.com>
Subject: [patch 1/6] mempool - Add page allocator
Add an allocator to the common mempool code: a simple page allocator.
This will be used by the next patch in the series to replace duplicate
mempool-backed page allocators in 2 places in the kernel. It is also
likely that there will be more users in the future.
Signed-off-by: Matthew Dobson <colpatch@us.ibm.com>
include/linux/mempool.h | 7 +++++++
mm/mempool.c | 18 ++++++++++++++++++
2 files changed, 25 insertions(+)
Index: linux-2.6.16-rc1-mm3+mempool_work/mm/mempool.c
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/mm/mempool.c
+++ linux-2.6.16-rc1-mm3+mempool_work/mm/mempool.c
@@ -289,3 +289,21 @@ void mempool_free_slab(void *element, vo
kmem_cache_free(mem, element);
}
EXPORT_SYMBOL(mempool_free_slab);
+
+/*
+ * A simple mempool-backed page allocator that allocates pages
+ * of the order specified by pool_data.
+ */
+void *mempool_alloc_pages(gfp_t gfp_mask, void *pool_data)
+{
+ int order = (int)(long) pool_data;
+ return alloc_pages(gfp_mask, order);
+}
+EXPORT_SYMBOL(mempool_alloc_pages);
+
+void mempool_free_pages(void *element, void *pool_data)
+{
+ int order = (int)(long) pool_data;
+ __free_pages(element, order);
+}
+EXPORT_SYMBOL(mempool_free_pages);
Index: linux-2.6.16-rc1-mm3+mempool_work/include/linux/mempool.h
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/include/linux/mempool.h
+++ linux-2.6.16-rc1-mm3+mempool_work/include/linux/mempool.h
@@ -38,4 +38,11 @@ extern void mempool_free(void *element,
void *mempool_alloc_slab(gfp_t gfp_mask, void *pool_data);
void mempool_free_slab(void *element, void *pool_data);
+/*
+ * A mempool_alloc_t and mempool_free_t for a simple page allocator that
+ * allocates pages of the order specified by pool_data
+ */
+void *mempool_alloc_pages(gfp_t gfp_mask, void *pool_data);
+void mempool_free_pages(void *element, void *pool_data);
+
#endif /* _LINUX_MEMPOOL_H */
--
* [patch 2/6] Create and Use common mempool allocators
From: Matthew Dobson @ 2006-01-28 0:19 UTC (permalink / raw)
To: linux-kernel; +Cc: penberg, akpm
plain text document attachment (mempool-use_page_allocator.patch)
From: Matthew Dobson <colpatch@us.ibm.com>
Subject: [patch 2/6] mempool - Use common mempool page allocator
Convert two mempool users that currently use their own mempool-backed page
allocators to use the generic mempool page allocator.
Also included are 2 trivial whitespace fixes.
Signed-off-by: Matthew Dobson <colpatch@us.ibm.com>
drivers/md/dm-crypt.c | 18 ++----------------
mm/highmem.c | 24 ++++++++----------------
2 files changed, 10 insertions(+), 32 deletions(-)
Index: linux-2.6.16-rc1-mm3+mempool_work/drivers/md/dm-crypt.c
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/drivers/md/dm-crypt.c
+++ linux-2.6.16-rc1-mm3+mempool_work/drivers/md/dm-crypt.c
@@ -94,20 +94,6 @@ struct crypt_config {
static kmem_cache_t *_crypt_io_pool;
/*
- * Mempool alloc and free functions for the page
- */
-static void *mempool_alloc_page(gfp_t gfp_mask, void *data)
-{
- return alloc_page(gfp_mask);
-}
-
-static void mempool_free_page(void *page, void *data)
-{
- __free_page(page);
-}
-
-
-/*
* Different IV generation algorithms:
*
* plain: the initial vector is the 32-bit low-endian version of the sector
@@ -637,8 +623,8 @@ static int crypt_ctr(struct dm_target *t
goto bad3;
}
- cc->page_pool = mempool_create(MIN_POOL_PAGES, mempool_alloc_page,
- mempool_free_page, NULL);
+ cc->page_pool = mempool_create(MIN_POOL_PAGES, mempool_alloc_pages,
+ mempool_free_pages, 0);
if (!cc->page_pool) {
ti->error = PFX "Cannot allocate page mempool";
goto bad4;
Index: linux-2.6.16-rc1-mm3+mempool_work/mm/highmem.c
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/mm/highmem.c
+++ linux-2.6.16-rc1-mm3+mempool_work/mm/highmem.c
@@ -31,14 +31,9 @@
static mempool_t *page_pool, *isa_page_pool;
-static void *page_pool_alloc_isa(gfp_t gfp_mask, void *data)
+static void *mempool_alloc_pages_isa(gfp_t gfp_mask, void *data)
{
- return alloc_page(gfp_mask | GFP_DMA);
-}
-
-static void page_pool_free(void *page, void *data)
-{
- __free_page(page);
+ return mempool_alloc_pages(gfp_mask | GFP_DMA, data);
}
/*
@@ -51,11 +46,6 @@ static void page_pool_free(void *page, v
*/
#ifdef CONFIG_HIGHMEM
-static void *page_pool_alloc(gfp_t gfp_mask, void *data)
-{
- return alloc_page(gfp_mask);
-}
-
static int pkmap_count[LAST_PKMAP];
static unsigned int last_pkmap_nr;
static __cacheline_aligned_in_smp DEFINE_SPINLOCK(kmap_lock);
@@ -229,7 +219,8 @@ static __init int init_emergency_pool(vo
if (!i.totalhigh)
return 0;
- page_pool = mempool_create(POOL_SIZE, page_pool_alloc, page_pool_free, NULL);
+ page_pool = mempool_create(POOL_SIZE, mempool_alloc_pages,
+ mempool_free_pages, 0);
if (!page_pool)
BUG();
printk("highmem bounce pool size: %d pages\n", POOL_SIZE);
@@ -272,7 +263,8 @@ int init_emergency_isa_pool(void)
if (isa_page_pool)
return 0;
- isa_page_pool = mempool_create(ISA_POOL_SIZE, page_pool_alloc_isa, page_pool_free, NULL);
+ isa_page_pool = mempool_create(ISA_POOL_SIZE, mempool_alloc_pages_isa,
+ mempool_free_pages, 0);
if (!isa_page_pool)
BUG();
@@ -337,7 +329,7 @@ static void bounce_end_io(struct bio *bi
bio_put(bio);
}
-static int bounce_end_io_write(struct bio *bio, unsigned int bytes_done,int err)
+static int bounce_end_io_write(struct bio *bio, unsigned int bytes_done, int err)
{
if (bio->bi_size)
return 1;
@@ -384,7 +376,7 @@ static int bounce_end_io_read_isa(struct
}
static void __blk_queue_bounce(request_queue_t *q, struct bio **bio_orig,
- mempool_t *pool)
+ mempool_t *pool)
{
struct page *page;
struct bio *bio = NULL;
--
* [patch 3/6] Create and Use common mempool allocators
From: Matthew Dobson @ 2006-01-28 0:19 UTC (permalink / raw)
To: linux-kernel; +Cc: penberg, akpm
plain text document attachment (mempool-add_kmalloc_allocator.patch)
From: Matthew Dobson <colpatch@us.ibm.com>
Subject: [patch 3/6] mempool - Add kmalloc allocator
Add another allocator to the common mempool code: a kmalloc/kfree allocator.
This will be used by the next patch in the series to replace duplicate
mempool-backed kmalloc allocators in several places in the kernel.
It is also very likely that there will be more users in the future.
Signed-off-by: Matthew Dobson <colpatch@us.ibm.com>
include/linux/mempool.h | 7 +++++++
mm/mempool.c | 17 +++++++++++++++++
2 files changed, 24 insertions(+)
Index: linux-2.6.16-rc1-mm3+mempool_work/include/linux/mempool.h
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/include/linux/mempool.h
+++ linux-2.6.16-rc1-mm3+mempool_work/include/linux/mempool.h
@@ -39,6 +39,13 @@ void *mempool_alloc_slab(gfp_t gfp_mask,
void mempool_free_slab(void *element, void *pool_data);
/*
+ * A mempool_alloc_t and mempool_free_t to kmalloc the amount of memory
+ * specified by pool_data
+ */
+void *mempool_kmalloc(gfp_t gfp_mask, void *pool_data);
+void mempool_kfree(void *element, void *pool_data);
+
+/*
* A mempool_alloc_t and mempool_free_t for a simple page allocator that
* allocates pages of the order specified by pool_data
*/
Index: linux-2.6.16-rc1-mm3+mempool_work/mm/mempool.c
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/mm/mempool.c
+++ linux-2.6.16-rc1-mm3+mempool_work/mm/mempool.c
@@ -291,6 +291,23 @@ void mempool_free_slab(void *element, vo
EXPORT_SYMBOL(mempool_free_slab);
/*
+ * A commonly used alloc and free fn that kmallocs/kfrees the amount of memory
+ * specified by pool_data
+ */
+void *mempool_kmalloc(gfp_t gfp_mask, void *pool_data)
+{
+ size_t size = (size_t) pool_data;
+ return kmalloc(size, gfp_mask);
+}
+EXPORT_SYMBOL(mempool_kmalloc);
+
+void mempool_kfree(void *element, void *pool_data)
+{
+ kfree(element);
+}
+EXPORT_SYMBOL(mempool_kfree);
+
+/*
* A simple mempool-backed page allocator that allocates pages
* of the order specified by pool_data.
*/
--
* [patch 4/6] Create and Use common mempool allocators
From: Matthew Dobson @ 2006-01-28 0:19 UTC (permalink / raw)
To: linux-kernel; +Cc: penberg, akpm
plain text document attachment (mempool-use_kmalloc_allocator.patch)
From: Matthew Dobson <colpatch@us.ibm.com>
Subject: [patch 4/6] mempool - Use common mempool kmalloc allocator
This patch changes several mempool users, all of which are basically
just wrappers around kmalloc(), to use the common mempool_kmalloc/kfree,
rather than their own wrapper function, removing a bunch of duplicated code.
Signed-off-by: Matthew Dobson <colpatch@us.ibm.com>
drivers/block/pktcdvd.c | 28 +++++---------------------
drivers/md/bitmap.c | 15 ++------------
drivers/md/dm-io.c | 13 +-----------
drivers/md/dm-raid1.c | 14 +------------
drivers/s390/scsi/zfcp_aux.c | 46 +++++++++++++------------------------------
drivers/scsi/lpfc/lpfc_mem.c | 20 +++---------------
fs/bio.c | 17 +++------------
7 files changed, 35 insertions(+), 118 deletions(-)
Index: linux-2.6.16-rc1-mm3+mempool_work/drivers/block/pktcdvd.c
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/drivers/block/pktcdvd.c
+++ linux-2.6.16-rc1-mm3+mempool_work/drivers/block/pktcdvd.c
@@ -230,16 +230,6 @@ static int pkt_grow_pktlist(struct pktcd
return 1;
}
-static void *pkt_rb_alloc(gfp_t gfp_mask, void *data)
-{
- return kmalloc(sizeof(struct pkt_rb_node), gfp_mask);
-}
-
-static void pkt_rb_free(void *ptr, void *data)
-{
- kfree(ptr);
-}
-
static inline struct pkt_rb_node *pkt_rbtree_next(struct pkt_rb_node *node)
{
struct rb_node *n = rb_next(&node->rb_node);
@@ -2086,16 +2076,6 @@ static int pkt_close(struct inode *inode
}
-static void *psd_pool_alloc(gfp_t gfp_mask, void *data)
-{
- return kmalloc(sizeof(struct packet_stacked_data), gfp_mask);
-}
-
-static void psd_pool_free(void *ptr, void *data)
-{
- kfree(ptr);
-}
-
static int pkt_end_io_read_cloned(struct bio *bio, unsigned int bytes_done, int err)
{
struct packet_stacked_data *psd = bio->bi_private;
@@ -2495,7 +2475,9 @@ static int pkt_setup_dev(struct pkt_ctrl
if (!pd)
return ret;
- pd->rb_pool = mempool_create(PKT_RB_POOL_SIZE, pkt_rb_alloc, pkt_rb_free, NULL);
+ pd->rb_pool = mempool_create(PKT_RB_POOL_SIZE,
+ mempool_kmalloc, mempool_kfree,
+ sizeof(struct pkt_rb_node));
if (!pd->rb_pool)
goto out_mem;
@@ -2657,7 +2639,9 @@ static int __init pkt_init(void)
{
int ret;
- psd_pool = mempool_create(PSD_POOL_SIZE, psd_pool_alloc, psd_pool_free, NULL);
+ psd_pool = mempool_create(PSD_POOL_SIZE,
+ mempool_kmalloc, mempool_kfree,
+ sizeof(struct packet_stacked_data));
if (!psd_pool)
return -ENOMEM;
Index: linux-2.6.16-rc1-mm3+mempool_work/drivers/scsi/lpfc/lpfc_mem.c
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/drivers/scsi/lpfc/lpfc_mem.c
+++ linux-2.6.16-rc1-mm3+mempool_work/drivers/scsi/lpfc/lpfc_mem.c
@@ -38,18 +38,6 @@
#define LPFC_MBUF_POOL_SIZE 64 /* max elements in MBUF safety pool */
#define LPFC_MEM_POOL_SIZE 64 /* max elem in non-DMA safety pool */
-static void *
-lpfc_pool_kmalloc(gfp_t gfp_flags, void *data)
-{
- return kmalloc((unsigned long)data, gfp_flags);
-}
-
-static void
-lpfc_pool_kfree(void *obj, void *data)
-{
- kfree(obj);
-}
-
int
lpfc_mem_alloc(struct lpfc_hba * phba)
{
@@ -80,14 +68,14 @@ lpfc_mem_alloc(struct lpfc_hba * phba)
}
phba->mbox_mem_pool = mempool_create(LPFC_MEM_POOL_SIZE,
- lpfc_pool_kmalloc, lpfc_pool_kfree,
- (void *)(unsigned long)sizeof(LPFC_MBOXQ_t));
+ mempool_kmalloc, mempool_kfree,
+ sizeof(LPFC_MBOXQ_t));
if (!phba->mbox_mem_pool)
goto fail_free_mbuf_pool;
phba->nlp_mem_pool = mempool_create(LPFC_MEM_POOL_SIZE,
- lpfc_pool_kmalloc, lpfc_pool_kfree,
- (void *)(unsigned long)sizeof(struct lpfc_nodelist));
+ mempool_kmalloc, mempool_kfree,
+ sizeof(struct lpfc_nodelist));
if (!phba->nlp_mem_pool)
goto fail_free_mbox_pool;
Index: linux-2.6.16-rc1-mm3+mempool_work/drivers/md/dm-raid1.c
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/drivers/md/dm-raid1.c
+++ linux-2.6.16-rc1-mm3+mempool_work/drivers/md/dm-raid1.c
@@ -122,16 +122,6 @@ static inline sector_t region_to_sector(
/* FIXME move this */
static void queue_bio(struct mirror_set *ms, struct bio *bio, int rw);
-static void *region_alloc(gfp_t gfp_mask, void *pool_data)
-{
- return kmalloc(sizeof(struct region), gfp_mask);
-}
-
-static void region_free(void *element, void *pool_data)
-{
- kfree(element);
-}
-
#define MIN_REGIONS 64
#define MAX_RECOVERY 1
static int rh_init(struct region_hash *rh, struct mirror_set *ms,
@@ -173,8 +163,8 @@ static int rh_init(struct region_hash *r
INIT_LIST_HEAD(&rh->quiesced_regions);
INIT_LIST_HEAD(&rh->recovered_regions);
- rh->region_pool = mempool_create(MIN_REGIONS, region_alloc,
- region_free, NULL);
+ rh->region_pool = mempool_create(MIN_REGIONS, mempool_kmalloc,
+ mempool_kfree, sizeof(struct region));
if (!rh->region_pool) {
vfree(rh->buckets);
rh->buckets = NULL;
Index: linux-2.6.16-rc1-mm3+mempool_work/drivers/s390/scsi/zfcp_aux.c
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/drivers/s390/scsi/zfcp_aux.c
+++ linux-2.6.16-rc1-mm3+mempool_work/drivers/s390/scsi/zfcp_aux.c
@@ -829,18 +829,6 @@ zfcp_unit_dequeue(struct zfcp_unit *unit
device_unregister(&unit->sysfs_device);
}
-static void *
-zfcp_mempool_alloc(gfp_t gfp_mask, void *size)
-{
- return kmalloc((size_t) size, gfp_mask);
-}
-
-static void
-zfcp_mempool_free(void *element, void *size)
-{
- kfree(element);
-}
-
/*
* Allocates a combined QTCB/fsf_req buffer for erp actions and fcp/SCSI
* commands.
@@ -854,50 +842,44 @@ zfcp_allocate_low_mem_buffers(struct zfc
{
adapter->pool.fsf_req_erp =
mempool_create(ZFCP_POOL_FSF_REQ_ERP_NR,
- zfcp_mempool_alloc, zfcp_mempool_free, (void *)
+ mempool_kmalloc, mempool_kfree,
sizeof(struct zfcp_fsf_req_pool_element));
-
- if (NULL == adapter->pool.fsf_req_erp)
+ if (!adapter->pool.fsf_req_erp)
return -ENOMEM;
adapter->pool.fsf_req_scsi =
mempool_create(ZFCP_POOL_FSF_REQ_SCSI_NR,
- zfcp_mempool_alloc, zfcp_mempool_free, (void *)
+ mempool_kmalloc, mempool_kfree,
sizeof(struct zfcp_fsf_req_pool_element));
-
- if (NULL == adapter->pool.fsf_req_scsi)
+ if (!adapter->pool.fsf_req_scsi)
return -ENOMEM;
adapter->pool.fsf_req_abort =
mempool_create(ZFCP_POOL_FSF_REQ_ABORT_NR,
- zfcp_mempool_alloc, zfcp_mempool_free, (void *)
+ mempool_kmalloc, mempool_kfree,
sizeof(struct zfcp_fsf_req_pool_element));
-
- if (NULL == adapter->pool.fsf_req_abort)
+ if (!adapter->pool.fsf_req_abort)
return -ENOMEM;
adapter->pool.fsf_req_status_read =
mempool_create(ZFCP_POOL_STATUS_READ_NR,
- zfcp_mempool_alloc, zfcp_mempool_free,
- (void *) sizeof(struct zfcp_fsf_req));
-
- if (NULL == adapter->pool.fsf_req_status_read)
+ mempool_kmalloc, mempool_kfree,
+ sizeof(struct zfcp_fsf_req));
+ if (!adapter->pool.fsf_req_status_read)
return -ENOMEM;
adapter->pool.data_status_read =
mempool_create(ZFCP_POOL_STATUS_READ_NR,
- zfcp_mempool_alloc, zfcp_mempool_free,
- (void *) sizeof(struct fsf_status_read_buffer));
-
- if (NULL == adapter->pool.data_status_read)
+ mempool_kmalloc, mempool_kfree,
+ sizeof(struct fsf_status_read_buffer));
+ if (!adapter->pool.data_status_read)
return -ENOMEM;
adapter->pool.data_gid_pn =
mempool_create(ZFCP_POOL_DATA_GID_PN_NR,
- zfcp_mempool_alloc, zfcp_mempool_free, (void *)
+ mempool_kmalloc, mempool_kfree,
sizeof(struct zfcp_gid_pn_data));
-
- if (NULL == adapter->pool.data_gid_pn)
+ if (!adapter->pool.data_gid_pn)
return -ENOMEM;
return 0;
Index: linux-2.6.16-rc1-mm3+mempool_work/drivers/md/dm-io.c
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/drivers/md/dm-io.c
+++ linux-2.6.16-rc1-mm3+mempool_work/drivers/md/dm-io.c
@@ -32,16 +32,6 @@ struct io {
static unsigned _num_ios;
static mempool_t *_io_pool;
-static void *alloc_io(gfp_t gfp_mask, void *pool_data)
-{
- return kmalloc(sizeof(struct io), gfp_mask);
-}
-
-static void free_io(void *element, void *pool_data)
-{
- kfree(element);
-}
-
static unsigned int pages_to_ios(unsigned int pages)
{
return 4 * pages; /* too many ? */
@@ -65,7 +55,8 @@ static int resize_pool(unsigned int new_
} else {
/* create new pool */
- _io_pool = mempool_create(new_ios, alloc_io, free_io, NULL);
+ _io_pool = mempool_create(new_ios, mempool_kmalloc,
+ mempool_kfree, sizeof(struct io));
if (!_io_pool)
return -ENOMEM;
Index: linux-2.6.16-rc1-mm3+mempool_work/drivers/md/bitmap.c
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/drivers/md/bitmap.c
+++ linux-2.6.16-rc1-mm3+mempool_work/drivers/md/bitmap.c
@@ -89,16 +89,6 @@ int bitmap_active(struct bitmap *bitmap)
}
#define WRITE_POOL_SIZE 256
-/* mempool for queueing pending writes on the bitmap file */
-static void *write_pool_alloc(gfp_t gfp_flags, void *data)
-{
- return kmalloc(sizeof(struct page_list), gfp_flags);
-}
-
-static void write_pool_free(void *ptr, void *data)
-{
- kfree(ptr);
-}
/*
* just a placeholder - calls kmalloc for bitmap pages
@@ -1564,8 +1554,9 @@ int bitmap_create(mddev_t *mddev)
spin_lock_init(&bitmap->write_lock);
INIT_LIST_HEAD(&bitmap->complete_pages);
init_waitqueue_head(&bitmap->write_wait);
- bitmap->write_pool = mempool_create(WRITE_POOL_SIZE, write_pool_alloc,
- write_pool_free, NULL);
+ bitmap->write_pool = mempool_create(WRITE_POOL_SIZE,
+ mempool_kmalloc, mempool_kfree,
+ sizeof(struct page_list));
err = -ENOMEM;
if (!bitmap->write_pool)
goto error;
Index: linux-2.6.16-rc1-mm3+mempool_work/fs/bio.c
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/fs/bio.c
+++ linux-2.6.16-rc1-mm3+mempool_work/fs/bio.c
@@ -1127,16 +1127,6 @@ struct bio_pair *bio_split(struct bio *b
return bp;
}
-static void *bio_pair_alloc(gfp_t gfp_flags, void *data)
-{
- return kmalloc(sizeof(struct bio_pair), gfp_flags);
-}
-
-static void bio_pair_free(void *bp, void *data)
-{
- kfree(bp);
-}
-
/*
* create memory pools for biovec's in a bio_set.
@@ -1154,7 +1144,7 @@ static int biovec_create_pools(struct bi
pool_entries >>= 1;
*bvp = mempool_create(pool_entries, mempool_alloc_slab,
- mempool_free_slab, bp->slab);
+ mempool_free_slab, bp->slab);
if (!*bvp)
return -ENOMEM;
}
@@ -1193,7 +1183,7 @@ struct bio_set *bioset_create(int bio_po
memset(bs, 0, sizeof(*bs));
bs->bio_pool = mempool_create(bio_pool_size, mempool_alloc_slab,
- mempool_free_slab, bio_slab);
+ mempool_free_slab, bio_slab);
if (!bs->bio_pool)
goto bad;
@@ -1258,7 +1248,8 @@ static int __init init_bio(void)
panic("bio: can't allocate bios\n");
bio_split_pool = mempool_create(BIO_SPLIT_ENTRIES,
- bio_pair_alloc, bio_pair_free, NULL);
+ mempool_kmalloc, mempool_kfree,
+ sizeof(struct bio_pair));
if (!bio_split_pool)
panic("bio: can't create split pool\n");
--
* [patch 5/6] Create and Use common mempool allocators
From: Matthew Dobson @ 2006-01-28 0:19 UTC (permalink / raw)
To: linux-kernel; +Cc: penberg, akpm
plain text document attachment (mempool-add_kzalloc_allocator.patch)
From: Matthew Dobson <colpatch@us.ibm.com>
Subject: [patch 5/6] mempool - Add kzalloc allocator
Add another allocator to the common mempool code: a kzalloc/kfree allocator.
This will be used by the next patch in the series to replace a mempool-backed
kzalloc allocator. It is also very likely that there will be more users in the
future.
Signed-off-by: Matthew Dobson <colpatch@us.ibm.com>
include/linux/mempool.h | 5 +++--
mm/mempool.c | 7 +++++++
2 files changed, 10 insertions(+), 2 deletions(-)
Index: linux-2.6.16-rc1-mm3+mempool_work/include/linux/mempool.h
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/include/linux/mempool.h
+++ linux-2.6.16-rc1-mm3+mempool_work/include/linux/mempool.h
@@ -39,10 +39,11 @@ void *mempool_alloc_slab(gfp_t gfp_mask,
void mempool_free_slab(void *element, void *pool_data);
/*
- * A mempool_alloc_t and mempool_free_t to kmalloc the amount of memory
- * specified by pool_data
+ * 2 mempool_alloc_t's and a mempool_free_t to kmalloc/kzalloc and kfree
+ * the amount of memory specified by pool_data
*/
void *mempool_kmalloc(gfp_t gfp_mask, void *pool_data);
+void *mempool_kzalloc(gfp_t gfp_mask, void *pool_data);
void mempool_kfree(void *element, void *pool_data);
/*
Index: linux-2.6.16-rc1-mm3+mempool_work/mm/mempool.c
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/mm/mempool.c
+++ linux-2.6.16-rc1-mm3+mempool_work/mm/mempool.c
@@ -301,6 +301,13 @@ void *mempool_kmalloc(gfp_t gfp_mask, vo
}
EXPORT_SYMBOL(mempool_kmalloc);
+void *mempool_kzalloc(gfp_t gfp_mask, void *pool_data)
+{
+ size_t size = (size_t) pool_data;
+ return kzalloc(size, gfp_mask);
+}
+EXPORT_SYMBOL(mempool_kzalloc);
+
void mempool_kfree(void *element, void *pool_data)
{
kfree(element);
--
* [patch 6/6] Create and Use common mempool allocators
From: Matthew Dobson @ 2006-01-28 0:20 UTC (permalink / raw)
To: linux-kernel; +Cc: penberg, akpm
plain text document attachment (mempool-use_kzalloc_allocator.patch)
From: Matthew Dobson <colpatch@us.ibm.com>
Subject: [patch 6/6] mempool - Use common mempool kzalloc allocator
This patch changes a mempool user, which is basically just a wrapper around
kzalloc(), to use the common mempool_kzalloc/mempool_kfree, rather than its own
wrapper function, removing duplicated code.
Signed-off-by: Matthew Dobson <colpatch@us.ibm.com>
drivers/md/multipath.c | 16 ++--------------
1 files changed, 2 insertions(+), 14 deletions(-)
Index: linux-2.6.16-rc1-mm3+mempool_work/drivers/md/multipath.c
===================================================================
--- linux-2.6.16-rc1-mm3+mempool_work.orig/drivers/md/multipath.c
+++ linux-2.6.16-rc1-mm3+mempool_work/drivers/md/multipath.c
@@ -35,18 +35,6 @@
#define NR_RESERVED_BUFS 32
-static void *mp_pool_alloc(gfp_t gfp_flags, void *data)
-{
- struct multipath_bh *mpb;
- mpb = kzalloc(sizeof(*mpb), gfp_flags);
- return mpb;
-}
-
-static void mp_pool_free(void *mpb, void *data)
-{
- kfree(mpb);
-}
-
static int multipath_map (multipath_conf_t *conf)
{
int i, disks = conf->raid_disks;
@@ -495,8 +483,8 @@ static int multipath_run (mddev_t *mddev
mddev->degraded = conf->raid_disks = conf->working_disks;
conf->pool = mempool_create(NR_RESERVED_BUFS,
- mp_pool_alloc, mp_pool_free,
- NULL);
+ mempool_kzalloc, mempool_kfree,
+ sizeof(struct multipath_bh));
if (conf->pool == NULL) {
printk(KERN_ERR
"multipath: couldn't allocate memory for %s\n",
--
* Re: [patch 2/6] Create and Use common mempool allocators
From: Pekka Enberg @ 2006-01-28 10:08 UTC (permalink / raw)
To: colpatch; +Cc: linux-kernel, akpm
Hi,
On Fri, 2006-01-27 at 16:19 -0800, Matthew Dobson wrote:
> - cc->page_pool = mempool_create(MIN_POOL_PAGES, mempool_alloc_page,
> - mempool_free_page, NULL);
> + cc->page_pool = mempool_create(MIN_POOL_PAGES, mempool_alloc_pages,
> + mempool_free_pages, 0);
You need to cast that zero to a void pointer to avoid a compilation
warning (the same problem appears in various other places as well). It
would probably make sense to implement helper functions so the casting
doesn't spread all over the place. Other than that, looks good to me.
Pekka