* [PATCHSET percpu/for-3.18] add @gfp to init functions of percpu data structures
@ 2014-08-26 0:43 Tejun Heo
2014-08-26 0:43 ` [PATCH 1/3] percpu_counter: add @gfp to percpu_counter_init() Tejun Heo
` (4 more replies)
0 siblings, 5 replies; 10+ messages in thread
From: Tejun Heo @ 2014-08-26 0:43 UTC (permalink / raw)
To: linux-kernel; +Cc: cl
There's now a pending patchset[1] which implements atomic percpu
allocation. This patchset propagates @gfp to percpu data structures
so that they can be allocated and initialized from !GFP_KERNEL
contexts too. This will be used for opportunistic allocation of data
structures that embed percpu constructs in the IO path.
This patchset adds @gfp to alloc/init functions of percpu_counter,
[flex_]proportions and percpu-refcount. We could add separate
alloc/init functions which take @gfp but there aren't too many users
yet, so let's just add it to the existing ones.
This patchset contains the following patches:
0001-percpu_counter-add-gfp-to-percpu_counter_init.patch
0002-proportions-add-gfp-to-init-functions.patch
0003-percpu-refcount-add-gfp-to-percpu_ref_init.patch
and is on top of
[1] [PATCHSET REPOST percpu/for-3.18] percpu: implement atomic allocation support
and available in the following git branch.
git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu.git/review-add-gfps
diffstat follows. Thanks.
arch/x86/kvm/mmu.c | 2 +-
drivers/target/target_core_tpg.c | 3 ++-
fs/aio.c | 4 ++--
fs/btrfs/disk-io.c | 8 ++++----
fs/btrfs/extent-tree.c | 2 +-
fs/ext2/super.c | 6 +++---
fs/ext3/super.c | 6 +++---
fs/ext4/super.c | 14 +++++++++-----
fs/file_table.c | 2 +-
fs/quota/dquot.c | 2 +-
fs/super.c | 3 ++-
include/linux/flex_proportions.h | 5 +++--
include/linux/percpu-refcount.h | 3 ++-
include/linux/percpu_counter.h | 10 ++++++----
include/linux/proportions.h | 5 +++--
include/net/dst_ops.h | 2 +-
include/net/inet_frag.h | 2 +-
kernel/cgroup.c | 6 +++---
lib/flex_proportions.c | 8 ++++----
lib/percpu-refcount.c | 6 ++++--
lib/percpu_counter.c | 4 ++--
lib/proportions.c | 10 +++++-----
mm/backing-dev.c | 4 ++--
mm/mmap.c | 2 +-
mm/nommu.c | 2 +-
mm/page-writeback.c | 2 +-
mm/shmem.c | 2 +-
net/dccp/proto.c | 2 +-
net/ipv4/tcp.c | 4 ++--
net/ipv4/tcp_memcontrol.c | 2 +-
net/sctp/protocol.c | 2 +-
31 files changed, 74 insertions(+), 61 deletions(-)
--
tejun
[1] http://lkml.kernel.org/g/1408726399-4436-1-git-send-email-tj@kernel.org
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 1/3] percpu_counter: add @gfp to percpu_counter_init()
2014-08-26 0:43 [PATCHSET percpu/for-3.18] add @gfp to init functions of percpu data structures Tejun Heo
@ 2014-08-26 0:43 ` Tejun Heo
2014-08-26 0:46 ` David Miller
2014-08-26 10:21 ` Jan Kara
2014-08-26 0:43 ` [PATCH 2/3] proportions: add @gfp to init functions Tejun Heo
` (3 subsequent siblings)
4 siblings, 2 replies; 10+ messages in thread
From: Tejun Heo @ 2014-08-26 0:43 UTC (permalink / raw)
To: linux-kernel
Cc: cl, Tejun Heo, x86, Jens Axboe, Jan Kara, Theodore Ts'o,
Alexander Viro, David S. Miller, Andrew Morton
The percpu allocator now supports an allocation mask. Add @gfp to
percpu_counter_init() so that !GFP_KERNEL allocation masks can be used
with percpu_counters too.
We could have left percpu_counter_init() alone and added
percpu_counter_init_gfp(); however, the number of users isn't that
high and introducing _gfp variants to all percpu data structures would
be quite ugly, so let's just do the conversion. This is the one with
the most users. Other percpu data structures are a lot easier to
convert.
This patch doesn't make any functional difference.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: x86@kernel.org
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Jan Kara <jack@suse.cz>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
arch/x86/kvm/mmu.c | 2 +-
fs/btrfs/disk-io.c | 8 ++++----
fs/btrfs/extent-tree.c | 2 +-
fs/ext2/super.c | 6 +++---
fs/ext3/super.c | 6 +++---
fs/ext4/super.c | 14 +++++++++-----
fs/file_table.c | 2 +-
fs/quota/dquot.c | 2 +-
fs/super.c | 3 ++-
include/linux/percpu_counter.h | 10 ++++++----
include/net/dst_ops.h | 2 +-
include/net/inet_frag.h | 2 +-
lib/flex_proportions.c | 4 ++--
lib/percpu_counter.c | 4 ++--
lib/proportions.c | 6 +++---
mm/backing-dev.c | 2 +-
mm/mmap.c | 2 +-
mm/nommu.c | 2 +-
mm/shmem.c | 2 +-
net/dccp/proto.c | 2 +-
net/ipv4/tcp.c | 4 ++--
net/ipv4/tcp_memcontrol.c | 2 +-
net/sctp/protocol.c | 2 +-
23 files changed, 49 insertions(+), 42 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9314678..5bd53f2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4534,7 +4534,7 @@ int kvm_mmu_module_init(void)
if (!mmu_page_header_cache)
goto nomem;
- if (percpu_counter_init(&kvm_total_used_mmu_pages, 0))
+ if (percpu_counter_init(&kvm_total_used_mmu_pages, 0, GFP_KERNEL))
goto nomem;
register_shrinker(&mmu_shrinker);
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 08e65e9..61dae01 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1180,7 +1180,7 @@ static struct btrfs_subvolume_writers *btrfs_alloc_subvolume_writers(void)
if (!writers)
return ERR_PTR(-ENOMEM);
- ret = percpu_counter_init(&writers->counter, 0);
+ ret = percpu_counter_init(&writers->counter, 0, GFP_KERNEL);
if (ret < 0) {
kfree(writers);
return ERR_PTR(ret);
@@ -2185,7 +2185,7 @@ int open_ctree(struct super_block *sb,
goto fail_srcu;
}
- ret = percpu_counter_init(&fs_info->dirty_metadata_bytes, 0);
+ ret = percpu_counter_init(&fs_info->dirty_metadata_bytes, 0, GFP_KERNEL);
if (ret) {
err = ret;
goto fail_bdi;
@@ -2193,13 +2193,13 @@ int open_ctree(struct super_block *sb,
fs_info->dirty_metadata_batch = PAGE_CACHE_SIZE *
(1 + ilog2(nr_cpu_ids));
- ret = percpu_counter_init(&fs_info->delalloc_bytes, 0);
+ ret = percpu_counter_init(&fs_info->delalloc_bytes, 0, GFP_KERNEL);
if (ret) {
err = ret;
goto fail_dirty_metadata_bytes;
}
- ret = percpu_counter_init(&fs_info->bio_counter, 0);
+ ret = percpu_counter_init(&fs_info->bio_counter, 0, GFP_KERNEL);
if (ret) {
err = ret;
goto fail_delalloc_bytes;
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 813537f..94ec71e 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -3493,7 +3493,7 @@ static int update_space_info(struct btrfs_fs_info *info, u64 flags,
if (!found)
return -ENOMEM;
- ret = percpu_counter_init(&found->total_bytes_pinned, 0);
+ ret = percpu_counter_init(&found->total_bytes_pinned, 0, GFP_KERNEL);
if (ret) {
kfree(found);
return ret;
diff --git a/fs/ext2/super.c b/fs/ext2/super.c
index b88edc0..170dc41 100644
--- a/fs/ext2/super.c
+++ b/fs/ext2/super.c
@@ -1067,14 +1067,14 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
ext2_rsv_window_add(sb, &sbi->s_rsv_window_head);
err = percpu_counter_init(&sbi->s_freeblocks_counter,
- ext2_count_free_blocks(sb));
+ ext2_count_free_blocks(sb), GFP_KERNEL);
if (!err) {
err = percpu_counter_init(&sbi->s_freeinodes_counter,
- ext2_count_free_inodes(sb));
+ ext2_count_free_inodes(sb), GFP_KERNEL);
}
if (!err) {
err = percpu_counter_init(&sbi->s_dirs_counter,
- ext2_count_dirs(sb));
+ ext2_count_dirs(sb), GFP_KERNEL);
}
if (err) {
ext2_msg(sb, KERN_ERR, "error: insufficient memory");
diff --git a/fs/ext3/super.c b/fs/ext3/super.c
index 08cdfe5..eba021b 100644
--- a/fs/ext3/super.c
+++ b/fs/ext3/super.c
@@ -2039,14 +2039,14 @@ static int ext3_fill_super (struct super_block *sb, void *data, int silent)
goto failed_mount2;
}
err = percpu_counter_init(&sbi->s_freeblocks_counter,
- ext3_count_free_blocks(sb));
+ ext3_count_free_blocks(sb), GFP_KERNEL);
if (!err) {
err = percpu_counter_init(&sbi->s_freeinodes_counter,
- ext3_count_free_inodes(sb));
+ ext3_count_free_inodes(sb), GFP_KERNEL);
}
if (!err) {
err = percpu_counter_init(&sbi->s_dirs_counter,
- ext3_count_dirs(sb));
+ ext3_count_dirs(sb), GFP_KERNEL);
}
if (err) {
ext3_msg(sb, KERN_ERR, "error: insufficient memory");
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 32b43ad..e25ca8f 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -3891,7 +3891,8 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
/* Register extent status tree shrinker */
ext4_es_register_shrinker(sbi);
- if ((err = percpu_counter_init(&sbi->s_extent_cache_cnt, 0)) != 0) {
+ err = percpu_counter_init(&sbi->s_extent_cache_cnt, 0, GFP_KERNEL);
+ if (err) {
ext4_msg(sb, KERN_ERR, "insufficient memory");
goto failed_mount3;
}
@@ -4105,17 +4106,20 @@ no_journal:
block = ext4_count_free_clusters(sb);
ext4_free_blocks_count_set(sbi->s_es,
EXT4_C2B(sbi, block));
- err = percpu_counter_init(&sbi->s_freeclusters_counter, block);
+ err = percpu_counter_init(&sbi->s_freeclusters_counter, block,
+ GFP_KERNEL);
if (!err) {
unsigned long freei = ext4_count_free_inodes(sb);
sbi->s_es->s_free_inodes_count = cpu_to_le32(freei);
- err = percpu_counter_init(&sbi->s_freeinodes_counter, freei);
+ err = percpu_counter_init(&sbi->s_freeinodes_counter, freei,
+ GFP_KERNEL);
}
if (!err)
err = percpu_counter_init(&sbi->s_dirs_counter,
- ext4_count_dirs(sb));
+ ext4_count_dirs(sb), GFP_KERNEL);
if (!err)
- err = percpu_counter_init(&sbi->s_dirtyclusters_counter, 0);
+ err = percpu_counter_init(&sbi->s_dirtyclusters_counter, 0,
+ GFP_KERNEL);
if (err) {
ext4_msg(sb, KERN_ERR, "insufficient memory");
goto failed_mount6;
diff --git a/fs/file_table.c b/fs/file_table.c
index 385bfd3..0bab12b 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -331,5 +331,5 @@ void __init files_init(unsigned long mempages)
n = (mempages * (PAGE_SIZE / 1024)) / 10;
files_stat.max_files = max_t(unsigned long, n, NR_FILE);
- percpu_counter_init(&nr_files, 0);
+ percpu_counter_init(&nr_files, 0, GFP_KERNEL);
}
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index f2d0eee..8b663b2 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -2725,7 +2725,7 @@ static int __init dquot_init(void)
panic("Cannot create dquot hash table");
for (i = 0; i < _DQST_DQSTAT_LAST; i++) {
- ret = percpu_counter_init(&dqstats.counter[i], 0);
+ ret = percpu_counter_init(&dqstats.counter[i], 0, GFP_KERNEL);
if (ret)
panic("Cannot create dquot stat counters");
}
diff --git a/fs/super.c b/fs/super.c
index b9a214d..1b83610 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -175,7 +175,8 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags)
goto fail;
for (i = 0; i < SB_FREEZE_LEVELS; i++) {
- if (percpu_counter_init(&s->s_writers.counter[i], 0) < 0)
+ if (percpu_counter_init(&s->s_writers.counter[i], 0,
+ GFP_KERNEL) < 0)
goto fail;
lockdep_init_map(&s->s_writers.lock_map[i], sb_writers_name[i],
&type->s_writers_key[i], 0);
diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index d5dd465..50e5009 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -12,6 +12,7 @@
#include <linux/threads.h>
#include <linux/percpu.h>
#include <linux/types.h>
+#include <linux/gfp.h>
#ifdef CONFIG_SMP
@@ -26,14 +27,14 @@ struct percpu_counter {
extern int percpu_counter_batch;
-int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
+int __percpu_counter_init(struct percpu_counter *fbc, s64 amount, gfp_t gfp,
struct lock_class_key *key);
-#define percpu_counter_init(fbc, value) \
+#define percpu_counter_init(fbc, value, gfp) \
({ \
static struct lock_class_key __key; \
\
- __percpu_counter_init(fbc, value, &__key); \
+ __percpu_counter_init(fbc, value, gfp, &__key); \
})
void percpu_counter_destroy(struct percpu_counter *fbc);
@@ -89,7 +90,8 @@ struct percpu_counter {
s64 count;
};
-static inline int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
+static inline int percpu_counter_init(struct percpu_counter *fbc, s64 amount,
+ gfp_t gfp)
{
fbc->count = amount;
return 0;
diff --git a/include/net/dst_ops.h b/include/net/dst_ops.h
index 2f26dfb..1f99a1d 100644
--- a/include/net/dst_ops.h
+++ b/include/net/dst_ops.h
@@ -63,7 +63,7 @@ static inline void dst_entries_add(struct dst_ops *dst, int val)
static inline int dst_entries_init(struct dst_ops *dst)
{
- return percpu_counter_init(&dst->pcpuc_entries, 0);
+ return percpu_counter_init(&dst->pcpuc_entries, 0, GFP_KERNEL);
}
static inline void dst_entries_destroy(struct dst_ops *dst)
diff --git a/include/net/inet_frag.h b/include/net/inet_frag.h
index 65a8855..8d17655 100644
--- a/include/net/inet_frag.h
+++ b/include/net/inet_frag.h
@@ -151,7 +151,7 @@ static inline void add_frag_mem_limit(struct inet_frag_queue *q, int i)
static inline void init_frag_mem_limit(struct netns_frags *nf)
{
- percpu_counter_init(&nf->mem, 0);
+ percpu_counter_init(&nf->mem, 0, GFP_KERNEL);
}
static inline unsigned int sum_frag_mem_limit(struct netns_frags *nf)
diff --git a/lib/flex_proportions.c b/lib/flex_proportions.c
index ebf3bac..b9d026b 100644
--- a/lib/flex_proportions.c
+++ b/lib/flex_proportions.c
@@ -40,7 +40,7 @@ int fprop_global_init(struct fprop_global *p)
p->period = 0;
/* Use 1 to avoid dealing with periods with 0 events... */
- err = percpu_counter_init(&p->events, 1);
+ err = percpu_counter_init(&p->events, 1, GFP_KERNEL);
if (err)
return err;
seqcount_init(&p->sequence);
@@ -172,7 +172,7 @@ int fprop_local_init_percpu(struct fprop_local_percpu *pl)
{
int err;
- err = percpu_counter_init(&pl->events, 0);
+ err = percpu_counter_init(&pl->events, 0, GFP_KERNEL);
if (err)
return err;
pl->period = 0;
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index 7dd33577..1a28f0f 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -112,13 +112,13 @@ s64 __percpu_counter_sum(struct percpu_counter *fbc)
}
EXPORT_SYMBOL(__percpu_counter_sum);
-int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
+int __percpu_counter_init(struct percpu_counter *fbc, s64 amount, gfp_t gfp,
struct lock_class_key *key)
{
raw_spin_lock_init(&fbc->lock);
lockdep_set_class(&fbc->lock, key);
fbc->count = amount;
- fbc->counters = alloc_percpu(s32);
+ fbc->counters = alloc_percpu_gfp(s32, gfp);
if (!fbc->counters)
return -ENOMEM;
diff --git a/lib/proportions.c b/lib/proportions.c
index 05df848..ca95f8d 100644
--- a/lib/proportions.c
+++ b/lib/proportions.c
@@ -83,11 +83,11 @@ int prop_descriptor_init(struct prop_descriptor *pd, int shift)
pd->index = 0;
pd->pg[0].shift = shift;
mutex_init(&pd->mutex);
- err = percpu_counter_init(&pd->pg[0].events, 0);
+ err = percpu_counter_init(&pd->pg[0].events, 0, GFP_KERNEL);
if (err)
goto out;
- err = percpu_counter_init(&pd->pg[1].events, 0);
+ err = percpu_counter_init(&pd->pg[1].events, 0, GFP_KERNEL);
if (err)
percpu_counter_destroy(&pd->pg[0].events);
@@ -193,7 +193,7 @@ int prop_local_init_percpu(struct prop_local_percpu *pl)
raw_spin_lock_init(&pl->lock);
pl->shift = 0;
pl->period = 0;
- return percpu_counter_init(&pl->events, 0);
+ return percpu_counter_init(&pl->events, 0, GFP_KERNEL);
}
void prop_local_destroy_percpu(struct prop_local_percpu *pl)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 1706cbb..f19a818 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -455,7 +455,7 @@ int bdi_init(struct backing_dev_info *bdi)
bdi_wb_init(&bdi->wb, bdi);
for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
- err = percpu_counter_init(&bdi->bdi_stat[i], 0);
+ err = percpu_counter_init(&bdi->bdi_stat[i], 0, GFP_KERNEL);
if (err)
goto err;
}
diff --git a/mm/mmap.c b/mm/mmap.c
index c1f2ea4..d7ec93e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3196,7 +3196,7 @@ void __init mmap_init(void)
{
int ret;
- ret = percpu_counter_init(&vm_committed_as, 0);
+ ret = percpu_counter_init(&vm_committed_as, 0, GFP_KERNEL);
VM_BUG_ON(ret);
}
diff --git a/mm/nommu.c b/mm/nommu.c
index a881d96..bd1808e 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -539,7 +539,7 @@ void __init mmap_init(void)
{
int ret;
- ret = percpu_counter_init(&vm_committed_as, 0);
+ ret = percpu_counter_init(&vm_committed_as, 0, GFP_KERNEL);
VM_BUG_ON(ret);
vm_region_jar = KMEM_CACHE(vm_region, SLAB_PANIC);
}
diff --git a/mm/shmem.c b/mm/shmem.c
index 0e5fb22..d4bc55d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2993,7 +2993,7 @@ int shmem_fill_super(struct super_block *sb, void *data, int silent)
#endif
spin_lock_init(&sbinfo->stat_lock);
- if (percpu_counter_init(&sbinfo->used_blocks, 0))
+ if (percpu_counter_init(&sbinfo->used_blocks, 0, GFP_KERNEL))
goto failed;
sbinfo->free_inodes = sbinfo->max_inodes;
diff --git a/net/dccp/proto.c b/net/dccp/proto.c
index de2c1e7..e421edd 100644
--- a/net/dccp/proto.c
+++ b/net/dccp/proto.c
@@ -1115,7 +1115,7 @@ static int __init dccp_init(void)
BUILD_BUG_ON(sizeof(struct dccp_skb_cb) >
FIELD_SIZEOF(struct sk_buff, cb));
- rc = percpu_counter_init(&dccp_orphan_count, 0);
+ rc = percpu_counter_init(&dccp_orphan_count, 0, GFP_KERNEL);
if (rc)
goto out_fail;
rc = -ENOBUFS;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 541f26a..d59c260 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -3188,8 +3188,8 @@ void __init tcp_init(void)
BUILD_BUG_ON(sizeof(struct tcp_skb_cb) > sizeof(skb->cb));
- percpu_counter_init(&tcp_sockets_allocated, 0);
- percpu_counter_init(&tcp_orphan_count, 0);
+ percpu_counter_init(&tcp_sockets_allocated, 0, GFP_KERNEL);
+ percpu_counter_init(&tcp_orphan_count, 0, GFP_KERNEL);
tcp_hashinfo.bind_bucket_cachep =
kmem_cache_create("tcp_bind_bucket",
sizeof(struct inet_bind_bucket), 0,
diff --git a/net/ipv4/tcp_memcontrol.c b/net/ipv4/tcp_memcontrol.c
index 3af5226..1d19135 100644
--- a/net/ipv4/tcp_memcontrol.c
+++ b/net/ipv4/tcp_memcontrol.c
@@ -32,7 +32,7 @@ int tcp_init_cgroup(struct mem_cgroup *memcg, struct cgroup_subsys *ss)
res_parent = &parent_cg->memory_allocated;
res_counter_init(&cg_proto->memory_allocated, res_parent);
- percpu_counter_init(&cg_proto->sockets_allocated, 0);
+ percpu_counter_init(&cg_proto->sockets_allocated, 0, GFP_KERNEL);
return 0;
}
diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
index 6240834..f00a85a 100644
--- a/net/sctp/protocol.c
+++ b/net/sctp/protocol.c
@@ -1341,7 +1341,7 @@ static __init int sctp_init(void)
if (!sctp_chunk_cachep)
goto err_chunk_cachep;
- status = percpu_counter_init(&sctp_sockets_allocated, 0);
+ status = percpu_counter_init(&sctp_sockets_allocated, 0, GFP_KERNEL);
if (status)
goto err_percpu_counter_init;
--
1.9.3
* [PATCH 2/3] proportions: add @gfp to init functions
2014-08-26 0:43 [PATCHSET percpu/for-3.18] add @gfp to init functions of percpu data structures Tejun Heo
2014-08-26 0:43 ` [PATCH 1/3] percpu_counter: add @gfp to percpu_counter_init() Tejun Heo
@ 2014-08-26 0:43 ` Tejun Heo
2014-08-26 10:19 ` Jan Kara
2014-08-26 0:43 ` [PATCH 3/3] percpu-refcount: add @gfp to percpu_ref_init() Tejun Heo
` (2 subsequent siblings)
4 siblings, 1 reply; 10+ messages in thread
From: Tejun Heo @ 2014-08-26 0:43 UTC (permalink / raw)
To: linux-kernel; +Cc: cl, Tejun Heo, Jan Kara, Peter Zijlstra
The percpu allocator now supports an allocation mask. Add @gfp to the
[flex_]proportions init functions so that !GFP_KERNEL allocation masks
can be used with them too.
This patch doesn't make any functional difference.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Peter Zijlstra <peterz@infradead.org>
---
include/linux/flex_proportions.h | 5 +++--
include/linux/proportions.h | 5 +++--
lib/flex_proportions.c | 8 ++++----
lib/proportions.c | 10 +++++-----
mm/backing-dev.c | 2 +-
mm/page-writeback.c | 2 +-
6 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/include/linux/flex_proportions.h b/include/linux/flex_proportions.h
index 4ebc49f..0d348e0 100644
--- a/include/linux/flex_proportions.h
+++ b/include/linux/flex_proportions.h
@@ -10,6 +10,7 @@
#include <linux/percpu_counter.h>
#include <linux/spinlock.h>
#include <linux/seqlock.h>
+#include <linux/gfp.h>
/*
* When maximum proportion of some event type is specified, this is the
@@ -32,7 +33,7 @@ struct fprop_global {
seqcount_t sequence;
};
-int fprop_global_init(struct fprop_global *p);
+int fprop_global_init(struct fprop_global *p, gfp_t gfp);
void fprop_global_destroy(struct fprop_global *p);
bool fprop_new_period(struct fprop_global *p, int periods);
@@ -79,7 +80,7 @@ struct fprop_local_percpu {
raw_spinlock_t lock; /* Protect period and numerator */
};
-int fprop_local_init_percpu(struct fprop_local_percpu *pl);
+int fprop_local_init_percpu(struct fprop_local_percpu *pl, gfp_t gfp);
void fprop_local_destroy_percpu(struct fprop_local_percpu *pl);
void __fprop_inc_percpu(struct fprop_global *p, struct fprop_local_percpu *pl);
void __fprop_inc_percpu_max(struct fprop_global *p, struct fprop_local_percpu *pl,
diff --git a/include/linux/proportions.h b/include/linux/proportions.h
index 26a8a4e..00e8e8f 100644
--- a/include/linux/proportions.h
+++ b/include/linux/proportions.h
@@ -12,6 +12,7 @@
#include <linux/percpu_counter.h>
#include <linux/spinlock.h>
#include <linux/mutex.h>
+#include <linux/gfp.h>
struct prop_global {
/*
@@ -40,7 +41,7 @@ struct prop_descriptor {
struct mutex mutex; /* serialize the prop_global switch */
};
-int prop_descriptor_init(struct prop_descriptor *pd, int shift);
+int prop_descriptor_init(struct prop_descriptor *pd, int shift, gfp_t gfp);
void prop_change_shift(struct prop_descriptor *pd, int new_shift);
/*
@@ -61,7 +62,7 @@ struct prop_local_percpu {
raw_spinlock_t lock; /* protect the snapshot state */
};
-int prop_local_init_percpu(struct prop_local_percpu *pl);
+int prop_local_init_percpu(struct prop_local_percpu *pl, gfp_t gfp);
void prop_local_destroy_percpu(struct prop_local_percpu *pl);
void __prop_inc_percpu(struct prop_descriptor *pd, struct prop_local_percpu *pl);
void prop_fraction_percpu(struct prop_descriptor *pd, struct prop_local_percpu *pl,
diff --git a/lib/flex_proportions.c b/lib/flex_proportions.c
index b9d026b..8f25652 100644
--- a/lib/flex_proportions.c
+++ b/lib/flex_proportions.c
@@ -34,13 +34,13 @@
*/
#include <linux/flex_proportions.h>
-int fprop_global_init(struct fprop_global *p)
+int fprop_global_init(struct fprop_global *p, gfp_t gfp)
{
int err;
p->period = 0;
/* Use 1 to avoid dealing with periods with 0 events... */
- err = percpu_counter_init(&p->events, 1, GFP_KERNEL);
+ err = percpu_counter_init(&p->events, 1, gfp);
if (err)
return err;
seqcount_init(&p->sequence);
@@ -168,11 +168,11 @@ void fprop_fraction_single(struct fprop_global *p,
*/
#define PROP_BATCH (8*(1+ilog2(nr_cpu_ids)))
-int fprop_local_init_percpu(struct fprop_local_percpu *pl)
+int fprop_local_init_percpu(struct fprop_local_percpu *pl, gfp_t gfp)
{
int err;
- err = percpu_counter_init(&pl->events, 0, GFP_KERNEL);
+ err = percpu_counter_init(&pl->events, 0, gfp);
if (err)
return err;
pl->period = 0;
diff --git a/lib/proportions.c b/lib/proportions.c
index ca95f8d..6f72429 100644
--- a/lib/proportions.c
+++ b/lib/proportions.c
@@ -73,7 +73,7 @@
#include <linux/proportions.h>
#include <linux/rcupdate.h>
-int prop_descriptor_init(struct prop_descriptor *pd, int shift)
+int prop_descriptor_init(struct prop_descriptor *pd, int shift, gfp_t gfp)
{
int err;
@@ -83,11 +83,11 @@ int prop_descriptor_init(struct prop_descriptor *pd, int shift)
pd->index = 0;
pd->pg[0].shift = shift;
mutex_init(&pd->mutex);
- err = percpu_counter_init(&pd->pg[0].events, 0, GFP_KERNEL);
+ err = percpu_counter_init(&pd->pg[0].events, 0, gfp);
if (err)
goto out;
- err = percpu_counter_init(&pd->pg[1].events, 0, GFP_KERNEL);
+ err = percpu_counter_init(&pd->pg[1].events, 0, gfp);
if (err)
percpu_counter_destroy(&pd->pg[0].events);
@@ -188,12 +188,12 @@ prop_adjust_shift(int *pl_shift, unsigned long *pl_period, int new_shift)
#define PROP_BATCH (8*(1+ilog2(nr_cpu_ids)))
-int prop_local_init_percpu(struct prop_local_percpu *pl)
+int prop_local_init_percpu(struct prop_local_percpu *pl, gfp_t gfp)
{
raw_spin_lock_init(&pl->lock);
pl->shift = 0;
pl->period = 0;
- return percpu_counter_init(&pl->events, 0, GFP_KERNEL);
+ return percpu_counter_init(&pl->events, 0, gfp);
}
void prop_local_destroy_percpu(struct prop_local_percpu *pl)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index f19a818..64ec49d 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -470,7 +470,7 @@ int bdi_init(struct backing_dev_info *bdi)
bdi->write_bandwidth = INIT_BW;
bdi->avg_write_bandwidth = INIT_BW;
- err = fprop_local_init_percpu(&bdi->completions);
+ err = fprop_local_init_percpu(&bdi->completions, GFP_KERNEL);
if (err) {
err:
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 91d73ef..5085994 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -1777,7 +1777,7 @@ void __init page_writeback_init(void)
writeback_set_ratelimit();
register_cpu_notifier(&ratelimit_nb);
- fprop_global_init(&writeout_completions);
+ fprop_global_init(&writeout_completions, GFP_KERNEL);
}
/**
--
1.9.3
* [PATCH 3/3] percpu-refcount: add @gfp to percpu_ref_init()
2014-08-26 0:43 [PATCHSET percpu/for-3.18] add @gfp to init functions of percpu data structures Tejun Heo
2014-08-26 0:43 ` [PATCH 1/3] percpu_counter: add @gfp to percpu_counter_init() Tejun Heo
2014-08-26 0:43 ` [PATCH 2/3] proportions: add @gfp to init functions Tejun Heo
@ 2014-08-26 0:43 ` Tejun Heo
2014-08-26 15:35 ` [PATCH v2 " Tejun Heo
2014-09-08 0:30 ` [PATCH 0.5/3] percpu_counter: make percpu_counters_lock irq-safe Tejun Heo
2014-09-08 0:30 ` [PATCHSET percpu/for-3.18] add @gfp to init functions of percpu data structures Tejun Heo
4 siblings, 1 reply; 10+ messages in thread
From: Tejun Heo @ 2014-08-26 0:43 UTC (permalink / raw)
To: linux-kernel
Cc: cl, Tejun Heo, Kent Overstreet, Benjamin LaHaise, Li Zefan,
Nicholas A. Bellinger
The percpu allocator now supports an allocation mask. Add @gfp to
percpu_ref_init() so that !GFP_KERNEL allocation masks can be used
with percpu_refs too.
This patch doesn't make any functional difference.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Kent Overstreet <koverstreet@google.com>
Cc: Benjamin LaHaise <bcrl@kvack.org>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
drivers/target/target_core_tpg.c | 3 ++-
fs/aio.c | 4 ++--
include/linux/percpu-refcount.h | 3 ++-
kernel/cgroup.c | 6 +++---
lib/percpu-refcount.c | 6 ++++--
5 files changed, 13 insertions(+), 9 deletions(-)
diff --git a/drivers/target/target_core_tpg.c b/drivers/target/target_core_tpg.c
index fddfae6..4ab6da3 100644
--- a/drivers/target/target_core_tpg.c
+++ b/drivers/target/target_core_tpg.c
@@ -819,7 +819,8 @@ int core_tpg_add_lun(
{
int ret;
- ret = percpu_ref_init(&lun->lun_ref, core_tpg_lun_ref_release);
+ ret = percpu_ref_init(&lun->lun_ref, core_tpg_lun_ref_release,
+ GFP_KERNEL);
if (ret < 0)
return ret;
diff --git a/fs/aio.c b/fs/aio.c
index bd7ec2c..93fbcc0f 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -666,10 +666,10 @@ static struct kioctx *ioctx_alloc(unsigned nr_events)
INIT_LIST_HEAD(&ctx->active_reqs);
- if (percpu_ref_init(&ctx->users, free_ioctx_users))
+ if (percpu_ref_init(&ctx->users, free_ioctx_users, GFP_KERNEL))
goto err;
- if (percpu_ref_init(&ctx->reqs, free_ioctx_reqs))
+ if (percpu_ref_init(&ctx->reqs, free_ioctx_reqs, GFP_KERNEL))
goto err;
ctx->cpu = alloc_percpu(struct kioctx_cpu);
diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 3dfbf23..ee83251 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -49,6 +49,7 @@
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/rcupdate.h>
+#include <linux/gfp.h>
struct percpu_ref;
typedef void (percpu_ref_func_t)(struct percpu_ref *);
@@ -66,7 +67,7 @@ struct percpu_ref {
};
int __must_check percpu_ref_init(struct percpu_ref *ref,
- percpu_ref_func_t *release);
+ percpu_ref_func_t *release, gfp_t gfp);
void percpu_ref_reinit(struct percpu_ref *ref);
void percpu_ref_exit(struct percpu_ref *ref);
void percpu_ref_kill_and_confirm(struct percpu_ref *ref,
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 7dc8788..589b4d8 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -1628,7 +1628,7 @@ static int cgroup_setup_root(struct cgroup_root *root, unsigned int ss_mask)
goto out;
root_cgrp->id = ret;
- ret = percpu_ref_init(&root_cgrp->self.refcnt, css_release);
+ ret = percpu_ref_init(&root_cgrp->self.refcnt, css_release, GFP_KERNEL);
if (ret)
goto out;
@@ -4487,7 +4487,7 @@ static int create_css(struct cgroup *cgrp, struct cgroup_subsys *ss,
init_and_link_css(css, ss, cgrp);
- err = percpu_ref_init(&css->refcnt, css_release);
+ err = percpu_ref_init(&css->refcnt, css_release, GFP_KERNEL);
if (err)
goto err_free_css;
@@ -4555,7 +4555,7 @@ static int cgroup_mkdir(struct kernfs_node *parent_kn, const char *name,
goto out_unlock;
}
- ret = percpu_ref_init(&cgrp->self.refcnt, css_release);
+ ret = percpu_ref_init(&cgrp->self.refcnt, css_release, GFP_KERNEL);
if (ret)
goto out_free_cgrp;
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index fe5a334..ff99032 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -40,6 +40,7 @@ static unsigned __percpu *pcpu_count_ptr(struct percpu_ref *ref)
* percpu_ref_init - initialize a percpu refcount
* @ref: percpu_ref to initialize
* @release: function which will be called when refcount hits 0
+ * @gfp: allocation mask to use
*
* Initializes the refcount in single atomic counter mode with a refcount of 1;
* analagous to atomic_set(ref, 1).
@@ -47,11 +48,12 @@ static unsigned __percpu *pcpu_count_ptr(struct percpu_ref *ref)
* Note that @release must not sleep - it may potentially be called from RCU
* callback context by percpu_ref_kill().
*/
-int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release)
+int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release,
+ gfp_t gfp)
{
atomic_set(&ref->count, 1 + PCPU_COUNT_BIAS);
- ref->pcpu_count_ptr = (unsigned long)alloc_percpu(unsigned);
+ ref->pcpu_count_ptr = (unsigned long)alloc_percpu_gfp(unsigned, gfp);
if (!ref->pcpu_count_ptr)
return -ENOMEM;
--
1.9.3
* Re: [PATCH 1/3] percpu_counter: add @gfp to percpu_counter_init()
2014-08-26 0:43 ` [PATCH 1/3] percpu_counter: add @gfp to percpu_counter_init() Tejun Heo
@ 2014-08-26 0:46 ` David Miller
2014-08-26 10:21 ` Jan Kara
1 sibling, 0 replies; 10+ messages in thread
From: David Miller @ 2014-08-26 0:46 UTC (permalink / raw)
To: tj; +Cc: linux-kernel, cl, x86, axboe, jack, tytso, viro, akpm
From: Tejun Heo <tj@kernel.org>
Date: Mon, 25 Aug 2014 20:43:30 -0400
> The percpu allocator now supports an allocation mask. Add @gfp to
> percpu_counter_init() so that !GFP_KERNEL allocation masks can be used
> with percpu_counters too.
>
> We could have left percpu_counter_init() alone and added
> percpu_counter_init_gfp(); however, the number of users isn't that
> high and introducing _gfp variants to all percpu data structures would
> be quite ugly, so let's just do the conversion. This is the one with
> the most users. Other percpu data structures are a lot easier to
> convert.
>
> This patch doesn't make any functional difference.
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
For networking bits:
Acked-by: David S. Miller <davem@davemloft.net>
* Re: [PATCH 2/3] proportions: add @gfp to init functions
2014-08-26 0:43 ` [PATCH 2/3] proportions: add @gfp to init functions Tejun Heo
@ 2014-08-26 10:19 ` Jan Kara
0 siblings, 0 replies; 10+ messages in thread
From: Jan Kara @ 2014-08-26 10:19 UTC (permalink / raw)
To: Tejun Heo; +Cc: linux-kernel, cl, Jan Kara, Peter Zijlstra
On Mon 25-08-14 20:43:31, Tejun Heo wrote:
> The percpu allocator now supports an allocation mask. Add @gfp to the
> [flex_]proportions init functions so that !GFP_KERNEL allocation masks
> can be used with them too.
>
> This patch doesn't make any functional difference.
Looks good to me. You can add:
Reviewed-by: Jan Kara <jack@suse.cz>
Honza
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Cc: Jan Kara <jack@suse.cz>
> Cc: Peter Zijlstra <peterz@infradead.org>
> ---
> include/linux/flex_proportions.h | 5 +++--
> include/linux/proportions.h | 5 +++--
> lib/flex_proportions.c | 8 ++++----
> lib/proportions.c | 10 +++++-----
> mm/backing-dev.c | 2 +-
> mm/page-writeback.c | 2 +-
> 6 files changed, 17 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/flex_proportions.h b/include/linux/flex_proportions.h
> index 4ebc49f..0d348e0 100644
> --- a/include/linux/flex_proportions.h
> +++ b/include/linux/flex_proportions.h
> @@ -10,6 +10,7 @@
> #include <linux/percpu_counter.h>
> #include <linux/spinlock.h>
> #include <linux/seqlock.h>
> +#include <linux/gfp.h>
>
> /*
> * When maximum proportion of some event type is specified, this is the
> @@ -32,7 +33,7 @@ struct fprop_global {
> seqcount_t sequence;
> };
>
> -int fprop_global_init(struct fprop_global *p);
> +int fprop_global_init(struct fprop_global *p, gfp_t gfp);
> void fprop_global_destroy(struct fprop_global *p);
> bool fprop_new_period(struct fprop_global *p, int periods);
>
> @@ -79,7 +80,7 @@ struct fprop_local_percpu {
> raw_spinlock_t lock; /* Protect period and numerator */
> };
>
> -int fprop_local_init_percpu(struct fprop_local_percpu *pl);
> +int fprop_local_init_percpu(struct fprop_local_percpu *pl, gfp_t gfp);
> void fprop_local_destroy_percpu(struct fprop_local_percpu *pl);
> void __fprop_inc_percpu(struct fprop_global *p, struct fprop_local_percpu *pl);
> void __fprop_inc_percpu_max(struct fprop_global *p, struct fprop_local_percpu *pl,
> diff --git a/include/linux/proportions.h b/include/linux/proportions.h
> index 26a8a4e..00e8e8f 100644
> --- a/include/linux/proportions.h
> +++ b/include/linux/proportions.h
> @@ -12,6 +12,7 @@
> #include <linux/percpu_counter.h>
> #include <linux/spinlock.h>
> #include <linux/mutex.h>
> +#include <linux/gfp.h>
>
> struct prop_global {
> /*
> @@ -40,7 +41,7 @@ struct prop_descriptor {
> struct mutex mutex; /* serialize the prop_global switch */
> };
>
> -int prop_descriptor_init(struct prop_descriptor *pd, int shift);
> +int prop_descriptor_init(struct prop_descriptor *pd, int shift, gfp_t gfp);
> void prop_change_shift(struct prop_descriptor *pd, int new_shift);
>
> /*
> @@ -61,7 +62,7 @@ struct prop_local_percpu {
> raw_spinlock_t lock; /* protect the snapshot state */
> };
>
> -int prop_local_init_percpu(struct prop_local_percpu *pl);
> +int prop_local_init_percpu(struct prop_local_percpu *pl, gfp_t gfp);
> void prop_local_destroy_percpu(struct prop_local_percpu *pl);
> void __prop_inc_percpu(struct prop_descriptor *pd, struct prop_local_percpu *pl);
> void prop_fraction_percpu(struct prop_descriptor *pd, struct prop_local_percpu *pl,
> diff --git a/lib/flex_proportions.c b/lib/flex_proportions.c
> index b9d026b..8f25652 100644
> --- a/lib/flex_proportions.c
> +++ b/lib/flex_proportions.c
> @@ -34,13 +34,13 @@
> */
> #include <linux/flex_proportions.h>
>
> -int fprop_global_init(struct fprop_global *p)
> +int fprop_global_init(struct fprop_global *p, gfp_t gfp)
> {
> int err;
>
> p->period = 0;
> /* Use 1 to avoid dealing with periods with 0 events... */
> - err = percpu_counter_init(&p->events, 1, GFP_KERNEL);
> + err = percpu_counter_init(&p->events, 1, gfp);
> if (err)
> return err;
> seqcount_init(&p->sequence);
> @@ -168,11 +168,11 @@ void fprop_fraction_single(struct fprop_global *p,
> */
> #define PROP_BATCH (8*(1+ilog2(nr_cpu_ids)))
>
> -int fprop_local_init_percpu(struct fprop_local_percpu *pl)
> +int fprop_local_init_percpu(struct fprop_local_percpu *pl, gfp_t gfp)
> {
> int err;
>
> - err = percpu_counter_init(&pl->events, 0, GFP_KERNEL);
> + err = percpu_counter_init(&pl->events, 0, gfp);
> if (err)
> return err;
> pl->period = 0;
> diff --git a/lib/proportions.c b/lib/proportions.c
> index ca95f8d..6f72429 100644
> --- a/lib/proportions.c
> +++ b/lib/proportions.c
> @@ -73,7 +73,7 @@
> #include <linux/proportions.h>
> #include <linux/rcupdate.h>
>
> -int prop_descriptor_init(struct prop_descriptor *pd, int shift)
> +int prop_descriptor_init(struct prop_descriptor *pd, int shift, gfp_t gfp)
> {
> int err;
>
> @@ -83,11 +83,11 @@ int prop_descriptor_init(struct prop_descriptor *pd, int shift)
> pd->index = 0;
> pd->pg[0].shift = shift;
> mutex_init(&pd->mutex);
> - err = percpu_counter_init(&pd->pg[0].events, 0, GFP_KERNEL);
> + err = percpu_counter_init(&pd->pg[0].events, 0, gfp);
> if (err)
> goto out;
>
> - err = percpu_counter_init(&pd->pg[1].events, 0, GFP_KERNEL);
> + err = percpu_counter_init(&pd->pg[1].events, 0, gfp);
> if (err)
> percpu_counter_destroy(&pd->pg[0].events);
>
> @@ -188,12 +188,12 @@ prop_adjust_shift(int *pl_shift, unsigned long *pl_period, int new_shift)
>
> #define PROP_BATCH (8*(1+ilog2(nr_cpu_ids)))
>
> -int prop_local_init_percpu(struct prop_local_percpu *pl)
> +int prop_local_init_percpu(struct prop_local_percpu *pl, gfp_t gfp)
> {
> raw_spin_lock_init(&pl->lock);
> pl->shift = 0;
> pl->period = 0;
> - return percpu_counter_init(&pl->events, 0, GFP_KERNEL);
> + return percpu_counter_init(&pl->events, 0, gfp);
> }
>
> void prop_local_destroy_percpu(struct prop_local_percpu *pl)
> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index f19a818..64ec49d 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -470,7 +470,7 @@ int bdi_init(struct backing_dev_info *bdi)
> bdi->write_bandwidth = INIT_BW;
> bdi->avg_write_bandwidth = INIT_BW;
>
> - err = fprop_local_init_percpu(&bdi->completions);
> + err = fprop_local_init_percpu(&bdi->completions, GFP_KERNEL);
>
> if (err) {
> err:
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index 91d73ef..5085994 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -1777,7 +1777,7 @@ void __init page_writeback_init(void)
> writeback_set_ratelimit();
> register_cpu_notifier(&ratelimit_nb);
>
> - fprop_global_init(&writeout_completions);
> + fprop_global_init(&writeout_completions, GFP_KERNEL);
> }
>
> /**
> --
> 1.9.3
>
--
Jan Kara <jack@suse.cz>
SUSE Labs, CR
* Re: [PATCH 1/3] percpu_counter: add @gfp to percpu_counter_init()
2014-08-26 0:43 ` [PATCH 1/3] percpu_counter: add @gfp to percpu_counter_init() Tejun Heo
2014-08-26 0:46 ` David Miller
@ 2014-08-26 10:21 ` Jan Kara
1 sibling, 0 replies; 10+ messages in thread
From: Jan Kara @ 2014-08-26 10:21 UTC (permalink / raw)
To: Tejun Heo
Cc: linux-kernel, cl, x86, Jens Axboe, Jan Kara, Theodore Ts'o,
Alexander Viro, David S. Miller, Andrew Morton
On Mon 25-08-14 20:43:30, Tejun Heo wrote:
> Percpu allocator now supports allocation mask. Add @gfp to
> percpu_counter_init() so that !GFP_KERNEL allocation masks can be used
> with percpu_counters too.
>
> We could have left percpu_counter_init() alone and added
> percpu_counter_init_gfp(); however, the number of users isn't that
> high and introducing _gfp variants to all percpu data structures would
> be quite ugly, so let's just do the conversion. This is the one with
> the most users. Other percpu data structures are a lot easier to
> convert.
>
> This patch doesn't make any functional difference.
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Cc: x86@kernel.org
> Cc: Jens Axboe <axboe@kernel.dk>
> Cc: Jan Kara <jack@suse.cz>
> Cc: "Theodore Ts'o" <tytso@mit.edu>
> Cc: Alexander Viro <viro@zeniv.linux.org.uk>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Andrew Morton <akpm@linux-foundation.org>
For the ext?, quota, lib, bdi parts:
Acked-by: Jan Kara <jack@suse.cz>
> ---
> arch/x86/kvm/mmu.c | 2 +-
> fs/btrfs/disk-io.c | 8 ++++----
> fs/btrfs/extent-tree.c | 2 +-
> fs/ext2/super.c | 6 +++---
> fs/ext3/super.c | 6 +++---
> fs/ext4/super.c | 14 +++++++++-----
> fs/file_table.c | 2 +-
> fs/quota/dquot.c | 2 +-
> fs/super.c | 3 ++-
> include/linux/percpu_counter.h | 10 ++++++----
> include/net/dst_ops.h | 2 +-
> include/net/inet_frag.h | 2 +-
> lib/flex_proportions.c | 4 ++--
> lib/percpu_counter.c | 4 ++--
> lib/proportions.c | 6 +++---
> mm/backing-dev.c | 2 +-
> mm/mmap.c | 2 +-
> mm/nommu.c | 2 +-
> mm/shmem.c | 2 +-
> net/dccp/proto.c | 2 +-
> net/ipv4/tcp.c | 4 ++--
> net/ipv4/tcp_memcontrol.c | 2 +-
> net/sctp/protocol.c | 2 +-
> 23 files changed, 49 insertions(+), 42 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 9314678..5bd53f2 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4534,7 +4534,7 @@ int kvm_mmu_module_init(void)
> if (!mmu_page_header_cache)
> goto nomem;
>
> - if (percpu_counter_init(&kvm_total_used_mmu_pages, 0))
> + if (percpu_counter_init(&kvm_total_used_mmu_pages, 0, GFP_KERNEL))
> goto nomem;
>
> register_shrinker(&mmu_shrinker);
> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> index 08e65e9..61dae01 100644
> --- a/fs/btrfs/disk-io.c
> +++ b/fs/btrfs/disk-io.c
> @@ -1180,7 +1180,7 @@ static struct btrfs_subvolume_writers *btrfs_alloc_subvolume_writers(void)
> if (!writers)
> return ERR_PTR(-ENOMEM);
>
> - ret = percpu_counter_init(&writers->counter, 0);
> + ret = percpu_counter_init(&writers->counter, 0, GFP_KERNEL);
> if (ret < 0) {
> kfree(writers);
> return ERR_PTR(ret);
> @@ -2185,7 +2185,7 @@ int open_ctree(struct super_block *sb,
> goto fail_srcu;
> }
>
> - ret = percpu_counter_init(&fs_info->dirty_metadata_bytes, 0);
> + ret = percpu_counter_init(&fs_info->dirty_metadata_bytes, 0, GFP_KERNEL);
> if (ret) {
> err = ret;
> goto fail_bdi;
> @@ -2193,13 +2193,13 @@ int open_ctree(struct super_block *sb,
> fs_info->dirty_metadata_batch = PAGE_CACHE_SIZE *
> (1 + ilog2(nr_cpu_ids));
>
> - ret = percpu_counter_init(&fs_info->delalloc_bytes, 0);
> + ret = percpu_counter_init(&fs_info->delalloc_bytes, 0, GFP_KERNEL);
> if (ret) {
> err = ret;
> goto fail_dirty_metadata_bytes;
> }
>
> - ret = percpu_counter_init(&fs_info->bio_counter, 0);
> + ret = percpu_counter_init(&fs_info->bio_counter, 0, GFP_KERNEL);
> if (ret) {
> err = ret;
> goto fail_delalloc_bytes;
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index 813537f..94ec71e 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -3493,7 +3493,7 @@ static int update_space_info(struct btrfs_fs_info *info, u64 flags,
> if (!found)
> return -ENOMEM;
>
> - ret = percpu_counter_init(&found->total_bytes_pinned, 0);
> + ret = percpu_counter_init(&found->total_bytes_pinned, 0, GFP_KERNEL);
> if (ret) {
> kfree(found);
> return ret;
> diff --git a/fs/ext2/super.c b/fs/ext2/super.c
> index b88edc0..170dc41 100644
> --- a/fs/ext2/super.c
> +++ b/fs/ext2/super.c
> @@ -1067,14 +1067,14 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
> ext2_rsv_window_add(sb, &sbi->s_rsv_window_head);
>
> err = percpu_counter_init(&sbi->s_freeblocks_counter,
> - ext2_count_free_blocks(sb));
> + ext2_count_free_blocks(sb), GFP_KERNEL);
> if (!err) {
> err = percpu_counter_init(&sbi->s_freeinodes_counter,
> - ext2_count_free_inodes(sb));
> + ext2_count_free_inodes(sb), GFP_KERNEL);
> }
> if (!err) {
> err = percpu_counter_init(&sbi->s_dirs_counter,
> - ext2_count_dirs(sb));
> + ext2_count_dirs(sb), GFP_KERNEL);
> }
> if (err) {
> ext2_msg(sb, KERN_ERR, "error: insufficient memory");
> diff --git a/fs/ext3/super.c b/fs/ext3/super.c
> index 08cdfe5..eba021b 100644
> --- a/fs/ext3/super.c
> +++ b/fs/ext3/super.c
> @@ -2039,14 +2039,14 @@ static int ext3_fill_super (struct super_block *sb, void *data, int silent)
> goto failed_mount2;
> }
> err = percpu_counter_init(&sbi->s_freeblocks_counter,
> - ext3_count_free_blocks(sb));
> + ext3_count_free_blocks(sb), GFP_KERNEL);
> if (!err) {
> err = percpu_counter_init(&sbi->s_freeinodes_counter,
> - ext3_count_free_inodes(sb));
> + ext3_count_free_inodes(sb), GFP_KERNEL);
> }
> if (!err) {
> err = percpu_counter_init(&sbi->s_dirs_counter,
> - ext3_count_dirs(sb));
> + ext3_count_dirs(sb), GFP_KERNEL);
> }
> if (err) {
> ext3_msg(sb, KERN_ERR, "error: insufficient memory");
> diff --git a/fs/ext4/super.c b/fs/ext4/super.c
> index 32b43ad..e25ca8f 100644
> --- a/fs/ext4/super.c
> +++ b/fs/ext4/super.c
> @@ -3891,7 +3891,8 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
> /* Register extent status tree shrinker */
> ext4_es_register_shrinker(sbi);
>
> - if ((err = percpu_counter_init(&sbi->s_extent_cache_cnt, 0)) != 0) {
> + err = percpu_counter_init(&sbi->s_extent_cache_cnt, 0, GFP_KERNEL);
> + if (err) {
> ext4_msg(sb, KERN_ERR, "insufficient memory");
> goto failed_mount3;
> }
> @@ -4105,17 +4106,20 @@ no_journal:
> block = ext4_count_free_clusters(sb);
> ext4_free_blocks_count_set(sbi->s_es,
> EXT4_C2B(sbi, block));
> - err = percpu_counter_init(&sbi->s_freeclusters_counter, block);
> + err = percpu_counter_init(&sbi->s_freeclusters_counter, block,
> + GFP_KERNEL);
> if (!err) {
> unsigned long freei = ext4_count_free_inodes(sb);
> sbi->s_es->s_free_inodes_count = cpu_to_le32(freei);
> - err = percpu_counter_init(&sbi->s_freeinodes_counter, freei);
> + err = percpu_counter_init(&sbi->s_freeinodes_counter, freei,
> + GFP_KERNEL);
> }
> if (!err)
> err = percpu_counter_init(&sbi->s_dirs_counter,
> - ext4_count_dirs(sb));
> + ext4_count_dirs(sb), GFP_KERNEL);
> if (!err)
> - err = percpu_counter_init(&sbi->s_dirtyclusters_counter, 0);
> + err = percpu_counter_init(&sbi->s_dirtyclusters_counter, 0,
> + GFP_KERNEL);
> if (err) {
> ext4_msg(sb, KERN_ERR, "insufficient memory");
> goto failed_mount6;
> diff --git a/fs/file_table.c b/fs/file_table.c
> index 385bfd3..0bab12b 100644
> --- a/fs/file_table.c
> +++ b/fs/file_table.c
> @@ -331,5 +331,5 @@ void __init files_init(unsigned long mempages)
>
> n = (mempages * (PAGE_SIZE / 1024)) / 10;
> files_stat.max_files = max_t(unsigned long, n, NR_FILE);
> - percpu_counter_init(&nr_files, 0);
> + percpu_counter_init(&nr_files, 0, GFP_KERNEL);
> }
> diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
> index f2d0eee..8b663b2 100644
> --- a/fs/quota/dquot.c
> +++ b/fs/quota/dquot.c
> @@ -2725,7 +2725,7 @@ static int __init dquot_init(void)
> panic("Cannot create dquot hash table");
>
> for (i = 0; i < _DQST_DQSTAT_LAST; i++) {
> - ret = percpu_counter_init(&dqstats.counter[i], 0);
> + ret = percpu_counter_init(&dqstats.counter[i], 0, GFP_KERNEL);
> if (ret)
> panic("Cannot create dquot stat counters");
> }
> diff --git a/fs/super.c b/fs/super.c
> index b9a214d..1b83610 100644
> --- a/fs/super.c
> +++ b/fs/super.c
> @@ -175,7 +175,8 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags)
> goto fail;
>
> for (i = 0; i < SB_FREEZE_LEVELS; i++) {
> - if (percpu_counter_init(&s->s_writers.counter[i], 0) < 0)
> + if (percpu_counter_init(&s->s_writers.counter[i], 0,
> + GFP_KERNEL) < 0)
> goto fail;
> lockdep_init_map(&s->s_writers.lock_map[i], sb_writers_name[i],
> &type->s_writers_key[i], 0);
> diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
> index d5dd465..50e5009 100644
> --- a/include/linux/percpu_counter.h
> +++ b/include/linux/percpu_counter.h
> @@ -12,6 +12,7 @@
> #include <linux/threads.h>
> #include <linux/percpu.h>
> #include <linux/types.h>
> +#include <linux/gfp.h>
>
> #ifdef CONFIG_SMP
>
> @@ -26,14 +27,14 @@ struct percpu_counter {
>
> extern int percpu_counter_batch;
>
> -int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
> +int __percpu_counter_init(struct percpu_counter *fbc, s64 amount, gfp_t gfp,
> struct lock_class_key *key);
>
> -#define percpu_counter_init(fbc, value) \
> +#define percpu_counter_init(fbc, value, gfp) \
> ({ \
> static struct lock_class_key __key; \
> \
> - __percpu_counter_init(fbc, value, &__key); \
> + __percpu_counter_init(fbc, value, gfp, &__key); \
> })
>
> void percpu_counter_destroy(struct percpu_counter *fbc);
> @@ -89,7 +90,8 @@ struct percpu_counter {
> s64 count;
> };
>
> -static inline int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
> +static inline int percpu_counter_init(struct percpu_counter *fbc, s64 amount,
> + gfp_t gfp)
> {
> fbc->count = amount;
> return 0;
> diff --git a/include/net/dst_ops.h b/include/net/dst_ops.h
> index 2f26dfb..1f99a1d 100644
> --- a/include/net/dst_ops.h
> +++ b/include/net/dst_ops.h
> @@ -63,7 +63,7 @@ static inline void dst_entries_add(struct dst_ops *dst, int val)
>
> static inline int dst_entries_init(struct dst_ops *dst)
> {
> - return percpu_counter_init(&dst->pcpuc_entries, 0);
> + return percpu_counter_init(&dst->pcpuc_entries, 0, GFP_KERNEL);
> }
>
> static inline void dst_entries_destroy(struct dst_ops *dst)
> diff --git a/include/net/inet_frag.h b/include/net/inet_frag.h
> index 65a8855..8d17655 100644
> --- a/include/net/inet_frag.h
> +++ b/include/net/inet_frag.h
> @@ -151,7 +151,7 @@ static inline void add_frag_mem_limit(struct inet_frag_queue *q, int i)
>
> static inline void init_frag_mem_limit(struct netns_frags *nf)
> {
> - percpu_counter_init(&nf->mem, 0);
> + percpu_counter_init(&nf->mem, 0, GFP_KERNEL);
> }
>
> static inline unsigned int sum_frag_mem_limit(struct netns_frags *nf)
> diff --git a/lib/flex_proportions.c b/lib/flex_proportions.c
> index ebf3bac..b9d026b 100644
> --- a/lib/flex_proportions.c
> +++ b/lib/flex_proportions.c
> @@ -40,7 +40,7 @@ int fprop_global_init(struct fprop_global *p)
>
> p->period = 0;
> /* Use 1 to avoid dealing with periods with 0 events... */
> - err = percpu_counter_init(&p->events, 1);
> + err = percpu_counter_init(&p->events, 1, GFP_KERNEL);
> if (err)
> return err;
> seqcount_init(&p->sequence);
> @@ -172,7 +172,7 @@ int fprop_local_init_percpu(struct fprop_local_percpu *pl)
> {
> int err;
>
> - err = percpu_counter_init(&pl->events, 0);
> + err = percpu_counter_init(&pl->events, 0, GFP_KERNEL);
> if (err)
> return err;
> pl->period = 0;
> diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
> index 7dd33577..1a28f0f 100644
> --- a/lib/percpu_counter.c
> +++ b/lib/percpu_counter.c
> @@ -112,13 +112,13 @@ s64 __percpu_counter_sum(struct percpu_counter *fbc)
> }
> EXPORT_SYMBOL(__percpu_counter_sum);
>
> -int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
> +int __percpu_counter_init(struct percpu_counter *fbc, s64 amount, gfp_t gfp,
> struct lock_class_key *key)
> {
> raw_spin_lock_init(&fbc->lock);
> lockdep_set_class(&fbc->lock, key);
> fbc->count = amount;
> - fbc->counters = alloc_percpu(s32);
> + fbc->counters = alloc_percpu_gfp(s32, gfp);
> if (!fbc->counters)
> return -ENOMEM;
>
> diff --git a/lib/proportions.c b/lib/proportions.c
> index 05df848..ca95f8d 100644
> --- a/lib/proportions.c
> +++ b/lib/proportions.c
> @@ -83,11 +83,11 @@ int prop_descriptor_init(struct prop_descriptor *pd, int shift)
> pd->index = 0;
> pd->pg[0].shift = shift;
> mutex_init(&pd->mutex);
> - err = percpu_counter_init(&pd->pg[0].events, 0);
> + err = percpu_counter_init(&pd->pg[0].events, 0, GFP_KERNEL);
> if (err)
> goto out;
>
> - err = percpu_counter_init(&pd->pg[1].events, 0);
> + err = percpu_counter_init(&pd->pg[1].events, 0, GFP_KERNEL);
> if (err)
> percpu_counter_destroy(&pd->pg[0].events);
>
> @@ -193,7 +193,7 @@ int prop_local_init_percpu(struct prop_local_percpu *pl)
> raw_spin_lock_init(&pl->lock);
> pl->shift = 0;
> pl->period = 0;
> - return percpu_counter_init(&pl->events, 0);
> + return percpu_counter_init(&pl->events, 0, GFP_KERNEL);
> }
>
> void prop_local_destroy_percpu(struct prop_local_percpu *pl)
> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index 1706cbb..f19a818 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -455,7 +455,7 @@ int bdi_init(struct backing_dev_info *bdi)
> bdi_wb_init(&bdi->wb, bdi);
>
> for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
> - err = percpu_counter_init(&bdi->bdi_stat[i], 0);
> + err = percpu_counter_init(&bdi->bdi_stat[i], 0, GFP_KERNEL);
> if (err)
> goto err;
> }
> diff --git a/mm/mmap.c b/mm/mmap.c
> index c1f2ea4..d7ec93e 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -3196,7 +3196,7 @@ void __init mmap_init(void)
> {
> int ret;
>
> - ret = percpu_counter_init(&vm_committed_as, 0);
> + ret = percpu_counter_init(&vm_committed_as, 0, GFP_KERNEL);
> VM_BUG_ON(ret);
> }
>
> diff --git a/mm/nommu.c b/mm/nommu.c
> index a881d96..bd1808e 100644
> --- a/mm/nommu.c
> +++ b/mm/nommu.c
> @@ -539,7 +539,7 @@ void __init mmap_init(void)
> {
> int ret;
>
> - ret = percpu_counter_init(&vm_committed_as, 0);
> + ret = percpu_counter_init(&vm_committed_as, 0, GFP_KERNEL);
> VM_BUG_ON(ret);
> vm_region_jar = KMEM_CACHE(vm_region, SLAB_PANIC);
> }
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 0e5fb22..d4bc55d 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2993,7 +2993,7 @@ int shmem_fill_super(struct super_block *sb, void *data, int silent)
> #endif
>
> spin_lock_init(&sbinfo->stat_lock);
> - if (percpu_counter_init(&sbinfo->used_blocks, 0))
> + if (percpu_counter_init(&sbinfo->used_blocks, 0, GFP_KERNEL))
> goto failed;
> sbinfo->free_inodes = sbinfo->max_inodes;
>
> diff --git a/net/dccp/proto.c b/net/dccp/proto.c
> index de2c1e7..e421edd 100644
> --- a/net/dccp/proto.c
> +++ b/net/dccp/proto.c
> @@ -1115,7 +1115,7 @@ static int __init dccp_init(void)
>
> BUILD_BUG_ON(sizeof(struct dccp_skb_cb) >
> FIELD_SIZEOF(struct sk_buff, cb));
> - rc = percpu_counter_init(&dccp_orphan_count, 0);
> + rc = percpu_counter_init(&dccp_orphan_count, 0, GFP_KERNEL);
> if (rc)
> goto out_fail;
> rc = -ENOBUFS;
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 541f26a..d59c260 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -3188,8 +3188,8 @@ void __init tcp_init(void)
>
> BUILD_BUG_ON(sizeof(struct tcp_skb_cb) > sizeof(skb->cb));
>
> - percpu_counter_init(&tcp_sockets_allocated, 0);
> - percpu_counter_init(&tcp_orphan_count, 0);
> + percpu_counter_init(&tcp_sockets_allocated, 0, GFP_KERNEL);
> + percpu_counter_init(&tcp_orphan_count, 0, GFP_KERNEL);
> tcp_hashinfo.bind_bucket_cachep =
> kmem_cache_create("tcp_bind_bucket",
> sizeof(struct inet_bind_bucket), 0,
> diff --git a/net/ipv4/tcp_memcontrol.c b/net/ipv4/tcp_memcontrol.c
> index 3af5226..1d19135 100644
> --- a/net/ipv4/tcp_memcontrol.c
> +++ b/net/ipv4/tcp_memcontrol.c
> @@ -32,7 +32,7 @@ int tcp_init_cgroup(struct mem_cgroup *memcg, struct cgroup_subsys *ss)
> res_parent = &parent_cg->memory_allocated;
>
> res_counter_init(&cg_proto->memory_allocated, res_parent);
> - percpu_counter_init(&cg_proto->sockets_allocated, 0);
> + percpu_counter_init(&cg_proto->sockets_allocated, 0, GFP_KERNEL);
>
> return 0;
> }
> diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
> index 6240834..f00a85a 100644
> --- a/net/sctp/protocol.c
> +++ b/net/sctp/protocol.c
> @@ -1341,7 +1341,7 @@ static __init int sctp_init(void)
> if (!sctp_chunk_cachep)
> goto err_chunk_cachep;
>
> - status = percpu_counter_init(&sctp_sockets_allocated, 0);
> + status = percpu_counter_init(&sctp_sockets_allocated, 0, GFP_KERNEL);
> if (status)
> goto err_percpu_counter_init;
>
> --
> 1.9.3
>
--
Jan Kara <jack@suse.cz>
SUSE Labs, CR
* [PATCH v2 3/3] percpu-refcount: add @gfp to percpu_ref_init()
2014-08-26 0:43 ` [PATCH 3/3] percpu-refcount: add @gfp to percpu_ref_init() Tejun Heo
@ 2014-08-26 15:35 ` Tejun Heo
0 siblings, 0 replies; 10+ messages in thread
From: Tejun Heo @ 2014-08-26 15:35 UTC (permalink / raw)
To: linux-kernel
Cc: cl, Kent Overstreet, Benjamin LaHaise, Li Zefan,
Nicholas A. Bellinger
From ad5328cfb8c0d828e07ac428fdf3fbcb4f8698ad Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@kernel.org>
Date: Tue, 26 Aug 2014 11:33:17 -0400
Percpu allocator now supports allocation mask. Add @gfp to
percpu_ref_init() so that !GFP_KERNEL allocation masks can be used
with percpu_refs too.
This patch doesn't make any functional difference.
v2: blk-mq conversion was missing. Updated.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Kent Overstreet <koverstreet@google.com>
Cc: Benjamin LaHaise <bcrl@kvack.org>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
Cc: Jens Axboe <axboe@kernel.dk>
---
Missed blk-mq conversion while forward porting to v3.17-rc1. Updated.
Thanks.
block/blk-mq.c | 3 ++-
drivers/target/target_core_tpg.c | 3 ++-
fs/aio.c | 4 ++--
include/linux/percpu-refcount.h | 3 ++-
kernel/cgroup.c | 6 +++---
lib/percpu-refcount.c | 6 ++++--
6 files changed, 15 insertions(+), 10 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5189cb1..702df07 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1776,7 +1776,8 @@ struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *set)
if (!q)
goto err_hctxs;
- if (percpu_ref_init(&q->mq_usage_counter, blk_mq_usage_counter_release))
+ if (percpu_ref_init(&q->mq_usage_counter, blk_mq_usage_counter_release,
+ GFP_KERNEL))
goto err_map;
setup_timer(&q->timeout, blk_mq_rq_timer, (unsigned long) q);
diff --git a/drivers/target/target_core_tpg.c b/drivers/target/target_core_tpg.c
index fddfae6..4ab6da3 100644
--- a/drivers/target/target_core_tpg.c
+++ b/drivers/target/target_core_tpg.c
@@ -819,7 +819,8 @@ int core_tpg_add_lun(
{
int ret;
- ret = percpu_ref_init(&lun->lun_ref, core_tpg_lun_ref_release);
+ ret = percpu_ref_init(&lun->lun_ref, core_tpg_lun_ref_release,
+ GFP_KERNEL);
if (ret < 0)
return ret;
diff --git a/fs/aio.c b/fs/aio.c
index bd7ec2c..93fbcc0f 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -666,10 +666,10 @@ static struct kioctx *ioctx_alloc(unsigned nr_events)
INIT_LIST_HEAD(&ctx->active_reqs);
- if (percpu_ref_init(&ctx->users, free_ioctx_users))
+ if (percpu_ref_init(&ctx->users, free_ioctx_users, GFP_KERNEL))
goto err;
- if (percpu_ref_init(&ctx->reqs, free_ioctx_reqs))
+ if (percpu_ref_init(&ctx->reqs, free_ioctx_reqs, GFP_KERNEL))
goto err;
ctx->cpu = alloc_percpu(struct kioctx_cpu);
diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 3dfbf23..ee83251 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -49,6 +49,7 @@
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/rcupdate.h>
+#include <linux/gfp.h>
struct percpu_ref;
typedef void (percpu_ref_func_t)(struct percpu_ref *);
@@ -66,7 +67,7 @@ struct percpu_ref {
};
int __must_check percpu_ref_init(struct percpu_ref *ref,
- percpu_ref_func_t *release);
+ percpu_ref_func_t *release, gfp_t gfp);
void percpu_ref_reinit(struct percpu_ref *ref);
void percpu_ref_exit(struct percpu_ref *ref);
void percpu_ref_kill_and_confirm(struct percpu_ref *ref,
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 7dc8788..589b4d8 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -1628,7 +1628,7 @@ static int cgroup_setup_root(struct cgroup_root *root, unsigned int ss_mask)
goto out;
root_cgrp->id = ret;
- ret = percpu_ref_init(&root_cgrp->self.refcnt, css_release);
+ ret = percpu_ref_init(&root_cgrp->self.refcnt, css_release, GFP_KERNEL);
if (ret)
goto out;
@@ -4487,7 +4487,7 @@ static int create_css(struct cgroup *cgrp, struct cgroup_subsys *ss,
init_and_link_css(css, ss, cgrp);
- err = percpu_ref_init(&css->refcnt, css_release);
+ err = percpu_ref_init(&css->refcnt, css_release, GFP_KERNEL);
if (err)
goto err_free_css;
@@ -4555,7 +4555,7 @@ static int cgroup_mkdir(struct kernfs_node *parent_kn, const char *name,
goto out_unlock;
}
- ret = percpu_ref_init(&cgrp->self.refcnt, css_release);
+ ret = percpu_ref_init(&cgrp->self.refcnt, css_release, GFP_KERNEL);
if (ret)
goto out_free_cgrp;
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index fe5a334..ff99032 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -40,6 +40,7 @@ static unsigned __percpu *pcpu_count_ptr(struct percpu_ref *ref)
* percpu_ref_init - initialize a percpu refcount
* @ref: percpu_ref to initialize
* @release: function which will be called when refcount hits 0
+ * @gfp: allocation mask to use
*
* Initializes the refcount in single atomic counter mode with a refcount of 1;
* analagous to atomic_set(ref, 1).
@@ -47,11 +48,12 @@ static unsigned __percpu *pcpu_count_ptr(struct percpu_ref *ref)
* Note that @release must not sleep - it may potentially be called from RCU
* callback context by percpu_ref_kill().
*/
-int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release)
+int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release,
+ gfp_t gfp)
{
atomic_set(&ref->count, 1 + PCPU_COUNT_BIAS);
- ref->pcpu_count_ptr = (unsigned long)alloc_percpu(unsigned);
+ ref->pcpu_count_ptr = (unsigned long)alloc_percpu_gfp(unsigned, gfp);
if (!ref->pcpu_count_ptr)
return -ENOMEM;
--
1.9.3
* [PATCH 0.5/3] percpu_counter: make percpu_counters_lock irq-safe
2014-08-26 0:43 [PATCHSET percpu/for-3.18] add @gfp to init functions of percpu data structures Tejun Heo
` (2 preceding siblings ...)
2014-08-26 0:43 ` [PATCH 3/3] percpu-refcount: add @gfp to percpu_ref_init() Tejun Heo
@ 2014-09-08 0:30 ` Tejun Heo
2014-09-08 0:30 ` [PATCHSET percpu/for-3.18] add @gfp to init functions of percpu data structures Tejun Heo
4 siblings, 0 replies; 10+ messages in thread
From: Tejun Heo @ 2014-09-08 0:30 UTC (permalink / raw)
To: linux-kernel; +Cc: cl
percpu_counter is scheduled to grow @gfp support to allow atomic
initialization. This patch makes percpu_counters_lock irq-safe so
that it can be safely used from atomic contexts.
Signed-off-by: Tejun Heo <tj@kernel.org>
---
lib/percpu_counter.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -115,6 +115,8 @@ EXPORT_SYMBOL(__percpu_counter_sum);
int __percpu_counter_init(struct percpu_counter *fbc, s64 amount, gfp_t gfp,
struct lock_class_key *key)
{
+ unsigned long flags __maybe_unused;
+
raw_spin_lock_init(&fbc->lock);
lockdep_set_class(&fbc->lock, key);
fbc->count = amount;
@@ -126,9 +128,9 @@ int __percpu_counter_init(struct percpu_
#ifdef CONFIG_HOTPLUG_CPU
INIT_LIST_HEAD(&fbc->list);
- spin_lock(&percpu_counters_lock);
+ spin_lock_irqsave(&percpu_counters_lock, flags);
list_add(&fbc->list, &percpu_counters);
- spin_unlock(&percpu_counters_lock);
+ spin_unlock_irqrestore(&percpu_counters_lock, flags);
#endif
return 0;
}
@@ -136,15 +138,17 @@ EXPORT_SYMBOL(__percpu_counter_init);
void percpu_counter_destroy(struct percpu_counter *fbc)
{
+ unsigned long flags __maybe_unused;
+
if (!fbc->counters)
return;
debug_percpu_counter_deactivate(fbc);
#ifdef CONFIG_HOTPLUG_CPU
- spin_lock(&percpu_counters_lock);
+ spin_lock_irqsave(&percpu_counters_lock, flags);
list_del(&fbc->list);
- spin_unlock(&percpu_counters_lock);
+ spin_unlock_irqrestore(&percpu_counters_lock, flags);
#endif
free_percpu(fbc->counters);
fbc->counters = NULL;
@@ -173,7 +177,7 @@ static int percpu_counter_hotcpu_callbac
return NOTIFY_OK;
cpu = (unsigned long)hcpu;
- spin_lock(&percpu_counters_lock);
+ spin_lock_irq(&percpu_counters_lock);
list_for_each_entry(fbc, &percpu_counters, list) {
s32 *pcount;
unsigned long flags;
@@ -184,7 +188,7 @@ static int percpu_counter_hotcpu_callbac
*pcount = 0;
raw_spin_unlock_irqrestore(&fbc->lock, flags);
}
- spin_unlock(&percpu_counters_lock);
+ spin_unlock_irq(&percpu_counters_lock);
#endif
return NOTIFY_OK;
}
* Re: [PATCHSET percpu/for-3.18] add @gfp to init functions of percpu data structures
2014-08-26 0:43 [PATCHSET percpu/for-3.18] add @gfp to init functions of percpu data structures Tejun Heo
` (3 preceding siblings ...)
2014-09-08 0:30 ` [PATCH 0.5/3] percpu_counter: make percpu_counters_lock irq-safe Tejun Heo
@ 2014-09-08 0:30 ` Tejun Heo
4 siblings, 0 replies; 10+ messages in thread
From: Tejun Heo @ 2014-09-08 0:30 UTC (permalink / raw)
To: linux-kernel; +Cc: cl
Applied to percpu/for-3.18.
Thanks.
--
tejun
end of thread, newest: ~2014-09-08 0:30 UTC
Thread overview: 10+ messages
2014-08-26 0:43 [PATCHSET percpu/for-3.18] add @gfp to init functions of percpu data structures Tejun Heo
2014-08-26 0:43 ` [PATCH 1/3] percpu_counter: add @gfp to percpu_counter_init() Tejun Heo
2014-08-26 0:46 ` David Miller
2014-08-26 10:21 ` Jan Kara
2014-08-26 0:43 ` [PATCH 2/3] proportions: add @gfp to init functions Tejun Heo
2014-08-26 10:19 ` Jan Kara
2014-08-26 0:43 ` [PATCH 3/3] percpu-refcount: add @gfp to percpu_ref_init() Tejun Heo
2014-08-26 15:35 ` [PATCH v2 " Tejun Heo
2014-09-08 0:30 ` [PATCH 0.5/3] percpu_counter: make percpu_counters_lock irq-safe Tejun Heo
2014-09-08 0:30 ` [PATCHSET percpu/for-3.18] add @gfp to init functions of percpu data structures Tejun Heo