Message-ID: <1504243603.4974.68.camel@kernel.crashing.org>
Subject: Re: [PATCH v3 1/8] powerpc/xive: introduce a common routine xive_queue_page_alloc()
From: Benjamin Herrenschmidt
To: Cédric Le Goater, linuxppc-dev@lists.ozlabs.org
Cc: Michael Ellerman, Paul Mackerras, David Gibson
Date: Fri, 01 Sep 2017 15:26:43 +1000
In-Reply-To: <20170830194617.26621-2-clg@kaod.org>
References: <20170830194617.26621-1-clg@kaod.org> <20170830194617.26621-2-clg@kaod.org>
List-Id: Linux on PowerPC Developers Mail List

On Wed, 2017-08-30 at 21:46 +0200, Cédric Le Goater wrote:
> This routine will be used in the spapr backend. Also introduce a short
> xive_alloc_order() helper.
>
> Signed-off-by: Cédric Le Goater
> Reviewed-by: David Gibson

Acked-by: Benjamin Herrenschmidt

> ---
>  arch/powerpc/sysdev/xive/common.c        | 16 ++++++++++++++++
>  arch/powerpc/sysdev/xive/native.c        | 16 +++++-----------
>  arch/powerpc/sysdev/xive/xive-internal.h |  6 ++++++
>  3 files changed, 27 insertions(+), 11 deletions(-)
>
> diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
> index 6e0c9dee724f..26999ceae20e 100644
> --- a/arch/powerpc/sysdev/xive/common.c
> +++ b/arch/powerpc/sysdev/xive/common.c
> @@ -1424,6 +1424,22 @@ bool xive_core_init(const struct xive_ops *ops, void __iomem *area, u32 offset,
>  	return true;
>  }
>  
> +__be32 *xive_queue_page_alloc(unsigned int cpu, u32 queue_shift)
> +{
> +	unsigned int alloc_order;
> +	struct page *pages;
> +	__be32 *qpage;
> +
> +	alloc_order = xive_alloc_order(queue_shift);
> +	pages = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL, alloc_order);
> +	if (!pages)
> +		return ERR_PTR(-ENOMEM);
> +	qpage = (__be32 *)page_address(pages);
> +	memset(qpage, 0, 1 << queue_shift);
> +
> +	return qpage;
> +}
> +
>  static int __init xive_off(char *arg)
>  {
>  	xive_cmdline_disabled = true;
> diff --git a/arch/powerpc/sysdev/xive/native.c b/arch/powerpc/sysdev/xive/native.c
> index 0f95476b01f6..ef92a83090e1 100644
> --- a/arch/powerpc/sysdev/xive/native.c
> +++ b/arch/powerpc/sysdev/xive/native.c
> @@ -202,17 +202,12 @@ EXPORT_SYMBOL_GPL(xive_native_disable_queue);
>  static int xive_native_setup_queue(unsigned int cpu, struct xive_cpu *xc, u8 prio)
>  {
>  	struct xive_q *q = &xc->queue[prio];
> -	unsigned int alloc_order;
> -	struct page *pages;
>  	__be32 *qpage;
>  
> -	alloc_order = (xive_queue_shift > PAGE_SHIFT) ?
> -		(xive_queue_shift - PAGE_SHIFT) : 0;
> -	pages = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL, alloc_order);
> -	if (!pages)
> -		return -ENOMEM;
> -	qpage = (__be32 *)page_address(pages);
> -	memset(qpage, 0, 1 << xive_queue_shift);
> +	qpage = xive_queue_page_alloc(cpu, xive_queue_shift);
> +	if (IS_ERR(qpage))
> +		return PTR_ERR(qpage);
> +
>  	return xive_native_configure_queue(get_hard_smp_processor_id(cpu),
>  					   q, prio, qpage, xive_queue_shift, false);
>  }
> @@ -227,8 +222,7 @@ static void xive_native_cleanup_queue(unsigned int cpu, struct xive_cpu *xc, u8
>  	 * from an IPI and iounmap isn't safe
>  	 */
>  	__xive_native_disable_queue(get_hard_smp_processor_id(cpu), q, prio);
> -	alloc_order = (xive_queue_shift > PAGE_SHIFT) ?
> -		(xive_queue_shift - PAGE_SHIFT) : 0;
> +	alloc_order = xive_alloc_order(xive_queue_shift);
>  	free_pages((unsigned long)q->qpage, alloc_order);
>  	q->qpage = NULL;
>  }
> diff --git a/arch/powerpc/sysdev/xive/xive-internal.h b/arch/powerpc/sysdev/xive/xive-internal.h
> index d07ef2d29caf..dd1e2022cce4 100644
> --- a/arch/powerpc/sysdev/xive/xive-internal.h
> +++ b/arch/powerpc/sysdev/xive/xive-internal.h
> @@ -56,6 +56,12 @@ struct xive_ops {
>  
>  bool xive_core_init(const struct xive_ops *ops, void __iomem *area, u32 offset,
>  		    u8 max_prio);
> +__be32 *xive_queue_page_alloc(unsigned int cpu, u32 queue_shift);
> +
> +static inline u32 xive_alloc_order(u32 queue_shift)
> +{
> +	return (queue_shift > PAGE_SHIFT) ? (queue_shift - PAGE_SHIFT) : 0;
> +}
>  
>  extern bool xive_cmdline_disabled;
>