From: Cédric Le Goater
To: linuxppc-dev@lists.ozlabs.org
Cc: Benjamin Herrenschmidt, Michael Ellerman, Paul Mackerras, David Gibson, Cédric Le Goater
Subject: [PATCH 08/10] powerpc/xive: take into account '/ibm,plat-res-int-priorities'
Date: Tue, 8 Aug 2017 10:56:18 +0200
Message-Id: <1502182579-990-9-git-send-email-clg@kaod.org>
In-Reply-To: <1502182579-990-1-git-send-email-clg@kaod.org>
References: <1502182579-990-1-git-send-email-clg@kaod.org>
List-Id: Linux on PowerPC Developers Mail List

'/ibm,plat-res-int-priorities' contains a list of priorities that the
hypervisor has reserved for its own use. Scan these ranges to choose
the lowest unused priority for the xive spapr backend.
Signed-off-by: Cédric Le Goater
---
 arch/powerpc/sysdev/xive/spapr.c | 62 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 61 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
index 7fc40047c23d..220331986bd8 100644
--- a/arch/powerpc/sysdev/xive/spapr.c
+++ b/arch/powerpc/sysdev/xive/spapr.c
@@ -532,13 +532,70 @@ static const struct xive_ops xive_spapr_ops = {
 	.name = "spapr",
 };
 
+/*
+ * get max priority from "/ibm,plat-res-int-priorities"
+ */
+static bool xive_get_max_prio(u8 *max_prio)
+{
+	struct device_node *rootdn;
+	const __be32 *reg;
+	u32 len;
+	int prio, found;
+
+	rootdn = of_find_node_by_path("/");
+	if (!rootdn) {
+		pr_err("no root node found!\n");
+		return false;
+	}
+
+	reg = of_get_property(rootdn, "ibm,plat-res-int-priorities", &len);
+	if (!reg) {
+		pr_err("Failed to read 'ibm,plat-res-int-priorities' property\n");
+		return false;
+	}
+
+	if (len % (2 * sizeof(u32)) != 0) {
+		pr_err("invalid 'ibm,plat-res-int-priorities' property\n");
+		return false;
+	}
+
+	/* HW supports priorities in the range [0-7] and 0xFF is a
+	 * wildcard priority used to mask. We scan the ranges reserved
+	 * by the hypervisor to find the lowest priority we can use.
+	 */
+	found = 0xFF;
+	for (prio = 0; prio < 8; prio++) {
+		int reserved = 0;
+		int i;
+
+		for (i = 0; i < len / (2 * sizeof(u32)); i++) {
+			int base = be32_to_cpu(reg[2 * i]);
+			int range = be32_to_cpu(reg[2 * i + 1]);
+
+			if (prio >= base && prio < base + range)
+				reserved++;
+		}
+
+		if (!reserved)
+			found = prio;
+	}
+
+	if (found == 0xFF) {
+		pr_err("no valid priority found in 'ibm,plat-res-int-priorities'\n");
+		return false;
+	}
+
+	*max_prio = found;
+	return true;
+}
+
 bool xive_spapr_init(void)
 {
 	struct device_node *np;
 	struct resource r;
 	void __iomem *tima;
 	struct property *prop;
-	u8 max_prio = 7;
+	u8 max_prio;
 	u32 val;
 	u32 len;
 	const __be32 *reg;
@@ -566,6 +623,9 @@ bool xive_spapr_init(void)
 		return false;
 	}
 
+	if (!xive_get_max_prio(&max_prio))
+		return false;
+
 	/* Feed the IRQ number allocator with the ranges given in the DT */
 	reg = of_get_property(np, "ibm,xive-lisn-ranges", &len);
 	if (!reg) {
-- 
2.7.5