From mboxrd@z Thu Jan 1 00:00:00 1970
From: Reza Arbab
To: Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras
Cc: linuxppc-dev@lists.ozlabs.org, "Aneesh Kumar K.V", Balbir Singh, Alistair Popple
Subject: [PATCH v5 0/4] powerpc/mm: enable memory hotplug on radix
Date: Mon, 16 Jan 2017 13:07:42 -0600
Message-Id: <1484593666-8001-1-git-send-email-arbab@linux.vnet.ibm.com>
List-Id: Linux on PowerPC Developers Mail List

Do the plumbing needed for memory hotplug on systems using radix
pagetables, borrowing from existing vmemmap and x86 code.

This passes basic verification of plugging and removing memory, but
this stuff is tricky and I'd appreciate extra scrutiny of the series
for correctness--in particular, the adaptation of remove_pagetable()
from x86.

Former patch 1 is now a separate fix. This set still depends on it:
https://lkml.kernel.org/r/1483475991-16999-1-git-send-email-arbab@linux.vnet.ibm.com

/* changelog */

v5:
* Retain pr_info() of the size used to map each address range.
* flush_tlb_kernel_range() -> radix__flush_tlb_kernel_range()

v4:
* https://lkml.kernel.org/r/1483476218-17271-1-git-send-email-arbab@linux.vnet.ibm.com
* Sent patch 1 as a standalone fix.
* Extract a common function that can be used by both
  radix_init_pgtable() and radix__create_section_mapping().
* Reduce tlb flushing to one flush_tlb_kernel_range() per section, and
  do less granular locking of init_mm->page_table_lock.

v3:
* https://lkml.kernel.org/r/1481831443-22761-1-git-send-email-arbab@linux.vnet.ibm.com
* Port remove_pagetable() et al. from x86 for unmapping.
* [RFC] -> [PATCH]

v2:
* https://lkml.kernel.org/r/1471449083-15931-1-git-send-email-arbab@linux.vnet.ibm.com
* Do not simply fall through to vmemmap_{create,remove}_mapping(). As
  Aneesh and Michael pointed out, they are tied to
  CONFIG_SPARSEMEM_VMEMMAP and only did what I needed by luck anyway.

v1:
* https://lkml.kernel.org/r/1466699962-22412-1-git-send-email-arbab@linux.vnet.ibm.com

Reza Arbab (4):
  powerpc/mm: refactor radix physical page mapping
  powerpc/mm: add radix__create_section_mapping()
  powerpc/mm: add radix__remove_section_mapping()
  powerpc/mm: unstub radix__vmemmap_remove_mapping()

 arch/powerpc/include/asm/book3s/64/radix.h |   5 +
 arch/powerpc/mm/pgtable-book3s64.c         |   4 +-
 arch/powerpc/mm/pgtable-radix.c            | 257 ++++++++++++++++++++++++-----
 3 files changed, 225 insertions(+), 41 deletions(-)

-- 
1.8.3.1