Subject: Re: [PATCH 5/5] dax: "Hotplug" persistent memory for use like normal RAM
From: Jane Chu
Date: Fri, 25 Jan 2019 11:10:22 -0800
To: "Verma, Vishal L", "Williams, Dan J", "Du, Fan"
Cc: linux-kernel@vger.kernel.org, bp@suse.de, linux-mm@kvack.org,
 dave.hansen@linux.intel.com, tiwai@suse.de, akpm@linux-foundation.org,
 linux-nvdimm@lists.01.org, jglisse@redhat.com, zwisler@kernel.org,
 mhocko@suse.com, baiyaowei@cmss.chinamobile.com, thomas.lendacky@amd.com,
 "Wu, Fengguang", "Huang, Ying", bhelgaas@google.com
References: <20190124231441.37A4A305@viggo.jf.intel.com>
 <20190124231448.E102D18E@viggo.jf.intel.com>
 <0852310e-41dc-dc96-2da5-11350f5adce6@oracle.com>
 <5A90DA2E42F8AE43BC4A093BF067884825733A5B@SHSMSX104.ccr.corp.intel.com>

On 1/25/2019 10:20 AM, Verma, Vishal L wrote:
>
> On Fri, 2019-01-25 at 09:18 -0800, Dan Williams wrote:
>> On Fri, Jan 25, 2019 at 12:20 AM Du, Fan wrote:
>>> Dan
>>>
>>> Thanks for the insights!
>>>
>>> Can I say, the UCE is delivered from h/w to OS in a single way in
>>> case of machine check, only the PMEM/DAX stuff filters out the UC
>>> address and manages it in its own way via badblocks; if PMEM/DAX
>>> doesn't do so, then the common RAS workflow will kick in, right?
>>
>> The common RAS workflow always kicks in, it's just that the page state
>> presented by a DAX mapping needs distinct handling. Once it is
>> hot-plugged it no longer needs to be treated differently than "System
>> RAM".
>>
>>> And how about when ARS is involved but no machine check fired for
>>> the function of this patchset?
>>
>> The hotplug effectively disconnects this address range from the ARS
>> results. They will still be reported in the libnvdimm "region" level
>> badblocks instance, but there's no safe / coordinated way to go clear
>> those errors without additional kernel enabling. There is no "clear
>> error" semantic for "System RAM".
>>
> Perhaps as future enabling, the kernel can go perform "clear error" for
> offlined pages, and make them usable again. But I'm not sure how
> prepared mm is to re-accept pages previously offlined.
>

Offlining a DRAM-backed page due to a UC makes sense because:
a. the physical DRAM cell might still have an error;
b. a power cycle or scrubbing could potentially 'repair' the DRAM
   cell, making the page usable again.

But for a PMEM-backed page, neither is true. If a poison bit is set in
a page, that indicates the underlying hardware has completed its repair
work; all that's left is for software to recover. Secondly, because
poison is persistent, unless software explicitly clears the bit, the
page is permanently unusable.

thanks,
-jane
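
P.S. For anyone wanting to inspect the "region"-level badblocks
mentioned above: the kernel's badblocks infrastructure exposes them in
sysfs as lines of "offset length" pairs, with both values in units of
512-byte sectors. A minimal sketch of turning that text format into
byte ranges (the sample input and function name here are illustrative,
not kernel API; the exact sysfs path depends on your region):

```python
# Parse badblocks-style text ("sector_offset length" per line,
# units of 512-byte sectors) into (start_byte, end_byte) ranges.
SECTOR_SIZE = 512

def parse_badblocks(text):
    """Return a list of (start, end) byte ranges, end exclusive."""
    ranges = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        # Each entry is "<offset> <length>"; extra fields are ignored.
        offset, length = (int(field) for field in line.split()[:2])
        start = offset * SECTOR_SIZE
        ranges.append((start, start + length * SECTOR_SIZE))
    return ranges

if __name__ == "__main__":
    # Hypothetical sample: one bad sector at sector 8,
    # four bad sectors starting at sector 1024.
    sample = "8 1\n1024 4\n"
    print(parse_badblocks(sample))  # [(4096, 4608), (524288, 526336)]
```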