Subject: Re: [PATCH v5 09/12] nfit/libnvdimm: add support for issue secure
 erase DSM to Intel nvdimm
References: <153186061802.27463.14539931103401173743.stgit@djiang5-desk3.ch.intel.com>
 <153186089522.27463.4537738384176593789.stgit@djiang5-desk3.ch.intel.com>
From: Dave Jiang
Message-ID: <503e1027-eae3-d38e-20a1-fada97528687@intel.com>
Date: Thu, 19 Jul 2018 13:06:39 -0700
To: "Elliott, Robert (Persistent Memory)", "Williams, Dan J"
Cc: dhowells@redhat.com, "Schofield, Alison", keyrings@vger.kernel.org,
 keescook@chromium.org, linux-nvdimm@lists.01.org

On 07/18/2018 06:43 PM, Elliott, Robert (Persistent Memory) wrote:
>
>> -----Original Message-----
>> From: Dave Jiang
>> Sent: Wednesday, July 18, 2018 12:41 PM
>> To: Elliott, Robert (Persistent Memory); Williams, Dan J
>> Cc: dhowells@redhat.com; Schofield, Alison; keyrings@vger.kernel.org;
>> keescook@chromium.org; linux-nvdimm@lists.01.org
>> Subject: Re: [PATCH v5 09/12] nfit/libnvdimm: add support for issue
>> secure erase DSM to Intel nvdimm
>>
>> On 07/18/2018 10:27 AM, Elliott, Robert (Persistent Memory) wrote:
>>>
>>>> -----Original Message-----
>>>> From: Linux-nvdimm [mailto:linux-nvdimm-bounces@lists.01.org] On Behalf
>>>> Of Dave Jiang
>>>> Sent: Tuesday, July 17, 2018 3:55 PM
>>>> Subject: [PATCH v5 09/12] nfit/libnvdimm: add support for issue secure
>>>> erase DSM to Intel nvdimm
>>> ...
>>>> +static int intel_dimm_security_erase(struct nvdimm_bus *nvdimm_bus,
>>>> +		struct nvdimm *nvdimm, struct nvdimm_key_data *nkey)
>>> ...
>>>> +	/* DIMM unlocked, invalidate all CPU caches before we read it */
>>>> +	wbinvd_on_all_cpus();
>>>
>>> For this function, that comment should use "erased" rather than
>>> "unlocked".
>>>
>>> For both this function and intel_dimm_security_unlock() in patch 04/12,
>>> could the driver do a loop of clflushopts on one CPU via
>>> clflush_cache_range() rather than run wbinvd on all CPUs?
>>
>> The loop should work, but wbinvd will have less overall performance
>> impact for really huge ranges. Also, unlock should happen only once,
>> during NVDIMM initialization, so wbinvd should be OK.
>
> Unlike unlock, secure erase could be requested at any time.
>
> wbinvd must run on every physical core on every physical CPU, while
> clflushopt flushes everything from just one CPU core.
>
> wbinvd adds huge interrupt latencies, generating complaints like these:
> https://patchwork.kernel.org/patch/37090/
> https://lists.xenproject.org/archives/html/xen-devel/2011-09/msg00675.html
>
> Also, there's no need to disrupt cache content for other addresses;
> only the data at the addresses just erased or unlocked is a concern.
> clflushopt avoids disrupting other threads.

Yes, secure erase could be requested at any time, but it is unlikely to
happen frequently. Also, in order to do a secure erase, one must first
disable the regions impacted by the DIMM as well as the DIMM itself.
More likely than not, the admin is doing maintenance and is not
expecting running workloads (at least not on the pmem). The concern is
more that the admin wants the task to finish quickly than that there is
a performance impact while the maintenance is going on. Also, looping
over a potentially TB-sized range with CLFLUSHOPT may take a while
(many minutes?). Yes, it only flushes the cache from one CPU, but it
also causes cross-CPU traffic to maintain coherency, plus KTI traffic
and/or reads from the media to check directory bits. WBINVD is pretty
heavy-handed, but it's the only option we have that doesn't have to
plow through every cache line in the huge range.
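For reference, the two options weighed above would look roughly like the
sketch below. This is illustrative only, not code from the series:
flush_dimm_range() and its parameters are made-up names, and the chunking
is only there to show that the clflushopt path has to walk the entire
range. The only real kernel interfaces used are clflush_cache_range() and
wbinvd_on_all_cpus().

#include <linux/kernel.h>	/* min_t() */
#include <linux/sizes.h>	/* SZ_1M */
#include <asm/cacheflush.h>	/* clflush_cache_range() */
#include <asm/smp.h>		/* wbinvd_on_all_cpus() */

/*
 * Illustrative helper (not from the patch set): flush CPU caches for a
 * DIMM-backed range after it has been unlocked or erased.
 */
static void flush_dimm_range(void *addr, size_t size, bool use_clflushopt)
{
	if (use_clflushopt && addr) {
		/*
		 * One CPU walks the whole range with clflushopt: no IPIs
		 * and no eviction of unrelated cache lines, but the cost
		 * scales with the size of the range (potentially TBs).
		 */
		while (size) {
			unsigned int chunk = min_t(size_t, size, SZ_1M);

			clflush_cache_range(addr, chunk);
			addr += chunk;
			size -= chunk;
		}
	} else {
		/*
		 * Write back and invalidate every cache on every CPU: the
		 * cost does not depend on the range size, but it hits all
		 * cores and all cached data, not just this DIMM's lines.
		 */
		wbinvd_on_all_cpus();
	}
}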
> Related topic: a flush is also necessary before sending the secure
> erase or unlock command. Otherwise, there could be dirty write data
> that gets written back by the concluding flush (overwriting the
> now-unlocked or just-erased data). For unlock during boot, you might
> assume that no writes have occurred yet, but that isn't true for
> secure erase on demand. Flushing before both commands is safest.

Yes, I missed that. Thanks for catching it. I'll add the flush before
executing secure erase. It's probably not necessary for unlock, since
there's no data that could be in the CPU cache until the DIMMs become
accessible.
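Concretely, the ordering described above would be something like the
sketch below. Again, this is only a sketch: issue_secure_erase_dsm() is
a hypothetical stand-in for the actual DSM call in this series, and
struct nvdimm_key_data comes from these patches rather than the
mainline headers.

#include <linux/libnvdimm.h>	/* struct nvdimm */
#include <asm/smp.h>		/* wbinvd_on_all_cpus() */

struct nvdimm_key_data;		/* defined by this patch series */

/* Hypothetical stand-in for the real Intel secure-erase DSM call. */
int issue_secure_erase_dsm(struct nvdimm *nvdimm, struct nvdimm_key_data *nkey);

static int secure_erase_with_flushes(struct nvdimm *nvdimm,
				     struct nvdimm_key_data *nkey)
{
	int rc;

	/*
	 * Flush first: dirty lines for this DIMM still sitting in the
	 * CPU caches would otherwise be written back later and clobber
	 * the just-erased (or just-unlocked) media contents.
	 */
	wbinvd_on_all_cpus();

	rc = issue_secure_erase_dsm(nvdimm, nkey);
	if (rc)
		return rc;

	/*
	 * Flush/invalidate again so reads cannot be satisfied from stale
	 * pre-erase lines left in the caches.
	 */
	wbinvd_on_all_cpus();

	return 0;
}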