From: Fan Ni <nifan.cxl@gmail.com>
Date: Thu, 26 Sep 2024 16:04:42 -0700
To: shiju.jose@huawei.com
Cc: linux-edac@vger.kernel.org, linux-cxl@vger.kernel.org,
	linux-acpi@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, bp@alien8.de, tony.luck@intel.com,
	rafael@kernel.org, lenb@kernel.org, mchehab@kernel.org,
	dan.j.williams@intel.com, dave@stgolabs.net,
	jonathan.cameron@huawei.com, dave.jiang@intel.com,
	alison.schofield@intel.com, vishal.l.verma@intel.com,
	ira.weiny@intel.com, david@redhat.com, Vilas.Sridharan@amd.com,
	leo.duran@amd.com, Yazen.Ghannam@amd.com, rientjes@google.com,
	jiaqiyan@google.com, Jon.Grimm@amd.com, dave.hansen@linux.intel.com,
	naoya.horiguchi@nec.com, james.morse@arm.com, jthoughton@google.com,
	somasundaram.a@hpe.com, erdemaktas@google.com, pgonda@google.com,
	duenwen@google.com, mike.malvestuto@intel.com, gthelen@google.com,
	wschwartz@amperecomputing.com, dferguson@amperecomputing.com,
	wbs@os.amperecomputing.com, nifan.cxl@gmail.com, jgroves@micron.com,
	vsalve@micron.com, tanxiaofei@huawei.com, prime.zeng@hisilicon.com,
	roberto.sassu@huawei.com, kangkang.shen@futurewei.com,
	wanghuiqiang@huawei.com, linuxarm@huawei.com
Subject: Re: [PATCH v12 02/17] EDAC: Add EDAC scrub control driver
Message-ID:
References: <20240911090447.751-1-shiju.jose@huawei.com>
 <20240911090447.751-3-shiju.jose@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240911090447.751-3-shiju.jose@huawei.com>
On Wed, Sep 11, 2024 at 10:04:31AM +0100, shiju.jose@huawei.com wrote:
> From: Shiju Jose <shiju.jose@huawei.com>
> 
> Add a generic EDAC scrub control driver that supports configuring the
> memory scrubbers in the system. A device with the scrub feature gets
> the scrub descriptor from the EDAC scrub driver and registers with the
> EDAC RAS feature driver, which adds the sysfs scrub control interface.
> The scrub control attributes for a scrub instance are available to
> userspace in /sys/bus/edac/devices/<dev-name>/scrub*/.
> 
> The generic EDAC scrub driver and the common sysfs scrub interface
> promote unambiguous access from userspace irrespective of the
> underlying scrub devices.
> 
> The sysfs scrub attribute nodes are present only if the client driver
> has implemented the corresponding attribute callback function and
> passed the ops to the EDAC RAS feature driver during registration.
> 
> Co-developed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Signed-off-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
> ---
>  Documentation/ABI/testing/sysfs-edac-scrub |  69 ++++
>  drivers/edac/Makefile                      |   1 +
>  drivers/edac/edac_device.c                 |   6 +-
>  drivers/edac/edac_scrub.c                  | 377 +++++++++++++++++++++
>  include/linux/edac.h                       |  30 ++
>  5 files changed, 482 insertions(+), 1 deletion(-)
>  create mode 100644 Documentation/ABI/testing/sysfs-edac-scrub
>  create mode 100755 drivers/edac/edac_scrub.c
> 
> diff --git a/Documentation/ABI/testing/sysfs-edac-scrub b/Documentation/ABI/testing/sysfs-edac-scrub
> new file mode 100644
> index 000000000000..f465cc91423f
> --- /dev/null
> +++ b/Documentation/ABI/testing/sysfs-edac-scrub
> @@ -0,0 +1,69 @@
> +What:		/sys/bus/edac/devices/<dev-name>/scrub*

Based on the code below, we can only have scrub0, scrub1, etc. So should
we use scrubX instead of scrub* here? The same applies below.

Fan

> +Date:		Oct 2024
> +KernelVersion:	6.12
> +Contact:	linux-edac@vger.kernel.org
> +Description:
> +		The sysfs EDAC bus devices /<dev-name>/scrub* subdirectory
> +		belongs to an instance of the memory scrub control feature,
> +		where the <dev-name> directory corresponds to a device/memory
> +		region registered with the EDAC scrub driver and thus
> +		registered with the generic EDAC RAS driver.
> +		The sysfs scrub attr nodes are present only if the
> +		client driver has implemented the corresponding attr
> +		callback function and passed the ops to the EDAC RAS
> +		feature driver during registration.
> +
> +What:		/sys/bus/edac/devices/<dev-name>/scrub*/addr_range_base
> +Date:		Oct 2024
> +KernelVersion:	6.12
> +Contact:	linux-edac@vger.kernel.org
> +Description:
> +		(RW) The base of the address range of the memory region
> +		to be scrubbed (on-demand scrubbing).
> +
> +What:		/sys/bus/edac/devices/<dev-name>/scrub*/addr_range_size
> +Date:		Oct 2024
> +KernelVersion:	6.12
> +Contact:	linux-edac@vger.kernel.org
> +Description:
> +		(RW) The size of the address range of the memory region
> +		to be scrubbed (on-demand scrubbing).
> +
> +What:		/sys/bus/edac/devices/<dev-name>/scrub*/enable_background
> +Date:		Oct 2024
> +KernelVersion:	6.12
> +Contact:	linux-edac@vger.kernel.org
> +Description:
> +		(RW) Start/Stop background (patrol) scrubbing if supported.
> +
> +What:		/sys/bus/edac/devices/<dev-name>/scrub*/enable_on_demand
> +Date:		Oct 2024
> +KernelVersion:	6.12
> +Contact:	linux-edac@vger.kernel.org
> +Description:
> +		(RW) Start/Stop on-demand scrubbing of the memory region
> +		if supported.
> +
> +What:		/sys/bus/edac/devices/<dev-name>/scrub*/min_cycle_duration
> +Date:		Oct 2024
> +KernelVersion:	6.12
> +Contact:	linux-edac@vger.kernel.org
> +Description:
> +		(RO) Minimum scrub cycle duration in seconds supported
> +		by the memory scrubber.
> +
> +What:		/sys/bus/edac/devices/<dev-name>/scrub*/max_cycle_duration
> +Date:		Oct 2024
> +KernelVersion:	6.12
> +Contact:	linux-edac@vger.kernel.org
> +Description:
> +		(RO) Maximum scrub cycle duration in seconds supported
> +		by the memory scrubber.
> +
> +What:		/sys/bus/edac/devices/<dev-name>/scrub*/current_cycle_duration
> +Date:		Oct 2024
> +KernelVersion:	6.12
> +Contact:	linux-edac@vger.kernel.org
> +Description:
> +		(RW) The current scrub cycle duration in seconds. Must be
> +		within the range supported by the memory scrubber.
> diff --git a/drivers/edac/Makefile b/drivers/edac/Makefile
> index 4edfb83ffbee..fbf0e39ec678 100644
> --- a/drivers/edac/Makefile
> +++ b/drivers/edac/Makefile
> @@ -10,6 +10,7 @@ obj-$(CONFIG_EDAC) := edac_core.o
>  
>  edac_core-y	:= edac_mc.o edac_device.o edac_mc_sysfs.o
>  edac_core-y	+= edac_module.o edac_device_sysfs.o wq.o
> +edac_core-y	+= edac_scrub.o
>  
>  edac_core-$(CONFIG_EDAC_DEBUG)	+= debugfs.o
>  
> diff --git a/drivers/edac/edac_device.c b/drivers/edac/edac_device.c
> index e4a5d010ea2d..6381896b6424 100644
> --- a/drivers/edac/edac_device.c
> +++ b/drivers/edac/edac_device.c
> @@ -608,12 +608,16 @@ static int edac_dev_feat_init(struct device *parent,
>  			      const struct edac_dev_feature *ras_feat,
>  			      const struct attribute_group **attr_groups)
>  {
> -	int num;
> +	int num, ret;
>  
>  	switch (ras_feat->ft_type) {
>  	case RAS_FEAT_SCRUB:
>  		dev_data->scrub_ops = ras_feat->scrub_ops;
>  		dev_data->private = ras_feat->ctx;
> +		ret = edac_scrub_get_desc(parent, attr_groups,
> +					  ras_feat->instance);
> +		if (ret)
> +			return ret;
>  		return 1;
>  	case RAS_FEAT_ECS:
>  		num = ras_feat->ecs_info.num_media_frus;
> diff --git a/drivers/edac/edac_scrub.c b/drivers/edac/edac_scrub.c
> new file mode 100755
> index 000000000000..3f8f37629acf
> --- /dev/null
> +++ b/drivers/edac/edac_scrub.c
> @@ -0,0 +1,377 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Generic EDAC scrub driver that supports controlling the memory
> + * scrubbers in the system; the common sysfs scrub interface promotes
> + * unambiguous access from userspace.
> + *
> + * Copyright (c) 2024 HiSilicon Limited.
> + */
> +
> +#define pr_fmt(fmt)	"EDAC SCRUB: " fmt
> +
> +#include <linux/edac.h>
> +
> +enum edac_scrub_attributes {
> +	SCRUB_ADDR_RANGE_BASE,
> +	SCRUB_ADDR_RANGE_SIZE,
> +	SCRUB_ENABLE_BACKGROUND,
> +	SCRUB_ENABLE_ON_DEMAND,
> +	SCRUB_MIN_CYCLE_DURATION,
> +	SCRUB_MAX_CYCLE_DURATION,
> +	SCRUB_CURRENT_CYCLE_DURATION,
> +	SCRUB_MAX_ATTRS
> +};
> +
> +struct edac_scrub_dev_attr {
> +	struct device_attribute dev_attr;
> +	u8 instance;
> +};
> +
> +struct edac_scrub_context {
> +	char name[EDAC_FEAT_NAME_LEN];
> +	struct edac_scrub_dev_attr scrub_dev_attr[SCRUB_MAX_ATTRS];
> +	struct attribute *scrub_attrs[SCRUB_MAX_ATTRS + 1];
> +	struct attribute_group group;
> +};
> +
> +#define to_scrub_dev_attr(_dev_attr)	\
> +	container_of(_dev_attr, struct edac_scrub_dev_attr, dev_attr)
> +
> +static ssize_t addr_range_base_show(struct device *ras_feat_dev,
> +				    struct device_attribute *attr,
> +				    char *buf)
> +{
> +	u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
> +	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
> +	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
> +	u64 base, size;
> +	int ret;
> +
> +	ret = ops->read_range(ras_feat_dev->parent, ctx->scrub[inst].private, &base, &size);
> +	if (ret)
> +		return ret;
> +
> +	return sysfs_emit(buf, "0x%llx\n", base);
> +}
> +
> +static ssize_t addr_range_size_show(struct device *ras_feat_dev,
> +				    struct device_attribute *attr,
> +				    char *buf)
> +{
> +	u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
> +	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
> +	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
> +	u64 base, size;
> +	int ret;
> +
> +	ret = ops->read_range(ras_feat_dev->parent, ctx->scrub[inst].private, &base, &size);
> +	if (ret)
> +		return ret;
> +
> +	return sysfs_emit(buf, "0x%llx\n", size);
> +}
> +
> +static ssize_t addr_range_base_store(struct device *ras_feat_dev,
> +				     struct device_attribute *attr,
> +				     const char *buf, size_t len)
> +{
> +	u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
> +	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
> +	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
> +	u64 base, size;
> +	int ret;
> +
> +	ret = ops->read_range(ras_feat_dev->parent, ctx->scrub[inst].private, &base, &size);
> +	if (ret)
> +		return ret;
> +
> +	ret = kstrtou64(buf, 0, &base);
> +	if (ret < 0)
> +		return ret;
> +
> +	ret = ops->write_range(ras_feat_dev->parent, ctx->scrub[inst].private, base, size);
> +	if (ret)
> +		return ret;
> +
> +	return len;
> +}
> +
> +static ssize_t addr_range_size_store(struct device *ras_feat_dev,
> +				     struct device_attribute *attr,
> +				     const char *buf,
> +				     size_t len)
> +{
> +	u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
> +	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
> +	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
> +	u64 base, size;
> +	int ret;
> +
> +	ret = ops->read_range(ras_feat_dev->parent, ctx->scrub[inst].private, &base, &size);
> +	if (ret)
> +		return ret;
> +
> +	ret = kstrtou64(buf, 0, &size);
> +	if (ret < 0)
> +		return ret;
> +
> +	ret = ops->write_range(ras_feat_dev->parent, ctx->scrub[inst].private, base, size);
> +	if (ret)
> +		return ret;
> +
> +	return len;
> +}
> +
> +static ssize_t enable_background_store(struct device *ras_feat_dev,
> +				       struct device_attribute *attr,
> +				       const char *buf, size_t len)
> +{
> +	u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
> +	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
> +	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
> +	bool enable;
> +	int ret;
> +
> +	ret = kstrtobool(buf, &enable);
> +	if (ret < 0)
> +		return ret;
> +
> +	ret = ops->set_enabled_bg(ras_feat_dev->parent, ctx->scrub[inst].private, enable);
> +	if (ret)
> +		return ret;
> +
> +	return len;
> +}
> +
> +static ssize_t enable_background_show(struct device *ras_feat_dev,
> +				      struct device_attribute *attr, char *buf)
> +{
> +	u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
> +	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
> +	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
> +	bool enable;
> +	int ret;
> +
> +	ret = ops->get_enabled_bg(ras_feat_dev->parent, ctx->scrub[inst].private, &enable);
> +	if (ret)
> +		return ret;
> +
> +	return sysfs_emit(buf, "%d\n", enable);
> +}
> +
> +static ssize_t enable_on_demand_show(struct device *ras_feat_dev,
> +				     struct device_attribute *attr, char *buf)
> +{
> +	u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
> +	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
> +	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
> +	bool enable;
> +	int ret;
> +
> +	ret = ops->get_enabled_od(ras_feat_dev->parent, ctx->scrub[inst].private, &enable);
> +	if (ret)
> +		return ret;
> +
> +	return sysfs_emit(buf, "%d\n", enable);
> +}
> +
> +static ssize_t enable_on_demand_store(struct device *ras_feat_dev,
> +				      struct device_attribute *attr,
> +				      const char *buf, size_t len)
> +{
> +	u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
> +	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
> +	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
> +	bool enable;
> +	int ret;
> +
> +	ret = kstrtobool(buf, &enable);
> +	if (ret < 0)
> +		return ret;
> +
> +	ret = ops->set_enabled_od(ras_feat_dev->parent, ctx->scrub[inst].private, enable);
> +	if (ret)
> +		return ret;
> +
> +	return len;
> +}
> +
> +static ssize_t min_cycle_duration_show(struct device *ras_feat_dev,
> +				       struct device_attribute *attr,
> +				       char *buf)
> +{
> +	u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
> +	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
> +	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
> +	u32 val;
> +	int ret;
> +
> +	ret = ops->min_cycle_read(ras_feat_dev->parent, ctx->scrub[inst].private, &val);
> +	if (ret)
> +		return ret;
> +
> +	return sysfs_emit(buf, "%u\n", val);
> +}
> +
> +static ssize_t max_cycle_duration_show(struct device *ras_feat_dev,
> +				       struct device_attribute *attr,
> +				       char *buf)
> +{
> +	u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
> +	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
> +	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
> +	u32 val;
> +	int ret;
> +
> +	ret = ops->max_cycle_read(ras_feat_dev->parent, ctx->scrub[inst].private, &val);
> +	if (ret)
> +		return ret;
> +
> +	return sysfs_emit(buf, "%u\n", val);
> +}
> +
> +static ssize_t current_cycle_duration_show(struct device *ras_feat_dev,
> +					   struct device_attribute *attr,
> +					   char *buf)
> +{
> +	u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
> +	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
> +	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
> +	u32 val;
> +	int ret;
> +
> +	ret = ops->cycle_duration_read(ras_feat_dev->parent, ctx->scrub[inst].private, &val);
> +	if (ret)
> +		return ret;
> +
> +	return sysfs_emit(buf, "%u\n", val);
> +}
> +
> +static ssize_t current_cycle_duration_store(struct device *ras_feat_dev,
> +					    struct device_attribute *attr,
> +					    const char *buf, size_t len)
> +{
> +	u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
> +	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
> +	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
> +	long val;
> +	int ret;
> +
> +	ret = kstrtol(buf, 0, &val);
> +	if (ret < 0)
> +		return ret;
> +
> +	ret = ops->cycle_duration_write(ras_feat_dev->parent, ctx->scrub[inst].private, val);
> +	if (ret)
> +		return ret;
> +
> +	return len;
> +}
> +
> +static umode_t scrub_attr_visible(struct kobject *kobj,
> +				  struct attribute *a, int attr_id)
> +{
> +	struct device *ras_feat_dev = kobj_to_dev(kobj);
> +	struct device_attribute *dev_attr =
> +		container_of(a, struct device_attribute, attr);
> +	u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(dev_attr))->instance;
> +	struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
> +	const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
> +
> +	switch (attr_id) {
> +	case SCRUB_ADDR_RANGE_BASE:
> +	case SCRUB_ADDR_RANGE_SIZE:
> +		if (ops->read_range && ops->write_range)
> +			return a->mode;
> +		if (ops->read_range)
> +			return 0444;
> +		return 0;
> +	case SCRUB_ENABLE_BACKGROUND:
> +		if (ops->get_enabled_bg && ops->set_enabled_bg)
> +			return a->mode;
> +		if (ops->get_enabled_bg)
> +			return 0444;
> +		return 0;
> +	case SCRUB_ENABLE_ON_DEMAND:
> +		if (ops->get_enabled_od && ops->set_enabled_od)
> +			return a->mode;
> +		if (ops->get_enabled_od)
> +			return 0444;
> +		return 0;
> +	case SCRUB_MIN_CYCLE_DURATION:
> +		return ops->min_cycle_read ? a->mode : 0;
> +	case SCRUB_MAX_CYCLE_DURATION:
> +		return ops->max_cycle_read ? a->mode : 0;
> +	case SCRUB_CURRENT_CYCLE_DURATION:
> +		if (ops->cycle_duration_read && ops->cycle_duration_write)
> +			return a->mode;
> +		if (ops->cycle_duration_read)
> +			return 0444;
> +		return 0;
> +	default:
> +		return 0;
> +	}
> +}
> +
> +#define EDAC_SCRUB_ATTR_RO(_name, _instance)	\
> +	((struct edac_scrub_dev_attr) { .dev_attr = __ATTR_RO(_name), \
> +					.instance = _instance })
> +
> +#define EDAC_SCRUB_ATTR_WO(_name, _instance)	\
> +	((struct edac_scrub_dev_attr) { .dev_attr = __ATTR_WO(_name), \
> +					.instance = _instance })
> +
> +#define EDAC_SCRUB_ATTR_RW(_name, _instance)	\
> +	((struct edac_scrub_dev_attr) { .dev_attr = __ATTR_RW(_name), \
> +					.instance = _instance })
> +
> +static int scrub_create_desc(struct device *scrub_dev,
> +			     const struct attribute_group **attr_groups,
> +			     u8 instance)
> +{
> +	struct edac_scrub_context *scrub_ctx;
> +	struct attribute_group *group;
> +	int i;
> +
> +	scrub_ctx = devm_kzalloc(scrub_dev, sizeof(*scrub_ctx), GFP_KERNEL);
> +	if (!scrub_ctx)
> +		return -ENOMEM;
> +
> +	group = &scrub_ctx->group;
> +	scrub_ctx->scrub_dev_attr[0] = EDAC_SCRUB_ATTR_RW(addr_range_base, instance);
> +	scrub_ctx->scrub_dev_attr[1] = EDAC_SCRUB_ATTR_RW(addr_range_size, instance);
> +	scrub_ctx->scrub_dev_attr[2] = EDAC_SCRUB_ATTR_RW(enable_background, instance);
> +	scrub_ctx->scrub_dev_attr[3] = EDAC_SCRUB_ATTR_RW(enable_on_demand, instance);
> +	scrub_ctx->scrub_dev_attr[4] = EDAC_SCRUB_ATTR_RO(min_cycle_duration, instance);
> +	scrub_ctx->scrub_dev_attr[5] = EDAC_SCRUB_ATTR_RO(max_cycle_duration, instance);
> +	scrub_ctx->scrub_dev_attr[6] = EDAC_SCRUB_ATTR_RW(current_cycle_duration, instance);
> +	for (i = 0; i < SCRUB_MAX_ATTRS; i++)
> +		scrub_ctx->scrub_attrs[i] = &scrub_ctx->scrub_dev_attr[i].dev_attr.attr;
> +
> +	sprintf(scrub_ctx->name, "%s%d", "scrub", instance);
> +	group->name = scrub_ctx->name;
> +	group->attrs = scrub_ctx->scrub_attrs;
> +	group->is_visible = scrub_attr_visible;
> +
> +	attr_groups[0] = group;
> +
> +	return 0;
> +}
> +
> +/**
> + * edac_scrub_get_desc - get EDAC scrub descriptors
> + * @scrub_dev: client device, with scrub support
> + * @attr_groups: pointer to attribute group container
> + * @instance: device's scrub instance number.
> + *
> + * Returns 0 on success, error otherwise.
> + */
> +int edac_scrub_get_desc(struct device *scrub_dev,
> +			const struct attribute_group **attr_groups,
> +			u8 instance)
> +{
> +	if (!scrub_dev || !attr_groups)
> +		return -EINVAL;
> +
> +	return scrub_create_desc(scrub_dev, attr_groups, instance);
> +}
> diff --git a/include/linux/edac.h b/include/linux/edac.h
> index b337254cf5b8..aae8262b9863 100644
> --- a/include/linux/edac.h
> +++ b/include/linux/edac.h
> @@ -674,6 +674,36 @@ enum edac_dev_feat {
>  	RAS_FEAT_MAX
>  };
>  
> +/**
> + * struct edac_scrub_ops - scrub device operations (all elements optional)
> + * @read_range: read base and size of the scrubbing range.
> + * @write_range: set the base and size of the scrubbing range.
> + * @get_enabled_bg: check if currently performing background scrub.
> + * @set_enabled_bg: start or stop a bg-scrub.
> + * @get_enabled_od: check if currently performing on-demand scrub.
> + * @set_enabled_od: start or stop an on-demand scrub.
> + * @min_cycle_read: minimum supported scrub cycle duration in seconds.
> + * @max_cycle_read: maximum supported scrub cycle duration in seconds.
> + * @cycle_duration_read: get the scrub cycle duration in seconds.
> + * @cycle_duration_write: set the scrub cycle duration in seconds.
> + */
> +struct edac_scrub_ops {
> +	int (*read_range)(struct device *dev, void *drv_data, u64 *base, u64 *size);
> +	int (*write_range)(struct device *dev, void *drv_data, u64 base, u64 size);
> +	int (*get_enabled_bg)(struct device *dev, void *drv_data, bool *enable);
> +	int (*set_enabled_bg)(struct device *dev, void *drv_data, bool enable);
> +	int (*get_enabled_od)(struct device *dev, void *drv_data, bool *enable);
> +	int (*set_enabled_od)(struct device *dev, void *drv_data, bool enable);
> +	int (*min_cycle_read)(struct device *dev, void *drv_data, u32 *min);
> +	int (*max_cycle_read)(struct device *dev, void *drv_data, u32 *max);
> +	int (*cycle_duration_read)(struct device *dev, void *drv_data, u32 *cycle);
> +	int (*cycle_duration_write)(struct device *dev, void *drv_data, u32 cycle);
> +};
> +
> +int edac_scrub_get_desc(struct device *scrub_dev,
> +			const struct attribute_group **attr_groups,
> +			u8 instance);
> +
>  struct edac_ecs_ex_info {
>  	u16 num_media_frus;
>  };
> -- 
> 2.34.1
> 

-- 
Fan Ni