public inbox for linux-block@vger.kernel.org
From: Dongsheng Yang <dongsheng.yang@linux.dev>
To: Mikulas Patocka <mpatocka@redhat.com>
Cc: agk@redhat.com, snitzer@kernel.org, axboe@kernel.dk, hch@lst.de,
	dan.j.williams@intel.com, Jonathan.Cameron@Huawei.com,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev,
	dm-devel@lists.linux.dev
Subject: Re: [RFC v2 00/11] dm-pcache – persistent-memory cache for block devices
Date: Mon, 23 Jun 2025 12:18:08 +0800	[thread overview]
Message-ID: <9dd017ea-a0ec-47c4-b7ae-b6f441dbd5ec@linux.dev> (raw)
In-Reply-To: <3c9f304a-b830-4242-8e01-04efab4e0eaa@linux.dev>


[-- Attachment #1.1: Type: text/plain, Size: 3310 bytes --]


On 6/23/2025 11:13 AM, Dongsheng Yang wrote:
>
> Hi Mikulas:
>
>      I will send dm-pcache V1 soon; below are my responses to your comments.
>
> On 6/13/2025 12:57 AM, Mikulas Patocka wrote:
>> Hi
>>
>>
>> On Thu, 5 Jun 2025, Dongsheng Yang wrote:
>>
>>> Hi Mikulas and all,
>>>
>>> This is *RFC v2* of the *pcache* series, a persistent-memory backed cache.
>>>
>>>
>>> ----------------------------------------------------------------------
>>> 1. pmem access layer
>>> ----------------------------------------------------------------------
>>>
>>> * All reads use *copy_mc_to_kernel()* so that uncorrectable media
>>>    errors are detected and reported.
>>> * All writes go through *memcpy_flushcache()* to guarantee durability
>>>    on real persistent memory (a minimal sketch of both paths follows below).
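>>>
>>> A minimal sketch of these two access paths (hypothetical helper names;
>>> assumes the pmem range is already mapped into the kernel address space,
>>> and needs <linux/uaccess.h> and <linux/string.h>):
>>>
>>> static int pcache_pmem_read(void *dst, const void *pmem_src, size_t len)
>>> {
>>> 	/* copy_mc_to_kernel() returns the number of bytes *not* copied */
>>> 	if (copy_mc_to_kernel(dst, pmem_src, len))
>>> 		return -EIO;	/* uncorrectable media error */
>>> 	return 0;
>>> }
>>>
>>> static void pcache_pmem_write(void *pmem_dst, const void *src, size_t len)
>>> {
>>> 	/* cache-bypassing stores so the data reaches persistent media */
>>> 	memcpy_flushcache(pmem_dst, src, len);
>>> }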
>> You could also try to use a normal write plus clflushopt for big writes - I
>> found that for larger regions it is better - see the function
>> memcpy_flushcache_optimized() in dm-writecache. Test which way is better.
>
> I did a test with fio on /dev/pmem0, with an attached patch applied to nd_pmem.ko:
>
> When I use a memmap pmem device, I get results consistent with the comment
> in memcpy_flushcache_optimized():
>
> Test (memmap pmem)            clflushopt     flushcache
> -------------------------------------------------------
> test_randwrite_512             200 MiB/s      228 MiB/s
> test_randwrite_1024            378 MiB/s      431 MiB/s
> test_randwrite_2K              773 MiB/s      769 MiB/s
> test_randwrite_4K             1364 MiB/s     1272 MiB/s
> test_randwrite_8K             2078 MiB/s     1817 MiB/s
> test_randwrite_16K            2745 MiB/s     2098 MiB/s
> test_randwrite_32K            3232 MiB/s     2231 MiB/s
> test_randwrite_64K            3660 MiB/s     2411 MiB/s
> test_randwrite_128K           3922 MiB/s     2513 MiB/s
> test_randwrite_1M             3824 MiB/s     2537 MiB/s
> test_write_512                 228 MiB/s      228 MiB/s
> test_write_1024                439 MiB/s      423 MiB/s
> test_write_2K                  841 MiB/s      800 MiB/s
> test_write_4K                 1364 MiB/s     1308 MiB/s
> test_write_8K                 2107 MiB/s     1838 MiB/s
> test_write_16K                2752 MiB/s     2166 MiB/s
> test_write_32K                3213 MiB/s     2247 MiB/s
> test_write_64K                3661 MiB/s     2415 MiB/s
> test_write_128K               3902 MiB/s     2514 MiB/s
> test_write_1M                 3808 MiB/s     2529 MiB/s
>
> But I got a different result when using an Optane pmem100 device:
>
> Test (Optane pmem100)         clflushopt     flushcache
> -------------------------------------------------------
> test_randwrite_512             167 MiB/s      226 MiB/s
> test_randwrite_1024            301 MiB/s      420 MiB/s
> test_randwrite_2K              615 MiB/s      639 MiB/s
> test_randwrite_4K              967 MiB/s     1024 MiB/s
> test_randwrite_8K             1047 MiB/s     1314 MiB/s
> test_randwrite_16K            1096 MiB/s     1377 MiB/s
> test_randwrite_32K            1155 MiB/s     1382 MiB/s
> test_randwrite_64K            1184 MiB/s     1452 MiB/s
> test_randwrite_128K           1199 MiB/s     1488 MiB/s
> test_randwrite_1M             1178 MiB/s     1499 MiB/s
> test_write_512                 233 MiB/s      233 MiB/s
> test_write_1024                424 MiB/s      391 MiB/s
> test_write_2K                  706 MiB/s      760 MiB/s
> test_write_4K                  978 MiB/s     1076 MiB/s
> test_write_8K                 1059 MiB/s     1296 MiB/s
> test_write_16K                1119 MiB/s     1380 MiB/s
> test_write_32K                1158 MiB/s     1387 MiB/s
> test_write_64K                1184 MiB/s     1448 MiB/s
> test_write_128K               1198 MiB/s     1481 MiB/s
> test_write_1M                 1178 MiB/s     1486 MiB/s
>
>
> So for now I'd rather keep using flushcache in pcache. In the future, once
> we've come up with a general-purpose optimization, we can switch to that.
>
Sorry for the formatting issue; the tables can also be found in the
attached pmem_test_result.txt.
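
For reference, a rough sketch of the clflushopt approach Mikulas suggested,
modeled on memcpy_flushcache_optimized() in drivers/md/dm-writecache.c
(hypothetical function name; not the exact patch applied to nd_pmem.ko in
the test above). For large copies it does a normal cached memcpy and flushes
each cache line with clflushopt, falling back to memcpy_flushcache() for
small copies:

/* requires <linux/string.h>, <asm/cpufeature.h> and <asm/special_insns.h> */
static void pmem_copy_clflushopt(void *dest, const void *source, size_t size)
{
#ifdef CONFIG_X86
	if (static_cpu_has(X86_FEATURE_CLFLUSHOPT) &&
	    likely(boot_cpu_data.x86_clflush_size == 64) &&
	    likely(size >= 768)) {
		/* cached copy, then flush one cache line at a time */
		do {
			memcpy(dest, source, 64);
			clflushopt(dest);
			dest += 64;
			source += 64;
			size -= 64;
		} while (size >= 64);
		if (size)	/* tail shorter than a cache line */
			memcpy_flushcache(dest, source, size);
		return;
	}
#endif
	/* small copies: non-temporal stores perform better */
	memcpy_flushcache(dest, source, size);
}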

Thanx

Dongsheng

[-- Attachment #1.2: Type: text/html, Size: 4922 bytes --]

[-- Attachment #2: pmem_test_result.txt --]
[-- Type: text/plain, Size: 2516 bytes --]

Test (memmap pmem)            clflushopt     flushcache
-------------------------------------------------------
test_randwrite_512             200 MiB/s      228 MiB/s
test_randwrite_1024            378 MiB/s      431 MiB/s
test_randwrite_2K              773 MiB/s      769 MiB/s
test_randwrite_4K             1364 MiB/s     1272 MiB/s
test_randwrite_8K             2078 MiB/s     1817 MiB/s
test_randwrite_16K            2745 MiB/s     2098 MiB/s
test_randwrite_32K            3232 MiB/s     2231 MiB/s
test_randwrite_64K            3660 MiB/s     2411 MiB/s
test_randwrite_128K           3922 MiB/s     2513 MiB/s
test_randwrite_1M             3824 MiB/s     2537 MiB/s
test_write_512                 228 MiB/s      228 MiB/s
test_write_1024                439 MiB/s      423 MiB/s
test_write_2K                  841 MiB/s      800 MiB/s
test_write_4K                 1364 MiB/s     1308 MiB/s
test_write_8K                 2107 MiB/s     1838 MiB/s
test_write_16K                2752 MiB/s     2166 MiB/s
test_write_32K                3213 MiB/s     2247 MiB/s
test_write_64K                3661 MiB/s     2415 MiB/s
test_write_128K               3902 MiB/s     2514 MiB/s
test_write_1M                 3808 MiB/s     2529 MiB/s


Test (Optane pmem100)         clflushopt     flushcache
-------------------------------------------------------
test_randwrite_512             167 MiB/s      226 MiB/s
test_randwrite_1024            301 MiB/s      420 MiB/s
test_randwrite_2K              615 MiB/s      639 MiB/s
test_randwrite_4K              967 MiB/s     1024 MiB/s
test_randwrite_8K             1047 MiB/s     1314 MiB/s
test_randwrite_16K            1096 MiB/s     1377 MiB/s
test_randwrite_32K            1155 MiB/s     1382 MiB/s
test_randwrite_64K            1184 MiB/s     1452 MiB/s
test_randwrite_128K           1199 MiB/s     1488 MiB/s
test_randwrite_1M             1178 MiB/s     1499 MiB/s
test_write_512                 233 MiB/s      233 MiB/s
test_write_1024                424 MiB/s      391 MiB/s
test_write_2K                  706 MiB/s      760 MiB/s
test_write_4K                  978 MiB/s     1076 MiB/s
test_write_8K                 1059 MiB/s     1296 MiB/s
test_write_16K                1119 MiB/s     1380 MiB/s
test_write_32K                1158 MiB/s     1387 MiB/s
test_write_64K                1184 MiB/s     1448 MiB/s
test_write_128K               1198 MiB/s     1481 MiB/s
test_write_1M                 1178 MiB/s     1486 MiB/s


Thread overview: 18+ messages
2025-06-05 14:22 [RFC v2 00/11] dm-pcache – persistent-memory cache for block devices Dongsheng Yang
2025-06-05 14:22 ` [RFC PATCH 01/11] dm-pcache: add pcache_internal.h Dongsheng Yang
2025-06-05 14:22 ` [RFC PATCH 02/11] dm-pcache: add backing device management Dongsheng Yang
2025-06-05 14:22 ` [RFC PATCH 03/11] dm-pcache: add cache device Dongsheng Yang
2025-06-05 14:22 ` [RFC PATCH 04/11] dm-pcache: add segment layer Dongsheng Yang
2025-06-05 14:23 ` [RFC PATCH 05/11] dm-pcache: add cache_segment Dongsheng Yang
2025-06-05 14:23 ` [RFC PATCH 06/11] dm-pcache: add cache_writeback Dongsheng Yang
2025-06-05 14:23 ` [RFC PATCH 07/11] dm-pcache: add cache_gc Dongsheng Yang
2025-06-05 14:23 ` [RFC PATCH 08/11] dm-pcache: add cache_key Dongsheng Yang
2025-06-05 14:23 ` [RFC PATCH 09/11] dm-pcache: add cache_req Dongsheng Yang
2025-06-05 14:23 ` [RFC PATCH 10/11] dm-pcache: add cache core Dongsheng Yang
2025-06-05 14:23 ` [RFC PATCH 11/11] dm-pcache: initial dm-pcache target Dongsheng Yang
2025-06-12 16:57 ` [RFC v2 00/11] dm-pcache – persistent-memory cache for block devices Mikulas Patocka
2025-06-13  3:39   ` Dongsheng Yang
2025-06-23  3:13   ` Dongsheng Yang
2025-06-23  4:18     ` Dongsheng Yang [this message]
2025-06-30 15:57     ` Mikulas Patocka
2025-06-30 16:28       ` Dongsheng Yang

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=9dd017ea-a0ec-47c4-b7ae-b6f441dbd5ec@linux.dev \
    --to=dongsheng.yang@linux.dev \
    --cc=Jonathan.Cameron@Huawei.com \
    --cc=agk@redhat.com \
    --cc=axboe@kernel.dk \
    --cc=dan.j.williams@intel.com \
    --cc=dm-devel@lists.linux.dev \
    --cc=hch@lst.de \
    --cc=linux-block@vger.kernel.org \
    --cc=linux-cxl@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=mpatocka@redhat.com \
    --cc=nvdimm@lists.linux.dev \
    --cc=snitzer@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.