From: Max Reitz <mreitz@redhat.com>
To: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>,
	"Denis V. Lunev" <den@openvz.org>,
	qemu-block@nongnu.org
Cc: kwolf@redhat.com, qemu-devel@nongnu.org, armbru@redhat.com
Subject: Re: [PATCH 0/7] qcow2: compressed write cache
Date: Wed, 10 Feb 2021 11:00:05 +0100
Message-ID: <d7387da7-fb53-5d58-7ddf-4bf469e6526c@redhat.com>
In-Reply-To: <dad2a202-430e-f430-6cfd-1c75fd3f2549@virtuozzo.com>

On 09.02.21 19:51, Vladimir Sementsov-Ogievskiy wrote:
> 09.02.2021 21:41, Denis V. Lunev wrote:
>> On 2/9/21 9:36 PM, Vladimir Sementsov-Ogievskiy wrote:
>>> 09.02.2021 19:39, Vladimir Sementsov-Ogievskiy wrote:
>>>> 09.02.2021 17:47, Max Reitz wrote:
>>>>> On 09.02.21 15:10, Vladimir Sementsov-Ogievskiy wrote:
>>>>>> 09.02.2021 16:25, Max Reitz wrote:
>>>>>>> On 29.01.21 17:50, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>>> Hi all!
>>>>>>>>
>>>>>>>> I know, I have several series waiting for a resend, but I had to
>>>>>>>> switch
>>>>>>>> to another task spawned from our customer's bug.
>>>>>>>>
>>>>>>>> Original problem: we use O_DIRECT for all VM images in our
>>>>>>>> product; it's the policy. The only exception is the backup target
>>>>>>>> qcow2 image for compressed backup, because compressed backup is
>>>>>>>> extremely slow with O_DIRECT (due to unaligned writes). The
>>>>>>>> customer complains that backup produces a lot of pagecache.
>>>>>>>>
>>>>>>>> So we can either implement some internal cache or use fadvise
>>>>>>>> somehow. Backup has several async workers which write
>>>>>>>> simultaneously, so either way we have to track host cluster
>>>>>>>> filling (before dropping the cache corresponding to the cluster).
>>>>>>>> So, if we have to track anyway, let's try to implement the cache.
>>>>>>>
>>>>>>> I wanted to be excited here, because that sounds like it would be
>>>>>>> very easy to implement caching.  Like, just keep the cluster at
>>>>>>> free_byte_offset cached until the cluster it points to changes,
>>>>>>> then flush the cluster.
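
A minimal sketch of that simple single-cluster cache, with made-up names
(CLUSTER_SIZE, WriteBuf, cached_write; nothing here is actual qcow2 code)
and under two simplifying assumptions: writes arrive strictly one after
another, and a compressed chunk never crosses a host cluster boundary:

#define _GNU_SOURCE
#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define CLUSTER_SIZE (64 * 1024)   /* assumption: 64k host clusters */

struct WriteBuf {
    uint64_t cluster_offset;       /* start of the cluster being filled */
    size_t bytes_used;             /* bytes of it written so far */
    uint8_t data[CLUSTER_SIZE];
};

/* Write out the currently cached cluster. */
static int flush_buf(int fd, struct WriteBuf *buf)
{
    if (buf->bytes_used &&
        pwrite(fd, buf->data, buf->bytes_used, buf->cluster_offset) < 0) {
        return -errno;
    }
    buf->bytes_used = 0;
    return 0;
}

/* Cache one small unaligned compressed write; flush the previous
 * cluster as soon as allocation has moved on to a new one. */
static int cached_write(int fd, struct WriteBuf *buf,
                        uint64_t offset, const void *p, size_t len)
{
    uint64_t cluster = offset & ~(uint64_t)(CLUSTER_SIZE - 1);

    if (cluster != buf->cluster_offset) {
        int ret = flush_buf(fd, buf);
        if (ret < 0) {
            return ret;
        }
        buf->cluster_offset = cluster;
    }
    memcpy(buf->data + (offset - cluster), p, len);
    if (offset - cluster + len > buf->bytes_used) {
        buf->bytes_used = offset - cluster + len;
    }
    return 0;
}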
>>>>>>
>>>>>> The problem is that chunks are written asynchronously.  That's why
>>>>>> all of this is not so easy.
>>>>>>
>>>>>>>
>>>>>>> But then I see like 900 new lines of code, and I’m much less
>>>>>>> excited...
>>>>>>>
>>>>>>>> The idea is simple: cache small unaligned writes and flush the
>>>>>>>> cluster when it is filled.
>>>>>>>>
>>>>>>>> The performance result is very good (the numbers in the table are
>>>>>>>> the time, in seconds, of a compressed backup of a 1000M disk
>>>>>>>> filled with ones):
>>>>>>>
>>>>>>> “Filled with ones” really is an edge case, though.
>>>>>>
>>>>>> Yes, I think all clusters are compressed to rather small chunks :)
>>>>>>
>>>>>>>
>>>>>>>> ---------------  -----------  -----------
>>>>>>>>                    backup(old)  backup(new)
>>>>>>>> ssd:hdd(direct)  3e+02        4.4
>>>>>>>>                                   -99%
>>>>>>>> ssd:hdd(cached)  5.7          5.4
>>>>>>>>                                   -5%
>>>>>>>> ---------------  -----------  -----------
>>>>>>>>
>>>>>>>> So, we have a benefit even for cached mode! And the fastest thing
>>>>>>>> is O_DIRECT with the newly implemented cache. So I suggest
>>>>>>>> enabling the new cache by default (which is done by the series).
>>>>>>>
>>>>>>> First, I’m not sure how O_DIRECT really is relevant, because I
>>>>>>> don’t really see the point for writing compressed images.
>>>>>>
>>>>>> Compressed backup is one such case.
>>>>>
>>>>> (Perhaps irrelevant, but just to be clear:) I meant the point of
>>>>> using O_DIRECT, which one can decide to not use for backup targets
>>>>> (as you have done already).
>>>>>
>>>>>>> Second, I find it a bit cheating if you say there is a huge
>>>>>>> improvement for the no-cache case, when actually, well, you just
>>>>>>> added a cache.  So the no-cache case just became faster because
>>>>>>> there is a cache now.
>>>>>>
>>>>>> Still, the performance comparison is relevant to show that O_DIRECT
>>>>>> as-is is unusable for compressed backup.
>>>>>
>>>>> (Again, perhaps irrelevant, but:) Yes, but my first point was
>>>>> exactly whether O_DIRECT is even relevant for writing compressed
>>>>> images.
>>>>>
>>>>>>> Well, I suppose I could follow that if O_DIRECT doesn’t make much
>>>>>>> sense for compressed images, qemu’s format drivers are free to
>>>>>>> introduce some caching (because technically the cache.direct
>>>>>>> option only applies to the protocol driver) for collecting
>>>>>>> compressed writes.
>>>>>>
>>>>>> Yes, I thought along those lines, enabling the cache by default.
>>>>>>
>>>>>>> That conclusion makes both of my complaints kind of moot.
>>>>>>>
>>>>>>> *shrug*
>>>>>>>
>>>>>>> Third, what is the real-world impact on the page cache?  You
>>>>>>> described that that’s the reason why you need the cache in qemu,
>>>>>>> because otherwise the page cache is polluted too much.  How much
>>>>>>> is the difference really?  (I don’t know how good the compression
>>>>>>> ratio is for real-world images.)
>>>>>>
>>>>>> Hm, I don't know the ratio.  The customer reported that most of RAM
>>>>>> is polluted by QEMU's cache, and we use O_DIRECT for everything
>>>>>> except the target of compressed backup.  Still, the pollution may
>>>>>> relate to several backups, and of course it is simple enough to
>>>>>> drop the cache after each backup. But I think that even one backup
>>>>>> of a 16T disk may pollute RAM enough.
>>>>>
>>>>> Oh, sorry, I just realized I had a brain fart there.  I was
>>>>> referring to whether this series improves the page cache pollution.
>>>>> But obviously it will if it allows you to re-enable O_DIRECT.
>>>>>
>>>>>>> Related to that, I remember a long time ago we had some discussion
>>>>>>> about letting qemu-img convert set a special cache mode for the
>>>>>>> target image that would make Linux drop everything before the last
>>>>>>> offset written (i.e., I suppose fadvise() with
>>>>>>> POSIX_FADV_SEQUENTIAL).  You discarded that idea based on the fact
>>>>>>> that implementing a cache in qemu would be simple, but it isn’t,
>>>>>>> really.  What would the impact of POSIX_FADV_SEQUENTIAL be?  (One
>>>>>>> advantage of using that would be that we could reuse it for
>>>>>>> non-compressed images that are written by backup or qemu-img
>>>>>>> convert.)
>>>>>>
>>>>>> The problem is that writes are async, and therefore not sequential.
>>>>>
>>>>> In theory, yes, but all compressed writes still go through
>>>>> qcow2_alloc_bytes() right before submitting the write, so I wonder
>>>>> whether in practice the writes aren’t usually sufficiently
>>>>> sequential to make POSIX_FADV_SEQUENTIAL work fine.
>>>>
>>>> Yes, allocation is sequential, but writes are not.  Reasonable; I
>>>> should at least benchmark it. So we should set POSIX_FADV_SEQUENTIAL
>>>> for the whole backup target before the backup? I will try. Still, I
>>>> expect that my cache will show better performance anyway. Actually,
>>>> comparing cached (by pagecache) vs my cache, we have 5.7 -> 4.4, i.e.
>>>> about 20% faster, which is significant (still, yes, it would be good
>>>> to check it on a more realistic case than all-ones).
>>>>
>>>>>
>>>>>> So
>>>>>> I have to track the writes and wait until the whole cluster is
>>>>>> filled. It's simple to use fadvise as an option in my cache: instead
>>>>>> of caching data and writing when the cluster is filled, we can
>>>>>> instead mark the cluster POSIX_FADV_DONTNEED.
>>>>>>
>>>>>>>
>>>>>>> (I don’t remember why that qemu-img discussion died back then.)
>>>>>>>
>>>>>>>
>>>>>>> Fourth, regarding the code, would it be simpler if it were a pure
>>>>>>> write cache?  I.e., on read, everything is flushed, so we don’t
>>>>>>> have to deal with that.  I don’t think there are many valid cases
>>>>>>> where a compressed image is both written to and read from at the
>>>>>>> same time. (Just asking, because I’d really want this code to be
>>>>>>> simpler.  I can imagine that reading from the cache is the least
>>>>>>> bit of complexity, but perhaps...)
>>>>>>>
>>>>>>
>>>>>> Hm. I really didn't want to support reads, and did it only to make
>>>>>> it possible to enable the cache by default.  Still, the read
>>>>>> function is really simple, and I don't think that dropping it will
>>>>>> simplify the code significantly.
>>>>>
>>>>> That’s too bad.
>>>>>
>>>>> So the only question I have left is what POSIX_FADV_SEQUENTIAL
>>>>> actually would do in practice.
>>>>
>>>> will check.
>>>>
>>>
>>> I checked: if I mark the whole file with FADV_SEQUENTIAL, the cache is
>>> not removed.
>>>
>>> Test:
>>> [root@kvm fadvise]# cat a.c
>>> #define _GNU_SOURCE
>>> #include <fcntl.h>
>>> #include <unistd.h>
>>> #include <stdio.h>
>>> #include <getopt.h>
>>> #include <string.h>
>>> #include <stdbool.h>
>>>
>>> int main(int argc, char *argv[])
>>> {
>>>      int fd;
>>>      int i;
>>>      char mb[1024 * 1024];
>>>      int open_flags = O_RDWR | O_CREAT | O_EXCL;
>>>      int fadv_flags = 0;
>>>      int fadv_final_flags = 0;
>>>      int len = 1024 * 1024;
>>>      bool do_fsync = false;
>>>
>>>      for (i = 1; i < argc - 1; i++) {
>>>          const char *arg = argv[i];
>>>
>>>          if (!strcmp(arg, "direct")) {
>>>              open_flags |= O_DIRECT;
>>>          } else if (!strcmp(arg, "seq")) {
>>>              fadv_flags = POSIX_FADV_SEQUENTIAL;
>>>          } else if (!strcmp(arg, "dontneed")) {
>>>              fadv_flags = POSIX_FADV_DONTNEED;
>>>          } else if (!strcmp(arg, "final-dontneed")) {
>>>              fadv_final_flags = POSIX_FADV_DONTNEED;
>>>          } else if (!strcmp(arg, "fsync")) {
>>>              do_fsync = true;
>>>          } else {
>>>              fprintf(stderr, "unknown: %s\n", arg);
>>>              return 1;
>>>          }
>>>      }
>>>
>>>      fd = open(argv[argc - 1], open_flags, 0644);
>>>
>>>      if (fd < 0) {
>>>          fprintf(stderr, "failed to open\n");
>>>          return 1;
>>>      }
>>>
>>>      if (fadv_flags) {
>>>          posix_fadvise(fd, 0, 100 * 1024 * 1024, fadv_flags);
>>>      }
>>>      for (i = 0; i < 100; i++) {
>>>          write(fd, mb, len);
>>>      }
>>>      if (fadv_final_flags) {
>>>          posix_fadvise(fd, 0, 100 * 1024 * 1024, fadv_final_flags);
>>>      }
>>>      if (do_fsync) {
>>>          fsync(fd);
>>>      }
>>>
>>>      close(fd);
>>> }
>>>
>>>
>>>
>>> [root@kvm fadvise]# gcc a.c
>>> [root@kvm fadvise]# rm -f x; ./a.out seq x; fincore x
>>>    RES PAGES  SIZE FILE
>>>   100M 25600  100M x
>>> [root@kvm fadvise]# rm -f x; ./a.out dontneed x; fincore x
>>>    RES PAGES  SIZE FILE
>>>   100M 25600  100M x
>>> [root@kvm fadvise]# rm -f x; ./a.out final-dontneed x; fincore x
>>>    RES PAGES  SIZE FILE
>>>    36M  9216  100M x
>>> [root@kvm fadvise]# rm -f x; ./a.out seq fsync x; fincore x
>>>    RES PAGES  SIZE FILE
>>>   100M 25600  100M x
>>> [root@kvm fadvise]# rm -f x; ./a.out dontneed fsync x; fincore x
>>>    RES PAGES  SIZE FILE
>>>   100M 25600  100M x
>>> [root@kvm fadvise]# rm -f x; ./a.out final-dontneed fsync x; fincore x
>>>    RES PAGES  SIZE FILE
>>>    36M  9216  100M x
>>> [root@kvm fadvise]# rm -f x; ./a.out direct x; fincore x
>>> RES PAGES SIZE FILE
>>>   0B     0   0B x
>>>
>>>
>>>
>>> Backup-generated pagecache is effectively trash; it will never be
>>> used. And it's bad that it can displace other, useful pagecache. So
>>> the best thing for backup is direct mode + the proposed cache.

What a shame.

Thanks a lot for testing.

>> I think that Max's original intention was about POSIX_FADV_DONTNEED.
>> One should call that fadvise for each cluster that has just been fully
>> written.

I had hoped that SEQUENTIAL would just work, magically.

> This should work, but:
> 
>   - as we see from the test above, POSIX_FADV_DONTNEED doesn't remove
> the whole cache (see final-dontneed)
>   - as I said, we'll have to track writes anyway, so the cache will be
> mostly the same; just instead of a postponed-write operation we'll do
> fadvise.
> 
> Hmm. Still, this way we will not need some of the difficult things in my
> proposed cache.
> 
> So, it may be reasonable to at least split the big patch so that
> 
>   - the first part brings write / full-cluster tracking and fadvise
> 
>   - the second part adds a caching-writes option, the corresponding
> flush code and the additional performance gain
> 
> Does it make sense?

I think the fadvise solution would have been nice if it gave us 
something magical that we could also use for normal qemu-img convert (or 
backup) operations, but as that doesn’t seem to be the case, I don’t 
think it makes too much sense to implement something like that.  (I 
imagine doing fadvise also creates the need to implement a new block 
layer call into file-posix and so on.)

I’d propose I take some time to look at your patch as-is and then I 
report back.

Max




Thread overview: 32+ messages
2021-01-29 16:50 [PATCH 0/7] qcow2: compressed write cache Vladimir Sementsov-Ogievskiy
2021-01-29 16:50 ` [PATCH 1/7] qemu/queue: add some useful QLIST_ and QTAILQ_ macros Vladimir Sementsov-Ogievskiy
2021-02-01  8:29   ` Markus Armbruster
2021-02-01  8:34     ` Vladimir Sementsov-Ogievskiy
2021-02-10 17:07   ` Max Reitz
2021-01-29 16:50 ` [PATCH 2/7] block/qcow2: introduce cache for compressed writes Vladimir Sementsov-Ogievskiy
2021-02-10 17:07   ` Max Reitz
2021-02-11 12:49     ` Vladimir Sementsov-Ogievskiy
2021-02-18 15:04       ` Max Reitz
2021-01-29 16:50 ` [PATCH 3/7] block/qcow2: use compressed write cache Vladimir Sementsov-Ogievskiy
2021-02-10 17:11   ` Max Reitz
2021-02-11 12:53     ` Vladimir Sementsov-Ogievskiy
2021-02-18 16:02       ` Max Reitz
2021-01-29 16:50 ` [PATCH 4/7] simplebench: bench_one(): add slow_limit argument Vladimir Sementsov-Ogievskiy
2021-01-29 16:50 ` [PATCH 5/7] simplebench: bench_one(): support count=1 Vladimir Sementsov-Ogievskiy
2021-01-29 16:50 ` [PATCH 6/7] simplebench/bench-backup: add --compressed option Vladimir Sementsov-Ogievskiy
2021-01-29 16:50 ` [PATCH 7/7] simplebench/bench-backup: add target-cache argument Vladimir Sementsov-Ogievskiy
2021-01-29 17:30 ` [PATCH 0/7] qcow2: compressed write cache no-reply
2021-02-01  8:24   ` Vladimir Sementsov-Ogievskiy
2021-02-09 13:25 ` Max Reitz
2021-02-09 14:10   ` Vladimir Sementsov-Ogievskiy
2021-02-09 14:47     ` Max Reitz
2021-02-09 16:39       ` Vladimir Sementsov-Ogievskiy
2021-02-09 18:36         ` Vladimir Sementsov-Ogievskiy
2021-02-09 18:41           ` Denis V. Lunev
2021-02-09 18:51             ` Vladimir Sementsov-Ogievskiy
2021-02-10 10:00               ` Max Reitz [this message]
2021-02-10 10:10                 ` Vladimir Sementsov-Ogievskiy
2021-02-09 16:52       ` Denis V. Lunev
2021-02-10 10:00         ` Max Reitz
2021-02-10 12:35 ` Kevin Wolf
2021-02-10 14:35   ` Vladimir Sementsov-Ogievskiy
