qemu-devel.nongnu.org archive mirror
From: Kevin Wolf <kwolf@redhat.com>
To: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Cc: "qemu-block@nongnu.org" <qemu-block@nongnu.org>,
	"mreitz@redhat.com" <mreitz@redhat.com>,
	"eblake@redhat.com" <eblake@redhat.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] [PATCH] file-posix: Cache lseek result for data regions
Date: Thu, 24 Jan 2019 16:11:05 +0100	[thread overview]
Message-ID: <20190124151105.GH4601@localhost.localdomain> (raw)
In-Reply-To: <f3b34487-5c86-287e-00b5-331adf9ac931@virtuozzo.com>

On 24.01.2019 at 15:40, Vladimir Sementsov-Ogievskiy wrote:
> 24.01.2019 17:17, Kevin Wolf wrote:
> > Depending on the exact image layout and the storage backend (tmpfs is
> > known to have very slow SEEK_HOLE/SEEK_DATA), caching lseek results can
> > save us a lot of time e.g. during a mirror block job or qemu-img convert
> > with a fragmented source image (.bdrv_co_block_status on the protocol
> > layer can be called for every single cluster in the extreme case).
> > 
> > We may only cache data regions because of possible concurrent writers.
> > This means that we can later treat a recently punched hole as data, but
> > this is safe. We can't cache holes because then we might treat recently
> > written data as holes, which can cause corruption.
> > 
> > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > ---
> >   block/file-posix.c | 51 ++++++++++++++++++++++++++++++++++++++++++++--
> >   1 file changed, 49 insertions(+), 2 deletions(-)
> > 
> > diff --git a/block/file-posix.c b/block/file-posix.c
> > index 8aee7a3fb8..7272c7c99d 100644
> > --- a/block/file-posix.c
> > +++ b/block/file-posix.c
> > @@ -168,6 +168,12 @@ typedef struct BDRVRawState {
> >       bool needs_alignment;
> >       bool check_cache_dropped;
> >   
> > +    struct seek_data_cache {
> > +        bool        valid;
> > +        uint64_t    start;
> > +        uint64_t    end;
> > +    } seek_data_cache;
> 
> Should we have some mutex-locking to protect it?

It is protected by the AioContext lock, like everything else in
BDRVRawState.

Kevin
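
For illustration only: the quoted hunk above shows just the new seek_data_cache
struct, not the rest of the 49 inserted lines, so the following is a stand-alone
sketch of the idea from the commit message rather than the patch itself. Only
data regions are cached, so a stale entry can at worst report a recently punched
hole as data, which is safe. All identifiers here (SeekDataCache, cache_lookup,
cache_store_data, cache_invalidate) are invented for this sketch and are not the
names used in block/file-posix.c.

    /* Stand-alone model of a data-region-only lseek cache (not QEMU code). */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct SeekDataCache {
        bool     valid;
        uint64_t start;     /* inclusive */
        uint64_t end;       /* exclusive */
    } SeekDataCache;

    /* If [offset, offset + bytes) starts inside the cached data region,
     * report how many bytes are known to be data and skip the lseek(). */
    static bool cache_lookup(const SeekDataCache *c, uint64_t offset,
                             uint64_t bytes, uint64_t *pnum)
    {
        if (c->valid && offset >= c->start && offset < c->end) {
            uint64_t avail = c->end - offset;
            *pnum = avail < bytes ? avail : bytes;
            return true;
        }
        return false;
    }

    /* Remember a data region found by an expensive SEEK_DATA/SEEK_HOLE pass.
     * Holes are deliberately never stored: a concurrent writer could turn a
     * hole into data at any time, and reporting real data as a hole would
     * corrupt the destination of a copy. */
    static void cache_store_data(SeekDataCache *c, uint64_t start, uint64_t end)
    {
        c->valid = true;
        c->start = start;
        c->end = end;
    }

    /* An operation that punches a hole (discard, truncation) can drop the
     * cache; even a stale entry only costs efficiency, not correctness,
     * because a punched hole reported as data still reads back as zeros. */
    static void cache_invalidate(SeekDataCache *c)
    {
        c->valid = false;
    }

    int main(void)
    {
        SeekDataCache c = {0};
        uint64_t pnum = 0;

        cache_store_data(&c, 0, 1 << 20);       /* pretend lseek() found data */
        printf("hit: %d\n", cache_lookup(&c, 4096, 65536, &pnum));  /* 1 */

        cache_invalidate(&c);                    /* e.g. after a discard */
        printf("hit: %d\n", cache_lookup(&c, 4096, 65536, &pnum));  /* 0 */
        return 0;
    }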

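Likewise, a minimal stand-alone model (not QEMU code) of the locking point in
the reply above: the cached fields need no mutex of their own because every
caller is assumed to already hold a single context-wide lock, played here by an
ordinary pthread mutex standing in for the AioContext lock. All names are
hypothetical.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct Context {
        pthread_mutex_t lock;   /* stands in for the AioContext lock */
    } Context;

    typedef struct State {
        Context *ctx;           /* context this driver state belongs to */
        bool     cache_valid;   /* protected by ctx->lock, no own mutex */
        uint64_t cache_start;
        uint64_t cache_end;
    } State;

    /* Caller must hold s->ctx->lock, just as the reply says BDRVRawState
     * as a whole is protected by the AioContext lock. */
    static void state_store_data(State *s, uint64_t start, uint64_t end)
    {
        s->cache_valid = true;
        s->cache_start = start;
        s->cache_end   = end;
    }

    int main(void)
    {
        Context ctx = { .lock = PTHREAD_MUTEX_INITIALIZER };
        State s = { .ctx = &ctx };

        pthread_mutex_lock(&ctx.lock);      /* "aio_context_acquire()" */
        state_store_data(&s, 0, 65536);
        pthread_mutex_unlock(&ctx.lock);    /* "aio_context_release()" */

        printf("cached: [%llu, %llu)\n",
               (unsigned long long)s.cache_start,
               (unsigned long long)s.cache_end);
        return 0;
    }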

Thread overview: 15+ messages
2019-01-24 14:17 [Qemu-devel] [PATCH] file-posix: Cache lseek result for data regions Kevin Wolf
2019-01-24 14:40 ` Vladimir Sementsov-Ogievskiy
2019-01-24 15:11   ` Kevin Wolf [this message]
2019-01-24 15:22     ` Vladimir Sementsov-Ogievskiy
2019-01-24 15:42       ` Kevin Wolf
2019-01-25 10:10         ` Paolo Bonzini
2019-01-25 10:30           ` Kevin Wolf
2019-02-04 10:17             ` Paolo Bonzini
2019-01-24 15:56 ` Eric Blake
2019-01-29 10:56   ` Kevin Wolf
2019-01-29 21:03     ` Eric Blake
2019-01-24 16:18 ` Vladimir Sementsov-Ogievskiy
2019-01-24 16:36   ` Kevin Wolf
2019-01-25  9:13     ` Vladimir Sementsov-Ogievskiy
2019-01-25 13:26       ` Eric Blake

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20190124151105.GH4601@localhost.localdomain \
    --to=kwolf@redhat.com \
    --cc=eblake@redhat.com \
    --cc=mreitz@redhat.com \
    --cc=qemu-block@nongnu.org \
    --cc=qemu-devel@nongnu.org \
    --cc=vsementsov@virtuozzo.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

  Be sure your reply has a Subject: header at the top and a blank line
  before the message body.