From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Minchan Kim <minchan@kernel.org>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/6] zram: cond_resched() in writeback loop
Date: Wed, 11 Dec 2024 16:49:53 +0900
Message-ID: <20241211074953.GD2091455@google.com>
In-Reply-To: <20241211041112.GC2091455@google.com>
On (24/12/11 13:11), Sergey Senozhatsky wrote:
> On (24/12/10 16:54), Andrew Morton wrote:
> > On Tue, 10 Dec 2024 19:53:55 +0900 Sergey Senozhatsky <senozhatsky@chromium.org> wrote:
> >
> > > The writeback loop can run for quite a while (depending on
> > > the wb device performance, the compression algorithm and the
> > > number of entries we write back), so we need to call
> > > cond_resched() there, similarly to what we do in the
> > > recompress loop.
> > >
> > > ...
> > >
> > > --- a/drivers/block/zram/zram_drv.c
> > > +++ b/drivers/block/zram/zram_drv.c
> > > @@ -889,6 +889,8 @@ static ssize_t writeback_store(struct device *dev,
> > > next:
> > > zram_slot_unlock(zram, index);
> > > release_pp_slot(zram, pps);
> > > +
> > > + cond_resched();
> > > }
> > >
> > > if (blk_idx)
> >
> > Should this be treated as a hotfix? With a -stable backport?
>
> Actually... can I please ask you to drop this [1] particular patch for
> now? The stall should not happen, because submit_bio_wait() is a
> rescheduling point (in blk_wait_io()). So I'm not sure why I'm seeing
> unhappy watchdogs.
OK, so. submit_bio_wait() is not necessarily a rescheduling point.
By the time it calls blk_wait_io() the I/O may already have completed,
in which case it won't schedule(). Why the I/O would already be
completed is another story.
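
(To make the first part concrete, a simplified sketch from memory of
the wait side, not the verbatim block layer source: waiting on a
completion that has already been signalled returns immediately, so we
never enter schedule())

	/* simplified sketch of the submit_bio_wait() wait path */
	static void blk_wait_io(struct completion *done)
	{
		/*
		 * wait_for_completion_io() short-circuits when the
		 * completion count is already non-zero, i.e. when the
		 * bio's end_io handler has fired before we got here,
		 * so there is no schedule() in that case.
		 */
		wait_for_completion_io(done);
	}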
For instance, the backing device may have the BD_HAS_SUBMIT_BIO bit set,
in which case __submit_bio() calls disk->fops->submit_bio(bio) on the
backing device directly. So on such setups we end up in a loop
for_each (target slot) {
	decompress slot
	submit bio
		disk->fops->submit_bio
}
without rescheduling.
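
Roughly, the dispatch looks like this (a simplified sketch from memory,
see block/blk-core.c for the real thing):

	static void __submit_bio(struct bio *bio)
	{
		if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO)) {
			/* request-based device: goes through blk-mq,
			 * which has its own rescheduling points */
			blk_mq_submit_bio(bio);
		} else {
			/* bio-based driver: direct call into the
			 * driver, nothing here is guaranteed to sleep */
			bio->bi_bdev->bd_disk->fops->submit_bio(bio);
		}
	}

So with a bio-based backing device the writeback loop can stay in one
context the entire time, hence the explicit cond_resched().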
Thread overview: 16+ messages
2024-12-10 10:53 [PATCH 0/6] zram: split page type read/write handling Sergey Senozhatsky
2024-12-10 10:53 ` [PATCH 1/6] zram: cond_resched() in writeback loop Sergey Senozhatsky
2024-12-11 0:54 ` Andrew Morton
2024-12-11 3:43 ` Sergey Senozhatsky
2024-12-11 3:59 ` Sergey Senozhatsky
2024-12-11 4:11 ` Sergey Senozhatsky
2024-12-11 7:49 ` Sergey Senozhatsky [this message]
2024-12-10 10:53 ` [PATCH 2/6] zram: free slot memory early during write Sergey Senozhatsky
2024-12-10 10:53 ` [PATCH 3/6] zram: remove entry element member Sergey Senozhatsky
2024-12-10 10:53 ` [PATCH 4/6] zram: factor out ZRAM_SAME write Sergey Senozhatsky
2024-12-10 10:53 ` [PATCH 5/6] zram: factor out ZRAM_HUGE write Sergey Senozhatsky
2024-12-10 11:31 ` Sergey Senozhatsky
2024-12-11 10:06 ` Sergey Senozhatsky
2024-12-11 23:51 ` Andrew Morton
2024-12-12 3:45 ` Sergey Senozhatsky
2024-12-10 10:54 ` [PATCH 6/6] zram: factor out different page types read Sergey Senozhatsky