From: Michael Ellerman <mpe@ellerman.id.au>
To: Niklas Cassel <cassel@kernel.org>
Cc: "Kolbjørn Barmen" <linux-ppc@kolla.no>,
	linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
	linux-ide@vger.kernel.org, "Jonáš Vidra" <vidra@ufal.mff.cuni.cz>,
	"Christoph Hellwig" <hch@lst.de>,
	linux@roeck-us.net
Subject: Re: Since 6.10 - kernel oops/panics on G4 macmini due to change in drivers/ata/pata_macio.c
Date: Wed, 14 Aug 2024 22:20:55 +1000
Message-ID: <87jzgj1ejc.fsf@mail.lhotse>
In-Reply-To: <Zrt028rSVT5hVPbU@ryzen.lan>

Niklas Cassel <cassel@kernel.org> writes:
> On Tue, Aug 13, 2024 at 10:32:36PM +1000, Michael Ellerman wrote:
>> Niklas Cassel <cassel@kernel.org> writes:
>> > On Tue, Aug 13, 2024 at 07:49:34AM +0200, Jonáš Vidra wrote:
...
>> >> ------------[ cut here ]------------
>> >> kernel BUG at drivers/ata/pata_macio.c:544!
>> >
>> > https://github.com/torvalds/linux/blob/v6.11-rc3/drivers/ata/pata_macio.c#L544
>> >
>> > It seems that the
>> > while (sg_len) loop does not play nice with the new .max_segment_size.
>> 
>> Right, but only for 4KB kernels for some reason. Is there some limit
>> elsewhere that prevents the bug tripping on 64KB kernels, or is it just
>> luck that no one has hit it?
>
> Have you tried running fio (flexible I/O tester) with reads using very
> large block sizes?
>
> I would be surprised if it isn't possible to trigger the same bug with a
> 64K page size.
>
> max segment size = 64K
> MAX_DCMDS = 256
> 256 * 64K = 16 MiB
> What happens if you run fio with a 16 MiB blocksize?
>
> Something like:
> $ sudo fio --name=test --filename=/dev/sdX --direct=1 --runtime=60 --ioengine=io_uring --rw=read --iodepth=4 --bs=16M
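
To make the 256 * 64K arithmetic above concrete, here is a small userspace
model of the command count (a sketch: MAX_DCMDS and MAX_DBDMA_SEG mirror the
constants in pata_macio.c, but the fixed per-segment cap is a simplifying
assumption and ignores how the block layer actually merges and splits
segments, so it is a worst-case bound, not what any given request produces):

#include <stdio.h>

#define MAX_DCMDS     256      /* commands in the driver's DBDMA table */
#define MAX_DBDMA_SEG 0xff00   /* max bytes one DBDMA command can cover */

/* Worst-case number of DBDMA commands pata_macio_qc_prep() would emit
 * for a request of req_bytes, if every scatterlist entry were exactly
 * seg_cap bytes: the driver's "while (sg_len)" loop splits each entry
 * into MAX_DBDMA_SEG-sized commands, and BUG()s if the table overflows
 * (the pata_macio.c:544 BUG quoted above). */
static unsigned int dbdma_cmds(unsigned long long req_bytes,
                               unsigned long seg_cap)
{
        unsigned int cmds = 0;

        while (req_bytes) {
                unsigned long long sg_len =
                        req_bytes < seg_cap ? req_bytes : seg_cap;

                req_bytes -= sg_len;
                while (sg_len) {                /* the loop in question */
                        unsigned long long len =
                                sg_len < MAX_DBDMA_SEG ? sg_len
                                                       : MAX_DBDMA_SEG;
                        sg_len -= len;
                        cmds++;
                }
        }
        return cmds;
}

int main(void)
{
        unsigned long long req = 16ULL << 20;   /* the 16 MiB from above */
        unsigned long caps[] = { 0xff00, 65536 };

        for (int i = 0; i < 2; i++) {
                unsigned int n = dbdma_cmds(req, caps[i]);

                printf("seg cap %#lx: %u commands (table holds %d)%s\n",
                       caps[i], n, MAX_DCMDS,
                       n > MAX_DCMDS ? "  <-- would overflow" : "");
        }
        return 0;
}

With a 64K segment cap each segment needs two commands (0xff00 + 0x100
bytes), so a 16 MiB request models out to 512 commands, twice the table
size, which is consistent with the suggestion that 64K kernels should be
able to trip the same BUG.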

Nothing interesting happens, fio succeeds.

The largest request that comes into pata_macio_qc_prep() is 1280KB,
which results in 40 DMA list entries.

I tried with a larger block size but it doesn't change anything. I guess
there's some limit somewhere else in the stack?

That testing was on qemu, but I don't think it should matter?

I guess there's no way to run the fio test against a file, i.e. without a
raw partition? My real G5 doesn't have any spare disks/partitions in it.
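
(For what it's worth, fio can also drive a regular file rather than a raw
device: point --filename at a path and give a --size for fio to lay the
file out, assuming the filesystem supports O_DIRECT. A sketch, path
hypothetical:

$ sudo fio --name=test --filename=/path/to/testfile --size=1G --direct=1 --runtime=60 --ioengine=io_uring --rw=read --iodepth=4 --bs=16M

Whether reads through a filesystem would still reach the driver as large
contiguous requests is another question.)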

cheers


fio-3.37
Starting 1 process

test: (groupid=0, jobs=1): err= 0: pid=257: Wed Aug 14 22:18:59 2024
  read: IOPS=6, BW=195MiB/s (204MB/s)(96.0MiB/493msec)
    slat (usec): min=32973, max=35222, avg=33836.35, stdev=1212.51
    clat (msec): min=378, max=448, avg=413.35, stdev=34.99
     lat (msec): min=413, max=481, avg=447.19, stdev=33.87
    clat percentiles (msec):
     |  1.00th=[  380],  5.00th=[  380], 10.00th=[  380], 20.00th=[  380],
     | 30.00th=[  380], 40.00th=[  414], 50.00th=[  414], 60.00th=[  414],
     | 70.00th=[  447], 80.00th=[  447], 90.00th=[  447], 95.00th=[  447],
     | 99.00th=[  447], 99.50th=[  447], 99.90th=[  447], 99.95th=[  447],
     | 99.99th=[  447]
   bw (  KiB/s): min=195047, max=195047, per=97.82%, avg=195047.00, stdev= 0.00, samples=1
   iops        : min=    5, max=    5, avg= 5.00, stdev= 0.00, samples=1
  lat (msec)   : 500=100.00%
  cpu          : usr=1.62%, sys=11.97%, ctx=22, majf=0, minf=1540
  IO depths    : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=3,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=195MiB/s (204MB/s), 195MiB/s-195MiB/s (204MB/s-204MB/s), io=96.0MiB (101MB), run=493-493msec

Disk stats (read/write):
  sda: ios=78/0, sectors=196608/0, merge=0/0, ticks=745/0, in_queue=745, util=66.89%



Thread overview: 11+ messages
2024-08-12 22:32 Since 6.10 - kernel oops/panics on G4 macmini due to change in drivers/ata/pata_macio.c Kolbjørn Barmen
2024-08-13  5:49 ` Jonáš Vidra
2024-08-13  9:54   ` Niklas Cassel
2024-08-13  9:58     ` Jonáš Vidra
2024-08-13 12:32     ` Michael Ellerman
2024-08-13 14:33       ` Kolbjørn Barmen
2024-08-13 14:59       ` Niklas Cassel
2024-08-14 12:20         ` Michael Ellerman [this message]
2024-08-14 14:06           ` Niklas Cassel
2024-08-16 23:46             ` Michael Ellerman
2024-08-17  3:32               ` Christoph Hellwig
