From: Marc Zyngier <maz@kernel.org>
To: wangwudi <wangwudi@hisilicon.com>
Cc: <linux-kernel@vger.kernel.org>, Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH] irqchip: gic-v3: Collection table support muti pages
Date: Tue, 16 May 2023 08:16:19 +0100 [thread overview]
Message-ID: <87cz30wxto.wl-maz@kernel.org> (raw)
In-Reply-To: <5e42a892-3826-6370-9702-fefee88bf339@hisilicon.com>
On Tue, 16 May 2023 03:53:06 +0100,
wangwudi <wangwudi@hisilicon.com> wrote:
>
>
>
> On 2023/5/16 9:57, wangwudi wrote:
> >
> >
> > -----Original Message-----
> > From: Marc Zyngier [mailto:maz@kernel.org]
> > Sent: May 15, 2023 20:45
> > To: wangwudi <wangwudi@hisilicon.com>
> > Cc: linux-kernel@vger.kernel.org; Thomas Gleixner <tglx@linutronix.de>
> > Subject: Re: [PATCH] irqchip: gic-v3: Collection table support muti pages
> >
> > On Mon, 15 May 2023 13:10:04 +0100,
> > wangwudi <wangwudi@hisilicon.com> wrote:
> >>
> >> Only one page is allocated to the collection table.
> >> Recalculate the number of pages in the collection table based on the
> >> number of CPUs.
> >
> > Please document *why* we should even consider this. Do you know of
> > any existing implementation that is so large (or need so much
> > memory for its collection) that it would result in overflowing the
> > collection table?
>
> Each CPU occupies an entry in the collection table. When there are a
> large number of CPUs and the collection table is only one page, the
> ITS MAPC command fails for some CPUs, and those CPUs cannot receive
> LPI interrupts.
>
> For example, GITS_BASER indicates that the page_size of the
> collection table is 4 KB and the entry size is 16 bytes, so only 256
> entries can be stored in one page. When there are more than 256 CPUs
> (which is common on large SMP server systems), the subsequent CPUs
> cannot receive LPIs.
You're stating the obvious. My question was whether we were anywhere
close to that limit on any existing, or even planned HW.
> It was noticed by code review, not on actual HW.
Right. So let me repeat my question: do you know of any existing or
planned implementation that is all of the following:
- using a small ITS page size
- having large per-collection memory requirements
- with a potentially large number of CPUs
such that CPUs would not fit in the collection table?
Assuming this is the case, is the CPU numbering space so large and
potentially sparse that it would benefit from 2-level tables instead
of a larger single-level table?
Finally, assuming all the above conditions are satisfied, what
actually populates the second level table in your patch? I don't see
anything that does. Which makes me think that it was never properly
tested.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
Thread overview: 5+ messages
2023-05-15 12:10 [PATCH] irqchip: gic-v3: Collection table support muti pages wangwudi
2023-05-15 12:44 ` Marc Zyngier
[not found] ` <41cbc6cb4e964fe0bbba87f52110b1c3@hisilicon.com>
2023-05-16 2:53 ` wangwudi
2023-05-16 7:16 ` Marc Zyngier [this message]
2023-05-22 12:52 ` wangwudi