From: "Alex Bennée" <alex.bennee@linaro.org>
To: "Philippe Mathieu-Daudé" <philmd@linaro.org>
Cc: qemu-devel@nongnu.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: [PATCH-for-8.0 5/5] accel/tcg: Restrict page_collection structure to system TB maintenance
Date: Fri, 16 Dec 2022 12:22:36 +0000
Message-ID: <87fsdfh70v.fsf@linaro.org>
In-Reply-To: <20221209093649.43738-6-philmd@linaro.org>


Philippe Mathieu-Daudé <philmd@linaro.org> writes:

> Only the system emulation part of TB maintenance uses the
> page_collection structure. Restrict its declaration (and the
> functions requiring it) to tb-maint.c.
>
> Convert the 'len' argument of tb_invalidate_phys_page_locked_fast()
> from signed to unsigned.

You could push that cleanup higher up the call chain, because I think we
only ever pass DATA_SIZE, which is always in a fixed range.
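
To make that concrete, here is a rough sketch (from memory, so treat the
exact signature and field names as approximate rather than the literal
cputlb.c code) of where that value comes from:

  /*
   * Sketch only, not the actual QEMU code: the 'size' that becomes the
   * 'len' argument here is the guest access size (DATA_SIZE in the old
   * softmmu templates), i.e. always 1, 2, 4 or 8 and never negative, so
   * it could stay unsigned (or even narrower) from the top of the call
   * chain down.
   */
  static void notdirty_write(CPUState *cpu, target_ulong mem_vaddr,
                             unsigned size, CPUTLBEntryFull *full,
                             uintptr_t retaddr)
  {
      ram_addr_t ram_addr = mem_vaddr + full->xlat_section;

      if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
          tb_invalidate_phys_range_fast(ram_addr, size, retaddr);
      }
      /* remaining dirty-bitmap handling elided */
  }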

Anyway:

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
>  accel/tcg/internal.h |  7 -------
>  accel/tcg/tb-maint.c | 12 ++++++------
>  2 files changed, 6 insertions(+), 13 deletions(-)
>
> diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
> index db078390b1..6edff16fb0 100644
> --- a/accel/tcg/internal.h
> +++ b/accel/tcg/internal.h
> @@ -36,16 +36,9 @@ void page_table_config_init(void);
>  #endif
>  
>  #ifdef CONFIG_SOFTMMU
> -struct page_collection;
> -void tb_invalidate_phys_page_locked_fast(struct page_collection *pages,
> -                                         tb_page_addr_t start, int len,
> -                                         uintptr_t retaddr);
> -struct page_collection *page_collection_lock(tb_page_addr_t start,
> -                                             tb_page_addr_t end);
>  void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
>                                     unsigned size,
>                                     uintptr_t retaddr);
> -void page_collection_unlock(struct page_collection *set);
>  G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
>  #endif /* CONFIG_SOFTMMU */
>  
> diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
> index 4dc2fa1060..10d7e4b7a8 100644
> --- a/accel/tcg/tb-maint.c
> +++ b/accel/tcg/tb-maint.c
> @@ -523,8 +523,8 @@ static gint tb_page_addr_cmp(gconstpointer ap, gconstpointer bp, gpointer udata)
>   * intersecting TBs.
>   * Locking order: acquire locks in ascending order of page index.
>   */
> -struct page_collection *
> -page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
> +static struct page_collection *page_collection_lock(tb_page_addr_t start,
> +                                                    tb_page_addr_t end)
>  {
>      struct page_collection *set = g_malloc(sizeof(*set));
>      tb_page_addr_t index;
> @@ -568,7 +568,7 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
>      return set;
>  }
>  
> -void page_collection_unlock(struct page_collection *set)
> +static void page_collection_unlock(struct page_collection *set)
>  {
>      /* entries are unlocked and freed via page_entry_destroy */
>      g_tree_destroy(set->tree);
> @@ -1196,9 +1196,9 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
>  /*
>   * Call with all @pages in the range [@start, @start + len[ locked.
>   */
> -void tb_invalidate_phys_page_locked_fast(struct page_collection *pages,
> -                                         tb_page_addr_t start, int len,
> -                                         uintptr_t retaddr)
> +static void tb_invalidate_phys_page_locked_fast(struct page_collection *pages,
> +                                                tb_page_addr_t start,
> +                                                unsigned len, uintptr_t retaddr)
>  {
>      PageDesc *p;


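For the record, the resulting shape of the system-mode path looks roughly
like this (a sketch rather than the literal tb-maint.c code): everything
that touches struct page_collection is now file-local, and
tb_invalidate_phys_range_fast() is the only externally visible entry point:

  void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
                                     unsigned size,
                                     uintptr_t retaddr)
  {
      struct page_collection *pages;

      /* Both helpers are static now, so the struct definition never has
       * to leave tb-maint.c. */
      pages = page_collection_lock(ram_addr, ram_addr + size);
      tb_invalidate_phys_page_locked_fast(pages, ram_addr, size, retaddr);
      page_collection_unlock(pages);
  }
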
-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



Thread overview: 12+ messages
2022-12-09  9:36 [PATCH-for-8.0 0/5] accel/tcg: Restrict page_collection structure to system TB maintenance Philippe Mathieu-Daudé
2022-12-09  9:36 ` [PATCH-for-8.0 1/5] accel/tcg: Restrict cpu_io_recompile() to system emulation Philippe Mathieu-Daudé
2022-12-16 12:07   ` Alex Bennée
2022-12-09  9:36 ` [PATCH-for-8.0 2/5] accel/tcg: Remove trace events from trace-root.h Philippe Mathieu-Daudé
2022-12-16 12:09   ` Alex Bennée
2022-12-09  9:36 ` [PATCH-for-8.0 3/5] accel/tcg: Rename tb_invalidate_phys_page_[locked_]fast() Philippe Mathieu-Daudé
2022-12-16 12:11   ` Alex Bennée
2022-12-16 17:31   ` Richard Henderson
2022-12-09  9:36 ` [PATCH-for-8.0 4/5] accel/tcg: Factor tb_invalidate_phys_range_fast() out Philippe Mathieu-Daudé
2022-12-16 12:17   ` Alex Bennée
2022-12-09  9:36 ` [PATCH-for-8.0 5/5] accel/tcg: Restrict page_collection structure to system TB maintenance Philippe Mathieu-Daudé
2022-12-16 12:22   ` Alex Bennée [this message]
