From: Igor Mammedov <imammedo@redhat.com>
To: Juan Quintela <quintela@redhat.com>
Cc: "Marc-André Lureau" <marcandre.lureau@gmail.com>,
QEMU <qemu-devel@nongnu.org>,
xiaoguangrong@tencent.com,
"Eduardo Habkost" <ehabkost@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Stefan Berger" <stefanb@linux.vnet.ibm.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Richard Henderson" <rth@twiddle.net>
Subject: Re: [Qemu-devel] [PATCH v10 3/6] tpm: allocate/map buffer for TPM Physical Presence interface
Date: Tue, 4 Sep 2018 08:51:52 +0200 [thread overview]
Message-ID: <20180904085152.685b47ab@redhat.com> (raw)
In-Reply-To: <87musy1cdc.fsf@trasno.org>
On Mon, 03 Sep 2018 23:48:15 +0200
Juan Quintela <quintela@redhat.com> wrote:
> Marc-André Lureau <marcandre.lureau@gmail.com> wrote:
> > Hi
> >
> > On Fri, Aug 31, 2018 at 7:32 PM Marc-André Lureau
> > <marcandre.lureau@redhat.com> wrote:
> >>
> >> From: Stefan Berger <stefanb@linux.vnet.ibm.com>
> >>
> >> Implement a virtual memory device for the TPM Physical Presence interface.
> >> The memory is located at 0xFED45000 and used by ACPI to send messages to the
> >> firmware (BIOS) and by the firmware to provide parameters for each one of
> >> the supported codes.
> >>
> >> This interface should be used by all TPM devices on x86 and can be
> >> added by calling tpm_ppi_init_io().
> >>
> >> Note: bios_linker cannot be used to allocate the PPI memory region,
> >> since the reserved memory should stay stable across reboots, and might
> >> be needed before the ACPI tables are installed.
> >>
> >> Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
> >> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> >> Reviewed-by: Igor Mammedov <imammedo@redhat.com>
>
>
> ....
>
> >> + */
> >> +#define TPM_PPI_ADDR_SIZE 0x400
> >> +#define TPM_PPI_ADDR_BASE 0xFED45000
>
> > There is a (new) issue with the PPI ram region:
> >
> > READ of size 32 at 0x61d000090480 thread T6
> > #0 0x5622bd8de0f4 in buffer_zero_avx2
> > /home/elmarco/src/qq/util/bufferiszero.c:169
> > #1 0x5622bd8de899 in select_accel_fn
> > /home/elmarco/src/qq/util/bufferiszero.c:282
> > #2 0x5622bd8de8f1 in buffer_is_zero
> > /home/elmarco/src/qq/util/bufferiszero.c:309
> > #3 0x5622bc209f94 in is_zero_range /home/elmarco/src/qq/migration/ram.c:82
> > #4 0x5622bc21938d in save_zero_page_to_file
> > /home/elmarco/src/qq/migration/ram.c:1694
> > #5 0x5622bc219452 in save_zero_page
> > /home/elmarco/src/qq/migration/ram.c:1713
> > #6 0x5622bc21db67 in ram_save_target_page
> > /home/elmarco/src/qq/migration/ram.c:2289
> > #7 0x5622bc21e13e in ram_save_host_page
> > /home/elmarco/src/qq/migration/ram.c:2351
> > #8 0x5622bc21ea3a in ram_find_and_save_block
> > /home/elmarco/src/qq/migration/ram.c:2413
> > #9 0x5622bc223b5d in ram_save_iterate
> > /home/elmarco/src/qq/migration/ram.c:3193
> > #10 0x5622bd16f544 in qemu_savevm_state_iterate
> > /home/elmarco/src/qq/migration/savevm.c:1103
> > #11 0x5622bd157e75 in migration_iteration_run
> > /home/elmarco/src/qq/migration/migration.c:2897
> > #12 0x5622bd15892e in migration_thread
> > /home/elmarco/src/qq/migration/migration.c:3018
> > #13 0x5622bd902f31 in qemu_thread_start
> > /home/elmarco/src/qq/util/qemu-thread-posix.c:504
> > #14 0x7f42f0ef4593 in start_thread (/lib64/libpthread.so.0+0x7593)
> > #15 0x7f42f0c280de in clone (/lib64/libc.so.6+0xfa0de)
> > 0x61d000090490 is located 0 bytes to the right of 2064-byte region
> > [0x61d00008fc80,0x61d000090490)
> >
> > migration code is assuming RAM size is a multiple of TARGET_PAGE_SIZE.
>
> Physical RAM is a multiple of TARGET_PAGE_SIZE O:-)
> That assumption is held in too many places; if you can change the size to
> be a multiple of page size, that would be great.
>
> > Should the migration code be fixed, or should the TPM code allocate
> > ram differently?
>
> Migration people (i.e. me) would prefer that you fix the TPM
> allocation. Or you can decide that this is *not* RAM. The unit of
> transfer for migrating RAM is ram pages, a.k.a. TARGET_PAGE_SIZE.
I'd be loath to waste a whole page in an already cramped area.
Can we implement it as an MMIO region? (It will add some boilerplate code for read/write
handlers/migration and cause vmexits on every read/write, but it's not a hot path, so we might not care.)
> > In any case, I think the migration code should either be fixed or have
> > an assert.
>
> An assert for sure.
>
> Fixed .... Do we have real devices that put RAM regions that are smaller
> than page size?
>
> Later, Juan.
Thread overview: 27+ messages
2018-08-31 17:24 [Qemu-devel] [PATCH v10 0/6] Add support for TPM Physical Presence interface Marc-André Lureau
2018-08-31 17:24 ` [Qemu-devel] [PATCH v10 1/6] hw/i386: add pc-i440fx-3.1 & pc-q35-3.1 Marc-André Lureau
2018-08-31 17:24 ` [Qemu-devel] [PATCH v10 2/6] tpm: add a "ppi" boolean property Marc-André Lureau
2018-08-31 17:24 ` [Qemu-devel] [PATCH v10 3/6] tpm: allocate/map buffer for TPM Physical Presence interface Marc-André Lureau
2018-08-31 23:28 ` Marc-André Lureau
2018-09-03 21:48 ` Juan Quintela
2018-09-04 6:51 ` Igor Mammedov [this message]
2018-09-05 8:21 ` Marc-André Lureau
2018-09-05 8:36 ` Juan Quintela
2018-09-05 9:17 ` Igor Mammedov
2018-08-31 17:24 ` [Qemu-devel] [PATCH v10 4/6] acpi: expose TPM/PPI configuration parameters to firmware via fw_cfg Marc-André Lureau
2018-08-31 17:24 ` [Qemu-devel] [PATCH v10 5/6] acpi: build TPM Physical Presence interface Marc-André Lureau
2018-08-31 17:24 ` [Qemu-devel] [PATCH v10 6/6] tpm: add ACPI memory clear interface Marc-André Lureau
2018-09-04 6:46 ` Igor Mammedov
2018-09-06 3:50 ` Marc-André Lureau
2018-09-06 7:58 ` Igor Mammedov
2018-09-06 8:01 ` Marc-André Lureau
2018-09-06 8:40 ` Igor Mammedov
2018-09-06 8:59 ` Dr. David Alan Gilbert
2018-09-06 9:11 ` Marc-André Lureau
2018-09-06 9:42 ` Dr. David Alan Gilbert
2018-09-06 16:50 ` Marc-André Lureau
2018-09-06 17:23 ` Dr. David Alan Gilbert
2018-09-06 18:58 ` Laszlo Ersek
2018-09-10 10:44 ` Dr. David Alan Gilbert
2018-09-10 13:03 ` Marc-André Lureau
2018-09-11 14:19 ` Laszlo Ersek