* [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
@ 2011-04-29 3:15 Alex Williamson
2011-04-29 15:06 ` Michael S. Tsirkin
2011-05-03 13:15 ` Markus Armbruster
0 siblings, 2 replies; 15+ messages in thread
From: Alex Williamson @ 2011-04-29 3:15 UTC
To: qemu-devel, mst; +Cc: alex.williamson
When we're trying to get a newly registered phys memory client updated
with the current page mappings, we end up passing the region offset
(a ram_addr_t) as the start address rather than the actual guest
physical memory address (target_phys_addr_t). If your guest has less
than 3.5G of memory, these are coincidentally the same thing. If
there's more, the region offset for the memory above 4G starts over
at 0, so the set_memory client will overwrite its lower memory entries.
Instead, keep track of the guest physical address as we're walking the
tables and pass that to the set_memory client.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
---
exec.c | 10 ++++++----
1 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/exec.c b/exec.c
index 4752af1..e670929 100644
--- a/exec.c
+++ b/exec.c
@@ -1742,7 +1742,7 @@ static int cpu_notify_migration_log(int enable)
}
static void phys_page_for_each_1(CPUPhysMemoryClient *client,
- int level, void **lp)
+ int level, void **lp, target_phys_addr_t addr)
{
int i;
@@ -1751,16 +1751,18 @@ static void phys_page_for_each_1(CPUPhysMemoryClient *client,
}
if (level == 0) {
PhysPageDesc *pd = *lp;
+ addr <<= L2_BITS + TARGET_PAGE_BITS;
for (i = 0; i < L2_SIZE; ++i) {
if (pd[i].phys_offset != IO_MEM_UNASSIGNED) {
- client->set_memory(client, pd[i].region_offset,
+ client->set_memory(client, addr | i << TARGET_PAGE_BITS,
TARGET_PAGE_SIZE, pd[i].phys_offset);
}
}
} else {
void **pp = *lp;
for (i = 0; i < L2_SIZE; ++i) {
- phys_page_for_each_1(client, level - 1, pp + i);
+ phys_page_for_each_1(client, level - 1, pp + i,
+ (addr << L2_BITS) | i);
}
}
}
@@ -1770,7 +1772,7 @@ static void phys_page_for_each(CPUPhysMemoryClient *client)
int i;
for (i = 0; i < P_L1_SIZE; ++i) {
phys_page_for_each_1(client, P_L1_SHIFT / L2_BITS - 1,
- l1_phys_map + i);
+ l1_phys_map + i, i);
}
}
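To make the aliasing concrete: on a PC machine the RAM below and above
4G is registered as two regions, and the second region's offsets restart
at zero. A minimal standalone sketch of the collision (illustrative
numbers only, assuming the usual 3.5G split; not QEMU code):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* Assumed 6G guest, typical PC layout:
     *   region A: GPA [0, 3.5G),  region_offset starts at 0
     *   region B: GPA [4G, 6.5G), region_offset starts at 0 again
     */
    uint64_t gpa_low  = 0x040000000ULL;  /* a page at 1G, in region A */
    uint64_t gpa_high = 0x140000000ULL;  /* a page at 5G, in region B */

    uint64_t off_low  = gpa_low;                    /* A begins at GPA 0  */
    uint64_t off_high = gpa_high - 0x100000000ULL;  /* B begins at GPA 4G */

    /* Both print offset 0x040000000: a set_memory client keyed on the
     * offset overwrites its entry for the low page with the high one. */
    printf("GPA 0x%09" PRIx64 " -> region offset 0x%09" PRIx64 "\n",
           gpa_low, off_low);
    printf("GPA 0x%09" PRIx64 " -> region offset 0x%09" PRIx64 "\n",
           gpa_high, off_high);
    return 0;
}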
* Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
2011-04-29 3:15 [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset Alex Williamson
@ 2011-04-29 15:06 ` Michael S. Tsirkin
2011-04-29 15:29 ` Jan Kiszka
2011-05-03 13:15 ` Markus Armbruster
1 sibling, 1 reply; 15+ messages in thread
From: Michael S. Tsirkin @ 2011-04-29 15:06 UTC
To: Alex Williamson; +Cc: qemu-devel
On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
> When we're trying to get a newly registered phys memory client updated
> with the current page mappings, we end up passing the region offset
> (a ram_addr_t) as the start address rather than the actual guest
> physical memory address (target_phys_addr_t). If your guest has less
> than 3.5G of memory, these are coincidentally the same thing. If
> there's more, the region offset for the memory above 4G starts over
> at 0, so the set_memory client will overwrite its lower memory entries.
>
> Instead, keep track of the guest physical address as we're walking the
> tables and pass that to the set_memory client.
>
> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Given all this, can you tell how much time it takes
to hotplug a device with, say, a 40G RAM guest?
> ---
>
> exec.c | 10 ++++++----
> 1 files changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/exec.c b/exec.c
> index 4752af1..e670929 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -1742,7 +1742,7 @@ static int cpu_notify_migration_log(int enable)
> }
>
> static void phys_page_for_each_1(CPUPhysMemoryClient *client,
> - int level, void **lp)
> + int level, void **lp, target_phys_addr_t addr)
> {
> int i;
>
> @@ -1751,16 +1751,18 @@ static void phys_page_for_each_1(CPUPhysMemoryClient *client,
> }
> if (level == 0) {
> PhysPageDesc *pd = *lp;
> + addr <<= L2_BITS + TARGET_PAGE_BITS;
> for (i = 0; i < L2_SIZE; ++i) {
> if (pd[i].phys_offset != IO_MEM_UNASSIGNED) {
> - client->set_memory(client, pd[i].region_offset,
> + client->set_memory(client, addr | i << TARGET_PAGE_BITS,
> TARGET_PAGE_SIZE, pd[i].phys_offset);
> }
> }
> } else {
> void **pp = *lp;
> for (i = 0; i < L2_SIZE; ++i) {
> - phys_page_for_each_1(client, level - 1, pp + i);
> + phys_page_for_each_1(client, level - 1, pp + i,
> + (addr << L2_BITS) | i);
> }
> }
> }
> @@ -1770,7 +1772,7 @@ static void phys_page_for_each(CPUPhysMemoryClient *client)
> int i;
> for (i = 0; i < P_L1_SIZE; ++i) {
> phys_page_for_each_1(client, P_L1_SHIFT / L2_BITS - 1,
> - l1_phys_map + i);
> + l1_phys_map + i, i);
> }
> }
>
* Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
2011-04-29 15:06 ` Michael S. Tsirkin
@ 2011-04-29 15:29 ` Jan Kiszka
2011-04-29 15:34 ` Michael S. Tsirkin
2011-04-29 15:38 ` Alex Williamson
0 siblings, 2 replies; 15+ messages in thread
From: Jan Kiszka @ 2011-04-29 15:29 UTC
To: Michael S. Tsirkin; +Cc: Alex Williamson, qemu-devel
On 2011-04-29 17:06, Michael S. Tsirkin wrote:
> On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
>> When we're trying to get a newly registered phys memory client updated
>> with the current page mappings, we end up passing the region offset
>> (a ram_addr_t) as the start address rather than the actual guest
>> physical memory address (target_phys_addr_t). If your guest has less
>> than 3.5G of memory, these are coincidentally the same thing. If
I think this broke even with < 3.5G as phys_offset also encodes the
memory type while region_offset does not. So everything became RAM this
way, no MMIO was announced.
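The encoding being referred to, sketched as a comment (a from-memory
approximation; the authoritative definitions are the IO_MEM_* macros in
cpu-all.h):

/* phys_offset packs the backing RAM address and the memory type into
 * one ram_addr_t: the bits below TARGET_PAGE_BITS select the handler
 * (IO_MEM_RAM, IO_MEM_ROM, IO_MEM_UNASSIGNED, IO_MEM_NOTDIRTY, or a
 * registered MMIO handler index).  region_offset is just an offset
 * into the containing region with no type bits, so a client fed
 * region_offset cannot tell RAM from MMIO. */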
>> there's more, the region offset for the memory above 4G starts over
>> at 0, so the set_memory client will overwrite its lower memory entries.
>>
>> Instead, keep track of the guest physical address as we're walking the
>> tables and pass that to the set_memory client.
>>
>> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>
> Given all this, can you tell how much time it takes
> to hotplug a device with, say, a 40G RAM guest?
Why not collect pages of identical types and report them as one chunk
once the type changes?
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
* Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
2011-04-29 15:29 ` Jan Kiszka
@ 2011-04-29 15:34 ` Michael S. Tsirkin
2011-04-29 15:41 ` Alex Williamson
2011-04-29 15:38 ` Alex Williamson
1 sibling, 1 reply; 15+ messages in thread
From: Michael S. Tsirkin @ 2011-04-29 15:34 UTC
To: Jan Kiszka; +Cc: Alex Williamson, qemu-devel
On Fri, Apr 29, 2011 at 05:29:06PM +0200, Jan Kiszka wrote:
> On 2011-04-29 17:06, Michael S. Tsirkin wrote:
> > On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
> >> When we're trying to get a newly registered phys memory client updated
> >> with the current page mappings, we end up passing the region offset
> >> (a ram_addr_t) as the start address rather than the actual guest
> >> physical memory address (target_phys_addr_t). If your guest has less
> >> than 3.5G of memory, these are coincidentally the same thing. If
>
> I think this broke even with < 3.5G as phys_offset also encodes the
> memory type while region_offset does not. So everything became RAM this
> way, no MMIO was announced.
>
> >> there's more, the region offset for the memory above 4G starts over
> >> at 0, so the set_memory client will overwrite its lower memory entries.
> >>
> >> Instead, keep track of the guest physical address as we're walking the
> >> tables and pass that to the set_memory client.
> >>
> >> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> >
> > Acked-by: Michael S. Tsirkin <mst@redhat.com>
> >
> > Given all this, can you tell how much time it takes
> > to hotplug a device with, say, a 40G RAM guest?
>
> Why not collect pages of identical types and report them as one chunk
> once the type changes?
Sure, but before we bother to optimize this, is this too slow?
> Jan
>
> --
> Siemens AG, Corporate Technology, CT T DE IT 1
> Corporate Competence Center Embedded Linux
* Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
2011-04-29 15:29 ` Jan Kiszka
2011-04-29 15:34 ` Michael S. Tsirkin
@ 2011-04-29 15:38 ` Alex Williamson
2011-04-29 15:45 ` Jan Kiszka
2011-04-29 16:52 ` Alex Williamson
1 sibling, 2 replies; 15+ messages in thread
From: Alex Williamson @ 2011-04-29 15:38 UTC
To: Jan Kiszka; +Cc: qemu-devel, Michael S. Tsirkin
On Fri, 2011-04-29 at 17:29 +0200, Jan Kiszka wrote:
> On 2011-04-29 17:06, Michael S. Tsirkin wrote:
> > On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
> >> When we're trying to get a newly registered phys memory client updated
> >> with the current page mappings, we end up passing the region offset
> >> (a ram_addr_t) as the start address rather than the actual guest
> >> physical memory address (target_phys_addr_t). If your guest has less
> >> than 3.5G of memory, these are coincidentally the same thing. If
>
> I think this broke even with < 3.5G as phys_offset also encodes the
> memory type while region_offset does not. So everything became RAM this
> way, no MMIO was announced.
>
> >> there's more, the region offset for the memory above 4G starts over
> >> at 0, so the set_memory client will overwrite its lower memory entries.
> >>
> >> Instead, keep track of the guest physical address as we're walking the
> >> tables and pass that to the set_memory client.
> >>
> >> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> >
> > Acked-by: Michael S. Tsirkin <mst@redhat.com>
> >
> > Given all this, can you tell how much time it takes
> > to hotplug a device with, say, a 40G RAM guest?
>
> Why not collect pages of identical types and report them as one chunk
> once the type changes?
Good idea, I'll see if I can code that up. I don't have a terribly
large system to test with, but with an 8G guest, it's surprisingly not
very noticeable. For vfio, I intend to only have one memory client, so
adding additional devices won't have to rescan everything. The memory
overhead of keeping the list that the memory client creates is probably
also low enough that it isn't worthwhile to tear it all down if all the
devices are removed. Thanks,
Alex
* Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
2011-04-29 15:34 ` Michael S. Tsirkin
@ 2011-04-29 15:41 ` Alex Williamson
0 siblings, 0 replies; 15+ messages in thread
From: Alex Williamson @ 2011-04-29 15:41 UTC
To: Michael S. Tsirkin; +Cc: Jan Kiszka, qemu-devel
On Fri, 2011-04-29 at 18:34 +0300, Michael S. Tsirkin wrote:
> On Fri, Apr 29, 2011 at 05:29:06PM +0200, Jan Kiszka wrote:
> > On 2011-04-29 17:06, Michael S. Tsirkin wrote:
> > > On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
> > >> When we're trying to get a newly registered phys memory client updated
> > >> with the current page mappings, we end up passing the region offset
> > >> (a ram_addr_t) as the start address rather than the actual guest
> > >> physical memory address (target_phys_addr_t). If your guest has less
> > >> than 3.5G of memory, these are coincidentally the same thing. If
> >
> > I think this broke even with < 3.5G as phys_offset also encodes the
> > memory type while region_offset does not. So everything became RAM this
> > way, no MMIO was announced.
> >
> > >> there's more, the region offset for the memory above 4G starts over
> > >> at 0, so the set_memory client will overwrite its lower memory entries.
> > >>
> > >> Instead, keep track of the guest physical address as we're walking the
> > >> tables and pass that to the set_memory client.
> > >>
> > >> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> > >
> > > Acked-by: Michael S. Tsirkin <mst@redhat.com>
> > >
> > > Given all this, can you tell how much time it takes
> > > to hotplug a device with, say, a 40G RAM guest?
> >
> > Why not collect pages of identical types and report them as one chunk
> > once the type changes?
>
> Sure, but before we bother to optimize this, is this too slow?
At a set_memory call per 4k page, it's probably worthwhile to factor in
some simple optimizations. My set_memory callback was being hit 10^6
times. Thanks,
Alex
* Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
2011-04-29 15:38 ` Alex Williamson
@ 2011-04-29 15:45 ` Jan Kiszka
2011-04-29 15:55 ` Alex Williamson
2011-04-29 16:52 ` Alex Williamson
1 sibling, 1 reply; 15+ messages in thread
From: Jan Kiszka @ 2011-04-29 15:45 UTC
To: Alex Williamson; +Cc: qemu-devel@nongnu.org, Michael S. Tsirkin
On 2011-04-29 17:38, Alex Williamson wrote:
> On Fri, 2011-04-29 at 17:29 +0200, Jan Kiszka wrote:
>> On 2011-04-29 17:06, Michael S. Tsirkin wrote:
>>> On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
>>>> When we're trying to get a newly registered phys memory client updated
>>>> with the current page mappings, we end up passing the region offset
>>>> (a ram_addr_t) as the start address rather than the actual guest
>>>> physical memory address (target_phys_addr_t). If your guest has less
>>>> than 3.5G of memory, these are coincidentally the same thing. If
>>
>> I think this broke even with < 3.5G as phys_offset also encodes the
>> memory type while region_offset does not. So everything became RAM this
>> way, no MMIO was announced.
>>
>>>> there's more, the region offset for the memory above 4G starts over
>>>> at 0, so the set_memory client will overwrite its lower memory entries.
>>>>
>>>> Instead, keep track of the guest physical address as we're walking the
>>>> tables and pass that to the set_memory client.
>>>>
>>>> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
>>>
>>> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>>>
>>> Given all this, can you tell how much time it takes
>>> to hotplug a device with, say, a 40G RAM guest?
>>
>> Why not collect pages of identical types and report them as one chunk
>> once the type changes?
>
> Good idea, I'll see if I can code that up. I don't have a terribly
> large system to test with, but with an 8G guest, it's surprisingly not
> very noticeable. For vfio, I intend to only have one memory client, so
> adding additional devices won't have to rescan everything. The memory
> overhead of keeping the list that the memory client creates is probably
> also low enough that it isn't worthwhile to tear it all down if all the
> devices are removed. Thanks,
What other clients register late? Do they need to know the whole memory
layout?
This full page table walk is likely a latency killer as it happens under
global lock. Ugly.
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
* Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
2011-04-29 15:45 ` Jan Kiszka
@ 2011-04-29 15:55 ` Alex Williamson
2011-04-29 16:07 ` Jan Kiszka
0 siblings, 1 reply; 15+ messages in thread
From: Alex Williamson @ 2011-04-29 15:55 UTC
To: Jan Kiszka; +Cc: qemu-devel@nongnu.org, Michael S. Tsirkin
On Fri, 2011-04-29 at 17:45 +0200, Jan Kiszka wrote:
> On 2011-04-29 17:38, Alex Williamson wrote:
> > On Fri, 2011-04-29 at 17:29 +0200, Jan Kiszka wrote:
> >> On 2011-04-29 17:06, Michael S. Tsirkin wrote:
> >>> On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
> >>>> When we're trying to get a newly registered phys memory client updated
> >>>> with the current page mappings, we end up passing the region offset
> >>>> (a ram_addr_t) as the start address rather than the actual guest
> >>>> physical memory address (target_phys_addr_t). If your guest has less
> >>>> than 3.5G of memory, these are coincidentally the same thing. If
> >>
> >> I think this broke even with < 3.5G as phys_offset also encodes the
> >> memory type while region_offset does not. So everything became RAM this
> >> way, no MMIO was announced.
> >>
> >>>> there's more, the region offset for the memory above 4G starts over
> >>>> at 0, so the set_memory client will overwrite its lower memory entries.
> >>>>
> >>>> Instead, keep track of the guest physical address as we're walking the
> >>>> tables and pass that to the set_memory client.
> >>>>
> >>>> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> >>>
> >>> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> >>>
> >>> Given all this, can you tell how much time it takes
> >>> to hotplug a device with, say, a 40G RAM guest?
> >>
> >> Why not collect pages of identical types and report them as one chunk
> >> once the type changes?
> >
> > Good idea, I'll see if I can code that up. I don't have a terribly
> > large system to test with, but with an 8G guest, it's surprisingly not
> > very noticeable. For vfio, I intend to only have one memory client, so
> > adding additional devices won't have to rescan everything. The memory
> > overhead of keeping the list that the memory client creates is probably
> > also low enough that it isn't worthwhile to tear it all down if all the
> > devices are removed. Thanks,
>
> What other clients register late? Do they need to know the whole memory
> layout?
>
> This full page table walk is likely a latency killer as it happens under
> global lock. Ugly.
vhost and kvm are the only current users. kvm registers its client
early enough that there's no memory registered, so doesn't really need
this replay through the page table walk. I'm not sure how vhost works
currently. I'm also looking at using this for vfio to register pages
for the iommu.
Alex
* Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
2011-04-29 15:55 ` Alex Williamson
@ 2011-04-29 16:07 ` Jan Kiszka
2011-04-29 16:20 ` Alex Williamson
0 siblings, 1 reply; 15+ messages in thread
From: Jan Kiszka @ 2011-04-29 16:07 UTC
To: Alex Williamson; +Cc: qemu-devel@nongnu.org, Michael S. Tsirkin
On 2011-04-29 17:55, Alex Williamson wrote:
> On Fri, 2011-04-29 at 17:45 +0200, Jan Kiszka wrote:
>> On 2011-04-29 17:38, Alex Williamson wrote:
>>> On Fri, 2011-04-29 at 17:29 +0200, Jan Kiszka wrote:
>>>> On 2011-04-29 17:06, Michael S. Tsirkin wrote:
>>>>> On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
>>>>>> When we're trying to get a newly registered phys memory client updated
>>>>>> with the current page mappings, we end up passing the region offset
>>>>>> (a ram_addr_t) as the start address rather than the actual guest
>>>>>> physical memory address (target_phys_addr_t). If your guest has less
>>>>>> than 3.5G of memory, these are coincidentally the same thing. If
>>>>
>>>> I think this broke even with < 3.5G as phys_offset also encodes the
>>>> memory type while region_offset does not. So everything became RAM this
>>>> way, no MMIO was announced.
>>>>
>>>>>> there's more, the region offset for the memory above 4G starts over
>>>>>> at 0, so the set_memory client will overwrite its lower memory entries.
>>>>>>
>>>>>> Instead, keep track of the guest physical address as we're walking the
>>>>>> tables and pass that to the set_memory client.
>>>>>>
>>>>>> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
>>>>>
>>>>> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>>>>>
>>>>> Given all this, can you tell how much time it takes
>>>>> to hotplug a device with, say, a 40G RAM guest?
>>>>
>>>> Why not collect pages of identical types and report them as one chunk
>>>> once the type changes?
>>>
>>> Good idea, I'll see if I can code that up. I don't have a terribly
>>> large system to test with, but with an 8G guest, it's surprisingly not
>>> very noticeable. For vfio, I intend to only have one memory client, so
>>> adding additional devices won't have to rescan everything. The memory
>>> overhead of keeping the list that the memory client creates is probably
>>> also low enough that it isn't worthwhile to tear it all down if all the
>>> devices are removed. Thanks,
>>
>> What other clients register late? Do they need to know the whole memory
>> layout?
>>
>> This full page table walk is likely a latency killer as it happens under
>> global lock. Ugly.
>
> vhost and kvm are the only current users. kvm registers its client
> early enough that there's no memory registered, so doesn't really need
> this replay through the page table walk. I'm not sure how vhost works
> currently. I'm also looking at using this for vfio to register pages
> for the iommu.
Hmm, it looks like vhost is basically recreating the condensed, slotted
memory layout from the per-page reports now. A bit inefficient,
specifically as this happens per vhost device, no? And if vfio preferred
a slotted format as well, you would end up copying vhost logic.
That sounds to me like the qemu core should start tracking slots and
report slot changes, not memory region registrations.
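Something along these lines, as a purely hypothetical sketch of a
core-maintained slot list (all names invented, not an existing API):

#include <stdint.h>

typedef uint64_t hwaddr_t;   /* stand-in for target_phys_addr_t */
typedef uint64_t ramaddr_t;  /* stand-in for ram_addr_t */

/* One entry per contiguous run of identically-typed memory. */
typedef struct MemSlot {
    hwaddr_t  start_addr;    /* guest physical base */
    ramaddr_t size;          /* length of the run */
    ramaddr_t phys_offset;   /* backing offset plus type bits */
} MemSlot;

/* Clients would receive whole-slot deltas instead of per-page reports,
 * and a late registrant could simply be replayed the current list. */
typedef struct SlotClient {
    void (*slot_added)(struct SlotClient *c, const MemSlot *slot);
    void (*slot_removed)(struct SlotClient *c, const MemSlot *slot);
} SlotClient;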
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
* Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
2011-04-29 16:07 ` Jan Kiszka
@ 2011-04-29 16:20 ` Alex Williamson
2011-04-29 16:31 ` Jan Kiszka
0 siblings, 1 reply; 15+ messages in thread
From: Alex Williamson @ 2011-04-29 16:20 UTC
To: Jan Kiszka; +Cc: qemu-devel@nongnu.org, Michael S. Tsirkin
On Fri, 2011-04-29 at 18:07 +0200, Jan Kiszka wrote:
> On 2011-04-29 17:55, Alex Williamson wrote:
> > On Fri, 2011-04-29 at 17:45 +0200, Jan Kiszka wrote:
> >> On 2011-04-29 17:38, Alex Williamson wrote:
> >>> On Fri, 2011-04-29 at 17:29 +0200, Jan Kiszka wrote:
> >>>> On 2011-04-29 17:06, Michael S. Tsirkin wrote:
> >>>>> On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
> >>>>>> When we're trying to get a newly registered phys memory client updated
> >>>>>> with the current page mappings, we end up passing the region offset
> >>>>>> (a ram_addr_t) as the start address rather than the actual guest
> >>>>>> physical memory address (target_phys_addr_t). If your guest has less
> >>>>>> than 3.5G of memory, these are coincidentally the same thing. If
> >>>>
> >>>> I think this broke even with < 3.5G as phys_offset also encodes the
> >>>> memory type while region_offset does not. So everything became RAM this
> >>>> way, no MMIO was announced.
> >>>>
> >>>>>> there's more, the region offset for the memory above 4G starts over
> >>>>>> at 0, so the set_memory client will overwrite its lower memory entries.
> >>>>>>
> >>>>>> Instead, keep track of the guest physical address as we're walking the
> >>>>>> tables and pass that to the set_memory client.
> >>>>>>
> >>>>>> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> >>>>>
> >>>>> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> >>>>>
> >>>>> Given all this, can you tell how much time it takes
> >>>>> to hotplug a device with, say, a 40G RAM guest?
> >>>>
> >>>> Why not collect pages of identical types and report them as one chunk
> >>>> once the type changes?
> >>>
> >>> Good idea, I'll see if I can code that up. I don't have a terribly
> >>> large system to test with, but with an 8G guest, it's surprisingly not
> >>> very noticeable. For vfio, I intend to only have one memory client, so
> >>> adding additional devices won't have to rescan everything. The memory
> >>> overhead of keeping the list that the memory client creates is probably
> >>> also low enough that it isn't worthwhile to tear it all down if all the
> >>> devices are removed. Thanks,
> >>
> >> What other clients register late? Do they need to know the whole memory
> >> layout?
> >>
> >> This full page table walk is likely a latency killer as it happens under
> >> global lock. Ugly.
> >
> > vhost and kvm are the only current users. kvm registers its client
> > early enough that there's no memory registered, so doesn't really need
> > this replay through the page table walk. I'm not sure how vhost works
> > currently. I'm also looking at using this for vfio to register pages
> > for the iommu.
>
> Hmm, it looks like vhost is basically recreating the condensed, slotted
> memory layout from the per-page reports now. A bit inefficient,
> specifically as this happens per vhost device, no? And if vfio preferred
> a slotted format as well, you would end up copying vhost logic.
>
> That sounds to me like the qemu core should start tracking slots and
> report slot changes, not memory region registrations.
I was thinking the same thing, but I think Michael is concerned if we'll
each need slightly different lists. This is also where kvm is mapping
to a fixed array of slots, which is known to blow up with too many
assigned devices. Needs to be fixed on both kernel and qemu side.
Runtime overhead of the phys memory client is pretty minimal, it's just
the startup that thrashes set_memory.
Alex
* Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
2011-04-29 16:20 ` Alex Williamson
@ 2011-04-29 16:31 ` Jan Kiszka
2011-05-01 10:29 ` Michael S. Tsirkin
0 siblings, 1 reply; 15+ messages in thread
From: Jan Kiszka @ 2011-04-29 16:31 UTC
To: Alex Williamson; +Cc: qemu-devel@nongnu.org, Michael S. Tsirkin
On 2011-04-29 18:20, Alex Williamson wrote:
> On Fri, 2011-04-29 at 18:07 +0200, Jan Kiszka wrote:
>> On 2011-04-29 17:55, Alex Williamson wrote:
>>> On Fri, 2011-04-29 at 17:45 +0200, Jan Kiszka wrote:
>>>> On 2011-04-29 17:38, Alex Williamson wrote:
>>>>> On Fri, 2011-04-29 at 17:29 +0200, Jan Kiszka wrote:
>>>>>> On 2011-04-29 17:06, Michael S. Tsirkin wrote:
>>>>>>> On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
>>>>>>>> When we're trying to get a newly registered phys memory client updated
>>>>>>>> with the current page mappings, we end up passing the region offset
>>>>>>>> (a ram_addr_t) as the start address rather than the actual guest
>>>>>>>> physical memory address (target_phys_addr_t). If your guest has less
>>>>>>>> than 3.5G of memory, these are coincidentally the same thing. If
>>>>>>
>>>>>> I think this broke even with < 3.5G as phys_offset also encodes the
>>>>>> memory type while region_offset does not. So everything became RAM this
>>>>>> way, no MMIO was announced.
>>>>>>
>>>>>>>> there's more, the region offset for the memory above 4G starts over
>>>>>>>> at 0, so the set_memory client will overwrite its lower memory entries.
>>>>>>>>
>>>>>>>> Instead, keep track of the guest physical address as we're walking the
>>>>>>>> tables and pass that to the set_memory client.
>>>>>>>>
>>>>>>>> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
>>>>>>>
>>>>>>> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>>>>>>>
>>>>>>> Given all this, can you tell how much time it takes
>>>>>>> to hotplug a device with, say, a 40G RAM guest?
>>>>>>
>>>>>> Why not collect pages of identical types and report them as one chunk
>>>>>> once the type changes?
>>>>>
>>>>> Good idea, I'll see if I can code that up. I don't have a terribly
>>>>> large system to test with, but with an 8G guest, it's surprisingly not
>>>>> very noticeable. For vfio, I intend to only have one memory client, so
>>>>> adding additional devices won't have to rescan everything. The memory
>>>>> overhead of keeping the list that the memory client creates is probably
>>>>> also low enough that it isn't worthwhile to tear it all down if all the
>>>>> devices are removed. Thanks,
>>>>
>>>> What other clients register late? Do they need to know the whole memory
>>>> layout?
>>>>
>>>> This full page table walk is likely a latency killer as it happens under
>>>> global lock. Ugly.
>>>
>>> vhost and kvm are the only current users. kvm registers its client
>>> early enough that there's no memory registered, so doesn't really need
>>> this replay through the page table walk. I'm not sure how vhost works
>>> currently. I'm also looking at using this for vfio to register pages
>>> for the iommu.
>>
>> Hmm, it looks like vhost is basically recreating the condensed, slotted
>> memory layout from the per-page reports now. A bit inefficient,
>> specifically as this happens per vhost device, no? And if vfio preferred
>> a slotted format as well, you would end up copying vhost logic.
>>
>> That sounds to me like the qemu core should start tracking slots and
>> report slot changes, not memory region registrations.
>
> I was thinking the same thing, but I think Michael is concerned if we'll
> each need slightly different lists. This is also where kvm is mapping
> to a fixed array of slots, which is known to blow up with too many
> assigned devices. Needs to be fixed on both kernel and qemu side.
> Runtime overhead of the phys memory client is pretty minimal, it's just
> the startup that thrashes set_memory.
I'm not just concerned about the runtime overhead. This is code
duplication. Even if the formats of the lists differ, their structure
should not: one entry per contiguous memory region, and some lists may
track sparsely based on their interests.
I'm sure the core could be taught to help the clients create and
maintain such lists. We already have two types of users in tree, you
are about to create another one, and Xen should have some need for it as
well.
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
* Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
2011-04-29 15:38 ` Alex Williamson
2011-04-29 15:45 ` Jan Kiszka
@ 2011-04-29 16:52 ` Alex Williamson
1 sibling, 0 replies; 15+ messages in thread
From: Alex Williamson @ 2011-04-29 16:52 UTC
To: Jan Kiszka; +Cc: qemu-devel, Michael S. Tsirkin
On Fri, 2011-04-29 at 09:38 -0600, Alex Williamson wrote:
> On Fri, 2011-04-29 at 17:29 +0200, Jan Kiszka wrote:
> > On 2011-04-29 17:06, Michael S. Tsirkin wrote:
> > > On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
> > >> When we're trying to get a newly registered phys memory client updated
> > >> with the current page mappings, we end up passing the region offset
> > >> (a ram_addr_t) as the start address rather than the actual guest
> > >> physical memory address (target_phys_addr_t). If your guest has less
> > >> than 3.5G of memory, these are coincidentally the same thing. If
> >
> > I think this broke even with < 3.5G as phys_offset also encodes the
> > memory type while region_offset does not. So everything became RAM this
> > way, no MMIO was announced.
> >
> > >> there's more, the region offset for the memory above 4G starts over
> > >> at 0, so the set_memory client will overwrite its lower memory entries.
> > >>
> > >> Instead, keep track of the guest physical address as we're walking the
> > >> tables and pass that to the set_memory client.
> > >>
> > >> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> > >
> > > Acked-by: Michael S. Tsirkin <mst@redhat.com>
> > >
> > > Given all this, can you tell how much time it takes
> > > to hotplug a device with, say, a 40G RAM guest?
> >
> > Why not collect pages of identical types and report them as one chunk
> > once the type changes?
>
> Good idea, I'll see if I can code that up. I don't have a terribly
> large system to test with, but with an 8G guest, it's surprisingly not
> very noticeable. For vfio, I intend to only have one memory client, so
> adding additional devices won't have to rescan everything. The memory
> overhead of keeping the list that the memory client creates is probably
> also low enough that it isn't worthwhile to tear it all down if all the
> devices are removed. Thanks,
Here's a first pass at a patch to do this. For a 4G guest, it reduces
the number of registration-induced set_memory callbacks from 1048866 to
296.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
---
diff --git a/exec.c b/exec.c
index e670929..5510b0b 100644
--- a/exec.c
+++ b/exec.c
@@ -1741,8 +1741,15 @@ static int cpu_notify_migration_log(int enable)
return 0;
}
+struct last_map {
+ target_phys_addr_t start_addr;
+ ram_addr_t size;
+ ram_addr_t phys_offset;
+};
+
static void phys_page_for_each_1(CPUPhysMemoryClient *client,
- int level, void **lp, target_phys_addr_t addr)
+ int level, void **lp,
+ target_phys_addr_t addr, struct last_map *map)
{
int i;
@@ -1754,15 +1761,28 @@ static void phys_page_for_each_1(CPUPhysMemoryClient *client,
addr <<= L2_BITS + TARGET_PAGE_BITS;
for (i = 0; i < L2_SIZE; ++i) {
if (pd[i].phys_offset != IO_MEM_UNASSIGNED) {
- client->set_memory(client, addr | i << TARGET_PAGE_BITS,
- TARGET_PAGE_SIZE, pd[i].phys_offset);
+ target_phys_addr_t cur = addr | i << TARGET_PAGE_BITS;
+ if (map->size &&
+ cur == map->start_addr + map->size &&
+ pd[i].phys_offset == map->phys_offset + map->size) {
+
+ map->size += TARGET_PAGE_SIZE;
+ continue;
+ } else if (map->size) {
+ client->set_memory(client, map->start_addr,
+ map->size, map->phys_offset);
+ }
+
+ map->start_addr = addr | i << TARGET_PAGE_BITS;
+ map->size = TARGET_PAGE_SIZE;
+ map->phys_offset = pd[i].phys_offset;
}
}
} else {
void **pp = *lp;
for (i = 0; i < L2_SIZE; ++i) {
phys_page_for_each_1(client, level - 1, pp + i,
- (addr << L2_BITS) | i);
+ (addr << L2_BITS) | i, map);
}
}
}
@@ -1770,9 +1790,15 @@ static void phys_page_for_each_1(CPUPhysMemoryClient *client,
static void phys_page_for_each(CPUPhysMemoryClient *client)
{
int i;
+ struct last_map map = { 0 };
+
for (i = 0; i < P_L1_SIZE; ++i) {
phys_page_for_each_1(client, P_L1_SHIFT / L2_BITS - 1,
- l1_phys_map + i, i);
+ l1_phys_map + i, i, &map);
+ }
+ if (map.size) {
+ client->set_memory(client, map.start_addr,
+ map.size, map.phys_offset);
}
}
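The merge condition above, restated as a self-contained sketch in case
the diff context obscures it (hypothetical names, fixed 4k pages):

#include <stdint.h>

#define PAGE_SIZE 4096ULL

struct run {
    uint64_t start;    /* guest physical start of the open run */
    uint64_t size;     /* bytes accumulated; 0 means no open run */
    uint64_t offset;   /* phys_offset of the run's first page */
};

typedef void (*flush_fn)(uint64_t start, uint64_t size, uint64_t offset);

/* Feed one mapped page: extend the run while both the guest address and
 * the backing offset stay contiguous, otherwise flush the finished run
 * and open a new one.  The caller flushes once more after the walk for
 * the final run, as phys_page_for_each() does above. */
static void add_page(struct run *r, uint64_t addr, uint64_t offset,
                     flush_fn flush)
{
    if (r->size && addr == r->start + r->size &&
        offset == r->offset + r->size) {
        r->size += PAGE_SIZE;
        return;
    }
    if (r->size) {
        flush(r->start, r->size, r->offset);
    }
    r->start = addr;
    r->size = PAGE_SIZE;
    r->offset = offset;
}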
* Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
2011-04-29 16:31 ` Jan Kiszka
@ 2011-05-01 10:29 ` Michael S. Tsirkin
0 siblings, 0 replies; 15+ messages in thread
From: Michael S. Tsirkin @ 2011-05-01 10:29 UTC
To: Jan Kiszka; +Cc: Alex Williamson, qemu-devel@nongnu.org
On Fri, Apr 29, 2011 at 06:31:03PM +0200, Jan Kiszka wrote:
> On 2011-04-29 18:20, Alex Williamson wrote:
> > On Fri, 2011-04-29 at 18:07 +0200, Jan Kiszka wrote:
> >> On 2011-04-29 17:55, Alex Williamson wrote:
> >>> On Fri, 2011-04-29 at 17:45 +0200, Jan Kiszka wrote:
> >>>> On 2011-04-29 17:38, Alex Williamson wrote:
> >>>>> On Fri, 2011-04-29 at 17:29 +0200, Jan Kiszka wrote:
> >>>>>> On 2011-04-29 17:06, Michael S. Tsirkin wrote:
> >>>>>>> On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
> >>>>>>>> When we're trying to get a newly registered phys memory client updated
> >>>>>>>> with the current page mappings, we end up passing the region offset
> >>>>>>>> (a ram_addr_t) as the start address rather than the actual guest
> >>>>>>>> physical memory address (target_phys_addr_t). If your guest has less
> >>>>>>>> than 3.5G of memory, these are coincidentally the same thing. If
> >>>>>>
> >>>>>> I think this broke even with < 3.5G as phys_offset also encodes the
> >>>>>> memory type while region_offset does not. So everything became RAM this
> >>>>>> way, no MMIO was announced.
> >>>>>>
> >>>>>>>> there's more, the region offset for the memory above 4G starts over
> >>>>>>>> at 0, so the set_memory client will overwrite its lower memory entries.
> >>>>>>>>
> >>>>>>>> Instead, keep track of the guest physical address as we're walking the
> >>>>>>>> tables and pass that to the set_memory client.
> >>>>>>>>
> >>>>>>>> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> >>>>>>>
> >>>>>>> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> >>>>>>>
> >>>>>>> Given all this, can you tell how much time it takes
> >>>>>>> to hotplug a device with, say, a 40G RAM guest?
> >>>>>>
> >>>>>> Why not collect pages of identical types and report them as one chunk
> >>>>>> once the type changes?
> >>>>>
> >>>>> Good idea, I'll see if I can code that up. I don't have a terribly
> >>>>> large system to test with, but with an 8G guest, it's surprisingly not
> >>>>> very noticeable. For vfio, I intend to only have one memory client, so
> >>>>> adding additional devices won't have to rescan everything. The memory
> >>>>> overhead of keeping the list that the memory client creates is probably
> >>>>> also low enough that it isn't worthwhile to tear it all down if all the
> >>>>> devices are removed. Thanks,
> >>>>
> >>>> What other clients register late? Do they need to know the whole memory
> >>>> layout?
> >>>>
> >>>> This full page table walk is likely a latency killer as it happens under
> >>>> global lock. Ugly.
> >>>
> >>> vhost and kvm are the only current users. kvm registers its client
> >>> early enough that there's no memory registered, so doesn't really need
> >>> this replay through the page table walk. I'm not sure how vhost works
> >>> currently. I'm also looking at using this for vfio to register pages
> >>> for the iommu.
> >>
> >> Hmm, it looks like vhost is basically recreating the condensed, slotted
> >> memory layout from the per-page reports now. A bit inefficient,
> >> specifically as this happens per vhost device, no? And if vfio preferred
> >> a slotted format as well, you would end up copying vhost logic.
> >>
> >> That sounds to me like the qemu core should start tracking slots and
> >> report slot changes, not memory region registrations.
> >
> > I was thinking the same thing, but I think Michael is concerned if we'll
> > each need slightly different lists. This is also where kvm is mapping
> > to a fixed array of slots, which is known to blow up with too many
> > assigned devices. Needs to be fixed on both kernel and qemu side.
> > Runtime overhead of the phys memory client is pretty minimal, it's just
> > the startup that thrashes set_memory.
>
> I'm not just concerned about the runtime overhead. This is code
> duplication. Even if the formats of the lists differ, their structure
> should not: one entry per contiguous memory region, and some lists may
> track sparsely based on their interests.
>
> I'm sure the core could be taught to help the clients create and
> maintain such lists. We already have two types of users in tree, you
> are about to create another one, and Xen should have some need for it as
> well.
>
> Jan
Absolutely. There should be some common code to deal with
slots.
> --
> Siemens AG, Corporate Technology, CT T DE IT 1
> Corporate Competence Center Embedded Linux
* Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
2011-04-29 3:15 [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset Alex Williamson
2011-04-29 15:06 ` Michael S. Tsirkin
@ 2011-05-03 13:15 ` Markus Armbruster
2011-05-03 14:20 ` Alex Williamson
1 sibling, 1 reply; 15+ messages in thread
From: Markus Armbruster @ 2011-05-03 13:15 UTC
To: Alex Williamson; +Cc: qemu-devel, mst
Alex Williamson <alex.williamson@redhat.com> writes:
> When we're trying to get a newly registered phys memory client updated
> with the current page mappings, we end up passing the region offset
> (a ram_addr_t) as the start address rather than the actual guest
> physical memory address (target_phys_addr_t). If your guest has less
> than 3.5G of memory, these are coincidentally the same thing. If
> there's more, the region offset for the memory above 4G starts over
> at 0, so the set_memory client will overwrite its lower memory entries.
>
> Instead, keep track of the guest physical address as we're walking the
> tables and pass that to the set_memory client.
>
> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> ---
>
> exec.c | 10 ++++++----
> 1 files changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/exec.c b/exec.c
> index 4752af1..e670929 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -1742,7 +1742,7 @@ static int cpu_notify_migration_log(int enable)
> }
>
> static void phys_page_for_each_1(CPUPhysMemoryClient *client,
> - int level, void **lp)
> + int level, void **lp, target_phys_addr_t addr)
> {
> int i;
>
Aren't you abusing target_phys_addr_t here? It's not a physical
address, it needs to be shifted left to become one. By how much depends
on level. Please take pity on future maintainers and spell this out in
a comment.
Perhaps you can code it in a way that makes the parameter an address.
Probably no need for a comment then.
> @@ -1751,16 +1751,18 @@ static void phys_page_for_each_1(CPUPhysMemoryClient *client,
> }
> if (level == 0) {
> PhysPageDesc *pd = *lp;
> + addr <<= L2_BITS + TARGET_PAGE_BITS;
> for (i = 0; i < L2_SIZE; ++i) {
> if (pd[i].phys_offset != IO_MEM_UNASSIGNED) {
> - client->set_memory(client, pd[i].region_offset,
> + client->set_memory(client, addr | i << TARGET_PAGE_BITS,
> TARGET_PAGE_SIZE, pd[i].phys_offset);
> }
> }
> } else {
> void **pp = *lp;
> for (i = 0; i < L2_SIZE; ++i) {
> - phys_page_for_each_1(client, level - 1, pp + i);
> + phys_page_for_each_1(client, level - 1, pp + i,
> + (addr << L2_BITS) | i);
> }
> }
> }
> @@ -1770,7 +1772,7 @@ static void phys_page_for_each(CPUPhysMemoryClient *client)
> int i;
> for (i = 0; i < P_L1_SIZE; ++i) {
> phys_page_for_each_1(client, P_L1_SHIFT / L2_BITS - 1,
> - l1_phys_map + i);
> + l1_phys_map + i, i);
> }
> }
>
Fix makes sense to me, after some head scratching. A comment explaining
the phys map data structure would be helpful. l1_phys_map[] has a
comment, but it's devoid of detail.
* Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
2011-05-03 13:15 ` Markus Armbruster
@ 2011-05-03 14:20 ` Alex Williamson
0 siblings, 0 replies; 15+ messages in thread
From: Alex Williamson @ 2011-05-03 14:20 UTC
To: Markus Armbruster; +Cc: qemu-devel, mst
On Tue, 2011-05-03 at 15:15 +0200, Markus Armbruster wrote:
> Alex Williamson <alex.williamson@redhat.com> writes:
>
> > When we're trying to get a newly registered phys memory client updated
> > with the current page mappings, we end up passing the region offset
> > (a ram_addr_t) as the start address rather than the actual guest
> > physical memory address (target_phys_addr_t). If your guest has less
> > than 3.5G of memory, these are coincidentally the same thing. If
> > there's more, the region offset for the memory above 4G starts over
> > at 0, so the set_memory client will overwrite its lower memory entries.
> >
> > Instead, keep track of the guest physical address as we're walking the
> > tables and pass that to the set_memory client.
> >
> > Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> > ---
> >
> > exec.c | 10 ++++++----
> > 1 files changed, 6 insertions(+), 4 deletions(-)
> >
> > diff --git a/exec.c b/exec.c
> > index 4752af1..e670929 100644
> > --- a/exec.c
> > +++ b/exec.c
> > @@ -1742,7 +1742,7 @@ static int cpu_notify_migration_log(int enable)
> > }
> >
> > static void phys_page_for_each_1(CPUPhysMemoryClient *client,
> > - int level, void **lp)
> > + int level, void **lp, target_phys_addr_t addr)
> > {
> > int i;
> >
>
> Aren't you abusing target_phys_addr_t here? It's not a physical
> address, it needs to be shifted left to become one. By how much depends
> on level. Please take pity on future maintainers and spell this out in
> a comment.
>
> Perhaps you can code it in a way that makes the parameter an address.
> Probably no need for a comment then.
Right, it's not a target_phys_addr_t when it's passed to the function, but
it becomes one as we work, so it still seemed the appropriate data type. I
rather like how the shifting folds into the recursion of the function; I
think it removes a bit of the ugliness of figuring out how many levels
there are, where we are, and how many multiples of *_BITS to shift.
I'll add a comment and hope that helps.
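As a worked example of that shifting, with illustrative constants only
(say L2_BITS = 10 and TARGET_PAGE_BITS = 12, one mid level below the L1
table; the real values depend on the target):

/* phys_page_for_each() seeds the walk with the L1 index:
 *     addr = i1
 * each non-leaf level shifts and ORs in its own index:
 *     addr = (i1 << 10) | i2
 * and the leaf finally turns the accumulated indices into an address:
 *     addr <<= 10 + 12;            // make room for i3 and page offset
 *     gpa   = addr | (i3 << 12);   // one set_memory() call per page
 *
 * i.e. gpa = (i1 << 32) | (i2 << 22) | (i3 << 12): the parameter only
 * becomes a real guest physical address at the leaf. */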
> > @@ -1751,16 +1751,18 @@ static void phys_page_for_each_1(CPUPhysMemoryClient *client,
> > }
> > if (level == 0) {
> > PhysPageDesc *pd = *lp;
> > + addr <<= L2_BITS + TARGET_PAGE_BITS;
> > for (i = 0; i < L2_SIZE; ++i) {
> > if (pd[i].phys_offset != IO_MEM_UNASSIGNED) {
> > - client->set_memory(client, pd[i].region_offset,
> > + client->set_memory(client, addr | i << TARGET_PAGE_BITS,
> > TARGET_PAGE_SIZE, pd[i].phys_offset);
> > }
> > }
> > } else {
> > void **pp = *lp;
> > for (i = 0; i < L2_SIZE; ++i) {
> > - phys_page_for_each_1(client, level - 1, pp + i);
> > + phys_page_for_each_1(client, level - 1, pp + i,
> > + (addr << L2_BITS) | i);
> > }
> > }
> > }
> > @@ -1770,7 +1772,7 @@ static void phys_page_for_each(CPUPhysMemoryClient *client)
> > int i;
> > for (i = 0; i < P_L1_SIZE; ++i) {
> > phys_page_for_each_1(client, P_L1_SHIFT / L2_BITS - 1,
> > - l1_phys_map + i);
> > + l1_phys_map + i, i);
> > }
> > }
> >
>
> Fix makes sense to me, after some head scratching. A comment explaining
> the phys map data structure would be helpful. l1_phys_map[] has a
> comment, but it's devoid of detail.
I'll see what I can do, though I'm pretty sure I'm not at the top of the
list for describing the existence and format of these tables. Thanks,
Alex