linux-arm-kernel.lists.infradead.org archive mirror
* [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
       [not found] <1394726249-1547-1-git-send-email-a.motakis@virtualopensystems.com>
@ 2014-03-13 15:57 ` Antonios Motakis
  2014-03-28 19:09   ` Christoffer Dall
  2014-03-13 15:57 ` [RFC PATCH 3/4] ARM: KVM: enable linking against eventfd Antonios Motakis
  2014-03-13 15:57 ` [RFC PATCH 4/4] ARM: KVM: enable KVM_CAP_IOEVENTFD Antonios Motakis
  2 siblings, 1 reply; 16+ messages in thread
From: Antonios Motakis @ 2014-03-13 15:57 UTC (permalink / raw)
  To: linux-arm-kernel

On an unhandled IO memory abort, use the kvm_io_bus_* API in order to
handle the MMIO access through any registered read/write callbacks. This
is a dependency for eventfd support (ioeventfd and irqfd).

However, accesses to the VGIC are still implemented independently, since
the kvm_io_bus_* API doesn't pass a pointer to the VCPU doing the access.

Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
---
 arch/arm/kvm/mmio.c | 32 ++++++++++++++++++++++++++++++++
 virt/kvm/arm/vgic.c |  5 ++++-
 2 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
index 4cb5a93..1d17831 100644
--- a/arch/arm/kvm/mmio.c
+++ b/arch/arm/kvm/mmio.c
@@ -162,6 +162,35 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	return 0;
 }
 
+/**
+ * handle_kernel_mmio - handle an in-kernel MMIO access
+ * @vcpu:	pointer to the vcpu performing the access
+ * @run:	pointer to the kvm_run structure
+ * @mmio:	pointer to the data describing the access
+ *
+ * returns true if the MMIO access has been performed in kernel space,
+ * and false if it needs to be emulated in user space.
+ */
+static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
+		struct kvm_exit_mmio *mmio)
+{
+	int ret;
+	if (mmio->is_write) {
+		ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
+				mmio->len, &mmio->data);
+
+	} else {
+		ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
+				mmio->len, &mmio->data);
+	}
+	if (!ret) {
+		kvm_prepare_mmio(run, mmio);
+		kvm_handle_mmio_return(vcpu, run);
+	}
+
+	return !ret;
+}
+
 int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		 phys_addr_t fault_ipa)
 {
@@ -200,6 +229,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	if (vgic_handle_mmio(vcpu, run, &mmio))
 		return 1;
 
+	if (handle_kernel_mmio(vcpu, run, &mmio))
+		return 1;
+
 	kvm_prepare_mmio(run, &mmio);
 	return 0;
 }
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 8ca405c..afdecc3 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -849,13 +849,16 @@ struct mmio_range *find_matching_range(const struct mmio_range *ranges,
 }
 
 /**
- * vgic_handle_mmio - handle an in-kernel MMIO access
+ * vgic_handle_mmio - handle an in-kernel vgic MMIO access
  * @vcpu:	pointer to the vcpu performing the access
  * @run:	pointer to the kvm_run structure
  * @mmio:	pointer to the data describing the access
  *
  * returns true if the MMIO access has been performed in kernel space,
  * and false if it needs to be emulated in user space.
+ *
+ * This is handled outside of handle_kernel_mmio because the kvm_io_bus only
+ * passes the VM pointer, while we need the VCPU performing the access.
  */
 bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		      struct kvm_exit_mmio *mmio)
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 16+ messages in thread
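The dispatch contract this patch relies on — walk the registered ranges, return zero when a device claims the access, and otherwise fall back to user-space emulation — can be sketched in self-contained C. All names and types below are illustrative stand-ins, not the kernel's actual kvm_io_bus implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* One registered MMIO range, standing in for a kvm_io_bus device. */
struct io_dev {
	uint64_t base, len;
	int (*write)(struct io_dev *dev, uint64_t addr, int len, const void *val);
	int (*read)(struct io_dev *dev, uint64_t addr, int len, void *val);
};

struct io_bus {
	struct io_dev *devs;
	size_t ndevs;
};

/* Mirrors the kvm_io_bus_write()/kvm_io_bus_read() contract: 0 when a
 * registered device claimed the access, non-zero when nothing matched
 * (the kernel then exits to user space for emulation). */
static int bus_access(struct io_bus *bus, int is_write,
		      uint64_t addr, int len, void *val)
{
	size_t i;

	for (i = 0; i < bus->ndevs; i++) {
		struct io_dev *d = &bus->devs[i];

		if (addr < d->base || addr + len > d->base + d->len)
			continue;
		return is_write ? d->write(d, addr, len, val)
				: d->read(d, addr, len, val);
	}
	return -1; /* unhandled: io_mem_abort would return 0 to user space */
}

/* Toy device backed by one 32-bit register, plus a self-check. */
static uint32_t toy_reg;

static int toy_write(struct io_dev *d, uint64_t a, int l, const void *v)
{ (void)d; (void)a; (void)l; toy_reg = *(const uint32_t *)v; return 0; }

static int toy_read(struct io_dev *d, uint64_t a, int l, void *v)
{ (void)d; (void)a; (void)l; *(uint32_t *)v = toy_reg; return 0; }

static int bus_demo(void)
{
	struct io_dev dev = { 0x1000, 4, toy_write, toy_read };
	struct io_bus bus = { &dev, 1 };
	uint32_t in = 0xabcd, out = 0;

	if (bus_access(&bus, 1, 0x1000, 4, &in))	/* hit: handled */
		return 1;
	if (bus_access(&bus, 0, 0x1000, 4, &out) || out != 0xabcd)
		return 2;
	return bus_access(&bus, 1, 0x2000, 4, &in) ? 0 : 3; /* miss */
}
```

The handle_kernel_mmio() added by the patch is essentially the `is_write` branch above, followed by completing the access with kvm_handle_mmio_return() when the bus reports success.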

* [RFC PATCH 3/4] ARM: KVM: enable linking against eventfd
       [not found] <1394726249-1547-1-git-send-email-a.motakis@virtualopensystems.com>
  2014-03-13 15:57 ` [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus Antonios Motakis
@ 2014-03-13 15:57 ` Antonios Motakis
  2014-03-13 15:57 ` [RFC PATCH 4/4] ARM: KVM: enable KVM_CAP_IOEVENTFD Antonios Motakis
  2 siblings, 0 replies; 16+ messages in thread
From: Antonios Motakis @ 2014-03-13 15:57 UTC (permalink / raw)
  To: linux-arm-kernel

This enables and compiles the ioeventfd capability of KVM on ARM. The
irqfd feature is not included in the build, due to the lack of IRQ
routing in KVM's VGIC implementation.

Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
---
 arch/arm/kvm/Kconfig  | 1 +
 arch/arm/kvm/Makefile | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
index 466bd29..a4b0312 100644
--- a/arch/arm/kvm/Kconfig
+++ b/arch/arm/kvm/Kconfig
@@ -20,6 +20,7 @@ config KVM
 	bool "Kernel-based Virtual Machine (KVM) support"
 	select PREEMPT_NOTIFIERS
 	select ANON_INODES
+	select HAVE_KVM_EVENTFD
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
 	select KVM_MMIO
 	select KVM_ARM_HOST
diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
index 789bca9..2fa2f82 100644
--- a/arch/arm/kvm/Makefile
+++ b/arch/arm/kvm/Makefile
@@ -15,7 +15,7 @@ AFLAGS_init.o := -Wa,-march=armv7-a$(plus_virt)
 AFLAGS_interrupts.o := -Wa,-march=armv7-a$(plus_virt)
 
 KVM := ../../../virt/kvm
-kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o
+kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o
 
 obj-y += kvm-arm.o init.o interrupts.o
 obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 16+ messages in thread
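The mechanism being linked in here is the plain eventfd counter, which ioeventfd builds on. The sketch below (assuming a Linux host) shows the counter semantics: KVM's ioeventfd performs the equivalent of the writes below from kernel space whenever the guest touches a registered MMIO address, so user space can poll the fd instead of taking a full MMIO exit:

```c
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* Returns the counter value observed after two signals, or -1 on error. */
static long long eventfd_demo(void)
{
	uint64_t one = 1, val = 0;
	int fd = eventfd(0, 0);

	if (fd < 0)
		return -1;
	/* Each 8-byte write adds to the kernel-side counter. */
	if (write(fd, &one, sizeof(one)) != sizeof(one) ||
	    write(fd, &one, sizeof(one)) != sizeof(one)) {
		close(fd);
		return -1;
	}
	/* A read returns the accumulated count and resets it to zero. */
	if (read(fd, &val, sizeof(val)) != sizeof(val))
		val = (uint64_t)-1;
	close(fd);
	return (long long)val;
}
```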

* [RFC PATCH 4/4] ARM: KVM: enable KVM_CAP_IOEVENTFD
       [not found] <1394726249-1547-1-git-send-email-a.motakis@virtualopensystems.com>
  2014-03-13 15:57 ` [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus Antonios Motakis
  2014-03-13 15:57 ` [RFC PATCH 3/4] ARM: KVM: enable linking against eventfd Antonios Motakis
@ 2014-03-13 15:57 ` Antonios Motakis
  2 siblings, 0 replies; 16+ messages in thread
From: Antonios Motakis @ 2014-03-13 15:57 UTC (permalink / raw)
  To: linux-arm-kernel

KVM on ARM can now advertise support for the ioeventfd capability.

Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
---
 arch/arm/kvm/arm.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index bd18bb8..08cd89b 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -211,6 +211,9 @@ int kvm_dev_ioctl_check_extension(long ext)
 	case KVM_CAP_MAX_VCPUS:
 		r = KVM_MAX_VCPUS;
 		break;
+	case KVM_CAP_IOEVENTFD:
+		r = 1;
+		break;
 	default:
 		r = kvm_arch_dev_ioctl_check_extension(ext);
 		break;
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 16+ messages in thread
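User space queries the new capability via the KVM_CHECK_EXTENSION ioctl; the kernel side is the switch statement the diff extends. A minimal stand-alone sketch of that pattern follows — the constants are placeholders invented for the example, not the real KVM UAPI values:

```c
/* Placeholder capability numbers, for illustration only. */
enum {
	CAP_MAX_VCPUS = 1,
	CAP_IOEVENTFD = 2,
};

#define DEMO_MAX_VCPUS 4

/* Mirrors the shape of kvm_dev_ioctl_check_extension(): a positive
 * value advertises the capability, 0 means unsupported. */
static int check_extension(long ext)
{
	switch (ext) {
	case CAP_MAX_VCPUS:
		return DEMO_MAX_VCPUS;
	case CAP_IOEVENTFD:
		return 1;	/* newly advertised by this patch */
	default:
		return 0;	/* the arch hook would be consulted here */
	}
}
```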

* [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
  2014-03-13 15:57 ` [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus Antonios Motakis
@ 2014-03-28 19:09   ` Christoffer Dall
  2014-03-29 17:34     ` Paolo Bonzini
  2014-11-10 15:09     ` Nikolay Nikolaev
  0 siblings, 2 replies; 16+ messages in thread
From: Christoffer Dall @ 2014-03-28 19:09 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Mar 13, 2014 at 04:57:26PM +0100, Antonios Motakis wrote:
> On an unhandled IO memory abort, use the kvm_io_bus_* API in order to
> handle the MMIO access through any registered read/write callbacks. This
> is a dependency for eventfd support (ioeventfd and irqfd).
> 
> However, accesses to the VGIC are still left implemented independently,
> since the kvm_io_bus_* API doesn't pass the VCPU pointer doing the access.
> 
> Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
> ---
>  arch/arm/kvm/mmio.c | 32 ++++++++++++++++++++++++++++++++
>  virt/kvm/arm/vgic.c |  5 ++++-
>  2 files changed, 36 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
> index 4cb5a93..1d17831 100644
> --- a/arch/arm/kvm/mmio.c
> +++ b/arch/arm/kvm/mmio.c
> @@ -162,6 +162,35 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	return 0;
>  }
>  
> +/**
> + * handle_kernel_mmio - handle an in-kernel MMIO access
> + * @vcpu:	pointer to the vcpu performing the access
> + * @run:	pointer to the kvm_run structure
> + * @mmio:	pointer to the data describing the access
> + *
> + * returns true if the MMIO access has been performed in kernel space,
> + * and false if it needs to be emulated in user space.
> + */
> +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
> +		struct kvm_exit_mmio *mmio)
> +{
> +	int ret;
> +	if (mmio->is_write) {
> +		ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
> +				mmio->len, &mmio->data);
> +
> +	} else {
> +		ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
> +				mmio->len, &mmio->data);
> +	}
> +	if (!ret) {
> +		kvm_prepare_mmio(run, mmio);
> +		kvm_handle_mmio_return(vcpu, run);
> +	}
> +
> +	return !ret;
> +}
> +
>  int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  		 phys_addr_t fault_ipa)
>  {
> @@ -200,6 +229,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  	if (vgic_handle_mmio(vcpu, run, &mmio))
>  		return 1;
>  
> +	if (handle_kernel_mmio(vcpu, run, &mmio))
> +		return 1;
> +

this special-casing of the vgic is now really terrible.  Is there
anything holding you back from doing the necessary restructuring of the
kvm_io_bus_*() API instead?  That would allow us to get rid of the ugly
"Fix it!" in the vgic driver as well.

-Christoffer

>  	kvm_prepare_mmio(run, &mmio);
>  	return 0;
>  }
> diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
> index 8ca405c..afdecc3 100644
> --- a/virt/kvm/arm/vgic.c
> +++ b/virt/kvm/arm/vgic.c
> @@ -849,13 +849,16 @@ struct mmio_range *find_matching_range(const struct mmio_range *ranges,
>  }
>  
>  /**
> - * vgic_handle_mmio - handle an in-kernel MMIO access
> + * vgic_handle_mmio - handle an in-kernel vgic MMIO access
>   * @vcpu:	pointer to the vcpu performing the access
>   * @run:	pointer to the kvm_run structure
>   * @mmio:	pointer to the data describing the access
>   *
>   * returns true if the MMIO access has been performed in kernel space,
>   * and false if it needs to be emulated in user space.
> + *
> + * This is handled outside of handle_kernel_mmio because the kvm_io_bus only
> + * passes the VM pointer, while we need the VCPU performing the access.
>   */
>  bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  		      struct kvm_exit_mmio *mmio)
> -- 
> 1.8.3.2
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
  2014-03-28 19:09   ` Christoffer Dall
@ 2014-03-29 17:34     ` Paolo Bonzini
  2014-11-10 15:09     ` Nikolay Nikolaev
  1 sibling, 0 replies; 16+ messages in thread
From: Paolo Bonzini @ 2014-03-29 17:34 UTC (permalink / raw)
  To: linux-arm-kernel

Il 28/03/2014 20:09, Christoffer Dall ha scritto:
> On Thu, Mar 13, 2014 at 04:57:26PM +0100, Antonios Motakis wrote:
>> On an unhandled IO memory abort, use the kvm_io_bus_* API in order to
>> handle the MMIO access through any registered read/write callbacks. This
>> is a dependency for eventfd support (ioeventfd and irqfd).
>>
>> However, accesses to the VGIC are still left implemented independently,
>> since the kvm_io_bus_* API doesn't pass the VCPU pointer doing the access.
>>
>> Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
>> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
>> ---
>>  arch/arm/kvm/mmio.c | 32 ++++++++++++++++++++++++++++++++
>>  virt/kvm/arm/vgic.c |  5 ++++-
>>  2 files changed, 36 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
>> index 4cb5a93..1d17831 100644
>> --- a/arch/arm/kvm/mmio.c
>> +++ b/arch/arm/kvm/mmio.c
>> @@ -162,6 +162,35 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  	return 0;
>>  }
>>
>> +/**
>> + * handle_kernel_mmio - handle an in-kernel MMIO access
>> + * @vcpu:	pointer to the vcpu performing the access
>> + * @run:	pointer to the kvm_run structure
>> + * @mmio:	pointer to the data describing the access
>> + *
>> + * returns true if the MMIO access has been performed in kernel space,
>> + * and false if it needs to be emulated in user space.
>> + */
>> +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>> +		struct kvm_exit_mmio *mmio)
>> +{
>> +	int ret;
>> +	if (mmio->is_write) {
>> +		ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
>> +				mmio->len, &mmio->data);
>> +
>> +	} else {
>> +		ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
>> +				mmio->len, &mmio->data);
>> +	}
>> +	if (!ret) {
>> +		kvm_prepare_mmio(run, mmio);
>> +		kvm_handle_mmio_return(vcpu, run);
>> +	}
>> +
>> +	return !ret;
>> +}
>> +
>>  int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>  		 phys_addr_t fault_ipa)
>>  {
>> @@ -200,6 +229,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>  	if (vgic_handle_mmio(vcpu, run, &mmio))
>>  		return 1;
>>
>> +	if (handle_kernel_mmio(vcpu, run, &mmio))
>> +		return 1;
>> +
>
> this special-casing of the vgic is now really terrible.  Is there
> anything holding you back from doing the necessary restructure of the
> kvm_io_bus_*() API instead?  That would allow us to get rid of the ugly
> Fix it! in the vgic driver as well.

It's also quite terrible in x86, for the same reason (see 
vcpu_mmio_write in arch/x86/kvm/x86.c).

Though I suppose moving the vgic_handle_mmio call into handle_kernel_mmio
would ameliorate the situation a bit.

Paolo

> -Christoffer
>
>>  	kvm_prepare_mmio(run, &mmio);
>>  	return 0;
>>  }
>> diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
>> index 8ca405c..afdecc3 100644
>> --- a/virt/kvm/arm/vgic.c
>> +++ b/virt/kvm/arm/vgic.c
>> @@ -849,13 +849,16 @@ struct mmio_range *find_matching_range(const struct mmio_range *ranges,
>>  }
>>
>>  /**
>> - * vgic_handle_mmio - handle an in-kernel MMIO access
>> + * vgic_handle_mmio - handle an in-kernel vgic MMIO access
>>   * @vcpu:	pointer to the vcpu performing the access
>>   * @run:	pointer to the kvm_run structure
>>   * @mmio:	pointer to the data describing the access
>>   *
>>   * returns true if the MMIO access has been performed in kernel space,
>>   * and false if it needs to be emulated in user space.
>> + *
>> + * This is handled outside of handle_kernel_mmio because the kvm_io_bus only
>> + * passes the VM pointer, while we need the VCPU performing the access.
>>   */
>>  bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>  		      struct kvm_exit_mmio *mmio)
>> --
>> 1.8.3.2
>>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
  2014-03-28 19:09   ` Christoffer Dall
  2014-03-29 17:34     ` Paolo Bonzini
@ 2014-11-10 15:09     ` Nikolay Nikolaev
  2014-11-10 16:27       ` Christoffer Dall
  1 sibling, 1 reply; 16+ messages in thread
From: Nikolay Nikolaev @ 2014-11-10 15:09 UTC (permalink / raw)
  To: linux-arm-kernel

Hello,

On Fri, Mar 28, 2014 at 9:09 PM, Christoffer Dall
<christoffer.dall@linaro.org> wrote:
>
> On Thu, Mar 13, 2014 at 04:57:26PM +0100, Antonios Motakis wrote:
> > On an unhandled IO memory abort, use the kvm_io_bus_* API in order to
> > handle the MMIO access through any registered read/write callbacks. This
> > is a dependency for eventfd support (ioeventfd and irqfd).
> >
> > However, accesses to the VGIC are still left implemented independently,
> > since the kvm_io_bus_* API doesn't pass the VCPU pointer doing the access.
> >
> > Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
> > Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
> > ---
> >  arch/arm/kvm/mmio.c | 32 ++++++++++++++++++++++++++++++++
> >  virt/kvm/arm/vgic.c |  5 ++++-
> >  2 files changed, 36 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
> > index 4cb5a93..1d17831 100644
> > --- a/arch/arm/kvm/mmio.c
> > +++ b/arch/arm/kvm/mmio.c
> > @@ -162,6 +162,35 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >       return 0;
> >  }
> >
> > +/**
> > + * handle_kernel_mmio - handle an in-kernel MMIO access
> > + * @vcpu:    pointer to the vcpu performing the access
> > + * @run:     pointer to the kvm_run structure
> > + * @mmio:    pointer to the data describing the access
> > + *
> > + * returns true if the MMIO access has been performed in kernel space,
> > + * and false if it needs to be emulated in user space.
> > + */
> > +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
> > +             struct kvm_exit_mmio *mmio)
> > +{
> > +     int ret;
> > +     if (mmio->is_write) {
> > +             ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
> > +                             mmio->len, &mmio->data);
> > +
> > +     } else {
> > +             ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
> > +                             mmio->len, &mmio->data);
> > +     }
> > +     if (!ret) {
> > +             kvm_prepare_mmio(run, mmio);
> > +             kvm_handle_mmio_return(vcpu, run);
> > +     }
> > +
> > +     return !ret;
> > +}
> > +
> >  int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
> >                phys_addr_t fault_ipa)
> >  {
> > @@ -200,6 +229,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
> >       if (vgic_handle_mmio(vcpu, run, &mmio))
> >               return 1;
> >
> > +     if (handle_kernel_mmio(vcpu, run, &mmio))
> > +             return 1;
> > +


We're reconsidering the ioeventfd patch series, and we tried to evaluate
what you suggested here.

>
> this special-casing of the vgic is now really terrible.  Is there
> anything holding you back from doing the necessary restructure of the
> kvm_io_bus_*() API instead?

Restructuring the kvm_io_bus_* API is not a big thing (we actually did
it), but it is not directly related to these patches.
Of course, it can be justified if we do it in the context of removing
vgic_handle_mmio and leaving only handle_kernel_mmio.

>
> That would allow us to get rid of the ugly
> "Fix it!" in the vgic driver as well.

Going through vgic_handle_mmio we see that it will require large
refactoring:
 - there are 15 MMIO ranges for the vgic now - each should be
registered as a separate device
 - the handler of each range should be split into read and write
 - all handlers take 'struct kvm_exit_mmio', and pass it to
'vgic_reg_access', 'mmio_data_read' and 'mmio_data_write'

To sum up - if we do this refactoring of vgic's MMIO handling, plus the
kvm_io_bus_* API getting a 'vcpu' argument, we'll get much cleaner
vgic code, and as a bonus we'll get ioeventfd capabilities.

We have 3 questions:
 - is the kvm_io_bus_* API getting a 'vcpu' argument acceptable for the
other architectures too?
 - is this huge vgic MMIO handling redesign acceptable/desired (it
touches a lot of code)?
 - is there a way that ioeventfd could be accepted leaving vgic.c in its
current state?

regards,
Nikolay Nikolaev
Virtual Open Systems

>
> -Christoffer
>
> >       kvm_prepare_mmio(run, &mmio);
> >       return 0;
> >  }
> > diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
> > index 8ca405c..afdecc3 100644
> > --- a/virt/kvm/arm/vgic.c
> > +++ b/virt/kvm/arm/vgic.c
> > @@ -849,13 +849,16 @@ struct mmio_range *find_matching_range(const struct mmio_range *ranges,
> >  }
> >
> >  /**
> > - * vgic_handle_mmio - handle an in-kernel MMIO access
> > + * vgic_handle_mmio - handle an in-kernel vgic MMIO access
> >   * @vcpu:    pointer to the vcpu performing the access
> >   * @run:     pointer to the kvm_run structure
> >   * @mmio:    pointer to the data describing the access
> >   *
> >   * returns true if the MMIO access has been performed in kernel space,
> >   * and false if it needs to be emulated in user space.
> > + *
> > + * This is handled outside of handle_kernel_mmio because the kvm_io_bus only
> > + * passes the VM pointer, while we need the VCPU performing the access.
> >   */
> >  bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
> >                     struct kvm_exit_mmio *mmio)
> > --
> > 1.8.3.2
> >

^ permalink raw reply	[flat|nested] 16+ messages in thread
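The second refactoring bullet — splitting each combined handler into separate read and write callbacks, one pair per registered range — can be sketched as follows. Names and the toy backing register are illustrative only, not the eventual vgic code:

```c
#include <stdint.h>
#include <string.h>

/* A read/write callback pair replacing one handler that branches on
 * mmio->is_write.  Each of the 15 vgic ranges would carry its own pair. */
struct mmio_ops {
	int (*read)(uint64_t offset, void *val, int len);
	int (*write)(uint64_t offset, const void *val, int len);
};

static uint32_t dist_ctlr;	/* toy backing state for one range */

static int ctlr_read(uint64_t off, void *val, int len)
{
	(void)off;
	memcpy(val, &dist_ctlr, (size_t)len);
	return 0;
}

static int ctlr_write(uint64_t off, const void *val, int len)
{
	(void)off;
	memcpy(&dist_ctlr, val, (size_t)len);
	return 0;
}

static const struct mmio_ops ctlr_ops = { ctlr_read, ctlr_write };

static int split_demo(void)
{
	uint32_t in = 0x5, out = 0;

	if (ctlr_ops.write(0, &in, sizeof(in)))
		return 1;
	if (ctlr_ops.read(0, &out, sizeof(out)))
		return 2;
	return out == 0x5 ? 0 : 3;
}
```

With this shape, each range registers on the bus as its own device, and the bus itself does the address matching that find_matching_range() performs today.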

* [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
  2014-11-10 15:09     ` Nikolay Nikolaev
@ 2014-11-10 16:27       ` Christoffer Dall
  2014-11-13 10:45         ` Nikolay Nikolaev
  0 siblings, 1 reply; 16+ messages in thread
From: Christoffer Dall @ 2014-11-10 16:27 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Nov 10, 2014 at 05:09:07PM +0200, Nikolay Nikolaev wrote:
> Hello,
> 
> On Fri, Mar 28, 2014 at 9:09 PM, Christoffer Dall
> <christoffer.dall@linaro.org> wrote:
> >
> > On Thu, Mar 13, 2014 at 04:57:26PM +0100, Antonios Motakis wrote:
> > > On an unhandled IO memory abort, use the kvm_io_bus_* API in order to
> > > handle the MMIO access through any registered read/write callbacks. This
> > > is a dependency for eventfd support (ioeventfd and irqfd).
> > >
> > > However, accesses to the VGIC are still left implemented independently,
> > > since the kvm_io_bus_* API doesn't pass the VCPU pointer doing the access.
> > >
> > > Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
> > > Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
> > > ---
> > >  arch/arm/kvm/mmio.c | 32 ++++++++++++++++++++++++++++++++
> > >  virt/kvm/arm/vgic.c |  5 ++++-
> > >  2 files changed, 36 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
> > > index 4cb5a93..1d17831 100644
> > > --- a/arch/arm/kvm/mmio.c
> > > +++ b/arch/arm/kvm/mmio.c
> > > @@ -162,6 +162,35 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > >       return 0;
> > >  }
> > >
> > > +/**
> > > + * handle_kernel_mmio - handle an in-kernel MMIO access
> > > + * @vcpu:    pointer to the vcpu performing the access
> > > + * @run:     pointer to the kvm_run structure
> > > + * @mmio:    pointer to the data describing the access
> > > + *
> > > + * returns true if the MMIO access has been performed in kernel space,
> > > + * and false if it needs to be emulated in user space.
> > > + */
> > > +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
> > > +             struct kvm_exit_mmio *mmio)
> > > +{
> > > +     int ret;
> > > +     if (mmio->is_write) {
> > > +             ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
> > > +                             mmio->len, &mmio->data);
> > > +
> > > +     } else {
> > > +             ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
> > > +                             mmio->len, &mmio->data);
> > > +     }
> > > +     if (!ret) {
> > > +             kvm_prepare_mmio(run, mmio);
> > > +             kvm_handle_mmio_return(vcpu, run);
> > > +     }
> > > +
> > > +     return !ret;
> > > +}
> > > +
> > >  int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
> > >                phys_addr_t fault_ipa)
> > >  {
> > > @@ -200,6 +229,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
> > >       if (vgic_handle_mmio(vcpu, run, &mmio))
> > >               return 1;
> > >
> > > +     if (handle_kernel_mmio(vcpu, run, &mmio))
> > > +             return 1;
> > > +
> 
> 
> We're reconsidering ioeventfds patchseries and we tried to evaluate
> what you suggested here.
> 
> >
> > this special-casing of the vgic is now really terrible.  Is there
> > anything holding you back from doing the necessary restructure of the
> > kvm_io_bus_*() API instead?
> 
> Restructuring the kvm_io_bus_ API is not a big thing (we actually did
> it), but is not directly related to these patches.
> Of course it can be justified if we do it in the context of removing
> vgic_handle_mmio and leaving only handle_kernel_mmio.
> 
> >
> > That would allow us to get rid of the ugly
> > Fix it! in the vgic driver as well.
> 
> Going through the vgic_handle_mmio we see that it will require large
> refactoring:
>  - there are 15 MMIO ranges for the vgic now - each should be
> registered as a separate device
>  - the handler of each range should be split into read and write
>  - all handlers take 'struct kvm_exit_mmio', and pass it to
> 'vgic_reg_access', 'mmio_data_read' and 'mmio_data_write'
> 
> To sum up - if we do this refactoring of vgic's MMIO handling +
> kvm_io_bus_ API getting 'vcpu" argument we'll get a 'much' cleaner
> vgic code and as a bonus we'll get 'ioeventfd' capabilities.
> 
> We have 3 questions:
>  - is the kvm_io_bus_ getting 'vcpu' argument acceptable for the other
> architectures too?
>  - is this huge vgic MMIO handling redesign acceptable/desired (it
> touches a lot of code)?
>  - is there a way that ioeventfd is accepted leaving vgic.c in its
> current state?
> 
Not sure how the latter question is relevant to this, but check with
Andre who recently looked at this as well and decided that for GICv3 the
only sane thing was to remove that comment for the gic.

I don't recall the details of what you were trying to accomplish here
(it's been 8 months or so) but surely the vgic handling code should
*somehow* be integrated into handle_kernel_mmio (like Paolo
suggested), unless you come back and tell me that that would involve a
complete rewrite of the vgic code.

-Christoffer

^ permalink raw reply	[flat|nested] 16+ messages in thread
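The integration Christoffer asks for hinges on the API change discussed throughout the thread: once the bus callbacks receive the vcpu, the vgic can be registered on the MMIO bus like any other device instead of being special-cased before the bus lookup. A sketch of that shape — all types and names illustrative, not the eventual kernel API:

```c
/* A bus device callback that takes the accessing vcpu, which is what
 * the vgic needs and what kvm_io_bus_write() does not pass today. */
struct demo_vcpu {
	int id;
};

struct bus_dev {
	int (*write)(struct demo_vcpu *vcpu, struct bus_dev *dev,
		     unsigned long addr, int len, const void *val);
};

static int last_writer = -1;

static int vgic_dist_write(struct demo_vcpu *vcpu, struct bus_dev *dev,
			   unsigned long addr, int len, const void *val)
{
	(void)dev; (void)addr; (void)len; (void)val;
	last_writer = vcpu->id;	/* per-vcpu state is now reachable */
	return 0;
}

static int vcpu_arg_demo(void)
{
	struct bus_dev vgic = { vgic_dist_write };
	struct demo_vcpu vcpu = { 3 };

	if (vgic.write(&vcpu, &vgic, 0x0, 4, (const void *)0))
		return 1;
	return last_writer == 3 ? 0 : 2;
}
```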

* [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
  2014-11-10 16:27       ` Christoffer Dall
@ 2014-11-13 10:45         ` Nikolay Nikolaev
  2014-11-13 11:20           ` Christoffer Dall
  2014-11-13 14:16           ` Eric Auger
  0 siblings, 2 replies; 16+ messages in thread
From: Nikolay Nikolaev @ 2014-11-13 10:45 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Nov 10, 2014 at 6:27 PM, Christoffer Dall
<christoffer.dall@linaro.org> wrote:
> On Mon, Nov 10, 2014 at 05:09:07PM +0200, Nikolay Nikolaev wrote:
>> Hello,
>>
>> On Fri, Mar 28, 2014 at 9:09 PM, Christoffer Dall
>> <christoffer.dall@linaro.org> wrote:
>> >
>> > On Thu, Mar 13, 2014 at 04:57:26PM +0100, Antonios Motakis wrote:
>> > > On an unhandled IO memory abort, use the kvm_io_bus_* API in order to
>> > > handle the MMIO access through any registered read/write callbacks. This
>> > > is a dependency for eventfd support (ioeventfd and irqfd).
>> > >
>> > > However, accesses to the VGIC are still left implemented independently,
>> > > since the kvm_io_bus_* API doesn't pass the VCPU pointer doing the access.
>> > >
>> > > Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
>> > > Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
>> > > ---
>> > >  arch/arm/kvm/mmio.c | 32 ++++++++++++++++++++++++++++++++
>> > >  virt/kvm/arm/vgic.c |  5 ++++-
>> > >  2 files changed, 36 insertions(+), 1 deletion(-)
>> > >
>> > > diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
>> > > index 4cb5a93..1d17831 100644
>> > > --- a/arch/arm/kvm/mmio.c
>> > > +++ b/arch/arm/kvm/mmio.c
>> > > @@ -162,6 +162,35 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>> > >       return 0;
>> > >  }
>> > >
>> > > +/**
>> > > + * handle_kernel_mmio - handle an in-kernel MMIO access
>> > > + * @vcpu:    pointer to the vcpu performing the access
>> > > + * @run:     pointer to the kvm_run structure
>> > > + * @mmio:    pointer to the data describing the access
>> > > + *
>> > > + * returns true if the MMIO access has been performed in kernel space,
>> > > + * and false if it needs to be emulated in user space.
>> > > + */
>> > > +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>> > > +             struct kvm_exit_mmio *mmio)
>> > > +{
>> > > +     int ret;
>> > > +     if (mmio->is_write) {
>> > > +             ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
>> > > +                             mmio->len, &mmio->data);
>> > > +
>> > > +     } else {
>> > > +             ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
>> > > +                             mmio->len, &mmio->data);
>> > > +     }
>> > > +     if (!ret) {
>> > > +             kvm_prepare_mmio(run, mmio);
>> > > +             kvm_handle_mmio_return(vcpu, run);
>> > > +     }
>> > > +
>> > > +     return !ret;
>> > > +}
>> > > +
>> > >  int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>> > >                phys_addr_t fault_ipa)
>> > >  {
>> > > @@ -200,6 +229,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>> > >       if (vgic_handle_mmio(vcpu, run, &mmio))
>> > >               return 1;
>> > >
>> > > +     if (handle_kernel_mmio(vcpu, run, &mmio))
>> > > +             return 1;
>> > > +
>>
>>
>> We're reconsidering ioeventfds patchseries and we tried to evaluate
>> what you suggested here.
>>
>> >
>> > this special-casing of the vgic is now really terrible.  Is there
>> > anything holding you back from doing the necessary restructure of the
>> > kvm_io_bus_*() API instead?
>>
>> Restructuring the kvm_io_bus_ API is not a big thing (we actually did
>> it), but is not directly related to these patches.
>> Of course it can be justified if we do it in the context of removing
>> vgic_handle_mmio and leaving only handle_kernel_mmio.
>>
>> >
>> > That would allow us to get rid of the ugly
>> > Fix it! in the vgic driver as well.
>>
>> Going through the vgic_handle_mmio we see that it will require large
>> refactoring:
>>  - there are 15 MMIO ranges for the vgic now - each should be
>> registered as a separate device
>>  - the handler of each range should be split into read and write
>>  - all handlers take 'struct kvm_exit_mmio', and pass it to
>> 'vgic_reg_access', 'mmio_data_read' and 'mmio_data_write'
>>
>> To sum up - if we do this refactoring of vgic's MMIO handling +
>> kvm_io_bus_ API getting 'vcpu" argument we'll get a 'much' cleaner
>> vgic code and as a bonus we'll get 'ioeventfd' capabilities.
>>
>> We have 3 questions:
>>  - is the kvm_io_bus_ getting 'vcpu' argument acceptable for the other
>> architectures too?
>>  - is this huge vgic MMIO handling redesign acceptable/desired (it
>> touches a lot of code)?
>>  - is there a way that ioeventfd is accepted leaving vgic.c in its
>> current state?
>>
> Not sure how the latter question is relevant to this, but check with
> Andre who recently looked at this as well and decided that for GICv3 the
> only sane thing was to remove that comment for the gic.
@Andre - what's your experience with the GICv3 and MMIO handling,
anything specific?
>
> I don't recall the details of what you were trying to accomplish here
> (it's been 8 months or so) but the surely the vgic handling code should
> *somehow* be integrated into the handle_kernel_mmio (like Paolo
> suggested), unless you come back and tell me that that would involve a
> complete rewrite of the vgic code.
I'm experimenting now - it's not exactly a rewrite of the whole vgic
code, but it will touch a lot of it - all MMIO access handlers and the
supporting functions.
We're ready to spend the effort. My question is - is this acceptable?

regards,
Nikolay Nikolaev
Virtual Open Systems
>
> -Christoffer

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
  2014-11-13 10:45         ` Nikolay Nikolaev
@ 2014-11-13 11:20           ` Christoffer Dall
  2014-11-13 11:20             ` Christoffer Dall
  2014-11-13 11:37             ` Marc Zyngier
  2014-11-13 14:16           ` Eric Auger
  1 sibling, 2 replies; 16+ messages in thread
From: Christoffer Dall @ 2014-11-13 11:20 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Nov 13, 2014 at 12:45:42PM +0200, Nikolay Nikolaev wrote:

[...]

> >>
> >> Going through the vgic_handle_mmio we see that it will require large
> >> refactoring:
> >>  - there are 15 MMIO ranges for the vgic now - each should be
> >> registered as a separate device
> >>  - the handler of each range should be split into read and write
> >>  - all handlers take 'struct kvm_exit_mmio', and pass it to
> >> 'vgic_reg_access', 'mmio_data_read' and 'mmio_data_read'
> >>
> >> To sum up - if we do this refactoring of vgic's MMIO handling +
> >> kvm_io_bus_ API getting 'vcpu" argument we'll get a 'much' cleaner
> >> vgic code and as a bonus we'll get 'ioeventfd' capabilities.
> >>
> >> We have 3 questions:
> >>  - is the kvm_io_bus_ getting 'vcpu' argument acceptable for the other
> >> architectures too?
> >>  - is this huge vgic MMIO handling redesign acceptable/desired (it
> >> touches a lot of code)?
> >>  - is there a way that ioeventfd is accepted leaving vgic.c in it's
> >> current state?
> >>
> > Not sure how the latter question is relevant to this, but check with
> > Andre who recently looked at this as well and decided that for GICv3 the
> > only sane thing was to remove that comment for the gic.
> @Andre - what's your experience with the GICv3 and MMIO handling,
> anything specific?
> >
> > I don't recall the details of what you were trying to accomplish here
> > (it's been 8 months or so) but the surely the vgic handling code should
> > *somehow* be integrated into the handle_kernel_mmio (like Paolo
> > suggested), unless you come back and tell me that that would involve a
> > complete rewrite of the vgic code.
> I'm experimenting now - it's not exactly rewrite of whole vgic code,
> but it will touch a lot of it  - all MMIO access handlers and the
> supporting functions.
> We're ready to spend the effort. My question is  - is this acceptable?
> 
I certainly appreciate the offer to do this work, but it's hard to say
at this point if it is worth it.

What I was trying to say above is that Andre looked at this, and came to
the conclusion that it is not worth it.

Marc, what are your thoughts?

-Christoffer


* [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
  2014-11-13 11:20           ` Christoffer Dall
@ 2014-11-13 11:20             ` Christoffer Dall
  2014-11-13 11:37             ` Marc Zyngier
  1 sibling, 0 replies; 16+ messages in thread
From: Christoffer Dall @ 2014-11-13 11:20 UTC (permalink / raw)
  To: linux-arm-kernel

[resending to Andre's actual e-mail address]

On Thu, Nov 13, 2014 at 12:20 PM, Christoffer Dall
<christoffer.dall@linaro.org> wrote:
> On Thu, Nov 13, 2014 at 12:45:42PM +0200, Nikolay Nikolaev wrote:
>
> [...]
>
>> >>
>> >> Going through the vgic_handle_mmio we see that it will require large
>> >> refactoring:
>> >>  - there are 15 MMIO ranges for the vgic now - each should be
>> >> registered as a separate device
>> >>  - the handler of each range should be split into read and write
>> >>  - all handlers take 'struct kvm_exit_mmio', and pass it to
>> >> 'vgic_reg_access', 'mmio_data_read' and 'mmio_data_read'
>> >>
>> >> To sum up - if we do this refactoring of vgic's MMIO handling +
>> >> kvm_io_bus_ API getting 'vcpu" argument we'll get a 'much' cleaner
>> >> vgic code and as a bonus we'll get 'ioeventfd' capabilities.
>> >>
>> >> We have 3 questions:
>> >>  - is the kvm_io_bus_ getting 'vcpu' argument acceptable for the other
>> >> architectures too?
>> >>  - is this huge vgic MMIO handling redesign acceptable/desired (it
>> >> touches a lot of code)?
>> >>  - is there a way that ioeventfd is accepted leaving vgic.c in it's
>> >> current state?
>> >>
>> > Not sure how the latter question is relevant to this, but check with
>> > Andre who recently looked at this as well and decided that for GICv3 the
>> > only sane thing was to remove that comment for the gic.
>> @Andre - what's your experience with the GICv3 and MMIO handling,
>> anything specific?
>> >
>> > I don't recall the details of what you were trying to accomplish here
>> > (it's been 8 months or so) but the surely the vgic handling code should
>> > *somehow* be integrated into the handle_kernel_mmio (like Paolo
>> > suggested), unless you come back and tell me that that would involve a
>> > complete rewrite of the vgic code.
>> I'm experimenting now - it's not exactly rewrite of whole vgic code,
>> but it will touch a lot of it  - all MMIO access handlers and the
>> supporting functions.
>> We're ready to spend the effort. My question is  - is this acceptable?
>>
> I certainly appreciate the offer to do this work, but it's hard to say
> at this point if it is worth it.
>
> What I was trying to say above is that Andre looked at this, and came to
> the conclusion that it is not worth it.
>
> Marc, what are your thoughts?
>
> -Christoffer


* [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
  2014-11-13 11:20           ` Christoffer Dall
  2014-11-13 11:20             ` Christoffer Dall
@ 2014-11-13 11:37             ` Marc Zyngier
  2014-11-13 11:52               ` Andre Przywara
  1 sibling, 1 reply; 16+ messages in thread
From: Marc Zyngier @ 2014-11-13 11:37 UTC (permalink / raw)
  To: linux-arm-kernel

[fixing Andre's email address]

On 13/11/14 11:20, Christoffer Dall wrote:
> On Thu, Nov 13, 2014 at 12:45:42PM +0200, Nikolay Nikolaev wrote:
> 
> [...]
> 
>>>>
>>>> Going through the vgic_handle_mmio we see that it will require large
>>>> refactoring:
>>>>  - there are 15 MMIO ranges for the vgic now - each should be
>>>> registered as a separate device
>>>>  - the handler of each range should be split into read and write
>>>>  - all handlers take 'struct kvm_exit_mmio', and pass it to
>>>> 'vgic_reg_access', 'mmio_data_read' and 'mmio_data_read'
>>>>
>>>> To sum up - if we do this refactoring of vgic's MMIO handling +
>>>> kvm_io_bus_ API getting 'vcpu" argument we'll get a 'much' cleaner
>>>> vgic code and as a bonus we'll get 'ioeventfd' capabilities.
>>>>
>>>> We have 3 questions:
>>>>  - is the kvm_io_bus_ getting 'vcpu' argument acceptable for the other
>>>> architectures too?
>>>>  - is this huge vgic MMIO handling redesign acceptable/desired (it
>>>> touches a lot of code)?
>>>>  - is there a way that ioeventfd is accepted leaving vgic.c in it's
>>>> current state?
>>>>
>>> Not sure how the latter question is relevant to this, but check with
>>> Andre who recently looked at this as well and decided that for GICv3 the
>>> only sane thing was to remove that comment for the gic.
>> @Andre - what's your experience with the GICv3 and MMIO handling,
>> anything specific?
>>>
>>> I don't recall the details of what you were trying to accomplish here
>>> (it's been 8 months or so) but the surely the vgic handling code should
>>> *somehow* be integrated into the handle_kernel_mmio (like Paolo
>>> suggested), unless you come back and tell me that that would involve a
>>> complete rewrite of the vgic code.
>> I'm experimenting now - it's not exactly rewrite of whole vgic code,
>> but it will touch a lot of it  - all MMIO access handlers and the
>> supporting functions.
>> We're ready to spend the effort. My question is  - is this acceptable?
>>
> I certainly appreciate the offer to do this work, but it's hard to say
> at this point if it is worth it.
> 
> What I was trying to say above is that Andre looked at this, and came to
> the conclusion that it is not worth it.
> 
> Marc, what are your thoughts?

Same here, I rely on Andre's view that it was not very useful. Now, it
would be good to see a mock-up of the patches and find out:

- if it is a major improvement for the general quality of the code
- if it allows us to *delete* a lot of code (if it is just churn, I'm
not really interested)
- if it helps or hinders further developments that are currently in the
pipeline

Andre, can you please share your findings? I don't remember the
specifics of the discussion we had a few months ago...

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


* [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
  2014-11-13 11:37             ` Marc Zyngier
@ 2014-11-13 11:52               ` Andre Przywara
  2014-11-13 12:29                 ` Nikolay Nikolaev
  0 siblings, 1 reply; 16+ messages in thread
From: Andre Przywara @ 2014-11-13 11:52 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Nikolay,

On 13/11/14 11:37, Marc Zyngier wrote:
> [fixing Andre's email address]
> 
> On 13/11/14 11:20, Christoffer Dall wrote:
>> On Thu, Nov 13, 2014 at 12:45:42PM +0200, Nikolay Nikolaev wrote:
>>
>> [...]
>>
>>>>>
>>>>> Going through the vgic_handle_mmio we see that it will require large
>>>>> refactoring:
>>>>>  - there are 15 MMIO ranges for the vgic now - each should be
>>>>> registered as a separate device
>>>>>  - the handler of each range should be split into read and write
>>>>>  - all handlers take 'struct kvm_exit_mmio', and pass it to
>>>>> 'vgic_reg_access', 'mmio_data_read' and 'mmio_data_read'
>>>>>
>>>>> To sum up - if we do this refactoring of vgic's MMIO handling +
>>>>> kvm_io_bus_ API getting 'vcpu" argument we'll get a 'much' cleaner
>>>>> vgic code and as a bonus we'll get 'ioeventfd' capabilities.
>>>>>
>>>>> We have 3 questions:
>>>>>  - is the kvm_io_bus_ getting 'vcpu' argument acceptable for the other
>>>>> architectures too?
>>>>>  - is this huge vgic MMIO handling redesign acceptable/desired (it
>>>>> touches a lot of code)?
>>>>>  - is there a way that ioeventfd is accepted leaving vgic.c in it's
>>>>> current state?
>>>>>
>>>> Not sure how the latter question is relevant to this, but check with
>>>> Andre who recently looked at this as well and decided that for GICv3 the
>>>> only sane thing was to remove that comment for the gic.
>>> @Andre - what's your experience with the GICv3 and MMIO handling,
>>> anything specific?
>>>>
>>>> I don't recall the details of what you were trying to accomplish here
>>>> (it's been 8 months or so) but the surely the vgic handling code should
>>>> *somehow* be integrated into the handle_kernel_mmio (like Paolo
>>>> suggested), unless you come back and tell me that that would involve a
>>>> complete rewrite of the vgic code.
>>> I'm experimenting now - it's not exactly rewrite of whole vgic code,
>>> but it will touch a lot of it  - all MMIO access handlers and the
>>> supporting functions.
>>> We're ready to spend the effort. My question is  - is this acceptable?
>>>
>> I certainly appreciate the offer to do this work, but it's hard to say
>> at this point if it is worth it.
>>
>> What I was trying to say above is that Andre looked at this, and came to
>> the conclusion that it is not worth it.
>>
>> Marc, what are your thoughts?
> 
> Same here, I rely on Andre's view that it was not very useful. Now, it
> would be good to see a mock-up of the patches and find out:

Seconded, can you send a pointer to the VGIC rework patches mentioned?

> - if it is a major improvement for the general quality of the code
> - if that allow us to *delete* a lot of code (if it is just churn, I'm
> not really interested)
> - if it helps or hinders further developments that are currently in the
> pipeline
> 
> Andre, can you please share your findings? I don't remember the
> specifics of the discussion we had a few months ago...

1) Given the date in the reply I sense that your patches are from March
this year or earlier. So this is based on VGIC code from March, which
predates Marc's vgic_dyn changes that just went into 3.18-rc1? His patches
introduced another member in struct mmio_range (.bits_per_irq) to check
the validity of accesses when a reduced number of SPIs is supported.
So is this covered in your rework?
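For reference, the structure in question looked roughly like this in the 3.18-era virt/kvm/arm/vgic.c. Treat this as a sketch rather than the exact upstream definition; the stub typedefs replace kernel types so the fragment stands alone:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t phys_addr_t;	/* stub for the kernel's phys_addr_t */
struct kvm_vcpu;		/* opaque here */
struct kvm_exit_mmio;		/* opaque here */

struct mmio_range {
	phys_addr_t base;
	unsigned long len;
	int bits_per_irq;	/* added by the vgic_dyn series: used to
				 * validate accesses when fewer SPIs than
				 * the architectural maximum are configured */
	bool (*handle_mmio)(struct kvm_vcpu *vcpu, struct kvm_exit_mmio *mmio,
			    phys_addr_t offset);
};
```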

2)
>>>  - there are 15 MMIO ranges for the vgic now - each should be

Well, the GICv3 emulation adds 41 new ranges. Not sure if this still fits.

>>> registered as a separate device

I found this fact a show-stopper when looking at this a month ago.
Somehow it feels wrong to register a bunch of pseudo-devices. I could go
with registering a small number of regions (one distributor, two
redistributor regions, for instance), but not with handling every single
one of the 41 + 15 register "groups" as a device.

Also I wasn't sure if we had to expose some of the vGIC structures to
the other KVM code layers.

But I am open to any suggestions (as long as they go in _after_ my
vGICv3 series ;-)  - so I'm looking forward to some repo to see what it
looks like.

Cheers,
Andre.


* [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
  2014-11-13 11:52               ` Andre Przywara
@ 2014-11-13 12:29                 ` Nikolay Nikolaev
  2014-11-13 12:52                   ` Andre Przywara
  0 siblings, 1 reply; 16+ messages in thread
From: Nikolay Nikolaev @ 2014-11-13 12:29 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Nov 13, 2014 at 1:52 PM, Andre Przywara <andre.przywara@arm.com> wrote:
> Hi Nikolay,
>
> On 13/11/14 11:37, Marc Zyngier wrote:
>> [fixing Andre's email address]
>>
>> On 13/11/14 11:20, Christoffer Dall wrote:
>>> On Thu, Nov 13, 2014 at 12:45:42PM +0200, Nikolay Nikolaev wrote:
>>>
>>> [...]
>>>
>>>>>>
>>>>>> Going through the vgic_handle_mmio we see that it will require large
>>>>>> refactoring:
>>>>>>  - there are 15 MMIO ranges for the vgic now - each should be
>>>>>> registered as a separate device
>>>>>>  - the handler of each range should be split into read and write
>>>>>>  - all handlers take 'struct kvm_exit_mmio', and pass it to
>>>>>> 'vgic_reg_access', 'mmio_data_read' and 'mmio_data_read'
>>>>>>
>>>>>> To sum up - if we do this refactoring of vgic's MMIO handling +
>>>>>> kvm_io_bus_ API getting 'vcpu" argument we'll get a 'much' cleaner
>>>>>> vgic code and as a bonus we'll get 'ioeventfd' capabilities.
>>>>>>
>>>>>> We have 3 questions:
>>>>>>  - is the kvm_io_bus_ getting 'vcpu' argument acceptable for the other
>>>>>> architectures too?
>>>>>>  - is this huge vgic MMIO handling redesign acceptable/desired (it
>>>>>> touches a lot of code)?
>>>>>>  - is there a way that ioeventfd is accepted leaving vgic.c in it's
>>>>>> current state?
>>>>>>
>>>>> Not sure how the latter question is relevant to this, but check with
>>>>> Andre who recently looked at this as well and decided that for GICv3 the
>>>>> only sane thing was to remove that comment for the gic.
>>>> @Andre - what's your experience with the GICv3 and MMIO handling,
>>>> anything specific?
>>>>>
>>>>> I don't recall the details of what you were trying to accomplish here
>>>>> (it's been 8 months or so) but the surely the vgic handling code should
>>>>> *somehow* be integrated into the handle_kernel_mmio (like Paolo
>>>>> suggested), unless you come back and tell me that that would involve a
>>>>> complete rewrite of the vgic code.
>>>> I'm experimenting now - it's not exactly rewrite of whole vgic code,
>>>> but it will touch a lot of it  - all MMIO access handlers and the
>>>> supporting functions.
>>>> We're ready to spend the effort. My question is  - is this acceptable?
>>>>
>>> I certainly appreciate the offer to do this work, but it's hard to say
>>> at this point if it is worth it.
>>>
>>> What I was trying to say above is that Andre looked at this, and came to
>>> the conclusion that it is not worth it.
>>>
>>> Marc, what are your thoughts?
>>
>> Same here, I rely on Andre's view that it was not very useful. Now, it
>> would be good to see a mock-up of the patches and find out:
>
> Seconded, can you send a pointer to the VGIC rework patches mentioned?
They are still in a WIP state - not exactly working. I'm still exploring
what the status is.

Our major target is having ioeventfd support on ARM. For this we need
to support the kvm_io_bus_ mechanisms for MMIO access (because the
ioeventfd device is registered this way). Then this subject of
integrating the vgic with the kvm_io_bus_ APIs came up.
My personal opinion - they should be able to coexist in peace.
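The routing being targeted can be mocked up in userspace to show where the extra vcpu argument would go. Everything below (io_device, io_bus, bus_write, rec_write) is a simplified stand-in for the kernel's kvm_io_bus machinery, not its real prototypes; the point is only that the dispatcher threads the vcpu pointer through to the device callback, which is what banked GIC registers would need:

```c
#include <assert.h>
#include <stdint.h>

struct kvm_vcpu { int vcpu_id; };	/* only the field the mock needs */

struct io_device {
	uint64_t base, len;
	int (*write)(struct kvm_vcpu *vcpu, struct io_device *dev,
		     uint64_t offset, int len, const void *val);
};

struct io_bus {
	struct io_device *devs[16];
	int ndevs;
};

/* Mock kvm_io_bus_write() with the proposed extra vcpu argument: scan
 * the registered devices and dispatch to the first one whose range
 * covers the access; -1 means "punt the exit to userspace". */
static int bus_write(struct kvm_vcpu *vcpu, struct io_bus *bus,
		     uint64_t addr, int len, const void *val)
{
	for (int i = 0; i < bus->ndevs; i++) {
		struct io_device *dev = bus->devs[i];
		if (addr >= dev->base && addr + len <= dev->base + dev->len)
			return dev->write(vcpu, dev, addr - dev->base, len, val);
	}
	return -1;
}

/* A recording device so the dispatch can be observed. */
static int last_vcpu = -1;
static uint32_t last_val;

static int rec_write(struct kvm_vcpu *vcpu, struct io_device *dev,
		     uint64_t offset, int len, const void *val)
{
	(void)dev; (void)offset; (void)len;
	last_vcpu = vcpu->vcpu_id;
	last_val = *(const uint32_t *)val;
	return 0;
}
```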

>
>> - if it is a major improvement for the general quality of the code
>> - if that allow us to *delete* a lot of code (if it is just churn, I'm
>> not really interested)
>> - if it helps or hinders further developments that are currently in the
>> pipeline
>>
>> Andre, can you please share your findings? I don't remember the
>> specifics of the discussion we had a few months ago...
>
> 1) Given the date in the reply I sense that your patches are from March
> this year or earlier. So this is based on VGIC code from March, which
> predates Marc's vgic_dyn changes that just went in 3.18-rc1? His patches
> introduced another member to struct mmio_range to check validity of
> accesses with a reduced number of SPIs supported (.bits_per_irq).
> So is this covered in your rework?
Not yet (we rebased to 3.17) - I didn't see it, but it should not be an issue.
>
> 2)
>>>>  - there are 15 MMIO ranges for the vgic now - each should be
>
> Well, the GICv3 emulation adds 41 new ranges. Not sure if this still fits.
>
>>>> registered as a separate device
>
> I found this fact a show-stopper when looking at this a month ago.
> Somehow it feels wrong to register a bunch of pseudo-devices. I could go
> with registering a small number of regions (one distributor, two
> redistributor regions for instance), but not handling every single of
> the 41 + 15 register "groups" as a device.
Do you sense performance issues, or is it just "it's not right"?
Maybe kvm_io_bus_ needs some extension to handle a device with multiple regions?

>
> Also I wasn't sure if we had to expose some of the vGIC structures to
> the other KVM code layers.
I don't see such a need. Can you point to an example?
>
> But I am open to any suggestions (as long as they go in _after_ my
> vGICv3 series ;-)  - so looking forward to some repo to see how it looks
> like.
There is still not much to show - but if there is interest we may
prepare something that demonstrates the idea.
BTW, where is your repo with the vGICv3 (sorry, I don't follow so closely)?

>
> Cheers,
> Andre.

regards,
Nikolay Nikolaev


* [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
  2014-11-13 12:29                 ` Nikolay Nikolaev
@ 2014-11-13 12:52                   ` Andre Przywara
  0 siblings, 0 replies; 16+ messages in thread
From: Andre Przywara @ 2014-11-13 12:52 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Nikolay,

On 13/11/14 12:29, Nikolay Nikolaev wrote:
> On Thu, Nov 13, 2014 at 1:52 PM, Andre Przywara <andre.przywara@arm.com> wrote:
>> Hi Nikolay,
>>
>> On 13/11/14 11:37, Marc Zyngier wrote:
>>> [fixing Andre's email address]
>>>
>>> On 13/11/14 11:20, Christoffer Dall wrote:
>>>> On Thu, Nov 13, 2014 at 12:45:42PM +0200, Nikolay Nikolaev wrote:
>>>>
>>>> [...]
>>>>
>>>>>>>
>>>>>>> Going through the vgic_handle_mmio we see that it will require large
>>>>>>> refactoring:
>>>>>>>  - there are 15 MMIO ranges for the vgic now - each should be
>>>>>>> registered as a separate device
>>>>>>>  - the handler of each range should be split into read and write
>>>>>>>  - all handlers take 'struct kvm_exit_mmio', and pass it to
>>>>>>> 'vgic_reg_access', 'mmio_data_read' and 'mmio_data_read'
>>>>>>>
>>>>>>> To sum up - if we do this refactoring of vgic's MMIO handling +
>>>>>>> kvm_io_bus_ API getting 'vcpu" argument we'll get a 'much' cleaner
>>>>>>> vgic code and as a bonus we'll get 'ioeventfd' capabilities.
>>>>>>>
>>>>>>> We have 3 questions:
>>>>>>>  - is the kvm_io_bus_ getting 'vcpu' argument acceptable for the other
>>>>>>> architectures too?
>>>>>>>  - is this huge vgic MMIO handling redesign acceptable/desired (it
>>>>>>> touches a lot of code)?
>>>>>>>  - is there a way that ioeventfd is accepted leaving vgic.c in it's
>>>>>>> current state?
>>>>>>>
>>>>>> Not sure how the latter question is relevant to this, but check with
>>>>>> Andre who recently looked at this as well and decided that for GICv3 the
>>>>>> only sane thing was to remove that comment for the gic.
>>>>> @Andre - what's your experience with the GICv3 and MMIO handling,
>>>>> anything specific?
>>>>>>
>>>>>> I don't recall the details of what you were trying to accomplish here
>>>>>> (it's been 8 months or so) but the surely the vgic handling code should
>>>>>> *somehow* be integrated into the handle_kernel_mmio (like Paolo
>>>>>> suggested), unless you come back and tell me that that would involve a
>>>>>> complete rewrite of the vgic code.
>>>>> I'm experimenting now - it's not exactly rewrite of whole vgic code,
>>>>> but it will touch a lot of it  - all MMIO access handlers and the
>>>>> supporting functions.
>>>>> We're ready to spend the effort. My question is  - is this acceptable?
>>>>>
>>>> I certainly appreciate the offer to do this work, but it's hard to say
>>>> at this point if it is worth it.
>>>>
>>>> What I was trying to say above is that Andre looked at this, and came to
>>>> the conclusion that it is not worth it.
>>>>
>>>> Marc, what are your thoughts?
>>>
>>> Same here, I rely on Andre's view that it was not very useful. Now, it
>>> would be good to see a mock-up of the patches and find out:
>>
>> Seconded, can you send a pointer to the VGIC rework patches mentioned?
> They are still in WiP state - not exactly working. I'm still exploring
> what the status is.
> 
> Our major target is having ioeventfd suport in ARM. For this we need
> to support kvm_io_bus_ mechanisms for MMIO access (cause ioevent fd
> device is registered this way). Then this subject of integrating vgic
> with the kvm_io_bus_ APIs came up.
> My personal opinion - they should be able to coexist in peace.
> 
>>
>>> - if it is a major improvement for the general quality of the code
>>> - if that allow us to *delete* a lot of code (if it is just churn, I'm
>>> not really interested)
>>> - if it helps or hinders further developments that are currently in the
>>> pipeline
>>>
>>> Andre, can you please share your findings? I don't remember the
>>> specifics of the discussion we had a few months ago...
>>
>> 1) Given the date in the reply I sense that your patches are from March
>> this year or earlier. So this is based on VGIC code from March, which
>> predates Marc's vgic_dyn changes that just went in 3.18-rc1? His patches
>> introduced another member to struct mmio_range to check validity of
>> accesses with a reduced number of SPIs supported (.bits_per_irq).
>> So is this covered in your rework?
> Still no (rebased to 3.17) - didn't see it, but should not be an issue.
>>
>> 2)
>>>>>  - there are 15 MMIO ranges for the vgic now - each should be
>>
>> Well, the GICv3 emulation adds 41 new ranges. Not sure if this still fits.
>>
>>>>> registered as a separate device
>>
>> I found this fact a show-stopper when looking at this a month ago.
>> Somehow it feels wrong to register a bunch of pseudo-devices. I could go
>> with registering a small number of regions (one distributor, two
>> redistributor regions for instance), but not handling every single of
>> the 41 + 15 register "groups" as a device.
> Do you sense performance issues, or just "it's not right"?

Just "not right", since there are no 15 devices in a real GICv2 to emulate.

> Maybe kvm_io_bus_ needs some extesion to hanlde a device with multiple regions?

Well, maybe a simple rename could fix this, but I am not sure it still
fits then. I am just afraid we'd end up with quite some code duplication
in each handler function. Also, if we needed to split up read and write,
this ends up with much more code. Currently this is cleverly handled in
one function without looking messy (great job, Marc, btw!).
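The single-function pattern referred to here can be illustrated with a stripped-down mock. The mode constants and helper below are simplified stand-ins patterned after vgic_reg_access, not the kernel code itself: one helper serves both directions, with a mode mask selecting the read semantics and write semantics (such as write-1-to-set) per register:

```c
#include <assert.h>
#include <stdint.h>

#define ACCESS_READ_VALUE	(1 << 0)
#define ACCESS_WRITE_VALUE	(1 << 1)
#define ACCESS_WRITE_SETBIT	(1 << 2)

/* One access helper for both directions, selected by is_write + mode. */
static void reg_access(uint32_t *reg, uint32_t *data, int is_write, int mode)
{
	if (is_write) {
		if (mode & ACCESS_WRITE_SETBIT)
			*reg |= *data;		/* write-1-to-set register */
		else if (mode & ACCESS_WRITE_VALUE)
			*reg = *data;
		/* other modes (clearbit, ignored, ...) would go here */
	} else if (mode & ACCESS_READ_VALUE) {
		*data = *reg;
	}
}
```

Splitting this into per-direction callbacks means each register's semantics get restated in two places, which is the duplication worry above.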

Also handling private GICv3 interrupts is no longer done via a single
MMIO offset banked by the accessing (v)CPU, but by per-CPU MMIO regions.
Would we then need to register separate devices for each VCPU, which
would multiply the 41 "devices" by the number of VCPUs?

>> Also I wasn't sure if we had to expose some of the vGIC structures to
>> the other KVM code layers.
> I don't see such a need. Can you point some example?

No, was just a feeling. Currently we are happily confined to
virt/kvm/arm and the interface to the generic and arch KVM code is
pretty small. I was just afraid that would have to be extended. But if
you say it's fine, then it's fine.

>> But I am open to any suggestions (as long as they go in _after_ my
>> vGICv3 series ;-)  - so looking forward to some repo to see how it looks
>> like.
> There is still nothing much to show - but if there is interest we may
> prepare something that shows the idea.

Yeah, just some dump of the vgic.c would suffice. Or maybe even an
example implementation of one or two registers to see how it compares to
the current code.

I just get the feeling that GICv2 emulation would be fine with this
refactoring, but it wouldn't fit anymore for GICv3 without hacks.

> BTW, where is your repo (sorry I don't follow so close) with the vGICv3?

It's on: http://www.linux-arm.org/git?p=linux-ap.git
Check the latest kvm-gicv3 branch.

Thanks,
Andre.


* [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
  2014-11-13 10:45         ` Nikolay Nikolaev
  2014-11-13 11:20           ` Christoffer Dall
@ 2014-11-13 14:16           ` Eric Auger
  2014-11-13 14:23             ` Eric Auger
  1 sibling, 1 reply; 16+ messages in thread
From: Eric Auger @ 2014-11-13 14:16 UTC (permalink / raw)
  To: linux-arm-kernel

On 11/13/2014 11:45 AM, Nikolay Nikolaev wrote:
> On Mon, Nov 10, 2014 at 6:27 PM, Christoffer Dall
> <christoffer.dall@linaro.org> wrote:
>> On Mon, Nov 10, 2014 at 05:09:07PM +0200, Nikolay Nikolaev wrote:
>>> Hello,
>>>
>>> On Fri, Mar 28, 2014 at 9:09 PM, Christoffer Dall
>>> <christoffer.dall@linaro.org> wrote:
>>>>
>>>> On Thu, Mar 13, 2014 at 04:57:26PM +0100, Antonios Motakis wrote:
>>>>> On an unhandled IO memory abort, use the kvm_io_bus_* API in order to
>>>>> handle the MMIO access through any registered read/write callbacks. This
>>>>> is a dependency for eventfd support (ioeventfd and irqfd).
>>>>>
>>>>> However, accesses to the VGIC are still left implemented independently,
>>>>> since the kvm_io_bus_* API doesn't pass the VCPU pointer doing the access.
>>>>>
>>>>> Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
>>>>> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
>>>>> ---
>>>>>  arch/arm/kvm/mmio.c | 32 ++++++++++++++++++++++++++++++++
>>>>>  virt/kvm/arm/vgic.c |  5 ++++-
>>>>>  2 files changed, 36 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
>>>>> index 4cb5a93..1d17831 100644
>>>>> --- a/arch/arm/kvm/mmio.c
>>>>> +++ b/arch/arm/kvm/mmio.c
>>>>> @@ -162,6 +162,35 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>>>       return 0;
>>>>>  }
>>>>>
>>>>> +/**
>>>>> + * kvm_handle_mmio - handle an in-kernel MMIO access
>>>>> + * @vcpu:    pointer to the vcpu performing the access
>>>>> + * @run:     pointer to the kvm_run structure
>>>>> + * @mmio:    pointer to the data describing the access
>>>>> + *
>>>>> + * returns true if the MMIO access has been performed in kernel space,
>>>>> + * and false if it needs to be emulated in user space.
>>>>> + */
>>>>> +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>> +             struct kvm_exit_mmio *mmio)
>>>>> +{
>>>>> +     int ret;
>>>>> +     if (mmio->is_write) {
>>>>> +             ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
>>>>> +                             mmio->len, &mmio->data);
>>>>> +
>>>>> +     } else {
>>>>> +             ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
>>>>> +                             mmio->len, &mmio->data);
>>>>> +     }
>>>>> +     if (!ret) {
>>>>> +             kvm_prepare_mmio(run, mmio);
>>>>> +             kvm_handle_mmio_return(vcpu, run);
>>>>> +     }
>>>>> +
>>>>> +     return !ret;
>>>>> +}
>>>>> +
>>>>>  int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>>                phys_addr_t fault_ipa)
>>>>>  {
>>>>> @@ -200,6 +229,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>>       if (vgic_handle_mmio(vcpu, run, &mmio))
>>>>>               return 1;
>>>>>
>>>>> +     if (handle_kernel_mmio(vcpu, run, &mmio))
>>>>> +             return 1;
>>>>> +
>>>
>>>
>>> We're reconsidering ioeventfds patchseries and we tried to evaluate
>>> what you suggested here.
>>>
>>>>
>>>> this special-casing of the vgic is now really terrible.  Is there
>>>> anything holding you back from doing the necessary restructure of the
>>>> kvm_bus_io_*() API instead?
>>>
>>> Restructuring the kvm_io_bus_ API is not a big thing (we actually did
>>> it), but is not directly related to the these patches.
>>> Of course it can be justified if we do it in the context of removing
>>> vgic_handle_mmio and leaving only handle_kernel_mmio.
>>>
>>>>
>>>> That would allow us to get rid of the ugly
>>>> Fix it! in the vgic driver as well.
>>>
>>> Going through the vgic_handle_mmio we see that it will require large
>>> refactoring:
>>>  - there are 15 MMIO ranges for the vgic now - each should be
>>> registered as a separate device
Hi Nikolay, Andre,

what mandates registering 15 devices? Isn't it possible to register a
single kvm_io_device covering the whole distributor range [base, base +
KVM_VGIC_V2_DIST_SIZE] (as the current code does), and in the associated
kvm_io_device_ops read/write callbacks locate the addressed range and do
the same as what is done in the current vgic_handle_mmio? Isn't it done
that way for the ioapic? What am I missing?
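The alternative suggested here - one device for the whole window, dispatching internally - might look like this rough userspace sketch. Names and the single example register are illustrative only; the internal table mirrors what vgic_handle_mmio's mmio_range lookup already does:

```c
#include <assert.h>
#include <stdint.h>

struct dist_range {
	uint64_t base, len;
	int (*handle)(uint64_t offset, int is_write, uint32_t *val);
};

static uint32_t dist_ctlr;	/* pretend GICD_CTLR backing store */

static int handle_ctlr(uint64_t offset, int is_write, uint32_t *val)
{
	(void)offset;
	if (is_write)
		dist_ctlr = *val;
	else
		*val = dist_ctlr;
	return 0;
}

static const struct dist_range dist_ranges[] = {
	{ 0x000, 0x004, handle_ctlr },
	/* ... the remaining register groups would follow ... */
};

/* The single registered device's write op: locate the sub-range inside
 * the distributor window, then dispatch, as vgic_handle_mmio does. */
static int dist_write(uint64_t addr, uint32_t val)
{
	for (unsigned int i = 0;
	     i < sizeof(dist_ranges) / sizeof(dist_ranges[0]); i++) {
		const struct dist_range *r = &dist_ranges[i];
		if (addr >= r->base && addr < r->base + r->len)
			return r->handle(addr - r->base, 1, &val);
	}
	return -1;	/* unmapped offset within the window */
}
```

This keeps the kvm_io_bus registration count at one per window (distributor, redistributor regions) instead of one per register group.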

Thanks

Best Regards

Eric
>>>  - the handler of each range should be split into read and write
>>>  - all handlers take 'struct kvm_exit_mmio', and pass it to
>>> 'vgic_reg_access', 'mmio_data_read' and 'mmio_data_read'
>>>
>>> To sum up - if we do this refactoring of vgic's MMIO handling +
>>> kvm_io_bus_ API getting 'vcpu" argument we'll get a 'much' cleaner
>>> vgic code and as a bonus we'll get 'ioeventfd' capabilities.
>>>
>>> We have 3 questions:
>>>  - is the kvm_io_bus_ getting 'vcpu' argument acceptable for the other
>>> architectures too?
>>>  - is this huge vgic MMIO handling redesign acceptable/desired (it
>>> touches a lot of code)?
>>>  - is there a way that ioeventfd is accepted leaving vgic.c in it's
>>> current state?
>>>
>> Not sure how the latter question is relevant to this, but check with
>> Andre who recently looked at this as well and decided that for GICv3 the
>> only sane thing was to remove that comment for the gic.
> @Andre - what's your experience with the GICv3 and MMIO handling,
> anything specific?
>>
>> I don't recall the details of what you were trying to accomplish here
>> (it's been 8 months or so) but the surely the vgic handling code should
>> *somehow* be integrated into the handle_kernel_mmio (like Paolo
>> suggested), unless you come back and tell me that that would involve a
>> complete rewrite of the vgic code.
> I'm experimenting now - it's not exactly rewrite of whole vgic code,
> but it will touch a lot of it  - all MMIO access handlers and the
> supporting functions.
> We're ready to spend the effort. My question is  - is this acceptable?
> 
> regards,
> Nikolay Nikolaev
> Virtual Open Systems
>>
>> -Christoffer
> _______________________________________________
> kvmarm mailing list
> kvmarm at lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
> 


* [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus
  2014-11-13 14:16           ` Eric Auger
@ 2014-11-13 14:23             ` Eric Auger
  0 siblings, 0 replies; 16+ messages in thread
From: Eric Auger @ 2014-11-13 14:23 UTC (permalink / raw)
  To: linux-arm-kernel

On 11/13/2014 03:16 PM, Eric Auger wrote:
> On 11/13/2014 11:45 AM, Nikolay Nikolaev wrote:
>> On Mon, Nov 10, 2014 at 6:27 PM, Christoffer Dall
>> <christoffer.dall@linaro.org> wrote:
>>> On Mon, Nov 10, 2014 at 05:09:07PM +0200, Nikolay Nikolaev wrote:
>>>> Hello,
>>>>
>>>> On Fri, Mar 28, 2014 at 9:09 PM, Christoffer Dall
>>>> <christoffer.dall@linaro.org> wrote:
>>>>>
>>>>> On Thu, Mar 13, 2014 at 04:57:26PM +0100, Antonios Motakis wrote:
>>>>>> On an unhandled IO memory abort, use the kvm_io_bus_* API in order to
>>>>>> handle the MMIO access through any registered read/write callbacks. This
>>>>>> is a dependency for eventfd support (ioeventfd and irqfd).
>>>>>>
>>>>>> However, accesses to the VGIC are still left implemented independently,
>>>>>> since the kvm_io_bus_* API doesn't pass the VCPU pointer doing the access.
>>>>>>
>>>>>> Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
>>>>>> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
>>>>>> ---
>>>>>>  arch/arm/kvm/mmio.c | 32 ++++++++++++++++++++++++++++++++
>>>>>>  virt/kvm/arm/vgic.c |  5 ++++-
>>>>>>  2 files changed, 36 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
>>>>>> index 4cb5a93..1d17831 100644
>>>>>> --- a/arch/arm/kvm/mmio.c
>>>>>> +++ b/arch/arm/kvm/mmio.c
>>>>>> @@ -162,6 +162,35 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>>>>       return 0;
>>>>>>  }
>>>>>>
>>>>>> +/**
>>>>>> + * kvm_handle_mmio - handle an in-kernel MMIO access
>>>>>> + * @vcpu:    pointer to the vcpu performing the access
>>>>>> + * @run:     pointer to the kvm_run structure
>>>>>> + * @mmio:    pointer to the data describing the access
>>>>>> + *
>>>>>> + * returns true if the MMIO access has been performed in kernel space,
>>>>>> + * and false if it needs to be emulated in user space.
>>>>>> + */
>>>>>> +static bool handle_kernel_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>>> +             struct kvm_exit_mmio *mmio)
>>>>>> +{
>>>>>> +     int ret;
>>>>>> +     if (mmio->is_write) {
>>>>>> +             ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
>>>>>> +                             mmio->len, &mmio->data);
>>>>>> +
>>>>>> +     } else {
>>>>>> +             ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, mmio->phys_addr,
>>>>>> +                             mmio->len, &mmio->data);
>>>>>> +     }
>>>>>> +     if (!ret) {
>>>>>> +             kvm_prepare_mmio(run, mmio);
>>>>>> +             kvm_handle_mmio_return(vcpu, run);
>>>>>> +     }
>>>>>> +
>>>>>> +     return !ret;
>>>>>> +}
>>>>>> +
>>>>>>  int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>>>                phys_addr_t fault_ipa)
>>>>>>  {
>>>>>> @@ -200,6 +229,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>>>>>       if (vgic_handle_mmio(vcpu, run, &mmio))
>>>>>>               return 1;
>>>>>>
>>>>>> +     if (handle_kernel_mmio(vcpu, run, &mmio))
>>>>>> +             return 1;
>>>>>> +
>>>>
>>>>
>>>> We're reconsidering ioeventfds patchseries and we tried to evaluate
>>>> what you suggested here.
>>>>
>>>>>
>>>>> this special-casing of the vgic is now really terrible.  Is there
>>>>> anything holding you back from doing the necessary restructure of the
>>>>> kvm_bus_io_*() API instead?
>>>>
>>>> Restructuring the kvm_io_bus_ API is not a big thing (we actually did
>>>> it), but is not directly related to the these patches.
>>>> Of course it can be justified if we do it in the context of removing
>>>> vgic_handle_mmio and leaving only handle_kernel_mmio.
>>>>
>>>>>
>>>>> That would allow us to get rid of the ugly
>>>>> Fix it! in the vgic driver as well.
>>>>
>>>> Going through the vgic_handle_mmio we see that it will require large
>>>> refactoring:
>>>>  - there are 15 MMIO ranges for the vgic now - each should be
>>>> registered as a separate device
Re-correcting Andre's address, sorry:
Hi Nikolay, Andre,

what mandates registering 15 devices? Isn't it possible to register a
single kvm_io_device covering the whole distributor range [base, base +
KVM_VGIC_V2_DIST_SIZE] (as the current code does), and in the associated
kvm_io_device_ops read/write handlers locate the addressed range and do
the same as what is done in the current vgic_handle_mmio? Isn't it done
that way for the ioapic? What do I miss?

Thanks

Best Regards

Eric
>>>>  - the handler of each range should be split into read and write
>>>>  - all handlers take 'struct kvm_exit_mmio', and pass it to
>>>> 'vgic_reg_access', 'mmio_data_read' and 'mmio_data_write'
>>>>
>>>> To sum up - if we do this refactoring of vgic's MMIO handling +
>>>> kvm_io_bus_ API getting 'vcpu' argument we'll get a 'much' cleaner
>>>> vgic code and as a bonus we'll get 'ioeventfd' capabilities.
>>>>
>>>> We have 3 questions:
>>>>  - is the kvm_io_bus_ getting 'vcpu' argument acceptable for the other
>>>> architectures too?
>>>>  - is this huge vgic MMIO handling redesign acceptable/desired (it
>>>> touches a lot of code)?
>>>>  - is there a way that ioeventfd is accepted leaving vgic.c in its
>>>> current state?
>>>>
>>> Not sure how the latter question is relevant to this, but check with
>>> Andre who recently looked at this as well and decided that for GICv3 the
>>> only sane thing was to remove that comment for the gic.
>> @Andre - what's your experience with the GICv3 and MMIO handling,
>> anything specific?
>>>
>>> I don't recall the details of what you were trying to accomplish here
>>> (it's been 8 months or so) but surely the vgic handling code should
>>> *somehow* be integrated into the handle_kernel_mmio (like Paolo
>>> suggested), unless you come back and tell me that that would involve a
>>> complete rewrite of the vgic code.
>> I'm experimenting now - it's not exactly a rewrite of the whole vgic code,
>> but it will touch a lot of it - all MMIO access handlers and the
>> supporting functions.
>> We're ready to spend the effort. My question is - is this acceptable?
>>
>> regards,
>> Nikolay Nikolaev
>> Virtual Open Systems
>>>
>>> -Christoffer
>> _______________________________________________
>> kvmarm mailing list
>> kvmarm at lists.cs.columbia.edu
>> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
>>
> 


end of thread, other threads:[~2014-11-13 14:23 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <1394726249-1547-1-git-send-email-a.motakis@virtualopensystems.com>
2014-03-13 15:57 ` [RFC PATCH 1/4] ARM: KVM: on unhandled IO mem abort, route the call to the KVM MMIO bus Antonios Motakis
2014-03-28 19:09   ` Christoffer Dall
2014-03-29 17:34     ` Paolo Bonzini
2014-11-10 15:09     ` Nikolay Nikolaev
2014-11-10 16:27       ` Christoffer Dall
2014-11-13 10:45         ` Nikolay Nikolaev
2014-11-13 11:20           ` Christoffer Dall
2014-11-13 11:20             ` Christoffer Dall
2014-11-13 11:37             ` Marc Zyngier
2014-11-13 11:52               ` Andre Przywara
2014-11-13 12:29                 ` Nikolay Nikolaev
2014-11-13 12:52                   ` Andre Przywara
2014-11-13 14:16           ` Eric Auger
2014-11-13 14:23             ` Eric Auger
2014-03-13 15:57 ` [RFC PATCH 3/4] ARM: KVM: enable linking against eventfd Antonios Motakis
2014-03-13 15:57 ` [RFC PATCH 4/4] ARM: KVM: enable KVM_CAP_IOEVENTFD Antonios Motakis
