From: Alexander Graf <agraf@suse.de>
To: Alexey Kardashevskiy <aik@ozlabs.ru>, linuxppc-dev@lists.ozlabs.org
Cc: kvm@vger.kernel.org, Gleb Natapov <gleb@kernel.org>,
linux-kernel@vger.kernel.org, kvm-ppc@vger.kernel.org,
Paul Mackerras <paulus@samba.org>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [PATCH 0/3] Prepare for in-kernel VFIO DMA operations acceleration
Date: Thu, 26 Jun 2014 12:37:29 +0200
Message-ID: <53ABF7E9.7040104@suse.de>
In-Reply-To: <53AB625A.5040305@ozlabs.ru>

On 26.06.14 01:59, Alexey Kardashevskiy wrote:
> On 06/26/2014 07:12 AM, Alexander Graf wrote:
>> On 06.06.14 02:20, Alexey Kardashevskiy wrote:
>>> On 06/05/2014 09:57 PM, Alexander Graf wrote:
>>>> On 05.06.14 09:25, Alexey Kardashevskiy wrote:
>>>>> This reserves 2 capability numbers.
>>>>>
>>>>> This implements KVM_CREATE_SPAPR_TCE_64, an extended version of the
>>>>> KVM_CREATE_SPAPR_TCE ioctl.
>>>>>
>>>>> Please advise how to proceed with these patches, as I suspect the
>>>>> first two should go via Paolo's tree and the last one via Alex Graf's
>>>>> tree (correct?).
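
(As a concrete illustration of what reserving these capability numbers buys
userspace, the sketch below probes for 64-bit TCE window support. It assumes
KVM_CAP_SPAPR_TCE_64 ends up exported under that name in <linux/kvm.h>, as
the series proposes; the rest is standard KVM ioctl plumbing.)

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);

	if (kvm < 0) {
		perror("open /dev/kvm");
		return 1;
	}

	/* KVM_CHECK_EXTENSION returns > 0 when the capability exists. */
	if (ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_SPAPR_TCE_64) > 0)
		printf("KVM_CREATE_SPAPR_TCE_64 is available\n");
	else
		printf("only 32-bit TCE windows are available\n");

	return 0;
}

(Reserving the numbers early is what lets a userspace probe like this be
merged ahead of the kernel-side implementation.)
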
>>>> They would just go via my tree, but only be actually allocated (read:
>>>> mergeable to QEMU) when they hit Paolo's tree.
>>>>
>>>> In fact, I don't think it makes sense to split them off at all.
>>> So? Are these patches going anywhere? Thanks.
>> So? Are you going to address the comments?
> Sorry, I cannot find anything to fix here. Ben asked some questions, I
> answered, and there were no objections. What am I missing this time?...
>> In fact, the code as is today can allocate an arbitrary amount of pinned
>> kernel memory from within user space without any checks.
>
> Right. We should at least account it in the locked limit.

Yup. And (probably) this thing will keep a counter of how many windows were
created per KVM instance to avoid having multiple copies of the same table.
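
(For reference, the locked-limit accounting being agreed on here usually
follows the pattern sketched below, modeled on similar code elsewhere in the
kernel of that era, e.g. perf and VFIO, with the 3.x-era mm->mmap_sem naming.
The function name is illustrative, not taken from the patches.)

#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/capability.h>

/* Charge npages of pinned memory against RLIMIT_MEMLOCK, refusing the
 * request when the caller would exceed the limit and lacks CAP_IPC_LOCK. */
static long account_locked_vm(struct mm_struct *mm, unsigned long npages)
{
	unsigned long locked, lock_limit;
	long ret = 0;

	down_write(&mm->mmap_sem);
	locked = mm->locked_vm + npages;
	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
	if (locked > lock_limit && !capable(CAP_IPC_LOCK))
		ret = -ENOMEM;		/* over the limit, refuse */
	else
		mm->locked_vm = locked;	/* charge the pinned pages */
	up_write(&mm->mmap_sem);

	return ret;
}

(The per-VM window counter mentioned above would then be a simple field next
to the other SPAPR TCE state, incremented alongside this accounting and
checked before another window, and another pinned table, is created.)
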
Alex
Thread overview: 16+ messages
2014-06-05 7:25 [PATCH 0/3] Prepare for in-kernel VFIO DMA operations acceleration Alexey Kardashevskiy
2014-06-05 7:25 ` [PATCH 1/3] PPC: KVM: Reserve KVM_CAP_SPAPR_TCE_VFIO capability number Alexey Kardashevskiy
2014-06-05 7:25 ` [PATCH 2/3] PPC: KVM: Reserve KVM_CAP_SPAPR_TCE_64 " Alexey Kardashevskiy
2014-06-05 7:25 ` [PATCH 3/3] PPC: KVM: Add support for 64bit TCE windows Alexey Kardashevskiy
2014-06-05 7:38 ` Benjamin Herrenschmidt
2014-06-05 9:26 ` Alexey Kardashevskiy
2014-06-05 10:27 ` Benjamin Herrenschmidt
2014-06-05 11:56 ` Alexander Graf
2014-06-05 12:30 ` Benjamin Herrenschmidt
2014-06-05 12:32 ` Alexander Graf
2014-06-05 13:04 ` Alexey Kardashevskiy
2014-06-05 11:57 ` [PATCH 0/3] Prepare for in-kernel VFIO DMA operations acceleration Alexander Graf
2014-06-06 0:20 ` Alexey Kardashevskiy
2014-06-25 21:12 ` Alexander Graf
2014-06-25 23:59 ` Alexey Kardashevskiy
2014-06-26 10:37 ` Alexander Graf [this message]
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save this message's mbox file, import it into your mail client,
and reply-to-all from there.
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=53ABF7E9.7040104@suse.de \
--to=agraf@suse.de \
--cc=aik@ozlabs.ru \
--cc=gleb@kernel.org \
--cc=kvm-ppc@vger.kernel.org \
--cc=kvm@vger.kernel.org \
--cc=linux-kernel@vger.kernel.org \
--cc=linuxppc-dev@lists.ozlabs.org \
--cc=paulus@samba.org \
--cc=pbonzini@redhat.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, reply through this message's mailto: link.
Be sure your reply has a Subject: header at the top and a blank line
before the message body.