From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anthony Liguori
In-Reply-To: <5065A2AB.7050104@siemens.com>
References: <8631DC5930FA9E468F04F3FD3A5D00721394D3EE@USINDEM103.corp.hds.com> <50655A35.9080505@siemens.com> <87a9was4wn.fsf@codemonkey.ws> <5065A2AB.7050104@siemens.com>
Date: Fri, 28 Sep 2012 10:54:14 -0500
Message-ID: <87lifucfdl.fsf@codemonkey.ws>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Subject: Re: [Qemu-devel] [PATCH] Add option to mlock guest and qemu memory
To: Jan Kiszka
Cc: "dle-develop@lists.sourceforge.net", Seiji Aguchi, Satoru Moriya, "qemu-devel@nongnu.org", "avi@redhat.com"

Jan Kiszka writes:

> On 2012-09-28 14:33, Anthony Liguori wrote:
>> Jan Kiszka writes:
>>
>>> On 2012-09-28 01:21, Satoru Moriya wrote:
>>>> This is the first time I have posted a patch to qemu-devel.
>>>> If something is missing or wrong, please let me know.
>>>>
>>>> We have plans to migrate old enterprise systems that require low
>>>> latency (millisecond order) to a KVM virtualized environment.
>>>> Usually, we use mlock to preallocate and pin down process memory
>>>> in order to avoid page allocation on the latency-critical path.
>>>> On the other hand, in a KVM environment, mlocking inside the
>>>> guest is not effective because it cannot prevent page reclaim on
>>>> the host. To avoid guest memory reclaim, QEMU does have the
>>>> "-mem-path" option, but that is really meant for using hugepages,
>>>> and QEMU's own memory regions are not allocated from hugepages,
>>>> so they may still be reclaimed. That can cause a latency problem.
>>>>
>>>> To avoid reclaim of both guest and QEMU memory, this patch
>>>> introduces a new "mlock" option. With this option, we can
>>>> preallocate and pin down guest and QEMU memory before booting
>>>> the guest OS.
>>>
>>> I guess this reduces the likelihood of multi-millisecond latencies
>>> for you but does not eliminate them. Of course, mlockall is part of
>>> our local changes for real-time QEMU/KVM, but it is just one of the
>>> many pieces required. I'm wondering how the situation is on your
>>> side.
>>>
>>> I think mlockall should eventually be enabled automatically as soon
>>> as you ask for real-time support for QEMU guests. How that should
>>> be controlled is another question. I'm currently carrying a
>>> top-level switch "-rt maxprio=x[,policy=y]" here, likely not the
>>> final solution. I'm not really convinced we need to control memory
>>> locking separately. And as we are very reluctant to add new
>>> top-level switches, this is even more important.
>>
>> I think you're right here, although I'd suggest not abbreviating.
>
> You mean in the sense of "-realtime" instead of "-rt"?

Yes. Or any other word that makes sense.

Regards,

Anthony Liguori

>
> Jan
>
> --
> Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
> Corporate Competence Center Embedded Linux