From: Rik van Riel
To: lsf-pc@lists.linuxfoundation.org
Cc: Linux Memory Management List, Linux kernel Mailing List, KVM list
Subject: [LSF/MM TOPIC] VM containers
Date: Fri, 22 Jan 2016 10:56:15 -0500
Message-ID: <56A2511F.1080900@redhat.com>

Hi,

I am trying to gauge interest in discussing VM containers at the
LSF/MM summit this year.

Projects like ClearLinux, Qubes, and others are all trying to use
virtual machines as better isolated containers.

That changes some of the goals of the memory management subsystem,
from "use all the resources effectively" to "use as few resources as
necessary, in case the host needs the memory for something else".

These VMs could be as small as running just one application, so this
goes a little further than simply trying to squeeze more virtual
machines into a system with frontswap and cleancache.

Single-application VM sandboxes could also get their data differently,
using (partial) host filesystem passthrough instead of a virtual block
device. This may change the relative utility of caching data inside
the guest page cache, versus freeing up that memory and allowing the
host to use it to cache things.

Are people interested in discussing this at LSF/MM, or is it better
saved for a different forum?

-- 
All rights reversed
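
[For context, one existing form of the host filesystem passthrough
mentioned above is QEMU's 9p/virtfs support. A minimal sketch follows;
the share path, mount tag, and guest mount point are example names,
and the elided options stand in for a real VM configuration:]

```shell
# Host side: export a host directory to the guest over virtio-9p.
# /srv/guest-share and the tag "hostshare" are placeholder names.
qemu-system-x86_64 \
    -m 256 -enable-kvm \
    -virtfs local,path=/srv/guest-share,mount_tag=hostshare,security_model=mapped-xattr \
    ... # kernel/disk options elided

# Guest side: mount the shared tree instead of a virtual block device.
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/host
```

With a setup like this the host page cache already holds the file
data, which is one reason caching the same data a second time in the
guest page cache may be a poor use of memory.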