From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 24 Oct 2022 22:45:16 +0000
From: Sean Christopherson
To: Christian Borntraeger
Cc: Emanuele Giuseppe Esposito, kvm@vger.kernel.org, Paolo Bonzini,
	Jonathan Corbet, Maxim Levitsky, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, David Hildenbrand, x86@kernel.org,
	"H. Peter Anvin", linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/4] KVM: API to block and resume all running vcpus in a vm
References: <20221022154819.1823133-1-eesposit@redhat.com>
 <2701ce67-bfff-8c0c-4450-7c4a281419de@redhat.com>
 <384b2622-8d7f-ce02-1452-84a86e3a5697@linux.ibm.com>
In-Reply-To: <384b2622-8d7f-ce02-1452-84a86e3a5697@linux.ibm.com>
Content-Type: text/plain; charset=us-ascii

On Mon, Oct 24, 2022, Christian Borntraeger wrote:
> On 24.10.22 at 10:33, Emanuele Giuseppe Esposito wrote:
> > On 24/10/2022 at 09:56, Christian Borntraeger wrote:
> > > > Therefore the simplest solution is to pause all vcpus in the kvm
> > > > side, so that:

Simplest for QEMU maybe, most definitely not simplest for KVM.

> > > > - userspace just needs to call the new API before making memslots
> > > >   changes, keeping modifications to the minimum
> > > > - dirty page updates are also performed when vcpus are blocked, so
> > > >   there is no time window between the dirty page ioctl and memslots
> > > >   modifications, since vcpus are all stopped.
> > > > - no need to modify the existing memslots API
> > >
> > > Isn't QEMU able to achieve the same goal today by forcing all vCPUs
> > > into userspace with a signal? Can you provide some rationale why this
> > > is better in the cover letter or patch description?
> >
> > David Hildenbrand tried to propose something similar here:
> > https://github.com/davidhildenbrand/qemu/commit/86b1bf546a8d00908e33f7362b0b61e2be8dbb7a
> >
> > While it is not optimized, I think it's more complex than the current
> > series, since QEMU would also have to make sure all running ioctls
> > finish and prevent new ones from being executed.
> >
> > Also, we can't use pause_all_vcpus()/resume_all_vcpus() because they
> > drop the BQL.
> >
> > Would that be ok as rationale?
>
> Yes, that helps and should be part of the cover letter for the next
> iterations.

But that doesn't explain why KVM needs to get involved; it only explains why
QEMU can't use its existing pause_all_vcpus().  I do not understand why this
is a problem QEMU needs KVM's help to solve.