From: linmiaohe
To: Vitaly Kuznetsov, Paolo Bonzini, Sean Christopherson, Jim Mattson
Cc: kvm list, LKML, the arch/x86 maintainers, Radim Krčmář, Wanpeng Li, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin
Subject: Re: [PATCH] KVM: nVMX: set rflags to specify success in handle_invvpid() default case
Date: Mon, 3 Feb 2020 03:29:19 +0000
Message-ID: <668e0827d62c489cbf52b7bc5d27ba9b@huawei.com>

Vitaly Kuznetsov writes:
> Sean Christopherson writes:
>> On Thu, Jan 23, 2020 at 10:22:24AM -0800, Jim Mattson wrote:
>>> On Thu, Jan 23, 2020 at 1:54 AM Paolo Bonzini wrote:
>>> >
>>> > On 23/01/20 10:45, Vitaly Kuznetsov wrote:
>>> > >>> SDM says that "If an unsupported INVVPID type is specified,
>>> > >>> the instruction fails." and this is similar to INVEPT and I
>>> > >>> decided to check what handle_invept() does. Well, it does
>>> > >>> BUG_ON().
>>> > >>>
>>> > >>> Are we doing the right thing in any of these cases?
>>> > >>
>>> > >> Yes, both INVEPT and INVVPID catch this earlier.
>>> > >>
>>> > >> So I'm leaning towards not applying Miaohe's patch.
>>> > >
>>> > > Well, we may at least want to converge on BUG_ON() for both
>>> > > handle_invvpid()/handle_invept(), there's no need for them to
>>> > > differ.
>>> >
>>> > WARN_ON_ONCE + nested_vmx_failValid would probably be better, if we
>>> > really want to change this.
>>> >
>>> > Paolo
>>>
>>> In both cases, something is seriously wrong. The only plausible
>>> explanations are compiler error or hardware failure. It would be nice
>>> to handle *all* such failures with a KVM_INTERNAL_ERROR exit to
>>> userspace.
>>> (I'm also thinking of situations like getting a VM-exit for INIT.)
>>
>> Ya. Vitaly and I had a similar discussion[*]. The idea we tossed
>> around was to also mark the VM as having encountered a KVM/hardware
>> bug so that the VM is effectively dead. That would also allow
>> gracefully handling bugs that are detected deep in the stack, i.e.
>> can't simply return 0 to get out to userspace.
>
> Yea, I was thinking about introducing a big hammer which would stop the
> whole VM as soon as possible to make it easier to debug such situations.
> Something like (not really tested):
>

Yea, please just ignore my original patch and do whatever you prefer. :)
I'm sorry for replying on such a big day; I'm just back from a really
hard festival. :(