Message-ID: <68a4a96b-9c66-6509-e75d-b1bea6cd55d1@redhat.com>
Date: Tue, 24 May 2022 20:59:09 +0200
Subject: Re: [PATCH 0/3] recover hardware corrupted page by virtio balloon
To: zhenwei pi, akpm@linux-foundation.org, naoya.horiguchi@nec.com, mst@redhat.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, jasowang@redhat.com, virtualization@lists.linux-foundation.org, pbonzini@redhat.com, peterx@redhat.com, qemu-devel@nongnu.org
References: <20220520070648.1794132-1-pizhenwei@bytedance.com>
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
In-Reply-To: <20220520070648.1794132-1-pizhenwei@bytedance.com>

On 20.05.22 09:06, zhenwei pi wrote:
> Hi,
>
> I'm trying to recover hardware-corrupted pages by virtio balloon; the
> workflow of this feature looks like this:
>
>  Guest            5.MF -> 6.RVQ FE       10.Unpoison page
>                     /            \          /
>  -------------------+-------------+----------+-----------
>                     |             |          |
>                   4.MCE       7.RVQ BE   9.RVQ Event
>  QEMU              /             \          /
>               3.SIGBUS          8.Remap
>                   /
>  ----------------+------------------------------------
>                  |
>                  +--2.MF
>  Host           /
>              1.HW error
>
> 1, A hardware page error occurs randomly.
> 2, The host side handles the corrupted page via the Memory Failure
>    mechanism and sends SIGBUS to the user process if early-kill is
>    enabled.
> 3, QEMU handles the SIGBUS; if the address belongs to guest RAM, then:
> 4, QEMU tries to inject an MCE into the guest.
> 5, The guest handles the memory failure again.
>
> Steps 1-5 have been supported for a long time; the next steps are added
> by this patch set (together with the related driver patch):
>
> 6, The guest balloon driver is notified of the corrupted PFN and sends
>    a request to the host side via the Recover VQ FrontEnd.
> 7, QEMU handles the request via the Recover VQ BackEnd, then:
> 8, QEMU remaps the corrupted HVA to fix the memory failure, then:
> 9, QEMU acks the result to the guest side via the Recover VQ.
> 10, The guest unpoisons the page if the corrupted page was recovered
>     successfully.
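
As a side note on steps 6-9: for private anonymous guest RAM, "remapping"
the corrupted HVA in step 8 boils down to installing a fresh zero page at
the same host virtual address, which is what QEMU's qemu_ram_remap() does
today during MCE recovery. A minimal sketch of that idea follows; the
message struct is hypothetical (the protocol below is explicitly still
undefined), and remap_poisoned_page() is an illustrative name, not the
actual QEMU code:

    /*
     * Illustrative sketch only: the recover-VQ message layout is
     * hypothetical, and the remap helper merely mirrors the MAP_FIXED
     * technique used by qemu_ram_remap() for private anonymous
     * guest RAM.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* Hypothetical guest<->host message for steps 6 and 9. */
    struct recover_vq_msg {
        uint64_t pfn;    /* corrupted guest page frame number */
        uint32_t result; /* host->guest: 0 on successful recovery */
    };

    /* Step 8: replace the poisoned page at 'hva' with a fresh zero page. */
    static int remap_poisoned_page(void *hva, size_t page_size)
    {
        /*
         * MAP_FIXED atomically swaps in a new anonymous mapping at the
         * same host virtual address, so the guest physical address
         * points at usable, zero-filled memory again.
         */
        void *p = mmap(hva, page_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return -1;
        }
        return 0;
    }
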

> Test:
> This patch set can be tested with QEMU (also in development):
> https://github.com/pizhenwei/qemu/tree/balloon-recover
>
> Emulate an MCE via QEMU (normal guest RAM pages only; hugepages are not
> supported):
>   virsh qemu-monitor-command vm --hmp mce 0 9 0xbd000000000000c0 0xd 0x61646678 0x8c
>
> The guest works fine (on an Intel Platinum 8260):
>   mce: [Hardware Error]: Machine check events logged
>   Memory failure: 0x61646: recovery action for dirty LRU page: Recovered
>   virtio_balloon virtio5: recovered pfn 0x61646
>   Unpoison: Unpoisoned page 0x61646 by virtio-balloon
>   MCE: Killing stress:24502 due to hardware memory corruption fault at 7f5be2e5a010
>
> And 'HardwareCorrupted' in /proc/meminfo also shows 0 kB.
>
> The protocol of the virtio balloon recover VQ is currently undefined and
> still under development:
> - 'struct virtio_balloon_recover' defines the structure used to exchange
>   messages between guest and host.
> - '__le32 corrupted_pages' in struct virtio_balloon_config is used in the
>   next step:
>   1, A VM uses RAM backed by 2M huge pages; once an MCE occurs, the whole
>      2M becomes inaccessible. Reporting 512 * 4K 'corrupted_pages' to the
>      guest gives it a chance to isolate the 512 pages ahead of time.
>   2, After migrating to another host, the corrupted pages are actually
>      recovered; once the guest reads 'corrupted_pages' as 0, it can
>      unpoison all the poisoned pages recorded in the balloon driver.

Hi,

I'm still on vacation this week, I'll try to have a look when I'm back
(and have flushed out my overflowing inbox :D).

-- 
Thanks,

David / dhildenb