From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Subject: Re: [PATCH 0/3] recover hardware corrupted page by virtio balloon
Date: Thu, 2 Jun 2022 11:40:18 +0200
Message-ID: <8e4ffc3f-62c3-636e-e65b-af4b5bbc6c99@redhat.com>
To: zhenwei pi, Andrew Morton, HORIGUCHI NAOYA(堀口 直也)
Cc: Peter Xu, Jue Wang, Paolo Bonzini, jasowang@redhat.com, LKML,
 Linux MM, mst@redhat.com, qemu-devel@nongnu.org,
 virtualization@lists.linux-foundation.org
References: <24a95dea-9ea6-a904-7c0b-197961afa1d1@bytedance.com>
 <0d266c61-605d-ce0c-4274-b0c7e10f845a@redhat.com>
 <4b0c3e37-b882-681a-36fc-16cee7e1fff0@bytedance.com>
 <5f622a65-8348-8825-a167-414f2a8cd2eb@bytedance.com>
 <484546da-16cc-8070-2a2c-868717b8a75a@redhat.com>

On 02.06.22 11:28, zhenwei pi wrote:
> On 6/1/22 15:59, David Hildenbrand wrote:
>> On 01.06.22 04:17, zhenwei pi wrote:
>>> On 5/31/22 12:08, Jue Wang wrote:
>>>> On Mon, May 30, 2022 at 8:49 AM Peter Xu wrote:
>>>>>
>>>>> On Mon, May 30, 2022 at 07:33:35PM +0800, zhenwei pi wrote:
>>>>>> A VM uses 2M huge pages for its RAM. Once an MCE (@HVAy in
>>>>>> [HVAx, HVAz)) occurs, the whole 2M range ([HVAx, HVAz)) becomes
>>>>>> inaccessible to the hypervisor, but the guest poisons only the 4K
>>>>>> page (@GPAy in [GPAx, GPAz)), so it may hit another 511 MCEs
>>>>>> ([GPAx, GPAz) except GPAy). This is the worst case, so I want to
>>>>>> add '__le32 corrupted_pages' to struct virtio_balloon_config. It
>>>>>> would be used in the next step: reporting the 512 * 4K
>>>>>> 'corrupted_pages' to the guest, so that the guest has a chance to
>>>>>> isolate the other 511 pages ahead of time. The guest actually
>>>>>> loses 2M, and fixing 512 * 4K ahead of time seems to help
>>>>>> significantly.
>>>>>
>>>>> It sounds hackish to teach a virtio device to assume that one page
>>>>> will always be poisoned in huge page granularity. That's only a
>>>>> limitation of the host kernel, not of virtio itself.
>>>>>
>>>>> E.g., there is an upstream effort ongoing to enable double mapping
>>>>> of hugetlbfs pages, so hugetlb pages can also be mapped in 4k
>>>>> granularity. That opens up the possibility of doing page poisoning
>>>>> of huge pages in 4k granularity too. Once that is ready, the
>>>>> assumption can go away, and that does sound like a better approach
>>>>> to this problem.
>>>>
>>>> +1.
>>>>
>>>> A hypervisor should always strive to minimize the guest memory loss.
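For context, the config space extension proposed above would presumably
look something like this (a sketch only: the corrupted_pages field is
the one proposed in this thread, and the surrounding fields are slightly
simplified from the current include/uapi/linux/virtio_balloon.h):

struct virtio_balloon_config {
	/* Number of pages the host wants the guest to give up. */
	__le32 num_pages;
	/* Number of pages actually in the balloon. */
	__le32 actual;
	/* Free page hint command id, read-only by the guest. */
	__le32 free_page_hint_cmd_id;
	/* Stores PAGE_POISON if page poisoning is in use. */
	__le32 poison_val;
	/* Proposed: number of hardware-corrupted pages to report. */
	__le32 corrupted_pages;
};

Presumably the device would bump the counter and send a config change
notification, giving the guest a chance to isolate the reported pages
before touching them.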
>>>>
>>>> The HugeTLB double-mapping enlightened memory poisoning behavior
>>>> (only poison 4K out of a 2MB huge page on the host and 4K in the
>>>> guest) is a much better solution here. To be completely transparent,
>>>> it's not _strictly_ required to poison the page (whatever its
>>>> granularity) on the host side, as long as the following are true:
>>>>
>>>> 1. The hypervisor can emulate the _minimized_ (e.g., 4K) poison to
>>>>    the guest.
>>>> 2. The host page with the UC error is "isolated" (via PG_HWPOISON
>>>>    or in some other way) and prevented from being reused by other
>>>>    processes.
>>>>
>>>> For #2, PG_HWPOISON together with the HugeTLB double-mapping
>>>> enlightened memory poisoning is a good solution.
>>>>
>>>>>
>>>>>>
>>>>>>>
>>>>>>> I assume that when talking about "the performance of memory
>>>>>>> drops a lot", you imply that this patch set can mitigate that
>>>>>>> performance drop?
>>>>>>>
>>>>>>> But why do you see a performance drop? Because we might lose
>>>>>>> some possible THP candidates (in the host or the guest) and you
>>>>>>> want to plug those holes? I assume you'll see a performance drop
>>>>>>> simply because poisoning memory is expensive, including
>>>>>>> migrating pages around on CE.
>>>>>>>
>>>>>>> If you have some numbers to share, especially before/after this
>>>>>>> change, that would be great.
>>>>>>
>>>>>> The CE storm leads to two problems I have seen:
>>>>>> 1. The memory bandwidth drops to 10%~20% of normal, and the CPU's
>>>>>>    cycles per instruction increase a lot.
>>>>>> 2. The THR interrupt (/proc/interrupts) fires frequently, and the
>>>>>>    CPU has to spend a lot of time handling IRQs.
>>>>>
>>>>> I have no good knowledge of CMCI at all, but if 2) is true then
>>>>> I'm wondering whether it's necessary to handle the interrupts that
>>>>> frequently. When I was reading the Intel CMCI vector handler I
>>>>> stumbled over this comment:
>>>>>
>>>>> /*
>>>>>  * The interrupt handler. This is called on every event.
>>>>>  * Just call the poller directly to log any events.
>>>>>  * This could in theory increase the threshold under high load,
>>>>>  * but doesn't for now.
>>>>>  */
>>>>> static void intel_threshold_interrupt(void)
>>>>>
>>>>> I think that matches what I was thinking: for 2), I'm not sure
>>>>> whether it can be seen as a CMCI problem that could potentially be
>>>>> mitigated by adjusting the CMCI threshold dynamically.
>>>>
>>>> The performance drop caused by a CE storm comes from the extra
>>>> cycles spent on the ECC steps in the memory controller, not from
>>>> CMCI handling. This is observed in the Google fleet as well. A good
>>>> solution is to monitor the CE rate closely in user space via
>>>> /dev/mcelog and migrate all VMs to another host once the CE rate
>>>> exceeds some threshold.
>>>>
>>>> CMCI is a _background_ interrupt that is not handled in the process
>>>> execution context, and its handler is set up to switch to poll
>>>> (1 / 5 min) mode if more than roughly a dozen CEs are reported via
>>>> CMCI per second.
>>>>>
>>>>> --
>>>>> Peter Xu
>>>>
>>>
>>> Hi Andrew, David, Naoya,
>>>
>>> Following your suggestions, I'll give up on the memory-failure
>>> improvement for huge pages in this series.
>>>
>>> Is it worth recovering corrupted pages for the guest kernel? I'll
>>> follow your decision.
>>
>> Well, as I said, I am not sure if we really need/want this for a
>> handful of 4k poisoned pages in a VM.
>> As I suspected, doing so might primarily be interesting for some sort
>> of de-fragmentation (to again allow a higher-order page to be placed
>> at the affected PFNs), not because of the slight reduction in
>> available memory. A simple VM reboot would get the job done similarly.
>>
>
> Sure, let's drop this idea. Thanks to all for the suggestions.

Thanks for the interesting idea + discussions. Just a note that if you
believe we want/need something like that and there is a reasonable use
case, please tell us we're wrong and push back :)

-- 
Thanks,

David / dhildenb
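As an illustration of the user-space mitigation Jue describes above
(watch the corrected-error rate and evacuate VMs once it spikes), here
is a minimal sketch. It assumes EDAC is enabled and reads the
per-memory-controller ce_count counter under /sys/devices/system/edac
rather than parsing the binary /dev/mcelog stream; the threshold value
and the migration step are hypothetical placeholders:

/*
 * ce_watch.c - poll the EDAC corrected-error counter, flag a CE storm.
 *
 * Sketch only: assumes CONFIG_EDAC and a single memory controller at
 * /sys/devices/system/edac/mc/mc0.
 */
#include <stdio.h>
#include <unistd.h>

#define CE_PER_MIN_THRESHOLD 100	/* hypothetical policy knob */

static long read_ce_count(void)
{
	FILE *f = fopen("/sys/devices/system/edac/mc/mc0/ce_count", "r");
	long count = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%ld", &count) != 1)
		count = -1;
	fclose(f);
	return count;
}

int main(void)
{
	long prev = read_ce_count();
	long cur;

	while (prev >= 0) {
		sleep(60);
		cur = read_ce_count();
		if (cur < 0)
			break;
		if (cur - prev > CE_PER_MIN_THRESHOLD) {
			fprintf(stderr, "CE storm: %ld CEs in the last minute\n",
				cur - prev);
			/* Here one would trigger live migration of all
			 * VMs, e.g. via libvirt; left out of this sketch. */
		}
		prev = cur;
	}
	return 0;
}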