From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 21 Nov 2019 23:38:41 -0500 (EST)
From: Pankaj Gupta
To: Dan Williams
Cc: Jeff Moyer, linux-nvdimm, Linux Kernel Mailing List, Linux ACPI,
	Vishal L Verma, Dave Jiang, Ira Weiny, "Rafael J. Wysocki",
	Len Brown, Vivek Goyal, Keith Busch
Wysocki" , Len Brown , Vivek Goyal , Keith Busch Message-ID: <560894997.35969622.1574397521533.JavaMail.zimbra@redhat.com> In-Reply-To: References: <20191120092831.6198-1-pagupta@redhat.com> <1617854972.35808055.1574323227395.JavaMail.zimbra@redhat.com> Subject: Re: [PATCH] virtio pmem: fix async flush ordering MIME-Version: 1.0 X-Originating-IP: [10.67.116.36, 10.4.195.13] Thread-Topic: virtio pmem: fix async flush ordering Thread-Index: LSm9oNBEmc1YyqOBo3tZ7gh0w041hg== X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 X-MC-Unique: K6V0ySTOOYGesCDyUuoUIw-1 X-Mimecast-Spam-Score: 0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org > > > > > > > > > > > Remove logic to create child bio in the async flush function w= hich > > > > > > causes child bio to get executed after parent bio > > > > > > 'pmem_make_request' > > > > > > completes. This resulted in wrong ordering of REQ_PREFLUSH wit= h > > > > > > the > > > > > > data write request. > > > > > > > > > > > > Instead we are performing flush from the parent bio to maintai= n > > > > > > the > > > > > > correct order. Also, returning from function 'pmem_make_reques= t' > > > > > > if > > > > > > REQ_PREFLUSH returns an error. > > > > > > > > > > > > Reported-by: Jeff Moyer > > > > > > Signed-off-by: Pankaj Gupta > > > > > > > > > > There's a slight change in behavior for the error path in the > > > > > virtio_pmem driver. Previously, all errors from virtio_pmem_flus= h > > > > > were > > > > > converted to -EIO. Now, they are reported as-is. I think this i= s > > > > > actually an improvement. > > > > > > > > > > I'll also note that the current behavior can result in data > > > > > corruption, > > > > > so this should be tagged for stable. > > > > > > > > I added that and was about to push this out, but what about the fac= t > > > > that now the guest will synchronously wait for flushing to occur. T= he > > > > goal of the child bio was to allow that to be an I/O wait with > > > > overlapping I/O, or at least not blocking the submission thread. Do= es > > > > the block layer synchronously wait for PREFLUSH requests? If not I > > > > think a synchronous wait is going to be a significant performance > > > > regression. Are there any numbers to accompany this change? > > > > > > Why not just swap the parent child relationship in the PREFLUSH case? > > > > I we are already inside parent bio "make_request" function and we creat= e > > child > > bio. How we exactly will swap the parent/child relationship for PREFLUS= H > > case? > > > > Child bio is queued after parent bio completes. >=20 > Sorry, I didn't quite mean with bio_split, but issuing another request > in front of the real bio. See md_flush_request() for inspiration. o.k. Thank you. Will try to post patch today to be considered for 5.4. Best regards, Pankaj >=20 >=20