Subject: Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
From: Jens Axboe <axboe@kernel.dk>
To: Max Gurtovoy, Christoph Hellwig
Cc: io-uring@vger.kernel.org, linux-nvme@lists.infradead.org,
 Hannes Reinecke, Oren Duer
Date: Mon, 20 Dec 2021 09:34:24 -0700
Message-ID: <92c5065e-dc2a-9e3f-404a-64c6e22624b7@kernel.dk>

On 12/20/21 8:29 AM, Max Gurtovoy wrote:
>
> On 12/20/2021 4:19 PM, Jens Axboe wrote:
>> On 12/20/21 3:11 AM, Max Gurtovoy wrote:
>>> On 12/19/2021 4:48 PM, Jens Axboe wrote:
>>>> On 12/19/21 5:14 AM, Max Gurtovoy wrote:
>>>>> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>>>>>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>>>>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>>>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>>>>>> +		       absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2? I think this
>>>>>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> So we need to decide whether to open code it or use the helper function.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> An inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>>>>>> Yes agree, that's been my stance too :-)
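For reference, here is roughly what that loop looks like once it calls
the helper instead of open coding the copy. This is a sketch built from
the hunk quoted above, the nvme_sq_copy_cmd() helper added in patch 2,
and the existing nvme_write_sq_db() doorbell helper in
drivers/nvme/host/pci.c; the function name is illustrative rather than
taken verbatim from the final patch:

	static void nvme_submit_cmds(struct nvme_queue *nvmeq,
				     struct request **rqlist)
	{
		spin_lock(&nvmeq->sq_lock);
		while (!rq_list_empty(*rqlist)) {
			struct request *req = rq_list_pop(rqlist);
			struct nvme_iod *iod = blk_mq_rq_to_pdu(req);

			/* copy the SQE and advance the tail, wrapping at q_depth */
			nvme_sq_copy_cmd(nvmeq, &iod->cmd);
		}
		/* ring the doorbell once for the whole batch */
		nvme_write_sq_db(nvmeq, true);
		spin_unlock(&nvmeq->sq_lock);
	}

This is also what the "1 dbr for a batch of N commands" point below is
about: the doorbell is written once after the list has been drained,
not once per command.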
>>>>>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>>>>>> the performance degradation measured on the first try was a measurement
>>>>>>>>>>>>>>>> error?
>>>>>>>>>>>>>>> Giving 1 dbr for a batch of N commands sounds like a good idea. Also
>>>>>>>>>>>>>>> for the RDMA host.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> But how do you moderate it? What is the batch_sz <--> time_to_wait
>>>>>>>>>>>>>>> algorithm?
>>>>>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>>>>>> I'm saying that you can wait for batch_max_count too long and it
>>>>>>>>>>>>> won't be efficient from a latency POV.
>>>>>>>>>>>>>
>>>>>>>>>>>>> So it's better to limit the block layer to wait for whichever comes
>>>>>>>>>>>>> first: x usecs or batch_max_count, before issuing queue_rqs.
>>>>>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>>>>>
>>>>>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>>>>>> requests, and we'll never e.g. wait on requests if we're out of tags.
>>>>>>>>>>>> That will result in a plug flush to begin with.
>>>>>>>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>>>>>>>
>>>>>>>>>>> My concern is: if the user application submitted only 28 requests, will
>>>>>>>>>>> you wait forever? Or for a very long time?
>>>>>>>>>>>
>>>>>>>>>>> I guess not, but I'm asking how you know how to batch and when to
>>>>>>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>>>>>>> The plug is in the stack of the task, so that condition can never
>>>>>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>>>>>> requests are issued.
>>>>>>>>> So if I'm running fio with --iodepth=28, what will the plug do? Send
>>>>>>>>> batches of 28? Or 1 by 1?
>>>>>>>> --iodepth just controls the overall depth, the batch submit count
>>>>>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>>>>>> one at a time, then you'll get one at a time further down too. Hence
>>>>>>>> the batching is directly driven by what the application is already
>>>>>>>> doing.
>>>>>>> I see. Thanks for the explanation.
>>>>>>>
>>>>>>> So it works only for io_uring based applications?
>>>>>> It's only enabled for io_uring right now, but it's generically available
>>>>>> for anyone that wants to use it... Would be trivial to do for aio, and
>>>>>> other spots that currently use blk_start_plug() and have an idea of how
>>>>>> many IOs will be submitted.
>>>>> Can you please share an example application (or is it fio patches) that
>>>>> can submit batches? The same one that was used to test this patchset is
>>>>> fine too.
>>>>>
>>>>> I would like to test it with our NVMe SNAP controllers and also to
>>>>> develop NVMe/RDMA queue_rqs code and test the perf with it.
>>>> You should just be able to use iodepth_batch with fio. For my peak
>>>> testing, I use t/io_uring from the fio repo. By default, it'll run at a
>>>> set QD and do batches of 32 for complete and submit. You can just run:
>>>>
>>>> t/io_uring
>>>>
>>>> maybe adding -p0 for IRQ driven rather than polled IO.
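As a concrete example of the fio route, an invocation along these lines
drives submission in batches with the io_uring engine. All options are
standard fio options (iodepth_batch is an alias for iodepth_batch_submit),
and the device path is only an example:

	fio --name=batched --ioengine=io_uring --direct=1 --rw=randread \
	    --bs=4k --iodepth=32 --iodepth_batch_submit=32 \
	    --iodepth_batch_complete=32 --filename=/dev/nvme0n1

With submit and complete batching both set to 32, each submission call
hands the kernel a full batch, so the plug is built up and flushed to
the driver in batches rather than one request at a time.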
>>> I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA,
>>> but it was never called with the t/io_uring test, nor with fio using the
>>> iodepth_batch=32 flag and the io_uring engine.
>>>
>>> Any idea what might be the issue?
>>>
>>> I installed fio from source.
>> The two main restrictions right now are a scheduler and shared tags; are
>> you using either of those?
>
> No.
>
> But maybe I'm missing the .commit_rqs callback. Is it mandatory for this
> feature?

I've only tested with nvme pci, which does have it, but I don't think so.
Unless there's some check somewhere that makes it necessary. Can you
share the patch you're currently using on top?

-- 
Jens Axboe
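As context for the scheduler / shared tags restriction above: the gating
happens when the plug is flushed. The whole list is handed to the
driver's ->queue_rqs() only if every request targets the same queue, no
I/O scheduler is attached, and the tag set is not shared; otherwise the
requests are issued individually. A simplified sketch, with names
approximating the 5.16-era blk_mq_flush_plug_list() rather than quoting
it verbatim:

	void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
	{
		struct request *rq = rq_list_peek(&plug->mq_list);

		if (!rq)
			return;

		if (!plug->multiple_queues && !plug->has_elevator &&
		    !from_schedule && rq->q->mq_ops->queue_rqs &&
		    !(rq->mq_hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)) {
			/* the driver takes what it can from the list */
			rq->q->mq_ops->queue_rqs(&plug->mq_list);
			if (rq_list_empty(plug->mq_list))
				return;
		}

		/* fall back to issuing the remaining requests one by one */
	}

Note that nothing in this path requires .commit_rqs(): a queue_rqs()
implementation is expected to ring the doorbell for the whole batch
itself, consistent with the answer above.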