From: Pavel Begunkov
To: Jens Axboe, io-uring
Subject: Re: [PATCH] io_uring: be smarter about waking multiple CQ ring waiters
Date: Tue, 10 Aug 2021 03:08:21 +0100
In-Reply-To: <4f310c1a-2630-75ba-1692-cc7d12c11fc0@fb.com>
References: <27997f97-68cc-63c3-863b-b0c460bc42c0@fb.com> <4f310c1a-2630-75ba-1692-cc7d12c11fc0@fb.com>
X-Mailing-List: io-uring@vger.kernel.org

On 8/10/21 2:55 AM, Jens Axboe wrote:
> On 8/9/21 7:42 PM,
> Pavel Begunkov wrote:
>> On 8/6/21 9:19 PM, Jens Axboe wrote:
>>> Currently we only wake the first waiter, even if we have enough entries
>>> posted to satisfy multiple waiters. Improve that situation so that
>>> every waiter knows how much the CQ tail has to advance before they can
>>> be safely woken up.
>>>
>>> With this change, if we have N waiters each asking for 1 event and we get
>>> 4 completions, then we wake up 4 waiters. If we have N waiters asking
>>> for 2 completions and we get 4 completions, then we wake up the first
>>> two. Previously, only the first waiter would've been woken up.
>>>
>>> Signed-off-by: Jens Axboe
>>>
>>> ---
>>>
>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>> index bf548af0426c..04df4fa3c75e 100644
>>> --- a/fs/io_uring.c
>>> +++ b/fs/io_uring.c
>>> @@ -1435,11 +1435,13 @@ static inline bool io_should_trigger_evfd(struct io_ring_ctx *ctx)
>>>
>>>  static void io_cqring_ev_posted(struct io_ring_ctx *ctx)
>>>  {
>>> -	/* see waitqueue_active() comment */
>>> -	smp_mb();
>>> -
>>> -	if (waitqueue_active(&ctx->cq_wait))
>>> -		wake_up(&ctx->cq_wait);
>>> +	/*
>>> +	 * wake_up_all() may seem excessive, but io_wake_function() and
>>> +	 * io_should_wake() handle the termination of the loop and only
>>> +	 * wake as many waiters as we need to.
>>> +	 */
>>> +	if (wq_has_sleeper(&ctx->cq_wait))
>>> +		wake_up_all(&ctx->cq_wait);
>>>  	if (ctx->sq_data && waitqueue_active(&ctx->sq_data->wait))
>>>  		wake_up(&ctx->sq_data->wait);
>>>  	if (io_should_trigger_evfd(ctx))
>>> @@ -6968,20 +6970,21 @@ static int io_sq_thread(void *data)
>>>  struct io_wait_queue {
>>>  	struct wait_queue_entry wq;
>>>  	struct io_ring_ctx *ctx;
>>> -	unsigned to_wait;
>>> +	unsigned cq_tail;
>>>  	unsigned nr_timeouts;
>>>  };
>>>
>>>  static inline bool io_should_wake(struct io_wait_queue *iowq)
>>>  {
>>>  	struct io_ring_ctx *ctx = iowq->ctx;
>>> +	unsigned tail = ctx->cached_cq_tail + atomic_read(&ctx->cq_timeouts);
>>
>> Seems adding cq_timeouts can be dropped from here and from iowq.cq_tail.
>
> Good point, we can drop it at both ends.
>
>>>  	/*
>>>  	 * Wake up if we have enough events, or if a timeout occurred since we
>>>  	 * started waiting. For timeouts, we always want to return to userspace,
>>>  	 * regardless of event count.
>>>  	 */
>>> -	return io_cqring_events(ctx) >= iowq->to_wait ||
>>
>> Don't we miss the smp_rmb() previously provided by io_cqring_events()?
>
> For? We aren't reading any user-modified parts.

I was rather thinking about who provides the barrier for userspace, but
that should indeed be on the userspace side, and the function is called
from arbitrary CPU/context anyway.

>>> +	return tail >= iowq->cq_tail ||
>>
>> tails might overflow
>
> Indeed, I actually did fix this one before committing it.

Great

-- 
Pavel Begunkov