From mboxrd@z Thu Jan 1 00:00:00 1970
From: Stefan Roesch
Subject: [PATCH v4 04/12] io_uring: add CQE32 setup processing
Date: Tue, 26 Apr 2022 11:21:26 -0700
Message-ID: <20220426182134.136504-5-shr@fb.com>
In-Reply-To: <20220426182134.136504-1-shr@fb.com>
References: <20220426182134.136504-1-shr@fb.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain
This adds two new functions to set up and fill the CQE32 result structure.

Signed-off-by: Stefan Roesch
Signed-off-by: Jens Axboe
Reviewed-by: Kanchan Joshi
---
 fs/io_uring.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 9712483d3a17..8cb51676d38d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2175,12 +2175,70 @@ static inline bool __io_fill_cqe_req_filled(struct io_ring_ctx *ctx,
 			req->cqe.res, req->cqe.flags);
 }
 
+static inline bool __io_fill_cqe32_req_filled(struct io_ring_ctx *ctx,
+					      struct io_kiocb *req)
+{
+	struct io_uring_cqe *cqe;
+	u64 extra1 = req->extra1;
+	u64 extra2 = req->extra2;
+
+	trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
+				req->cqe.res, req->cqe.flags);
+
+	/*
+	 * If we can't get a cq entry, userspace overflowed the
+	 * submission (by quite a lot). Increment the overflow count in
+	 * the ring.
+	 */
+	cqe = io_get_cqe(ctx);
+	if (likely(cqe)) {
+		memcpy(cqe, &req->cqe, sizeof(struct io_uring_cqe));
+		cqe->big_cqe[0] = extra1;
+		cqe->big_cqe[1] = extra2;
+		return true;
+	}
+
+	return io_cqring_event_overflow(ctx, req->cqe.user_data,
+					req->cqe.res, req->cqe.flags);
+}
+
 static inline bool __io_fill_cqe_req(struct io_kiocb *req, s32 res, u32 cflags)
 {
 	trace_io_uring_complete(req->ctx, req, req->cqe.user_data, res, cflags);
 	return __io_fill_cqe(req->ctx, req->cqe.user_data, res, cflags);
 }
 
+static inline void __io_fill_cqe32_req(struct io_kiocb *req, s32 res, u32 cflags,
+				       u64 extra1, u64 extra2)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+	struct io_uring_cqe *cqe;
+
+	if (WARN_ON_ONCE(!(ctx->flags & IORING_SETUP_CQE32)))
+		return;
+	if (req->flags & REQ_F_CQE_SKIP)
+		return;
+
+	trace_io_uring_complete(ctx, req, req->cqe.user_data, res, cflags);
+
+	/*
+	 * If we can't get a cq entry, userspace overflowed the
+	 * submission (by quite a lot). Increment the overflow count in
+	 * the ring.
+	 */
+	cqe = io_get_cqe(ctx);
+	if (likely(cqe)) {
+		WRITE_ONCE(cqe->user_data, req->cqe.user_data);
+		WRITE_ONCE(cqe->res, res);
+		WRITE_ONCE(cqe->flags, cflags);
+		WRITE_ONCE(cqe->big_cqe[0], extra1);
+		WRITE_ONCE(cqe->big_cqe[1], extra2);
+		return;
+	}
+
+	io_cqring_event_overflow(ctx, req->cqe.user_data, res, cflags);
+}
+
 static noinline bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data,
 				     s32 res, u32 cflags)
 {
-- 
2.30.2