From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 8 Sep 2023 11:29:01 -0300
From: Jason Gunthorpe
To: Daisuke Matsuda
Cc: linux-rdma@vger.kernel.org, leon@kernel.org, zyjzyj2000@gmail.com,
	linux-kernel@vger.kernel.org, rpearsonhpe@gmail.com,
	yangx.jy@fujitsu.com, lizhijian@fujitsu.com, y-goto@fujitsu.com
Subject: Re: [PATCH for-next v6 7/7] RDMA/rxe: Add support for the traditional Atomic operations with ODP
References: <908514dfa6bbeae72d36481d893674b254ee416d.1694153251.git.matsuda-daisuke@fujitsu.com>
In-Reply-To: <908514dfa6bbeae72d36481d893674b254ee416d.1694153251.git.matsuda-daisuke@fujitsu.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Sep 08, 2023 at 03:26:48PM +0900, Daisuke Matsuda wrote:
> +int rxe_odp_mr_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
> +			  u64 compare, u64 swap_add, u64 *orig_val)
> +{
> +	int err;
> +	int retry = 0;
> +	struct ib_umem_odp *umem_odp = to_ib_umem_odp(mr->umem);
> +
> +	mutex_lock(&umem_odp->umem_mutex);
> +
> +	/* Atomic operations manipulate a single char. */
> +	if (rxe_odp_check_pages(mr, iova, sizeof(char), 0))
> +		goto need_fault;
> +
> +	err = rxe_mr_do_atomic_op(mr, iova, opcode, compare,
> +				  swap_add, orig_val);
> +
> +	mutex_unlock(&umem_odp->umem_mutex);

You should just use the xarray spinlock; the umem_mutex should only be
held around the faulting flow.

> +
> +	return err;
> +
> +need_fault:
> +	/* allow max 3 tries for pagefault */
> +	do {

Why a retry loop? We already have a retry loop in
ib_umem_odp_map_dma_and_lock; it doesn't need to be done externally. If
you reach here with the lock held then progress should be guaranteed
under the lock.

Jason