From: Hillf Danton <hdanton@sina.com>
To: Mathias Nyman
Cc: Wesley Cheng, Greg KH, Christoph Hellwig, LKML, USB
Subject: Re: [PATCH v6 03/33] xhci: sideband: add initial api to register a sideband entity
Date: Sat, 16 Sep 2023 17:02:49 +0800
Message-Id: <20230916090249.94-1-hdanton@sina.com>
In-Reply-To: <20230916001026.315-4-quic_wcheng@quicinc.com>
References: <20230916001026.315-1-quic_wcheng@quicinc.com>
X-Mailing-List: linux-usb@vger.kernel.org

On Fri, 15 Sep 2023 17:09:56 -0700 Wesley Cheng wrote:
> +static int
> +xhci_ring_to_sgtable(struct xhci_sideband *sb, struct xhci_ring *ring,
> +		      struct device *dev)
> +{
> +	struct sg_table *sgt;
> +	struct xhci_segment *seg;
> +	struct page **pages;
> +	unsigned int n_pages;
> +	size_t sz;
> +	int i;
> +
> +	sz = ring->num_segs * TRB_SEGMENT_SIZE;
> +	n_pages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
> +	pages = kvmalloc_array(n_pages, sizeof(struct page *), GFP_KERNEL);
> +	if (!pages)
> +		return 0;
> +
> +	sgt = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
> +	if (!sgt) {
> +		kvfree(pages);
> +		return 0;
> +	}
> +
> +	seg = ring->first_seg;
> +
> +	/*
> +	 * Rings can potentially have multiple segments, create an array that
> +	 * carries page references to allocated segments. Utilize the
> +	 * sg_alloc_table_from_pages() to create the sg table, and to ensure
> +	 * that page links are created.
> +	 */
> +	for (i = 0; i < ring->num_segs; i++) {
> +		pages[i] = vmalloc_to_page(seg->trbs);
> +		seg = seg->next;
> +	}

Given dma_pool_zalloc() in xhci_segment_alloc() and dma_alloc_coherent() in
pool_alloc_page(), it is incorrect to derive a struct page from the CPU
address returned by the DMA alloc routine, as vmalloc_to_page() does here.
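
As a rough, untested sketch of one possible direction: since the segment
memory ultimately comes from dma_alloc_coherent(), the DMA API provides
dma_get_sgtable() to build a scatterlist from the dma handle instead of the
CPU address. The helper name below is made up for illustration, whether a
per-segment call is valid for dma_pool-backed memory (the pool may pack
several segments into one coherent allocation) would need checking, and
merging multiple segments into a single sg_table is left out.

#include <linux/dma-mapping.h>	/* dma_get_sgtable() */
#include "xhci.h"		/* struct xhci_segment, TRB_SEGMENT_SIZE */

/* Hypothetical helper: describe one ring segment with a scatterlist. */
static int xhci_segment_to_sgtable(struct device *dev,
				   struct xhci_segment *seg,
				   struct sg_table *sgt)
{
	/*
	 * seg->trbs is the CPU address and seg->dma the handle returned by
	 * dma_pool_zalloc(), so let the DMA API derive the pages rather
	 * than calling vmalloc_to_page() on coherent memory.
	 */
	return dma_get_sgtable(dev, sgt, seg->trbs, seg->dma,
			       TRB_SEGMENT_SIZE);
}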