From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Juergen Gross, Jan Beulich, Roger Pau Monné
Subject: [PATCH 4.9 30/62] xen/blkfront: don't take local copy of a request from the ring page
Date: Mon, 6 Dec 2021 15:56:13 +0100
Message-Id: <20211206145550.214191680@linuxfoundation.org>
In-Reply-To: <20211206145549.155163074@linuxfoundation.org>
References: <20211206145549.155163074@linuxfoundation.org>

From: Juergen Gross

commit 8f5a695d99000fc3aa73934d7ced33cfc64dcdab upstream.

In order to avoid a malicious backend being able to influence the local
copy of a request, build the request locally first and then copy it to
the ring page, instead of doing it the other way round as today.

Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Acked-by: Roger Pau Monné
Link: https://lore.kernel.org/r/20210730103854.12681-3-jgross@suse.com
Signed-off-by: Juergen Gross
Signed-off-by: Greg Kroah-Hartman
---
(An illustrative sketch of the build-locally-then-copy pattern follows
the diff below.)

 drivers/block/xen-blkfront.c |   25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -532,7 +532,7 @@ static unsigned long blkif_ring_get_requ
 	rinfo->shadow[id].status = REQ_WAITING;
 	rinfo->shadow[id].associated_id = NO_ASSOCIATED_ID;
 
-	(*ring_req)->u.rw.id = id;
+	rinfo->shadow[id].req.u.rw.id = id;
 
 	return id;
 }
@@ -540,11 +540,12 @@ static unsigned long blkif_ring_get_requ
 static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_info *rinfo)
 {
 	struct blkfront_info *info = rinfo->dev_info;
-	struct blkif_request *ring_req;
+	struct blkif_request *ring_req, *final_ring_req;
 	unsigned long id;
 
 	/* Fill out a communications ring structure. */
-	id = blkif_ring_get_request(rinfo, req, &ring_req);
+	id = blkif_ring_get_request(rinfo, req, &final_ring_req);
+	ring_req = &rinfo->shadow[id].req;
 
 	ring_req->operation = BLKIF_OP_DISCARD;
 	ring_req->u.discard.nr_sectors = blk_rq_sectors(req);
@@ -555,8 +556,8 @@ static int blkif_queue_discard_req(struc
 	else
 		ring_req->u.discard.flag = 0;
 
-	/* Keep a private copy so we can reissue requests when recovering. */
-	rinfo->shadow[id].req = *ring_req;
+	/* Copy the request to the ring page. */
+	*final_ring_req = *ring_req;
 
 	return 0;
 }
@@ -689,6 +690,7 @@ static int blkif_queue_rw_req(struct req
 {
 	struct blkfront_info *info = rinfo->dev_info;
 	struct blkif_request *ring_req, *extra_ring_req = NULL;
+	struct blkif_request *final_ring_req, *final_extra_ring_req = NULL;
 	unsigned long id, extra_id = NO_ASSOCIATED_ID;
 	bool require_extra_req = false;
 	int i;
@@ -730,7 +732,8 @@ static int blkif_queue_rw_req(struct req
 	}
 
 	/* Fill out a communications ring structure. */
-	id = blkif_ring_get_request(rinfo, req, &ring_req);
+	id = blkif_ring_get_request(rinfo, req, &final_ring_req);
+	ring_req = &rinfo->shadow[id].req;
 
 	num_sg = blk_rq_map_sg(req->q, req, rinfo->shadow[id].sg);
 	num_grant = 0;
@@ -781,7 +784,9 @@ static int blkif_queue_rw_req(struct req
 		ring_req->u.rw.nr_segments = num_grant;
 		if (unlikely(require_extra_req)) {
 			extra_id = blkif_ring_get_request(rinfo, req,
-							  &extra_ring_req);
+							  &final_extra_ring_req);
+			extra_ring_req = &rinfo->shadow[extra_id].req;
+
 			/*
 			 * Only the first request contains the scatter-gather
 			 * list.
@@ -823,10 +828,10 @@ static int blkif_queue_rw_req(struct req
 	if (setup.segments)
 		kunmap_atomic(setup.segments);
 
-	/* Keep a private copy so we can reissue requests when recovering. */
-	rinfo->shadow[id].req = *ring_req;
+	/* Copy request(s) to the ring page. */
+	*final_ring_req = *ring_req;
 	if (unlikely(require_extra_req))
-		rinfo->shadow[extra_id].req = *extra_ring_req;
+		*final_extra_ring_req = *extra_ring_req;
 
 	if (max_grefs > 0)
 		gnttab_free_grant_references(setup.gref_head);
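
For readers unfamiliar with the pattern: the hunks above change both
queue paths so the request is assembled in the driver-private shadow
copy (rinfo->shadow[id].req, reached through ring_req) and only the
finished structure is written to the ring page shared with the backend
(*final_ring_req = *ring_req).  Below is a minimal, self-contained
sketch of that idea; struct demo_request, shared_ring, shadow and the
two queue functions are hypothetical names for illustration, not the
real blkfront types or API.

	/*
	 * Sketch only, NOT the real blkfront code.  "shared_ring" stands
	 * in for a ring page that is also mapped by an untrusted backend.
	 */
	#include <stdint.h>

	struct demo_request {
		uint64_t id;
		uint64_t sector;
		uint32_t op;
	};

	/* Page shared with the (potentially malicious) backend. */
	static struct demo_request shared_ring[32];

	/* Private shadow copies, invisible to the backend. */
	static struct demo_request shadow[32];

	/*
	 * Old, unsafe pattern: build the request directly in the shared
	 * page, then copy it back for the private record.  The backend
	 * can rewrite the request between the field writes and the final
	 * copy, so the "private" copy may already be tampered with.
	 */
	static void queue_unsafe(unsigned int idx, uint64_t sector)
	{
		shared_ring[idx].id = idx;
		shared_ring[idx].sector = sector;  /* backend sees this... */
		shared_ring[idx].op = 1;
		shadow[idx] = shared_ring[idx];    /* ...before this copy */
	}

	/*
	 * New, hardened pattern: build the request in the private shadow
	 * first, then publish the finished structure to the shared page
	 * in one assignment.  The backend never sees a half-built request
	 * and cannot influence the local copy.
	 */
	static void queue_safe(unsigned int idx, uint64_t sector)
	{
		shadow[idx].id = idx;
		shadow[idx].sector = sector;
		shadow[idx].op = 1;
		shared_ring[idx] = shadow[idx];    /* single final copy */
	}

	int main(void)
	{
		queue_unsafe(0, 123);
		queue_safe(1, 456);
		return 0;
	}

The reason the single final copy matters in blkfront specifically is
that the shadow copy is later used to reissue requests when recovering
the ring; with the old ordering that copy was derived from memory the
backend could write to, which is exactly what the patch closes off.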