From: "K. Y. Srinivasan"
To: gregkh@linuxfoundation.org, linux-kernel@vger.kernel.org, devel@linuxdriverproject.org, olaf@aepfle.de, apw@canonical.com, vkuznets@redhat.com, jasowang@redhat.com
Cc: "K. Y. Srinivasan"
Subject: [PATCH 3/6] Drivers: hv: vmbus: Use the new virt_xx barrier code
Date: Sat, 2 Apr 2016 17:59:48 -0700
Message-Id: <1459645191-25290-3-git-send-email-kys@microsoft.com>
In-Reply-To: <1459645191-25290-1-git-send-email-kys@microsoft.com>
References: <1459645169-25251-1-git-send-email-kys@microsoft.com> <1459645191-25290-1-git-send-email-kys@microsoft.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Use the virt_xx barriers that have been defined for use in virtual
machines.

Signed-off-by: K. Y. Srinivasan
---
 drivers/hv/ring_buffer.c | 14 +++++++-------
 1 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
index 6ea1b55..8f518af 100644
--- a/drivers/hv/ring_buffer.c
+++ b/drivers/hv/ring_buffer.c
@@ -33,14 +33,14 @@
 void hv_begin_read(struct hv_ring_buffer_info *rbi)
 {
 	rbi->ring_buffer->interrupt_mask = 1;
-	mb();
+	virt_mb();
 }
 
 u32 hv_end_read(struct hv_ring_buffer_info *rbi)
 {
 	rbi->ring_buffer->interrupt_mask = 0;
-	mb();
+	virt_mb();
 
 	/*
 	 * Now check to see if the ring buffer is still empty.
@@ -68,12 +68,12 @@ u32 hv_end_read(struct hv_ring_buffer_info *rbi)
 static bool hv_need_to_signal(u32 old_write, struct hv_ring_buffer_info *rbi)
 {
-	mb();
+	virt_mb();
 	if (READ_ONCE(rbi->ring_buffer->interrupt_mask))
 		return false;
 
 	/* check interrupt_mask before read_index */
-	rmb();
+	virt_rmb();
 	/*
 	 * This is the only case we need to signal when the
 	 * ring transitions from being empty to non-empty.
@@ -115,7 +115,7 @@ static bool hv_need_to_signal_on_read(struct hv_ring_buffer_info *rbi)
 	 * read index, we could miss sending the interrupt. Issue a full
 	 * memory barrier to address this.
 	 */
-	mb();
+	virt_mb();
 	pending_sz = READ_ONCE(rbi->ring_buffer->pending_send_sz);
 
 	/* If the other end is not blocked on write don't bother. */
@@ -371,7 +371,7 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
 			     sizeof(u64));
 
 	/* Issue a full memory barrier before updating the write index */
-	mb();
+	virt_mb();
 
 	/* Now, update the write location */
 	hv_set_next_write_location(outring_info, next_write_location);
@@ -447,7 +447,7 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
 	 * the writer may start writing to the read area once the read index
 	 * is updated.
 	 */
-	mb();
+	virt_mb();
 
 	/* Update the read index */
 	hv_set_next_read_location(inring_info, next_read_location);
-- 
1.7.4.1