From mboxrd@z Thu Jan 1 00:00:00 1970
From: "K. Y. Srinivasan"
To: gregkh@linuxfoundation.org, linux-kernel@vger.kernel.org,
	devel@linuxdriverproject.org, olaf@aepfle.de, apw@canonical.com,
	vkuznets@redhat.com, jasowang@redhat.com
Cc: "K. Y. Srinivasan"
Subject: [PATCH 4/7] Drivers: hv: vmbus: Use the new virt_xx barrier code
Date: Wed, 23 Mar 2016 17:53:54 -0700
Message-Id: <1458780837-4367-4-git-send-email-kys@microsoft.com>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1458780837-4367-1-git-send-email-kys@microsoft.com>
References: <1458780816-4328-1-git-send-email-kys@microsoft.com>
	<1458780837-4367-1-git-send-email-kys@microsoft.com>

Use the virt_xx barriers that have been defined for use in virtual
machines.

Signed-off-by: K. Y. Srinivasan
---
 drivers/hv/ring_buffer.c |   14 +++++++-------
 1 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
index 67dc245..c2c2b2e 100644
--- a/drivers/hv/ring_buffer.c
+++ b/drivers/hv/ring_buffer.c
@@ -33,14 +33,14 @@
 void hv_begin_read(struct hv_ring_buffer_info *rbi)
 {
 	rbi->ring_buffer->interrupt_mask = 1;
-	mb();
+	virt_mb();
 }
 
 u32 hv_end_read(struct hv_ring_buffer_info *rbi)
 {
 	rbi->ring_buffer->interrupt_mask = 0;
-	mb();
+	virt_mb();
 
 	/*
 	 * Now check to see if the ring buffer is still empty.
@@ -68,12 +68,12 @@ u32 hv_end_read(struct hv_ring_buffer_info *rbi)
 static bool hv_need_to_signal(u32 old_write, struct hv_ring_buffer_info *rbi)
 {
-	mb();
+	virt_mb();
 	if (READ_ONCE(rbi->ring_buffer->interrupt_mask))
 		return false;
 
 	/* check interrupt_mask before read_index */
-	rmb();
+	virt_rmb();
 	/*
 	 * This is the only case we need to signal when the
 	 * ring transitions from being empty to non-empty.
@@ -104,7 +104,7 @@ static bool hv_need_to_signal_on_read(struct hv_ring_buffer_info *rbi)
 	u32 cur_write_sz;
 	u32 pending_sz;
 
-	mb();
+	virt_mb();
 	pending_sz = READ_ONCE(rbi->ring_buffer->pending_send_sz);
 	/* If the other end is not blocked on write don't bother. */
 	if (pending_sz == 0)
@@ -359,7 +359,7 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
 					     sizeof(u64));
 
 	/* Issue a full memory barrier before updating the write index */
-	mb();
+	virt_mb();
 
 	/* Now, update the write location */
 	hv_set_next_write_location(outring_info, next_write_location);
@@ -435,7 +435,7 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
 	 * the writer may start writing to the read area once the read index
 	 * is updated.
 	 */
-	mb();
+	virt_mb();
 
 	/* Update the read index */
 	hv_set_next_read_location(inring_info, next_read_location);
-- 
1.7.4.1