From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: Received: (majordomo@vger.kernel.org) by vger.kernel.org
	via listexpand id S1760748AbcDEWUU (ORCPT );
	Tue, 5 Apr 2016 18:20:20 -0400
Received: from p3plsmtps2ded01.prod.phx3.secureserver.net ([208.109.80.58]:45677
	"EHLO p3plsmtps2ded01.prod.phx3.secureserver.net" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S1760690AbcDEWUS (ORCPT );
	Tue, 5 Apr 2016 18:20:18 -0400
x-originating-ip: 72.167.245.219
From: "K. Y. Srinivasan" 
To: gregkh@linuxfoundation.org, linux-kernel@vger.kernel.org,
	devel@linuxdriverproject.org, olaf@aepfle.de, apw@canonical.com,
	vkuznets@redhat.com, jasowang@redhat.com
Cc: "K. Y. Srinivasan" 
Subject: [PATCH 4/8] Drivers: hv: vmbus: Use the new virt_xx barrier code
Date: Tue, 5 Apr 2016 16:57:43 -0700
Message-Id: <1459900667-20367-4-git-send-email-kys@microsoft.com>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1459900667-20367-1-git-send-email-kys@microsoft.com>
References: <1459900641-20328-1-git-send-email-kys@microsoft.com>
	<1459900667-20367-1-git-send-email-kys@microsoft.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Use the virt_xx barriers that have been defined for use in virtual
machines.

Signed-off-by: K. Y. Srinivasan
---
 drivers/hv/ring_buffer.c |   14 +++++++-------
 1 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
index 6ea1b55..8f518af 100644
--- a/drivers/hv/ring_buffer.c
+++ b/drivers/hv/ring_buffer.c
@@ -33,14 +33,14 @@
 void hv_begin_read(struct hv_ring_buffer_info *rbi)
 {
 	rbi->ring_buffer->interrupt_mask = 1;
-	mb();
+	virt_mb();
 }
 
 u32 hv_end_read(struct hv_ring_buffer_info *rbi)
 {
 	rbi->ring_buffer->interrupt_mask = 0;
-	mb();
+	virt_mb();
 
 	/*
 	 * Now check to see if the ring buffer is still empty.
@@ -68,12 +68,12 @@ u32 hv_end_read(struct hv_ring_buffer_info *rbi)
 static bool hv_need_to_signal(u32 old_write, struct hv_ring_buffer_info *rbi)
 {
-	mb();
+	virt_mb();
 	if (READ_ONCE(rbi->ring_buffer->interrupt_mask))
 		return false;
 
 	/* check interrupt_mask before read_index */
-	rmb();
+	virt_rmb();
 	/*
 	 * This is the only case we need to signal when the
 	 * ring transitions from being empty to non-empty.
@@ -115,7 +115,7 @@ static bool hv_need_to_signal_on_read(struct hv_ring_buffer_info *rbi)
 	 * read index, we could miss sending the interrupt. Issue a full
 	 * memory barrier to address this.
 	 */
-	mb();
+	virt_mb();
 	pending_sz = READ_ONCE(rbi->ring_buffer->pending_send_sz);
 
 	/* If the other end is not blocked on write don't bother. */
@@ -371,7 +371,7 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
 					     sizeof(u64));
 
 	/* Issue a full memory barrier before updating the write index */
-	mb();
+	virt_mb();
 
 	/* Now, update the write location */
 	hv_set_next_write_location(outring_info, next_write_location);
@@ -447,7 +447,7 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
 	 * the writer may start writing to the read area once the read index
 	 * is updated.
 	 */
-	mb();
+	virt_mb();
 
 	/* Update the read index */
 	hv_set_next_read_location(inring_info, next_read_location);
-- 
1.7.4.1