Date: Tue, 10 May 2016 14:15:38 +0200
From: Peter Zijlstra
To: Vikas Shivappa
Cc: vikas.shivappa@intel.com, x86@kernel.org, linux-kernel@vger.kernel.org,
	hpa@zytor.com, tglx@linutronix.de, mingo@kernel.org,
	ravi.v.shankar@intel.com, tony.luck@intel.com, fenghua.yu@intel.com
Subject: Re: [PATCH 2/3] perf/x86/mbm: Fix mbm counting for RMID reuse
Message-ID: <20160510121538.GA3193@twins.programming.kicks-ass.net>
References: <1462578255-5858-1-git-send-email-vikas.shivappa@linux.intel.com>
	<1462578255-5858-3-git-send-email-vikas.shivappa@linux.intel.com>
In-Reply-To: <1462578255-5858-3-git-send-email-vikas.shivappa@linux.intel.com>

On Fri, May 06, 2016 at 04:44:14PM -0700, Vikas Shivappa wrote:
> This patch tries to fix the issue where multiple perf instances try to
> monitor the same PID.

> MBM cannot count directly in the usual perf way of continuously adding
> the diff of current h/w counter and the prev count to the event->count
> because of some h/w dependencies:

And yet the patch appears to do exactly that; *confused*.

> (1) the mbm h/w counters overflow.

As do most other counters, so what is your point? You also have a
software timer that fires at less than the overflow period, so wraps
can be folded in there.

> (2) There are limited h/w RMIDs and hence we recycle the RMIDs due to
> which an event may count from different RMIDs.

This fails to explain why that is a problem.
> (3) Also we may not want to count at every sched_in and sched_out
> because the MSR reads involve quite a bit of overhead.

Every single other PMU driver does exactly this; why are you special?

You list three reasons why you think you cannot do the regular thing,
but completely fail to explain how any of them is actually a problem.

> However we try to do something similar to usual perf way in this patch
> and mainly handle (1) and (3).

> update_sample takes care of the overflow in the hardware counters and
> provides abstraction by returning total bytes counted as if there was no
> overflow. We use this abstraction to count as below:
>
> init:
>   event->prev_count = update_sample(rmid) //returns current total_bytes
>
> count: // MBM right now uses count instead of read
>   cur_count = update_sample(rmid)
>   event->count += cur_count - event->prev_count
>   event->prev_count = cur_count

So where does cqm_prev_count come from, and why do you need it? What is
wrong with event->hw.prev_count?

In fact, I cannot find any event->hw.prev_count usage in this patch or
the next one, so can we simply use that instead of adding pointless new
members?