Date: Mon, 22 May 2017 16:37:12 +0200
From: Radim Krčmář <rkrcmar@redhat.com>
To: Linus Torvalds
Cc: Juergen Gross, Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [GIT PULL] KVM fixes for v4.12-rc2
Message-ID: <20170522143711.GA30620@potion>
References: <20170519184340.GC11087@potion> <97a8ffc4-dd0f-3e2a-0dc4-1ac46c993745@suse.com>
In-Reply-To: <97a8ffc4-dd0f-3e2a-0dc4-1ac46c993745@suse.com>

2017-05-20 12:52+0200, Juergen Gross:
> On 20/05/17 00:21, Linus Torvalds wrote:
> > So I noticed that my diffstat didn't match either the KVM or the Xen pull.
> >
> > The *reason* seems to be that both Radim and Juergen have enabled the
> > "patience" diff, because if I add "--patience" to the diff line, I get
> > the same numbers you guys report.
>
> In my case it was a patch which was much easier to review using the
> patience diff. I just didn't switch back afterwards (what I did now).

Similar here.  I have had the 'histogram' algorithm in my global config
for a few years now.  Using the same algorithm for the pull diffstat
would be best, though, so I added "algorithm = default" to the repo
config.

---

This KVM pull shows the reason why I abandoned 'myers' -- it can
obfuscate a simple cut & paste.  (I don't recall which case originally
pushed me to 'histogram', and there apparently have been no significant
improvements to be gained by switching back since then.)

The patch in question is 76d837a4c0f9 ("KVM: PPC: Book3S PR: Don't
include SPAPR TCE code on non-pseries platforms"); see the trimmed
output of both algorithms below.
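
For completeness, the configuration mentioned above should boil down to
roughly this (a sketch from memory, not pasted from my actual setup):

  # review diffs with 'histogram' everywhere by default
  git config --global diff.algorithm histogram

  # and in the repo used for pull requests, go back to the stock
  # algorithm so the diffstat matches what Linus generates
  git config diff.algorithm default
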
First, the important hunks of the 'histogram' diff:

@@ -244,20 +262,6 @@ static int kvmppc_h_pr_protect(struct kvm_vcpu *vcpu)
         return EMULATE_DONE;
 }

-static int kvmppc_h_pr_put_tce(struct kvm_vcpu *vcpu)
-{
-        unsigned long liobn = kvmppc_get_gpr(vcpu, 4);
-        unsigned long ioba = kvmppc_get_gpr(vcpu, 5);
-        unsigned long tce = kvmppc_get_gpr(vcpu, 6);
-        long rc;
-
-        rc = kvmppc_h_put_tce(vcpu, liobn, ioba, tce);
-        if (rc == H_TOO_HARD)
-                return EMULATE_FAIL;
-        kvmppc_set_gpr(vcpu, 3, rc);
-        return EMULATE_DONE;
-}
-
 static int kvmppc_h_pr_logical_ci_load(struct kvm_vcpu *vcpu)
 {
         long rc;
@@ -280,6 +284,21 @@ static int kvmppc_h_pr_logical_ci_store(struct kvm_vcpu *vcpu)
         return EMULATE_DONE;
 }

+#ifdef CONFIG_SPAPR_TCE_IOMMU
+static int kvmppc_h_pr_put_tce(struct kvm_vcpu *vcpu)
+{
+        unsigned long liobn = kvmppc_get_gpr(vcpu, 4);
+        unsigned long ioba = kvmppc_get_gpr(vcpu, 5);
+        unsigned long tce = kvmppc_get_gpr(vcpu, 6);
+        long rc;
+
+        rc = kvmppc_h_put_tce(vcpu, liobn, ioba, tce);
+        if (rc == H_TOO_HARD)
+                return EMULATE_FAIL;
+        kvmppc_set_gpr(vcpu, 3, rc);
+        return EMULATE_DONE;
+}
+
 static int kvmppc_h_pr_put_tce_indirect(struct kvm_vcpu *vcpu)
 {
         unsigned long liobn = kvmppc_get_gpr(vcpu, 4);

and now the same change with the 'myers' algorithm:

@@ -244,36 +262,37 @@ static int kvmppc_h_pr_protect(struct kvm_vcpu *vcpu)
         return EMULATE_DONE;
 }

-static int kvmppc_h_pr_put_tce(struct kvm_vcpu *vcpu)
+static int kvmppc_h_pr_logical_ci_load(struct kvm_vcpu *vcpu)
 {
-        unsigned long liobn = kvmppc_get_gpr(vcpu, 4);
-        unsigned long ioba = kvmppc_get_gpr(vcpu, 5);
-        unsigned long tce = kvmppc_get_gpr(vcpu, 6);
         long rc;

-        rc = kvmppc_h_put_tce(vcpu, liobn, ioba, tce);
+        rc = kvmppc_h_logical_ci_load(vcpu);
         if (rc == H_TOO_HARD)
                 return EMULATE_FAIL;
         kvmppc_set_gpr(vcpu, 3, rc);
         return EMULATE_DONE;
 }

-static int kvmppc_h_pr_logical_ci_load(struct kvm_vcpu *vcpu)
+static int kvmppc_h_pr_logical_ci_store(struct kvm_vcpu *vcpu)
 {
         long rc;

-        rc = kvmppc_h_logical_ci_load(vcpu);
+        rc = kvmppc_h_logical_ci_store(vcpu);
         if (rc == H_TOO_HARD)
                 return EMULATE_FAIL;
         kvmppc_set_gpr(vcpu, 3, rc);
         return EMULATE_DONE;
 }

-static int kvmppc_h_pr_logical_ci_store(struct kvm_vcpu *vcpu)
+#ifdef CONFIG_SPAPR_TCE_IOMMU
+static int kvmppc_h_pr_put_tce(struct kvm_vcpu *vcpu)
 {
+        unsigned long liobn = kvmppc_get_gpr(vcpu, 4);
+        unsigned long ioba = kvmppc_get_gpr(vcpu, 5);
+        unsigned long tce = kvmppc_get_gpr(vcpu, 6);
         long rc;

-        rc = kvmppc_h_logical_ci_store(vcpu);
+        rc = kvmppc_h_put_tce(vcpu, liobn, ioba, tce);
         if (rc == H_TOO_HARD)
                 return EMULATE_FAIL;
         kvmppc_set_gpr(vcpu, 3, rc);

The move of kvmppc_h_pr_put_tce() under #ifdef is not so simple anymore.
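
If anyone wants to reproduce the comparison, something along these
lines should work in a kernel tree that has the commit (the exact
invocations are a sketch from memory):

  # 'histogram' keeps the moved function in one piece
  git show --diff-algorithm=histogram 76d837a4c0f9

  # 'myers' splices it into its neighbours instead
  git show --diff-algorithm=myers 76d837a4c0f9
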