From: Alexander Graf <agraf@suse.de>
To: Stewart Smith <stewart@linux.vnet.ibm.com>,
	linuxppc-dev@lists.ozlabs.org, paulus@samba.org,
	kvm-ppc@vger.kernel.org, Mel Gorman <mgorman@suse.com>
Subject: Re: [PATCH v2] Use the POWER8 Micro Partition Prefetch Engine in KVM HV on POWER8
Date: Thu, 10 Jul 2014 13:05:47 +0200	[thread overview]
Message-ID: <53BE738B.1010100@suse.de> (raw)
In-Reply-To: <m3bnszqwx1.fsf@oc8180480414.ibm.com>


On 09.07.14 00:59, Stewart Smith wrote:
> Hi!
>
> Thanks for review, much appreciated!
>
> Alexander Graf <agraf@suse.de> writes:
>> On 08.07.14 07:06, Stewart Smith wrote:
>>> @@ -1528,6 +1535,7 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>>>    	int i, need_vpa_update;
>>>    	int srcu_idx;
>>>    	struct kvm_vcpu *vcpus_to_update[threads_per_core];
>>> +	phys_addr_t phy_addr, tmp;
>> Please put the variable declarations into the if () branch so that the
>> compiler can catch potential leaks :)
> ack. will fix.
>
>>> @@ -1590,9 +1598,48 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>>>    
>>>    	srcu_idx = srcu_read_lock(&vc->kvm->srcu);
>>>    
>>> +	/* If we have a saved list of L2/L3, restore it */
>>> +	if (cpu_has_feature(CPU_FTR_ARCH_207S) && vc->mpp_buffer) {
>>> +		phy_addr = virt_to_phys((void *)vc->mpp_buffer);
>>> +#if defined(CONFIG_PPC_4K_PAGES)
>>> +		phy_addr = (phy_addr + 8*4096 - 1) & ~(phys_addr_t)(8*4096 - 1);
>> get_free_pages() is automatically aligned to the order, no?
> That's what Paul reckoned too, but we haven't been able to find
> anywhere that documents that behaviour. Do you happen to be able to
> point to docs/source saying this is part of the API?

Phew - it's probably buried somewhere. I could only find this document 
saying that we always get order-aligned allocations:

http://www.thehackademy.net/madchat/ebooks/Mem_virtuelle/linux-mm/zonealloc.html

Mel, do you happen to have any pointer to something that explicitly (or
even just implicitly) says that get_free_pages() returns order-aligned
memory?
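
Something like this would at least let us spot-check it at runtime
(just a sketch - it demonstrates the behaviour on a given kernel, it
doesn't document a guarantee):

	/* If the buddy allocator guarantees order-alignment, an order-3
	 * allocation (8 pages) must come back aligned to its own size. */
	unsigned long addr = __get_free_pages(GFP_KERNEL | __GFP_ZERO, 3);

	if (addr) {
		WARN_ON(addr & ((PAGE_SIZE << 3) - 1));
		free_pages(addr, 3);
	}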

>
>>> +#endif
>>> +		tmp = phy_addr & PPC_MPPE_ADDRESS_MASK;
>>> +		tmp = tmp | PPC_MPPE_WHOLE_TABLE;
>>> +
>>> +		/* For sanity, abort any 'save' requests in progress */
>>> +		asm volatile(PPC_LOGMPP(R1) : : "r" (tmp));
>>> +
>>> +		/* Initiate a cache-load request */
>>> +		mtspr(SPRN_MPPR, tmp);
>>> +	}
>> In fact, this whole block up here could be a function, no?
> It could, perfectly happy for it to be one. Will fix.
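
Something like the below is roughly what I have in mind - an untested
sketch, function name made up, and it drops the manual alignment on the
assumption that get_free_pages() already returns order-aligned memory:

	static void kvmppc_start_restoring_l2_cache(const struct kvmppc_vcore *vc)
	{
		phys_addr_t mpp_addr;

		mpp_addr = virt_to_phys((void *)vc->mpp_buffer) &
			   PPC_MPPE_ADDRESS_MASK;

		/* Abort any 'save' request still in flight ... */
		asm volatile(PPC_LOGMPP(R1) : : "r" (mpp_addr | PPC_MPPE_WHOLE_TABLE));

		/* ... then initiate the cache-load request */
		mtspr(SPRN_MPPR, mpp_addr | PPC_MPPE_WHOLE_TABLE);
	}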
>
>>> +
>>> +	/* Allocate memory before switching out of the guest so we
>>> +	   don't trash the L2/L3 with memory allocation traffic */
>>> +	if (cpu_has_feature(CPU_FTR_ARCH_207S) && !vc->mpp_buffer) {
>>> +		vc->mpp_buffer = __get_free_pages(GFP_KERNEL|__GFP_ZERO,
>>> +						  MPP_BUFFER_ORDER);
>> get_order(64 * 1024)?
>>
>> Also, why allocate it here and not on vcore creation?
> There's also the possibility of saving/restoring part of the L3 cache
> as well, and I was envisioning a future patch which checks a flag in
> the vcore (maybe exposed via sysfs or whatever mechanism is
> applicable) to decide whether it should save/restore L2 only or
> L2/L3, so it makes a bit more sense to allocate the buffer there
> rather than elsewhere.
>
> There's also no real reason to fail to create a vcore if we can't
> allocate a buffer for L2/L3 cache contents - retrying later is perfectly
> harmless.

If allocation fails during vcore creation, just don't save/restore the
L2 cache contents at all. I really prefer to have allocation and
deallocation happen at init time - and such low-order allocations will
most likely succeed anyway.
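
I.e. something along these lines in the vcore creation path (sketch
only, reusing the MPP_BUFFER_ORDER from your patch):

	/* Best effort: if this fails we simply run without MPP
	 * save/restore instead of failing vcore creation. */
	if (cpu_has_feature(CPU_FTR_ARCH_207S))
		vcore->mpp_buffer = __get_free_pages(GFP_KERNEL | __GFP_ZERO,
						     MPP_BUFFER_ORDER);

with a matching free_pages(vcore->mpp_buffer, MPP_BUFFER_ORDER) when
the vcore goes away.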

Let's leave the L3 cache bits for later when we know whether it actually 
has an impact. I personally doubt it :).


Alex


Thread overview: 21+ messages
2014-07-04  1:23 [PATCH] Use the POWER8 Micro Partition Prefetch Engine in KVM HV on POWER8 Stewart Smith
2014-07-08  5:06 ` [PATCH v2] " Stewart Smith
2014-07-08 10:41   ` Alexander Graf
2014-07-08 22:59     ` Stewart Smith
2014-07-10 11:05       ` Alexander Graf [this message]
2014-07-10 13:07         ` Mel Gorman
2014-07-10 13:17           ` Alexander Graf
2014-07-10 13:30             ` Mel Gorman
2014-07-10 13:30               ` Alexander Graf
2014-07-17  3:19   ` [PATCH v3] " Stewart Smith
2014-07-17  7:55     ` Alexander Graf
2014-07-18  4:10       ` Stewart Smith
2014-07-28 12:30         ` Alexander Graf
2014-07-17 23:52     ` Paul Mackerras
2014-07-18  4:10       ` Stewart Smith
2014-07-18  4:18     ` [PATCH v4 0/2] Use the POWER8 Micro Partition Prefetch Engine in KVM HV Stewart Smith
2014-07-18  4:18       ` [PATCH v4 1/2] Split out struct kvmppc_vcore creation to separate function Stewart Smith
2014-07-18  7:47         ` Paul Mackerras
2014-07-18  4:18       ` [PATCH v4 2/2] Use the POWER8 Micro Partition Prefetch Engine in KVM HV on POWER8 Stewart Smith
2014-07-18  7:48         ` Paul Mackerras
2014-07-28 12:34       ` [PATCH v4 0/2] Use the POWER8 Micro Partition Prefetch Engine in KVM HV Alexander Graf
