From: David Gibson <david@gibson.dropbear.id.au>
To: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Cc: qemu-devel@nongnu.org, xiaoguangrong.eric@gmail.com,
mst@redhat.com, bharata@linux.ibm.com, qemu-ppc@nongnu.org,
vaibhav@linux.ibm.com, imammedo@redhat.com
Subject: Re: [Qemu-devel] [RFC PATCH 3/4] spapr: Add NVDIMM device support
Date: Wed, 27 Feb 2019 15:27:30 +1100
Message-ID: <20190227042729.GZ6872@umbus.fritz.box>
In-Reply-To: <88da2809-61f8-0a7a-d2c4-54cb0f3e5e35@linux.ibm.com>
On Mon, Feb 18, 2019 at 09:45:13PM +0530, Shivaprasad G Bhat wrote:
>
>
> On 02/18/2019 04:32 AM, David Gibson wrote:
> > On Fri, Feb 15, 2019 at 04:41:09PM +0530, Shivaprasad G Bhat wrote:
> > > Thanks for the comments David. Please find my replies inline..
[snip]
> > > > > +
> > > > > +    qemu_uuid_unparse(&uuid, buf);
> > > > > +    _FDT((fdt_setprop_string(fdt, offset, "ibm,unit-guid", buf)));
> > > > > +
> > > > > +    _FDT((fdt_setprop_cell(fdt, offset, "ibm,my-drc-index", drc_idx)));
> > > > > +
> > > > > +    /* NB: What it should be? */
> > > > > +    _FDT(fdt_setprop_cell(fdt, offset, "ibm,latency-attribute", 828));
> > > > > +
> > > > > +    _FDT((fdt_setprop_u64(fdt, offset, "ibm,block-size",
> > > > > +                          SPAPR_MINIMUM_SCM_BLOCK_SIZE)));
> > > > > +    _FDT((fdt_setprop_u64(fdt, offset, "ibm,number-of-blocks",
> > > > > +                          size / SPAPR_MINIMUM_SCM_BLOCK_SIZE)));
> > > > > +    _FDT((fdt_setprop_cell(fdt, offset, "ibm,metadata-size", label_size)));
> > > > > +
> > > > > +    return offset;
> > > > > +}
> > > > > +
> > > > > +static void spapr_add_nvdimm(DeviceState *dev, uint64_t addr,
> > > > > +                             uint64_t size, uint32_t node,
> > > > > +                             Error **errp)
> > > > > +{
> > > > > +    sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_hotplug_handler(dev));
> > > > > +    sPAPRDRConnector *drc;
> > > > > +    bool hotplugged = spapr_drc_hotplugged(dev);
> > > > > +    NVDIMMDevice *nvdimm = NVDIMM(OBJECT(dev));
> > > > > +    void *fdt;
> > > > > +    int fdt_offset, fdt_size;
> > > > > +    Error *local_err = NULL;
> > > > > +
> > > > > +    spapr_dr_connector_new(OBJECT(spapr), TYPE_SPAPR_DRC_PMEM,
> > > > > +                           addr / SPAPR_MINIMUM_SCM_BLOCK_SIZE);
> > > > > +    drc = spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM,
> > > > > +                          addr / SPAPR_MINIMUM_SCM_BLOCK_SIZE);
> > > > > +    g_assert(drc);
> > > > Creating the DRC in the hotplug path looks bogus. Generally the DRC
> > > > has to exist before you can even attempt to plug the device.
> > > We don't really know how many DRCs to create. Unlike memory hotplug,
> > > where we know how many LMBs are required to fit within maxmem, in this
> > > case we don't know how many NVDIMM devices the guest can have. That is
> > > the reason I am creating the DRC on demand. I'll see if it is possible
> > > to address this by putting a cap on the maximum number of NVDIMM
> > > devices a guest can have.
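
For illustration only, a minimal sketch of that cap-based alternative: the
SPAPR_MAX_NVDIMM_SLOTS constant, the helper name, and the slot-based DRC id
(rather than the address-based id the patch uses) are all assumptions made
for this example, not taken from the patch.

/* Sketch, not part of the patch: pre-create one PMEM DRC per possible
 * NVDIMM slot at machine init, instead of creating DRCs on demand in the
 * hotplug path.
 */
#define SPAPR_MAX_NVDIMM_SLOTS 32

static void spapr_create_nvdimm_drcs(sPAPRMachineState *spapr)
{
    uint32_t slot;

    for (slot = 0; slot < SPAPR_MAX_NVDIMM_SLOTS; slot++) {
        /* One DRC per slot, so every attachment point exists up front */
        spapr_dr_connector_new(OBJECT(spapr), TYPE_SPAPR_DRC_PMEM, slot);
    }
}

Called once from machine init, this would mean every DRC the guest could ever
need already exists before any hotplug request arrives.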
> > Urgh, PAPR. First it specifies a crappy hotplug model that requires
> > zillions of fixed attachment points to be instantiated, then it breaks
> > its own model.
> >
> > But.. I still don't really understand how this works.
> >
> > a) How does the guest know the DRC index to use for the new NVDIMM?
> > Generally that comes from the device tree, but the guest doesn't
> > get new device tree information until it calls configure-connector
> > for which it needs the DRC index.
> The DRC is passed in the device tree blob sent as the payload of the hotplug
> interrupt
Um.. there is no device tree blob as payload of a hotplug interrupt.
The guest only gets device tree information when it makes
configure-connector calls.

I see that there is a drc identifier field though, so I guess you're
getting the DRC from that.  In existing cases the guest looks that up
in the *existing* device tree to find information about that DRC.  I
guess in the case of NVDIMMs here it doesn't need any more info.
> from which the guest picks the DRC index and makes the subsequent calls.
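
As a point of reference, here is a rough sketch of how the notification end of
the hotplug path might look if it follows the existing sPAPR by-index pattern;
the calls and their signatures are assumed for this sketch, not quoted from
the patch.

/* Sketch, assuming the usual sPAPR hotplug flow: attach the device's FDT
 * fragment to the DRC, then signal the guest with the DRC index (the
 * "drc identifier field" mentioned above) so it can call
 * configure-connector against it.
 */
spapr_drc_attach(drc, dev, fdt, fdt_offset, &local_err);
if (local_err) {
    error_propagate(errp, local_err);
    return;
}

if (hotplugged) {
    spapr_hotplug_req_add_by_index(drc);
}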
> > b) AFAICT, NVDIMMs would also require HPT space, much like regular
> > memory would. PowerVM doesn't have HPT resizing, so surely it must
> > already have some sort of cap on the amount of NVDIMM space in
> > order to size the HPT correctly.
> On Power KVM we will enforce that the NVDIMM is mapped within maxmem,
> though the spec allows it to be mapped outside of it. Coming back to the
> original point about creating the DRCs at hotplug time, we could impose a
> limit on the number of NVDIMM devices that can be hotplugged, so that we
> can create the DRCs at machine init time.
Ah, so NVDIMMs live within the same maxmem limit as regular memory.
Ok, I guess that makes sense.
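
To make that constraint concrete, a sketch of the kind of pre-plug check it
implies follows; the function name is made up for illustration, maxram_size
is the MachineState field, and error_setg() is from qapi/error.h.

/* Sketch only: reject an NVDIMM whose guest-physical mapping would fall
 * outside maxmem, mirroring the "NVDIMMs live within maxmem" point above.
 */
static void spapr_nvdimm_check_addr(MachineState *machine, uint64_t addr,
                                    uint64_t size, Error **errp)
{
    if (addr + size > machine->maxram_size) {
        error_setg(errp, "NVDIMM at 0x%" PRIx64 " (size 0x%" PRIx64
                   ") does not fit below maxmem (0x%" PRIx64 ")",
                   addr, size, machine->maxram_size);
    }
}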
--
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson