From: Paul Brook <paul@codesourcery.com>
To: qemu-devel@nongnu.org
Cc: Blue Swirl <blauwirbel@gmail.com>,
agraf@suse.de, Richard Henderson <rth@twiddle.net>
Subject: Re: [Qemu-devel] [PATCH 0/2] [RFC] 64-bit io paths
Date: Fri, 28 May 2010 21:45:51 +0100 [thread overview]
Message-ID: <201005282145.51468.paul@codesourcery.com> (raw)
In-Reply-To: <4BD0BD6E.4010000@twiddle.net>
> The basic device interface looks like
> ...
> +
> +/* Register a memory region at START_ADDR/SIZE. The REGION structure will
> +   be initialized appropriately for DEV using CB as the operation set. */
> +extern void cpu_register_memory_region(MemoryRegion *region,
> +                                       const MemoryCallbackInfo *cb,
> +                                       target_phys_addr_t start_addr,
> +                                       target_phys_addr_t size);
> +
> +/* Unregister a memory region. */
> +extern void cpu_unregister_memory_region(MemoryRegion *);
> +
> +/* Allocate ram for use with cpu_register_memory_region. */
> +extern const MemoryCallbackInfo *qemu_ram_alloc_r(ram_addr_t);
> +extern void qemu_ram_free_r(const MemoryCallbackInfo *);
>
> The basic idea is that we have a MemoryRegion object that describes
> a contiguous mapping within the guest address space. This object
> needs to handle RAM, ROM and devices. The desire to handle memory
> and devices the same comes from the wish to have PCI device BARs
> show up as plain memory in the TLB, and to be able to handle all
> PCI device regions identically within sysbus.
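If I'm reading this right, a typical device would end up doing something
like the following. This is only a sketch of my understanding of the
interface: MyDevState, mydev_cb, mydev_map and the sizes are made up, and
I'm assuming MemoryCallbackInfo is some table of read/write hooks without
guessing at its exact contents.

typedef struct MyDevState {
    MemoryRegion mmio;   /* region storage lives in the device state */
    MemoryRegion ram;
    /* ... device registers, etc. ... */
} MyDevState;

/* Device operation table; the exact layout of MemoryCallbackInfo
   (per-size read/write hooks, etc.) is a guess on my part. */
extern const MemoryCallbackInfo mydev_cb;

static void mydev_map(MyDevState *s, target_phys_addr_t base,
                      ram_addr_t ram_size)
{
    /* MMIO region: the device supplies its own callback table. */
    cpu_register_memory_region(&s->mmio, &mydev_cb, base, 0x1000);

    /* On-board RAM: qemu_ram_alloc_r() hands back a callback table
       as well, so the registration call is identical. */
    cpu_register_memory_region(&s->ram, qemu_ram_alloc_r(ram_size),
                               base + 0x1000, ram_size);
}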
Looks reasonable to me.
I'm tempted to add a DeviceState* argument to cpu_register_memory_region,
something like the sketch below. This might be useful for debugging, and
would allow future disjoint bus support. OTOH it may be more trouble than
it's worth.
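Just to be concrete, what I have in mind is something like this (the
argument position is arbitrary, and presumably a NULL dev would be
allowed for regions with no owning device, e.g. plain board RAM):

extern void cpu_register_memory_region(MemoryRegion *region,
                                       DeviceState *dev,
                                       const MemoryCallbackInfo *cb,
                                       target_phys_addr_t start_addr,
                                       target_phys_addr_t size);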
> I will admit that I do not yet have a good replacement for IO_MEM_ROMD,
> or toggling the read-only bit on a RAM region.