From: Darren Kenny <darren.kenny@oracle.com>
To: Alexander Bulekov <alxndr@bu.edu>, qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, bsd@redhat.com, f4bug@amsat.org,
stefanha@redhat.com, Alexander Bulekov <alxndr@bu.edu>
Subject: Re: [PATCH v2 3/3] fuzz: move some DMA hooks
Date: Mon, 15 Mar 2021 12:12:48 +0000
Message-ID: <m2sg4wsji7.fsf@oracle.com>
In-Reply-To: <20210313231859.941263-4-alxndr@bu.edu>
On Saturday, 2021-03-13 at 18:18:59 -05, Alexander Bulekov wrote:
> For the sparse-mem device, we want the fuzzer to populate entire DMA
> reads from sparse-mem, rather than hooking into the individual MMIO
> memory_region_dispatch_read operations. Otherwise, the fuzzer will treat
> each sequential read separately (and populate it with a separate
> pattern). Work around this by rearranging some DMA hooks. Since the
> fuzzer has its own logic to skip accidentally writing to MMIO regions,
> we can call the DMA cb outside the flatview_translate loop.
>
> Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
> ---
>  softmmu/memory.c  | 1 -
>  softmmu/physmem.c | 2 +-
>  2 files changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/softmmu/memory.c b/softmmu/memory.c
> index 874a8fccde..3b8e428064 100644
> --- a/softmmu/memory.c
> +++ b/softmmu/memory.c
> @@ -1440,7 +1440,6 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
>      unsigned size = memop_size(op);
>      MemTxResult r;
>
> -    fuzz_dma_read_cb(addr, size, mr);
>      if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
>          *pval = unassigned_mem_read(mr, addr, size);
>          return MEMTX_DECODE_ERROR;
> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index 7e8b0fab89..6a58c86750 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -2831,6 +2831,7 @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
>      bool release_lock = false;
>      uint8_t *buf = ptr;
>
> +    fuzz_dma_read_cb(addr, len, mr);
>      for (;;) {
>          if (!memory_access_is_direct(mr, false)) {
>              /* I/O case */
> @@ -2841,7 +2842,6 @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
>              stn_he_p(buf, l, val);
>          } else {
>              /* RAM case */
> -            fuzz_dma_read_cb(addr, len, mr);
>              ram_ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
>              memcpy(buf, ram_ptr, l);
>          }
> --
> 2.28.0
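
To make the effect of the move concrete, here is a minimal, hypothetical
sketch of a fuzzer-side callback of this shape. It is NOT the code in
QEMU's generic fuzzer; the helper names (fuzzer_owns_region,
fill_range_with_next_pattern) are invented for illustration, and the
exact prototype may differ:

    /* Hypothetical sketch only -- the helpers below are invented.
     * Hooked before the translate loop in flatview_read_continue(),
     * this runs once per guest-visible DMA read of [addr, addr+len),
     * instead of once per l-sized fragment inside the loop. */
    void fuzz_dma_read_cb(size_t addr, size_t len, MemoryRegion *mr)
    {
        if (!fuzzer_owns_region(mr)) {   /* e.g. skip MMIO regions */
            return;
        }
        /* Fill the whole range with one pattern from the fuzz input,
         * so all fragments of the same read see the same pattern. */
        fill_range_with_next_pattern(addr, len);
    }

With the old placement (in memory_region_dispatch_read and in the RAM
case inside the loop), a read that the flatview loop split into several
fragments would have triggered the callback once per fragment, which is
the separate-pattern behavior the commit message describes.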
Thread overview: 9+ messages
2021-03-13 23:18 [PATCH v2 0/3] fuzz: Add a sparse-memory device to accelerate fuzzing Alexander Bulekov
2021-03-13 23:18 ` [PATCH v2 1/3] memory: add a sparse memory device for fuzzing Alexander Bulekov
2021-03-14 23:14 ` Alexander Bulekov
2021-03-15 12:09 ` Darren Kenny
2021-03-15 13:52 ` Alexander Bulekov
2021-03-13 23:18 ` [PATCH v2 2/3] fuzz: configure a sparse-mem device, by default Alexander Bulekov
2021-03-15 12:12 ` Darren Kenny
2021-03-13 23:18 ` [PATCH v2 3/3] fuzz: move some DMA hooks Alexander Bulekov
2021-03-15 12:12 ` Darren Kenny [this message]