From: Anthony Liguori <aliguori@us.ibm.com>
To: Avi Kivity <avi@redhat.com>
Cc: qemu-devel@nongnu.org
Subject: [Qemu-devel] Re: [PATCH 1/4] Add a scatter-gather list type and accessors
Date: Wed, 04 Feb 2009 14:50:35 -0600
Message-ID: <4989FF9B.4010809@us.ibm.com>
In-Reply-To: <4989FE92.5020400@redhat.com>
Avi Kivity wrote:
> Anthony Liguori wrote:
>>>
>>> Is it possible to have a blanket license for files which don't have
>>> explicit terms? I don't much like boilerplate.
>>
>> I'd greatly prefer not to. You can refer to a COPYING and we can
>> have a default COPYING file but a copyright is really needed as far
>> as I understand it.
>>
>
> Okay. I'll add an explicit license and leave the generic license to
> our esteemed maintainers.
>
>>>> Would be nice to check for malloc failures and fail gracefully at
>>>> least.
>>>
>>> Do you mean an exit(1)? If so we could just put it in qemu_malloc().
>>
>> In theory, some users may be able to cope with malloc failure. In
>> practice, I don't think anyone can. I'm open to suggestion.
>
> malloc() will never fail on Linux with overcommit enabled; since Linux
> is fairly useless without overcommit, it means you'll never see a
> failure.
Sure it will. You just have to run out of VA space.
> Other ways of allocating memory (stack growth, first access to
> anonymous memory) are not covered. They can fail (most ungracefully)
> without strict overcommit control.
>
> So I suggest to have qemu_malloc() and its friends abort on failure.
As I said, I'm not at all opposed to this. glib does this and it makes
life a lot easier.
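[Editor's note: a minimal sketch of the abort-on-failure allocator being discussed. The helper name follows the thread's qemu_malloc(), but this is an illustration, not QEMU's actual implementation; glib's g_malloc() behaves similarly.]

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch of an allocator that aborts on failure, as proposed above.
 * Hypothetical illustration; the real qemu_malloc() may differ. */
static void *qemu_malloc(size_t size)
{
    void *p = malloc(size ? size : 1);   /* malloc(0) may legally return NULL */
    if (!p) {
        fprintf(stderr, "qemu: memory allocation failure (%zu bytes)\n", size);
        abort();                         /* fail loudly, like g_malloc() */
    }
    return p;
}
```

Callers then never need to check the return value, which is the point: in practice no caller can recover from allocation failure anyway.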
>>> I expect this to trigger rarely since the allocation hint should
>>> suffice nearly 100% of the time. But in case we miss, it's better
>>> to reallocate as little as possible.
>>>
>>> (what I really want is std::vector<>)
>>
>> Which I'm pretty sure has a linear growth strategy :-)
>
> Not in any of the implementations I'm familiar with. I believe
> std::vector<> is required to have amortized O(1) append operations.
Linear growth doesn't imply element-by-element growth. You can have a
coefficient > 1.
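[Editor's note: the distinction being drawn can be sketched as follows, with hypothetical helper names. Growing capacity by a constant factor (coefficient > 1) yields amortized O(1) appends; growing by a fixed increment would make appends amortized O(n).]

```c
#include <stdlib.h>

/* Hypothetical growable array illustrating growth by a coefficient > 1
 * (here 2x), the strategy that makes append amortized O(1). */
typedef struct {
    int *data;
    size_t len;
    size_t cap;
} IntVec;

static void intvec_push(IntVec *v, int value)
{
    if (v->len == v->cap) {
        size_t new_cap = v->cap ? v->cap * 2 : 4;   /* geometric growth */
        int *p = realloc(v->data, new_cap * sizeof(int));
        if (!p) {
            abort();   /* abort on failure, per the discussion above */
        }
        v->data = p;
        v->cap = new_cap;
    }
    v->data[v->len++] = value;
}
```

Pushing n elements triggers only O(log n) reallocations, so the total copying work is O(n) and each push is O(1) amortized.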
Regards,
Anthony Liguori
Thread overview: 17+ messages
2009-02-04 12:25 [Qemu-devel] [PATCH 0/4] Block DMA helpers Avi Kivity
2009-02-04 12:25 ` [Qemu-devel] [PATCH 1/4] Add a scatter-gather list type and accessors Avi Kivity
2009-02-04 19:27 ` [Qemu-devel] " Anthony Liguori
2009-02-04 20:30 ` Avi Kivity
2009-02-04 20:36 ` Anthony Liguori
2009-02-04 20:46 ` Avi Kivity
2009-02-04 20:50 ` Anthony Liguori [this message]
2009-02-04 21:03 ` Avi Kivity
2009-02-04 23:58 ` Paul Brook
2009-02-05 7:25 ` Avi Kivity
2009-02-05 0:29 ` M. Warner Losh
2009-02-05 1:56 ` Anthony Liguori
2009-02-04 23:49 ` Paul Brook
2009-02-04 12:25 ` [Qemu-devel] [PATCH 2/4] Add qemu_iovec_reset() Avi Kivity
2009-02-04 12:25 ` [Qemu-devel] [PATCH 3/4] Introduce block dma helpers Avi Kivity
2009-02-04 19:29 ` [Qemu-devel] " Anthony Liguori
2009-02-04 12:25 ` [Qemu-devel] [PATCH 4/4] Convert IDE to use new " Avi Kivity