From: Richard Fitzgerald <rf@opensource.wolfsonmicro.com>
To: vinod.koul@intel.com
Cc: alsa-devel@alsa-project.org
Subject: [PATCH TINYCOMPRESS 10/14] compress: Block if unable to write all data in single write()
Date: Sun, 10 Feb 2013 00:18:12 +0000
Message-ID: <20130210001812.GJ31139@opensource.wolfsonmicro.com>
Previously compress_write() would only block on poll() if the
available space was less than fragment_size. If the device has a small
fragment size, this could lead to it never blocking and instead looping
around doing many small writes, which is bad for power saving.

If we are unable to write all the remaining data in a single write,
we want to block until the device reaches a buffer high-water mark,
to allow the CPU to sleep.

This change will always attempt to issue the first write provided
avail >= fragment_size. All subsequent loop iterations will block on
poll() before attempting another write(), unless there is enough buffer
space to write all the remaining data.
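
The core of the change is the condition deciding when to block. Read in
isolation it amounts to the following (an illustrative sketch only, not the
patch code; should_block(), written_once and remaining are names made up
for this note):

#include <stdbool.h>

/* Illustration of the new blocking test in compress_write():
 * written_once corresponds to the "to_write != 0" check added below,
 * remaining to the bytes still left to write in this call.
 */
static bool should_block(unsigned int avail, unsigned int fragment_size,
			 unsigned int remaining, bool written_once)
{
	/* Block if not even one fragment fits, or if we have already
	 * issued a write and the rest of the data does not fit in one go. */
	return (avail < fragment_size) ||
	       (written_once && avail < remaining);
}

So with, say, a 4 KB fragment size the first write is issued as soon as
4 KB is free, but if that write is short the loop now sleeps in poll()
instead of trickling out further fragment-sized chunks.
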
diff --git a/compress.c b/compress.c
index 7c8ddc8..7ee287d 100644
--- a/compress.c
+++ b/compress.c
@@ -317,7 +317,8 @@ int compress_write(struct compress *compress, const void *buf, unsigned int size
{
struct snd_compr_avail avail;
struct pollfd fds;
- int to_write, written, total = 0, ret;
+ int to_write = 0; /* zero indicates we haven't written yet */
+ int written, total = 0, ret;
const char* cbuf = buf;
if (!(compress->flags & COMPRESS_IN))
@@ -333,24 +334,21 @@ int compress_write(struct compress *compress, const void *buf, unsigned int size
if (ioctl(compress->fd, SNDRV_COMPRESS_AVAIL, &avail))
return oops(compress, errno, "cannot get avail");
- /* we will write only when avail > fragment size */
- if (avail.avail < compress->config.fragment_size) {
- /* nothing to write so wait */
- ret = poll(&fds, 1, compress->max_poll_wait_ms);
- /* A pause will cause -EBADFD or zero return from driver
- * This is not an error, just stop writing
+ if ( (avail.avail < compress->config.fragment_size)
+ || ((to_write != 0) && (avail.avail < size)) ) {
+ /* not enough space for one fragment, or we have done
+ * a short write and there isn't enough space for all
+ * the remaining data
*/
+ ret = poll(&fds, 1, compress->max_poll_wait_ms);
+ /* A pause will cause -EBADFD or zero.
+ * This is not an error, just stop writing */
if ((ret == 0) || (ret == -EBADFD))
break;
if (ret < 0)
return oops(compress, errno, "poll error");
if (fds.revents & POLLOUT) {
- if (ioctl(compress->fd, SNDRV_COMPRESS_AVAIL, &avail))
- return oops(compress, errno, "cannot get avail");
- if (avail.avail == 0) {
- oops(compress, -EIO, "woken up even when avail is 0!!!");
- continue;
- }
+ continue;
}
if (fds.revents & POLLERR) {
return oops(compress, -EIO, "poll returned error!");
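
For reference, the caller-side pattern this is aimed at looks roughly like
the following (a hedged sketch, not part of the patch; the card/device
numbers, the compr_config setup and the play_buffer() helper are
placeholders, and error handling is kept minimal):

#include <tinycompress/tinycompress.h>

/* Caller-side sketch: prime the buffer, start the stream, then push the
 * rest in one call and let compress_write() block internally. */
static int play_buffer(unsigned int card, unsigned int device,
		       struct compr_config *config,
		       const char *data, unsigned int size)
{
	struct compress *c;
	unsigned int first;
	int wrote;

	c = compress_open(card, device, COMPRESS_IN, config);
	if (!c || !is_compress_ready(c))
		return -1;

	/* Prime with up to one fragment so the stream can be started. */
	first = (size < config->fragment_size) ? size : config->fragment_size;
	wrote = compress_write(c, data, first);
	if (wrote < 0 || compress_start(c) < 0) {
		compress_close(c);
		return -1;
	}

	/* With this patch a single call for the remainder sleeps in poll()
	 * while the buffer is full, instead of spinning on small writes. */
	if (compress_write(c, data + wrote, size - wrote) < 0) {
		compress_close(c);
		return -1;
	}

	compress_drain(c);
	compress_close(c);
	return 0;
}
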
--
1.7.2.5
Thread overview: 2+ messages
2013-02-10 0:18 Richard Fitzgerald [this message]
2013-02-22 16:06 ` [PATCH TINYCOMPRESS 10/14 v2] compress: Block if unable to write all data in single write() Richard Fitzgerald