qemu-devel.nongnu.org archive mirror
From: "Martin Guy" <martinwguy@yahoo.it>
To: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] Re: NBD server for QEMU images
Date: Wed, 13 Dec 2006 13:19:35 +0000	[thread overview]
Message-ID: <56d259a00612130519l2b324966o2f2cd6e5c69beb10@mail.gmail.com> (raw)
In-Reply-To: <457FE5EE.4000704@qumranet.com>


> - write tons of data to nbd device, data ends up in pagecache
> - memory gets low, kswapd wakes up, calls nbd device to actually write
> the data
> - nbd issues a request, which ends up on the nbd server on the same machine
> - the nbd server allocates memory
> - memory allocation hangs waiting for kswapd

In other words, it can deadlock only if you are swapping to an nbd
device that is served by nbd-server running on the same machine and
kernel. In the case of a qemu system swapping over nbd to a server on
the host machine, it is the guest kernel that waits on the host kernel
paging the nbd server in from the host's separate swap space, so no
deadlock is possible.

Practice bears this out; if you want to stress-test it, here's a program
that creates a low-memory condition by saturating the VM.

Of course, this has nothing to do with the original patch, which just
lets nbd-server interpret qemu image files ;)
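For anyone reading this later: the modern descendant of that idea is
qemu-nbd, which ships with qemu and serves qemu image files over NBD
directly. A rough sketch of the same setup, assuming qemu-nbd and
nbd-client are installed; the image name is illustrative:

```shell
# On the server: export a qcow2 image over NBD (default port 10809)
qemu-nbd --read-only disk.qcow2 &

# On the client: attach it as a block device
# (needs root and the nbd kernel module)
modprobe nbd
nbd-client localhost /dev/nbd0
```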

    M

[-- Attachment #2: thrash.c --]
[-- Type: text/x-csrc, Size: 2062 bytes --]

/*
 * thrash.c
 *
 * A standalone Unix command-line program to
 * make the machine thrash, i.e. go into permanent swapping,
 * by using VM >= RAM size and accessing all pages repeatedly
 *
 * Usage: thrash [-v] N
 * where N is the number of megabytes of VM to thrash.
 * A good choice for N is the number of megabytes of physical RAM
 * that the machine has.
 *
 * Reason:
 * to force a machine to use its swap space,
 * to flush all unused pages out to swap and so free RAM for other purposes
 * or to see how a system behaves under extreme duress.
 *
 * It currently *writes* to all pages, but could be made to read them
 * as an alternative, or as well.
 *
 *	Martin Guy, 9 November 2006
 */

#include <stdlib.h>	/* for exit() and atoi() */
#include <stdio.h>
#include <stdint.h>	/* for intptr_t */
#include <ctype.h>	/* for isdigit() */
#include <unistd.h>	/* for sbrk() and getpagesize() */

int main(int argc, char **argv)
{
	int megabytes = 0; /* MB of VM to thrash, from command-line argument.
				* 0 means uninitialised */
	char *buf;	/* Huge VM buffer */
	intptr_t bufsize;	/* Size of buffer in bytes */
	long pagesize;	/* size of VM page */
	int i;		/* index into argv */
	int verbose = 0; /* Print a dot for every pass through VM? */

	pagesize = getpagesize();

	for (i=1; i<argc; i++) {
		if (argv[i][0] == '-') {
			switch (argv[i][1]) {
			case 'v': verbose = 1; break;
			default: goto usage;
			}
		} else if (isdigit((unsigned char) argv[i][0])) {
			megabytes = atoi(argv[i]);
			/* Sanity check comes later */
		} else {
usage:			fputs("Usage: thrash [-v] N\n", stderr);
			fputs("-v\tPrint a dot for each pass through memory\n", stderr);
			fputs("N\tNumber of megabytes of VM to thrash\n", stderr);
			exit(1);
		}
	}

	/* Sanity checks */
	if (megabytes <= 0) goto usage;

	bufsize = (intptr_t) megabytes * 1024 * 1024;
	buf = (char *) sbrk(bufsize);
	if (buf == (char *)-1) {
		perror("thrash: Failed to allocate VM");
		exit(1);
	}

	/* Write every page repeatedly */
	for (;;) {
		char *p;
		int i;

		for (p=buf, i=bufsize/pagesize; i>0; p+=pagesize, i--)
			*p=(char) i;
		if (verbose) { putchar('.'); fflush(stdout); }
	}
	/* NOTREACHED */
}

