* [uml-devel] parallel I/O
@ 2008-09-12 13:40 Nicolas Boullis
  2008-09-15  8:59 ` Nicolas Boullis
  0 siblings, 1 reply; 6+ messages in thread

From: Nicolas Boullis @ 2008-09-12 13:40 UTC (permalink / raw)
To: user-mode-linux-devel

Hi,

I have been using UML on production servers for months and have been quite
happy with it, except for I/O performance. I was using the user-mode-linux
package from Debian Etch, based on 2.6.18 and compiled with
CONFIG_BLK_DEV_UBD_SYNC. Disabling this option helps performance, of course,
but it feels somewhat unsafe if the host crashes. Moreover, I don't think it
helps read performance.

As far as I can see, if one wants both safety and performance, the I/O must
not be serialized. So I decided to give it a try.

The first step was to run a per-device I/O thread. This slightly improves
performance with several UBD devices: I/O on one device no longer blocks I/O
on another. It also lays the groundwork for parallelized I/O.

Then I managed to run several parallel threads per device. As far as I am
concerned, that improves performance considerably. But currently my code is
more a dirty proof of concept than a clean patch.

Would you be interested in my work?

Cheers,

Nicolas Boullis
École Centrale Paris

-------------------------------------------------------------------------
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK & win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100&url=/
_______________________________________________
User-mode-linux-devel mailing list
User-mode-linux-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/user-mode-linux-devel
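The two-level scheme described above — a dispatcher handing requests to a pool of per-device worker threads through a pipe — can be sketched in ordinary user-space C. This is not the patch's code: the patch clones kernel threads with CLONE_VM, while the sketch below uses portable pthreads, and the request structure and counts are made up for illustration. The load-balancing trick is the same, though: reads and writes of a single pointer are far below PIPE_BUF and therefore atomic, so each request is consumed by exactly one idle worker.

```c
/* Sketch of the pipe fan-out pattern (assumed names; not the patch's code).
 * One dispatcher writes request pointers into a pipe; several workers read
 * them out.  Pointer-sized pipe I/O is atomic, so no locking is needed. */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>
#include <unistd.h>

struct io_req { int id; int done; };   /* stand-in for io_thread_req */

static int read_fd;                    /* read end shared by all workers */
static atomic_int processed;

static void *worker(void *arg)
{
    struct io_req *req;
    (void)arg;
    /* Read pointer-sized records until the write end is closed (read
     * returns 0); this mirrors the subthreads' exit condition. */
    while (read(read_fd, &req, sizeof(req)) == sizeof(req)) {
        req->done = 1;                 /* stand-in for do_io() */
        atomic_fetch_add(&processed, 1);
    }
    return NULL;
}

/* Dispatch nreq requests to nworkers workers; returns how many were
 * processed (should equal nreq, each handled exactly once). */
int run_fanout(int nreq, int nworkers)
{
    int fds[2], i, total;
    pthread_t tid[64];
    struct io_req *reqs = calloc(nreq, sizeof(*reqs));

    pipe(fds);
    read_fd = fds[0];
    atomic_store(&processed, 0);

    for (i = 0; i < nworkers; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (i = 0; i < nreq; i++) {
        struct io_req *p = &reqs[i];
        p->id = i;
        write(fds[1], &p, sizeof(p));  /* atomic: one consumer per record */
    }
    close(fds[1]);                     /* workers see EOF and exit */
    for (i = 0; i < nworkers; i++)
        pthread_join(tid[i], NULL);
    close(fds[0]);

    for (i = 0; i < nreq; i++)
        assert(reqs[i].done);          /* every request was handled */
    total = atomic_load(&processed);
    free(reqs);
    return total;
}
```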
* Re: [uml-devel] parallel I/O
  2008-09-12 13:40 [uml-devel] parallel I/O Nicolas Boullis
@ 2008-09-15  8:59 ` Nicolas Boullis
  2008-10-02 13:34   ` Nicolas Boullis
  0 siblings, 1 reply; 6+ messages in thread

From: Nicolas Boullis @ 2008-09-15 8:59 UTC (permalink / raw)
To: user-mode-linux-devel

[-- Attachment #1: Type: text/plain, Size: 223 bytes --]

Hi,

For what it's worth, here is my quick-and-dirty proof-of-concept patch that
enables parallel I/O. Note that it is based on 2.6.24.

If people are interested, I can try to make a clean patch out of it.

Cheers,

Nicolas

[-- Attachment #2: uml_parallel_io.patch --]
[-- Type: text/x-patch, Size: 6580 bytes --]

--- linux-source-2.6.24/arch/um/drivers/ubd_kern.c.orig	2008-09-15 10:55:51.000000000 +0200
+++ linux-source-2.6.24/arch/um/drivers/ubd_kern.c	2008-09-12 15:13:24.000000000 +0200
@@ -65,7 +65,7 @@
 	unsigned long length;
 	char *buffer;
 	int sectorsize;
-	unsigned long sector_mask;
+	unsigned long sector_mask[8];
 	unsigned long long cow_offset;
 	unsigned long bitmap_words[2];
 	int error;
@@ -172,6 +172,9 @@
 	struct scatterlist sg[MAX_SG];
 	struct request *request;
 	int start_sg, end_sg;
+	int pipe_fd[2];
+	int io_pid;
+	unsigned long stack;
 };
 
 #define DEFAULT_COW { \
@@ -196,6 +199,9 @@
 	.request =		NULL, \
 	.start_sg =		0, \
 	.end_sg =		0, \
+	.pipe_fd =		{-1, -1}, \
+	.io_pid =		-1, \
+	.stack =		0, \
 }
 
 /* Protected by ubd_lock */
@@ -473,7 +479,7 @@
 static void do_ubd_request(struct request_queue * q);
 
 /* Only changed by ubd_init, which is an initcall. */
-int thread_fd = -1;
+int common_thread_fd = -1;
 
 static void ubd_end_request(struct request *req, int bytes, int uptodate)
 {
@@ -513,7 +519,7 @@
 	int n;
 
 	while(1){
-		n = os_read_file(thread_fd, &req,
+		n = os_read_file(common_thread_fd, &req,
 				 sizeof(struct io_thread_req *));
 		if(n != sizeof(req)){
 			if(n == -EAGAIN)
@@ -529,7 +535,7 @@
 		ubd_finish(rq, rq->hard_nr_sectors << 9);
 		kfree(req);
 	}
-	reactivate_fd(thread_fd, UBD_IRQ);
+	reactivate_fd(common_thread_fd, UBD_IRQ);
 
 	list_for_each_safe(list, next_ele, &restart){
 		ubd = container_of(list, struct ubd, restart);
@@ -565,6 +571,8 @@
 	return os_file_size(file, size_out);
 }
 
+extern pid_t waitpid(pid_t pid, int *status, int options);
+
 static void ubd_close_dev(struct ubd *ubd_dev)
 {
 	os_close_file(ubd_dev->fd);
@@ -576,6 +584,8 @@
 	ubd_dev->cow.bitmap = NULL;
 }
 
+int per_device_io_thread(void *arg);
+
 static int ubd_open_dev(struct ubd *ubd_dev)
 {
 	struct openflags flags;
@@ -636,6 +646,8 @@
 		if(err < 0) goto error;
 		ubd_dev->cow.fd = err;
 	}
+
+	return 0;
 error:
 	os_close_file(ubd_dev->fd);
@@ -938,7 +950,7 @@
 	}
 	stack = alloc_stack(0, 0);
 	io_pid = start_io_thread(stack + PAGE_SIZE - sizeof(void *),
-				 &thread_fd);
+				 &common_thread_fd);
 	if(io_pid < 0){
 		printk(KERN_ERR
 		       "ubd : Failed to start I/O thread (errno = %d) - "
@@ -946,7 +958,7 @@
 		io_pid = -1;
 		return 0;
 	}
-	err = um_request_irq(UBD_IRQ, thread_fd, IRQ_READ, ubd_intr,
+	err = um_request_irq(UBD_IRQ, common_thread_fd, IRQ_READ, ubd_intr,
 			     IRQF_DISABLED, "ubd", ubd_devs);
 	if(err != 0)
 		printk(KERN_ERR "um_request_irq failed - errno = %d\n", -err);
@@ -962,12 +974,23 @@
 	int err = 0;
 
 	if(ubd_dev->count == 0){
+		void *sp;
+
 		err = ubd_open_dev(ubd_dev);
 		if(err){
 			printk(KERN_ERR "%s: Can't open \"%s\": errno = %d\n",
 			       disk->disk_name, ubd_dev->file, -err);
 			goto out;
 		}
+
+		pipe(ubd_dev->pipe_fd);
+		os_set_fd_block(ubd_dev->pipe_fd[1], 0);
+
+		ubd_dev->stack = alloc_stack(0, 0);
+		sp = (void *)(ubd_dev->stack + PAGE_SIZE - sizeof(void *));
+		ubd_dev->io_pid = clone(per_device_io_thread, sp, CLONE_VM,
+					(void *)ubd_dev);
+
+		printk("Launched I/O thread.\n");
 	}
 	ubd_dev->count++;
 	set_disk_ro(disk, !ubd_dev->openflags.w);
@@ -987,12 +1010,21 @@
 	struct gendisk *disk = inode->i_bdev->bd_disk;
 	struct ubd *ubd_dev = disk->private_data;
 
-	if(--ubd_dev->count == 0)
+	if(--ubd_dev->count == 0) {
+		printk("Trying to stop I/O thread.\n");
+		os_close_file(ubd_dev->pipe_fd[1]);
+		ubd_dev->pipe_fd[1] = -1;
+		waitpid(ubd_dev->io_pid, NULL, __WCLONE);
+		ubd_dev->io_pid = -1;
+		free_stack(ubd_dev->stack, 0);
+		ubd_dev->stack = 0;
+
 		ubd_close_dev(ubd_dev);
+	}
 	return 0;
 }
 
-static void cowify_bitmap(__u64 io_offset, int length, unsigned long *cow_mask,
+static void cowify_bitmap(__u64 io_offset, int length, unsigned long (*cow_mask)[],
 			  __u64 *cow_offset, unsigned long *bitmap,
 			  __u64 bitmap_offset, unsigned long *bitmap_words,
 			  __u64 bitmap_len)
@@ -1059,6 +1091,7 @@
 {
 	struct gendisk *disk = req->rq_disk;
 	struct ubd *ubd_dev = disk->private_data;
+	int i;
 
 	io_req->req = req;
 	io_req->fds[0] = (ubd_dev->cow.file != NULL) ? ubd_dev->cow.fd :
@@ -1068,7 +1101,8 @@
 	io_req->offset = offset;
 	io_req->length = len;
 	io_req->error = 0;
-	io_req->sector_mask = 0;
+	for (i = 0; i < 8; i++)
+		io_req->sector_mask[i] = 0;
 
 	io_req->op = (rq_data_dir(req) == READ) ? UBD_READ : UBD_WRITE;
 	io_req->offsets[0] = 0;
@@ -1120,7 +1154,7 @@
 			sg->offset, sg->length, sg_page(sg));
 		last_sectors = sg->length >> 9;
-		n = os_write_file(thread_fd, &io_req,
+		n = os_write_file(dev->pipe_fd[1], &io_req,
 				  sizeof(struct io_thread_req *));
 		if(n != sizeof(struct io_thread_req *)){
 			if(n != -EAGAIN)
@@ -1482,3 +1516,86 @@
 	return 0;
 }
+
+extern int open(const char *pathname, int flags);
+extern int dup2(int oldfd, int newfd);
+
+void reopen(int fd)
+{
+	char name[64];
+	int tmp_fd;
+
+	sprintf(name, "/proc/self/fd/%d", fd);
+	tmp_fd = open(name, O_RDWR | O_SYNC);
+	os_close_file(fd);
+	dup2(tmp_fd, fd);
+}
+
+int io_subthread(void *arg)
+{
+	struct ubd *ubd_dev = (struct ubd *)arg;
+	struct io_thread_req *req;
+	int n;
+
+	if (ubd_dev->fd >= 0)
+		reopen(ubd_dev->fd);
+
+	if (ubd_dev->cow.fd >= 0)
+		reopen(ubd_dev->cow.fd);
+
+	while(1){
+		n = os_read_file(ubd_dev->pipe_fd[0], &req,
+				 sizeof(struct io_thread_req *));
+		if(n != sizeof(struct io_thread_req *)){
+			if(n < 0)
+				printk("io_subthread - read failed, fd = %d, "
+				       "err = %d\n", ubd_dev->pipe_fd[0], -n);
+			else if (n == 0)
+				break;
+			else {
+				printk("io_subthread - short read, fd = %d, "
+				       "length = %d\n", ubd_dev->pipe_fd[0], n);
+			}
+			continue;
+		}
+		io_count++;
+		do_io(req);
+		n = os_write_file(kernel_fd, &req,
+				  sizeof(struct io_thread_req *));
+		if(n != sizeof(struct io_thread_req *))
+			printk("io_subthread - write failed, fd = %d, err = %d\n",
+			       kernel_fd, -n);
+	}
+
+	ubd_close_dev(ubd_dev);
+
+	return 0;
+}
+
+int per_device_io_thread(void *arg)
+{
+	struct ubd *ubd_dev = (struct ubd *)arg;
+	unsigned long stack[16];
+	int i;
+
+	printk("I/O thread has started.\n");
+
+	ignore_sigwinch_sig();
+
+	os_close_file(ubd_dev->pipe_fd[1]);
+
+	for (i = 0; i < 16; i++) {
+		void *sp;
+		stack[i] = alloc_stack(0, 0);
+		sp = (void *)(stack[i] + PAGE_SIZE - sizeof(void *));
+		clone(io_subthread, sp, CLONE_VM, (void *)ubd_dev);
+	}
+
+	while (waitpid(-1, NULL, __WCLONE) != -1);
+	for (i = 0; i < 16; i++)
+		free_stack(stack[i], 0);
+
+	printk("I/O thread has finished.\n");
+
+	return 0;
+}
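The subtlest part of the patch is reopen(): opening /proc/self/fd/N yields a new open file description for the same file, with its own file offset and its own status flags (O_SYNC is added here), so each subthread can do I/O at its own position without racing the others. Plain dup() would not achieve this, since duplicated descriptors share one offset. A standalone sketch of the trick — the helper name and temp-file usage are illustrative, not from the patch:

```c
/* Demonstrates the /proc/self/fd reopen trick: the reopened descriptor
 * gets an independent offset, while a dup()ed one shares it. */
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Reopen fd with the given flags via /proc/self/fd (Linux-specific);
 * returns a new descriptor with its own open file description. */
int reopen_with_flags(int fd, int flags)
{
    char name[64];
    snprintf(name, sizeof(name), "/proc/self/fd/%d", fd);
    return open(name, flags);
}

/* Returns 1 if the reopened fd's offset is independent of the original. */
int independent_offsets(void)
{
    char path[] = "/tmp/reopenXXXXXX";
    int fd1 = mkstemp(path);                       /* opened O_RDWR */
    int fd2 = reopen_with_flags(fd1, O_RDWR | O_SYNC);
    int dupfd = dup(fd1);
    off_t dup_off, reopen_off;

    write(fd1, "abcdef", 6);       /* advances fd1's (and dupfd's) offset */
    dup_off = lseek(dupfd, 0, SEEK_CUR);      /* shared offset: now 6 */
    reopen_off = lseek(fd2, 0, SEEK_CUR);     /* independent offset: 0 */

    close(fd1);
    close(fd2);
    close(dupfd);
    unlink(path);
    return dup_off == 6 && reopen_off == 0;
}
```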
* Re: [uml-devel] parallel I/O
  2008-09-15  8:59 ` Nicolas Boullis
@ 2008-10-02 13:34   ` Nicolas Boullis
  2008-10-04  4:28     ` Jeff Dike
  0 siblings, 1 reply; 6+ messages in thread

From: Nicolas Boullis @ 2008-10-02 13:34 UTC (permalink / raw)
To: user-mode-linux-devel

[-- Attachment #1: Type: text/plain, Size: 387 bytes --]

Hi,

I wrote:
>
> For what it's worth, here is my quick-n-dirty proof-of-concept patch
> that enables parallel I/O. Note that it is based on 2.6.24.
>
> If people are interested, I can try to make a clean patch out of it.

For what it's worth, I fixed a few bugs in my patch. Here is the new patch.
It's still "quick'n'dirty", but it works better, and works for me.

Cheers,

Nicolas

[-- Attachment #2: uml_parallel_io.patch --]
[-- Type: text/x-patch, Size: 6731 bytes --]

--- linux-source-2.6.24/arch/um/drivers/ubd_kern.c.orig	2008-09-12 10:28:52.000000000 +0200
+++ linux-source-2.6.24/arch/um/drivers/ubd_kern.c	2008-10-02 15:07:43.000000000 +0200
@@ -65,7 +65,7 @@
 	unsigned long length;
 	char *buffer;
 	int sectorsize;
-	unsigned long sector_mask;
+	unsigned long sector_mask[8];
 	unsigned long long cow_offset;
 	unsigned long bitmap_words[2];
 	int error;
@@ -172,6 +172,9 @@
 	struct scatterlist sg[MAX_SG];
 	struct request *request;
 	int start_sg, end_sg;
+	int pipe_fd[2];
+	int io_pid;
+	unsigned long stacks[17];
 };
 
 #define DEFAULT_COW { \
@@ -196,6 +199,9 @@
 	.request =		NULL, \
 	.start_sg =		0, \
 	.end_sg =		0, \
+	.pipe_fd =		{-1, -1}, \
+	.io_pid =		-1, \
+	.stacks =		{ [0 ... 16] = 0 }, \
 }
 
 /* Protected by ubd_lock */
@@ -473,7 +479,7 @@
 static void do_ubd_request(struct request_queue * q);
 
 /* Only changed by ubd_init, which is an initcall. */
-int thread_fd = -1;
+int common_thread_fd = -1;
 
 static void ubd_end_request(struct request *req, int bytes, int uptodate)
 {
@@ -513,7 +519,7 @@
 	int n;
 
 	while(1){
-		n = os_read_file(thread_fd, &req,
+		n = os_read_file(common_thread_fd, &req,
 				 sizeof(struct io_thread_req *));
 		if(n != sizeof(req)){
 			if(n == -EAGAIN)
@@ -529,7 +535,7 @@
 		ubd_finish(rq, rq->hard_nr_sectors << 9);
 		kfree(req);
 	}
-	reactivate_fd(thread_fd, UBD_IRQ);
+	reactivate_fd(common_thread_fd, UBD_IRQ);
 
 	list_for_each_safe(list, next_ele, &restart){
 		ubd = container_of(list, struct ubd, restart);
@@ -565,6 +571,8 @@
 	return os_file_size(file, size_out);
 }
 
+extern pid_t waitpid(pid_t pid, int *status, int options);
+
 static void ubd_close_dev(struct ubd *ubd_dev)
 {
 	os_close_file(ubd_dev->fd);
@@ -576,6 +584,8 @@
 	ubd_dev->cow.bitmap = NULL;
 }
 
+int per_device_io_thread(void *arg);
+
 static int ubd_open_dev(struct ubd *ubd_dev)
 {
 	struct openflags flags;
@@ -636,6 +646,8 @@
 		if(err < 0) goto error;
 		ubd_dev->cow.fd = err;
 	}
+
+	return 0;
 error:
 	os_close_file(ubd_dev->fd);
@@ -938,7 +950,7 @@
 	}
 	stack = alloc_stack(0, 0);
 	io_pid = start_io_thread(stack + PAGE_SIZE - sizeof(void *),
-				 &thread_fd);
+				 &common_thread_fd);
 	if(io_pid < 0){
 		printk(KERN_ERR
 		       "ubd : Failed to start I/O thread (errno = %d) - "
@@ -946,7 +958,7 @@
 		io_pid = -1;
 		return 0;
 	}
-	err = um_request_irq(UBD_IRQ, thread_fd, IRQ_READ, ubd_intr,
+	err = um_request_irq(UBD_IRQ, common_thread_fd, IRQ_READ, ubd_intr,
 			     IRQF_DISABLED, "ubd", ubd_devs);
 	if(err != 0)
 		printk(KERN_ERR "um_request_irq failed - errno = %d\n", -err);
@@ -962,12 +974,27 @@
 	int err = 0;
 
 	if(ubd_dev->count == 0){
+		void *sp;
+		int i;
+
 		err = ubd_open_dev(ubd_dev);
 		if(err){
 			printk(KERN_ERR "%s: Can't open \"%s\": errno = %d\n",
 			       disk->disk_name, ubd_dev->file, -err);
 			goto out;
 		}
+
+		pipe(ubd_dev->pipe_fd);
+
+		for (i = 0; i <= 16; i++)
+			ubd_dev->stacks[i] = alloc_stack(0, 0);
+		sp = (void *)(ubd_dev->stacks[0] + PAGE_SIZE - sizeof(void *));
+		ubd_dev->io_pid = clone(per_device_io_thread, sp, CLONE_VM,
+					(void *)ubd_dev);
+		os_close_file(ubd_dev->pipe_fd[0]);
+		os_set_exec_close(ubd_dev->pipe_fd[1], 1);
+		os_set_fd_block(ubd_dev->pipe_fd[1], 0);
+
+		printk("Launched I/O thread for %s.\n", disk->disk_name);
 	}
 	ubd_dev->count++;
 	set_disk_ro(disk, !ubd_dev->openflags.w);
@@ -987,12 +1014,26 @@
 	struct gendisk *disk = inode->i_bdev->bd_disk;
 	struct ubd *ubd_dev = disk->private_data;
 
-	if(--ubd_dev->count == 0)
+	if(--ubd_dev->count == 0) {
+		int i;
+
+		printk("Trying to stop I/O thread for %s... ", disk->disk_name);
+		os_close_file(ubd_dev->pipe_fd[1]);
+		waitpid(ubd_dev->io_pid, NULL, __WCLONE);
+		printk("done.\n");
+		ubd_dev->pipe_fd[0] = -1;
+		ubd_dev->pipe_fd[1] = -1;
+		ubd_dev->io_pid = -1;
+		for (i = 0; i <= 16; i++) {
+			free_stack(ubd_dev->stacks[i], 0);
+			ubd_dev->stacks[i] = 0;
+		}
+
 		ubd_close_dev(ubd_dev);
+	}
 	return 0;
 }
 
-static void cowify_bitmap(__u64 io_offset, int length, unsigned long *cow_mask,
+static void cowify_bitmap(__u64 io_offset, int length, unsigned long (*cow_mask)[],
 			  __u64 *cow_offset, unsigned long *bitmap,
 			  __u64 bitmap_offset, unsigned long *bitmap_words,
 			  __u64 bitmap_len)
@@ -1059,6 +1100,7 @@
 {
 	struct gendisk *disk = req->rq_disk;
 	struct ubd *ubd_dev = disk->private_data;
+	int i;
 
 	io_req->req = req;
 	io_req->fds[0] = (ubd_dev->cow.file != NULL) ? ubd_dev->cow.fd :
@@ -1068,7 +1110,8 @@
 	io_req->offset = offset;
 	io_req->length = len;
 	io_req->error = 0;
-	io_req->sector_mask = 0;
+	for (i = 0; i < 8; i++)
+		io_req->sector_mask[i] = 0;
 
 	io_req->op = (rq_data_dir(req) == READ) ? UBD_READ : UBD_WRITE;
 	io_req->offsets[0] = 0;
@@ -1120,7 +1163,7 @@
 			sg->offset, sg->length, sg_page(sg));
 		last_sectors = sg->length >> 9;
-		n = os_write_file(thread_fd, &io_req,
+		n = os_write_file(dev->pipe_fd[1], &io_req,
 				  sizeof(struct io_thread_req *));
 		if(n != sizeof(struct io_thread_req *)){
 			if(n != -EAGAIN)
@@ -1482,3 +1525,79 @@
 	return 0;
 }
+
+extern int open(const char *pathname, int flags);
+extern int dup2(int oldfd, int newfd);
+
+void reopen(int fd)
+{
+	char name[64];
+	int tmp_fd;
+
+	sprintf(name, "/proc/self/fd/%d", fd);
+	tmp_fd = open(name, O_RDWR | O_SYNC);
+	os_close_file(fd);
+	dup2(tmp_fd, fd);
+	os_close_file(tmp_fd);
+}
+
+int io_subthread(void *arg)
+{
+	struct ubd *ubd_dev = (struct ubd *)arg;
+	struct io_thread_req *req;
+	int n;
+
+	if (ubd_dev->fd >= 0)
+		reopen(ubd_dev->fd);
+
+	if (ubd_dev->cow.fd >= 0)
+		reopen(ubd_dev->cow.fd);
+
+	while(1){
+		n = os_read_file(ubd_dev->pipe_fd[0], &req,
+				 sizeof(struct io_thread_req *));
+		if(n != sizeof(struct io_thread_req *)){
+			if(n < 0)
+				printk("io_subthread - read failed, fd = %d, "
				       "err = %d\n", ubd_dev->pipe_fd[0], -n);
+			else if (n == 0)
+				break;
+			else {
+				printk("io_subthread - short read, fd = %d, "
+				       "length = %d\n", ubd_dev->pipe_fd[0], n);
+			}
+			continue;
+		}
+		io_count++;
+		do_io(req);
+		n = os_write_file(kernel_fd, &req,
+				  sizeof(struct io_thread_req *));
+		if(n != sizeof(struct io_thread_req *))
+			printk("io_subthread - write failed, fd = %d, err = %d\n",
+			       kernel_fd, -n);
+	}
+
+	ubd_close_dev(ubd_dev);
+
+	return 0;
+}
+
+int per_device_io_thread(void *arg)
+{
+	struct ubd *ubd_dev = (struct ubd *)arg;
+	int i;
+
+	ignore_sigwinch_sig();
+
+	os_close_file(ubd_dev->pipe_fd[1]);
+
+	for (i = 0; i < 16; i++) {
+		void *sp;
+		sp = (void *)(ubd_dev->stacks[i+1] + PAGE_SIZE - sizeof(void *));
+		clone(io_subthread, sp, CLONE_VM, (void *)ubd_dev);
+	}
+
+	while (waitpid(-1, NULL, __WCLONE) != -1);
+
+	return 0;
+}
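Shutdown in per_device_io_thread() relies on a Linux detail worth spelling out: children created by clone() with no exit signal in the flags are invisible to a plain waitpid(), so the patch passes __WCLONE, and the final waitpid(-1, NULL, __WCLONE) loop returns -1 (ECHILD) once every subthread has been reaped. A minimal user-space sketch of that pattern — function names and stack sizes are illustrative, not the patch's:

```c
/* Sketch: reaping clone() children that have no exit signal, as the
 * patch's shutdown path does.  Linux-specific (clone, __WCLONE). */
#define _GNU_SOURCE
#include <assert.h>
#include <sched.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int child(void *arg)
{
    (void)arg;
    return 0;          /* child exits immediately */
}

/* Spawn n clone children (n <= 16) with CLONE_VM and no termination
 * signal, then reap them all with __WCLONE; returns how many were
 * reaped (should equal n). */
int reap_clone_children(int n)
{
    enum { STACK_SZ = 64 * 1024 };
    char *stacks[16];
    int i, reaped = 0;

    for (i = 0; i < n; i++) {
        stacks[i] = malloc(STACK_SZ);
        /* No SIGCHLD in the flags: a plain waitpid() would not see
         * these children; __WCLONE below is what finds them. */
        clone(child, stacks[i] + STACK_SZ, CLONE_VM, NULL);
    }

    /* Same idiom as the patch: loop until waitpid reports ECHILD. */
    while (waitpid(-1, NULL, __WCLONE) != -1)
        reaped++;

    /* Only safe to free the stacks after every child has exited. */
    for (i = 0; i < n; i++)
        free(stacks[i]);
    return reaped;
}
```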
* Re: [uml-devel] parallel I/O
  2008-10-02 13:34 ` Nicolas Boullis
@ 2008-10-04  4:28   ` Jeff Dike
  2008-10-04 16:37     ` Nicolas Boullis
  0 siblings, 1 reply; 6+ messages in thread

From: Jeff Dike @ 2008-10-04 4:28 UTC (permalink / raw)
To: Nicolas Boullis; +Cc: user-mode-linux-devel

On Thu, Oct 02, 2008 at 03:34:27PM +0200, Nicolas Boullis wrote:
> I wrote:
> >
> > For what it's worth, here is my quick-n-dirty proof-of-concept patch
> > that enables parallel I/O. Note that it is based on 2.6.24.
> >
> > If people are interested, I can try to make a clean patch out of it.
>
> For what it's worth, I fixed a few bugs in my patch. Here is the new
> patch. It's still "quick'n'dirty", but it works better, and works for me.

Sorry about the lack of responsiveness.

I've had such a patch for quite a while. I never stuck it in mainline
because I couldn't find any common use cases where it made a noticeable
difference.

Does your patch help noticeably with anything reasonably common?

				Jeff

--
Work email - jdike at linux dot intel dot com
* Re: [uml-devel] parallel I/O
  2008-10-04  4:28 ` Jeff Dike
@ 2008-10-04 16:37   ` Nicolas Boullis
  2008-10-06 13:28     ` Nicolas Boullis
  0 siblings, 1 reply; 6+ messages in thread

From: Nicolas Boullis @ 2008-10-04 16:37 UTC (permalink / raw)
To: Jeff Dike; +Cc: user-mode-linux-devel

Hello,

Quoting Jeff Dike <jdike@addtoit.com>:
> Sorry about the lack of responsiveness.

No problem, I'm glad to read you now.

> I've had such a patch for quite a while. I never stuck it in mainline
> because I couldn't find any common use cases where it made a noticeable
> difference.
>
> Does your patch help noticeably with anything reasonably common?

It did help significantly for me. On a reasonably new host, using logical
volumes built on top of a FC SAN as UBD devices, building a 10 GB ext3
filesystem took something like 50 seconds before my patch, and something
like 3 seconds after it. (I can check the exact values if you wish.) Note
that this is with CONFIG_BLK_DEV_UBD_SYNC; without it, things get even
faster (but then I guess parallel I/O makes no difference).

My feeling is that the patch might also improve performance when reads and
writes are performed simultaneously, but I have not tried that.

As for CONFIG_BLK_DEV_UBD_SYNC itself, my feeling is that it is much safer
if the host might crash (or suffer power outages, or...): as far as I
understand it, journaled filesystems need to know when data has actually
reached the disks.

Hope this helps,

Nicolas Boullis
École Centrale Paris
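The safety argument above — that a journaled guest filesystem needs to know when data is really on stable storage — corresponds to two host-side idioms: opening the backing file with O_SYNC so that every write() blocks until the data is durable (which is what the patch's reopen() arranges per subthread), or issuing an explicit fdatasync() at points of the caller's choosing. A hedged sketch of both patterns; the helper names and /tmp paths are illustrative:

```c
/* Two ways to make writes durable: O_SYNC per write, or a buffered
 * write followed by one fdatasync().  Sketch only; error handling is
 * minimal and the paths are placeholders. */
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* O_SYNC style: write() only returns once data has reached the disk. */
int write_sync(const char *path, const char *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_SYNC, 0600);
    ssize_t n = write(fd, buf, len);
    close(fd);
    return n == (ssize_t)len;
}

/* fdatasync style: buffered writes, then one explicit flush point
 * after which the data is guaranteed stable. */
int write_then_flush(const char *path, const char *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT, 0600);
    ssize_t n = write(fd, buf, len);
    int flushed = fdatasync(fd);       /* data is durable after this */
    close(fd);
    return n == (ssize_t)len && flushed == 0;
}
```

With serialized I/O, the per-write blocking of O_SYNC is exactly what makes the driver slow; the patch keeps the durability guarantee but overlaps many such blocking writes across threads.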
* Re: [uml-devel] parallel I/O
  2008-10-04 16:37 ` Nicolas Boullis
@ 2008-10-06 13:28   ` Nicolas Boullis
  0 siblings, 0 replies; 6+ messages in thread

From: Nicolas Boullis @ 2008-10-06 13:28 UTC (permalink / raw)
To: Jeff Dike, user-mode-linux-devel

Hi,

Nicolas Boullis wrote:
> On a reasonably new host, using as UBD devices logical volumes built
> on top of a FC SAN, building a 10GB ext3 filesystem took something
> like 50 seconds before my patch, and something like 3 seconds after
> it. (I can check the exact values if you wish.)

Hmmm... my memory was making things look much better than they really are.
It's around 38 seconds with 2.6.18 (before the "batch I/O requests" change),
several minutes with unpatched 2.6.24, and around 7 seconds with patched
2.6.24.

To be more honest, my patch contains a part similar to Steve VanDeBogart's
"ubd does multiple io's when one will suffice" patch. Using 2.6.24 with his
patch but still serialized I/O, the time is around 23 seconds. Hence, my
patch (with 16 threads per device) makes things around 3 times faster.

Cheers,

Nicolas Boullis
École Centrale Paris