public inbox for linux-xfs@vger.kernel.org
From: YeYin <eyniy@qq.com>
To: "Dave Chinner" <david@fromorbit.com>
Cc: xfs <xfs@oss.sgi.com>
Subject: Re: XFS direct IO problem
Date: Wed, 8 Apr 2015 15:05:57 +0800	[thread overview]
Message-ID: <tencent_60C0CC90244648E22E374DF9@qq.com> (raw)
In-Reply-To: <20150408044955.GE15810@dastard>



Dave,
Thank you for your explanation. I understand the cause now, and I wrote some code to simulate MySQL's behaviour. It reproduces the problem:


open file without direct flag
read file    // triggers kernel readahead of 4 pages, so inode->i_mapping->nrpages > 0
close file


open file with direct flag
lseek 4*4096 // skip the 4 readahead pages
read file    // xfs_flushinval_pages finds nothing in this range, so the cached pages stay
...


I'd like to ask how this problem can be resolved in XFS.


Attached code:
 
/* gcc -o test_read test_read.c
 * dd if=/dev/zero of=/data1/fo.dat bs=4096 count=10
 * ./test_read /data1/fo.dat 2 direct
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>
#include <string.h>
#include <stdlib.h>
#include <malloc.h>	/* memalign */

#define BUFSIZE 4096

int read_count = 2;

int main(int argc, char *argv[])
{
	if (argc < 3) {
		fprintf(stderr, "usage: %s <file> <count> [buffer|direct]\n", argv[0]);
		exit(1);
	}
	/* alignment must be a power of two; BUFSIZE-aligned also satisfies O_DIRECT */
	char *buf = memalign(BUFSIZE, BUFSIZE);
	if (buf == NULL) {
		fprintf(stderr, "memory allocate failed\n");
		exit(1);
	}
	char *file = argv[1];
	read_count = atoi(argv[2]);
	int ret = 0, sum = 0, i = 0, fd = -1;

	if (argc == 4 && strncmp(argv[3], "direct", 6) == 0) {
		/* buffered read first, to populate the page cache via readahead */
		fd = open(file, O_RDONLY);
		if (fd < 0) {
			fprintf(stderr, "open read only file failed\n");
			exit(1);
		}
		ret = read(fd, buf, BUFSIZE);
		if (ret < 0)
			fprintf(stderr, "buffer read error\n");
		close(fd);

		/* reopen, then switch the open file to direct IO */
		fd = open(file, O_RDWR);
		if (fd < 0) {
			fprintf(stderr, "open read write file failed\n");
			exit(1);
		}
		if (fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_DIRECT) == -1) {
			fprintf(stderr, "set direct error\n");
			exit(1);
		}
	} else {
		fd = open(file, O_RDONLY);
		if (fd < 0) {
			fprintf(stderr, "open buf file failed\n");
			exit(1);
		}
	}

	while (i++ < read_count) {
		/* seek past the 4 readahead pages left in the cache */
		if (lseek(fd, 4 * 4096, SEEK_SET) < 0) {
			fprintf(stderr, "seek error!\n");
			break;
		}
		ret = read(fd, buf, BUFSIZE);
		if (ret > 0) {
			sum += ret;
		} else if (ret == 0) {
			printf("read end\n");
			break;
		} else {
			printf("error:%d\n", errno);
			break;
		}
		sleep(1);
	}
	printf("read sum: %d\n", sum);
	close(fd);
	free(buf);
	return 0;
}
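To answer Dave's question below about which offset the cached page sits at, a small mincore(2) sketch (the helper name and paths here are illustrative) can report which pages of the file are resident in the page cache:

```c
/* Sketch: report which pages of a file are resident in the page cache,
 * using mmap(2) + mincore(2). Returns the number of resident pages,
 * or -1 on error. Mapping alone does not fault pages in, so mincore()
 * reflects what buffered IO / readahead already cached. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>

long resident_pages(const char *path)
{
	int fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;
	struct stat st;
	if (fstat(fd, &st) < 0 || st.st_size == 0) {
		close(fd);
		return -1;
	}
	long psz = sysconf(_SC_PAGESIZE);
	size_t npages = (st.st_size + psz - 1) / psz;
	void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	close(fd);
	if (map == MAP_FAILED)
		return -1;
	unsigned char *vec = malloc(npages);
	long count = -1;
	if (vec && mincore(map, st.st_size, vec) == 0) {
		count = 0;
		for (size_t i = 0; i < npages; i++) {
			if (vec[i] & 1) {	/* low bit set = page resident */
				printf("page %zu resident (offset %ld)\n",
				       i, (long)(i * psz));
				count++;
			}
		}
	}
	free(vec);
	munmap(map, st.st_size);
	return count;
}
```

Running this against the data file after the buffered read should show the 4 readahead pages at offsets 0..3*4096, i.e. exactly the range the direct read at 4*4096 never flushes.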





------------------ Original Message ------------------
From: "Dave Chinner" <david@fromorbit.com>
Sent: Wednesday, April 8, 2015, 12:49 PM
To: "YeYin" <eyniy@qq.com>
Cc: "xfs" <xfs@oss.sgi.com>
Subject: Re: XFS direct IO problem



On Wed, Apr 08, 2015 at 12:21:45PM +0800, YeYin wrote:
> Hi. About two months ago I asked about a problem in XFS; see
> here (http://oss.sgi.com/archives/xfs/2015-02/msg00197.html).
> 
> 
> After that, I enabled direct IO in MySQL; see
> here (https://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_flush_method).
> 
> 
> However, I found that MySQL performance is still sometimes poor. I
> used some tools (https://github.com/brendangregg/perf-tools) to
> trace the kernel and found some problems:

<snip>

> This causes bad performance, even with direct IO. I still don't
> understand why truncate_inode_page is not called.

Because the cached page must be outside the range of the direct IO
that is in progress - direct IO only tries to flush pages over the
range it is doing the IO over.

> Every time, after I run: echo 1 > /proc/sys/vm/drop_caches
> 
> performance immediately improves.

Because that flushes whatever page is in the cache. Can you identify
what offset that cached page is at? Tracing the xfs events will tell
you what pages that operation invalidates on each inode, and knowing
the offset may tell us why that page is not getting flushed.

Alternatively, write a simple C program that demonstrates the same
problem so we can reproduce it easily, fix the problem and turn it
into a regression test....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


