From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kai Mäkisara
Subject: Re: After memory pressure: can't read from tape anymore
Date: Fri, 03 Dec 2010 16:59:29 +0200
Message-ID: <4CF905D1.6050903@kolumbus.fi>
References: <1290971729.2814.13.camel@larosa> <20101203212453W.fujita.tomonori@lab.ntt.co.jp>
In-Reply-To: <20101203212453W.fujita.tomonori@lab.ntt.co.jp>
List-Id: linux-scsi@vger.kernel.org
To: FUJITA Tomonori
Cc: lkolbe@techfak.uni-bielefeld.de, linux-scsi@vger.kernel.org

On 12/03/2010 02:27 PM, FUJITA Tomonori wrote:
> On Mon, 29 Nov 2010 19:09:46 +0200 (EET)
> Kai Makisara wrote:
>
>>> This same behaviour appears when we're doing a few incremental backups;
>>> after a while, it just isn't possible to use the tape drives anymore -
>>> every I/O operation gives an I/O error, even a simple dd bs=64k
>>> count=10. After a restart, the system behaves correctly until
>>> -seemingly- another memory pressure situation has occurred.
>>>
>> This is predictable. The maximum number of scatter/gather segments seems
>> to be 128. The st driver first tries to set up the transfer directly from
>> the user buffer to the HBA. The user buffer is usually fragmented, so one
>> scatter/gather segment is used for each page. Assuming a 4 kB page size,
>> the maximum size of the direct transfer is 128 x 4 kB = 512 kB.
>
> Can we make enlarge_buffer a bit friendlier to the memory allocator?
>
> His problem is that the driver can't allocate 2 MB with the hardware
> limit of 128 segments.
>
> enlarge_buffer tries to use ST_MAX_ORDER, and if that allocation (a 256 kB
> page) fails, enlarge_buffer fails.
> It could try smaller orders instead?
>
> Not tested at all.
>
>
> diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
> index 5b7388f..119544b 100644
> --- a/drivers/scsi/st.c
> +++ b/drivers/scsi/st.c
> @@ -3729,7 +3729,8 @@ static int enlarge_buffer(struct st_buffer * STbuffer, int new_size, int need_dm
>  		b_size = PAGE_SIZE << order;
>  	} else {
>  		for (b_size = PAGE_SIZE, order = 0;
> -		     order < ST_MAX_ORDER && b_size < new_size;
> +		     order < ST_MAX_ORDER &&
> +		     max_segs * (PAGE_SIZE << order) < new_size;
>  		     order++, b_size *= 2)
>  			; /* empty */
>  	}

You are correct. The loop does not work as it should at all. Years ago,
the strategy was to start with blocks as big as possible to minimize the
number of s/g segments. Nowadays the segments must all be the same size,
so the old logic no longer applies.

I have not tested the patch either, but it looks correct. Thanks for
noticing this bug. I hope this helps the users.

The question about the number of s/g segments is still valid for the
direct i/o case, but that is an optimization issue, not a matter of
whether one can read/write at all.

Thanks,
Kai