From: "Eli Malul"
Subject: Huge memory allocation
Date: Tue, 1 Mar 2011 15:04:56 +0200
To: dm-devel@redhat.com

dm_malloc_aux does not allow memory allocations greater than 50000000 bytes.

Is there a reason for that?

I would like to raise this limit to 100000000. Do you see a problem with that?

Thanks,
Eli

From: Alasdair G Kergon
Subject: Re: Huge memory allocation
Date: Tue, 1 Mar 2011 13:46:06 +0000
To: Eli Malul
Cc: dm-devel@redhat.com

On Tue, Mar 01, 2011 at 03:04:56PM +0200, Eli Malul wrote:
> dm_malloc_aux does not allow memory allocations greater than 50000000.

That limit was there to catch software bugs: it was a number bigger than should ever be needed. Change it if you need to, but I will need stronger evidence of the need for your particular way of doing things before I'll change it in the upstream tree!

Alasdair

From: "Eli Malul"
Subject: Re: Huge memory allocation
Date: Thu, 3 Mar 2011 10:05:46 +0200
To: Alasdair G Kergon
Cc: dm-devel@redhat.com

I am expecting to have lots of (up to a million) scattered extents across some volumes which I am required to mirror.
Since a mirror mapped device with a table that large consumes an unbearable amount of memory (e.g. for 10,000 extents I saw about 6 GB of memory allocated by device-mapper), I am going to create two linear devices which map these extents, and mirror them.

In addition, I am required to preserve the original extents' offsets, since they hold existing user data used by DB applications. To achieve that I will create another linear device that simulates the original extents' offsets and is mapped onto the mirrored device, so the client will continue to read and write the same offsets.

From: Zdenek Kabelac
Subject: Re: Huge memory allocation
Date: Thu, 03 Mar 2011 09:41:47 +0100
Cc: device-mapper development, Eli Malul

On 3.3.2011 09:05, Eli Malul wrote:
> I am expecting to have lots of (up to a million) scattered extents across
> some volumes which I am required to mirror.
> Since a mirror mapped device with a table that large consumes an unbearable
> amount of memory (e.g. for 10,000 extents I saw about 6 GB of memory
> allocated by device-mapper) I am going to create two linear devices
> which map these extents and mirror them.
>
> In addition, I am required to preserve the original extents' offsets
> since they hold existing user data used by DB applications.
> To achieve that I will create another linear device to simulate the
> original extents' offsets which shall be mapped to the created mirrored
> device, so the client will continue to read and write the same offsets.

Aren't you trying to reinvent the dm-replicator target? (It is available as an extra kernel patch.)

Maybe you should first describe exactly what you are trying to achieve; I'd guess there are better ways to reach that goal. Updating kernel tables is an expensive operation, especially if you plan to have tables in the range of multiple megabytes, so this looks like the wrong plan...

Zdenek

From: "Eli Malul"
Subject: Re: Huge memory allocation
Date: Thu, 3 Mar 2011 10:58:51 +0200
To: Zdenek Kabelac
Cc: device-mapper development

I would like to accelerate read IO performance for user data which resides on one machine (with low IO performance due to slow media) and might be scattered across many different extents, by mirroring these extents to another machine with much better IO performance and directing read requests to that
machine (I have already added a patch to dm-raid1.c to set a preferred-read capability).

Any suggestions would be appreciated...
From: Zdenek Kabelac
Subject: Re: Huge memory allocation
Date: Thu, 03 Mar 2011 11:00:19 +0100
To: dm-devel@redhat.com

On 3.3.2011 09:58, Eli Malul wrote:
> I would like to accelerate read IO performance for user data which
> resides on one machine (with low IO performance due to slow media) and
> might be scattered across many different extents, by mirroring these
> extents to another machine with much better IO performance and directing
> read requests to that machine (I have already added a patch to
> dm-raid1.c to set a preferred-read capability).
>
> Any suggestions would be appreciated...

There were some projects like dm-cache and probably a few others
(http://visa.cis.fiu.edu/ming/dmcache/).

You will need to create a new dm target for what you are trying to achieve; handling this in userspace could only serve as a prototype. The structures in libdm are not designed to work efficiently with millions of extents passed to the kernel through an ioctl operation, so that path is probably not the best one...

You may still want to check the dm-replicator or DRBD technology. But it all depends on how you plan to synchronize above the block level, and how write operations are supposed to be handled.

Zdenek
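For reference, the device stacking Eli describes earlier in the thread (linear devices gathering the scattered extents, a mirror over them, and a third linear device restoring the original offsets) could be sketched with dmsetup tables along these lines. All device names, extent offsets, and sizes here are invented for illustration, assuming a 6000000-sector source device with two extents at sectors 1000000 (length 2048) and 5000000 (length 4096); units are 512-byte sectors:

```
# slow_leg: gather the scattered extents from the slow volume
# (format: logical_start length linear source_dev source_offset)
0    2048 linear /dev/slow 1000000
2048 4096 linear /dev/slow 5000000

# fast_leg: the same extents packed onto the fast machine's volume
0    2048 linear /dev/fast 0
2048 4096 linear /dev/fast 2048

# mirrored: mirror the two legs (core log, region size 1024 sectors;
# the initial resync copies slow_leg onto fast_leg)
0 6144 mirror core 1 1024 2 /dev/mapper/slow_leg 0 /dev/mapper/fast_leg 0

# client_view: re-expose the mirrored extents at their original offsets;
# the untouched ranges map straight back to /dev/slow so the table
# stays contiguous and the DB keeps using the addresses it expects
0       1000000 linear /dev/slow 0
1000000 2048    linear /dev/mapper/mirrored 0
1002048 3997952 linear /dev/slow 1002048
5000000 4096    linear /dev/mapper/mirrored 2048
5004096 995904  linear /dev/slow 5004096
```

This keeps the mirror's own table at two entries regardless of how many extents the linear legs gather, which is the point of the construction; the per-extent cost moves into the two linear tables.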