* UBI memory leak after creating and removing volumes
@ 2009-02-17 12:01 John.Smith
From: John.Smith @ 2009-02-17 12:01 UTC (permalink / raw)
To: linux-mtd
Hello,
I am using a 2.6.18 kernel, patched with MTD from kernel 2.6.21, and
UBI from the mainline kernel as of a few days later, 1 May 2007. The
whole system is running on an embedded MIPS device.
I have NAND, UBI, gluebi, and jffs2; jffs2 is mounted on
an mtdblock device. Broadly, it all works.
I am experiencing a memory leak revealed in recent stress tests. The
stress tests create and delete many UBI volumes, and are broadly
equivalent to the following:
( cat /proc/meminfo
  i=0
  while [ ${i} -lt 100 ] ; do
      ubimkvol -d 0 -n 15 -N ubi-vol -s 1000
      ubirmvol -d 0 -n 15
      i=$((i+1))
  done
  cat /proc/meminfo ) | grep Slab
which shows slab memory increasing by 80 KB over 100 iterations.
Also:
( cat /proc/meminfo
  i=0
  while [ ${i} -lt 100 ] ; do
      ubimkvol -d 0 -n 15 -N ubi-vol1 -s 1000
      ubimkvol -d 0 -n 16 -N ubi-vol2 -s 1000
      ubirmvol -d 0 -n 16
      ubirmvol -d 0 -n 15
      i=$((i+1))
  done
  cat /proc/meminfo ) | grep Slab
which shows slab memory increasing by 300 KB over 100 iterations.
I encountered the second case first, and have explored it. It seems there
is a single elevator queue associated with the mtdblock devices. The
sysfs items associated with the queue of the first UBI volume are not
being released.
I see a related effect by creating two ubi volumes, deleting the
first, and inspecting the entries in the /sys/block/mtdblock*
directories:
/ # ubimkvol -d 0 -n 15 -N ubi-vol-1 -s 10000
/ # ls /sys/block/mtdblock7
dev holders queue range removable ...
/ # ubimkvol -d 0 -n 16 -N ubi-vol-2 -s 10000
/ # ls /sys/block/mtdblock8
dev holders queue range removable ...
/ # ubirmvol -d 0 -n 15
/ # ls /sys/block/mtdblock8
dev holders range removable ...
Notice that the "queue" entry has disappeared.
Have I introduced problems by mixing and matching kernel versions?
Can you give any hints as to how to fix the leak? I am guessing that
there are multiple copies of data structures associated with the
elevator queue, and that this is somehow wrong. Updating the kernel as a
whole is not really viable.
Regards
John
John Smith
This E-mail and any attachments hereto are strictly confidential and intended solely for the addressee. If you are not the intended addressee please notify the sender by return and delete the message. You must not disclose, forward or copy this E-mail or attachments to any third party without the prior consent of the sender. Pace plc is registered in England and Wales (Company no. 1672847) and our Registered Office is at Victoria Road, Saltaire, West Yorkshire, BD18 3LF, UK. Tel +44 (0) 1274 532000 Fax +44 (0) 1274 532010. <http://www.pace.com>
Save where otherwise agreed in writing between you and Pace (i) all orders for goods and/or services placed by you are made pursuant to Pace's standard terms and conditions of sale which may have been provided to you, or in any event are available at http://www.pace.com/uktcsale.pdf (ii) all orders for goods and/or services placed by Pace are subject to Pace's standard terms and conditions of purchase which may have been provided to you, or in any event are available at http://www.pace.com/uktcpurch.pdf. All other inconsistent terms in any other documentation including without limitation any purchase order, reschedule instruction, order acknowledgement, delivery note or invoice are hereby excluded.
This message has been scanned for viruses by BlackSpider MailControl - www.blackspider.com
* Re: UBI memory leak after creating and removing volumes
From: Artem Bityutskiy @ 2009-02-17 12:23 UTC (permalink / raw)
To: John.Smith; +Cc: linux-mtd
On Tue, 2009-02-17 at 12:01 +0000, John.Smith@pace.com wrote:
> Hello,
> I am using a 2.6.18 Kernel, patched with MTD from Kernel 2.6.21, and
> UBI from the mainline kernel a few days later on 1 May 2007. The
> whole is running on an embedded MIPS device.
>
> I have NAND, and UBI, and gluebi, and jffs2. I use jffs2, mounted on
> an mtdblock device. Broadly, it all works.
>
> I am experiencing a memory leak revealed in recent stress tests. The
> stress tests create and delete many UBI volumes, and are broadly
> equivalent to the following:
Can you please enable /proc/slab_allocators - it tracks all allocators,
and if we have a leak it may point to the function which is guilty.
To have /proc/slab_allocators, do the following:
1. Enable SLAB, not SLUB. In the kernel config menu go to
"General setup --->", then
"Choose SLAB allocator (SLUB (Unqueued Allocator)) --->"
and choose SLAB.
2. Go to the root menu, then go to
"Kernel hacking --->" and enable
"[*] Debug slab memory allocations" and
"[*] Memory leak debugging"
Recompile the kernel, and you will have nice instrumentation for finding
the memory leak - the /proc/slab_allocators file. Please play with this.
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
* RE: UBI memory leak after creating and removing volumes
@ 2009-02-17 14:33 John.Smith
From: John.Smith @ 2009-02-17 14:33 UTC (permalink / raw)
To: dedekind; +Cc: linux-mtd
Artem Bityutskiy wrote:
>
> On Tue, 2009-02-17 at 12:01 +0000, John.Smith@pace.com wrote:
>
> > Hello,
>
> > I am using a 2.6.18 Kernel, patched with MTD from Kernel 2.6.21, and
> > UBI from the mainline kernel a few days later on 1 May 2007. The
> > whole is running on an embedded MIPS device.
> >
> > I have NAND, and UBI, and gluebi, and jffs2. I use jffs2, mounted
> > on an mtdblock device. Broadly, it all works.
> >
> > I am experiencing a memory leak revealed in recent stress tests. The
> > stress tests create and delete many UBI volumes, and are broadly
> > equivalent to the following:
>
> Can you please enable /proc/slab_allocators - it tracks all allocators
> and if we have a leak - it may point to the function which is guilty.
>
> Recompile the kernel, and you will have a nice instrumentation to find
> memory leak - the /proc/slab_allocators file. Please, play with this.
After 0, 1000 and 2000 iterations of a test of creating 2 UBI volumes,
then removing them, /proc/slab_allocators shows these three items
obviously increasing:
inode_cache: 327 alloc_inode+0x140/0x148
inode_cache: 3329 alloc_inode+0x140/0x148
inode_cache: 6329 alloc_inode+0x140/0x148
(3 objects per iteration)
sysfs_dir_cache: 1402 sysfs_new_dirent+0x2c/0xa0
sysfs_dir_cache: 15402 sysfs_new_dirent+0x2c/0xa0
sysfs_dir_cache: 29402 sysfs_new_dirent+0x2c/0xa0
(14 objects per iteration)
dentry_cache: 669 d_alloc+0x30/0x214
dentry_cache: 3823 d_alloc+0x30/0x214
dentry_cache: 6823 d_alloc+0x30/0x214
(3 objects per iteration)
I don't know how to track these things down fully. But I
believe they relate to the elevator queue.
I added some printks to elv_register_queue and __elv_unregister_queue,
so the functions look like:

int elv_register_queue(struct request_queue *q)
{
	elevator_t *e = q->elevator;

	printk("elv_register_queue: e->kobj.dentry at %p val %p\n",
	       &e->kobj.dentry, e->kobj.dentry);
	kobject_add(&e->kobj);
	/* Do more stuff */
	printk("elv_register_queue: e->kobj.dentry at %p val %p AFTER\n",
	       &e->kobj.dentry, e->kobj.dentry);
	return 0;
}

static void __elv_unregister_queue(elevator_t *e)
{
	printk("__elv_unregister_queue: e->kobj.dentry at %p val %p\n",
	       &e->kobj.dentry, e->kobj.dentry);
	kobject_uevent(&e->kobj, KOBJ_REMOVE);
	kobject_del(&e->kobj);
	printk("__elv_unregister_queue: e->kobj.dentry at %p val %p AFTER\n",
	       &e->kobj.dentry, e->kobj.dentry);
}
When creating and deleting volumes, e->kobj.dentry is always at the
same address.
After the register_queue operation, e->kobj.dentry points to a new
dentry, so e->kobj.dentry is getting overwritten.
After the unregister_queue operation, e->kobj.dentry is NULL. The
kobject_del calls sysfs_remove_dir, which only releases memory when
e->kobj.dentry is non-NULL. So the unregister for the second UBI
volume releases nothing.
But I don't know what to do to plug the leak.
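To make the pattern concrete, here is a small userspace model of the
behaviour I described (names are made up for illustration; this is not
the real kernel code). register_queue models kobject_add clobbering the
shared dentry pointer, and unregister_queue models kobject_del only
releasing when the dentry is non-NULL:

```c
#include <stdlib.h>

/* Hypothetical stand-in for the shared elevator kobject; 'dentry'
 * models e->kobj.dentry and 'live_dentries' counts the objects that
 * would live in sysfs_dir_cache. */
struct kobj_model { void *dentry; };

static int live_dentries;

/* Models kobject_add(): unconditionally installs a fresh dentry,
 * clobbering (and thus leaking) any previous one. */
static void register_queue(struct kobj_model *k)
{
	k->dentry = malloc(1);
	live_dentries++;
}

/* Models kobject_del()/sysfs_remove_dir(): only releases when a
 * dentry is present, then sets the pointer to NULL. */
static void unregister_queue(struct kobj_model *k)
{
	if (k->dentry) {
		free(k->dentry);
		k->dentry = NULL;
		live_dentries--;
	}
}

/* One nested create/create/remove/remove cycle against the SINGLE
 * shared kobject; returns how many dentries the cycle leaked. */
static int leak_per_cycle(void)
{
	struct kobj_model shared = { NULL };
	int before = live_dentries;

	register_queue(&shared);   /* mtdblock7 appears */
	register_queue(&shared);   /* mtdblock8 appears: 1st dentry clobbered */
	unregister_queue(&shared); /* mtdblock7 removed: freed, set to NULL */
	unregister_queue(&shared); /* mtdblock8 removed: NULL, nothing freed */
	return live_dentries - before;
}
```

In this model each nested create/remove pair leaks one object, while a
non-nested register/unregister/register/unregister sequence leaks
nothing, which matches what the stress tests show.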
Regards,
John
John Smith
* RE: UBI memory leak after creating and removing volumes
From: Artem Bityutskiy @ 2009-02-17 15:24 UTC (permalink / raw)
To: John.Smith; +Cc: linux-mtd
On Tue, 2009-02-17 at 14:33 +0000, John.Smith@pace.com wrote:
> After 0, 1000 and 2000 iterations of a test of creating 2 UBI volumes,
> then removing them, /proc/slab_allocators shows these three items
> obviously increasing:
>
> inode_cache: 327 alloc_inode+0x140/0x148
> inode_cache: 3329 alloc_inode+0x140/0x148
> inode_cache: 6329 alloc_inode+0x140/0x148
> (3 objects per iteration)
>
> sysfs_dir_cache: 1402 sysfs_new_dirent+0x2c/0xa0
> sysfs_dir_cache: 15402 sysfs_new_dirent+0x2c/0xa0
> sysfs_dir_cache: 29402 sysfs_new_dirent+0x2c/0xa0
> (14 objects per iteration)
>
> dentry_cache: 669 d_alloc+0x30/0x214
> dentry_cache: 3823 d_alloc+0x30/0x214
> dentry_cache: 6823 d_alloc+0x30/0x214
> (3 objects per iteration)
Hmm, maybe this is related to sysfs? Every time you create or delete
a volume UBIFS creates/deletes sysfs entries. Maybe some are forgotten,
or it messes up kobject refcounting, so the kobjects are never released.
> I don't know how to track these things down fully. But I
> believe they relate to the elevator queue.
Hmm? Sorry, I did not realize how the elevator may be involved in
UBI volume creation/deletion. You mean when we create a volume,
userspace udev is called and creates a device node on your host
FS, which involves elevators? You can try disabling udev then.
Anyway, too late, I have to go home now.
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
* RE: UBI memory leak after creating and removing volumes
From: Artem Bityutskiy @ 2009-02-17 15:24 UTC (permalink / raw)
To: John.Smith; +Cc: linux-mtd
On Tue, 2009-02-17 at 17:24 +0200, Artem Bityutskiy wrote:
> On Tue, 2009-02-17 at 14:33 +0000, John.Smith@pace.com wrote:
> > After 0, 1000 and 2000 iterations of a test of creating 2 UBI volumes,
> > then removing them, /proc/slab_allocators shows these three items
> > obviously increasing:
> >
> > inode_cache: 327 alloc_inode+0x140/0x148
> > inode_cache: 3329 alloc_inode+0x140/0x148
> > inode_cache: 6329 alloc_inode+0x140/0x148
> > (3 objects per iteration)
> >
> > sysfs_dir_cache: 1402 sysfs_new_dirent+0x2c/0xa0
> > sysfs_dir_cache: 15402 sysfs_new_dirent+0x2c/0xa0
> > sysfs_dir_cache: 29402 sysfs_new_dirent+0x2c/0xa0
> > (14 objects per iteration)
> >
> > dentry_cache: 669 d_alloc+0x30/0x214
> > dentry_cache: 3823 d_alloc+0x30/0x214
> > dentry_cache: 6823 d_alloc+0x30/0x214
> > (3 objects per iteration)
>
> Hmm, maybe this is related to sysfs? Every time you create or delete
> a volume UBIFS creates/deletes sysfs entries. Maybe some are forgotten,
s/UBIFS/UBI/
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
* RE: UBI memory leak after creating and removing volumes
From: Artem Bityutskiy @ 2009-02-17 15:39 UTC (permalink / raw)
To: John.Smith; +Cc: linux-mtd
On Tue, 2009-02-17 at 14:33 +0000, John.Smith@pace.com wrote:
> Artem Bityutskiy wrote:
> >
> > On Tue, 2009-02-17 at 12:01 +0000, John.Smith@pace.com wrote:
> >
> > > Hello,
> >
> > > I am using a 2.6.18 Kernel, patched with MTD from Kernel 2.6.21, and
> > > UBI from the mainline kernel a few days later on 1 May 2007. The
> > > whole is running on an embedded MIPS device.
> > >
> > > I have NAND, and UBI, and gluebi, and jffs2. I use jffs2, mounted
> > > on an mtdblock device. Broadly, it all works.
> > >
> > > I am experiencing a memory leak revealed in recent stress tests. The
> > > stress tests create and delete many UBI volumes, and are broadly
> > > equivalent to the following:
> >
> > Can you please enable /proc/slab_allocators - it tracks all allocators
> > and if we have a leak - it may point to the function which is guilty.
> >
> > Recompile the kernel, and you will have a nice instrumentation to find
> > memory leak - the /proc/slab_allocators file. Please, play with this.
>
> After 0, 1000 and 2000 iterations of a test of creating 2 UBI volumes,
> then removing them, /proc/slab_allocators shows these three items
> obviously increasing:
>
> inode_cache: 327 alloc_inode+0x140/0x148
> inode_cache: 3329 alloc_inode+0x140/0x148
> inode_cache: 6329 alloc_inode+0x140/0x148
> (3 objects per iteration)
>
> sysfs_dir_cache: 1402 sysfs_new_dirent+0x2c/0xa0
> sysfs_dir_cache: 15402 sysfs_new_dirent+0x2c/0xa0
> sysfs_dir_cache: 29402 sysfs_new_dirent+0x2c/0xa0
> (14 objects per iteration)
>
> dentry_cache: 669 d_alloc+0x30/0x214
> dentry_cache: 3823 d_alloc+0x30/0x214
> dentry_cache: 6823 d_alloc+0x30/0x214
> (3 objects per iteration)
Wait, maybe these are just caches? Did you try to drop
caches? Maybe there is no leak and we just have inode/dentry
caches growing, probably because of udev?
Try echo 3 > /proc/sys/vm/drop_caches
(see Documentation/sysctl/vm.txt)
> This E-mail and any attachments hereto are strictly confidential and intended solely for the addressee. If you are not the intended addressee please notify the sender by return and delete the message. You must not disclose, forward or copy this E-mail or attachments to any third party without the prior consent of the sender. Pace plc is registered in England and Wales (Company no. 1672847) and our Registered Office is at Victoria Road, Saltaire, West Yorkshire, BD18 3LF, UK. Tel +44 (0) 1274 532000 Fax +44 (0) 1274 532010. <http://www.pace.com>
> Save where otherwise agreed in writing between you and Pace (i) all orders for goods and/or services placed by you are made pursuant to Pace's standard terms and conditions of sale which may have been provided to you, or in any event are available at http://www.pace.com/uktcsale.pdf (ii) all orders for goods and/or services placed by Pace are subject to Pace's standard terms and conditions of purchase which may have been provided to you, or in any event are available at http://www.pace.com/uktcpurch.pdf. All other inconsistent terms in any other documentation including without limitation any purchase order, reschedule instruction, order acknowledgement, delivery note or invoice are hereby excluded.
Would you please remove this disclaimer, which makes no sense on a
public mailing list.
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
* Re: UBI memory leak after creating and removing volumes
From: John Smith @ 2009-02-17 22:02 UTC (permalink / raw)
To: dedekind; +Cc: linux-mtd
Artem Bityutskiy wrote:
> On Tue, 2009-02-17 at 14:33 +0000, John.Smith@pace.com wrote:
>> After 0, 1000 and 2000 iterations of a test of creating 2 UBI volumes,
>> then removing them, /proc/slab_allocators shows these three items
>> obviously increasing:
>>
>> inode_cache: 327 alloc_inode+0x140/0x148
>> inode_cache: 3329 alloc_inode+0x140/0x148
>> inode_cache: 6329 alloc_inode+0x140/0x148
>> (3 objects per iteration)
>>
>> sysfs_dir_cache: 1402 sysfs_new_dirent+0x2c/0xa0
>> sysfs_dir_cache: 15402 sysfs_new_dirent+0x2c/0xa0
>> sysfs_dir_cache: 29402 sysfs_new_dirent+0x2c/0xa0
>> (14 objects per iteration)
>>
>> dentry_cache: 669 d_alloc+0x30/0x214
>> dentry_cache: 3823 d_alloc+0x30/0x214
>> dentry_cache: 6823 d_alloc+0x30/0x214
>> (3 objects per iteration)
>
> Hmm, maybe this is related to sysfs? Every time you create or delete
> a volume UBIFS creates/deletes sysfs entries. Maybe some are forgotten,
> or it messes up kobject refcounting, so the kobjects are never released.
>
I agree, they seem to be sysfs things.
>> I don't know how to track these things down fully. But I
>> believe they relate to the elevator queue.
>
> Hmm? Sorry, I did not realize how the elevator may be involved in
> UBI volume creation/deletion. You mean when we create a volume,
> userspace udev is called and creates a device node on your host
> FS, which involves elevators? You can try disabling udev then.
>
I don't think udev is involved.
This is my understanding:
When a UBI volume is created, a matching MTD device is created.
The MTD device has both character and block device variants.
A block device is typically like a disk, so it needs some sort of
queue and elevator algorithm to manage accesses to the disk.
Typically there is one queue per disk. But in the MTD world there
is no real need for a queue and cunning algorithm, so there is
just one queue for all the MTD devices, managed by a kernel thread.
In some sense the sysfs entries corresponding to the queue
should be linked to some top-level MTD entity, and referenced
from the MTD block devices. Instead the queue seems to be present
in all the mtdblock devices, and something doesn't quite work.
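In pseudo-C, the ownership I have in mind would look something like
this (all names are made up; it is a sketch of the idea, not a patch):
the shared queue owns its sysfs state once, each mtdblock device takes
a reference, and teardown happens only when the last device goes away.

```c
#include <stdlib.h>

/* Hypothetical model: one queue shared by all MTD block devices,
 * with its sysfs state created by the first user and destroyed by
 * the last, instead of being registered once per device. */
struct shared_queue {
	int refs;
	void *sysfs_state;  /* stands in for the queue's kobject/dentries */
};

static int live_state;  /* counts live sysfs-like objects */

/* A device appearing takes a reference; only the first user
 * creates the sysfs state. */
static void queue_get(struct shared_queue *q)
{
	if (q->refs++ == 0) {
		q->sysfs_state = malloc(1);
		live_state++;
	}
}

/* A device disappearing drops its reference; only the last user
 * tears the state down, so nested create/remove cannot leak. */
static void queue_put(struct shared_queue *q)
{
	if (--q->refs == 0) {
		free(q->sysfs_state);
		q->sysfs_state = NULL;
		live_state--;
	}
}
```

With this arrangement the nested sequence get/get/put/put creates the
state once and frees it once, unlike the current code where the second
registration clobbers the first.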
Thanks for your replies,
Regards,
John
John Smith
P.S. Please forgive the disclaimer which is probably about to
be appended automatically. I will try to get it removed.
* Re: UBI memory leak after creating and removing volumes
From: John Smith @ 2009-02-21 21:30 UTC (permalink / raw)
To: dedekind; +Cc: linux-mtd
Previously I said:
> On Tue, 2009-02-17 at 12:01 +0000, John.Smith@pace.com wrote:
> > I am using a 2.6.18 Kernel, patched with MTD from Kernel 2.6.21, and
> > UBI from the mainline kernel a few days later on 1 May 2007. The
> > whole is running on an embedded MIPS device.
> >
The problem is still present on current versions of Linux. I have
tested on 2.6.28.6, using QEMU to emulate a virtual x86 machine.
With these settings in the kernel .config file:
CONFIG_MTD=y
CONFIG_MTD_BLKDEVS=y
CONFIG_MTD_MTDRAM=y
CONFIG_MTD_UBI=y
then a sequence of calls of the form

for ((i=0; i<100; i++)) ; do
    ubimkvol -d 0 -n 2 -N vol2 -s 1000
    ubimkvol -d 0 -n 3 -N vol3 -s 1000
    ubirmvol -d 0 -n 3
    ubirmvol -d 0 -n 2
done
cat /proc/meminfo | grep Slab

shows Slab increasing by about 1.6 kB per iteration.
The problem can also be demonstrated using MTD modules,
without using UBI. If I build two MTD simulated devices
as kernel modules using these settings in .config:
CONFIG_MTD_MTDRAM=m
CONFIG_MTD_NAND_NANDSIM=m
then
insmod mtdram.ko
insmod nandsim.ko
rmmod nandsim.ko
rmmod mtdram.ko
shows a similar leak of about 1.6 kB.
The leak does not happen if the calls are not nested. So
insmod mtdram.ko
rmmod mtdram.ko
insmod nandsim.ko
rmmod nandsim.ko
and
ubimkvol -d 0 -n 2 -N vol2 -s 1000
ubirmvol -d 0 -n 2
ubimkvol -d 0 -n 3 -N vol3 -s 1000
ubirmvol -d 0 -n 3
do not leak.
The leak does not happen if
# CONFIG_MTD_BLKDEVS is not set.
From /proc/slabinfo it seems that we are leaking sysfs_dir_cache
objects.
John
John Smith
P.S. Some automatic system is about to add a silly disclaimer. Ignore it...