linux-raid.vger.kernel.org archive mirror
 messages from 2009-05-19 07:31:10 to 2009-05-29 13:41:55 UTC

[PATCH v2 00/11] Asynchronous raid6 acceleration (part 1 of 3)
 2009-05-29 13:41 UTC  (29+ messages)
` [PATCH v2 01/11] async_tx: rename zero_sum to val
` [PATCH v2 02/11] async_tx: kill ASYNC_TX_DEP_ACK flag
` [PATCH v2 03/11] async_tx: structify submission arguments, add scribble
` [PATCH v2 04/11] async_xor: permit callers to pass in a 'dma/page scribble' region
` [PATCH v2 05/11] md/raid5: add scribble region for buffer lists
` [PATCH v2 06/11] async_tx: add sum check flags
` [PATCH v2 08/11] async_tx: add support for asynchronous GF multiplication
` [PATCH v2 09/11] async_tx: add support for asynchronous RAID6 recovery operations
` [PATCH v2 11/11] async_tx: raid6 recovery self test

[PATCH/RFC 0/2] md: personality pushdown patches -- intro
 2009-05-29 13:18 UTC  (3+ messages)
` [PATCH 1/2] md: Push down reconstruction log message to personality code
` [PATCH 2/2] md: Move check for bitmap presence "

Adaptec 2405 : hardware or software raid?
 2009-05-29  8:58 UTC  (10+ messages)
  ` Upgrading a software RAID

forcing check of RAID1 arrays causes lockup
 2009-05-28 23:25 UTC  (5+ messages)

Need information regarding RAID 6 Async APIs for kernel version 2.6.27
 2009-05-28 22:56 UTC  (2+ messages)

[RFC PATCH] dm-csum: A new device mapper target that checks data integrity
 2009-05-28 19:29 UTC  (9+ messages)
` [RFC PATCH v2] "

[001/002] raid0 reshape
 2009-05-28 19:07 UTC  (15+ messages)
    ` OT: busting a gut (was Re: [001/002] raid0 reshape)

raid failure and LVM volume group availability
 2009-05-28 18:48 UTC  (7+ messages)

resync duration?
 2009-05-28 16:43 UTC  (5+ messages)

Western Digital RE3
 2009-05-28 13:29 UTC  (10+ messages)

FW: Detecting errors on the RAID disks
 2009-05-28  8:19 UTC  (3+ messages)

Western Digital RE3
 2009-05-27 15:15 UTC 

[PATCH 0/6] md: More sector_t conversions -- intro
 2009-05-27  8:02 UTC  (14+ messages)
` [PATCH 1/6] md: Make mddev->chunk_size sector-based
` [PATCH 2/6] md: Fix a bug in super_1_sync()
` [PATCH 3/6] md: Convert mddev->new_chunk to sectors
` [PATCH 4/6] md: convert conf->chunk_size and conf->prev_chunk "
` [PATCH 5/6] md/raid5: Use is_power_of_2() in raid5_reconfig()/raid6_reconfig()
` [PATCH 6/6] md/raid5: Kill outdated comment

Raid and badblocks
 2009-05-26 15:27 UTC  (5+ messages)

How to un-degrade an array after a totally spurious failure?
 2009-05-26 10:47 UTC  (5+ messages)

[PULL REQUEST] md - various fixes for 2.6.30
 2009-05-26  3:06 UTC 

LVM->RAID->LVM
 2009-05-25 12:32 UTC  (3+ messages)

[md PATCH 3/3] md: export 'frozen' resync state through sysfs
 2009-05-25  4:33 UTC  (3+ messages)
` [md PATCH 2/3] md: bitmap: improve bitmap maintenance code
` [md PATCH 1/3] md: improve errno return when setting array_size

Missing md superblock on added devices after grow 
 2009-05-24 18:06 UTC  (5+ messages)

internal write-intent bitmap, chunksize, superblock
 2009-05-23 21:07 UTC  (2+ messages)

Performance of a software raid 5
 2009-05-22 23:00 UTC  (8+ messages)
  ` Poor write performance with write-intent bitmap?

Upgrading a RAID configuration
 2009-05-22 17:15 UTC 

[PATCH] md: Protecting mddev with barriers to avoid races
 2009-05-22 13:41 UTC  (6+ messages)

Does raid5 have error handling while reading?
 2009-05-22 11:21 UTC  (4+ messages)

[PATCH 0/3] Asynchronous raid6 acceleration (part 2 of 3)
 2009-05-21 19:04 UTC  (4+ messages)
` [PATCH 1/3] iop-adma: cleanup iop_adma_run_tx_complete_actions
` [PATCH 2/3] iop-adma: P+Q support for iop13xx adma engines
` [PATCH 3/3] iop-adma: P+Q self test

"raid array not clean" messages
 2009-05-21 18:38 UTC  (7+ messages)

Starting RAID 5
 2009-05-21 18:27 UTC  (7+ messages)

[PATCH 006/009]: raid1: chunk size check in run
 2009-05-21 13:32 UTC  (6+ messages)

[PATCH] md: Enhancements and cleanup for linear RAID
 2009-05-21  3:47 UTC  (9+ messages)

[md PATCH 4/6]: md to support page size chunks in the case of raid0
 2009-05-21  3:13 UTC  (8+ messages)

linear raid: mddev not protected in linear_add
 2009-05-20 22:35 UTC  (4+ messages)

[PATCH 002/009]: have raid0 report its formation
 2009-05-20 13:50 UTC  (3+ messages)

[raid0 PATCH 3/6]: Add support for chunk sizes of 4K*n instead of 4K*2^n
 2009-05-20  8:03 UTC  (3+ messages)

[PATCH 004/009]: md: chunk size check
 2009-05-20  1:43 UTC  (2+ messages)

[PATCH 007/009]: raid10: chunk size check in run
 2009-05-20  1:41 UTC  (2+ messages)

[PATCH 008/009]: raid5: chunk size check in run
 2009-05-20  1:39 UTC  (2+ messages)

[PATCH 003/009]: raid0: Enables chunk sizes other than 4K
 2009-05-20  1:38 UTC  (2+ messages)

[PATCH 001/009]: have raid0 compile with MD_DEBUG on
 2009-05-20  1:26 UTC  (2+ messages)

Iron hot-rod forever!
 2009-05-19 21:56 UTC 

[PATCH 007/009]: raid10: chunk size check in run
 2009-05-19 16:10 UTC 

[PATCH 007/009]: raid10: chunk size check in run
 2009-05-19 16:10 UTC 

[PATCH 007/009]: raid10: chunk size check in run
 2009-05-19 16:10 UTC 

[PATCH 002/009]: have raid0 report its formation
 2009-05-19 16:09 UTC 

[PATCH 007/009]: raid10: chunk size check in run
 2009-05-19 16:09 UTC 

[PATCH 004/009]: md: chunk size check
 2009-05-19 16:08 UTC 

[PATCH 002/009]: have raid0 report its formation
 2009-05-19 16:07 UTC 

[PATCH 005/009]: raid0: chunk size check in raid0_run
 2009-05-19 16:07 UTC 

[PATCH 009/009]: mdadm: 1K chunks for raid0
 2009-05-19 16:04 UTC 

[md PATCH 5/6]: 1K*n chunks
 2009-05-19 13:21 UTC  (3+ messages)

[PATCH 03/03] md: Binary search in linear raid
 2009-05-19 10:52 UTC 

[PATCH 02/03] md: Removing num_sector and replacing start_sector with end_sector
 2009-05-19 10:51 UTC 

[PATCH 01/03] md: Removal of hash table in linear raid
 2009-05-19 10:49 UTC 

RCU detected CPU 1 stall (t=4295904002/751 jiffies) Pid: 902, comm: md1_raid5
 2009-05-19 10:30 UTC  (3+ messages)


This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).