* Linux 2.6.29
@ 2009-03-23 23:29 Linus Torvalds
2009-03-24 6:19 ` Jesper Krogh
2009-03-27 13:35 ` Hans-Peter Jansen
0 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-23 23:29 UTC (permalink / raw)
To: Linux Kernel Mailing List
It's out there now, or at least in the process of getting mirrored out.
The most obvious change is the (temporary) change of logo to Tuz, the
Tasmanian Devil. But there's a number of driver updates and some m68k
header updates (fixing headers_install after the merge of non-MMU/MMU)
that end up being pretty noticeable in the diffs.
The shortlog (from -rc8, obviously - the full logs from 2.6.28 are too big
to even contemplate attaching here) is appended, and most of the non-logo
changes really shouldn't be all that noticeable to most people. Nothing
really exciting, although I admit to fleetingly considering another -rc
series just because the changes are bigger than I would have wished for
this late in the game. But there was little point in holding off the real
release any longer, I feel.
This obviously starts the merge window for 2.6.30, although as usual, I'll
probably wait a day or two before I start actively merging. I do that in
order to hopefully result in people testing the final plain 2.6.29 a bit
more before all the crazy changes start up again.
Linus
---
Aaro Koskinen (2):
ARM: OMAP: sched_clock() corrected
ARM: OMAP: Allow I2C bus driver to be compiled as a module
Abhijeet Joglekar (2):
[SCSI] libfc: Pass lport in exch_mgr_reset
[SCSI] libfc: when rport goes away (re-plogi), clean up exchanges to/from rport
Achilleas Kotsis (1):
USB: Add device id for Option GTM380 to option driver
Al Viro (1):
net: fix sctp breakage
Alan Stern (2):
USB: usbfs: keep async URBs until the device file is closed
USB: EHCI: expedite unlinks when the root hub is suspended
Albert Pauw (1):
USB: option.c: add ZTE 622 modem device
Alexander Duyck (1):
igb: remove ASPM L0s workaround
Andrew Vasquez (4):
[SCSI] qla2xxx: Correct address range checking for option-rom updates.
[SCSI] qla2xxx: Correct truncation in return-code status checking.
[SCSI] qla2xxx: Correct overwrite of pre-assigned init-control-block structure size.
[SCSI] qla2xxx: Update version number to 8.03.00-k4.
Andy Whitcroft (1):
suspend: switch the Asus Pundit P1-AH2 to old ACPI sleep ordering
Anirban Chakraborty (1):
[SCSI] qla2xxx: Correct vport delete bug.
Anton Vorontsov (1):
ucc_geth: Fix oops when using fixed-link support
Antti Palosaari (1):
V4L/DVB (10972): zl10353: i2c_gate_ctrl bug fix
Axel Wachtler (1):
USB: serial: add FTDI USB/Serial converter devices
Ben Dooks (6):
[ARM] S3C64XX: Set GPIO pin when select IRQ_EINT type
[ARM] S3C64XX: Rename IRQ_UHOST to IRQ_USBH
[ARM] S3C64XX: Fix name of USB host clock.
[ARM] S3C64XX: Fix USB host clock mux list
[ARM] S3C64XX: sparse warnings in arch/arm/plat-s3c64xx/s3c6400-clock.c
[ARM] S3C64XX: sparse warnings in arch/arm/plat-s3c64xx/irq.c
Benjamin Herrenschmidt (2):
emac: Fix clock control for 405EX and 405EXr chips
radeonfb: Whack the PCI PM register until it sticks
Benny Halevy (1):
NFSD: provide encode routine for OP_OPENATTR
Bjørn Mork (1):
ipv6: fix display of local and remote sit endpoints
Borislav Petkov (1):
ide-floppy: do not map dataless cmds to an sg
Carlos Corbacho (2):
acpi-wmi: Unmark as 'experimental'
acer-wmi: Unmark as 'experimental'
Chris Leech (3):
[SCSI] libfc: rport retry on LS_RJT from certain ELS
[SCSI] fcoe: fix handling of pending queue, prevent out of order frames (v3)
ixgbe: fix multiple unicast address support
Chris Mason (2):
Btrfs: Fix locking around adding new space_info
Btrfs: Clear space_info full when adding new devices
Christoph Paasch (2):
netfilter: conntrack: fix dropping packet after l4proto->packet()
netfilter: conntrack: check for NEXTHDR_NONE before header sanity checking
Chuck Lever (2):
NLM: Shrink the IPv4-only version of nlm_cmp_addr()
NLM: Fix GRANT callback address comparison when IPv6 is enabled
Corentin Chary (4):
asus-laptop: restore acpi_generate_proc_event()
eeepc-laptop: restore acpi_generate_proc_event()
asus-laptop: use select instead of depends on
platform/x86: depends instead of select for laptop platform drivers
Cyrill Gorcunov (1):
acpi: check for pxm_to_node_map overflow
Daisuke Nishimura (1):
vmscan: pgmoved should be cleared after updating recent_rotated
Dan Carpenter (1):
acer-wmi: double free in acer_rfkill_exit()
Dan Williams (1):
USB: Option: let cdc-acm handle Sony Ericsson F3507g / Dell 5530
Darius Augulis (1):
MX1 fix include
Dave Jones (1):
via-velocity: Fix DMA mapping length errors on transmit.
David Brownell (2):
ARM: OMAP: Fix compile error if pm.h is included
dm9000: locking bugfix
David S. Miller (3):
dnet: Fix warnings on 64-bit.
xfrm: Fix xfrm_state_find() wrt. wildcard source address.
sparc64: Reschedule KGDB capture to a software interrupt.
Davide Libenzi (1):
eventfd: remove fput() call from possible IRQ context
Dhananjay Phadke (1):
netxen: remove old flash check.
Dirk Hohndel (1):
USB: Add Vendor/Product ID for new CDMA U727 to option driver
Eilon Greenstein (3):
bnx2x: Adding restriction on sge_buf_size
bnx2x: Casting page alignment
bnx2x: Using DMAE to initialize the chip
Enrik Berkhan (1):
nommu: ramfs: pages allocated to an inode's pagecache may get wrongly discarded
Eric Sandeen (3):
ext4: fix header check in ext4_ext_search_right() for deep extent trees.
ext4: fix bogus BUG_ONs in in mballoc code
ext4: fix bb_prealloc_list corruption due to wrong group locking
FUJITA Tomonori (1):
ide: save the returned value of dma_map_sg
Geert Uytterhoeven (1):
ps3/block: Replace mtd/ps3vram by block/ps3vram
Geoff Levand (1):
powerpc/ps3: ps3_defconfig updates
Gerald Schaefer (1):
[S390] Dont check for pfn_valid() in uaccess_pt.c
Gertjan van Wingerde (1):
Update my email address
Grant Grundler (2):
parisc: fix wrong assumption about bus->self
parisc: update MAINTAINERS
Grant Likely (1):
Fix Xilinx SystemACE driver to handle empty CF slot
Greg Kroah-Hartman (3):
USB: usbtmc: fix stupid bug in open()
USB: usbtmc: add protocol 1 support
Staging: benet: remove driver now that it is merged in drivers/net/
Greg Ungerer (8):
m68k: merge the non-MMU and MMU versions of param.h
m68k: merge the non-MMU and MMU versions of swab.h
m68k: merge the non-MMU and MMU versions of sigcontext.h
m68k: use MMU version of setup.h for both MMU and non-MMU
m68k: merge the non-MMU and MMU versions of ptrace.h
m68k: merge the non-MMU and MMU versions of signal.h
m68k: use the MMU version of unistd.h for all m68k platforms
m68k: merge the non-MMU and MMU versions of siginfo.h
Gregory Lardiere (1):
V4L/DVB (10789): m5602-s5k4aa: Split up the initial sensor probe in chunks.
Hans Werner (1):
V4L/DVB (10977): STB6100 init fix, the call to stb6100_set_bandwidth needs an argument
Hartley Sweeten (1):
[ARM] 5419/1: ep93xx: fix build warnings about struct i2c_board_info
Heiko Carstens (2):
[S390] topology: define SD_MC_INIT to fix performance regression
[S390] ftrace/mcount: fix kernel stack backchain
Helge Deller (7):
parisc: BUG_ON() cleanup
parisc: fix section mismatch warnings
parisc: fix `struct pt_regs' declared inside parameter list warning
parisc: remove unused local out_putf label
parisc: fix dev_printk() compile warnings for accessing a device struct
parisc: add braces around arguments in assembler macros
parisc: fix 64bit build
Herbert Xu (1):
gro: Fix legacy path napi_complete crash
Huang Ying (1):
dm crypt: fix kcryptd_async_done parameter
Ian Dall (1):
Bug 11061, NFS mounts dropped
Igor M. Liplianin (1):
V4L/DVB (10976): Bug fix: For legacy applications stv0899 performs search only first time after insmod.
Ilya Yanok (3):
dnet: Dave DNET ethernet controller driver (updated)
dnet: replace obsolete *netif_rx_* functions with *napi_*
dnet: DNET should depend on HAS_IOMEM
Ingo Molnar (1):
kconfig: improve seed in randconfig
J. Bruce Fields (1):
nfsd: nfsd should drop CAP_MKNOD for non-root
James Bottomley (1):
parisc: remove klist iterators
Jan Dumon (1):
USB: unusual_devs: Add support for GI 0431 SD-Card interface
Jay Vosburgh (1):
bonding: Fix updating of speed/duplex changes
Jeff Moyer (1):
aio: lookup_ioctx can return the wrong value when looking up a bogus context
Jiri Slaby (8):
ACPI: remove doubled status checking
USB: atm/cxacru, fix lock imbalance
USB: image/mdc800, fix lock imbalance
USB: misc/adutux, fix lock imbalance
USB: misc/vstusb, fix lock imbalance
USB: wusbcore/wa-xfer, fix lock imbalance
ALSA: pcm_oss, fix locking typo
ALSA: mixart, fix lock imbalance
Jody McIntyre (1):
trivial: fix orphan dates in ext2 documentation
Johannes Weiner (3):
HID: fix incorrect free in hiddev
HID: fix waitqueue usage in hiddev
nommu: ramfs: don't leak pages when adding to page cache fails
John Dykstra (1):
ipv6: Fix BUG when disabled ipv6 module is unloaded
John W. Linville (1):
lib80211: silence excessive crypto debugging messages
Jorge Boncompte [DTI2] (1):
netns: oops in ip[6]_frag_reasm incrementing stats
Jouni Malinen (3):
mac80211: Fix panic on fragmentation with power saving
zd1211rw: Do not panic on device eject when associated
nl80211: Check that function pointer != NULL before using it
Karsten Wiese (1):
USB: EHCI: Fix isochronous URB leak
Kay Sievers (1):
parisc: dino: struct device - replace bus_id with dev_name(), dev_set_name()
Koen Kooi (1):
ARM: OMAP: board-omap3beagle: set i2c-3 to 100kHz
Krzysztof Helt (1):
ALSA: opl3sa2 - Fix NULL dereference when suspending snd_opl3sa2
Kumar Gala (2):
powerpc/mm: Respect _PAGE_COHERENT on classic ppc32 SW
powerpc/mm: Fix Respect _PAGE_COHERENT on classic ppc32 SW TLB load machines
Kyle McMartin (8):
parisc: fix use of new cpumask api in irq.c
parisc: convert (read|write)bwlq to inlines
parisc: convert cpu_check_affinity to new cpumask api
parisc: define x->x mmio accessors
parisc: update defconfigs
parisc: sba_iommu: fix build bug when CONFIG_PARISC_AGP=y
tulip: fix crash on iface up with shirq debug
Build with -fno-dwarf2-cfi-asm
Lalit Chandivade (1):
[SCSI] qla2xxx: Use correct value for max vport in LOOP topology.
Len Brown (1):
Revert "ACPI: make some IO ports off-limits to AML"
Lennert Buytenhek (1):
mv643xx_eth: fix unicast address filter corruption on mtu change
Li Zefan (1):
block: fix memory leak in bio_clone()
Linus Torvalds (7):
Fix potential fast PIT TSC calibration startup glitch
Fast TSC calibration: calculate proper frequency error bounds
Avoid 64-bit "switch()" statements on 32-bit architectures
Add '-fwrapv' to gcc CFLAGS
Fix race in create_empty_buffers() vs __set_page_dirty_buffers()
Move cc-option to below arch-specific setup
Linux 2.6.29
Luis R. Rodriguez (2):
ath9k: implement IO serialization
ath9k: AR9280 PCI devices must serialize IO as well
Maciej Sosnowski (1):
dca: add missing copyright/license headers
Manu Abraham (1):
V4L/DVB (10975): Bug: Use signed types, Offsets and range can be negative
Mark Brown (5):
[ARM] S3C64XX: Fix section mismatch for s3c64xx_register_clocks()
[ARM] SMDK6410: Correct I2C device name for WM8580
[ARM] SMDK6410: Declare iodesc table static
[ARM] S3C64XX: Staticise s3c64xx_init_irq_eint()
[ARM] S3C64XX: Do gpiolib configuration earlier
Mark Lord (1):
sata_mv: fix MSI irq race condition
Martin Schwidefsky (3):
[S390] __div64_31 broken for CONFIG_MARCH_G5
[S390] make page table walking more robust
[S390] make page table upgrade work again
Masami Hiramatsu (2):
prevent boosting kprobes on exception address
module: fix refptr allocation and release order
Mathieu Chouquet-Stringer (1):
thinkpad-acpi: fix module autoloading for older models
Matthew Wilcox (1):
[SCSI] sd: Don't try to spin up drives that are connected to an inactive port
Matthias Schwarzzot (1):
V4L/DVB (10978): Report tuning algorith correctly
Mauro Carvalho Chehab (1):
V4L/DVB (10834): zoran: auto-select bt866 for AverMedia 6 Eyes
Michael Chan (1):
bnx2: Fix problem of using wrong IRQ handler.
Michael Hennerich (1):
USB: serial: ftdi: enable UART detection on gnICE JTAG adaptors blacklist interface0
Mike Travis (1):
parisc: update parisc for new irq_desc
Miklos Szeredi (1):
fix ptrace slowness
Mikulas Patocka (3):
dm table: rework reference counting fix
dm io: respect BIO_MAX_PAGES limit
sparc64: Fix crash with /proc/iomem
Milan Broz (2):
dm ioctl: validate name length when renaming
dm crypt: wait for endio to complete before destruction
Moritz Muehlenhoff (1):
USB: Updated unusual-devs entry for USB mass storage on Nokia 6233
Nobuhiro Iwamatsu (2):
sh_eth: Change handling of IRQ
sh_eth: Fix mistake of the address of SH7763
Pablo Neira Ayuso (2):
netfilter: conntrack: don't deliver events for racy packets
netfilter: ctnetlink: fix crash during expectation creation
Pantelis Koukousoulas (1):
virtio_net: Make virtio_net support carrier detection
Piotr Ziecik (1):
powerpc/5200: Enable CPU_FTR_NEED_COHERENT for MPC52xx
Ralf Baechle (1):
MIPS: Mark Eins: Fix configuration.
Robert Love (11):
[SCSI] libfc: Don't violate transport template for rogue port creation
[SCSI] libfc: correct RPORT_TO_PRIV usage
[SCSI] libfc: rename rp to rdata in fc_disc_new_target()
[SCSI] libfc: check for err when recv and state is incorrect
[SCSI] libfc: Cleanup libfc_function_template comments
[SCSI] libfc, fcoe: Fix kerneldoc comments
[SCSI] libfc, fcoe: Cleanup function formatting and minor typos
[SCSI] libfc, fcoe: Remove unnecessary cast by removing inline wrapper
[SCSI] fcoe: Use setup_timer() and mod_timer()
[SCSI] fcoe: Correct fcoe_transports initialization vs. registration
[SCSI] fcoe: Change fcoe receive thread nice value from 19 (lowest priority) to -20
Robert M. Kenney (1):
USB: serial: new cp2101 device id
Roel Kluin (3):
[SCSI] fcoe: fix kfree(skb)
acpi-wmi: unsigned cannot be less than 0
net: kfree(napi->skb) => kfree_skb
Ron Mercer (4):
qlge: bugfix: Increase filter on inbound csum.
qlge: bugfix: Tell hw to strip vlan header.
qlge: bugfix: Move netif_napi_del() to common call point.
qlge: bugfix: Pad outbound frames smaller than 60 bytes.
Russell King (2):
[ARM] update mach-types
[ARM] Fix virtual to physical translation macro corner cases
Rusty Russell (1):
linux.conf.au 2009: Tuz
Saeed Bishara (1):
[ARM] orion5x: pass dram mbus data to xor driver
Sam Ravnborg (1):
kconfig: fix randconfig for choice blocks
Sathya Perla (3):
net: Add be2net driver.
be2net: replenish when posting to rx-queue is starved in out of mem conditions
be2net: fix to restore vlan ids into BE2 during a IF DOWN->UP cycle
Scott James Remnant (1):
sbus: Auto-load openprom module when device opened.
Sigmund Augdal (1):
V4L/DVB (10974): Use Diseqc 3/3 mode to send data
Stanislaw Gruszka (1):
net: Document /proc/sys/net/core/netdev_budget
Stephen Hemminger (1):
sungem: missing net_device_ops
Stephen Rothwell (1):
net: update dnet.c for bus_id removal
Steve Glendinning (1):
smsc911x: reset last known duplex and carrier on open
Steve Ma (1):
[SCSI] libfc: exch mgr is freed while lport still retrying sequences
Stuart MENEFY (1):
libata: Keep shadow last_ctl up to date during resets
Suresh Jayaraman (1):
NFS: Handle -ESTALE error in access()
Takashi Iwai (3):
ALSA: hda - Fix DMA mask for ATI controllers
ALSA: hda - Workaround for buggy DMA position on ATI controllers
ALSA: Fix vunmap and free order in snd_free_sgbuf_pages()
Tao Ma (2):
ocfs2: Fix a bug found by sparse check.
ocfs2: Use xs->bucket to set xattr value outside
Tejun Heo (1):
ata_piix: add workaround for Samsung DB-P70
Theodore Ts'o (1):
ext4: Print the find_group_flex() warning only once
Thomas Bartosik (1):
USB: storage: Unusual USB device Prolific 2507 variation added
Tiger Yang (2):
ocfs2: reserve xattr block for new directory with inline data
ocfs2: tweak to get the maximum inline data size with xattr
Tilman Schmidt (1):
bas_gigaset: correctly allocate USB interrupt transfer buffer
Trond Myklebust (6):
SUNRPC: Tighten up the task locking rules in __rpc_execute()
NFS: Fix misparsing of nfsv4 fs_locations attribute (take 2)
NFSv3: Fix posix ACL code
SUNRPC: Fix an Oops due to socket not set up yet...
SUNRPC: xprt_connect() don't abort the task if the transport isn't bound
NFS: Fix the fix to Bugzilla #11061, when IPv6 isn't defined...
Tyler Hicks (3):
eCryptfs: don't encrypt file key with filename key
eCryptfs: Allocate a variable number of pages for file headers
eCryptfs: NULL crypt_stat dereference during lookup
Uwe Kleine-König (2):
[ARM] 5418/1: restore lr before leaving mcount
[ARM] 5421/1: ftrace: fix crash due to tracing of __naked functions
Vasu Dev (5):
[SCSI] libfc: handle RRQ exch timeout
[SCSI] libfc: fixed a soft lockup issue in fc_exch_recv_abts
[SCSI] libfc, fcoe: fixed locking issues with lport->lp_mutex around lport->link_status
[SCSI] libfc: fixed a read IO data integrity issue when a IO data frame lost
[SCSI] fcoe: Out of order tx frames was causing several check condition SCSI status
Viral Mehta (1):
ALSA: oss-mixer - Fixes recording gain control
Vitaly Wool (1):
V4L/DVB (10832): tvaudio: Avoid breakage with tda9874a
Werner Almesberger (1):
[ARM] S3C64XX: Fix s3c64xx_setrate_clksrc
Yi Zou (2):
[SCSI] libfc: do not change the fh_rx_id of a recevied frame
[SCSI] fcoe: ETH_P_8021Q is already in if_ether and fcoe is not using it anyway
Zhang Le (2):
MIPS: Fix TIF_32BIT undefined problem when seccomp is disabled
filp->f_pos not correctly updated in proc_task_readdir
Zhang Rui (1):
ACPI suspend: Blacklist Toshiba Satellite L300 that requires to set SCI_EN directly on resume
françois romieu (2):
r8169: use hardware auto-padding.
r8169: revert "r8169: read MAC address from EEPROM on init (2nd attempt)"
un'ichi Nomura (1):
block: Add gfp_mask parameter to bio_integrity_clone()
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-23 23:29 Linus Torvalds
@ 2009-03-24 6:19 ` Jesper Krogh
2009-03-24 6:46 ` David Rees
2009-04-02 14:00 ` Mathieu Desnoyers
2009-03-27 13:35 ` Hans-Peter Jansen
1 sibling, 2 replies; 419+ messages in thread
From: Jesper Krogh @ 2009-03-24 6:19 UTC (permalink / raw)
To: Linus Torvalds, Linux Kernel Mailing List
Linus Torvalds wrote:
> This obviously starts the merge window for 2.6.30, although as usual, I'll
> probably wait a day or two before I start actively merging. I do that in
> order to hopefully result in people testing the final plain 2.6.29 a bit
> more before all the crazy changes start up again.
I know this has been discussed before:
[129401.996244] INFO: task updatedb.mlocat:31092 blocked for more than
480 seconds.
[129402.084667] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[129402.179331] updatedb.mloc D 0000000000000000 0 31092 31091
[129402.179335] ffff8805ffa1d900 0000000000000082 ffff8803ff5688a8
0000000000001000
[129402.179338] ffffffff806cc000 ffffffff806cc000 ffffffff806d3e80
ffffffff806d3e80
[129402.179341] ffffffff806cfe40 ffffffff806d3e80 ffff8801fb9f87e0
000000000000ffff
[129402.179343] Call Trace:
[129402.179353] [<ffffffff802d3ff0>] sync_buffer+0x0/0x50
[129402.179358] [<ffffffff80493a50>] io_schedule+0x20/0x30
[129402.179360] [<ffffffff802d402b>] sync_buffer+0x3b/0x50
[129402.179362] [<ffffffff80493d2f>] __wait_on_bit+0x4f/0x80
[129402.179364] [<ffffffff802d3ff0>] sync_buffer+0x0/0x50
[129402.179366] [<ffffffff80493dda>] out_of_line_wait_on_bit+0x7a/0xa0
[129402.179369] [<ffffffff80252730>] wake_bit_function+0x0/0x30
[129402.179396] [<ffffffffa0264346>] ext3_find_entry+0xf6/0x610 [ext3]
[129402.179399] [<ffffffff802d3453>] __find_get_block+0x83/0x170
[129402.179403] [<ffffffff802c4a90>] ifind_fast+0x50/0xa0
[129402.179405] [<ffffffff802c5874>] iget_locked+0x44/0x180
[129402.179412] [<ffffffffa0266435>] ext3_lookup+0x55/0x100 [ext3]
[129402.179415] [<ffffffff802c32a7>] d_alloc+0x127/0x1c0
[129402.179417] [<ffffffff802ba2a7>] do_lookup+0x1b7/0x250
[129402.179419] [<ffffffff802bc51d>] __link_path_walk+0x76d/0xd60
[129402.179421] [<ffffffff802ba17f>] do_lookup+0x8f/0x250
[129402.179424] [<ffffffff802c8b37>] mntput_no_expire+0x27/0x150
[129402.179426] [<ffffffff802bcb64>] path_walk+0x54/0xb0
[129402.179428] [<ffffffff802bfd10>] filldir+0x0/0xf0
[129402.179430] [<ffffffff802bcc8a>] do_path_lookup+0x7a/0x150
[129402.179432] [<ffffffff802bbb55>] getname+0xe5/0x1f0
[129402.179434] [<ffffffff802bd8d4>] user_path_at+0x44/0x80
[129402.179437] [<ffffffff802b53b5>] cp_new_stat+0xe5/0x100
[129402.179440] [<ffffffff802b56d0>] vfs_lstat_fd+0x20/0x60
[129402.179442] [<ffffffff802b5737>] sys_newlstat+0x27/0x50
[129402.179445] [<ffffffff8020c35b>] system_call_fastpath+0x16/0x1b
Consensus seems to be something with large memory machines, lots of
dirty pages and a long writeout time due to ext3.
At the moment this the largest "usabillity" issue in the serversetup I'm
working with. Can there be done something to "autotune" it .. or perhaps
even fix it? .. or is it just to shift to xfs or wait for ext4?
Jesper
--
Jesper
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 6:19 ` Jesper Krogh
@ 2009-03-24 6:46 ` David Rees
2009-03-24 7:32 ` Jesper Krogh
2009-03-24 9:15 ` Alan Cox
2009-04-02 14:00 ` Mathieu Desnoyers
1 sibling, 2 replies; 419+ messages in thread
From: David Rees @ 2009-03-24 6:46 UTC (permalink / raw)
To: Jesper Krogh; +Cc: Linus Torvalds, Linux Kernel Mailing List
On Mon, Mar 23, 2009 at 11:19 PM, Jesper Krogh <jesper@krogh.cc> wrote:
> I know this has been discussed before:
>
> [129401.996244] INFO: task updatedb.mlocat:31092 blocked for more than 480
> seconds.
Ouch - 480 seconds, how much memory is in that machine, and how slow
are the disks? What's your vm.dirty_background_ratio and
vm.dirty_ratio set to?
> Consensus seems to be something with large memory machines, lots of dirty
> pages and a long writeout time due to ext3.
All filesystems seem to suffer from this issue to some degree. I
posted to the list earlier trying to see if there was anything that
could be done to help my specific case. I've got a system where if
someone starts writing out a large file, it kills client NFS writes.
Makes the system unusable:
http://marc.info/?l=linux-kernel&m=123732127919368&w=2
Only workaround I've found is to reduce dirty_background_ratio and
dirty_ratio to tiny levels. Or throw good SSDs and/or a fast RAID
array at it so that large writes complete faster. Have you tried the
new vm_dirty_bytes in 2.6.29?
> At the moment this the largest "usabillity" issue in the serversetup I'm
> working with. Can there be done something to "autotune" it .. or perhaps
> even fix it? .. or is it just to shift to xfs or wait for ext4?
Everyone seems to agree that "autotuning" it is the way to go. But no
one seems willing to step up and try to do it. Probably because it's
hard to get right!
-Dave
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 6:46 ` David Rees
@ 2009-03-24 7:32 ` Jesper Krogh
2009-03-24 8:16 ` Ingo Molnar
2009-03-24 19:00 ` David Rees
2009-03-24 9:15 ` Alan Cox
1 sibling, 2 replies; 419+ messages in thread
From: Jesper Krogh @ 2009-03-24 7:32 UTC (permalink / raw)
To: David Rees; +Cc: Linus Torvalds, Linux Kernel Mailing List
David Rees wrote:
> On Mon, Mar 23, 2009 at 11:19 PM, Jesper Krogh <jesper@krogh.cc> wrote:
>> I know this has been discussed before:
>>
>> [129401.996244] INFO: task updatedb.mlocat:31092 blocked for more than 480
>> seconds.
>
> Ouch - 480 seconds, how much memory is in that machine, and how slow
> are the disks?
The 480 secondes is not the "wait time" but the time gone before the
message is printed. It the kernel-default it was earlier 120 seconds but
thats changed by Ingo Molnar back in september. I do get a lot of less
noise but it really doesn't tell anything about the nature of the problem.
The systes spec:
32GB of memory. The disks are a Nexsan SataBeast with 42 SATA drives in
Raid10 connected using 4Gbit fibre-channel. I'll let it up to you to
decide if thats fast or slow?
The strange thing is actually that the above process (updatedb.mlocate)
is writing to / which is a device without any activity at all. All
activity is on the Fibre Channel device above, but process writing
outsid that seems to be effected as well.
> What's your vm.dirty_background_ratio and
> vm.dirty_ratio set to?
2.6.29-rc8 defaults:
jk@hest:/proc/sys/vm$ cat dirty_background_ratio
5
jk@hest:/proc/sys/vm$ cat dirty_ratio
10
>> Consensus seems to be something with large memory machines, lots of dirty
>> pages and a long writeout time due to ext3.
>
> All filesystems seem to suffer from this issue to some degree. I
> posted to the list earlier trying to see if there was anything that
> could be done to help my specific case. I've got a system where if
> someone starts writing out a large file, it kills client NFS writes.
> Makes the system unusable:
> http://marc.info/?l=linux-kernel&m=123732127919368&w=2
Yes, I've hit 120s+ penalties just by saving a file in vim.
> Only workaround I've found is to reduce dirty_background_ratio and
> dirty_ratio to tiny levels. Or throw good SSDs and/or a fast RAID
> array at it so that large writes complete faster. Have you tried the
> new vm_dirty_bytes in 2.6.29?
No.. What would you suggest to be a reasonable setting for that?
> Everyone seems to agree that "autotuning" it is the way to go. But no
> one seems willing to step up and try to do it. Probably because it's
> hard to get right!
I can test patches.. but I'm not a kernel-developer.. unfortunately.
Jesper
--
Jesper
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 7:32 ` Jesper Krogh
@ 2009-03-24 8:16 ` Ingo Molnar
2009-03-24 11:10 ` Jesper Krogh
2009-03-24 19:00 ` David Rees
1 sibling, 1 reply; 419+ messages in thread
From: Ingo Molnar @ 2009-03-24 8:16 UTC (permalink / raw)
To: Jesper Krogh; +Cc: David Rees, Linus Torvalds, Linux Kernel Mailing List
* Jesper Krogh <jesper@krogh.cc> wrote:
> David Rees wrote:
>> On Mon, Mar 23, 2009 at 11:19 PM, Jesper Krogh <jesper@krogh.cc> wrote:
>>> I know this has been discussed before:
>>>
>>> [129401.996244] INFO: task updatedb.mlocat:31092 blocked for more than 480
>>> seconds.
>>
>> Ouch - 480 seconds, how much memory is in that machine, and how slow
>> are the disks?
>
> The 480 secondes is not the "wait time" but the time gone before
> the message is printed. It the kernel-default it was earlier 120
> seconds but thats changed by Ingo Molnar back in september. I do
> get a lot of less noise but it really doesn't tell anything about
> the nature of the problem.
That's true - the detector is really simple and only tries to flag
suspiciously long uninterruptible waits. It prints out the context
it finds but otherwise does not try to go deep about exactly why
that delay happened.
Would you agree that the message is correct, and that there is some
sort of "tasks wait way too long" problem on your system?
Considering:
> The systes spec:
> 32GB of memory. The disks are a Nexsan SataBeast with 42 SATA drives in
> Raid10 connected using 4Gbit fibre-channel. I'll let it up to you to
> decide if thats fast or slow?
[...]
> Yes, I've hit 120s+ penalties just by saving a file in vim.
i think it's fair to say that an almost 10 minutes uninterruptible
sleep sucks to the user, by any reasonable standard. It is the year
2009, not 1959.
The delay might be difficult to fix, but it's still reality - and
that's the purpose of this particular debug helper: to rub reality
under our noses, whether we like it or not.
( _My_ personal pain threshold for waiting for the computer is
around 1 _second_. If any command does something that i cannot
Ctrl-C or Ctrl-Z my way out of i get annoyed. So the historic
limit for the hung tasks check was 10 seconds, then 60 seconds.
But people argued that it's too low so it was raised to 120 then
480 seconds. If almost 10 minutes of uninterruptible wait is still
acceptable then the watchdog can be turned off (because it's
basically pointless to run it in that case - no amount of delay
will be 'bad'). )
Ingo
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 6:46 ` David Rees
2009-03-24 7:32 ` Jesper Krogh
@ 2009-03-24 9:15 ` Alan Cox
2009-03-24 9:32 ` Ingo Molnar
1 sibling, 1 reply; 419+ messages in thread
From: Alan Cox @ 2009-03-24 9:15 UTC (permalink / raw)
To: David Rees; +Cc: Jesper Krogh, Linus Torvalds, Linux Kernel Mailing List
> posted to the list earlier trying to see if there was anything that
> could be done to help my specific case. I've got a system where if
> someone starts writing out a large file, it kills client NFS writes.
> Makes the system unusable:
> http://marc.info/?l=linux-kernel&m=123732127919368&w=2
I have not had this problem since I applied Arjan's (for some reason
repeatedly rejected) patch to change the ioprio of the various writeback
daemons. Under some loads changing to the noop I/O scheduler also seems
to help (as do most of the non default ones)
> Everyone seems to agree that "autotuning" it is the way to go. But no
> one seems willing to step up and try to do it. Probably because it's
> hard to get right!
If this is a VM problem why does fixing the I/O priority of the various
daemons seem to cure at least some of it ?
Alan
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 9:15 ` Alan Cox
@ 2009-03-24 9:32 ` Ingo Molnar
2009-03-24 10:10 ` Alan Cox
0 siblings, 1 reply; 419+ messages in thread
From: Ingo Molnar @ 2009-03-24 9:32 UTC (permalink / raw)
To: Alan Cox
Cc: David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
* Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:
> > posted to the list earlier trying to see if there was anything that
> > could be done to help my specific case. I've got a system where if
> > someone starts writing out a large file, it kills client NFS writes.
> > Makes the system unusable:
> > http://marc.info/?l=linux-kernel&m=123732127919368&w=2
>
> I have not had this problem since I applied Arjan's (for some reason
> repeatedly rejected) patch to change the ioprio of the various writeback
> daemons. Under some loads changing to the noop I/O scheduler also seems
> to help (as do most of the non default ones)
(link would be useful)
Ingo
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 9:32 ` Ingo Molnar
@ 2009-03-24 10:10 ` Alan Cox
2009-03-24 10:31 ` Ingo Molnar
2009-03-24 12:27 ` Andi Kleen
0 siblings, 2 replies; 419+ messages in thread
From: Alan Cox @ 2009-03-24 10:10 UTC (permalink / raw)
To: Ingo Molnar
Cc: David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
> > I have not had this problem since I applied Arjan's (for some reason
> > repeatedly rejected) patch to change the ioprio of the various writeback
> > daemons. Under some loads changing to the noop I/O scheduler also seems
> > to help (as do most of the non default ones)
>
> (link would be useful)
"Give kjournald a IOPRIO_CLASS_RT io priority"
October 2007 (yes its that old)
And do the same as per discussion to the writeback tasks.
Which isn't to say there are not also vm problems - look at the I/O
patterns with any kernel after about 2.6.18/19 and there seems to be a
serious problem with writeback from the mm and fs writes falling over
each other and turning the smooth writeout into thrashing back and forth
as both try to write out different bits of the same stuff.
<Rant>
Really someone needs to sit down and actually build a proper model of the
VM behaviour in a tool like netlogo rather than continually keep adding
ever more complex and thus unpredictable hacks to it. That way we might
better understand what is occurring and why.
</Rant>
Alan
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 10:10 ` Alan Cox
@ 2009-03-24 10:31 ` Ingo Molnar
2009-03-24 11:12 ` Andrew Morton
2009-03-24 13:20 ` Theodore Tso
2009-03-24 12:27 ` Andi Kleen
1 sibling, 2 replies; 419+ messages in thread
From: Ingo Molnar @ 2009-03-24 10:31 UTC (permalink / raw)
To: Alan Cox, Arjan van de Ven, Andrew Morton, Peter Zijlstra,
Nick Piggin, Theodore Tso, Jens Axboe
Cc: David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
* Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:
> > > I have not had this problem since I applied Arjan's (for some reason
> > > repeatedly rejected) patch to change the ioprio of the various writeback
> > > daemons. Under some loads changing to the noop I/O scheduler also seems
> > > to help (as do most of the non default ones)
> >
> > (link would be useful)
>
>
> "Give kjournald a IOPRIO_CLASS_RT io priority"
>
> October 2007 (yes its that old)
thx. A more recent submission from Arjan would be:
http://lkml.org/lkml/2008/10/1/405
Resolution was that Tytso indicated it went into some sort of ext4
patch queue:
| I've ported the patch to the ext4 filesystem, and dropped it into
| the unstable portion of the ext4 patch queue.
|
| ext4: akpm's locking hack to fix locking delays
but 6 months down the line and i can find no trace of this upstream
anywhere.
<let-me-rant-too>
The thing is ... this is a _bad_ ext3 design bug affecting ext3
users in the last decade or so of ext3 existence. Why is this issue
not handled with the utmost high priority and why wasnt it fixed 5
years ago already? :-)
It does not matter whether we have extents or htrees when there are
_trivially reproducible_ basic usability problems with ext3.
Ingo
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 8:16 ` Ingo Molnar
@ 2009-03-24 11:10 ` Jesper Krogh
0 siblings, 0 replies; 419+ messages in thread
From: Jesper Krogh @ 2009-03-24 11:10 UTC (permalink / raw)
To: Ingo Molnar; +Cc: David Rees, Linus Torvalds, Linux Kernel Mailing List
Ingo Molnar wrote:
> * Jesper Krogh <jesper@krogh.cc> wrote:
>
>> David Rees wrote:
>>> On Mon, Mar 23, 2009 at 11:19 PM, Jesper Krogh <jesper@krogh.cc> wrote:
>>>> I know this has been discussed before:
>>>>
>>>> [129401.996244] INFO: task updatedb.mlocat:31092 blocked for more than 480
>>>> seconds.
>>> Ouch - 480 seconds, how much memory is in that machine, and how slow
>>> are the disks?
>> The 480 secondes is not the "wait time" but the time gone before
>> the message is printed. It the kernel-default it was earlier 120
>> seconds but thats changed by Ingo Molnar back in september. I do
>> get a lot of less noise but it really doesn't tell anything about
>> the nature of the problem.
>
> That's true - the detector is really simple and only tries to flag
> suspiciously long uninterruptible waits. It prints out the context
> it finds but otherwise does not try to go deep about exactly why
> that delay happened.
>
> Would you agree that the message is correct, and that there is some
> sort of "tasks wait way too long" problem on your system?
The message is absolutely correct (it was even at 120s).. thats too long
for what I consider good.
> Considering:
>
>> The systes spec:
>> 32GB of memory. The disks are a Nexsan SataBeast with 42 SATA drives in
>> Raid10 connected using 4Gbit fibre-channel. I'll let it up to you to
>> decide if thats fast or slow?
> [...]
>> Yes, I've hit 120s+ penalties just by saving a file in vim.
>
> i think it's fair to say that an almost 10 minutes uninterruptible
> sleep sucks to the user, by any reasonable standard. It is the year
> 2009, not 1959.
>
> The delay might be difficult to fix, but it's still reality - and
> that's the purpose of this particular debug helper: to rub reality
> under our noses, whether we like it or not.
>
> ( _My_ personal pain threshold for waiting for the computer is
> around 1 _second_. If any command does something that i cannot
> Ctrl-C or Ctrl-Z my way out of i get annoyed. So the historic
> limit for the hung tasks check was 10 seconds, then 60 seconds.
> But people argued that it's too low so it was raised to 120 then
> 480 seconds. If almost 10 minutes of uninterruptible wait is still
> acceptable then the watchdog can be turned off (because it's
> basically pointless to run it in that case - no amount of delay
> will be 'bad'). )
Thats about the same definitions for me. But I can accept that if I
happen to be doing something really crazy.. but this is merely about
reading some files in and generating indexes out of them. None of the
file are "huge".. < 15GB for the top 3, average < 100MB.
--
Jesper
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 10:31 ` Ingo Molnar
@ 2009-03-24 11:12 ` Andrew Morton
2009-03-24 12:23 ` Alan Cox
` (2 more replies)
2009-03-24 13:20 ` Theodore Tso
1 sibling, 3 replies; 419+ messages in thread
From: Andrew Morton @ 2009-03-24 11:12 UTC (permalink / raw)
To: Ingo Molnar
Cc: Alan Cox, Arjan van de Ven, Peter Zijlstra, Nick Piggin,
Theodore Tso, Jens Axboe, David Rees, Jesper Krogh,
Linus Torvalds, Linux Kernel Mailing List
On Tue, 24 Mar 2009 11:31:11 +0100 Ingo Molnar <mingo@elte.hu> wrote:
>
> * Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:
>
> > > > I have not had this problem since I applied Arjan's (for some reason
> > > > repeatedly rejected) patch to change the ioprio of the various writeback
> > > > daemons. Under some loads changing to the noop I/O scheduler also seems
> > > > to help (as do most of the non default ones)
> > >
> > > (link would be useful)
> >
> >
> > "Give kjournald a IOPRIO_CLASS_RT io priority"
> >
> > October 2007 (yes its that old)
>
> thx. A more recent submission from Arjan would be:
>
> http://lkml.org/lkml/2008/10/1/405
>
> Resolution was that Tytso indicated it went into some sort of ext4
> patch queue:
>
> | I've ported the patch to the ext4 filesystem, and dropped it into
> | the unstable portion of the ext4 patch queue.
> |
> | ext4: akpm's locking hack to fix locking delays
>
> but 6 months down the line and i can find no trace of this upstream
> anywhere.
>
> <let-me-rant-too>
>
> The thing is ... this is a _bad_ ext3 design bug affecting ext3
> users in the last decade or so of ext3 existence. Why is this issue
> not handled with the utmost high priority and why wasnt it fixed 5
> years ago already? :-)
>
> It does not matter whether we have extents or htrees when there are
> _trivially reproducible_ basic usability problems with ext3.
>
It's all there in that Oct 2008 thread.
The proposed tweak to kjournald is a bad fix - partly because it will
elevate the priority of vast amounts of IO whose priority we don't _want_
elevated.
But mainly because the problem lies elsewhere - in an area of contention
between the committing and running transactions which we knowingly and
reluctantly added to fix a bug in
commit 773fc4c63442fbd8237b4805627f6906143204a8
Author: akpm <akpm>
AuthorDate: Sun May 19 23:23:01 2002 +0000
Commit: akpm <akpm>
CommitDate: Sun May 19 23:23:01 2002 +0000
[PATCH] fix ext3 buffer-stealing
Patch from sct fixes a long-standing (I did it!) and rather complex
problem with ext3.
The problem is to do with buffers which are continually being dirtied
by an external agent. I had code in there (for easily-triggerable
livelock avoidance) which steals the buffer from checkpoint mode and
reattaches it to the running transaction. This violates ext3 ordering
requirements - it can permit journal space to be reclaimed before the
relevant data has really been written out.
Also, we do have to reliably get a lock on the buffer when moving it
between lists and inspecting its internal state. Otherwise a competing
read from the underlying block device can trigger an assertion failure,
and a competing write to the underlying block device can confuse ext3
journalling state completely.
Now this:
> Resolution was that Tytso indicated it went into some sort of ext4
> patch queue:
was not a fix at all. It was a known-buggy hack which I proposed simply to
remove that contention point to let us find out if we're on the right
track. IIRC Ric was going to ask someone to do some performance testing of
that hack, but we never heard back.
The bottom line is that someone needs to do some serious rooting through
the very heart of JBD transaction logic and nobody has yet put their hand
up. If we do that, and it turns out to be just too hard to fix then yes,
perhaps that's the time to start looking at palliative bandaids.
The number of people who can be looked at to do serious ext3/JBD work is
pretty small now. Ted, Stephen and I got old and died. Jan does good work
but is spread thinly.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 11:12 ` Andrew Morton
@ 2009-03-24 12:23 ` Alan Cox
2009-03-24 13:37 ` Theodore Tso
2009-03-25 12:37 ` Jan Kara
2 siblings, 0 replies; 419+ messages in thread
From: Alan Cox @ 2009-03-24 12:23 UTC (permalink / raw)
To: Andrew Morton
Cc: Ingo Molnar, Arjan van de Ven, Peter Zijlstra, Nick Piggin,
Theodore Tso, Jens Axboe, David Rees, Jesper Krogh,
Linus Torvalds, Linux Kernel Mailing List
> The proposed tweak to kjournald is a bad fix - partly because it will
> elevate the priority of vast amounts of IO whose priority we don't _want_
> elevated.
Its a huge improvement in practice because it both fixes the stupid
stalls and smooths out the rest of the I/O traffic. I spend a lot of my
time looking at what the disk driver is getting fed and its not a good
mix. Even more revealing is the noop scheduler and the fact this
frequently outperforms all the fancy I/O scheduling we do even on
relatively dumb hardware (as well as showing how mixed up our I/O
patterns currently are).
> But mainly because the problem lies elsewhere - in an area of contention
> between the committing and running transactions which we knowingly and
> reluctantly added to fix a bug in
The problem emerges about 2007 not 2002, so its not that simple.
> The number of people who can be looked at to do serious ext3/JBD work is
> pretty small now. Ted, Stephen and I got old and died. Jan does good work
> but is spread thinly.
Which is all the more reason to use a temporary fix in the meantime so
the OS is usable. I think its pretty poor that for over a year those in
the know who need a good performing system are having to apply out of
tree trivial patches rejected on the basis that "eventually like maybe
whenever perhaps we'll possibly some day you know consider fixing this,
but don't hold your breath"
There is a second reason to do this: If ext4 is the future then it is far
better to fix this stuff in ext4 properly and leave ext3 clear of
extremely invasive high risk fixes when a quick bandaid will do just fine
for the remaining lifetime of fs/jbd
Also not kjournald is only one of the afflicted threads - the same is
true of the crypto, and of the vm writeback. Also note the other point
about the disk scheduler defaults being terrible for some streaming I/O
patterns and the patch for that is also stuck in bugzilla.
If picking "no-op" speeds up my generic x86 box with random onboard SATA
we are doing something very non-optimal
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 10:10 ` Alan Cox
2009-03-24 10:31 ` Ingo Molnar
@ 2009-03-24 12:27 ` Andi Kleen
1 sibling, 0 replies; 419+ messages in thread
From: Andi Kleen @ 2009-03-24 12:27 UTC (permalink / raw)
To: Alan Cox
Cc: Ingo Molnar, David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
Alan Cox <alan@lxorguk.ukuu.org.uk> writes:
>> > I have not had this problem since I applied Arjan's (for some reason
>> > repeatedly rejected) patch to change the ioprio of the various writeback
>> > daemons. Under some loads changing to the noop I/O scheduler also seems
>> > to help (as do most of the non default ones)
>>
>> (link would be useful)
>
>
> "Give kjournald a IOPRIO_CLASS_RT io priority"
>
> October 2007 (yes its that old)
One issue discussed back then (also for a similar XFS patch) was
that having the kernel use the RT priorities by default makes
them useless as user override.
The proposal was to have a new priority level between normal and RT
for this, but noone implemented this.
-Andi
--
ak@linux.intel.com -- Speaking for myself only.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 10:31 ` Ingo Molnar
2009-03-24 11:12 ` Andrew Morton
@ 2009-03-24 13:20 ` Theodore Tso
2009-03-24 13:30 ` Ingo Molnar
` (4 more replies)
1 sibling, 5 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-24 13:20 UTC (permalink / raw)
To: Ingo Molnar
Cc: Alan Cox, Arjan van de Ven, Andrew Morton, Peter Zijlstra,
Nick Piggin, Jens Axboe, David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
On Tue, Mar 24, 2009 at 11:31:11AM +0100, Ingo Molnar wrote:
> >
> > "Give kjournald a IOPRIO_CLASS_RT io priority"
> >
> > October 2007 (yes its that old)
>
> thx. A more recent submission from Arjan would be:
>
> http://lkml.org/lkml/2008/10/1/405
>
> Resolution was that Tytso indicated it went into some sort of ext4
> patch queue:
>
> | I've ported the patch to the ext4 filesystem, and dropped it into
> | the unstable portion of the ext4 patch queue.
> |
> | ext4: akpm's locking hack to fix locking delays
>
> but 6 months down the line and i can find no trace of this upstream
> anywhere.
Andrew really didn't like Arjan's patch because it forces
non-synchronous writes to have a real-time I/O priority. He suggested
an alternative approach which I coded up as "akpm's locking hack to
fix locking delays"; unfortunately, it doesn't work.
In ext4, I quietly put in a mount option, journal_ioprio, and set the
default to be slightly higher than the default I/O priority (but no a
real-time class priority) to prevent the write starvation problem.
This definitely helps for some workloads (when some task is reading
enough to starve out the rights).
More recently (as in this past weekend), I went back to the ext3
problem, and found a better solution, here:
http://lkml.org/lkml/2009/3/21/304
http://lkml.org/lkml/2009/3/21/302
http://lkml.org/lkml/2009/3/21/303
These patches cause the synchronous writes caused by an fsync() to be
submitted using WRITE_SYNC, instead of WRITE, which definitely helps
in the case where there is a heavy read workload in the background.
They don't solve the problem where there is a *huge* amount of writes
going on, though --- if something is dirtying pages at a rate far
greater than the local disk can write it out, say, either "dd
if=/dev/zero of=/mnt/make-lots-of-writes" or a massive distcc cluster
driving a huge amount of data towards a single system or a wget over a
local 100 megabit ethernet from a massive NFS server where everything
is in cache, then you can have a major delay with the fsync().
However, what I've found, though, is that if you're just doing a local
copy from one hard drive to another, or downloading a huge iso file
from an ftp server over a wide area network, the fsync() delays really
don't get *that* bad, even with ext3. At least, I haven't found a
workload that doesn't involve either dd if=/dev/zero or a massive
amount of data coming in over the network that will cause fsync()
delays in the > 1-2 second category. Ext3 has been around for a long
time, and it's only been the last couple of years that people have
really complained about this; my theory is that it was the rise of >
10 megabit ethernets and the use of systems like distcc that really
made this problem really become visible. The only realistic workload
I've found that triggers this requires a fast network dumping data to
a local filesystem.
(I'm sure someone will be ingeniuous enough to find something else
though, and if they're interested, I've attached an fsync latency
tester to this note. If you find something; let me know, I'd be
interested.)
> <let-me-rant-too>
>
> The thing is ... this is a _bad_ ext3 design bug affecting ext3
> users in the last decade or so of ext3 existence. Why is this issue
> not handled with the utmost high priority and why wasnt it fixed 5
> years ago already? :-)
OK, so there are a couple of solutions to this problem. One is to use
ext4 and delayed allocation. This solves the problem by simply not
allocating the blocks in the first place, so we don't have to force
them out to solve the security problem that data=ordered was trying to
solve. Simply mounting an ext3 filesystem using ext4, without making
any change to the filesystem format, should solve the problem.
Another is to use the mount option data=writeback. The whole reason
for forcing the writes out to disk was simply to prevent a security
problem that occurs if your system crashes before the data blocks get
forced out to disk. This could expose previously written data, which
could belong to another user, and might be his e-mail or p0rn.
Historically, this was always a problem with the BSD Fast Filesystem;
it sync'ed out data every 30 seconds, and metadata every 5 seconds.
(This is where the default ext3 commit interval of 5 seconds, and the
default /proc/sys/vm/dirty_expire_centiseconds came from.) After a
system crash, it was possible for files written just before the crash
to point to blocks that had not yet been written, and which contain
some other users' data files. This was the reason for Stephen Tweedie
implementing the data=ordered mode, and making it the default.
However, these days, nearly all Linux boxes are single user machines,
so the security concern is much less of a problem. So maybe the best
solution for now is to make data=writeback the default. This solves
the problem too. The only problem with this is that there are a lot
of sloppy application writers out there, and they've gotten lazy about
using fsync() where it's necessary; combine that with Ubuntu shipping
massively unstable video drivers that crash if you breath on the
system wrong (or exit World of Goo :-), and you've got the problem
which was recently slashdotted, and which I wrote about here:
http://thunk.org/tytso/blog/2009/03/12/delayed-allocation-and-the-zero-length-file-problem/
> It does not matter whether we have extents or htrees when there are
> _trivially reproducible_ basic usability problems with ext3.
Try ext4, I think you'll like it. :-)
Failing that, data=writeback for single-user machines is probably your
best bet.
- Ted
/*
* fsync-tester.c
*
* Written by Theodore Ts'o, 3/21/09.
*
* This file may be redistributed under the terms of the GNU Public
* License, version 2.
*/
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <time.h>
#include <fcntl.h>
#include <string.h>
#define SIZE (32768*32)
static float timeval_subtract(struct timeval *tv1, struct timeval *tv2)
{
return ((tv1->tv_sec - tv2->tv_sec) +
((float) (tv1->tv_usec - tv2->tv_usec)) / 1000000);
}
int main(int argc, char **argv)
{
int fd;
struct timeval tv, tv2;
char buf[SIZE];
fd = open("fsync-tester.tst-file", O_WRONLY|O_CREAT);
if (fd < 0) {
perror("open");
exit(1);
}
memset(buf, 'a', SIZE);
while (1) {
pwrite(fd, buf, SIZE, 0);
gettimeofday(&tv, NULL);
fsync(fd);
gettimeofday(&tv2, NULL);
printf("fsync time: %5.4f\n", timeval_subtract(&tv2, &tv));
sleep(1);
}
}
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 13:20 ` Theodore Tso
@ 2009-03-24 13:30 ` Ingo Molnar
2009-03-24 13:51 ` Theodore Tso
2009-03-24 13:52 ` Alan Cox
` (3 subsequent siblings)
4 siblings, 1 reply; 419+ messages in thread
From: Ingo Molnar @ 2009-03-24 13:30 UTC (permalink / raw)
To: Theodore Tso, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linus Torvalds, Linux Kernel Mailing List
* Theodore Tso <tytso@mit.edu> wrote:
> More recently (as in this past weekend), I went back to the ext3
> problem, and found a better solution, here:
>
> http://lkml.org/lkml/2009/3/21/304
> http://lkml.org/lkml/2009/3/21/302
> http://lkml.org/lkml/2009/3/21/303
>
> These patches cause the synchronous writes caused by an fsync() to
> be submitted using WRITE_SYNC, instead of WRITE, which definitely
> helps in the case where there is a heavy read workload in the
> background.
>
> They don't solve the problem where there is a *huge* amount of
> writes going on, though --- if something is dirtying pages at a
> rate far greater than the local disk can write it out, say, either
> "dd if=/dev/zero of=/mnt/make-lots-of-writes" or a massive distcc
> cluster driving a huge amount of data towards a single system or a
> wget over a local 100 megabit ethernet from a massive NFS server
> where everything is in cache, then you can have a major delay with
> the fsync().
Nice, thanks for the update! The situation isnt nearly as bleak as i
feared they are :)
> However, what I've found, though, is that if you're just doing a
> local copy from one hard drive to another, or downloading a huge
> iso file from an ftp server over a wide area network, the fsync()
> delays really don't get *that* bad, even with ext3. At least, I
> haven't found a workload that doesn't involve either dd
> if=/dev/zero or a massive amount of data coming in over the
> network that will cause fsync() delays in the > 1-2 second
> category. Ext3 has been around for a long time, and it's only
> been the last couple of years that people have really complained
> about this; my theory is that it was the rise of > 10 megabit
> ethernets and the use of systems like distcc that really made this
> problem really become visible. The only realistic workload I've
> found that triggers this requires a fast network dumping data to a
> local filesystem.
i think the problem became visible via the rise in memory size,
combined with the non-improvement of the performance of rotational
disks.
The disk speed versus RAM size ratio has become dramatically worse -
and our "5% of RAM" dirty ratio on a 32 GB box is 1.6 GB - which
takes an eternity to write out if you happen to sync on that. When
we had 1 GB of RAM 5% meant 51 MB - one or two seconds to flush out
- and worse than that, chances are that it's spread out widely on
the disk, the whole thing becoming seek-limited as well.
That's where the main difference in perception of this problem comes
from i believe. The problem was always there, but only in the last
1-2 years did 4G/8G systems become really common for people to
notice.
SSDs will save us eventually, but they will take up to a decade to
trickle through for us to forget about this problem altogether.
Ingo
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 11:12 ` Andrew Morton
2009-03-24 12:23 ` Alan Cox
@ 2009-03-24 13:37 ` Theodore Tso
2009-03-25 12:37 ` Jan Kara
2 siblings, 0 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-24 13:37 UTC (permalink / raw)
To: Andrew Morton
Cc: Ingo Molnar, Alan Cox, Arjan van de Ven, Peter Zijlstra,
Nick Piggin, Jens Axboe, David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
On Tue, Mar 24, 2009 at 04:12:49AM -0700, Andrew Morton wrote:
> But mainly because the problem lies elsewhere - in an area of contention
> between the committing and running transactions which we knowingly and
> reluctantly added to fix a bug in "[PATCH] fix ext3 buffer-stealing"
Well, let's be clear here. The contention between committing and
running transaction is an issue, even if we solved this problem, it
wouldn't solve the issue of fsync() taking a long time in ext3's
data=ordered mode in the case of massive write starvation caused by a
read-heavy workload, or a vast number of dirty buffers associated with
an inode which is about to be committed, and a process triggers an
fsync(). So fixing this issue wouldn't have solved the problem which
Ingo complained about (which was an editor calling fsync() leading to
long delay when saving a file during or right after a
distcc-accelerated kernel compile) or the infamous Firefox 3.0 bug.
Fixing this contention *would* fix the problem where a normal process
which is doing normal file I/O could end up getting stalled
unnecessarily, but that's not what most people are complaining about
--- and shortening the amount of time that it takes do a commit
(either with ext4's delayed allocation or ext3's data=writeback mount
option) would also address this problem. That doesn't mean that it's
not worth it to fix this particular contention, but there are multiple
issues going on here.
(Basically we're here:
http://www.kernel.org/pub/linux/kernel/people/paulmck/Confessions/FOSSElephant.html
... in Paul Mckenney's version of parable of the blind men and the elephant:
http://www.kernel.org/pub/linux/kernel/people/paulmck/Confessions/
:-)
> Now this:
>
> > Resolution was that Tytso indicated it went into some sort of ext4
> > patch queue:
>
> was not a fix at all. It was a known-buggy hack which I proposed simply to
> remove that contention point to let us find out if we're on the right
> track. IIRC Ric was going to ask someone to do some performance testing of
> that hack, but we never heard back.
Ric did do some preliminary performance testing, and it wasn't
encouraging. It's still in the unstable portion of the ext4 patch
queue, and it's in my "wish I had more time to look at it; I don't get
to work on ext3/4 full-time" queue.
> The bottom line is that someone needs to do some serious rooting through
> the very heart of JBD transaction logic and nobody has yet put their hand
> up. If we do that, and it turns out to be just too hard to fix then yes,
> perhaps that's the time to start looking at palliative bandaids.
I disagree that they are _just_ palliative bandaids, because you need
these in order to make sure fsync() completes in a reasonable time, so
that people like Ingo don't get cranky. :-) Fixing the contention
between the running and committing transaction is a good thing, and I
hope someone puts up their hand or I magically get the time I need to
really dive into the jbd layer, but it won't help the Firefox 3.0
problem or Ingo's problem with saving files during a distcc run.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 13:30 ` Ingo Molnar
@ 2009-03-24 13:51 ` Theodore Tso
2009-03-24 16:34 ` Jesper Krogh
2009-03-24 18:20 ` Mark Lord
0 siblings, 2 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-24 13:51 UTC (permalink / raw)
To: Ingo Molnar
Cc: Alan Cox, Arjan van de Ven, Andrew Morton, Peter Zijlstra,
Nick Piggin, Jens Axboe, David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
On Tue, Mar 24, 2009 at 02:30:11PM +0100, Ingo Molnar wrote:
> i think the problem became visible via the rise in memory size,
> combined with the non-improvement of the performance of rotational
> disks.
>
> The disk speed versus RAM size ratio has become dramatically worse -
> and our "5% of RAM" dirty ratio on a 32 GB box is 1.6 GB - which
> takes an eternity to write out if you happen to sync on that. When
> we had 1 GB of RAM 5% meant 51 MB - one or two seconds to flush out
> - and worse than that, chances are that it's spread out widely on
> the disk, the whole thing becoming seek-limited as well.
That's definitely a problem too, but keep in mind that by default the
journal gets committed every 5 seconds, so the data gets flushed out
that often. So the question is how quickly can you *dirty* 1.6GB of
memory?
"dd if=/dev/zero of=/u1/dirty-me-harder" will certainly do it, but
normally we're doing something useful, and so you're either copying
data from local disk, at which point you're limited by the read speed
of your local disk (I suppose it could be in cache, but how common of
a case is that?), *or*, you're copying from the network, and to copy
in 1.6GB of data in 5 seconds, that means you're moving 320
megabytes/second, which if we're copying in the data from the network,
requires a 10 gigabit ethernet.
Hence my statement that this probably became much more visible with
fast ethernets --- but you're right, the huge increase in memory sizes
was also a key factor; otherwise, write throttling would have kicked
in and the VM would have started pushing the dirty pages to disk much
sooner.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 13:20 ` Theodore Tso
2009-03-24 13:30 ` Ingo Molnar
@ 2009-03-24 13:52 ` Alan Cox
2009-03-24 14:28 ` Theodore Tso
2009-03-24 17:55 ` Jan Kara
2009-03-24 17:55 ` Linus Torvalds
` (2 subsequent siblings)
4 siblings, 2 replies; 419+ messages in thread
From: Alan Cox @ 2009-03-24 13:52 UTC (permalink / raw)
To: Theodore Tso
Cc: Ingo Molnar, Arjan van de Ven, Andrew Morton, Peter Zijlstra,
Nick Piggin, Jens Axboe, David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
> They don't solve the problem where there is a *huge* amount of writes
> going on, though --- if something is dirtying pages at a rate far
At very high rates other things seem to go pear shaped. I've not traced
it back far enough to be sure but what I suspect occurs from the I/O at
disk level is that two people are writing stuff out at once - presumably
the vm paging pressure and the file system - as I see two streams of I/O
that are each reasonably ordered but are interleaved.
> don't get *that* bad, even with ext3. At least, I haven't found a
> workload that doesn't involve either dd if=/dev/zero or a massive
> amount of data coming in over the network that will cause fsync()
> delays in the > 1-2 second category. Ext3 has been around for a long
I see it with a desktop when it pages hard and also when doing heavy
desktop I/O (in my case the repeatable every time case is saving large
images in the gimp - A4 at 600-1200dpi).
The other one (#8636) seems to be a bug in the I/O schedulers as it goes
away if you use a different I/O sched.
> solve. Simply mounting an ext3 filesystem using ext4, without making
> any change to the filesystem format, should solve the problem.
I will try this experiment but not with production data just yet 8)
> some other users' data files. This was the reason for Stephen Tweedie
> implementing the data=ordered mode, and making it the default.
Yes and in the server environment or for typical enterprise customers
this is a *big issue*, especially the risk of it being undetected that
they just inadvertently did something like put your medical data into the
end of something public during a crash.
> Try ext4, I think you'll like it. :-)
I need to, so that I can double check none of the open jbd locking bugs
are there and close more bugzilla entries (#8147)
Thanks for the reply - I hadn't realised a lot of this was getting fixed
but in ext4 and quietly
Alan
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 13:52 ` Alan Cox
@ 2009-03-24 14:28 ` Theodore Tso
2009-03-24 15:18 ` Alan Cox
2009-03-24 17:55 ` Jan Kara
1 sibling, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-03-24 14:28 UTC (permalink / raw)
To: Alan Cox
Cc: Ingo Molnar, Arjan van de Ven, Andrew Morton, Peter Zijlstra,
Nick Piggin, Jens Axboe, David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
On Tue, Mar 24, 2009 at 01:52:49PM +0000, Alan Cox wrote:
>
> At very high rates other things seem to go pear shaped. I've not traced
> it back far enough to be sure but what I suspect occurs from the I/O at
> disk level is that two people are writing stuff out at once - presumably
> the vm paging pressure and the file system - as I see two streams of I/O
> that are each reasonably ordered but are interleaved.
Surely the elevator should have reordered the writes reasonably? (Or
is that what you meant by "the other one -- #8636 (I assume this is a
kernel Bugzilla #?) seems to be a bug in the I/O schedulers as it goes
away if you use a different I/O sched.?")
> > don't get *that* bad, even with ext3. At least, I haven't found a
> > workload that doesn't involve either dd if=/dev/zero or a massive
> > amount of data coming in over the network that will cause fsync()
> > delays in the > 1-2 second category. Ext3 has been around for a long
>
> I see it with a desktop when it pages hard and also when doing heavy
> desktop I/O (in my case the repeatable every time case is saving large
> images in the gimp - A4 at 600-1200dpi).
Yeah, I could see that doing it. How big is the image, and out of
curiosity, can you run the fsync-tester.c program I posted while
saving the gimp image, and tell me how much of a delay you end up
seeing?
> > solve. Simply mounting an ext3 filesystem using ext4, without making
> > any change to the filesystem format, should solve the problem.
>
> I will try this experiment but not with production data just yet 8)
Where's your bravery, man? :-)
I've been using it on my laptop since July, and haven't lost
significant amounts of data yet. (The only thing I did lose was bits
of a git repository fairly early on, and I was able to repair by
replacing the missing objects.)
> > some other users' data files. This was the reason for Stephen Tweedie
> > implementing the data=ordered mode, and making it the default.
>
> Yes and in the server environment or for typical enterprise customers
> this is a *big issue*, especially the risk of it being undetected that
> they just inadvertently did something like put your medical data into the
> end of something public during a crash.
True enough; changing the defaults to be data=writeback for the server
environment is probably not a good idea. (Then again, in the server
environment most of the workloads generally don't end up hitting the
nasty data=ordered failure modes; they tend to be
transaction-oriented, and fsync().)
> > Try ext4, I think you'll like it. :-)
>
> I need to, so that I can double check none of the open jbd locking bugs
> are there and close more bugzilla entries (#8147)
More testing would be appreciated --- and yeah, we need to groom the
bugzilla. For a long time no one in ext3 land was paying attention to
bugzilla, and more recently I've been trying to keep up with the
ext4-related bugs, but I don't get to do ext4 work full-time, and
occasionally Stacey gets annoyed at me when I work late into night...
> Thanks for the reply - I hadn't realised a lot of this was getting fixed
> but in ext4 and quietly
Yeah, there are a bunch of things, like the barrier=1 default, which
akpm has rejected for ext3, but which we've fixed in ext4. More help
in shaking down the bugs would definitely be appreciated.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 14:28 ` Theodore Tso
@ 2009-03-24 15:18 ` Alan Cox
0 siblings, 0 replies; 419+ messages in thread
From: Alan Cox @ 2009-03-24 15:18 UTC (permalink / raw)
To: Theodore Tso
Cc: Ingo Molnar, Arjan van de Ven, Andrew Morton, Peter Zijlstra,
Nick Piggin, Jens Axboe, David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
> Surely the elevator should have reordered the writes reasonably? (Or
> is that what you meant by "the other one -- #8636 (I assume this is a
> kernel Bugzilla #?) seems to be a bug in the I/O schedulers as it goes
> away if you use a different I/O sched.?")
There are two cases there. One is a bug #8636 (kernel bugzilla) which is
where things like dump show awful performance with certain I/O scheduler
settings. That seems to be totally not connected to the fs but it is a
problem (and has a patch)
The second one the elevator is clearly trying to sort out but its
behaving as if someone is writing the file starting at say 0 and someone
else is trying to write it back starting some large distance further down
the file. The elevator can only do so much then.
> Yeah, I could see that doing it. How big is the image, and out of
> curiosity, can you run the fsync-tester.c program I posted while
150MB+ for the pnm files from gimp used as temporaries by Eve (Etch
Validation Engine), more like 10MB for xcf/tif files.
> saving the gimp image, and tell me how much of a delay you end up
> seeing?
Added to the TODO list once I can set up a suitable test box (my new dev
box is somewhere between Dell and my desk right now)
> More testing would be appreciated --- and yeah, we need to groom the
> bugzilla.
I'm currently doing this on a large scale (closed about 300 so far this
run). Bug 8147 might be worth a look as its a case where the jbd locking
and the jbd comments seem to disagree (the comments say you must hold a
lock but we don't seem to)
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 13:51 ` Theodore Tso
@ 2009-03-24 16:34 ` Jesper Krogh
2009-03-24 17:32 ` Linus Torvalds
2009-03-24 18:20 ` Mark Lord
1 sibling, 1 reply; 419+ messages in thread
From: Jesper Krogh @ 2009-03-24 16:34 UTC (permalink / raw)
To: Theodore Tso, Ingo Molnar, Alan Cox, Arjan van de Ven,
Andrew Morton, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
Theodore Tso wrote:
> On Tue, Mar 24, 2009 at 02:30:11PM +0100, Ingo Molnar wrote:
>> i think the problem became visible via the rise in memory size,
>> combined with the non-improvement of the performance of rotational
>> disks.
>>
>> The disk speed versus RAM size ratio has become dramatically worse -
>> and our "5% of RAM" dirty ratio on a 32 GB box is 1.6 GB - which
>> takes an eternity to write out if you happen to sync on that. When
>> we had 1 GB of RAM 5% meant 51 MB - one or two seconds to flush out
>> - and worse than that, chances are that it's spread out widely on
>> the disk, the whole thing becoming seek-limited as well.
>
> That's definitely a problem too, but keep in mind that by default the
> journal gets committed every 5 seconds, so the data gets flushed out
> that often. So the question is how quickly can you *dirty* 1.6GB of
> memory?
Say it's a file that you allready have in memory cache read in.. there
is plenty of space in 16GB for that.. then you can dirty it at
memory-speed.. that about ½sec. (correct me if I'm wrong).
Ok, this is probably unrealistic, but memory grows the largest we have
at the moment is 32GB and its steadily growing with the core-counts.
Then the available memory is used to cache the "active" portion of the
filsystems. I would even say that in the NFS-servers I depend on it to
do this efficiently. (2.6.29-rc8 delivered 1050MB/s over af 10GbitE
using nfsd - send speed to multiple clients).
The current workload is based of an active dataset of 600GB where
index'es are being generated and written back to the same disk. So
there is a fairly high read/write load on the machine (as you said was
required). The majority (perhaps 550GB ) is only read once where the
rest of the time it is stuff in the last 50GB being rewritten.
> "dd if=/dev/zero of=/u1/dirty-me-harder" will certainly do it, but
> normally we're doing something useful, and so you're either copying
> data from local disk, at which point you're limited by the read speed
> of your local disk (I suppose it could be in cache, but how common of
> a case is that?),
Increasingly the case as memory sizes grows.
> *or*, you're copying from the network, and to copy
> in 1.6GB of data in 5 seconds, that means you're moving 320
> megabytes/second, which if we're copying in the data from the network,
> requires a 10 gigabit ethernet.
or just around being processed on the 16-32 cores on the system.
Jesper
--
Jesper
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 16:34 ` Jesper Krogh
@ 2009-03-24 17:32 ` Linus Torvalds
0 siblings, 0 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-24 17:32 UTC (permalink / raw)
To: Jesper Krogh
Cc: Theodore Tso, Ingo Molnar, Alan Cox, Arjan van de Ven,
Andrew Morton, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Linux Kernel Mailing List
On Tue, 24 Mar 2009, Jesper Krogh wrote:
>
> Theodore Tso wrote:
> > That's definitely a problem too, but keep in mind that by default the
> > journal gets committed every 5 seconds, so the data gets flushed out
> > that often. So the question is how quickly can you *dirty* 1.6GB of
> > memory?
Doesn't at least ext4 default to the _insane_ model of "data is less
important than meta-data, and it doesn't get journalled"?
And ext3 with "data=writeback" does the same, no?
Both of which are - as far as I can tell - total braindamage. At least
with ext3 it's not the _default_ mode.
I never understood how anybody doing filesystems (especially ones that
claim to be crash-resistant due to journalling) would _ever_ accept the
"writeback" behavior of having "clean fsck, but data loss".
> Say it's a file that you allready have in memory cache read in.. there
> is plenty of space in 16GB for that.. then you can dirty it at memory-speed..
> that about ½sec. (correct me if I'm wrong).
No, you'll still have to get per-page locks etc. If you use mmap(), you'll
page-fault on each page, if you use write() you'll do all the page lookups
etc. But yes, it can be pretty quick - the biggest cost probably _will_ be
the speed of memory itself (doing one-byte writes at each block would
change that, and the bottle-neck would become the system call and page
lookup/locking path, but it's probably in the same rough cost as cost of
writing out one page one page).
That said, this is all why we now have 'dirty_*bytes' limits too.
The problem is that the dirty_[background_]bytes value really should be
scaled up by the speed of IO. And we currently have no way to do that.
Some machines can write a gigabyte in a second with some fancy RAID
setups. Others will take minutes (or hours) to do that (crappy SSD's that
get 25kB/s throughput on random writes).
The "dirty_[background_ratio" percentage doesn't scale up by the speed of
IO either, of course, but at least historically there was generally a
pretty good correlation between amount of memory and speed of IO. The
machines that had gigs and gigs of RAM tended to always have fast IO too.
So scaling up dirty limits by memory size made sense both in the "we have
tons of memory, so allow tons of it to be dirty" sense _and_ in the "we
likely have a fast disk, so allow more pending dirty data".
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 13:52 ` Alan Cox
2009-03-24 14:28 ` Theodore Tso
@ 2009-03-24 17:55 ` Jan Kara
1 sibling, 0 replies; 419+ messages in thread
From: Jan Kara @ 2009-03-24 17:55 UTC (permalink / raw)
To: Alan Cox
Cc: Theodore Tso, Ingo Molnar, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linus Torvalds, Linux Kernel Mailing List
> > They don't solve the problem where there is a *huge* amount of writes
> > going on, though --- if something is dirtying pages at a rate far
>
> At very high rates other things seem to go pear shaped. I've not traced
> it back far enough to be sure but what I suspect occurs from the I/O at
> disk level is that two people are writing stuff out at once - presumably
> the vm paging pressure and the file system - as I see two streams of I/O
> that are each reasonably ordered but are interleaved.
There are different problems leading to this:
1) JBD commit code writes ordered data on each transaction commit. This
is done in dirtied-time order which is not necessarily optimal in case
of random access IO. IO scheduler helps here though because we submit a
lot of IO at once. ext4 has at least the randomness part of this problem
"fixed" because it submits ordered data via writepages(). Doing this
change requires non-trivial changes to the journaling layer so I wasn't
brave enough to do it with ext3 and JBD as well (although porting the
patch is trivial).
2) When we do dirty throttling, there are going to be several threads
writing out on the filesystem (if you have more pdflush threads which
translates to having more than one CPU). Jens' per-BDI writeback
threads could help here (but I haven't yet got to reading his patches in
detail to be sure).
These two problems together result in non-optimal IO pattern. At least
that's where I got to when I was looking into why Berkeley DB is so
slow. I was trying to somehow serialize more pdflush threads on the
filesystem but a stupid solution does not really help much - either I
was starving some throttled thread by other threads doing writeback or
I didn't quite keep the disk busy. So something like Jens' approach
is probably the way to go in the end.
> > don't get *that* bad, even with ext3. At least, I haven't found a
> > workload that doesn't involve either dd if=/dev/zero or a massive
> > amount of data coming in over the network that will cause fsync()
> > delays in the > 1-2 second category. Ext3 has been around for a long
>
> I see it with a desktop when it pages hard and also when doing heavy
> desktop I/O (in my case the repeatable every time case is saving large
> images in the gimp - A4 at 600-1200dpi).
>
> The other one (#8636) seems to be a bug in the I/O schedulers as it goes
> away if you use a different I/O sched.
>
> > solve. Simply mounting an ext3 filesystem using ext4, without making
> > any change to the filesystem format, should solve the problem.
>
> I will try this experiment but not with production data just yet 8)
>
> > some other users' data files. This was the reason for Stephen Tweedie
> > implementing the data=ordered mode, and making it the default.
>
> Yes and in the server environment or for typical enterprise customers
> this is a *big issue*, especially the risk of it being undetected that
> they just inadvertently did something like put your medical data into the
> end of something public during a crash.
>
> > Try ext4, I think you'll like it. :-)
>
> I need to, so that I can double check none of the open jbd locking bugs
> are there and close more bugzilla entries (#8147)
This one is still there. I'll have a look at it tomorrow and hopefully
will be able to answer...
Honza
--
Jan Kara <jack@suse.cz>
SuSE CR Labs
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 13:20 ` Theodore Tso
2009-03-24 13:30 ` Ingo Molnar
2009-03-24 13:52 ` Alan Cox
@ 2009-03-24 17:55 ` Linus Torvalds
2009-03-24 18:41 ` Kyle Moffett
2009-03-24 18:45 ` Theodore Tso
2009-03-24 20:24 ` David Rees
2009-03-24 23:03 ` Jesse Barnes
4 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-24 17:55 UTC (permalink / raw)
To: Theodore Tso
Cc: Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Tue, 24 Mar 2009, Theodore Tso wrote:
>
> Try ext4, I think you'll like it. :-)
>
> Failing that, data=writeback for single-user machines is probably your
> best bet.
Isn't that the same fix? ext4 just defaults to the crappy "writeback"
behavior, which is insane.
Sure, it makes things _much_ smoother, since now the actual data is no
longer in the critical path for any journal writes, but anybody who thinks
that's a solution is just incompetent.
We might as well go back to ext2 then. If your data gets written out long
after the metadata hit the disk, you are going to hit all kinds of bad
issues if the machine ever goes down.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 13:51 ` Theodore Tso
2009-03-24 16:34 ` Jesper Krogh
@ 2009-03-24 18:20 ` Mark Lord
2009-03-24 18:41 ` Eric Sandeen
1 sibling, 1 reply; 419+ messages in thread
From: Mark Lord @ 2009-03-24 18:20 UTC (permalink / raw)
To: Theodore Tso, Ingo Molnar, Alan Cox, Arjan van de Ven,
Andrew Morton, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
Theodore Tso wrote:
> So the question is how quickly can you *dirty* 1.6GB of memory?
..
MythTV: rm /some/really/huge/video/file ; sync
## disk light stays on for several minutes..
Note quite the same thing, I suppose, but it does break
the shutdown scripts of every major Linux distribution.
Simple solution for MythTV is what people already do: use xfs instead.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 18:20 ` Mark Lord
@ 2009-03-24 18:41 ` Eric Sandeen
0 siblings, 0 replies; 419+ messages in thread
From: Eric Sandeen @ 2009-03-24 18:41 UTC (permalink / raw)
To: Mark Lord
Cc: Theodore Tso, Ingo Molnar, Alan Cox, Arjan van de Ven,
Andrew Morton, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
Mark Lord wrote:
> Theodore Tso wrote:
>> So the question is how quickly can you *dirty* 1.6GB of memory?
> ..
>
> MythTV: rm /some/really/huge/video/file ; sync
> ## disk light stays on for several minutes..
>
> Note quite the same thing, I suppose, but it does break
> the shutdown scripts of every major Linux distribution.
It is indeed a different issue. ext3 does a fair bit of IO on a (here
60G file) delete:
http://people.redhat.com/~esandeen/rm_test/ext3_rm.png
ext4 is much better:
http://people.redhat.com/~esandeen/rm_test/ext4_rm.png
> Simple solution for MythTV is what people already do: use xfs instead.
and yes, xfs does it very quickly:
http://people.redhat.com/~esandeen/rm_test/xfs_rm.png
-Eric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 17:55 ` Linus Torvalds
@ 2009-03-24 18:41 ` Kyle Moffett
2009-03-24 19:17 ` Linus Torvalds
2009-03-24 18:45 ` Theodore Tso
1 sibling, 1 reply; 419+ messages in thread
From: Kyle Moffett @ 2009-03-24 18:41 UTC (permalink / raw)
To: Linus Torvalds
Cc: Theodore Tso, Ingo Molnar, Alan Cox, Arjan van de Ven,
Andrew Morton, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Tue, Mar 24, 2009 at 1:55 PM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
> On Tue, 24 Mar 2009, Theodore Tso wrote:
>> Try ext4, I think you'll like it. :-)
>>
>> Failing that, data=writeback for single-user machines is probably your
>> best bet.
>
> Isn't that the same fix? ext4 just defaults to the crappy "writeback"
> behavior, which is insane.
>
> Sure, it makes things _much_ smoother, since now the actual data is no
> longer in the critical path for any journal writes, but anybody who thinks
> that's a solution is just incompetent.
>
> We might as well go back to ext2 then. If your data gets written out long
> after the metadata hit the disk, you are going to hit all kinds of bad
> issues if the machine ever goes down.
Not really...
Regardless of any journalling, a power-fail or a crash is almost
certainly going to cause "data loss" of some variety. We simply
didn't get to sync everything we needed to (otherwise we'd all be
shutting down our computers with the SCRAM switches just for kicks).
The difference is, with ext3/4 (in any journal mode) we guarantee our
metadata is consistent. This means that we won't double-allocate or
leak inodes or blocks, which means that we can safely *write* to the
filesystem as soon as we replay the journal. With ext2 you *CAN'T* do
that at all, as somebody may have allocated an inode but not yet
marked it as in use. The only way to safely figure all that out
without journalling is an fsck run.
That difference between ext4 and ext3-in-writeback-mode is this: If
you get a crash in the narrow window *after* writing initial metadata
and before writing the data, ext4 will give you a zero length file,
whereas ext3-in-writeback-mode will give you a proper-length file
filled with whatever used to be on disk (might be the contents of a
previous /etc/shadow, or maybe somebody's finance files).
In that same situation, ext3 in data-ordered or data-journal mode will
"close" the window by preventing anybody else from making forward
progress until the data and the metadata are both updated. The thing
is, even on ext3 I can get exactly the same kind of behavior with an
appropriately timed "kill -STOP $dumb_program", followed by a power
failure 60 seconds later. It's a relatively obvious race condition...
When you create a file, you can't guarantee that all of that file's
data and metadata has hit disk until after an fsync() call returns.
The only *possible* exceptions are in cases like the
previously-mentioned (and now patched)
open(A)+write(A)+close(A)+rename(A,B), where the
rename-over-existing-file should act as an implicit filesystem
barrier. It should ensure that all writes to the file get flushed
before it is renamed on top of an existing file, simply because so
much UNIX software expects it to act that way.
When you're dealing with programs that simply
open()+ftruncate()+write()+close(), however... there's always going to
be a window in-between the ftruncate and the write where the file *is*
an empty file, and in that case no amount of operating-system-level
cleverness can deal with application-level bugs.
Cheers,
Kyle Moffett
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 17:55 ` Linus Torvalds
2009-03-24 18:41 ` Kyle Moffett
@ 2009-03-24 18:45 ` Theodore Tso
2009-03-24 19:21 ` Linus Torvalds
1 sibling, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-03-24 18:45 UTC (permalink / raw)
To: Linus Torvalds
Cc: Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Tue, Mar 24, 2009 at 10:55:40AM -0700, Linus Torvalds wrote:
>
>
> On Tue, 24 Mar 2009, Theodore Tso wrote:
> >
> > Try ext4, I think you'll like it. :-)
> >
> > Failing that, data=writeback for single-user machines is probably your
> > best bet.
>
> Isn't that the same fix? ext4 just defaults to the crappy "writeback"
> behavior, which is insane.
Technically, it's not data=writeback. It's more like XFS's delayed
allocation; I've added workarounds so that files that which are
replaced via truncate or rename get pushed out right away, which
should solve most of the problems involved with files becoming
zero-length after a system crash.
> Sure, it makes things _much_ smoother, since now the actual data is no
> longer in the critical path for any journal writes, but anybody who thinks
> that's a solution is just incompetent.
>
> We might as well go back to ext2 then. If your data gets written out long
> after the metadata hit the disk, you are going to hit all kinds of bad
> issues if the machine ever goes down.
With ext2 after a system crash you need to run fsck. With ext4, fsck
isn't an issue, but if the application doesn't use fsync(), yes,
there's no guarantee (other than the workarounds for
replace-via-truncate and replace-via-rename), but there's plenty of
prior history that says that applications that care about data hitting
the disk should use fsync(). Otherwise, it will get spread out over a
few minutes; and for some files, that really won't make a difference.
For precious files, applications that use fsync() will be safe ---
otherwise, even with ext3, you can end up losing the contents of the
file if you crash right before 5 second commit window. At least back
in the days when people were proud of their Linux systems having 2-3
year uptimes, and where jiffies could actually wrap from time to time,
the difference between 5 seconds and 3 minutes really wasn't that big
of a deal. People who really care about this can turn off delayed
allocation with the nodelalloc mount option. Of course then they will
have the ext3 slower fsync() problem.
You are right that data=writeback and delayed allocation do both mean
that data can get pushed out much later than the metadata. But that's
allowed by POSIX, and it does give some very nice performance
benefits.
With either data=writeback or delayed allocation, we can also adjust
the default commit interval and the writeback timer settings; if we
say, change the default commit interval to be 30 seconds, and change
the writeback expire interval to be 15 seconds, it will also smooth
out the writes significantly. So that's yet another solution, with a
different set of tradeoffs.
Depending on the set of applications someone is running on their
system, running and the reliability of their hardware/power/system in
general, different tradeoffs will be more or less appropriate for the
system administrator in question.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 7:32 ` Jesper Krogh
2009-03-24 8:16 ` Ingo Molnar
@ 2009-03-24 19:00 ` David Rees
2009-03-25 17:42 ` Jesper Krogh
2009-03-25 18:30 ` Theodore Tso
1 sibling, 2 replies; 419+ messages in thread
From: David Rees @ 2009-03-24 19:00 UTC (permalink / raw)
To: Jesper Krogh; +Cc: Linus Torvalds, Linux Kernel Mailing List
On Tue, Mar 24, 2009 at 12:32 AM, Jesper Krogh <jesper@krogh.cc> wrote:
> David Rees wrote:
> The 480 secondes is not the "wait time" but the time gone before the
> message is printed. It the kernel-default it was earlier 120 seconds but
> thats changed by Ingo Molnar back in september. I do get a lot of less
> noise but it really doesn't tell anything about the nature of the problem.
>
> The systes spec:
> 32GB of memory. The disks are a Nexsan SataBeast with 42 SATA drives in
> Raid10 connected using 4Gbit fibre-channel. I'll let it up to you to decide
> if thats fast or slow?
The drives should be fast enough to saturate 4Gbit FC in streaming
writes. How fast is the array in practice?
> The strange thing is actually that the above process (updatedb.mlocate) is
> writing to / which is a device without any activity at all. All activity is
> on the Fibre Channel device above, but process writing outsid that seems to
> be effected as well.
Ah. Sounds like your setup would benefit immensely from the per-bdi
patches from Jens Axobe. I'm sure he would appreciate some feedback
from users like you on them.
>> What's your vm.dirty_background_ratio and
>>
>> vm.dirty_ratio set to?
>
> 2.6.29-rc8 defaults:
> jk@hest:/proc/sys/vm$ cat dirty_background_ratio
> 5
> jk@hest:/proc/sys/vm$ cat dirty_ratio
> 10
On a 32GB system that's 1.6GB of dirty data, but your array should be
able to write that out fairly quickly (in a couple seconds) as long as
it's not too random. If it's spread all over the disk, write
throughput will drop significantly - how fast is data being written to
disk when your system suffers from large write latency?
>>> Consensus seems to be something with large memory machines, lots of dirty
>>> pages and a long writeout time due to ext3.
>>
>> All filesystems seem to suffer from this issue to some degree. I
>> posted to the list earlier trying to see if there was anything that
>> could be done to help my specific case. I've got a system where if
>> someone starts writing out a large file, it kills client NFS writes.
>> Makes the system unusable:
>> http://marc.info/?l=linux-kernel&m=123732127919368&w=2
>
> Yes, I've hit 120s+ penalties just by saving a file in vim.
Yeah, your disks aren't keeping up and/or data isn't being written out
efficiently.
>> Only workaround I've found is to reduce dirty_background_ratio and
>> dirty_ratio to tiny levels. Or throw good SSDs and/or a fast RAID
>> array at it so that large writes complete faster. Have you tried the
>> new vm_dirty_bytes in 2.6.29?
>
> No.. What would you suggest to be a reasonable setting for that?
Look at whatever is there by default and try cutting them in half to start.
>> Everyone seems to agree that "autotuning" it is the way to go. But no
>> one seems willing to step up and try to do it. Probably because it's
>> hard to get right!
>
> I can test patches.. but I'm not a kernel-developer.. unfortunately.
Me either - but luckily there have been plenty chiming in on this thread now.
-Dave
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 18:41 ` Kyle Moffett
@ 2009-03-24 19:17 ` Linus Torvalds
0 siblings, 0 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-24 19:17 UTC (permalink / raw)
To: Kyle Moffett
Cc: Theodore Tso, Ingo Molnar, Alan Cox, Arjan van de Ven,
Andrew Morton, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Tue, 24 Mar 2009, Kyle Moffett wrote:
>
> Regardless of any journalling, a power-fail or a crash is almost
> certainly going to cause "data loss" of some variety.
The point is, if you write your metadata earlier (say, every 5 sec) and
the real data later (say, every 30 sec), you're actually MORE LIKELY to
see corrupt files than if you try to write them together.
And if you write your data _first_, you're never going to see corruption
at all.
This is why I absolutely _detest_ the idiotic ext3 writeback behavior. It
literally does everything the wrong way around - writing data later than
the metadata that points to it. Whoever came up with that solution was a
moron. No ifs, buts, or maybes about it.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 18:45 ` Theodore Tso
@ 2009-03-24 19:21 ` Linus Torvalds
2009-03-24 19:40 ` Ric Wheeler
2009-03-24 19:55 ` Jeff Garzik
0 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-24 19:21 UTC (permalink / raw)
To: Theodore Tso
Cc: Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Tue, 24 Mar 2009, Theodore Tso wrote:
>
> With ext2 after a system crash you need to run fsck. With ext4, fsck
> isn't an issue,
Bah. A corrupt filesystem is a corrupt filesystem. Whether you have to
fsck it or not should be a secondary concern.
I personally find silent corruption to be _worse_ than the non-silent one.
At least if there's some program that says "oops, your inode so-and-so
seems to be scrogged" that's better than just silently having bad data in
it.
Of course, never having bad data _nor_ needing fsck is clearly optimal.
data=ordered gets pretty close (and data=journal is unacceptable for
performance reasons).
But I really don't understand filesystem people who think that "fsck" is
the important part, regardless of whether the data is valid or not. That's
just stupid and _obviously_ bogus.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 19:21 ` Linus Torvalds
@ 2009-03-24 19:40 ` Ric Wheeler
2009-03-24 19:55 ` Jeff Garzik
1 sibling, 0 replies; 419+ messages in thread
From: Ric Wheeler @ 2009-03-24 19:40 UTC (permalink / raw)
To: Linus Torvalds
Cc: Theodore Tso, Ingo Molnar, Alan Cox, Arjan van de Ven,
Andrew Morton, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
> On Tue, 24 Mar 2009, Theodore Tso wrote:
>
>> With ext2 after a system crash you need to run fsck. With ext4, fsck
>> isn't an issue,
>>
>
> Bah. A corrupt filesystem is a corrupt filesystem. Whether you have to
> fsck it or not should be a secondary concern.
>
> I personally find silent corruption to be _worse_ than the non-silent one.
> At least if there's some program that says "oops, your inode so-and-so
> seems to be scrogged" that's better than just silently having bad data in
> it.
>
> Of course, never having bad data _nor_ needing fsck is clearly optimal.
> data=ordered gets pretty close (and data=journal is unacceptable for
> performance reasons).
>
> But I really don't understand filesystem people who think that "fsck" is
> the important part, regardless of whether the data is valid or not. That's
> just stupid and _obviously_ bogus.
>
> Linus
>
It is always interesting to try to explain to users that just because
fsck ran cleanly does not mean anything that they care about is actually
safely on disk. The speed that fsck can run at is important when you are
trying to recover data from a really hosed file system, but that is
thankfully relatively rare for most people.
Having been involved in many calls with customers after crashes, what
they really want to know is pretty routine - do you have all of the data
I wrote? can you prove that it is the same data that I wrote? if not,
what data is missing and needs to be restored?
We can get help answer those questions with checksums or digital hashes
to validate the actual user data of files (open question is when to
compute it, where to store, would the SCSI T10 DIF/DIX stuff be
sufficient), putting in place some background scrubbers to detect
corruptions (which can happen even without an IO error), etc.
Being able to pin point what was impacted is actually enormously useful
- for example, being able to map a bad sector back into some meaningful
object like a user file, meta-data (translation, run fsck) or so on.
Ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 19:21 ` Linus Torvalds
2009-03-24 19:40 ` Ric Wheeler
@ 2009-03-24 19:55 ` Jeff Garzik
2009-03-25 9:34 ` Benny Halevy
2009-03-25 9:39 ` Jens Axboe
1 sibling, 2 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-03-24 19:55 UTC (permalink / raw)
To: Linus Torvalds
Cc: Theodore Tso, Ingo Molnar, Alan Cox, Arjan van de Ven,
Andrew Morton, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
> But I really don't understand filesystem people who think that "fsck" is
> the important part, regardless of whether the data is valid or not. That's
> just stupid and _obviously_ bogus.
I think I can understand that point of view, at least:
More customers complain about hours-long fsck times than they do about
silent data corruption of non-fsync'd files.
> The point is, if you write your metadata earlier (say, every 5 sec) and
> the real data later (say, every 30 sec), you're actually MORE LIKELY to
> see corrupt files than if you try to write them together.
>
> And if you write your data _first_, you're never going to see corruption
> at all.
Amen.
And, personal filesystem pet peeve: please encourage proper FLUSH CACHE
use to give users the data guarantees they deserve. Linux's sync(2) and
fsync(2) (and fdatasync, etc.) should poke the block layer to guarantee
a media write.
Jeff
P.S. Overall, I am thrilled that this ext3/ext4 transition and
associated slashdotting has spurred debate over filesystem data
guarantees. This is the kind of discussion that has needed to happen
for years, IMO.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 13:20 ` Theodore Tso
` (2 preceding siblings ...)
2009-03-24 17:55 ` Linus Torvalds
@ 2009-03-24 20:24 ` David Rees
2009-03-25 7:30 ` David Rees
2009-03-24 23:03 ` Jesse Barnes
4 siblings, 1 reply; 419+ messages in thread
From: David Rees @ 2009-03-24 20:24 UTC (permalink / raw)
To: Theodore Tso, Ingo Molnar, Alan Cox, Arjan van de Ven,
Andrew Morton, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
On Tue, Mar 24, 2009 at 6:20 AM, Theodore Tso <tytso@mit.edu> wrote:
> However, what I've found, though, is that if you're just doing a local
> copy from one hard drive to another, or downloading a huge iso file
> from an ftp server over a wide area network, the fsync() delays really
> don't get *that* bad, even with ext3. At least, I haven't found a
> workload that doesn't involve either dd if=/dev/zero or a massive
> amount of data coming in over the network that will cause fsync()
> delays in the > 1-2 second category. Ext3 has been around for a long
> time, and it's only been the last couple of years that people have
> really complained about this; my theory is that it was the rise of >
> 10 megabit ethernets and the use of systems like distcc that really
> made this problem really become visible. The only realistic workload
> I've found that triggers this requires a fast network dumping data to
> a local filesystem.
It's pretty easy to reproduce it these days. Here's my setup, and
it's not even that fancy: Dual core Xeon, 8GB RAM, SATA RAID1 array,
GigE network. All it takes is a single client writing a large file
using Samba or NFS to introduce huge latencies.
Looking at the raw throughput, the server's disks can sustain
30-60MB/s writes (older disks), but the network can handle up to
~100MB/s. Throw in some other random seeky IO on the server, a bunch
of fragmentation and it's sustained write throughput in reality for
these writes is more like 10-25MB/s, far slower than the rate at which
a client can throw data at it.
5% dirty_ratrio * 8GB is 400MB. Let's say in reality the system is
flushing 20MB/s to disk, this is a delay of up to 20 seconds. Let's
say you have a user application which needs to fsync a number of small
files (and unfortunately they are done serially) and now I've got
applications (like Firefox) which basically remain unresponsive the
entire time the write is being done.
> (I'm sure someone will be ingeniuous enough to find something else
> though, and if they're interested, I've attached an fsync latency
> tester to this note. If you find something; let me know, I'd be
> interested.)
Thanks - I'll give the program a shot later with my test case and see
what it reports. My simple test case[1] for reproducing this has
reported 6-45 seconds depending on the system. I'll try it with the
previously mentioned workload as well.
-Dave
[1] http://bugzilla.kernel.org/show_bug.cgi?id=12309#c249
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 13:20 ` Theodore Tso
` (3 preceding siblings ...)
2009-03-24 20:24 ` David Rees
@ 2009-03-24 23:03 ` Jesse Barnes
2009-03-25 0:05 ` Arjan van de Ven
` (2 more replies)
4 siblings, 3 replies; 419+ messages in thread
From: Jesse Barnes @ 2009-03-24 23:03 UTC (permalink / raw)
To: Theodore Tso
Cc: Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linus Torvalds, Linux Kernel Mailing List
On Tue, 24 Mar 2009 09:20:32 -0400
Theodore Tso <tytso@mit.edu> wrote:
> They don't solve the problem where there is a *huge* amount of writes
> going on, though --- if something is dirtying pages at a rate far
> greater than the local disk can write it out, say, either "dd
> if=/dev/zero of=/mnt/make-lots-of-writes" or a massive distcc cluster
> driving a huge amount of data towards a single system or a wget over a
> local 100 megabit ethernet from a massive NFS server where everything
> is in cache, then you can have a major delay with the fsync().
You make it sound like this is hard to do... I was running into this
problem *every day* until I moved to XFS recently. I'm running a
fairly beefy desktop (VMware running a crappy Windows install w/AV junk
on it, builds, icecream and large mailboxes) and have a lot of RAM, but
it became unusable for minutes at a time, which was just totally
unacceptable, thus the switch. Things have been better since, but are
still a little choppy.
I remember early in the 2.6.x days there was a lot of focus on making
interactive performance good, and for a long time it was. But this I/O
problem has been around for a *long* time now... What happened? Do not
many people run into this daily? Do all the filesystem hackers run
with special mount options to mitigate the problem?
--
Jesse Barnes, Intel Open Source Technology Center
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 23:03 ` Jesse Barnes
@ 2009-03-25 0:05 ` Arjan van de Ven
2009-03-25 17:59 ` David Rees
2009-03-25 18:40 ` Stephen Clark
2009-03-25 2:09 ` Theodore Tso
2009-03-27 11:27 ` Martin Steigerwald
2 siblings, 2 replies; 419+ messages in thread
From: Arjan van de Ven @ 2009-03-25 0:05 UTC (permalink / raw)
To: Jesse Barnes
Cc: Theodore Tso, Ingo Molnar, Alan Cox, Andrew Morton,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linus Torvalds, Linux Kernel Mailing List
On Tue, 24 Mar 2009 16:03:53 -0700
Jesse Barnes <jbarnes@virtuousgeek.org> wrote:
>
> I remember early in the 2.6.x days there was a lot of focus on making
> interactive performance good, and for a long time it was. But this
> I/O problem has been around for a *long* time now... What happened?
> Do not many people run into this daily? Do all the filesystem
> hackers run with special mount options to mitigate the problem?
>
the people that care use my kernel patch on ext3 ;-)
(or the userland equivalent tweak in /etc/rc.local)
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 23:03 ` Jesse Barnes
2009-03-25 0:05 ` Arjan van de Ven
@ 2009-03-25 2:09 ` Theodore Tso
2009-03-25 3:57 ` Jesse Barnes
2009-03-27 11:27 ` Martin Steigerwald
2 siblings, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-03-25 2:09 UTC (permalink / raw)
To: Jesse Barnes
Cc: Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linus Torvalds, Linux Kernel Mailing List
On Tue, Mar 24, 2009 at 04:03:53PM -0700, Jesse Barnes wrote:
>
> You make it sound like this is hard to do... I was running into this
> problem *every day* until I moved to XFS recently. I'm running a
> fairly beefy desktop (VMware running a crappy Windows install w/AV junk
> on it, builds, icecream and large mailboxes) and have a lot of RAM, but
> it became unusable for minutes at a time, which was just totally
> unacceptable, thus the switch. Things have been better since, but are
> still a little choppy.
>
I have 4 gigs of memory on my laptop, and I've never seen it these
sorts of issues. So maybe filesystem hackers don't have enough
memory; or we don't use the right workloads? It would help if I
understood how to trigger these disaster cases. I've had to work
*really* hard (as in dd if=/dev/zero of=/mnt/dirty-me-harder) in order
to get even a 30 second fsync() delay. So understanding what sort of
things you do that cause that many files data blocks to be dirtied,
and/or what is causing a major read workload, would be useful.
It may be that we just need to tune the VM to be much more aggressive
about pushing dirty pages to the disk sooner. Understanding how the
dynamics are working would be the first step.
> I remember early in the 2.6.x days there was a lot of focus on making
> interactive performance good, and for a long time it was. But this I/O
> problem has been around for a *long* time now... What happened? Do not
> many people run into this daily? Do all the filesystem hackers run
> with special mount options to mitigate the problem?
All I can tell you is that *I* don't run into them, even when I was
using ext3 and before I got an SSD in my laptop. I don't understand
why; maybe because I don't get really nice toys like systems with
32G's of memory. Or maybe it's because I don't use icecream (whatever
that is). What ever it is, it would be useful to get some solid
reproduction information, with details about hardware configuration,
and information collecting using sar and scripts that gather
/proc/meminfo every 5 seconds, and what the applications were doing at
the time.
It might also be useful for someone to try reducing the amount of
memory the system is using by using mem= on the boot line, and see if
that changes things, and to try simplifying the application workload,
and/or using iotop to determine what is most contributing to the
problem. (And of course, this needs to be done with someone using
ext3, since both ext4 and XFS use delayed allocation, which will
largely make this problem go away.)
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 2:09 ` Theodore Tso
@ 2009-03-25 3:57 ` Jesse Barnes
0 siblings, 0 replies; 419+ messages in thread
From: Jesse Barnes @ 2009-03-25 3:57 UTC (permalink / raw)
To: Theodore Tso
Cc: Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linus Torvalds, Linux Kernel Mailing List
On Tue, 24 Mar 2009 22:09:15 -0400
Theodore Tso <tytso@mit.edu> wrote:
> On Tue, Mar 24, 2009 at 04:03:53PM -0700, Jesse Barnes wrote:
> >
> > You make it sound like this is hard to do... I was running into
> > this problem *every day* until I moved to XFS recently. I'm
> > running a fairly beefy desktop (VMware running a crappy Windows
> > install w/AV junk on it, builds, icecream and large mailboxes) and
> > have a lot of RAM, but it became unusable for minutes at a time,
> > which was just totally unacceptable, thus the switch. Things have
> > been better since, but are still a little choppy.
> >
>
> I have 4 gigs of memory on my laptop, and I've never seen it these
> sorts of issues. So maybe filesystem hackers don't have enough
> memory; or we don't use the right workloads? It would help if I
> understood how to trigger these disaster cases. I've had to work
> *really* hard (as in dd if=/dev/zero of=/mnt/dirty-me-harder) in order
> to get even a 30 second fsync() delay. So understanding what sort of
> things you do that cause that many files data blocks to be dirtied,
> and/or what is causing a major read workload, would be useful.
>
> It may be that we just need to tune the VM to be much more aggressive
> about pushing dirty pages to the disk sooner. Understanding how the
> dynamics are working would be the first step.
Well I think that's part of the problem; this is bigger than just
filesystems; I've been using ext3 since before I started seeing this,
so it seems like a bad VM/fs interaction may be to blame.
> > I remember early in the 2.6.x days there was a lot of focus on
> > making interactive performance good, and for a long time it was.
> > But this I/O problem has been around for a *long* time now... What
> > happened? Do not many people run into this daily? Do all the
> > filesystem hackers run with special mount options to mitigate the
> > problem?
>
> All I can tell you is that *I* don't run into them, even when I was
> using ext3 and before I got an SSD in my laptop. I don't understand
> why; maybe because I don't get really nice toys like systems with
> 32G's of memory. Or maybe it's because I don't use icecream (whatever
> that is). What ever it is, it would be useful to get some solid
> reproduction information, with details about hardware configuration,
> and information collecting using sar and scripts that gather
> /proc/meminfo every 5 seconds, and what the applications were doing at
> the time.
icecream is a distributed compiler system. Like distcc but a bit more
cross-compile & heterogeneous compiler friendly.
> It might also be useful for someone to try reducing the amount of
> memory the system is using by using mem= on the boot line, and see if
> that changes things, and to try simplifying the application workload,
> and/or using iotop to determine what is most contributing to the
> problem. (And of course, this needs to be done with someone using
> ext3, since both ext4 and XFS use delayed allocation, which will
> largely make this problem go away.)
Yep, and that's where my blame comes in. I whined about this to a few
people, like Arjan, who provided workarounds, but never got beyond
that. Some real debugging would be needed to find & fix the root
cause(s).
--
Jesse Barnes, Intel Open Source Technology Center
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 20:24 ` David Rees
@ 2009-03-25 7:30 ` David Rees
0 siblings, 0 replies; 419+ messages in thread
From: David Rees @ 2009-03-25 7:30 UTC (permalink / raw)
To: Theodore Tso, Ingo Molnar, Alan Cox, Arjan van de Ven,
Andrew Morton, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
On Tue, Mar 24, 2009 at 1:24 PM, David Rees <drees76@gmail.com> wrote:
> On Tue, Mar 24, 2009 at 6:20 AM, Theodore Tso <tytso@mit.edu> wrote:
>> The only realistic workload
>> I've found that triggers this requires a fast network dumping data to
>> a local filesystem.
>
> It's pretty easy to reproduce it these days. Here's my setup, and
> it's not even that fancy: Dual core Xeon, 8GB RAM, SATA RAID1 array,
> GigE network. All it takes is a single client writing a large file
> using Samba or NFS to introduce huge latencies.
>
> Looking at the raw throughput, the server's disks can sustain
> 30-60MB/s writes (older disks), but the network can handle up to
> ~100MB/s. Throw in some other random seeky IO on the server, a bunch
> of fragmentation and it's sustained write throughput in reality for
> these writes is more like 10-25MB/s, far slower than the rate at which
> a client can throw data at it.
>
>> (I'm sure someone will be ingeniuous enough to find something else
>> though, and if they're interested, I've attached an fsync latency
>> tester to this note. If you find something; let me know, I'd be
>> interested.)
OK, two simple tests on this system produce latencies well over 1-2s
using your fsync-tester.
The network client writing to disk scenario (~1GB file) resulted in this:
fsync time: 6.5272
fsync time: 35.6803
fsync time: 15.6488
fsync time: 0.3570
One thing to note - writing to this particular array seems to have
higher than expected latency without the big write, on the order of
0.2 seconds or so. I think this is because the system is not idle and
has a good number of programs on it doing logging and other small bits
of IO. vmstat 5 shows the system writing out about 300-1000 under the
bo column.
Copying that file to a separate disk was not as bad, but there were
still some big spikes:
fsync time: 6.8808
fsync time: 18.4634
fsync time: 9.6852
fsync time: 10.6146
fsync time: 8.5015
fsync time: 5.2160
The destination disk did not have any significant IO on it at the time.
The system is running Fedora 10 2.6.27.19-78.2.30.fc9.x86_64 and has
two RAID1 arrays attached to an aacraid controller. ext3 filesystems
mounted with noatime.
-Dave
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 19:55 ` Jeff Garzik
@ 2009-03-25 9:34 ` Benny Halevy
2009-03-25 9:39 ` Jens Axboe
1 sibling, 0 replies; 419+ messages in thread
From: Benny Halevy @ 2009-03-25 9:34 UTC (permalink / raw)
To: Jeff Garzik
Cc: Linus Torvalds, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
Jens Axboe, David Rees, Jesper Krogh, Linux Kernel Mailing List
On Mar. 24, 2009, 21:55 +0200, Jeff Garzik <jeff@garzik.org> wrote:
> Linus Torvalds wrote:
>> But I really don't understand filesystem people who think that "fsck" is
>> the important part, regardless of whether the data is valid or not. That's
>> just stupid and _obviously_ bogus.
>
> I think I can understand that point of view, at least:
>
> More customers complain about hours-long fsck times than they do about
> silent data corruption of non-fsync'd files.
>
>
>> The point is, if you write your metadata earlier (say, every 5 sec) and
>> the real data later (say, every 30 sec), you're actually MORE LIKELY to
>> see corrupt files than if you try to write them together.
>>
>> And if you write your data _first_, you're never going to see corruption
>> at all.
>
> Amen.
>
> And, personal filesystem pet peeve: please encourage proper FLUSH CACHE
> use to give users the data guarantees they deserve. Linux's sync(2) and
> fsync(2) (and fdatasync, etc.) should poke the block layer to guarantee
> a media write.
I completely agree. This also applies to nfsd_sync, by the way.
What's the right place to implement that?
How about sync_blockdev?
Benny
>
> Jeff
>
>
> P.S. Overall, I am thrilled that this ext3/ext4 transition and
> associated slashdotting has spurred debate over filesystem data
> guarantees. This is the kind of discussion that has needed to happen
> for years, IMO.
>
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
--
Benny Halevy
Software Architect
Panasas, Inc.
bhalevy@panasas.com
Tel/Fax: +972-3-647-8340
Mobile: +972-54-802-8340
Panasas: The Leader in Parallel Storage
www.panasas.com
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 19:55 ` Jeff Garzik
2009-03-25 9:34 ` Benny Halevy
@ 2009-03-25 9:39 ` Jens Axboe
2009-03-25 19:32 ` Jeff Garzik
1 sibling, 1 reply; 419+ messages in thread
From: Jens Axboe @ 2009-03-25 9:39 UTC (permalink / raw)
To: Jeff Garzik
Cc: Linus Torvalds, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Tue, Mar 24 2009, Jeff Garzik wrote:
> Linus Torvalds wrote:
>> But I really don't understand filesystem people who think that "fsck"
>> is the important part, regardless of whether the data is valid or not.
>> That's just stupid and _obviously_ bogus.
>
> I think I can understand that point of view, at least:
>
> More customers complain about hours-long fsck times than they do about
> silent data corruption of non-fsync'd files.
>
>
>> The point is, if you write your metadata earlier (say, every 5 sec) and
>> the real data later (say, every 30 sec), you're actually MORE LIKELY to
>> see corrupt files than if you try to write them together.
>>
>> And if you write your data _first_, you're never going to see
>> corruption at all.
>
> Amen.
>
> And, personal filesystem pet peeve: please encourage proper FLUSH CACHE
> use to give users the data guarantees they deserve. Linux's sync(2) and
> fsync(2) (and fdatasync, etc.) should poke the block layer to guarantee
> a media write.
fsync already does that, at least if you have barriers enabled on your
drive.
--
Jens Axboe
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 11:12 ` Andrew Morton
2009-03-24 12:23 ` Alan Cox
2009-03-24 13:37 ` Theodore Tso
@ 2009-03-25 12:37 ` Jan Kara
2009-03-25 15:00 ` Theodore Tso
2 siblings, 1 reply; 419+ messages in thread
From: Jan Kara @ 2009-03-25 12:37 UTC (permalink / raw)
To: Andrew Morton
Cc: Ingo Molnar, Alan Cox, Arjan van de Ven, Peter Zijlstra,
Nick Piggin, Theodore Tso, Jens Axboe, David Rees, Jesper Krogh,
Linus Torvalds, Linux Kernel Mailing List
On Tue 24-03-09 04:12:49, Andrew Morton wrote:
> On Tue, 24 Mar 2009 11:31:11 +0100 Ingo Molnar <mingo@elte.hu> wrote:
> > The thing is ... this is a _bad_ ext3 design bug affecting ext3
> > users in the last decade or so of ext3 existence. Why is this issue
> > not handled with the utmost high priority and why wasnt it fixed 5
> > years ago already? :-)
> >
> > It does not matter whether we have extents or htrees when there are
> > _trivially reproducible_ basic usability problems with ext3.
> >
>
> It's all there in that Oct 2008 thread.
>
> The proposed tweak to kjournald is a bad fix - partly because it will
> elevate the priority of vast amounts of IO whose priority we don't _want_
> elevated.
>
> But mainly because the problem lies elsewhere - in an area of contention
> between the committing and running transactions which we knowingly and
> reluctantly added to fix a bug in
>
> commit 773fc4c63442fbd8237b4805627f6906143204a8
> Author: akpm <akpm>
> AuthorDate: Sun May 19 23:23:01 2002 +0000
> Commit: akpm <akpm>
> CommitDate: Sun May 19 23:23:01 2002 +0000
>
> [PATCH] fix ext3 buffer-stealing
>
> Patch from sct fixes a long-standing (I did it!) and rather complex
> problem with ext3.
>
> The problem is to do with buffers which are continually being dirtied
> by an external agent. I had code in there (for easily-triggerable
> livelock avoidance) which steals the buffer from checkpoint mode and
> reattaches it to the running transaction. This violates ext3 ordering
> requirements - it can permit journal space to be reclaimed before the
> relevant data has really been written out.
>
> Also, we do have to reliably get a lock on the buffer when moving it
> between lists and inspecting its internal state. Otherwise a competing
> read from the underlying block device can trigger an assertion failure,
> and a competing write to the underlying block device can confuse ext3
> journalling state completely.
I've looked at this a bit. I suppose you mean the contention arising from
us taking the buffer lock in do_get_write_access()? But it's not obvious
to me why we'd be contending there... We call this function only for
metadata buffers (unless in data=journal mode) so there isn't huge amount
of these blocks. This buffer should be locked for a longer time only when
we do writeout for checkpoint (hmm, maybe you meant this one?). In
particular, note that we don't take the buffer lock when committing this
block to journal - we lock only the BJ_IO buffer. But in this case we wait
when the buffer is on BJ_Shadow list later so there is some contention in
this case.
Also when I emailed with a few people about these sync problems, they
wrote that switching to data=writeback mode helps considerably so this
would indicate that handling of ordered mode data buffers is causing most
of the slowdown...
Honza
--
Jan Kara <jack@suse.cz>
SUSE Labs, CR
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 12:37 ` Jan Kara
@ 2009-03-25 15:00 ` Theodore Tso
2009-03-25 17:29 ` Linus Torvalds
0 siblings, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-03-25 15:00 UTC (permalink / raw)
To: Jan Kara
Cc: Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linus Torvalds, Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 01:37:44PM +0100, Jan Kara wrote:
> > Also, we do have to reliably get a lock on the buffer when moving it
> > between lists and inspecting its internal state. Otherwise a competing
> > read from the underlying block device can trigger an assertion failure,
> > and a competing write to the underlying block device can confuse ext3
> > journalling state completely.
>
> I've looked at this a bit. I suppose you mean the contention arising from
> us taking the buffer lock in do_get_write_access()? But it's not obvious
> to me why we'd be contending there... We call this function only for
> metadata buffers (unless in data=journal mode) so there isn't huge amount
> of these blocks.
There isn't a huge number of those blocks, but if inode #1220 was
modified in the previous transaction which is now being committed, and
we then need to modify and write out inode #1221 in the current
contention, and they share the same inode table block, that would
cause the contention. That probably doesn't happen that often in a
synchronous code path, but it probably happens more often that you're
thinking. I still think the fsync() problem is the much bigger deal,
and solving the contention problem isn't going to solve the fsync()
latency problem with ext3 data=ordered mode.
> Also when I emailed with a few people about these sync problems, they
> wrote that switching to data=writeback mode helps considerably so this
> would indicate that handling of ordered mode data buffers is causing most
> of the slowdown...
Yes, but we need to be clear whether this was an fsync() problem or
some other random delay problem. If it's the fsync() problem,
obviously data=writeback will solve the fsync() latency delay problem.
(As will using delayed allocation in ext4 or XFS.)
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
[not found] ` <cjeS0-5nC-33@gated-at.bofh.it>
@ 2009-03-25 15:19 ` Bodo Eggert
0 siblings, 0 replies; 419+ messages in thread
From: Bodo Eggert @ 2009-03-25 15:19 UTC (permalink / raw)
To: Theodore Tso, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
Jens Axboe, David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
Theodore Tso <tytso@mit.edu> wrote:
> OK, so there are a couple of solutions to this problem. One is to use
> ext4 and delayed allocation. This solves the problem by simply not
> allocating the blocks in the first place, so we don't have to force
> them out to solve the security problem that data=ordered was trying to
> solve. Simply mounting an ext3 filesystem using ext4, without making
> any change to the filesystem format, should solve the problem.
[...]
> However, these days, nearly all Linux boxes are single user machines,
> so the security concern is much less of a problem. So maybe the best
> solution for now is to make data=writeback the default. This solves
> the problem too. The only problem with this is that there are a lot
> of sloppy application writers out there, and they've gotten lazy about
> using fsync() where it's necessary;
The problem is not having accidential data loss because the inode /happened/
to be written before the data, but having /guaranteed/ data loss in a
60-seconds-window. This is about as acceptable as having a filesystem
replace _any_ data with "deadbeef" on each crash unless fsync was called.
Besides that: If the problem is due to crappy VM writeout (is it?), reducing
security to DOS level is not the answer. You'd want your fs to be usable on
servers, wouldn't you?
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 15:00 ` Theodore Tso
@ 2009-03-25 17:29 ` Linus Torvalds
2009-03-25 17:57 ` Alan Cox
` (2 more replies)
0 siblings, 3 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-25 17:29 UTC (permalink / raw)
To: Theodore Tso
Cc: Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, 25 Mar 2009, Theodore Tso wrote:
>
> I still think the fsync() problem is the much bigger deal, and solving
> the contention problem isn't going to solve the fsync() latency problem
> with ext3 data=ordered mode.
The fsync() problem is really annoying, but what is doubly annoying is
that sometimes one process doing fsync() (or sync) seems to cause other
processes to hickup too.
Now, I personally solved that problem by moving to (good) SSD's on my
desktop, and I think that's indeed the long-term solution. But it would be
good to try to figure out a solution in the short term for people who
don't have new hardware thrown at them from random companies too.
I suspect it's a combination of filesystem transaction locking, together
with the VM wanting to write out some unrelated blocks or inodes due to
the system just being close to the dirty limits. Which is why the
system-wide hickups then happen especially when writing big files.
The VM _tries_ to do writes in the background, but if the writepage() path
hits a filesystem-level blocking lock, that background write suddenly
becomes largely synchronous.
I suspect there is also some possibility of confusion with inter-file
(false) metadata dependencies. If a filesystem were to think that the file
size is metadata that should be journaled (in a single journal), and the
journaling code then decides that it needs to do those meta-data updates
in the correct order (ie the big file write _before_ the file write that
wants to be fsync'ed), then the fsync() will be delayed by a totally
irrelevant large file having to have its data written out (due to
data=ordered or whatever).
I'd like to think that no filesystem designer would ever be that silly,
but I'm too scared to try to actually go and check. Because I could well
imagine that somebody really thought that "size" is metadata.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 19:00 ` David Rees
@ 2009-03-25 17:42 ` Jesper Krogh
2009-03-25 18:16 ` David Rees
2009-03-25 18:30 ` Theodore Tso
1 sibling, 1 reply; 419+ messages in thread
From: Jesper Krogh @ 2009-03-25 17:42 UTC (permalink / raw)
To: David Rees; +Cc: Linus Torvalds, Linux Kernel Mailing List
David Rees wrote:
> On Tue, Mar 24, 2009 at 12:32 AM, Jesper Krogh <jesper@krogh.cc> wrote:
>> David Rees wrote:
>> The 480 secondes is not the "wait time" but the time gone before the
>> message is printed. It the kernel-default it was earlier 120 seconds but
>> thats changed by Ingo Molnar back in september. I do get a lot of less
>> noise but it really doesn't tell anything about the nature of the problem.
>>
>> The systes spec:
>> 32GB of memory. The disks are a Nexsan SataBeast with 42 SATA drives in
>> Raid10 connected using 4Gbit fibre-channel. I'll let it up to you to decide
>> if thats fast or slow?
>
> The drives should be fast enough to saturate 4Gbit FC in streaming
> writes. How fast is the array in practice?
Thats allways a good question.. This is by far not being the only user
of the array at the time of testing.. (there are 4 FC-channel connected
to a switch). Creating a fresh slice.. and just dd'ing onto it from
/dev/zero gives:
jk@hest:~$ sudo dd if=/dev/zero of=/dev/sdh bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 78.0557 s, 134 MB/s
jk@hest:~$ sudo dd if=/dev/zero of=/dev/sdh bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 8.11019 s, 129 MB/s
Watching using dstat while dd'ing it peaks at 220M/s
If I watch numbers on "dstat" output in production. It gets at peak
around the same(130MB/s) but average is in the 90-100 MB/s range.
It has 2GB of battery backed cache. I'm fairly sure that when it was new
(and I only had connected one host) I could get it up at around 350MB/s.
>> The strange thing is actually that the above process (updatedb.mlocate) is
>> writing to / which is a device without any activity at all. All activity is
>> on the Fibre Channel device above, but process writing outsid that seems to
>> be effected as well.
>
> Ah. Sounds like your setup would benefit immensely from the per-bdi
> patches from Jens Axobe. I'm sure he would appreciate some feedback
> from users like you on them.
>
>>> What's your vm.dirty_background_ratio and
>>>
>>> vm.dirty_ratio set to?
>> 2.6.29-rc8 defaults:
>> jk@hest:/proc/sys/vm$ cat dirty_background_ratio
>> 5
>> jk@hest:/proc/sys/vm$ cat dirty_ratio
>> 10
>
> On a 32GB system that's 1.6GB of dirty data, but your array should be
> able to write that out fairly quickly (in a couple seconds) as long as
> it's not too random. If it's spread all over the disk, write
> throughput will drop significantly - how fast is data being written to
> disk when your system suffers from large write latency?
Thats another thing. I havent been debugging while hitting it (yet) but
if I go ind and do a sync on the system manually. Then it doesn't get
above 50MB/s in writeout (measured using dstat). But even that doesn't
sum up to 8 minutes .. 1.6GB at 50MB/s ..=> 32 s.
--
Jesper
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 17:29 ` Linus Torvalds
@ 2009-03-25 17:57 ` Alan Cox
2009-03-25 18:09 ` David Rees
2009-03-25 18:58 ` Theodore Tso
2 siblings, 0 replies; 419+ messages in thread
From: Alan Cox @ 2009-03-25 17:57 UTC (permalink / raw)
To: Linus Torvalds
Cc: Theodore Tso, Jan Kara, Andrew Morton, Ingo Molnar,
Arjan van de Ven, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linux Kernel Mailing List
> The fsync() problem is really annoying, but what is doubly annoying is
> that sometimes one process doing fsync() (or sync) seems to cause other
> processes to hickup too.
Bug #5942 (interaction with anticipatory io scheduler)
Bug #9546 (with reproducer & logs)
Bug #9911 including a rather natty tester (albeit in java)
Bug #7372 (some info and figures on certain revs it seemed to get worse)
Bug #12309 (more info, including kjournald hack fix using ioprio)
General consensus seems to be 2.6.18 is where the manure intersected with
the air impeller
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 0:05 ` Arjan van de Ven
@ 2009-03-25 17:59 ` David Rees
2009-03-25 18:40 ` Stephen Clark
1 sibling, 0 replies; 419+ messages in thread
From: David Rees @ 2009-03-25 17:59 UTC (permalink / raw)
To: Arjan van de Ven
Cc: Jesse Barnes, Theodore Tso, Ingo Molnar, Alan Cox, Andrew Morton,
Peter Zijlstra, Nick Piggin, Jens Axboe, Jesper Krogh,
Linus Torvalds, Linux Kernel Mailing List
On Tue, Mar 24, 2009 at 5:05 PM, Arjan van de Ven <arjan@infradead.org> wrote:
> On Tue, 24 Mar 2009 16:03:53 -0700
> Jesse Barnes <jbarnes@virtuousgeek.org> wrote:
>> I remember early in the 2.6.x days there was a lot of focus on making
>> interactive performance good, and for a long time it was. But this
>> I/O problem has been around for a *long* time now... What happened?
>> Do not many people run into this daily? Do all the filesystem
>> hackers run with special mount options to mitigate the problem?
>
> the people that care use my kernel patch on ext3 ;-)
> (or the userland equivalent tweak in /etc/rc.local)
There's a couple of comments in bug 12309 [1] which confirm that
increasing the priority of kjournald reduces latency significantly
since I posted your tweak there yesterday. I hope to do some testing
today on my systems to see if it helps on them, too.
-Dave
[1] http://bugzilla.kernel.org/show_bug.cgi?id=12309
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 17:29 ` Linus Torvalds
2009-03-25 17:57 ` Alan Cox
@ 2009-03-25 18:09 ` David Rees
2009-03-25 18:21 ` Linus Torvalds
2009-03-25 18:58 ` Theodore Tso
2 siblings, 1 reply; 419+ messages in thread
From: David Rees @ 2009-03-25 18:09 UTC (permalink / raw)
To: Linus Torvalds
Cc: Theodore Tso, Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox,
Arjan van de Ven, Peter Zijlstra, Nick Piggin, Jens Axboe,
Jesper Krogh, Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 10:29 AM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
> On Wed, 25 Mar 2009, Theodore Tso wrote:
>> I still think the fsync() problem is the much bigger deal, and solving
>> the contention problem isn't going to solve the fsync() latency problem
>> with ext3 data=ordered mode.
>
> The fsync() problem is really annoying, but what is doubly annoying is
> that sometimes one process doing fsync() (or sync) seems to cause other
> processes to hickup too.
>
> Now, I personally solved that problem by moving to (good) SSD's on my
> desktop, and I think that's indeed the long-term solution. But it would be
> good to try to figure out a solution in the short term for people who
> don't have new hardware thrown at them from random companies too.
Throwing SSDs at it only increases the limit before which it becomes
an issue. They hide the underlying issue and are only a workaround.
Create enough dirty data and you'll get the same latencies, it's just
that that limit is now a lot higher. Your Intel SSD will write
streaming data 2-4 times faster than your typical disk - and can be an
order of magnitude faster when it comes to small, random writes.
> I suspect it's a combination of filesystem transaction locking, together
> with the VM wanting to write out some unrelated blocks or inodes due to
> the system just being close to the dirty limits. Which is why the
> system-wide hickups then happen especially when writing big files.
>
> The VM _tries_ to do writes in the background, but if the writepage() path
> hits a filesystem-level blocking lock, that background write suddenly
> becomes largely synchronous.
>
> I suspect there is also some possibility of confusion with inter-file
> (false) metadata dependencies. If a filesystem were to think that the file
> size is metadata that should be journaled (in a single journal), and the
> journaling code then decides that it needs to do those meta-data updates
> in the correct order (ie the big file write _before_ the file write that
> wants to be fsync'ed), then the fsync() will be delayed by a totally
> irrelevant large file having to have its data written out (due to
> data=ordered or whatever).
It certainly "feels" like that is the case from the workloads I have
that generate high latencies.
-Dave
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 17:42 ` Jesper Krogh
@ 2009-03-25 18:16 ` David Rees
2009-03-25 18:46 ` Jesper Krogh
0 siblings, 1 reply; 419+ messages in thread
From: David Rees @ 2009-03-25 18:16 UTC (permalink / raw)
To: Jesper Krogh; +Cc: Linus Torvalds, Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 10:42 AM, Jesper Krogh <jesper@krogh.cc> wrote:
> David Rees wrote:
>> On Tue, Mar 24, 2009 at 12:32 AM, Jesper Krogh <jesper@krogh.cc> wrote:
>>> David Rees wrote:
>>> The 480 secondes is not the "wait time" but the time gone before the
>>> message is printed. It the kernel-default it was earlier 120 seconds but
>>> thats changed by Ingo Molnar back in september. I do get a lot of less
>>> noise but it really doesn't tell anything about the nature of the
>>> problem.
>>>
>>> The systes spec:
>>> 32GB of memory. The disks are a Nexsan SataBeast with 42 SATA drives in
>>> Raid10 connected using 4Gbit fibre-channel. I'll let it up to you to
>>> decide
>>> if thats fast or slow?
>>
>> The drives should be fast enough to saturate 4Gbit FC in streaming
>> writes. How fast is the array in practice?
>
> Thats allways a good question.. This is by far not being the only user
> of the array at the time of testing.. (there are 4 FC-channel connected to a
> switch). Creating a fresh slice.. and just dd'ing onto it from /dev/zero
> gives:
> jk@hest:~$ sudo dd if=/dev/zero of=/dev/sdh bs=1M count=10000
> 10000+0 records in
> 10000+0 records out
> 10485760000 bytes (10 GB) copied, 78.0557 s, 134 MB/s
> jk@hest:~$ sudo dd if=/dev/zero of=/dev/sdh bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 8.11019 s, 129 MB/s
>
> Watching using dstat while dd'ing it peaks at 220M/s
Hmm, not as fast as I expected.
> It has 2GB of battery backed cache. I'm fairly sure that when it was new
> (and I only had connected one host) I could get it up at around 350MB/s.
With 2GB of BBC, I'm surprised you are seeing as much latency as you
are. It should be able to suck down writes as fast as you can throw
at it. Is the array configured in writeback mode?
>> On a 32GB system that's 1.6GB of dirty data, but your array should be
>> able to write that out fairly quickly (in a couple seconds) as long as
>> it's not too random. If it's spread all over the disk, write
>> throughput will drop significantly - how fast is data being written to
>> disk when your system suffers from large write latency?
>
> Thats another thing. I havent been debugging while hitting it (yet) but if I
> go ind and do a sync on the system manually. Then it doesn't get above
> 50MB/s in writeout (measured using dstat). But even that doesn't sum up to 8
> minutes .. 1.6GB at 50MB/s ..=> 32 s.
Have you also tried increasing the IO priority of the kjournald
processes as a workaround as Arjan van de Ven suggests?
You must have a significant amount of activity going to that FC array
from other clients - it certainly doesn't seem to be performing as
well as it could/should be.
-Dave
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 18:09 ` David Rees
@ 2009-03-25 18:21 ` Linus Torvalds
2009-03-25 18:26 ` Linus Torvalds
0 siblings, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-25 18:21 UTC (permalink / raw)
To: David Rees
Cc: Theodore Tso, Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox,
Arjan van de Ven, Peter Zijlstra, Nick Piggin, Jens Axboe,
Jesper Krogh, Linux Kernel Mailing List
On Wed, 25 Mar 2009, David Rees wrote:
>
> Your Intel SSD will write streaming data 2-4 times faster than your
> typical disk
Don't even bother with streaming data. The problem is _never_ streaming
data.
Even a suck-ass laptop drive can write streaming data fast enough that
people don't care. The problem is invariably that writes from different
sources (much of it being metadata) interact and cause seeking.
> and can be an order of magnitude faster when it comes to small, random
> writes.
Umm. More like two orders of magnitude or more.
Random writes on a disk (even a fast one) tends to be in the hundreds of
kilobytes per second. Have you worked with an Intel SSD? It does tens of
MB/s on pure random writes.
The problem really is gone with an SSD.
And please realize that the problem for me was never 30-second stalls. For
me, a 3-second stall is unacceptable. It's just very annoying.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 18:21 ` Linus Torvalds
@ 2009-03-25 18:26 ` Linus Torvalds
2009-03-25 18:48 ` Ric Wheeler
2009-03-25 18:49 ` Alan Cox
0 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-25 18:26 UTC (permalink / raw)
To: David Rees
Cc: Theodore Tso, Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox,
Arjan van de Ven, Peter Zijlstra, Nick Piggin, Jens Axboe,
Jesper Krogh, Linux Kernel Mailing List
On Wed, 25 Mar 2009, Linus Torvalds wrote:
>
> Even a suck-ass laptop drive can write streaming data fast enough that
> people don't care. The problem is invariably that writes from different
> sources (much of it being metadata) interact and cause seeking.
Actually, not just writes.
The IO priority thing is almost certainly that _reads_ (which get higher
priority by default due to being synchronous) get interspersed with the
writes, and then even if you _could_ be having streaming writes, what you
actually end up with is lots of seeking.
Again, good SSD's don't care. Disks do. It doesn't matter if you have a FC
disk array that can eat 300MB/s when streaming - once you start seeking,
that 300MB/s goes down like a rock. Battery-protected write caches will
help - but not a whole lot when streaming more data than they have RAM.
Basic queuing theory.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 19:00 ` David Rees
2009-03-25 17:42 ` Jesper Krogh
@ 2009-03-25 18:30 ` Theodore Tso
2009-03-25 18:40 ` Linus Torvalds
1 sibling, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-03-25 18:30 UTC (permalink / raw)
To: David Rees; +Cc: Jesper Krogh, Linus Torvalds, Linux Kernel Mailing List
On Tue, Mar 24, 2009 at 12:00:41PM -0700, David Rees wrote:
> >>> Consensus seems to be something with large memory machines, lots of dirty
> >>> pages and a long writeout time due to ext3.
> >>
> >> All filesystems seem to suffer from this issue to some degree. I
> >> posted to the list earlier trying to see if there was anything that
> >> could be done to help my specific case. I've got a system where if
> >> someone starts writing out a large file, it kills client NFS writes.
> >> Makes the system unusable:
> >> http://marc.info/?l=linux-kernel&m=123732127919368&w=2
> >
> > Yes, I've hit 120s+ penalties just by saving a file in vim.
>
> Yeah, your disks aren't keeping up and/or data isn't being written out
> efficiently.
Agreed; we probably will need to get some blktrace outputs to see what
is going on.
> >> Only workaround I've found is to reduce dirty_background_ratio and
> >> dirty_ratio to tiny levels. Or throw good SSDs and/or a fast RAID
> >> array at it so that large writes complete faster. Have you tried the
> >> new vm_dirty_bytes in 2.6.29?
> >
> > No.. What would you suggest to be a reasonable setting for that?
>
> Look at whatever is there by default and try cutting them in half to start.
I'm beginning to think that using a "ratio" may be the wrong way to
go. We probably need to add an optional dirty_max_megabytes field
where we start pushing dirty blocks out when the number of dirty
blocks exceeds either the dirty_ratio or the dirty_max_megabytes,
which ever comes first. The problem is that 5% might make sense for a
small machine with only 1G of memory, but it might not make so much
sense if you have 32G of memory.
But the other problem is whether we are issuing the writes in an
efficient way, and that means we need to see what is going on at the
blktrace level as a starting point, and maybe we'll need some
custom-designed trace outputs to see what is going on at the
inode/logical block level, not just at the physical block level.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 18:30 ` Theodore Tso
@ 2009-03-25 18:40 ` Linus Torvalds
2009-03-25 22:05 ` Theodore Tso
0 siblings, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-25 18:40 UTC (permalink / raw)
To: Theodore Tso; +Cc: David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed, 25 Mar 2009, Theodore Tso wrote:
>
> I'm beginning to think that using a "ratio" may be the wrong way to
> go. We probably need to add an optional dirty_max_megabytes field
> where we start pushing dirty blocks out when the number of dirty
> blocks exceeds either the dirty_ratio or the dirty_max_megabytes,
> which ever comes first.
We have that. Except it's called "dirty_bytes" and
"dirty_background_bytes", and it defaults to zero (off).
The problem being that unlike the ratio, there's no sane default value
that you can at least argue is not _entirely_ pointless.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 0:05 ` Arjan van de Ven
2009-03-25 17:59 ` David Rees
@ 2009-03-25 18:40 ` Stephen Clark
2009-03-26 23:53 ` Mark Lord
1 sibling, 1 reply; 419+ messages in thread
From: Stephen Clark @ 2009-03-25 18:40 UTC (permalink / raw)
To: Arjan van de Ven
Cc: Jesse Barnes, Theodore Tso, Ingo Molnar, Alan Cox, Andrew Morton,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linus Torvalds, Linux Kernel Mailing List
Arjan van de Ven wrote:
> On Tue, 24 Mar 2009 16:03:53 -0700
> Jesse Barnes <jbarnes@virtuousgeek.org> wrote:
>
>> I remember early in the 2.6.x days there was a lot of focus on making
>> interactive performance good, and for a long time it was. But this
>> I/O problem has been around for a *long* time now... What happened?
>> Do not many people run into this daily? Do all the filesystem
>> hackers run with special mount options to mitigate the problem?
>>
>
> the people that care use my kernel patch on ext3 ;-)
> (or the userland equivalent tweak in /etc/rc.local)
>
>
>
Ok, I bite what is the userland tweak?
--
"They that give up essential liberty to obtain temporary safety,
deserve neither liberty nor safety." (Ben Franklin)
"The course of history shows that as a government grows, liberty
decreases." (Thomas Jefferson)
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 18:16 ` David Rees
@ 2009-03-25 18:46 ` Jesper Krogh
0 siblings, 0 replies; 419+ messages in thread
From: Jesper Krogh @ 2009-03-25 18:46 UTC (permalink / raw)
To: David Rees; +Cc: Linus Torvalds, Linux Kernel Mailing List
David Rees wrote:
>>> writes. How fast is the array in practice?
>> Thats allways a good question.. This is by far not being the only user
>> of the array at the time of testing.. (there are 4 FC-channel connected to a
>> switch). Creating a fresh slice.. and just dd'ing onto it from /dev/zero
>> gives:
>> jk@hest:~$ sudo dd if=/dev/zero of=/dev/sdh bs=1M count=10000
>> 10000+0 records in
>> 10000+0 records out
>> 10485760000 bytes (10 GB) copied, 78.0557 s, 134 MB/s
>> jk@hest:~$ sudo dd if=/dev/zero of=/dev/sdh bs=1M count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes (1.0 GB) copied, 8.11019 s, 129 MB/s
>>
>> Watching using dstat while dd'ing it peaks at 220M/s
>
> Hmm, not as fast as I expected.
Me neither, but I always get disappointed.
>> It has 2GB of battery backed cache. I'm fairly sure that when it was new
>> (and I only had connected one host) I could get it up at around 350MB/s.
>
> With 2GB of BBC, I'm surprised you are seeing as much latency as you
> are. It should be able to suck down writes as fast as you can throw
> at it. Is the array configured in writeback mode?
Yes, but I triple checked.. the memory upgrade hadn't been installed, so
its actually only 512MB.
>
>>> On a 32GB system that's 1.6GB of dirty data, but your array should be
>>> able to write that out fairly quickly (in a couple seconds) as long as
>>> it's not too random. If it's spread all over the disk, write
>>> throughput will drop significantly - how fast is data being written to
>>> disk when your system suffers from large write latency?
>> Thats another thing. I havent been debugging while hitting it (yet) but if I
>> go ind and do a sync on the system manually. Then it doesn't get above
>> 50MB/s in writeout (measured using dstat). But even that doesn't sum up to 8
>> minutes .. 1.6GB at 50MB/s ..=> 32 s.
>
> Have you also tried increasing the IO priority of the kjournald
> processes as a workaround as Arjan van de Ven suggests?
No. I'll try to slip that one in.
--
Jesper
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 18:26 ` Linus Torvalds
@ 2009-03-25 18:48 ` Ric Wheeler
2009-03-25 18:49 ` Alan Cox
1 sibling, 0 replies; 419+ messages in thread
From: Ric Wheeler @ 2009-03-25 18:48 UTC (permalink / raw)
To: Linus Torvalds
Cc: David Rees, Theodore Tso, Jan Kara, Andrew Morton, Ingo Molnar,
Alan Cox, Arjan van de Ven, Peter Zijlstra, Nick Piggin,
Jens Axboe, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
> On Wed, 25 Mar 2009, Linus Torvalds wrote:
>
>> Even a suck-ass laptop drive can write streaming data fast enough that
>> people don't care. The problem is invariably that writes from different
>> sources (much of it being metadata) interact and cause seeking.
>>
>
> Actually, not just writes.
>
> The IO priority thing is almost certainly that _reads_ (which get higher
> priority by default due to being synchronous) get interspersed with the
> writes, and then even if you _could_ be having streaming writes, what you
> actually end up with is lots of seeking.
>
> Again, good SSD's don't care. Disks do. It doesn't matter if you have a FC
> disk array that can eat 300MB/s when streaming - once you start seeking,
> that 300MB/s goes down like a rock. Battery-protected write caches will
> help - but not a whole lot when streaming more data than they have RAM.
> Basic queuing theory.
>
> Linus
>
This is actually not really true - random writes to an enterprise disk
array will make your Intel SSD look slow. Effectively, they are
extremely large, battery backed banks of DRAM with lots of fibre channel
ports. Some of the bigger ones can have several hundred GB of DRAM and
dozens of fibre channel ports to feed them.
Of course, if your random writes exceed the cache capacity and you fall
back to their internal disks (SSD or traditional), your random write
speed will drop.
Ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 18:26 ` Linus Torvalds
2009-03-25 18:48 ` Ric Wheeler
@ 2009-03-25 18:49 ` Alan Cox
2009-03-25 18:55 ` Ric Wheeler
1 sibling, 1 reply; 419+ messages in thread
From: Alan Cox @ 2009-03-25 18:49 UTC (permalink / raw)
To: Linus Torvalds
Cc: David Rees, Theodore Tso, Jan Kara, Andrew Morton, Ingo Molnar,
Arjan van de Ven, Peter Zijlstra, Nick Piggin, Jens Axboe,
Jesper Krogh, Linux Kernel Mailing List
> Again, good SSD's don't care. Disks do. It doesn't matter if you have a FC
> disk array that can eat 300MB/s when streaming - once you start seeking,
> that 300MB/s goes down like a rock. Battery-protected write caches will
> help - but not a whole lot when streaming more data than they have RAM.
> Basic queuing theory.
Subtly more complex than that. If your mashed up I/O streams fit into the
2GB or so of cache (minus one stream to disk) you win. You also win
because you take a lot of fragmented OS I/O and turn it into bigger
chunks of writing better scheduled. The latter win arguably shouldn't
happen but it does occur (I guess in part that says we suck) and it
occurs big time when you've got multiple accessors to a shared storage
system (where the host OS's can't help)
Alan
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 18:49 ` Alan Cox
@ 2009-03-25 18:55 ` Ric Wheeler
0 siblings, 0 replies; 419+ messages in thread
From: Ric Wheeler @ 2009-03-25 18:55 UTC (permalink / raw)
To: Alan Cox
Cc: Linus Torvalds, David Rees, Theodore Tso, Jan Kara, Andrew Morton,
Ingo Molnar, Arjan van de Ven, Peter Zijlstra, Nick Piggin,
Jens Axboe, Jesper Krogh, Linux Kernel Mailing List
Alan Cox wrote:
>> Again, good SSD's don't care. Disks do. It doesn't matter if you have a FC
>> disk array that can eat 300MB/s when streaming - once you start seeking,
>> that 300MB/s goes down like a rock. Battery-protected write caches will
>> help - but not a whole lot when streaming more data than they have RAM.
>> Basic queuing theory.
>>
>
> Subtly more complex than that. If your mashed up I/O streams fit into the
> 2GB or so of cache (minus one stream to disk) you win. You also win
> because you take a lot of fragmented OS I/O and turn it into bigger
> chunks of writing better scheduled. The latter win arguably shouldn't
> happen but it does occur (I guess in part that says we suck) and it
> occurs big time when you've got multiple accessors to a shared storage
> system (where the host OS's can't help)
>
> Alan
>
The other thing that can impact random writes on arrays is their
internal "track" size - if the random write is of a partial track, it
forces a read-modify-write with a back end disk read. Some arrays have
large internal tracks, others have smaller ones.
Again, not unlike what you see with some SSD's and their erase block
size - give them even multiples of that and they are quite happy.
Ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 17:29 ` Linus Torvalds
2009-03-25 17:57 ` Alan Cox
2009-03-25 18:09 ` David Rees
@ 2009-03-25 18:58 ` Theodore Tso
2009-03-25 19:48 ` Christoph Hellwig
2009-03-25 20:45 ` Linus Torvalds
2 siblings, 2 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-25 18:58 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 10:29:48AM -0700, Linus Torvalds wrote:
> I suspect there is also some possibility of confusion with inter-file
> (false) metadata dependencies. If a filesystem were to think that the file
> size is metadata that should be journaled (in a single journal), and the
> journaling code then decides that it needs to do those meta-data updates
> in the correct order (ie the big file write _before_ the file write that
> wants to be fsync'ed), then the fsync() will be delayed by a totally
> irrelevant large file having to have its data written out (due to
> data=ordered or whatever).
It's not just the file size; it's the block allocation decisions.
Ext3 doesn't have delayed allocation, so as soon as you issue the
write, we have to allocate the block, which means grabbing blocks and
making changes to the block bitmap, and then updating the inode with
those block allocation decisions. It's a lot more than just i_size.
And the problem is that if we do this for the big file write, and the
small file write happens to also touch the same inode table block
and/or block allocation bitmap, when we fsync() the small file, when
we end up pushing out the metadata updates associated with the big
file write, and so thus we need to flush out the data blocks
associated with the big file write as well.
Now, there are three ways of solving this problem. One is to use
delayed allocation, where we don't make the block allocation decisions
until the very last minute. This is what ext4 and XFS does. The
problem with this is that when we have unrelated filesystem operations
that end up causing zero length files before the file write (i.e.,
replace-via-truncate, where the application does open/truncate/write/
close) or the after the file write (i.e., replace-via-rename, where
the application does open/write/close/rename) and the application
omits the fsync(). So with ext4 we has workarounds that start pushing
out the data blocks in the for replace-via-rename and
replace-via-truncate cases, while XFS will do an implied fsync for
replace-via-truncate only, and btrfs will do an implied fsync for
replace-via-rename only.
The second solution is we could add a huge amount of machinery to try
track these logical dependencies, and then be able to "back out" the
changes to the inode table or block allocation bitmap for the big file
write when we want to fsync out the small file. This is roughly what
the BSD Soft Updates mechanisms does, and it works, but at the cost of
a *huge* amount of complexity. The amount of accounting data you have
to track so that you can partially back out various filesystem
operations, and then the state tables that make use of this accounting
data is not trivial. One of the downsides of this mechanism is that
it makes it extremely difficult to add new features/functionality such
as extended attributes or ACL's, since very few people understand the
complexities needed to support it. As a result Linux had acl and
xattr support long before Kirk McKusick got around to adding those
features in UFS2.
The third potential solution we can try doing is to make some tuning
adjustments to the VM so that we start pushing out these data blocks
much more aggressively out to the disk. If we assume that many
applications aren't going to be using fsync, and we need to worry
about all sorts of implied dependencies where a small file gets pushed
out to disk, but a large file does not, you can have endless amounts
of fun in terms of "application level file corruption", which is
simply caused by the fact that a small file has been pushed out to
disk, and a large file hasn't been pushed out to disk yet. If it's
going to be considered fair game that application programmers aren't
going to be required to use fsync() when they need to depend on
something being on stable storage after a crash, then we need to tune
the VM to much more aggressively clean dirty pages. Even if we remove
the false dependencies at the filesystem level (i.e., fsck-detectable
consistency problems), there is no way for the filesystem to be able
to guess about implied dependencies between different files at the
application level.
Traditionally, the way applications told us about such dependencies
was fsync(). But if application programmers are demanding that
fsync() is no longer required for correct operation after a filesystem
crash, all we can do is push things out to disk much more
aggressively.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 9:39 ` Jens Axboe
@ 2009-03-25 19:32 ` Jeff Garzik
2009-03-25 19:43 ` Christoph Hellwig
2009-03-25 19:43 ` Jens Axboe
0 siblings, 2 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-03-25 19:32 UTC (permalink / raw)
To: Jens Axboe
Cc: Linus Torvalds, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Jens Axboe wrote:
> On Tue, Mar 24 2009, Jeff Garzik wrote:
>> Linus Torvalds wrote:
>>> But I really don't understand filesystem people who think that "fsck"
>>> is the important part, regardless of whether the data is valid or not.
>>> That's just stupid and _obviously_ bogus.
>> I think I can understand that point of view, at least:
>>
>> More customers complain about hours-long fsck times than they do about
>> silent data corruption of non-fsync'd files.
>>
>>
>>> The point is, if you write your metadata earlier (say, every 5 sec) and
>>> the real data later (say, every 30 sec), you're actually MORE LIKELY to
>>> see corrupt files than if you try to write them together.
>>>
>>> And if you write your data _first_, you're never going to see
>>> corruption at all.
>> Amen.
>>
>> And, personal filesystem pet peeve: please encourage proper FLUSH CACHE
>> use to give users the data guarantees they deserve. Linux's sync(2) and
>> fsync(2) (and fdatasync, etc.) should poke the block layer to guarantee
>> a media write.
>
> fsync already does that, at least if you have barriers enabled on your
> drive.
Erm, no, you don't enable barriers on your drive, they are not a
hardware feature. You enable barriers via your filesystem.
Stating "fsync already does that" borders on false, because that assumes
(a) the user has a fs that supports barriers
(b) the user is actually aware of a 'barriers' mount option and what it
means
(c) the user has turned on an option normally defaulted to off.
Or in other words, it pretty much never happens.
Furthermore, a blatantly obvious place to flush data to media --
fsync(2), fdatasync(2) and sync_file_range(2) -- should cause the block
layer to issue a FLUSH CACHE for __any__ filesystem. But that doesn't
happen either.
So, no, for 95% of Linux users, fsync does _not_ already do that. If
you are lucky enough to use XFS or ext4, you're covered. That's it.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 19:32 ` Jeff Garzik
@ 2009-03-25 19:43 ` Christoph Hellwig
2009-03-25 19:43 ` Jens Axboe
1 sibling, 0 replies; 419+ messages in thread
From: Christoph Hellwig @ 2009-03-25 19:43 UTC (permalink / raw)
To: Jeff Garzik
Cc: Jens Axboe, Linus Torvalds, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 03:32:13PM -0400, Jeff Garzik wrote:
> So, no, for 95% of Linux users, fsync does _not_ already do that. If
> you are lucky enough to use XFS or ext4, you're covered. That's it.
reiserfs also does the correct thing. As does ext3 on suse kernels.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 19:32 ` Jeff Garzik
2009-03-25 19:43 ` Christoph Hellwig
@ 2009-03-25 19:43 ` Jens Axboe
2009-03-25 19:49 ` Ric Wheeler
` (2 more replies)
1 sibling, 3 replies; 419+ messages in thread
From: Jens Axboe @ 2009-03-25 19:43 UTC (permalink / raw)
To: Jeff Garzik
Cc: Linus Torvalds, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed, Mar 25 2009, Jeff Garzik wrote:
> Jens Axboe wrote:
>> On Tue, Mar 24 2009, Jeff Garzik wrote:
>>> Linus Torvalds wrote:
>>>> But I really don't understand filesystem people who think that
>>>> "fsck" is the important part, regardless of whether the data is
>>>> valid or not. That's just stupid and _obviously_ bogus.
>>> I think I can understand that point of view, at least:
>>>
>>> More customers complain about hours-long fsck times than they do
>>> about silent data corruption of non-fsync'd files.
>>>
>>>
>>>> The point is, if you write your metadata earlier (say, every 5 sec)
>>>> and the real data later (say, every 30 sec), you're actually MORE
>>>> LIKELY to see corrupt files than if you try to write them together.
>>>>
>>>> And if you write your data _first_, you're never going to see
>>>> corruption at all.
>>> Amen.
>>>
>>> And, personal filesystem pet peeve: please encourage proper FLUSH
>>> CACHE use to give users the data guarantees they deserve. Linux's
>>> sync(2) and fsync(2) (and fdatasync, etc.) should poke the block
>>> layer to guarantee a media write.
>>
>> fsync already does that, at least if you have barriers enabled on your
>> drive.
>
> Erm, no, you don't enable barriers on your drive, they are not a
> hardware feature. You enable barriers via your filesystem.
Thanks for the lesson Jeff, I'm obviously not aware how that stuff
works...
> Stating "fsync already does that" borders on false, because that assumes
> (a) the user has a fs that supports barriers
> (b) the user is actually aware of a 'barriers' mount option and what it
> means
> (c) the user has turned on an option normally defaulted to off.
>
> Or in other words, it pretty much never happens.
That is true, except if you use xfs/ext4. And this discussion is fine,
as was the one a few months back that got ext4 to enable barriers by
default. If I had submitted patches to do that back in 2001/2 when the
barrier stuff was written, I would have been shot for introducing such a
slow down. After people found out that it just wasn't something silly,
then you have a way to enable it.
I'd still wager that most people would rather have a 'good enough
fsync' on their desktops than incur the penalty of barriers or write
through caching. I know I do.
> Furthermore, a blatantly obvious place to flush data to media --
> fsync(2), fdatasync(2) and sync_file_range(2) -- should cause the block
> layer to issue a FLUSH CACHE for __any__ filesystem. But that doesn't
> happen either.
>
> So, no, for 95% of Linux users, fsync does _not_ already do that. If
> you are lucky enough to use XFS or ext4, you're covered. That's it.
The point is that you need to expose this choice somewhere, and that
'somewhere' isn't manually editing fstab and enabling barriers or
fsync-for-real. And it should be easier.
Another problem is that FLUSH_CACHE sucks. Really. And not just on
ext3/ordered, generally. Write a 50 byte file, fsync, flush cache and
wit for the world to finish. Pretty hard to teach people to use a nicer
fdatasync(), when the majority of the cost now becomes flushing the
cache of that 1TB drive you happen to have 8 partitions on. Good luck
with that.
--
Jens Axboe
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 18:58 ` Theodore Tso
@ 2009-03-25 19:48 ` Christoph Hellwig
2009-03-25 21:50 ` Theodore Tso
2009-03-25 20:45 ` Linus Torvalds
1 sibling, 1 reply; 419+ messages in thread
From: Christoph Hellwig @ 2009-03-25 19:48 UTC (permalink / raw)
To: Theodore Tso, Linus Torvalds, Jan Kara, Andrew Morton,
Ingo Molnar, Alan Cox, Arjan van de Ven, Peter Zijlstra,
Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 02:58:24PM -0400, Theodore Tso wrote:
> omits the fsync(). So with ext4 we has workarounds that start pushing
> out the data blocks in the for replace-via-rename and
> replace-via-truncate cases, while XFS will do an implied fsync for
> replace-via-truncate only, and btrfs will do an implied fsync for
> replace-via-rename only.
The XFS one and the ext4 one that I saw only start an _asynchronous_
writeout. Which is not an implied fsync but snake oil to make the
most common complaints go away without providing hard guarantees.
IFF we want to go down this route we should better provide strong
guranteed semantics and document the propery. And of course implement
it consistently on all native filesystems.
> Traditionally, the way applications told us about such dependencies
> was fsync(). But if application programmers are demanding that
> fsync() is no longer required for correct operation after a filesystem
> crash, all we can do is push things out to disk much more
> aggressively.
Note that the rename for atomic commits trick originated in mail severs
which always did the proper fsync. When the word spread into the
desktop world it looks like this wisdom got lost.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 19:43 ` Jens Axboe
@ 2009-03-25 19:49 ` Ric Wheeler
2009-03-25 19:57 ` Jens Axboe
2009-03-25 20:16 ` Jeff Garzik
2009-03-25 20:25 ` Jeff Garzik
2009-03-31 20:49 ` Jeff Garzik
2 siblings, 2 replies; 419+ messages in thread
From: Ric Wheeler @ 2009-03-25 19:49 UTC (permalink / raw)
To: Jens Axboe
Cc: Jeff Garzik, Linus Torvalds, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Jens Axboe wrote:
> On Wed, Mar 25 2009, Jeff Garzik wrote:
>
>> Jens Axboe wrote:
>>
>>> On Tue, Mar 24 2009, Jeff Garzik wrote:
>>>
>>>> Linus Torvalds wrote:
>>>>
>>>>> But I really don't understand filesystem people who think that
>>>>> "fsck" is the important part, regardless of whether the data is
>>>>> valid or not. That's just stupid and _obviously_ bogus.
>>>>>
>>>> I think I can understand that point of view, at least:
>>>>
>>>> More customers complain about hours-long fsck times than they do
>>>> about silent data corruption of non-fsync'd files.
>>>>
>>>>
>>>>
>>>>> The point is, if you write your metadata earlier (say, every 5 sec)
>>>>> and the real data later (say, every 30 sec), you're actually MORE
>>>>> LIKELY to see corrupt files than if you try to write them together.
>>>>>
>>>>> And if you write your data _first_, you're never going to see
>>>>> corruption at all.
>>>>>
>>>> Amen.
>>>>
>>>> And, personal filesystem pet peeve: please encourage proper FLUSH
>>>> CACHE use to give users the data guarantees they deserve. Linux's
>>>> sync(2) and fsync(2) (and fdatasync, etc.) should poke the block
>>>> layer to guarantee a media write.
>>>>
>>> fsync already does that, at least if you have barriers enabled on your
>>> drive.
>>>
>> Erm, no, you don't enable barriers on your drive, they are not a
>> hardware feature. You enable barriers via your filesystem.
>>
>
> Thanks for the lesson Jeff, I'm obviously not aware how that stuff
> works...
>
>
>> Stating "fsync already does that" borders on false, because that assumes
>> (a) the user has a fs that supports barriers
>> (b) the user is actually aware of a 'barriers' mount option and what it
>> means
>> (c) the user has turned on an option normally defaulted to off.
>>
>> Or in other words, it pretty much never happens.
>>
>
> That is true, except if you use xfs/ext4. And this discussion is fine,
> as was the one a few months back that got ext4 to enable barriers by
> default. If I had submitted patches to do that back in 2001/2 when the
> barrier stuff was written, I would have been shot for introducing such a
> slow down. After people found out that it just wasn't something silly,
> then you have a way to enable it.
>
> I'd still wager that most people would rather have a 'good enough
> fsync' on their desktops than incur the penalty of barriers or write
> through caching. I know I do.
>
>
>> Furthermore, a blatantly obvious place to flush data to media --
>> fsync(2), fdatasync(2) and sync_file_range(2) -- should cause the block
>> layer to issue a FLUSH CACHE for __any__ filesystem. But that doesn't
>> happen either.
>>
>> So, no, for 95% of Linux users, fsync does _not_ already do that. If
>> you are lucky enough to use XFS or ext4, you're covered. That's it.
>>
>
> The point is that you need to expose this choice somewhere, and that
> 'somewhere' isn't manually editing fstab and enabling barriers or
> fsync-for-real. And it should be easier.
>
> Another problem is that FLUSH_CACHE sucks. Really. And not just on
> ext3/ordered, generally. Write a 50 byte file, fsync, flush cache and
> wit for the world to finish. Pretty hard to teach people to use a nicer
> fdatasync(), when the majority of the cost now becomes flushing the
> cache of that 1TB drive you happen to have 8 partitions on. Good luck
> with that.
>
>
And, as I am sure that you do know, to add insult to injury, FLUSH_CACHE
is per device (not file system).
When you issue an fsync() on a disk with multiple partitions, you will
flush the data for all of its partitions from the write cache....
ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 19:49 ` Ric Wheeler
@ 2009-03-25 19:57 ` Jens Axboe
2009-03-25 20:41 ` Hugh Dickins
2009-03-25 20:16 ` Jeff Garzik
1 sibling, 1 reply; 419+ messages in thread
From: Jens Axboe @ 2009-03-25 19:57 UTC (permalink / raw)
To: Ric Wheeler
Cc: Jeff Garzik, Linus Torvalds, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed, Mar 25 2009, Ric Wheeler wrote:
> Jens Axboe wrote:
>> On Wed, Mar 25 2009, Jeff Garzik wrote:
>>
>>> Jens Axboe wrote:
>>>
>>>> On Tue, Mar 24 2009, Jeff Garzik wrote:
>>>>
>>>>> Linus Torvalds wrote:
>>>>>
>>>>>> But I really don't understand filesystem people who think that
>>>>>> "fsck" is the important part, regardless of whether the data is
>>>>>> valid or not. That's just stupid and _obviously_ bogus.
>>>>>>
>>>>> I think I can understand that point of view, at least:
>>>>>
>>>>> More customers complain about hours-long fsck times than they do
>>>>> about silent data corruption of non-fsync'd files.
>>>>>
>>>>>
>>>>>
>>>>>> The point is, if you write your metadata earlier (say, every 5
>>>>>> sec) and the real data later (say, every 30 sec), you're
>>>>>> actually MORE LIKELY to see corrupt files than if you try to
>>>>>> write them together.
>>>>>>
>>>>>> And if you write your data _first_, you're never going to see
>>>>>> corruption at all.
>>>>>>
>>>>> Amen.
>>>>>
>>>>> And, personal filesystem pet peeve: please encourage proper
>>>>> FLUSH CACHE use to give users the data guarantees they deserve.
>>>>> Linux's sync(2) and fsync(2) (and fdatasync, etc.) should poke
>>>>> the block layer to guarantee a media write.
>>>>>
>>>> fsync already does that, at least if you have barriers enabled on your
>>>> drive.
>>>>
>>> Erm, no, you don't enable barriers on your drive, they are not a
>>> hardware feature. You enable barriers via your filesystem.
>>>
>>
>> Thanks for the lesson Jeff, I'm obviously not aware how that stuff
>> works...
>>
>>
>>> Stating "fsync already does that" borders on false, because that assumes
>>> (a) the user has a fs that supports barriers
>>> (b) the user is actually aware of a 'barriers' mount option and what
>>> it means
>>> (c) the user has turned on an option normally defaulted to off.
>>>
>>> Or in other words, it pretty much never happens.
>>>
>>
>> That is true, except if you use xfs/ext4. And this discussion is fine,
>> as was the one a few months back that got ext4 to enable barriers by
>> default. If I had submitted patches to do that back in 2001/2 when the
>> barrier stuff was written, I would have been shot for introducing such a
>> slow down. After people found out that it just wasn't something silly,
>> then you have a way to enable it.
>>
>> I'd still wager that most people would rather have a 'good enough
>> fsync' on their desktops than incur the penalty of barriers or write
>> through caching. I know I do.
>>
>>
>>> Furthermore, a blatantly obvious place to flush data to media --
>>> fsync(2), fdatasync(2) and sync_file_range(2) -- should cause the
>>> block layer to issue a FLUSH CACHE for __any__ filesystem. But that
>>> doesn't happen either.
>>>
>>> So, no, for 95% of Linux users, fsync does _not_ already do that. If
>>> you are lucky enough to use XFS or ext4, you're covered. That's it.
>>>
>>
>> The point is that you need to expose this choice somewhere, and that
>> 'somewhere' isn't manually editing fstab and enabling barriers or
>> fsync-for-real. And it should be easier.
>>
>> Another problem is that FLUSH_CACHE sucks. Really. And not just on
>> ext3/ordered, generally. Write a 50 byte file, fsync, flush cache and
>> wit for the world to finish. Pretty hard to teach people to use a nicer
>> fdatasync(), when the majority of the cost now becomes flushing the
>> cache of that 1TB drive you happen to have 8 partitions on. Good luck
>> with that.
>>
>>
> And, as I am sure that you do know, to add insult to injury, FLUSH_CACHE
> is per device (not file system).
>
> When you issue an fsync() on a disk with multiple partitions, you will
> flush the data for all of its partitions from the write cache....
Exactly, that's what my (vague) 8 partition reference was for :-)
A range flush would be so much more palatable.
--
Jens Axboe
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 19:49 ` Ric Wheeler
2009-03-25 19:57 ` Jens Axboe
@ 2009-03-25 20:16 ` Jeff Garzik
2009-03-25 20:25 ` Ric Wheeler
2009-03-25 21:27 ` Benny Halevy
1 sibling, 2 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-03-25 20:16 UTC (permalink / raw)
To: Ric Wheeler
Cc: Jens Axboe, Linus Torvalds, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Ric Wheeler wrote:> And, as I am sure that you do know, to add insult to
injury, FLUSH_CACHE
> is per device (not file system).
>
> When you issue an fsync() on a disk with multiple partitions, you will
> flush the data for all of its partitions from the write cache....
SCSI'S SYNCHRONIZE CACHE command already accepts an (LBA, length) pair.
We could make use of that.
And I bet we could convince T13 to add FLUSH CACHE RANGE, if we could
demonstrate clear benefit.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 20:16 ` Jeff Garzik
@ 2009-03-25 20:25 ` Ric Wheeler
2009-03-25 21:22 ` James Bottomley
2009-03-25 21:27 ` Benny Halevy
1 sibling, 1 reply; 419+ messages in thread
From: Ric Wheeler @ 2009-03-25 20:25 UTC (permalink / raw)
To: Jeff Garzik, James Bottomley
Cc: Ric Wheeler, Jens Axboe, Linus Torvalds, Theodore Tso,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Jeff Garzik wrote:
> Ric Wheeler wrote:> And, as I am sure that you do know, to add insult
> to injury, FLUSH_CACHE
>> is per device (not file system).
>>
>> When you issue an fsync() on a disk with multiple partitions, you
>> will flush the data for all of its partitions from the write cache....
>
> SCSI'S SYNCHRONIZE CACHE command already accepts an (LBA, length)
> pair. We could make use of that.
>
> And I bet we could convince T13 to add FLUSH CACHE RANGE, if we could
> demonstrate clear benefit.
>
> Jeff
How well supported is this in SCSI? Can we try it out with a commodity
SAS drive?
Ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 19:43 ` Jens Axboe
2009-03-25 19:49 ` Ric Wheeler
@ 2009-03-25 20:25 ` Jeff Garzik
2009-03-25 20:40 ` Linus Torvalds
2009-03-27 7:46 ` Jens Axboe
2009-03-31 20:49 ` Jeff Garzik
2 siblings, 2 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-03-25 20:25 UTC (permalink / raw)
Cc: Linus Torvalds, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Jens Axboe wrote:
> On Wed, Mar 25 2009, Jeff Garzik wrote:
>> Stating "fsync already does that" borders on false, because that assumes
>> (a) the user has a fs that supports barriers
>> (b) the user is actually aware of a 'barriers' mount option and what it
>> means
>> (c) the user has turned on an option normally defaulted to off.
>>
>> Or in other words, it pretty much never happens.
>
> That is true, except if you use xfs/ext4. And this discussion is fine,
> as was the one a few months back that got ext4 to enable barriers by
> default. If I had submitted patches to do that back in 2001/2 when the
> barrier stuff was written, I would have been shot for introducing such a
> slow down. After people found out that it just wasn't something silly,
> then you have a way to enable it.
>
> I'd still wager that most people would rather have a 'good enough
> fsync' on their desktops than incur the penalty of barriers or write
> through caching. I know I do.
That's a strawman argument: The choice is not between "good enough
fsync" and full use of barriers / write-through caching, at all.
It is clearly possible to implement an fsync(2) that causes FLUSH CACHE
to be issued, without adding full barrier support to a filesystem. It
is likely doable to avoid touching per-filesystem code at all, if we
issue the flush from a generic fsync(2) code path in the kernel.
Thus, you have a "third way": fsync(2) gives the guarantee it is
supposed to, but you do not take the full performance hit of
barriers-all-the-time.
Remember, fsync(2) means that the user _expects_ a performance hit.
And they took the extra step to call fsync(2) because they want a
guarantee, not a lie.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 20:25 ` Jeff Garzik
@ 2009-03-25 20:40 ` Linus Torvalds
2009-03-25 20:57 ` Ric Wheeler
` (2 more replies)
2009-03-27 7:46 ` Jens Axboe
1 sibling, 3 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-25 20:40 UTC (permalink / raw)
To: Jeff Garzik
Cc: Theodore Tso, Ingo Molnar, Alan Cox, Arjan van de Ven,
Andrew Morton, Peter Zijlstra, Nick Piggin, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Wed, 25 Mar 2009, Jeff Garzik wrote:
>
> It is clearly possible to implement an fsync(2) that causes FLUSH CACHE to be
> issued, without adding full barrier support to a filesystem. It is likely
> doable to avoid touching per-filesystem code at all, if we issue the flush
> from a generic fsync(2) code path in the kernel.
We could easily do that. It would even work for most cases. The
problematic ones are where filesystems do their own disk management, but I
guess those people can do their own fsync() management too.
Somebody send me the patch, we can try it out.
> Remember, fsync(2) means that the user _expects_ a performance hit.
Within reason, though.
OS X, for example, doesn't do the disk barrier. It requires you to do a
separate FULL_FSYNC (or something similar) ioctl to get that. Apparently
exactly because users don't expect quite _that_ big of a performance hit.
(Or maybe just because it was easier to do that way. Never attribute to
malice what can be sufficiently explained by stupidity).
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 19:57 ` Jens Axboe
@ 2009-03-25 20:41 ` Hugh Dickins
2009-03-26 8:57 ` Jens Axboe
0 siblings, 1 reply; 419+ messages in thread
From: Hugh Dickins @ 2009-03-25 20:41 UTC (permalink / raw)
To: Jens Axboe
Cc: Ric Wheeler, Jeff Garzik, Linus Torvalds, Theodore Tso,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, 25 Mar 2009, Jens Axboe wrote:
> On Wed, Mar 25 2009, Ric Wheeler wrote:
> > Jens Axboe wrote:
> >>
> >> Another problem is that FLUSH_CACHE sucks. Really. And not just on
> >> ext3/ordered, generally. Write a 50 byte file, fsync, flush cache and
> >> wit for the world to finish. Pretty hard to teach people to use a nicer
> >> fdatasync(), when the majority of the cost now becomes flushing the
> >> cache of that 1TB drive you happen to have 8 partitions on. Good luck
> >> with that.
> >>
> > And, as I am sure that you do know, to add insult to injury, FLUSH_CACHE
> > is per device (not file system).
> >
> > When you issue an fsync() on a disk with multiple partitions, you will
> > flush the data for all of its partitions from the write cache....
>
> Exactly, that's what my (vague) 8 partition reference was for :-)
> A range flush would be so much more palatable.
Tangential question, but am I right in thinking that BIO_RW_BARRIER
similarly bars across all partitions, whereas its WRITE_BARRIER and
DISCARD_BARRIER users would actually prefer it to apply to just one?
Hugh
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 18:58 ` Theodore Tso
2009-03-25 19:48 ` Christoph Hellwig
@ 2009-03-25 20:45 ` Linus Torvalds
2009-03-25 21:51 ` Theodore Tso
1 sibling, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-25 20:45 UTC (permalink / raw)
To: Theodore Tso
Cc: Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, 25 Mar 2009, Theodore Tso wrote:
>
> Now, there are three ways of solving this problem.
You seem to disregard the "write in the right order" approach. Or is that
your:
> The third potential solution we can try doing is to make some tuning
> adjustments to the VM so that we start pushing out these data blocks
> much more aggressively out to the disk.
Yes. but at least one problem is, as mentioned, that when the VM calls
writepage[s]() to start async writeback, many filesystems do seem to just
_block_ on it.
So the VM has a really hard time doing anything sanely early - the
filesystems seem to take a perverse pleasure in synchronizing things using
blocking semaphores.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 20:40 ` Linus Torvalds
@ 2009-03-25 20:57 ` Ric Wheeler
2009-03-25 23:02 ` Linus Torvalds
2009-03-25 21:33 ` Jeff Garzik
2009-03-27 7:57 ` Jens Axboe
2 siblings, 1 reply; 419+ messages in thread
From: Ric Wheeler @ 2009-03-25 20:57 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jeff Garzik, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
> On Wed, 25 Mar 2009, Jeff Garzik wrote:
>
>> It is clearly possible to implement an fsync(2) that causes FLUSH CACHE to be
>> issued, without adding full barrier support to a filesystem. It is likely
>> doable to avoid touching per-filesystem code at all, if we issue the flush
>> from a generic fsync(2) code path in the kernel.
>>
>
> We could easily do that. It would even work for most cases. The
> problematic ones are where filesystems do their own disk management, but I
> guess those people can do their own fsync() management too.
>
One concern with doing this above the file system is that you are not in
the context of a transaction so you have no clean promises about what is
on disk and persistent when. Flushing the cache is primitive at best,
but the way barriers work today is designed to give the transactions
some pretty critical ordering semantics for journalling file systems at
least.
I don't see how you could use this approach to make a really robust,
failure proof storage system, but it might appear to work most of the
time for most people :-)
ric
> Somebody send me the patch, we can try it out.
>
>
>> Remember, fsync(2) means that the user _expects_ a performance hit.
>>
>
> Within reason, though.
>
> OS X, for example, doesn't do the disk barrier. It requires you to do a
> separate FULL_FSYNC (or something similar) ioctl to get that. Apparently
> exactly because users don't expect quite _that_ big of a performance hit.
>
> (Or maybe just because it was easier to do that way. Never attribute to
> malice what can be sufficiently explained by stupidity).
>
> Linus
>
>
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 20:25 ` Ric Wheeler
@ 2009-03-25 21:22 ` James Bottomley
2009-03-26 8:59 ` Jens Axboe
0 siblings, 1 reply; 419+ messages in thread
From: James Bottomley @ 2009-03-25 21:22 UTC (permalink / raw)
To: Ric Wheeler
Cc: Jeff Garzik, Jens Axboe, Linus Torvalds, Theodore Tso,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, 2009-03-25 at 16:25 -0400, Ric Wheeler wrote:
> Jeff Garzik wrote:
> > Ric Wheeler wrote:> And, as I am sure that you do know, to add insult
> > to injury, FLUSH_CACHE
> >> is per device (not file system).
> >>
> >> When you issue an fsync() on a disk with multiple partitions, you
> >> will flush the data for all of its partitions from the write cache....
> >
> > SCSI'S SYNCHRONIZE CACHE command already accepts an (LBA, length)
> > pair. We could make use of that.
> >
> > And I bet we could convince T13 to add FLUSH CACHE RANGE, if we could
> > demonstrate clear benefit.
> >
> > Jeff
>
> How well supported is this in SCSI? Can we try it out with a commodity
> SAS drive?
What do you mean by well supported? The way the SCSI standard is
written, a device can do a complete cache flush when a range flush is
requested and still be fully standards compliant. There's no easy way
to tell if it does a complete cache flush every time other than by
taking the firmware apart (or asking the manufacturer).
James
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 20:16 ` Jeff Garzik
2009-03-25 20:25 ` Ric Wheeler
@ 2009-03-25 21:27 ` Benny Halevy
1 sibling, 0 replies; 419+ messages in thread
From: Benny Halevy @ 2009-03-25 21:27 UTC (permalink / raw)
To: Jeff Garzik
Cc: Ric Wheeler, Jens Axboe, Linus Torvalds, Theodore Tso,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Mar. 25, 2009, 22:16 +0200, Jeff Garzik <jeff@garzik.org> wrote:
> Ric Wheeler wrote:> And, as I am sure that you do know, to add insult to
> injury, FLUSH_CACHE
>> is per device (not file system).
>>
>> When you issue an fsync() on a disk with multiple partitions, you will
>> flush the data for all of its partitions from the write cache....
>
> SCSI'S SYNCHRONIZE CACHE command already accepts an (LBA, length) pair.
> We could make use of that.
>
> And I bet we could convince T13 to add FLUSH CACHE RANGE, if we could
> demonstrate clear benefit.
One more example of flexible, fine grain flush (though quite far out) are
T10 OSDs with which you can flush a byte range of a single object
(or collection, partition, or the whole device LUN)
Benny
>
> Jeff
>
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 20:40 ` Linus Torvalds
2009-03-25 20:57 ` Ric Wheeler
@ 2009-03-25 21:33 ` Jeff Garzik
2009-03-27 7:57 ` Jens Axboe
2 siblings, 0 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-03-25 21:33 UTC (permalink / raw)
To: Linus Torvalds
Cc: Theodore Tso, Ingo Molnar, Alan Cox, Arjan van de Ven,
Andrew Morton, Peter Zijlstra, Nick Piggin, David Rees,
Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
> OS X, for example, doesn't do the disk barrier. It requires you to do a
> separate FULL_FSYNC (or something similar) ioctl to get that. Apparently
> exactly because users don't expect quite _that_ big of a performance hit.
I can understand that, more from an admin standpoint than anything...
ATA disks' FLUSH CACHE is horribly coarse-grained, all-or-nothing.
SCSI's SYNCHRONIZE CACHE at least gives us an optional (LBA, length)
pair that can be used to avoid to flushing everything in the cache.
Microsoft has publicly proposed a WRITE BARRIER command for ATA, to try
and improve the situation:
http://www.t13.org/Documents/UploadedDocuments/docs2007/e07174r0-Write_Barrier_Command_Proposal.doc
but that isn't in the field yet (if ever?)
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 19:48 ` Christoph Hellwig
@ 2009-03-25 21:50 ` Theodore Tso
2009-03-26 2:10 ` Matthew Garrett
0 siblings, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-03-25 21:50 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Linus Torvalds, Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox,
Arjan van de Ven, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 03:48:51PM -0400, Christoph Hellwig wrote:
> On Wed, Mar 25, 2009 at 02:58:24PM -0400, Theodore Tso wrote:
> > omits the fsync(). So with ext4 we has workarounds that start pushing
> > out the data blocks in the for replace-via-rename and
> > replace-via-truncate cases, while XFS will do an implied fsync for
> > replace-via-truncate only, and btrfs will do an implied fsync for
> > replace-via-rename only.
>
> The XFS one and the ext4 one that I saw only start an _asynchronous_
> writeout. Which is not an implied fsync but snake oil to make the
> most common complaints go away without providing hard guarantees.
It actually does the right thing for ext4, because once we allocate
the blocks, the default data=ordered mode means that we flush the
datablocks before we execute the commit. Hence, in the case of
open/write/close/rename, the rename will trigger an async writeout,
but before the commit block is actually written, we'll have flushed
out the data blocks.
I was under the impression that XFS was doing a synchronous fsync
before allowing the close() return, but all it is triggering an async
writeout, then yes, your concern is correct. The bigger problem from
my perspective is that XFS is only doing this for the truncate case,
and (from what I've been told) not for the rename case. The truncate
is fundamentally racy and application writers that don't do this
definitely don't deserve our solicitude, IMHO. But people who do
open/write/close/rename, and omit the fsync before the rename, are at
least somewhat more deserving for some kind of workaround than the
idiots that do open/truncate/write/close.
> IFF we want to go down this route we should better provide strong
> guranteed semantics and document the propery. And of course implement
> it consistently on all native filesystems.
That's something we should talk about at LSF. I'm not all that eager
(or happy) about doing this, but I think that, given that the
application writers massively outnumber us, we are going to be bullied
into it.
> Note that the rename for atomic commits trick originated in mail severs
> which always did the proper fsync. When the word spread into the
> desktop world it looks like this wisdom got lost.
Yep, agreed.
To be fair, though, one problem which Matthew Garrett has pointed out
is that if lots of applications issue fsync(), it will have the
tendency to wake up the hard drive a lot, and do a real number on
power utilization. I believe the right solution for this is an
extension to laptop mode which synchronizes the filesystem at a clean
point, and then which suppresses fsync()'s until the hard drive wakes
up, at which point it should flush all dirty data to the drive, and
then freezes writes to the disk again. Presumably that should be OK,
because who are using laptop mode are inherently trading off a certain
amount of safety for power savings; but then other people who want to
run a mysql server on a laptop get cranky, and then if we start
implementing ways that applications can exempt themselves from the
fsync() suppression, the complexity level starts rising.
This is a pretty complicated problem.... if people want to mount the
filesystem with the sync mount option, sure, but when people want
safety, speed, efficiency, power savings, *and* they want to use
crappy proprietary device drivers that crash if you look at them
funny, *and* be solicitous to application writers that rewrite
hundreds of files on desktop startup (even though it's not clear *why*
it is useful for KDE or GNOME to rewrite hundreds of files when the
user logs in and initializes the desktop), something has got to give.
There's nothing to trade off, other than the sanity of the file system
maintainers. (But that's OK, Linus has called us crazy already. :-/)
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 20:45 ` Linus Torvalds
@ 2009-03-25 21:51 ` Theodore Tso
2009-03-25 23:21 ` Linus Torvalds
0 siblings, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-03-25 21:51 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 01:45:43PM -0700, Linus Torvalds wrote:
> > The third potential solution we can try doing is to make some tuning
> > adjustments to the VM so that we start pushing out these data blocks
> > much more aggressively out to the disk.
>
> Yes. but at least one problem is, as mentioned, that when the VM calls
> writepage[s]() to start async writeback, many filesystems do seem to just
> _block_ on it.
Um, no, ext3 shouldn't block on writepage(). Since it doesn't do
delayed allocation, it should always be able to push out a dirty page
to the disk.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 18:40 ` Linus Torvalds
@ 2009-03-25 22:05 ` Theodore Tso
2009-03-25 23:23 ` Linus Torvalds
2009-03-26 2:50 ` Neil Brown
0 siblings, 2 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-25 22:05 UTC (permalink / raw)
To: Linus Torvalds; +Cc: David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 11:40:28AM -0700, Linus Torvalds wrote:
> On Wed, 25 Mar 2009, Theodore Tso wrote:
> > I'm beginning to think that using a "ratio" may be the wrong way to
> > go. We probably need to add an optional dirty_max_megabytes field
> > where we start pushing dirty blocks out when the number of dirty
> > blocks exceeds either the dirty_ratio or the dirty_max_megabytes,
> > which ever comes first.
>
> We have that. Except it's called "dirty_bytes" and
> "dirty_background_bytes", and it defaults to zero (off).
>
> The problem being that unlike the ratio, there's no sane default value
> that you can at least argue is not _entirely_ pointless.
Well, if the maximum time that someone wants to wait for an fsync() to
return is one second, and the RAID array can write 100MB/sec, then
setting a value of 100MB makes a certain amount of sense. Yes, this
doesn't take seek overheads into account, and it may be that we're not
writing things out in an optimal order, as Alan as pointed out. But
100MB is much lower number than 5% of 32GB (1.6GB). It would be
better if these numbers were accounted on a per-filesystem instead of
a global threshold, but for people who are complaining about huge
latencies, it at least a partial workaround that they can use today.
I agree, it's not perfect, but this is a fundamentally hard problem.
We have multiple solutions, such as ext4 and XFS's delayed allocation,
which some people don't like because applications aren't calling
fsync(). We can boost the I/O priority of kjournald which definitely
helps, as Arjan has suggested, but Andrew has vetoed that. I have a
patch which hopefully is less controversial, that posts writes using
WRITE_SYNC instead of WRITE, but which only will help in some
circumstances, but not in the distcc/icecream/fast downloads
scnearios. We can use data=writeback, but folks don't like the
security implications of that.
People can call file system developers idiots if it makes them feel
better --- sure, OK, we all suck. If someone wants to try to create a
better file system, show us how to do better, or send us some patches.
But this is not a problem that's easy to solve in a way that's going
to make everyone happy; else it would have been solved already.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 20:57 ` Ric Wheeler
@ 2009-03-25 23:02 ` Linus Torvalds
2009-03-26 0:28 ` Ric Wheeler
0 siblings, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-25 23:02 UTC (permalink / raw)
To: Ric Wheeler
Cc: Jeff Garzik, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed, 25 Mar 2009, Ric Wheeler wrote:
>
> One concern with doing this above the file system is that you are not in the
> context of a transaction so you have no clean promises about what is on disk
> and persistent when. Flushing the cache is primitive at best, but the way
> barriers work today is designed to give the transactions some pretty critical
> ordering semantics for journalling file systems at least.
>
> I don't see how you could use this approach to make a really robust, failure
> proof storage system, but it might appear to work most of the time for most
> people :-)
You just do a write barrier after doing all the filesystem writing, and
you return with the guarantee that all the writes the filesystem did are
actually on disk.
No gray areas. No questions. No "might appear to work".
Sure, there might be other writes that got flushed _too_, but nobody
cares. If you have a crash later on, that's always true - you don't get
crashes at nice well-defined points.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 21:51 ` Theodore Tso
@ 2009-03-25 23:21 ` Linus Torvalds
2009-03-25 23:50 ` Jan Kara
2009-03-25 23:57 ` Linus Torvalds
0 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-25 23:21 UTC (permalink / raw)
To: Theodore Tso
Cc: Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, 25 Mar 2009, Theodore Tso wrote:
>
> Um, no, ext3 shouldn't block on writepage(). Since it doesn't do
> delayed allocation, it should always be able to push out a dirty page
> to the disk.
Umm. Maybe I'm mis-reading something, but they seem to all synchronize
with the journal with "ext3_journal_start/stop".
Which will at a minimum wait for 'j_barrier_count == 0' and 't_state !=
T_LOCKED'. Along with making sure that there are enough transaction
buffers.
Do I understand _why_ ext3 does that? Hell no. The code makes no sense to
me. But I don't think I'm wrong.
Look at the sane case (data=ordered): it still does
handle = ext3_journal_start(inode, ext3_writepage_trans_blocks(inode));
...
err = ext3_journal_stop(handle);
around all the IO starting. Never mind that the IO shouldn't be needing
any journal activity at all afaik in any common case.
Yes, yes, it may need to allocate backing store (a page that was dirtied
by mmap), and I'm sure that's the reason for it all, but the point is,
most of the time there should be no journal activity at all, yet it looks
very much like a simple writepage() will synchronize with a full journal
and wait for the journal to get space.
No?
So tell me again how the VM can rely on the filesystem not blocking at
random points.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 22:05 ` Theodore Tso
@ 2009-03-25 23:23 ` Linus Torvalds
2009-03-25 23:46 ` Bron Gondwana
2009-03-27 0:11 ` Andrew Morton
2009-03-26 2:50 ` Neil Brown
1 sibling, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-25 23:23 UTC (permalink / raw)
To: Theodore Tso; +Cc: David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed, 25 Mar 2009, Theodore Tso wrote:
> >
> > The problem being that unlike the ratio, there's no sane default value
> > that you can at least argue is not _entirely_ pointless.
>
> Well, if the maximum time that someone wants to wait for an fsync() to
> return is one second, and the RAID array can write 100MB/sec
How are you going to tell the kernel that the RAID array can write
100MB/s?
The kernel has no idea.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 23:23 ` Linus Torvalds
@ 2009-03-25 23:46 ` Bron Gondwana
2009-03-26 0:32 ` Ric Wheeler
2009-03-27 0:11 ` Andrew Morton
1 sibling, 1 reply; 419+ messages in thread
From: Bron Gondwana @ 2009-03-25 23:46 UTC (permalink / raw)
To: Linus Torvalds
Cc: Theodore Tso, David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 04:23:08PM -0700, Linus Torvalds wrote:
>
>
> On Wed, 25 Mar 2009, Theodore Tso wrote:
> > >
> > > The problem being that unlike the ratio, there's no sane default value
> > > that you can at least argue is not _entirely_ pointless.
> >
> > Well, if the maximum time that someone wants to wait for an fsync() to
> > return is one second, and the RAID array can write 100MB/sec
>
> How are you going to tell the kernel that the RAID array can write
> 100MB/s?
>
> The kernel has no idea.
Not at boot up, but after it's been using the RAID array for a little
while it could...
Bron (... imagining a tunable "max_fsync_wait_target_centisecs = 100"
which caused the kernel to notice how long flushes were taking
and tune its buffer sizes to be approximately right over time )
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 23:21 ` Linus Torvalds
@ 2009-03-25 23:50 ` Jan Kara
2009-03-26 0:04 ` Linus Torvalds
2009-03-25 23:57 ` Linus Torvalds
1 sibling, 1 reply; 419+ messages in thread
From: Jan Kara @ 2009-03-25 23:50 UTC (permalink / raw)
To: Linus Torvalds
Cc: Theodore Tso, Andrew Morton, Ingo Molnar, Alan Cox,
Arjan van de Ven, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed 25-03-09 16:21:56, Linus Torvalds wrote:
> On Wed, 25 Mar 2009, Theodore Tso wrote:
> >
> > Um, no, ext3 shouldn't block on writepage(). Since it doesn't do
> > delayed allocation, it should always be able to push out a dirty page
> > to the disk.
>
> Umm. Maybe I'm mis-reading something, but they seem to all synchronize
> with the journal with "ext3_journal_start/stop".
>
> Which will at a minimum wait for 'j_barrier_count == 0' and 't_state !=
> T_LOCKED'. Along with making sure that there are enough transaction
> buffers.
>
> Do I understand _why_ ext3 does that? Hell no. The code makes no sense to
> me. But I don't think I'm wrong.
>
> Look at the sane case (data=ordered): it still does
>
> handle = ext3_journal_start(inode, ext3_writepage_trans_blocks(inode));
> ...
> err = ext3_journal_stop(handle);
>
> around all the IO starting. Never mind that the IO shouldn't be needing
> any journal activity at all afaik in any common case.
>
> Yes, yes, it may need to allocate backing store (a page that was dirtied
> by mmap), and I'm sure that's the reason for it all, but the point is,
> most of the time there should be no journal activity at all, yet it looks
> very much like a simple writepage() will synchronize with a full journal
> and wait for the journal to get space.
>
> No?
Yes, you got it right. Furthermore in ordered mode we need to attach
buffers to the running transaction if they aren't there (but for checking
whether they are we need to pin the running transaction and we are
basically where we started.. damn). But maybe there's a way out of it.
We don't have to guarantee data written via mmap are on disk when "the
transaction running when somebody decided to call writepage" commits (in
case no block allocation happen) and so we could just submit those buffers
for IO and don't attach them to the transaction...
> So tell me again how the VM can rely on the filesystem not blocking at
> random points.
I can write a patch to make writepage() in the non-"mmapped creation"
case non-blocking on journal. But I'll also have to find out whether it
really helps something. But it's probably worth trying...
Honza
--
Jan Kara <jack@suse.cz>
SUSE Labs, CR
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 23:21 ` Linus Torvalds
2009-03-25 23:50 ` Jan Kara
@ 2009-03-25 23:57 ` Linus Torvalds
2009-03-26 0:22 ` Jan Kara
1 sibling, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-25 23:57 UTC (permalink / raw)
To: Theodore Tso
Cc: Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, 25 Mar 2009, Linus Torvalds wrote:
>
> Yes, yes, it may need to allocate backing store (a page that was dirtied
> by mmap), and I'm sure that's the reason for it all,
Hmm. Thinking about that, I'm not so sure. Shouldn't that backing store
allocation happen when the page is actually dirtied on ext3?
I _suspect_ that goes back to the fact that ext3 is older than the
"aops->set_page_dirty()" callback, and nobody taught ext3 to do the bmap's
at dirty time, so now it does it at writeout time.
Anyway, there we are. Old filesystems do the wrong thing (block allocation
while doing writeout because they don't do it when dirtying), and newer
filesystems do the wrong thing (block allocations during writeout, because
they want to do delayed allocation to do the inode dirtying after doing
writeback).
And in either case, the VM is screwed, and can't ask for writeout, because
it will be randomly throttled by the filesystem. So we do lots of async
bdflush threads, which then causes IO ordering problems because now the
writeout is all in random order.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 23:50 ` Jan Kara
@ 2009-03-26 0:04 ` Linus Torvalds
0 siblings, 0 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-26 0:04 UTC (permalink / raw)
To: Jan Kara
Cc: Theodore Tso, Andrew Morton, Ingo Molnar, Alan Cox,
Arjan van de Ven, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Thu, 26 Mar 2009, Jan Kara wrote:
>
> I can write a patch to make writepage() in the non-"mmapped creation"
> case non-blocking on journal. But I'll also have to find out whether it
> really helps something. But it's probably worth trying...
Actually, it really should be easier to make a patch that just does the
journal thing if ->set_page_dirty() is called, and buffers weren't already
allocated.
Then ext3_[ordered|writeback]_writepage() _should_ just become something
like
if (test_opt(inode->i_sb, NOBH))
return nobh_writepage(page, ext3_get_block, wbc);
return block_write_full_page(page, ext3_get_block, wbc);
and that's it. The code would be simpler to understand to boot.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 23:57 ` Linus Torvalds
@ 2009-03-26 0:22 ` Jan Kara
2009-03-26 1:34 ` Linus Torvalds
0 siblings, 1 reply; 419+ messages in thread
From: Jan Kara @ 2009-03-26 0:22 UTC (permalink / raw)
To: Linus Torvalds
Cc: Theodore Tso, Andrew Morton, Ingo Molnar, Alan Cox,
Arjan van de Ven, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed 25-03-09 16:57:21, Linus Torvalds wrote:
>
>
> On Wed, 25 Mar 2009, Linus Torvalds wrote:
> >
> > Yes, yes, it may need to allocate backing store (a page that was dirtied
> > by mmap), and I'm sure that's the reason for it all,
>
> Hmm. Thinking about that, I'm not so sure. Shouldn't that backing store
> allocation happen when the page is actually dirtied on ext3?
We don't do it currently. We could do it (it would also solve the problem
that we currently silently discard users data when he reaches his quota or
filesystem gets ENOSPC) but there are problems with it as well:
1) We have to writeout blocks full of zeros on allocation so that we don't
expose unallocated data => slight slowdown
2) When blocksize < pagesize we must play nasty tricks for this to work
(think about i_size = 1024, set_page_dirty(), truncate(f, 8192),
writepage() -> uhuh, not enough space allocated)
3) We'll do allocation in the order in which pages are dirtied. Generally,
I'd suspect this order to be less linear than the order in which writepages
submit IO and thus it will result in the larger fragmentation of the file.
So it's not a clear win IMHO.
> I _suspect_ that goes back to the fact that ext3 is older than the
> "aops->set_page_dirty()" callback, and nobody taught ext3 to do the bmap's
> at dirty time, so now it does it at writeout time.
>
> Anyway, there we are. Old filesystems do the wrong thing (block allocation
> while doing writeout because they don't do it when dirtying), and newer
> filesystems do the wrong thing (block allocations during writeout, because
> they want to do delayed allocation to do the inode dirtying after doing
> writeback).
>
> And in either case, the VM is screwed, and can't ask for writeout, because
> it will be randomly throttled by the filesystem. So we do lots of async
> bdflush threads, which then causes IO ordering problems because now the
> writeout is all in random order.
Honza
--
Jan Kara <jack@suse.cz>
SUSE Labs, CR
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 23:02 ` Linus Torvalds
@ 2009-03-26 0:28 ` Ric Wheeler
2009-03-26 1:36 ` Linus Torvalds
0 siblings, 1 reply; 419+ messages in thread
From: Ric Wheeler @ 2009-03-26 0:28 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jeff Garzik, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
> On Wed, 25 Mar 2009, Ric Wheeler wrote:
>
>> One concern with doing this above the file system is that you are not in the
>> context of a transaction so you have no clean promises about what is on disk
>> and persistent when. Flushing the cache is primitive at best, but the way
>> barriers work today is designed to give the transactions some pretty critical
>> ordering semantics for journalling file systems at least.
>>
>> I don't see how you could use this approach to make a really robust, failure
>> proof storage system, but it might appear to work most of the time for most
>> people :-)
>>
>
> You just do a write barrier after doing all the filesystem writing, and
> you return with the guarantee that all the writes the filesystem did are
> actually on disk.
>
>
In this case, you have not gained anything - same number of barrier
operations/cache flushes and looser semantics for the transactions?
> No gray areas. No questions. No "might appear to work".
>
> Sure, there might be other writes that got flushed _too_, but nobody
> cares. If you have a crash later on, that's always true - you don't get
> crashes at nice well-defined points.
>
> Linus
>
This is pretty much how write barriers work today - you carry down other
transactions (even for other partitions on the same disk) with you...
ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 23:46 ` Bron Gondwana
@ 2009-03-26 0:32 ` Ric Wheeler
0 siblings, 0 replies; 419+ messages in thread
From: Ric Wheeler @ 2009-03-26 0:32 UTC (permalink / raw)
To: Bron Gondwana
Cc: Linus Torvalds, Theodore Tso, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Bron Gondwana wrote:
> On Wed, Mar 25, 2009 at 04:23:08PM -0700, Linus Torvalds wrote:
>
>> On Wed, 25 Mar 2009, Theodore Tso wrote:
>>
>>>> The problem being that unlike the ratio, there's no sane default value
>>>> that you can at least argue is not _entirely_ pointless.
>>>>
>>> Well, if the maximum time that someone wants to wait for an fsync() to
>>> return is one second, and the RAID array can write 100MB/sec
>>>
>> How are you going to tell the kernel that the RAID array can write
>> 100MB/s?
>>
>> The kernel has no idea.
>>
>
> Not at boot up, but after it's been using the RAID array for a little
> while it could...
>
> Bron (... imagining a tunable "max_fsync_wait_target_centisecs = 100"
> which caused the kernel to notice how long flushes were taking
> and tune its buffer sizes to be approximately right over time )
>
This tuning logic is the core of what Josef Bacik did for the
transaction batching code for ext4....
ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 0:22 ` Jan Kara
@ 2009-03-26 1:34 ` Linus Torvalds
2009-03-26 2:59 ` Theodore Tso
2009-03-26 16:24 ` Jan Kara
0 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-26 1:34 UTC (permalink / raw)
To: Jan Kara
Cc: Theodore Tso, Andrew Morton, Ingo Molnar, Alan Cox,
Arjan van de Ven, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Thu, 26 Mar 2009, Jan Kara wrote:
>
> 1) We have to writeout blocks full of zeros on allocation so that we don't
> expose unallocated data => slight slowdown
Why?
This is in _no_ way different from a regular "write()" system call. And
there, we just attach the buffers to the page. If something crashes before
the page actually gets written out, then we'll have hopefully never
written out the metadata (that's what "data=ordered" means).
> 2) When blocksize < pagesize we must play nasty tricks for this to work
> (think about i_size = 1024, set_page_dirty(), truncate(f, 8192),
> writepage() -> uhuh, not enough space allocated)
Good point. I suspect not enough people have played around with
"set_page_dirty()" to find these kinds of things. The VFS layer probably
doesn't help sufficiently with the half-dirty pages, although the FS can
obviously always look up the previously last page and do things manually
if it wants to.
But yes, this is nasty.
> 3) We'll do allocation in the order in which pages are dirtied. Generally,
> I'd suspect this order to be less linear than the order in which writepages
> submit IO and thus it will result in the larger fragmentation of the file.
> So it's not a clear win IMHO.
Yes, that may be the case.
Of course, the approach of just checking whether the buffer heads already
exists and are mapped (before bothering with anything else) probably works
fine in practice. In most loads, pages will have been dirtied by regular
"write()" system calls, and then we will have the buffers pre-allocated
regardless.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 0:28 ` Ric Wheeler
@ 2009-03-26 1:36 ` Linus Torvalds
0 siblings, 0 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-26 1:36 UTC (permalink / raw)
To: Ric Wheeler
Cc: Jeff Garzik, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed, 25 Mar 2009, Ric Wheeler wrote:
>
> In this case, you have not gained anything - same number of barrier
> operations/cache flushes and looser semantics for the transactions?
Um. Except you gained the fact that the filesystem doesn't have to care
and screw it up. And then we can know that it gets done, regardless of
what odd things the low-level fs does.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 21:50 ` Theodore Tso
@ 2009-03-26 2:10 ` Matthew Garrett
2009-03-26 2:36 ` Jeff Garzik
[not found] ` <f73f7ab80903251944s581166bbk31c26db50750814a@mail.gmail.com>
0 siblings, 2 replies; 419+ messages in thread
From: Matthew Garrett @ 2009-03-26 2:10 UTC (permalink / raw)
To: Theodore Tso, Christoph Hellwig, Linus Torvalds, Jan Kara,
Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 05:50:16PM -0400, Theodore Tso wrote:
> To be fair, though, one problem which Matthew Garrett has pointed out
> is that if lots of applications issue fsync(), it will have the
> tendency to wake up the hard drive a lot, and do a real number on
> power utilization. I believe the right solution for this is an
> extension to laptop mode which synchronizes the filesystem at a clean
> point, and then which suppresses fsync()'s until the hard drive wakes
> up, at which point it should flush all dirty data to the drive, and
> then freezes writes to the disk again. Presumably that should be OK,
> because who are using laptop mode are inherently trading off a certain
> amount of safety for power savings; but then other people who want to
> run a mysql server on a laptop get cranky, and then if we start
> implementing ways that applications can exempt themselves from the
> fsync() suppression, the complexity level starts rising.
I disagree with this approach. If fsync() means anything other than "Get
my data on disk and then return" then we're breaking guarantees to
applications. The problem is that you're insisting that the only way
applications can ensure that their requests occur in order is to use
fsync(), which will achieve that but also provides guarantees above and
beyond what the majority of applications want.
I've done some benchmarking now and I'm actually fairly happy with the
behaviour of ext4 now - it seems that the real world impact of doing the
block allocation at rename time isn't that significant, and if that's
the only practical way to ensure ordering guarantees in ext4 then fine.
But given that, I don't think there's any reason to try to convince
application authors to use fsync() more.
--
Matthew Garrett | mjg59@srcf.ucam.org
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 2:10 ` Matthew Garrett
@ 2009-03-26 2:36 ` Jeff Garzik
2009-03-26 2:42 ` Matthew Garrett
[not found] ` <f73f7ab80903251944s581166bbk31c26db50750814a@mail.gmail.com>
1 sibling, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-03-26 2:36 UTC (permalink / raw)
To: Matthew Garrett
Cc: Theodore Tso, Christoph Hellwig, Linus Torvalds, Jan Kara,
Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Matthew Garrett wrote:
> I disagree with this approach. If fsync() means anything other than "Get
> my data on disk and then return" then we're breaking guarantees to
> applications.
Due to lack of storage dev writeback cache flushing, we are indeed
breaking that guarantee in many situations...
> The problem is that you're insisting that the only way
> applications can ensure that their requests occur in order is to use
> fsync(), which will achieve that but also provides guarantees above and
> beyond what the majority of applications want.
That remains a true statement... without the *sync* syscalls, you
still do not have a _guarantee_ writes occur in a certain order.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 2:36 ` Jeff Garzik
@ 2009-03-26 2:42 ` Matthew Garrett
0 siblings, 0 replies; 419+ messages in thread
From: Matthew Garrett @ 2009-03-26 2:42 UTC (permalink / raw)
To: Jeff Garzik
Cc: Theodore Tso, Christoph Hellwig, Linus Torvalds, Jan Kara,
Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 10:36:31PM -0400, Jeff Garzik wrote:
> Matthew Garrett wrote:
> >The problem is that you're insisting that the only way
> >applications can ensure that their requests occur in order is to use
> >fsync(), which will achieve that but also provides guarantees above and
> >beyond what the majority of applications want.
>
> That remains a true statement... without the *sync* syscalls, you
> still do not have a _guarantee_ writes occur in a certain order.
The interesting case is whether data hits disk before metadata when
renaming over the top of an existing file, which appears to be
guaranteed in the default ext4 configuration now? I'm sure there are
filesystems where this isn't the case, but that's mostly just an
argument that it's not sensible to use those filesystems if your
system's at any risk of crashing.
--
Matthew Garrett | mjg59@srcf.ucam.org
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
[not found] ` <f73f7ab80903251944s581166bbk31c26db50750814a@mail.gmail.com>
@ 2009-03-26 2:46 ` Kyle Moffett
2009-03-26 2:51 ` Jeff Garzik
2009-03-26 2:47 ` Matthew Garrett
1 sibling, 1 reply; 419+ messages in thread
From: Kyle Moffett @ 2009-03-26 2:46 UTC (permalink / raw)
To: Matthew Garrett
Cc: Theodore Tso, Christoph Hellwig, Linus Torvalds, Jan Kara,
Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Apologies for the HTML email, resent in ASCII below:
> On Wed, Mar 25, 2009 at 10:10 PM, Matthew Garrett <mjg59@srcf.ucam.org> wrote:
>>
>> If fsync() means anything other than "Get
>> my data on disk and then return" then we're breaking guarantees to
>> applications. The problem is that you're insisting that the only way
>> applications can ensure that their requests occur in order is to use
>> fsync(), which will achieve that but also provides guarantees above and
>> beyond what the majority of applications want.
>>
>> I've done some benchmarking now and I'm actually fairly happy with the
>> behaviour of ext4 now - it seems that the real world impact of doing the
>> block allocation at rename time isn't that significant, and if that's
>> the only practical way to ensure ordering guarantees in ext4 then fine.
>> But given that, I don't think there's any reason to try to convince
>> application authors to use fsync() more.
>
> Really, the problem is the filesystem interfaces are incomplete. There are plenty of ways to specify a "FLUSH CACHE"-type command for an individual file or for the whole filesystem, but there aren't really any ways for programs to specify barriers (either whole-blockdev or per-LBA-range). An fsync() implies you want to *wait* for the data... there's no way to ask it all to be queued with some ordering constraints.
> Perhaps we ought to add a couple extra open flags, O_BARRIER_BEFORE and O_BARRIER_AFTER, and rename3(), etc functions that take flags arguments?
> Or maybe a new set of syscalls like barrier(file1, file2) and fbarrier(fd1, fd2), which cause all pending changes (perhaps limit to this process?) to the file at fd1 to occur before any successive changes (again limited to this process?) to the file at fd2.
> It seems that rename(oldfile, newfile) with an already-existing newfile should automatically imply barrier(oldfile, newfile) before it occurs, simply because so many programs rely on that.
> In the cross-filesystem case, the fbarrier() might simply fsync(fd1), since that would provide the equivalent guarantee, albeit with possibly significant performance penalties. I can't think of any easy way to prevent one filesystem from syncing writes to a particular file until another filesystem has finished an asynchronous fsync() call. Perhaps a half-way solution would be to asynchronously fsync(fd1) and simply block the next write()/ioctl()/etc on fd2 until the async fsync returns.
> Are there other ideas for useful barrier()-generating file APIs?
> Cheers,
> Kyle Moffett
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
[not found] ` <f73f7ab80903251944s581166bbk31c26db50750814a@mail.gmail.com>
2009-03-26 2:46 ` Kyle Moffett
@ 2009-03-26 2:47 ` Matthew Garrett
2009-03-26 2:54 ` Kyle Moffett
1 sibling, 1 reply; 419+ messages in thread
From: Matthew Garrett @ 2009-03-26 2:47 UTC (permalink / raw)
To: Kyle Moffett
Cc: Theodore Tso, Christoph Hellwig, Linus Torvalds, Jan Kara,
Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 10:44:44PM -0400, Kyle Moffett wrote:
> Perhaps we ought to add a couple extra open flags, O_BARRIER_BEFORE and
> O_BARRIER_AFTER, and rename3(), etc functions that take flags arguments?
> Or maybe a new set of syscalls like barrier(file1, file2) and
> fbarrier(fd1, fd2), which cause all pending changes (perhaps limit to this
> process?) to the file at fd1 to occur before any successive changes (again
> limited to this process?) to the file at fd2.
That's an option, but what would benefit? If rename is expected to
preserve ordering (which I think it has to, in order to avoid breaking
existing code) then are there any other interesting use cases?
--
Matthew Garrett | mjg59@srcf.ucam.org
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 22:05 ` Theodore Tso
2009-03-25 23:23 ` Linus Torvalds
@ 2009-03-26 2:50 ` Neil Brown
2009-03-26 3:13 ` Theodore Tso
1 sibling, 1 reply; 419+ messages in thread
From: Neil Brown @ 2009-03-26 2:50 UTC (permalink / raw)
To: Theodore Tso
Cc: Linus Torvalds, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wednesday March 25, tytso@mit.edu wrote:
> On Wed, Mar 25, 2009 at 11:40:28AM -0700, Linus Torvalds wrote:
> > On Wed, 25 Mar 2009, Theodore Tso wrote:
> > > I'm beginning to think that using a "ratio" may be the wrong way to
> > > go. We probably need to add an optional dirty_max_megabytes field
> > > where we start pushing dirty blocks out when the number of dirty
> > > blocks exceeds either the dirty_ratio or the dirty_max_megabytes,
> > > which ever comes first.
> >
> > We have that. Except it's called "dirty_bytes" and
> > "dirty_background_bytes", and it defaults to zero (off).
> >
> > The problem being that unlike the ratio, there's no sane default value
> > that you can at least argue is not _entirely_ pointless.
>
> Well, if the maximum time that someone wants to wait for an fsync() to
> return is one second, and the RAID array can write 100MB/sec, then
> setting a value of 100MB makes a certain amount of sense. Yes, this
> doesn't take seek overheads into account, and it may be that we're not
> writing things out in an optimal order, as Alan as pointed out. But
> 100MB is much lower number than 5% of 32GB (1.6GB). It would be
> better if these numbers were accounted on a per-filesystem instead of
> a global threshold, but for people who are complaining about huge
> latencies, it at least a partial workaround that they can use today.
We do a lot of dirty accounting on a per-backing_device basis. This
was added to stop slow devices from sucking up too much for the "40%
dirty" space. The allowable dirty space is now shared among all
devices in rough proportion to how quickly they write data out.
My memory of how it works isn't perfect, but we count write-out
completions both globally and per-bdi and maintain a fraction:
my-writeout-completions
--------------------------
total-writeout-completions
That device then gets a share of the available dirty space based on
the fraction.
The counts decay some-how so that the fraction represents recent
activity.
I shouldn't be too hard to add some concept of total time to this.
If we track the number of write-outs per unit time and use that together
with a "target time for fsync" to scale the 'dirty_bytes' number, we
might be able to auto-tune the amount of dirty space to fit the speeds
of the drives.
We would probably start with each device having a very low "max dirty"
number which would cause writeouts to start soon. Once the device
demonstrates that it can do n-per-second (or whatever) the VM would
allow the "max dirty" number to drift upwards. I'm not sure how best
to get it to move downwards if the device slows down (or the kernel
over-estimated). Maybe it should regularly decay so that the device
keeps have to "prove" itself.
We would still leave the "dirty_ratio" as an upper-limit because we
don't want all of memory to be dirty (and 40% still sounds about
right). But we would not have a time-based value to set a more
realistic limit when there is enough memory to keep the devices busy
for multiple minutes.
Sorry, no code yet. But I think the idea is sound.
NeilBrown
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 2:46 ` Kyle Moffett
@ 2009-03-26 2:51 ` Jeff Garzik
2009-03-26 3:03 ` Kyle Moffett
0 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-03-26 2:51 UTC (permalink / raw)
To: Kyle Moffett
Cc: Matthew Garrett, Theodore Tso, Christoph Hellwig, Linus Torvalds,
Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Kyle Moffett wrote:
>> Really, the problem is the filesystem interfaces are incomplete. There are plenty of ways to specify a "FLUSH CACHE"-type command for an individual file or for the whole filesystem, but there aren't really any ways for programs to specify barriers (either whole-blockdev or per-LBA-range). An fsync() implies you want to *wait* for the data... there's no way to ask it all to be queued with some ordering constraints.
>> Perhaps we ought to add a couple extra open flags, O_BARRIER_BEFORE and O_BARRIER_AFTER, and rename3(), etc functions that take flags arguments?
>> Or maybe a new set of syscalls like barrier(file1, file2) and fbarrier(fd1, fd2), which cause all pending changes (perhaps limit to this process?) to the file at fd1 to occur before any successive changes (again limited to this process?) to the file at fd2.
>> It seems that rename(oldfile, newfile) with an already-existing newfile should automatically imply barrier(oldfile, newfile) before it occurs, simply because so many programs rely on that.
>> In the cross-filesystem case, the fbarrier() might simply fsync(fd1), since that would provide the equivalent guarantee, albeit with possibly significant performance penalties. I can't think of any easy way to prevent one filesystem from syncing writes to a particular file until another filesystem has finished an asynchronous fsync() call. Perhaps a half-way solution would be to asynchronously fsync(fd1) and simply block the next write()/ioctl()/etc on fd2 until the async fsync returns.
Then you have just reinvented the transactional userspace API that
people often want to replace POSIX API with. Maybe one day they will
succeed.
But "POSIX API replacement" is an area never short of proposals... :)
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 2:47 ` Matthew Garrett
@ 2009-03-26 2:54 ` Kyle Moffett
0 siblings, 0 replies; 419+ messages in thread
From: Kyle Moffett @ 2009-03-26 2:54 UTC (permalink / raw)
To: Matthew Garrett
Cc: Theodore Tso, Christoph Hellwig, Linus Torvalds, Jan Kara,
Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 10:47 PM, Matthew Garrett <mjg59@srcf.ucam.org> wrote:
> On Wed, Mar 25, 2009 at 10:44:44PM -0400, Kyle Moffett wrote:
>
>> Perhaps we ought to add a couple extra open flags, O_BARRIER_BEFORE and
>> O_BARRIER_AFTER, and rename3(), etc functions that take flags arguments?
>> Or maybe a new set of syscalls like barrier(file1, file2) and
>> fbarrier(fd1, fd2), which cause all pending changes (perhaps limit to this
>> process?) to the file at fd1 to occur before any successive changes (again
>> limited to this process?) to the file at fd2.
>
> That's an option, but what would benefit? If rename is expected to
> preserve ordering (which I think it has to, in order to avoid breaking
> existing code) then are there any other interesting use cases?
The use cases would be programs like GIT (or any other kind of
database) where you want to ensure that your new pulled packfile has
fully hit disk before the ref update does. If that ordering
constraint is applied, then we don't really care when we crash,
because either we have a partial packfile update (and we have to pull
again) or we have the whole thing. The rename() barrier would ensure
that we either have the old ref or the new ref, but it would not check
to ensure that the whole packfile is on disk yet.
I would imagine that databases like MySQL could also use such support
to help speed up their database transaction support, instead of having
to run a bunch of threads which fsync() and buffer data internally.
Cheers,
Kyle Moffett
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 1:34 ` Linus Torvalds
@ 2009-03-26 2:59 ` Theodore Tso
2009-03-26 16:24 ` Jan Kara
1 sibling, 0 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-26 2:59 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 06:34:32PM -0700, Linus Torvalds wrote:
>
> Of course, the approach of just checking whether the buffer heads already
> exists and are mapped (before bothering with anything else) probably works
> fine in practice. In most loads, pages will have been dirtied by regular
> "write()" system calls, and then we will have the buffers pre-allocated
> regardless.
>
Yeah, I agree; solving the problem in the case of files being dirtied
via write() is going to solve a much percentage of the cases compared
to those cases where the pages are dirtied via mmap()'ed pages.
I thought we were doing this already, but clearly I should have looked
at the code first. :-(
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 2:51 ` Jeff Garzik
@ 2009-03-26 3:03 ` Kyle Moffett
2009-03-26 3:40 ` Linus Torvalds
0 siblings, 1 reply; 419+ messages in thread
From: Kyle Moffett @ 2009-03-26 3:03 UTC (permalink / raw)
To: Jeff Garzik
Cc: Matthew Garrett, Theodore Tso, Christoph Hellwig, Linus Torvalds,
Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 10:51 PM, Jeff Garzik <jeff@garzik.org> wrote:
> Then you have just reinvented the transactional userspace API that people
> often want to replace POSIX API with. Maybe one day they will succeed.
>
> But "POSIX API replacement" is an area never short of proposals... :)
Well, I think the goal is not to *replace* the POSIX API or even
provide "transactional" guarantees. The performance penalty for
atomic transactions is pretty high, and most programs (like GIT) don't
really give a damn, as they provide that on a higher level.
It's like the difference between a modern SMP system that supports
memory barriers and write snooping and one of the theoretical
"transactional memory" designs that have never caught on.
To be honest I think we could provide much better data consistency
guarantees and remove a lot of fsync() calls with just a basic
per-filesystem barrier() call.
Cheers,
Kyle Moffett
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 2:50 ` Neil Brown
@ 2009-03-26 3:13 ` Theodore Tso
0 siblings, 0 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-26 3:13 UTC (permalink / raw)
To: Neil Brown
Cc: Linus Torvalds, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Thu, Mar 26, 2009 at 01:50:10PM +1100, Neil Brown wrote:
> I shouldn't be too hard to add some concept of total time to this.
> If we track the number of write-outs per unit time and use that together
> with a "target time for fsync" to scale the 'dirty_bytes' number, we
> might be able to auto-tune the amount of dirty space to fit the speeds
> of the drives.
>
> We would probably start with each device having a very low "max dirty"
> number which would cause writeouts to start soon. Once the device
> demonstrates that it can do n-per-second (or whatever) the VM would
> allow the "max dirty" number to drift upwards. I'm not sure how best
> to get it to move downwards if the device slows down (or the kernel
> over-estimated). Maybe it should regularly decay so that the device
> keeps have to "prove" itself.
This seems like a really cool idea.
-Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 3:03 ` Kyle Moffett
@ 2009-03-26 3:40 ` Linus Torvalds
2009-03-26 3:57 ` David Miller
2009-03-26 4:58 ` Kyle Moffett
0 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-26 3:40 UTC (permalink / raw)
To: Kyle Moffett
Cc: Jeff Garzik, Matthew Garrett, Theodore Tso, Christoph Hellwig,
Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, 25 Mar 2009, Kyle Moffett wrote:
>
> Well, I think the goal is not to *replace* the POSIX API or even
> provide "transactional" guarantees. The performance penalty for
> atomic transactions is pretty high, and most programs (like GIT) don't
> really give a damn, as they provide that on a higher level.
Speaking with my 'git' hat on, I can tell that
- git was designed to have almost minimal requirements from the
filesystem, and to not do anything even half-way clever.
- despite that, we've hit an absolute metric sh*tload of filesystem bugs
and misfeatures. Some very much in Linux. And some I bet git was the
first to ever notice, exactly because git tries to be really anal, in
ways that I can pretty much guarantee no normal program _ever_ is.
For example, the latest one came from git actually checking the error code
from 'close()'. Tell me the last time you saw anybody do that in a real
program. Hint: it's just not done. EVER. Git does it (and even then, git
does it only for the core git object files that we care about so much),
and we found a real data-loss CIFS bug thanks to that. Afaik, the bug has
been there for a year and half. Don't tell me nobody uses cifs.
Before that, we had cross-directory rename bugs. Or the inexplicable
"pread() doesn't work correctly on HP-UX". Or the "readdir() returns the
same entry multiple times" bug. And all of this without ever doing
anything even _remotely_ odd. No file locking, no rewriting of old files,
no lseek()ing in directories, no nothing.
Anybody who wants more complex and subtle filesystem interfaces is just
crazy. Not only will they never get used, they'll definitely not be
stable.
> To be honest I think we could provide much better data consistency
> guarantees and remove a lot of fsync() calls with just a basic
> per-filesystem barrier() call.
The problem is not that we have a lot of fsync() calls. Quite the reverse.
fsync() is really really rare. So is being careful in general. The number
of applications that do even the _minimal_ safety-net of "create new file,
rename it atomically over an old one" is basically zero. Almost everybody
ends up rewriting files with something like
open(name, O_CREAT | O_TRUNC, 0666)
write();
close();
where there isn't an fsync in sight, nor any "create temp file", nor
likely even any real error checking on the write(), much less the
close().
And if we have a Linux-specific magic system call or sync action, it's
going to be even more rarely used than fsync(). Do you think anybody
really uses the OS X FSYNC_FULL ioctl? Nope. Outside of a few databases,
it is almost certainly not going to be used, and fsync() will not be
reliable in general.
So rather than come up with new barriers that nobody will use, filesystem
people should aim to make "badly written" code "just work" unless people
are really really unlucky. Because like it or not, that's what 99% of all
code is.
The undeniable FACT that people don't tend to check errors from close()
should, for example, mean that delayed allocation must still track disk
full conditions, for example. If your filesystem returns ENOSPC at close()
rather than at write(), you just lost error coverage for disk full cases
from 90% of all apps. It's that simple.
Crying that it's an application bug is like crying over the speed of
light: you should deal with *reality*, not what you wish reality was. Same
goes for any complaints that "people should write a temp-file, fsync it,
and rename it over the original". You may wish that was what they did, but
reality is that "open(filename, O_TRUNC | O_CREAT, 0666)" thing.
Harsh, I know. And in the end, even the _good_ applications will decide
that it's not worth the performance penalty of doing an fsync(). In git,
for example, where we generally try to be very very very careful,
'fsync()' on the object files is turned off by default.
Why? Because turning it on results in unacceptable behavior on ext3. Now,
admittedly, the git design means that a lost new DB file isn't deadly,
just potentially very very annoying and confusing - you may have to roll
back and re-do your operation by hand, and you have to know enough to be
able to do it in the first place.
The point here? Sometimes those filesystem people who say "you must use
fsync() to get well-defined semantics" are the same people who SCREWED IT
UP SO DAMN BADLY THAT FSYNC ISN'T ACTUALLY REALISTICALLY USEABLE!
Theory and practice sometimes clash. And when that happens, theory loses.
Every single time.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 3:40 ` Linus Torvalds
@ 2009-03-26 3:57 ` David Miller
2009-03-26 4:58 ` Kyle Moffett
1 sibling, 0 replies; 419+ messages in thread
From: David Miller @ 2009-03-26 3:57 UTC (permalink / raw)
To: torvalds
Cc: kyle, jeff, mjg59, tytso, hch, jack, akpm, mingo, alan, arjan,
a.p.zijlstra, npiggin, jens.axboe, drees76, jesper, linux-kernel
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Wed, 25 Mar 2009 20:40:23 -0700 (PDT)
> For example, the latest one came from git actually checking the error code
> from 'close()'. Tell me the last time you saw anybody do that in a real
> program. Hint: it's just not done. EVER.
Emacs does it too, and I know that you consider GNU emacs to be the
definition of abnormal :-)
That's how we found some misbehaviors in NFS a while ago, we used to
return -EAGAIN or something like that from close() on NFS files. This
was like 12 years ago and it gave emacs massive heartburn.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 3:40 ` Linus Torvalds
2009-03-26 3:57 ` David Miller
@ 2009-03-26 4:58 ` Kyle Moffett
2009-03-26 6:24 ` Jeff Garzik
1 sibling, 1 reply; 419+ messages in thread
From: Kyle Moffett @ 2009-03-26 4:58 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jeff Garzik, Matthew Garrett, Theodore Tso, Christoph Hellwig,
Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Mar 25, 2009 at 11:40 PM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
> On Wed, 25 Mar 2009, Kyle Moffett wrote:
>> To be honest I think we could provide much better data consistency
>> guarantees and remove a lot of fsync() calls with just a basic
>> per-filesystem barrier() call.
>
> The problem is not that we have a lot of fsync() calls. Quite the reverse.
> fsync() is really really rare. So is being careful in general. The number
> of applications that do even the _minimal_ safety-net of "create new file,
> rename it atomically over an old one" is basically zero. Almost everybody
> ends up rewriting files with something like
>
> open(name, O_CREAT | O_TRUNC, 0666)
> write();
> close();
>
> where there isn't an fsync in sight, nor any "create temp file", nor
> likely even any real error checking on the write(), much less the
> close().
Really, I think virtually all of the database programs would be
perfectly happy with an "fsbarrier(fd, flags)" syscall, where if "fd"
points to a regular file or directory then it instructs the underlying
filesystem to do whatever internal barrier it supports, and if not
just fail with -ENOTSUPP (so you can fall back to fdatasync(), etc).
Perhaps "flags" would allow a "data" or "metadata" barrier, but if not
it's not a big issue.
I've ended up having to write a fair amount of high-performance
filesystem library code which almost never ends up using fsync() quite
simply because the performance on it sucks so badly. This is one of
the big reasons why so many critical database programs use O_DIRECT
and reinvent the the wheel^H^H^H^H^H^H pagecache. The only way you
can actually use it in high-bandwidth transaction applications is by
doing your own IO-thread and buffering system.
You have to have your own buffer ordering dependencies and call
fdatasync() or fsync() from individual threads in-between specific
ordered IOs. The threading helps you keep other IO in flight while
waiting for the flush to finish. For big databases on spinning media
(SSDs don't work precisely because they are small and your databases
are big) the overhead of a full flush may still be too large. Even
with SSDs, with multiple processes vying for IO bandwidth you still
want some kind of application-level barrier to avoid introducing
bubbles in your IO pipeline.
It all comes down to a trivial calculation: if you can't get
(bandwidth * latency-to-stable-storage) bytes of data queued *behind*
a flush then your disk is going to sit idle waiting for more data
after completing it. If a user-level tool needs to enforce ordering
between IOs the only tool right now is is a full flush; when
database-oriented tools can use a barrier()-ish call instead, they can
issue the op and immediately resume keeping the IO queues full.
Cheers,
Kyle Moffett
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 4:58 ` Kyle Moffett
@ 2009-03-26 6:24 ` Jeff Garzik
2009-03-26 12:49 ` Kyle Moffett
0 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-03-26 6:24 UTC (permalink / raw)
To: Kyle Moffett
Cc: Linus Torvalds, Matthew Garrett, Theodore Tso, Christoph Hellwig,
Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Kyle Moffett wrote:
> Really, I think virtually all of the database programs would be
> perfectly happy with an "fsbarrier(fd, flags)" syscall, where if "fd"
> points to a regular file or directory then it instructs the underlying
> filesystem to do whatever internal barrier it supports, and if not
> just fail with -ENOTSUPP (so you can fall back to fdatasync(), etc).
> Perhaps "flags" would allow a "data" or "metadata" barrier, but if not
> it's not a big issue.
If you want a per-fd barrier call, there is always sync_file_range(2)
> If a user-level tool needs to enforce ordering
> between IOs the only tool right now is is a full flush
or sync_file_range(2)...
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 20:41 ` Hugh Dickins
@ 2009-03-26 8:57 ` Jens Axboe
2009-03-26 14:47 ` Hugh Dickins
0 siblings, 1 reply; 419+ messages in thread
From: Jens Axboe @ 2009-03-26 8:57 UTC (permalink / raw)
To: Hugh Dickins
Cc: Ric Wheeler, Jeff Garzik, Linus Torvalds, Theodore Tso,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Mar 25 2009, Hugh Dickins wrote:
> On Wed, 25 Mar 2009, Jens Axboe wrote:
> > On Wed, Mar 25 2009, Ric Wheeler wrote:
> > > Jens Axboe wrote:
> > >>
> > >> Another problem is that FLUSH_CACHE sucks. Really. And not just on
> > >> ext3/ordered, generally. Write a 50 byte file, fsync, flush cache and
> > >> wit for the world to finish. Pretty hard to teach people to use a nicer
> > >> fdatasync(), when the majority of the cost now becomes flushing the
> > >> cache of that 1TB drive you happen to have 8 partitions on. Good luck
> > >> with that.
> > >>
> > > And, as I am sure that you do know, to add insult to injury, FLUSH_CACHE
> > > is per device (not file system).
> > >
> > > When you issue an fsync() on a disk with multiple partitions, you will
> > > flush the data for all of its partitions from the write cache....
> >
> > Exactly, that's what my (vague) 8 partition reference was for :-)
> > A range flush would be so much more palatable.
>
> Tangential question, but am I right in thinking that BIO_RW_BARRIER
> similarly bars across all partitions, whereas its WRITE_BARRIER and
> DISCARD_BARRIER users would actually prefer it to apply to just one?
All the barriers refer to just that range which the barrier itself
references. The problem with the full device flushes is implementation
on the hardware side, since we can't do small range flushes. So it's not
as-designed, but rather the best we can do...
--
Jens Axboe
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 21:22 ` James Bottomley
@ 2009-03-26 8:59 ` Jens Axboe
0 siblings, 0 replies; 419+ messages in thread
From: Jens Axboe @ 2009-03-26 8:59 UTC (permalink / raw)
To: James Bottomley
Cc: Ric Wheeler, Jeff Garzik, Linus Torvalds, Theodore Tso,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Mar 25 2009, James Bottomley wrote:
> On Wed, 2009-03-25 at 16:25 -0400, Ric Wheeler wrote:
> > Jeff Garzik wrote:
> > > Ric Wheeler wrote:> And, as I am sure that you do know, to add insult
> > > to injury, FLUSH_CACHE
> > >> is per device (not file system).
> > >>
> > >> When you issue an fsync() on a disk with multiple partitions, you
> > >> will flush the data for all of its partitions from the write cache....
> > >
> > > SCSI'S SYNCHRONIZE CACHE command already accepts an (LBA, length)
> > > pair. We could make use of that.
> > >
> > > And I bet we could convince T13 to add FLUSH CACHE RANGE, if we could
> > > demonstrate clear benefit.
> > >
> > > Jeff
> >
> > How well supported is this in SCSI? Can we try it out with a commodity
> > SAS drive?
>
> What do you mean by well supported? The way the SCSI standard is
> written, a device can do a complete cache flush when a range flush is
> requested and still be fully standards compliant. There's no easy way
> to tell if it does a complete cache flush every time other than by
> taking the firmware apart (or asking the manufacturer).
That's the fear of range flushes, if it was added to t13 as well. Unless
that Other OS uses range flushes, most firmware writers would most
likely implement any range as 0...-1 and it wouldn't help us at all. In
fact it would make things worse, as we would have done extra work to
actually find these ranges, unless you went cheap and said 'just flush
this partition'.
--
Jens Axboe
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 6:24 ` Jeff Garzik
@ 2009-03-26 12:49 ` Kyle Moffett
0 siblings, 0 replies; 419+ messages in thread
From: Kyle Moffett @ 2009-03-26 12:49 UTC (permalink / raw)
To: Jeff Garzik
Cc: Linus Torvalds, Matthew Garrett, Theodore Tso, Christoph Hellwig,
Jan Kara, Andrew Morton, Ingo Molnar, Alan Cox, Arjan van de Ven,
Peter Zijlstra, Nick Piggin, Jens Axboe, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Thu, Mar 26, 2009 at 2:24 AM, Jeff Garzik <jeff@garzik.org> wrote:
> Kyle Moffett wrote:
>> Really, I think virtually all of the database programs would be
>> perfectly happy with an "fsbarrier(fd, flags)" syscall, where if "fd"
>> points to a regular file or directory then it instructs the underlying
>> filesystem to do whatever internal barrier it supports, and if not
>> just fail with -ENOTSUPP (so you can fall back to fdatasync(), etc).
>> Perhaps "flags" would allow a "data" or "metadata" barrier, but if not
>> it's not a big issue.
>
> If you want a per-fd barrier call, there is always sync_file_range(2)
The issue is that sync_file_range doesn't seem to be documented to
have any inter-file barrier semantics. Even then, from the manpage it
doesn't look like
write(fd)+sync_file_range(fd,SYNC_FILE_RANGE_WRITE)+write(fd) would
actually prevent the second write from occurring before the first has
actually hit disk (assuming both are within the specified range).
Cheers,
Kyle Moffett
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 8:57 ` Jens Axboe
@ 2009-03-26 14:47 ` Hugh Dickins
2009-03-26 15:46 ` Jens Axboe
0 siblings, 1 reply; 419+ messages in thread
From: Hugh Dickins @ 2009-03-26 14:47 UTC (permalink / raw)
To: Jens Axboe
Cc: Ric Wheeler, Jeff Garzik, Linus Torvalds, Theodore Tso,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Thu, 26 Mar 2009, Jens Axboe wrote:
> On Wed, Mar 25 2009, Hugh Dickins wrote:
> >
> > Tangential question, but am I right in thinking that BIO_RW_BARRIER
> > similarly bars across all partitions, whereas its WRITE_BARRIER and
> > DISCARD_BARRIER users would actually prefer it to apply to just one?
>
> All the barriers refer to just that range which the barrier itself
> references.
Ah, thank you: then I had a fundamental misunderstanding of them,
and need to go away and work that out some more.
Though I didn't read it before asking, doesn't the I/O Barriers section
of Documentation/block/biodoc.txt give a very different impression?
> The problem with the full device flushes is implementation
> on the hardware side, since we can't do small range flushes. So it's not
> as-designed, but rather the best we can do...
Right, that part of it I did get.
Hugh
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 14:47 ` Hugh Dickins
@ 2009-03-26 15:46 ` Jens Axboe
2009-03-26 18:21 ` Hugh Dickins
0 siblings, 1 reply; 419+ messages in thread
From: Jens Axboe @ 2009-03-26 15:46 UTC (permalink / raw)
To: Hugh Dickins
Cc: Ric Wheeler, Jeff Garzik, Linus Torvalds, Theodore Tso,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Thu, Mar 26 2009, Hugh Dickins wrote:
> On Thu, 26 Mar 2009, Jens Axboe wrote:
> > On Wed, Mar 25 2009, Hugh Dickins wrote:
> > >
> > > Tangential question, but am I right in thinking that BIO_RW_BARRIER
> > > similarly bars across all partitions, whereas its WRITE_BARRIER and
> > > DISCARD_BARRIER users would actually prefer it to apply to just one?
> >
> > All the barriers refer to just that range which the barrier itself
> > references.
>
> Ah, thank you: then I had a fundamental misunderstanding of them,
> and need to go away and work that out some more.
>
> Though I didn't read it before asking, doesn't the I/O Barriers section
> of Documentation/block/biodoc.txt give a very different impression?
I'm sensing a miscommunication here... The ordering constraint is across
devices, at least that is how it is implemented. For file system
barriers (like BIO_RW_BARRIER), it could be per-partition instead. Doing
so would involve some changes at the block layer side, not necessarily
trivial. So I think you were asking about ordering, I was answering
about the write guarantee :-)
--
Jens Axboe
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 1:34 ` Linus Torvalds
2009-03-26 2:59 ` Theodore Tso
@ 2009-03-26 16:24 ` Jan Kara
1 sibling, 0 replies; 419+ messages in thread
From: Jan Kara @ 2009-03-26 16:24 UTC (permalink / raw)
To: Linus Torvalds
Cc: Theodore Tso, Andrew Morton, Ingo Molnar, Alan Cox,
Arjan van de Ven, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed 25-03-09 18:34:32, Linus Torvalds wrote:
> On Thu, 26 Mar 2009, Jan Kara wrote:
> >
> > 1) We have to writeout blocks full of zeros on allocation so that we don't
> > expose unallocated data => slight slowdown
>
> Why?
>
> This is in _no_ way different from a regular "write()" system call. And
> there, we just attach the buffers to the page. If something crashes before
> the page actually gets written out, then we'll have hopefully never
> written out the metadata (that's what "data=ordered" means).
Sorry, I wasn't exact enough. We'll attach buffers to the running
transaction and they'll get written out at the transaction commit which is
usually earlier than when the writepage() is called and then later
writepage() will write the data again (this is a consequence of the fact
that JBD commit code just writes buffers without calling
clear_page_dirty_for_io())...
At least ext4 has this fixed because JBD2 already writes out ordered data
via writepages().
Honza
--
Jan Kara <jack@suse.cz>
SUSE Labs, CR
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 15:46 ` Jens Axboe
@ 2009-03-26 18:21 ` Hugh Dickins
2009-03-26 18:32 ` Jens Axboe
0 siblings, 1 reply; 419+ messages in thread
From: Hugh Dickins @ 2009-03-26 18:21 UTC (permalink / raw)
To: Jens Axboe
Cc: Ric Wheeler, Jeff Garzik, Linus Torvalds, Theodore Tso,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Thu, 26 Mar 2009, Jens Axboe wrote:
> On Thu, Mar 26 2009, Hugh Dickins wrote:
> > On Thu, 26 Mar 2009, Jens Axboe wrote:
> > > On Wed, Mar 25 2009, Hugh Dickins wrote:
> > > >
> > > > Tangential question, but am I right in thinking that BIO_RW_BARRIER
> > > > similarly bars across all partitions, whereas its WRITE_BARRIER and
> > > > DISCARD_BARRIER users would actually prefer it to apply to just one?
> > >
> > > All the barriers refer to just that range which the barrier itself
> > > references.
> >
> > Ah, thank you: then I had a fundamental misunderstanding of them,
> > and need to go away and work that out some more.
> >
> > Though I didn't read it before asking, doesn't the I/O Barriers section
> > of Documentation/block/biodoc.txt give a very different impression?
>
> I'm sensing a miscommunication here... The ordering constraint is across
> devices, at least that is how it is implemented. For file system
> barriers (like BIO_RW_BARRIER), it could be per-partition instead. Doing
> so would involve some changes at the block layer side, not necessarily
> trivial. So I think you were asking about ordering, I was answering
> about the write guarantee :-)
Ah, thank you again, perhaps I did understand after all.
So, directing a barrier (WRITE_BARRIER or DISCARD_BARRIER) to a range
of sectors in one partition interposes a barrier into the queue of I/O
across (all partitions of) that whole device.
I think that's not how filesystems really want barriers to behave,
and might tend to discourage us from using barriers more freely.
But I have zero appreciation of whether it's a significant issue
worth non-trivial change - just wanted to get it out into the open.
Hugh
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 18:21 ` Hugh Dickins
@ 2009-03-26 18:32 ` Jens Axboe
2009-03-26 19:00 ` Hugh Dickins
0 siblings, 1 reply; 419+ messages in thread
From: Jens Axboe @ 2009-03-26 18:32 UTC (permalink / raw)
To: Hugh Dickins
Cc: Ric Wheeler, Jeff Garzik, Linus Torvalds, Theodore Tso,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Thu, Mar 26 2009, Hugh Dickins wrote:
> On Thu, 26 Mar 2009, Jens Axboe wrote:
> > On Thu, Mar 26 2009, Hugh Dickins wrote:
> > > On Thu, 26 Mar 2009, Jens Axboe wrote:
> > > > On Wed, Mar 25 2009, Hugh Dickins wrote:
> > > > >
> > > > > Tangential question, but am I right in thinking that BIO_RW_BARRIER
> > > > > similarly bars across all partitions, whereas its WRITE_BARRIER and
> > > > > DISCARD_BARRIER users would actually prefer it to apply to just one?
> > > >
> > > > All the barriers refer to just that range which the barrier itself
> > > > references.
> > >
> > > Ah, thank you: then I had a fundamental misunderstanding of them,
> > > and need to go away and work that out some more.
> > >
> > > Though I didn't read it before asking, doesn't the I/O Barriers section
> > > of Documentation/block/biodoc.txt give a very different impression?
> >
> > I'm sensing a miscommunication here... The ordering constraint is across
> > devices, at least that is how it is implemented. For file system
> > barriers (like BIO_RW_BARRIER), it could be per-partition instead. Doing
> > so would involve some changes at the block layer side, not necessarily
> > trivial. So I think you were asking about ordering, I was answering
> > about the write guarantee :-)
>
> Ah, thank you again, perhaps I did understand after all.
>
> So, directing a barrier (WRITE_BARRIER or DISCARD_BARRIER) to a range
> of sectors in one partition interposes a barrier into the queue of I/O
> across (all partitions of) that whole device.
Correct
> I think that's not how filesystems really want barriers to behave,
> and might tend to discourage us from using barriers more freely.
> But I have zero appreciation of whether it's a significant issue
> worth non-trivial change - just wanted to get it out into the open.
Per-partition definitely makes sense. The problem is that we do sorting
on a per-device basis right now. But it's a good point, I'll try and
take a look at how much work it would be to make it per-partition
instead. It wont be trivial :-)
--
Jens Axboe
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 18:32 ` Jens Axboe
@ 2009-03-26 19:00 ` Hugh Dickins
2009-03-26 19:03 ` Jens Axboe
0 siblings, 1 reply; 419+ messages in thread
From: Hugh Dickins @ 2009-03-26 19:00 UTC (permalink / raw)
To: Jens Axboe
Cc: Ric Wheeler, Jeff Garzik, Linus Torvalds, Theodore Tso,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Thu, 26 Mar 2009, Jens Axboe wrote:
> On Thu, Mar 26 2009, Hugh Dickins wrote:
> >
> > So, directing a barrier (WRITE_BARRIER or DISCARD_BARRIER) to a range
> > of sectors in one partition interposes a barrier into the queue of I/O
> > across (all partitions of) that whole device.
>
> Correct
>
> > I think that's not how filesystems really want barriers to behave,
> > and might tend to discourage us from using barriers more freely.
> > But I have zero appreciation of whether it's a significant issue
> > worth non-trivial change - just wanted to get it out into the open.
>
> Per-partition definitely makes sense. The problem is that we do sorting
> on a per-device basis right now. But it's a good point, I'll try and
> take a look at how much work it would be to make it per-partition
> instead. It wont be trivial :-)
Thanks, that would be interesting.
Trivial bores you anyway, doesn't it?
Hugh
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-26 19:00 ` Hugh Dickins
@ 2009-03-26 19:03 ` Jens Axboe
0 siblings, 0 replies; 419+ messages in thread
From: Jens Axboe @ 2009-03-26 19:03 UTC (permalink / raw)
To: Hugh Dickins
Cc: Ric Wheeler, Jeff Garzik, Linus Torvalds, Theodore Tso,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Thu, Mar 26 2009, Hugh Dickins wrote:
> On Thu, 26 Mar 2009, Jens Axboe wrote:
> > On Thu, Mar 26 2009, Hugh Dickins wrote:
> > >
> > > So, directing a barrier (WRITE_BARRIER or DISCARD_BARRIER) to a range
> > > of sectors in one partition interposes a barrier into the queue of I/O
> > > across (all partitions of) that whole device.
> >
> > Correct
> >
> > > I think that's not how filesystems really want barriers to behave,
> > > and might tend to discourage us from using barriers more freely.
> > > But I have zero appreciation of whether it's a significant issue
> > > worth non-trivial change - just wanted to get it out into the open.
> >
> > Per-partition definitely makes sense. The problem is that we do sorting
> > on a per-device basis right now. But it's a good point, I'll try and
> > take a look at how much work it would be to make it per-partition
> > instead. It wont be trivial :-)
>
> Thanks, that would be interesting.
> Trivial bores you anyway, doesn't it?
You're a good motivator, Hugh!
--
Jens Axboe
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 18:40 ` Stephen Clark
@ 2009-03-26 23:53 ` Mark Lord
0 siblings, 0 replies; 419+ messages in thread
From: Mark Lord @ 2009-03-26 23:53 UTC (permalink / raw)
To: sclark46
Cc: Arjan van de Ven, Jesse Barnes, Theodore Tso, Ingo Molnar,
Alan Cox, Andrew Morton, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
Stephen Clark wrote:
> Arjan van de Ven wrote:
..
>> the people that care use my kernel patch on ext3 ;-)
>> (or the userland equivalent tweak in /etc/rc.local)
>>
>>
>>
> Ok, I bite what is the userland tweak?
## /etc/rc.local:
for i in `pidof kjournald` ; do ionice -c1 -p $i ; done
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 23:23 ` Linus Torvalds
2009-03-25 23:46 ` Bron Gondwana
@ 2009-03-27 0:11 ` Andrew Morton
2009-03-27 0:27 ` Linus Torvalds
2009-03-27 9:58 ` Alan Cox
1 sibling, 2 replies; 419+ messages in thread
From: Andrew Morton @ 2009-03-27 0:11 UTC (permalink / raw)
To: Linus Torvalds
Cc: Theodore Tso, David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed, 25 Mar 2009 16:23:08 -0700 (PDT) Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
>
> On Wed, 25 Mar 2009, Theodore Tso wrote:
> > >
> > > The problem being that unlike the ratio, there's no sane default value
> > > that you can at least argue is not _entirely_ pointless.
> >
> > Well, if the maximum time that someone wants to wait for an fsync() to
> > return is one second, and the RAID array can write 100MB/sec
>
> How are you going to tell the kernel that the RAID array can write
> 100MB/s?
>
> The kernel has no idea.
>
userspace can do it quite easily. Run a self-tuning script after
installation and when the disk hardware changes significantly.
It is very disappointing that nobody appears to have attempted to do
_any_ sensible tuning of these controls in all this time - we just keep
thrashing around trying to pick better magic numbers in the base kernel.
Maybe we should set the tunables to 99.9% to make it suck enough to
motivate someone.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 0:11 ` Andrew Morton
@ 2009-03-27 0:27 ` Linus Torvalds
2009-03-27 0:47 ` Andrew Morton
2009-03-27 0:51 ` Linus Torvalds
2009-03-27 9:58 ` Alan Cox
1 sibling, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-27 0:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Theodore Tso, David Rees, Jesper Krogh, Linux Kernel Mailing List
On Thu, 26 Mar 2009, Andrew Morton wrote:
>
> userspace can do it quite easily. Run a self-tuning script after
> installation and when the disk hardware changes significantly.
Uhhuh.
"user space can do it".
That's the global cop-out.
The fact is, user-space isn't doing it, and never has done anything even
_remotely_ like it.
In fact, I claim that it's impossible to do. If you give me a number for
the throughput of your harddisk, I will laugh in your face and call you a
moron.
Why? Because no such number exists. It depends on the access patterns. If
you write one large file, the number will be very different (and not just
by a few percent) from the numbers of you writing thousands of small
files, or re-writing a large database in random order.
So no. User space CAN NOT DO IT, and the fact that you even claim
something like that shows a distinct lack of thought.
> Maybe we should set the tunables to 99.9% to make it suck enough to
> motivate someone.
The only times tunables have worked for us is when they auto-tune.
IOW, we don't have "use 35% of memory for buffer cache" tunables, we just
dynamically auto-tune memory use. And no, we don't expect user space to
run some "tuning program for their load" either.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 0:27 ` Linus Torvalds
@ 2009-03-27 0:47 ` Andrew Morton
2009-03-27 1:03 ` Linus Torvalds
2009-03-27 0:51 ` Linus Torvalds
1 sibling, 1 reply; 419+ messages in thread
From: Andrew Morton @ 2009-03-27 0:47 UTC (permalink / raw)
To: Linus Torvalds
Cc: Theodore Tso, David Rees, Jesper Krogh, Linux Kernel Mailing List
On Thu, 26 Mar 2009 17:27:43 -0700 (PDT) Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
>
> On Thu, 26 Mar 2009, Andrew Morton wrote:
> >
> > userspace can do it quite easily. Run a self-tuning script after
> > installation and when the disk hardware changes significantly.
>
> Uhhuh.
>
> "user space can do it".
>
> That's the global cop-out.
userspace can get closer than the kernel can.
> The fact is, user-space isn't doing it, and never has done anything even
> _remotely_ like it.
>
> In fact, I claim that it's impossible to do. If you give me a number for
> the throughput of your harddisk, I will laugh in your face and call you a
> moron.
>
> Why? Because no such number exists. It depends on the access patterns.
Those access patterns are observable!
> If
> you write one large file, the number will be very different (and not just
> by a few percent) from the numbers of you writing thousands of small
> files, or re-writing a large database in random order.
>
> So no. User space CAN NOT DO IT, and the fact that you even claim
> something like that shows a distinct lack of thought.
userspace can get closer. Even if it's asking the user "what sort of
applications will this machine be running" and then use a set of canned
tunables based on that.
Better would be to observe system behaviour, perhaps in real time and
make adjustments.
> > Maybe we should set the tunables to 99.9% to make it suck enough to
> > motivate someone.
>
> The only times tunables have worked for us is when they auto-tune.
>
> IOW, we don't have "use 35% of memory for buffer cache" tunables, we just
> dynamically auto-tune memory use. And no, we don't expect user space to
> run some "tuning program for their load" either.
>
This particular case is exceptional - it's just too hard for the kernel
to be able to predict the future for this one.
It wouldn't be terribly hard for a userspace daemon to produce better
results than we can achieve in-kernel. That might of course require
additional kernel work to support it well.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 0:27 ` Linus Torvalds
2009-03-27 0:47 ` Andrew Morton
@ 2009-03-27 0:51 ` Linus Torvalds
2009-03-27 1:03 ` Andrew Morton
1 sibling, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-27 0:51 UTC (permalink / raw)
To: Andrew Morton
Cc: Theodore Tso, David Rees, Jesper Krogh, Linux Kernel Mailing List
On Thu, 26 Mar 2009, Linus Torvalds wrote:
>
> The only times tunables have worked for us is when they auto-tune.
>
> IOW, we don't have "use 35% of memory for buffer cache" tunables, we just
> dynamically auto-tune memory use. And no, we don't expect user space to
> run some "tuning program for their load" either.
IOW, what we could reasonably do is something along the lines of:
- start off with some reasonable value for max background dirty (per
block device) that defaults to something sane (quite possibly based on
simply memory size).
- assume that "foreground dirty" is just always 2* background dirty.
- if we hit the "max foreground dirty" during memory allocation, then we
shrink the background dirty value (logic: we never want to have to wait
synchronously)
- if we hit some maximum latency on writeback, shrink dirty aggressively
and based on how long the latency was (because at that point we have a
real _measure_ of how costly it is with that load).
- if we start doing background dirtying, but never hit the foreground
dirty even in dirty balancing (ie when a writer is actually _writing_,
as opposed to hitting it when allocating memory by a non-writer), then
slowly open up the window - we may be limiting too early.
.. add heuristics to taste. The point being, that if we do this based on
real loads, and based on hitting the real problems, then we might actually
be getting somewhere. In particular, if the filesystem sucks at writeout
(ie the limiter is not the _disk_, but the filesystem serialization), then
it should automatically also shrink the max dirty state.
The tunable then could become the maximum latency we accept or something
like that. Or the hysteresis limits/rules for the soft "grow" or "shrink"
events. At that point, maybe we could even find something that works for
most people.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 0:51 ` Linus Torvalds
@ 2009-03-27 1:03 ` Andrew Morton
0 siblings, 0 replies; 419+ messages in thread
From: Andrew Morton @ 2009-03-27 1:03 UTC (permalink / raw)
To: Linus Torvalds
Cc: Theodore Tso, David Rees, Jesper Krogh, Linux Kernel Mailing List
On Thu, 26 Mar 2009 17:51:44 -0700 (PDT) Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
>
> On Thu, 26 Mar 2009, Linus Torvalds wrote:
> >
> > The only times tunables have worked for us is when they auto-tune.
> >
> > IOW, we don't have "use 35% of memory for buffer cache" tunables, we just
> > dynamically auto-tune memory use. And no, we don't expect user space to
> > run some "tuning program for their load" either.
>
> IOW, what we could reasonably do is something along the lines of:
>
> - start off with some reasonable value for max background dirty (per
> block device) that defaults to something sane (quite possibly based on
> simply memory size).
>
> - assume that "foreground dirty" is just always 2* background dirty.
>
> - if we hit the "max foreground dirty" during memory allocation, then we
> shrink the background dirty value (logic: we never want to have to wait
> synchronously)
>
> - if we hit some maximum latency on writeback, shrink dirty aggressively
> and based on how long the latency was (because at that point we have a
> real _measure_ of how costly it is with that load).
>
> - if we start doing background dirtying, but never hit the foreground
> dirty even in dirty balancing (ie when a writer is actually _writing_,
> as opposed to hitting it when allocating memory by a non-writer), then
> slowly open up the window - we may be limiting too early.
>
> .. add heuristics to taste. The point being, that if we do this based on
> real loads, and based on hitting the real problems, then we might actually
> be getting somewhere. In particular, if the filesystem sucks at writeout
> (ie the limiter is not the _disk_, but the filesystem serialization), then
> it should automatically also shrink the max dirty state.
>
> The tunable then could become the maximum latency we accept or something
> like that. Or the hysteresis limits/rules for the soft "grow" or "shrink"
> events. At that point, maybe we could even find something that works for
> most people.
>
hm.
It may not be too hard to account for seekiness. Simplest case: if we
dirty a page and that page is file-contiguous to another already dirty
page then don't increment the dirty page count by "1": increment it by
0.01.
Another simple case would be to keep track of the _number_ of dirty
inodes rather than simply lumping all dirty pages together.
And then there's metadata. The dirty balancing code doesn't account
for dirty inodes _at all_ at present.
(Many years ago there was a bug wherein we could have zillions of dirty
inodes and exactly zero dirty pages, and the writeback code wouldn't
trigger at all - the inodes would just sit there until a page got
dirtied - this might still be there).
Then again, perhaps we don't need all those discrete heuristic things.
Maybe it can all be done in mark_buffer_dirty(). Do some clever
math+data-structure to track the seekiness of our dirtiness. Delayed
allocation would mess that up though.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 0:47 ` Andrew Morton
@ 2009-03-27 1:03 ` Linus Torvalds
2009-03-27 1:25 ` Andrew Morton
2009-03-27 3:23 ` Theodore Tso
0 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-27 1:03 UTC (permalink / raw)
To: Andrew Morton
Cc: Theodore Tso, David Rees, Jesper Krogh, Linux Kernel Mailing List
On Thu, 26 Mar 2009, Andrew Morton wrote:
>
> userspace can get closer than the kernel can.
Andrew, that's SIMPLY NOT TRUE.
You state that without any amount of data to back it up, as if it was some
kind of truism. It's not.
> > Why? Because no such number exists. It depends on the access patterns.
>
> Those access patterns are observable!
Not by user space they aren't, and not dynamically. At least not as well
as they are for the kernel.
So when you say "user space can do it better", you base that statement on
exactly what? The night-time whisperings of the small creatures living in
your basement?
The fact is, user space can't do better. And perhaps equally importantly,
we have 16 years of history with user space tuning, and that history tells
us unequivocally that user space never does anything like this.
Name _one_ case where even simple tuning has happened, and where it has
actually _worked_?
I claim you cannot. And I have counter-examples. Just look at the utter
fiasco that was user-space "tuning" of nice-levels that distros did. Ooh.
Yeah, it didn't work so well, did it? Especially not when the kernel
changed subtly, and the "tuning" that had been done was shown to be
utter crap.
> > dynamically auto-tune memory use. And no, we don't expect user space to
> > run some "tuning program for their load" either.
> >
>
> This particular case is exceptional - it's just too hard for the kernel
> to be able to predict the future for this one.
We've never even tried.
The dirty limit was never about trying to tune things, it started out as
protection against deadlocks and other catastrophic failures. We used to
allow 50% dirty or something like that (which is not unlike our old buffer
cache limits, btw), and then when we had a HIGHMEM lockup issue it got
severly cut down. At no point was that number even _trying_ to limit
latency, other than as a "hey, it's probably good to not have all memory
tied up in dirty pages" kind of secondary way.
I claim that the whole balancing between inodes/dentries/pagecache/swap/
anonymous memory/what-not is likely a much harder problem. And no, I'm not
claiming that we "solved" that problem, but we've clearly done a pretty
good job over the years of getting to a reasonable end result.
Sure, you can still tune "swappiness" (nobody much does), but even there
you don't actually tune how much memory you use for swap cache, you do
more of a "meta-tuning" where you tune how the auto-tuning works.
That is something we have shown to work historically.
That said, the real problem isn't even the tuning. The real problem is a
filesystem issue. If "fsync()" cost was roughly proportional to the size
of the changes to the file we are fsync'ing, nobody would even complain.
Everybody accepts that if you've written a 20MB file and then call
"fsync()" on it, it's going to take a while. But when you've written a 2kB
file, and "fsync()" takes 20 seconds, because somebody else is just
writing normally, _that_ is a bug. And it is actually almost totally
unrelated to the whole 'dirty_limit' thing.
At least it _should_ be.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 1:03 ` Linus Torvalds
@ 2009-03-27 1:25 ` Andrew Morton
2009-03-27 2:21 ` David Rees
` (4 more replies)
2009-03-27 3:23 ` Theodore Tso
1 sibling, 5 replies; 419+ messages in thread
From: Andrew Morton @ 2009-03-27 1:25 UTC (permalink / raw)
To: Linus Torvalds
Cc: Theodore Tso, David Rees, Jesper Krogh, Linux Kernel Mailing List
On Thu, 26 Mar 2009 18:03:15 -0700 (PDT) Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
>
> On Thu, 26 Mar 2009, Andrew Morton wrote:
> >
> > userspace can get closer than the kernel can.
>
> Andrew, that's SIMPLY NOT TRUE.
>
> You state that without any amount of data to back it up, as if it was some
> kind of truism. It's not.
I've seen you repeatedly fiddle the in-kernel defaults based on
in-field experience. That could just as easily have been done in
initscripts by distros, and much more effectively because it doesn't
need a new kernel. That's data.
The fact that this hasn't even been _attempted_ (afaik) is deplorable.
Why does everyone just sit around waiting for the kernel to put a new
value into two magic numbers which userspace scripts could have set?
My /etc/rc.local has been tweaking dirty_ratio, dirty_background_ratio
and swappiness for many years. I guess I'm just incredibly advanced.
> Everybody accepts that if you've written a 20MB file and then call
> "fsync()" on it, it's going to take a while. But when you've written a 2kB
> file, and "fsync()" takes 20 seconds, because somebody else is just
> writing normally, _that_ is a bug. And it is actually almost totally
> unrelated to the whole 'dirty_limit' thing.
>
> At least it _should_ be.
That's different. It's inherent JBD/ext3-ordered brain damage.
Unfixable without turning the fs into something which just isn't jbd/ext3
any more. data=writeback is a workaround, with the obvious integrity
issues.
The JBD journal is a massive designed-in contention point. It's why
for several years I've been telling anyone who will listen that we need
a new fs. Hopefully our response to all these problems will soon be
"did you try btrfs?".
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 1:25 ` Andrew Morton
@ 2009-03-27 2:21 ` David Rees
2009-03-27 3:03 ` Matthew Garrett
2009-03-27 3:36 ` Dave Jones
2009-03-27 3:01 ` Matthew Garrett
` (3 subsequent siblings)
4 siblings, 2 replies; 419+ messages in thread
From: David Rees @ 2009-03-27 2:21 UTC (permalink / raw)
To: Andrew Morton
Cc: Linus Torvalds, Theodore Tso, Jesper Krogh,
Linux Kernel Mailing List
On Thu, Mar 26, 2009 at 6:25 PM, Andrew Morton
<akpm@linux-foundation.org> wrote:
> Why does everyone just sit around waiting for the kernel to put a new
> value into two magic numbers which userspace scripts could have set?
>
> My /etc/rc.local has been tweaking dirty_ratio, dirty_background_ratio
> and swappiness for many years. I guess I'm just incredibly advanced.
The only people who bother to tune those values are people who get
annoyed enough to do the research to see if it's something that's
tunable - hackers.
Everyone else simply says "man, Linux *sucks*" and lives life hoping
it will get better some day. From posts in this thread - even most
developers just live with it, and have been doing so for *years*.
Even Linux distros don't bother modifying init scripts - they patch
them into kernel instead. I routinely watch Fedora kernel changelogs
and found these comments in the changelog recently:
* Mon Mar 23 2009 xx <xx@xx.xx> 2.6.29-2
- Change default swappiness setting from 60 to 30.
* Thu Mar 19 2009 xx <xx@xx.xx> 2.6.29-0.66.rc8.git4
- Raise default vm dirty data limits from 5/10 to 10/20 percent.
Why are the going in the kernel package instead of /etc/sysctl.conf?
Why is Fedora deviating from upstream? (probably sqlite performance)
Maybe there's a good reason to put them into the kernel - for some
reason the latest kernels perform better with those values where the
previous ones didn't. But still - why ship those 2 bytes of
configuration in a 75MB package instead of one that could be a
fraction of that size?
Does *any* distro fiddle those bits in userspace instead of patching the kernel?
-Dave
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 1:25 ` Andrew Morton
2009-03-27 2:21 ` David Rees
@ 2009-03-27 3:01 ` Matthew Garrett
2009-03-27 3:38 ` Linus Torvalds
` (2 subsequent siblings)
4 siblings, 0 replies; 419+ messages in thread
From: Matthew Garrett @ 2009-03-27 3:01 UTC (permalink / raw)
To: Andrew Morton
Cc: Linus Torvalds, Theodore Tso, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Thu, Mar 26, 2009 at 06:25:19PM -0700, Andrew Morton wrote:
> On Thu, 26 Mar 2009 18:03:15 -0700 (PDT) Linus Torvalds <torvalds@linux-foundation.org> wrote:
> > You state that without any amount of data to back it up, as if it was some
> > kind of truism. It's not.
>
> I've seen you repeatedly fiddle the in-kernel defaults based on
> in-field experience. That could just as easily have been done in
> initscripts by distros, and much more effectively because it doesn't
> need a new kernel. That's data.
If there's a sensible default then it belongs in the kernel. Forcing
these decisions out to userspace just means that every distribution
needs to work out what these settings are, and the evidence we've seen
when they attempt to do this is that we end up with things like broken
cpufreq parameters because these are difficult problems. The simple
reality is that almost every single distribution lacks developers with
sufficient understanding of the problem to make the correct choice.
The typical distribution lifecycle is significantly longer than a kernel
release cycle. It's massively easier for people to pull updated kernels.
> Why does everyone just sit around waiting for the kernel to put a new
> value into two magic numbers which userspace scripts could have set?
If the distribution can set a globally correct value then that globally
correct value should be there in the first place!
> My /etc/rc.local has been tweaking dirty_ratio, dirty_background_ratio
> and swappiness for many years. I guess I'm just incredibly advanced.
And how have you got these values pushed into other distributions? Is
your rc.local available anywhere?
Linus is absolutely right here. Pushing these decisions out to userspace
means duplicated work in the best case - in the worst case it means most
users end up with the wrong value.
--
Matthew Garrett | mjg59@srcf.ucam.org
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 2:21 ` David Rees
@ 2009-03-27 3:03 ` Matthew Garrett
2009-03-27 3:36 ` Dave Jones
1 sibling, 0 replies; 419+ messages in thread
From: Matthew Garrett @ 2009-03-27 3:03 UTC (permalink / raw)
To: David Rees
Cc: Andrew Morton, Linus Torvalds, Theodore Tso, Jesper Krogh,
Linux Kernel Mailing List
On Thu, Mar 26, 2009 at 07:21:08PM -0700, David Rees wrote:
> Does *any* distro fiddle those bits in userspace instead of patching the kernel?
Given that the optimal values of these tunables often seems to vary
between kernel versions, it's easier to just put them in the kernel.
--
Matthew Garrett | mjg59@srcf.ucam.org
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 1:03 ` Linus Torvalds
2009-03-27 1:25 ` Andrew Morton
@ 2009-03-27 3:23 ` Theodore Tso
2009-03-27 3:47 ` Matthew Garrett
1 sibling, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-03-27 3:23 UTC (permalink / raw)
To: Linus Torvalds
Cc: Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Thu, Mar 26, 2009 at 06:03:15PM -0700, Linus Torvalds wrote:
>
> Everybody accepts that if you've written a 20MB file and then call
> "fsync()" on it, it's going to take a while. But when you've written a 2kB
> file, and "fsync()" takes 20 seconds, because somebody else is just
> writing normally, _that_ is a bug. And it is actually almost totally
> unrelated to the whole 'dirty_limit' thing.
Yeah, well, it's caused by data=ordered, which is an ext3 unique
thing; no other filesystem (or operating system) has such a feature.
I'm beginning to wish we hadn't implemented it. Yeah, it solved a
security problem (which delayed allocation also solves), but it
trained application programs to be careless about fsync(), and it's
caused us so many other problems, including the fsync() and unrelated
commit latency problems.
We are where we are, though, and people have been trained to think
they don't need fsync(), so we're going to have to deal with the
problem by having these implied fsync for cases like
replace-via-rename, and in addition to that, some kind of hueristic to
force out writes early to avoid these huge write latencies. It would
be good to make it be autotuning it so that filesystems that don't do
ext3 data=ordered don't have to pay the price of having to force out
writes so aggressively early (since in some cases if the file
subsequently is deleted, we might be able to optimize out the write
altogether --- and that's good for SSD endurance).
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 2:21 ` David Rees
2009-03-27 3:03 ` Matthew Garrett
@ 2009-03-27 3:36 ` Dave Jones
1 sibling, 0 replies; 419+ messages in thread
From: Dave Jones @ 2009-03-27 3:36 UTC (permalink / raw)
To: David Rees
Cc: Andrew Morton, Linus Torvalds, Theodore Tso, Jesper Krogh,
Linux Kernel Mailing List
On Thu, Mar 26, 2009 at 07:21:08PM -0700, David Rees wrote:
> * Mon Mar 23 2009 xx <xx@xx.xx> 2.6.29-2
> - Change default swappiness setting from 60 to 30.
>
> * Thu Mar 19 2009 xx <xx@xx.xx> 2.6.29-0.66.rc8.git4
> - Raise default vm dirty data limits from 5/10 to 10/20 percent.
>
> Why are the going in the kernel package instead of /etc/sysctl.conf?
At least in part, because rpm sucks.
If a user has editted /etc/sysctl.conf, upgrading the initscripts package
won't change that file.
Dave
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 1:25 ` Andrew Morton
2009-03-27 2:21 ` David Rees
2009-03-27 3:01 ` Matthew Garrett
@ 2009-03-27 3:38 ` Linus Torvalds
2009-03-27 3:59 ` Linus Torvalds
2009-03-28 5:06 ` Ingo Molnar
2009-04-01 21:03 ` Lennart Sorensen
4 siblings, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-27 3:38 UTC (permalink / raw)
To: Andrew Morton
Cc: Theodore Tso, David Rees, Jesper Krogh, Linux Kernel Mailing List
On Thu, 26 Mar 2009, Andrew Morton wrote:
>
> Why does everyone just sit around waiting for the kernel to put a new
> value into two magic numbers which userspace scripts could have set?
>
> My /etc/rc.local has been tweaking dirty_ratio, dirty_background_ratio
> and swappiness for many years. I guess I'm just incredibly advanced.
.. and as a result you're also testing something that nobody else is.
Look at the complaints from people about fsync behavior that Ted says he
cannot see. Let me guess: it's because Ted probably has tweaked his
environment, because he is advanced. As a result, other people see
problems, he does not.
That's not "advanced". That's totally f*cking broken.
Having different distributions tweak all those tweakables is just even
_more_ so. It's the anti-thesis of "advanced". It's just stupid.
We should aim to get it right. The "user space can tweak any numbers they
want" is ALWAYS THE WRONG ANSWER. It's a cop-out, but more importantly,
it's a cop-out that doesn't even work, and that just results in everybody
having different setups. Then nobody is happy.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 3:23 ` Theodore Tso
@ 2009-03-27 3:47 ` Matthew Garrett
2009-03-27 5:13 ` Theodore Tso
0 siblings, 1 reply; 419+ messages in thread
From: Matthew Garrett @ 2009-03-27 3:47 UTC (permalink / raw)
To: Theodore Tso, Linus Torvalds, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Thu, Mar 26, 2009 at 11:23:01PM -0400, Theodore Tso wrote:
> Yeah, well, it's caused by data=ordered, which is an ext3 unique
> thing; no other filesystem (or operating system) has such a feature.
> I'm beginning to wish we hadn't implemented it. Yeah, it solved a
> security problem (which delayed allocation also solves), but it
> trained application programs to be careless about fsync(), and it's
> caused us so many other problems, including the fsync() and unrelated
> commit latency problems.
Oh, for the love of a whole range of mythological figures. ext3 didn't
train application programmers that they could be careless about fsync().
It gave them functionality that they wanted, ie the ability to do things
like rename a file over another one with the expectation that these
operations would actually occur in the same order that they were
generated. More to the point, it let them do this *without* having to
call fsync(), resulting in a significant improvement in filesystem
usability.
I'm utterly and screamingly bored of this "Blame userspace" attitude.
The simple fact of the matter is that ext4 was designed without paying
any attention to how the majority of applications behave. fsync() isn't
the interface people want. ext3 demonstrated that a filesystem could be
written that made life easier for application authors. Why on earth
would anyone think that taking a step back by requiring fsync() in a
wider range of circumstances was a good idea?
--
Matthew Garrett | mjg59@srcf.ucam.org
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 3:38 ` Linus Torvalds
@ 2009-03-27 3:59 ` Linus Torvalds
2009-03-28 23:52 ` david
0 siblings, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-27 3:59 UTC (permalink / raw)
To: Andrew Morton
Cc: Theodore Tso, David Rees, Jesper Krogh, Linux Kernel Mailing List
On Thu, 26 Mar 2009, Linus Torvalds wrote:
>
> We should aim to get it right. The "user space can tweak any numbers they
> want" is ALWAYS THE WRONG ANSWER. It's a cop-out, but more importantly,
> it's a cop-out that doesn't even work, and that just results in everybody
> having different setups. Then nobody is happy.
In fact it results in "everybody" just having the distro defaults, which
in some cases then depend on things like which particular version they
initially installed things with (because some decisions end up being
codified in long-term memory by that initial install - like the size of
the journal when you mkfs'd your filesystem, or the alignment of your
partitions, or whatever).
The exception, of course, ends up being power-users that then tweak things
on their own.
Me, I may be a power user, but I absolutely refuse to touch default
values. If they are wrong, they should be fixed. I don't want to add
"relatime" to my /etc/fstab, because then the next time I install, I'll
forget - and if I really need to do that, then the kernel should have
already done it for me as the default choice.
I also don't want to say that "Fedora should just do it right" (I'll
complain about things Fedora does badly, but not setting magic values in
/proc is not one of them), because then even if Fedora _were_ to get
things right, others won't. Or even worse, somebody will point that SuSE
or Ubuntu _did_ do it right, but the distro I happen to use is doing the
wrong thing.
And yes, I could do my own site-specific tweaks, but again, why should I?
If the tweak really is needed, I should put it in the generic kernel. I
don't do anything odd.
End result: regardless of scenario, depending on user-land tweaking is
always the wrong thing. It's the wrong thing for distributions (they'd all
need to do the exact same thing anyway, or chaos reigns, so it might as
well be a kernel default), and it's the wrong thing for individuals
(because 99.9% of individuals won't know what to do, and the remaining
0.1% should be trying to improve _other_ peoples experiences, not just
their own!).
The only excuse _ever_ for user-land tweaking is if you do something
really odd. Say that you want to get the absolutely best OLTP numbers you
can possibly get - with no regards for _any_ other workload. In that case,
you want to tweak the numbers for that exact load, and the exact machine
that runs it - and the result is going to be a totally worthless number
(since it's just benchmarketing and doesn't actually reflect any real
world scenario), but hey, that's what benchmarketing is all about.
Or say that you really are a very embedded environment, with a very
specific load. A router, a cellphone, a base station, whatever - you do
one thing, and you're not trying to be a general purpose machine. Then you
can tweak for that load. But not otherwise.
If you don't have any magical odd special workloads, you shouldn't need to
tweak a single kernel knob. Because if you need to, then the kernel is
doing something wrong to begin with.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 3:47 ` Matthew Garrett
@ 2009-03-27 5:13 ` Theodore Tso
2009-03-27 5:57 ` Matthew Garrett
2009-04-03 12:39 ` Pavel Machek
0 siblings, 2 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-27 5:13 UTC (permalink / raw)
To: Matthew Garrett
Cc: Linus Torvalds, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 03:47:05AM +0000, Matthew Garrett wrote:
> Oh, for the love of a whole range of mythological figures. ext3 didn't
> train application programmers that they could be careless about fsync().
> It gave them functionality that they wanted, ie the ability to do things
> like rename a file over another one with the expectation that these
> operations would actually occur in the same order that they were
> generated. More to the point, it let them do this *without* having to
> call fsync(), resulting in a significant improvement in filesystem
> usability.
Matthew,
There were plenty of applications that were written for Unix *and*
Linux systems before ext3 existed, and they worked just fine. Back
then, people were drilled into the fact that they needed to use
fsync(), and fsync() wan't expensive, so there wasn't a big deal in
terms of usability. The fact that fsync() was expensive was precisely
because of ext3's data=ordered problem. Writing files safely meant
that you had to check error returns from fsync() *and* close().
In fact, if you care about making sure that data doesn't get lost due
to disk errors, you *must* call fsync(). Pavel may have complained
that fsync() can sometimes drop errors if some other process also has
the file open and calls fsync() --- but if you don't, and you rely on
ext3 to magically write the data blocks out as a side effect of the
commit in data=ordered mode, there's no way to signal the write error
to the application, and you are *guaranteed * to lose the I/O error
indication.
I can tell you quite authoritatively that we didn't implement
data=ordered to make life easier for application writers, and
application writers didn't come to ext3 developers asking for this
convenience. It may have **accidentally** given them convenience that
they wanted, but it also made fsync() slow.
> I'm utterly and screamingly bored of this "Blame userspace" attitude.
I'm not blaming userspace. I'm blaming ourselves, for implementing an
attractive nuisance, and not realizing that we had implemented an
attractive nuisance; which years later, is also responsible for these
latency problems, both with and without fsync() ---- *and* which have
also traied people into believing that fsync() is always expensive,
and must be avoided at all costs --- which had not previously been
true!
If I had to do it all over again, I would have argued with Stephen
about making data=writeback the default, which would have provided
behaviour on crash just like ext2, except that we wouldn't have to
fsck the partition afterwards. Back then, people lived with the
potential security exposure on a crash, and they lived with the fact
that you had to use fsync(), or manually type "sync", if you wanted to
guarantee that data would be safely written to disk. And you know
what? Things had been this way with Unix systems for 31 years before
ext3 came on the scene, and things worked pretty well during those
three decades.
So again, let it make it clear, I'm not "blaming userspace". I'm
blaming ext3 data=ordered mode. But it's trained application writers
to program systems a certain way, and it's trained them to assume that
fsync() is always evil, and they outnumber us kernel programmers, and
so we are where we are. And data=ordered mode is also responsible for
these write latency problems which seems to make Ingo so cranky ---
and rightly so. It all comes from the same source.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 5:13 ` Theodore Tso
@ 2009-03-27 5:57 ` Matthew Garrett
2009-03-27 6:21 ` Matthew Garrett
2009-04-03 12:39 ` Pavel Machek
1 sibling, 1 reply; 419+ messages in thread
From: Matthew Garrett @ 2009-03-27 5:57 UTC (permalink / raw)
To: Theodore Tso, Linus Torvalds, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 01:13:39AM -0400, Theodore Tso wrote:
> There were plenty of applications that were written for Unix *and*
> Linux systems before ext3 existed, and they worked just fine. Back
> then, people were drilled into the fact that they needed to use
> fsync(), and fsync() wan't expensive, so there wasn't a big deal in
> terms of usability. The fact that fsync() was expensive was precisely
> because of ext3's data=ordered problem. Writing files safely meant
> that you had to check error returns from fsync() *and* close().
And now life is better. UNIX's error handling has always meant that it's
effectively impossible to ensure that data hits disk if you wander into
a variety of error conditions, and by and large it's simply not worth
worrying about them. You're generally more likely to hit a kernel bug or
suffer hardware failure than find an error condition that can actually
be handled in a sensible way, and the probability/effectiveness ratio is
sufficiently low that there are better ways to spend your time unless
you're writing absolutely mission critical code. So let's not focus on
the risk of data loss from failing to check certain error conditions.
It's a tiny risk compared to power loss.
> I can tell you quite authoritatively that we didn't implement
> data=ordered to make life easier for application writers, and
> application writers didn't come to ext3 developers asking for this
> convenience. It may have **accidentally** given them convenience that
> they wanted, but it also made fsync() slow.
It not only gave them that convenience, it *guaranteed* that
convenience. And with ext3 being the standard filesystem in the Linux
world, and every other POSIX system being by and large irrelevant[1],
the real world effect of that was that Linux gave you that guarantee.
> > I'm utterly and screamingly bored of this "Blame userspace" attitude.
>
> I'm not blaming userspace. I'm blaming ourselves, for implementing an
> attractive nuisance, and not realizing that we had implemented an
> attractive nuisance; which years later, is also responsible for these
> latency problems, both with and without fsync() ---- *and* which have
> also traied people into believing that fsync() is always expensive,
> and must be avoided at all costs --- which had not previously been
> true!
But you're still arguing that applications should start using fsync().
I'm arguing that not only is this pointless (most of this code will
never be "fixed") but it's also regressive. In most cases applications
don't want the guarantees that fsync() makes, and given that we're going
to have people running on ext3 for years to come they also don't want
the performance hit that fsync() brings. Filesystems should just do the
right thing, rather than losing people's data and then claiming that
it's fine because POSIX said they could.
> If I had to do it all over again, I would have argued with Stephen
> about making data=writeback the default, which would have provided
> behaviour on crash just like ext2, except that we wouldn't have to
> fsck the partition afterwards. Back then, people lived with the
> potential security exposure on a crash, and they lived with the fact
> that you had to use fsync(), or manually type "sync", if you wanted to
> guarantee that data would be safely written to disk. And you know
> what? Things had been this way with Unix systems for 31 years before
> ext3 came on the scene, and things worked pretty well during those
> three decades.
Well, no. fsync() didn't appear in early Unix, so what people were
actually willing to live with was restoring from backups if the system
crashed. I'd argue that things are somewhat better these days,
especially now that we're used to filesystems that don't require us to
fsync(), close(), fsync the directory and possibly jump through even
more hoops if faced with a pathological interpretation of POSIX.
Progress is a good thing. The initial behaviour of ext4 in this respect
wasn't progress.
And, really, I'm kind of amused at someone arguing for a given behaviour
on the basis of POSIX while also suggesting that sync() is in any way
helpful for guaranteeing that data is on disk.
> So again, let it make it clear, I'm not "blaming userspace". I'm
> blaming ext3 data=ordered mode. But it's trained application writers
> to program systems a certain way, and it's trained them to assume that
> fsync() is always evil, and they outnumber us kernel programmers, and
> so we are where we are. And data=ordered mode is also responsible for
> these write latency problems which seems to make Ingo so cranky ---
> and rightly so. It all comes from the same source.
No. People continue to use fsync() where fsync() should be used - for
guaranteeing that given information has hit disk. The problem is that
you're arguing that application should use fsync() even when they don't
want or need that guarantee. If anything, ext3 has been helpful in
encouraging people to only use fsync() when they really need to - and
that's a win for everyone.
[1] MacOS has users, but it's not a significant market for pure POSIX
applications so isn't really an interesting counterexample
--
Matthew Garrett | mjg59@srcf.ucam.org
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 5:57 ` Matthew Garrett
@ 2009-03-27 6:21 ` Matthew Garrett
2009-03-27 11:24 ` Theodore Tso
0 siblings, 1 reply; 419+ messages in thread
From: Matthew Garrett @ 2009-03-27 6:21 UTC (permalink / raw)
To: Theodore Tso, Linus Torvalds, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 05:57:50AM +0000, Matthew Garrett wrote:
> Well, no. fsync() didn't appear in early Unix, so what people were
> actually willing to live with was restoring from backups if the system
> crashed. I'd argue that things are somewhat better these days,
> especially now that we're used to filesystems that don't require us to
> fsync(), close(), fsync the directory and possibly jump through even
> more hoops if faced with a pathological interpretation of POSIX.
> Progress is a good thing. The initial behaviour of ext4 in this respect
> wasn't progress.
And, hey, fsync didn't make POSIX proper until 1996. It's not like
authors were able to depend on it for a significant period of time
before ext3 hit the scene.
(It could be argued that most relevant Unices implemented fsync() even
before then, so its status in POSIX was broadly irrelevant. The obvious
counterargument is that most relevant Unix filesystems ensure that data
is written before a clobbering rename() is carried out, so POSIX is
again not especially releant)
--
Matthew Garrett | mjg59@srcf.ucam.org
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 20:25 ` Jeff Garzik
2009-03-25 20:40 ` Linus Torvalds
@ 2009-03-27 7:46 ` Jens Axboe
1 sibling, 0 replies; 419+ messages in thread
From: Jens Axboe @ 2009-03-27 7:46 UTC (permalink / raw)
To: Jeff Garzik
Cc: no, To-header, on, "input <", ; Linus Torvalds,
Theodore Tso, Ingo Molnar, Alan Cox, Arjan van de Ven,
Andrew Morton, Peter Zijlstra, Nick Piggin, David Rees,
Jesper Krogh, Linux Kernel Mailing List
Jeff, if you drop my CC on reply, I wont see your messages for ages.
On Wed, Mar 25 2009, Jeff Garzik wrote:
> Jens Axboe wrote:
>> On Wed, Mar 25 2009, Jeff Garzik wrote:
>>> Stating "fsync already does that" borders on false, because that assumes
>>> (a) the user has a fs that supports barriers
>>> (b) the user is actually aware of a 'barriers' mount option and what
>>> it means
>>> (c) the user has turned on an option normally defaulted to off.
>>>
>>> Or in other words, it pretty much never happens.
>>
>> That is true, except if you use xfs/ext4. And this discussion is fine,
>> as was the one a few months back that got ext4 to enable barriers by
>> default. If I had submitted patches to do that back in 2001/2 when the
>> barrier stuff was written, I would have been shot for introducing such a
>> slow down. After people found out that it just wasn't something silly,
>> then you have a way to enable it.
>>
>> I'd still wager that most people would rather have a 'good enough
>> fsync' on their desktops than incur the penalty of barriers or write
>> through caching. I know I do.
>
> That's a strawman argument: The choice is not between "good enough
> fsync" and full use of barriers / write-through caching, at all.
Then let me rephrase that to "most users don't care about full integrity
fsync()". If it kills their firefox performance, most will wont to turn
it off. Personally I'd never use it on my notebook or desktop box,
simply because I don't care strongly enough. I'd rather fix things up in
the very unlikely event of a crash WITH corruption.
> It is clearly possible to implement an fsync(2) that causes FLUSH CACHE
> to be issued, without adding full barrier support to a filesystem. It
> is likely doable to avoid touching per-filesystem code at all, if we
> issue the flush from a generic fsync(2) code path in the kernel.
Of course, it would be trivial. Just add a blkdev_issue_flush() to
vfs_fsync().
> Thus, you have a "third way": fsync(2) gives the guarantee it is
> supposed to, but you do not take the full performance hit of
> barriers-all-the-time.
>
> Remember, fsync(2) means that the user _expects_ a performance hit.
>
> And they took the extra step to call fsync(2) because they want a
> guarantee, not a lie.
s/user/application.
--
Jens Axboe
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 20:40 ` Linus Torvalds
2009-03-25 20:57 ` Ric Wheeler
2009-03-25 21:33 ` Jeff Garzik
@ 2009-03-27 7:57 ` Jens Axboe
2009-03-27 14:13 ` Theodore Tso
2009-03-27 19:14 ` Chris Mason
2 siblings, 2 replies; 419+ messages in thread
From: Jens Axboe @ 2009-03-27 7:57 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jeff Garzik, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed, Mar 25 2009, Linus Torvalds wrote:
>
>
> On Wed, 25 Mar 2009, Jeff Garzik wrote:
> >
> > It is clearly possible to implement an fsync(2) that causes FLUSH CACHE to be
> > issued, without adding full barrier support to a filesystem. It is likely
> > doable to avoid touching per-filesystem code at all, if we issue the flush
> > from a generic fsync(2) code path in the kernel.
>
> We could easily do that. It would even work for most cases. The
> problematic ones are where filesystems do their own disk management, but I
> guess those people can do their own fsync() management too.
>
> Somebody send me the patch, we can try it out.
Here's a simple patch that does that. Not even tested, it compiles. Note
that file systems that currently do blkdev_issue_flush() in their
->sync() should then get it removed.
> > Remember, fsync(2) means that the user _expects_ a performance hit.
>
> Within reason, though.
>
> OS X, for example, doesn't do the disk barrier. It requires you to do a
> separate FULL_FSYNC (or something similar) ioctl to get that. Apparently
> exactly because users don't expect quite _that_ big of a performance hit.
>
> (Or maybe just because it was easier to do that way. Never attribute to
> malice what can be sufficiently explained by stupidity).
It'd be better to have a knob to control whether fsync() should care
about the hardware side as well, instead of trying to teach applications
to use FULL_FSYNC.
diff --git a/fs/sync.c b/fs/sync.c
index ec95a69..7a44d4e 100644
--- a/fs/sync.c
+++ b/fs/sync.c
@@ -8,6 +8,7 @@
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/writeback.h>
+#include <linux/blkdev.h>
#include <linux/syscalls.h>
#include <linux/linkage.h>
#include <linux/pagemap.h>
@@ -104,6 +105,7 @@ int vfs_fsync(struct file *file, struct dentry *dentry, int datasync)
{
const struct file_operations *fop;
struct address_space *mapping;
+ struct block_device *bdev;
int err, ret;
/*
@@ -138,6 +140,13 @@ int vfs_fsync(struct file *file, struct dentry *dentry, int datasync)
err = filemap_fdatawait(mapping);
if (!ret)
ret = err;
+
+ bdev = mapping->host->i_sb->s_bdev;
+ if (bdev) {
+ err = blkdev_issue_flush(bdev, NULL);
+ if (!ret)
+ ret = err;
+ }
out:
return ret;
}
--
Jens Axboe
^ permalink raw reply related [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 0:11 ` Andrew Morton
2009-03-27 0:27 ` Linus Torvalds
@ 2009-03-27 9:58 ` Alan Cox
1 sibling, 0 replies; 419+ messages in thread
From: Alan Cox @ 2009-03-27 9:58 UTC (permalink / raw)
To: Andrew Morton
Cc: Linus Torvalds, Theodore Tso, David Rees, Jesper Krogh,
Linux Kernel Mailing List
> userspace can do it quite easily. Run a self-tuning script after
> installation and when the disk hardware changes significantly.
Which is "all the time" in some configurations. It really needs to be
self tuning internally based on the observed achieved rates (just as you
don't use a script to tune your network bandwidth each day)
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 6:21 ` Matthew Garrett
@ 2009-03-27 11:24 ` Theodore Tso
2009-03-27 14:51 ` Matthew Garrett
2009-03-27 21:11 ` Jeremy Fitzhardinge
0 siblings, 2 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-27 11:24 UTC (permalink / raw)
To: Matthew Garrett
Cc: Linus Torvalds, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 06:21:14AM +0000, Matthew Garrett wrote:
> And, hey, fsync didn't make POSIX proper until 1996. It's not like
> authors were able to depend on it for a significant period of time
> before ext3 hit the scene.
Fsync() was in BSD 4.3 and it was in much earlier Unix specifications,
such as SVID, well before it appeared in POSIX. If an interface was
in both BSD and AT&T System V Unix, it was around everywhere.
> (It could be argued that most relevant Unices implemented fsync() even
> before then, so its status in POSIX was broadly irrelevant. The obvious
> counterargument is that most relevant Unix filesystems ensure that data
> is written before a clobbering rename() is carried out, so POSIX is
> again not especially releant)
Nope, not true. Most relevant Unix file systems sync'ed data blocks
on a 30 timer, and metadata on 5 second timers. They did *not* force
data to be written before a clobbering rename() was carried you;
you're rewriting history when you say that; it's simply not true.
Rename was atomic *only* where metadata was concerned, and all the
talk about rename being atomic was because back then we didn't have
flock() and you built locking primitives open(O_CREAT) and rename();
but that was only metadata, and that was only if the system didn't
crash.
When I was growing up we were trained to *always* check error returns
from *all* system calls, and to *always* fsync() if it was critical
that the data survive a crash. That was what competent Unix
programmers did. And if you are always checking error returns, the
difference in the Lines of Code between doing it right and doing
really wasn't that big --- and again, back then fsync() wan't
expensive. Making fsync expensive was ext3's data=ordered mode's
fault.
Then again, most users or system administrators of Unix systems didn't
tolerate device drivers that would crash your system when you exited a
game, either.... and I've said that I recognize the world has changed
and that crappy application programmers outnumber kernel programers,
which is why I coded the workaround for ext4. That still doesn't make
what they are doing correct.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 23:03 ` Jesse Barnes
2009-03-25 0:05 ` Arjan van de Ven
2009-03-25 2:09 ` Theodore Tso
@ 2009-03-27 11:27 ` Martin Steigerwald
2 siblings, 0 replies; 419+ messages in thread
From: Martin Steigerwald @ 2009-03-27 11:27 UTC (permalink / raw)
To: Jesse Barnes
Cc: Theodore Tso, Ingo Molnar, Alan Cox, Arjan van de Ven,
Andrew Morton, Peter Zijlstra, Nick Piggin, Jens Axboe,
David Rees, Jesper Krogh, Linus Torvalds,
Linux Kernel Mailing List
[-- Attachment #1: Type: text/plain, Size: 3021 bytes --]
Am Mittwoch 25 März 2009 schrieb Jesse Barnes:
> On Tue, 24 Mar 2009 09:20:32 -0400
>
> Theodore Tso <tytso@mit.edu> wrote:
> > They don't solve the problem where there is a *huge* amount of writes
> > going on, though --- if something is dirtying pages at a rate far
> > greater than the local disk can write it out, say, either "dd
> > if=/dev/zero of=/mnt/make-lots-of-writes" or a massive distcc cluster
> > driving a huge amount of data towards a single system or a wget over
> > a local 100 megabit ethernet from a massive NFS server where
> > everything is in cache, then you can have a major delay with the
> > fsync().
>
> You make it sound like this is hard to do... I was running into this
> problem *every day* until I moved to XFS recently. I'm running a
> fairly beefy desktop (VMware running a crappy Windows install w/AV junk
> on it, builds, icecream and large mailboxes) and have a lot of RAM, but
> it became unusable for minutes at a time, which was just totally
> unacceptable, thus the switch. Things have been better since, but are
> still a little choppy.
>
> I remember early in the 2.6.x days there was a lot of focus on making
> interactive performance good, and for a long time it was. But this I/O
> problem has been around for a *long* time now... What happened? Do not
> many people run into this daily? Do all the filesystem hackers run
> with special mount options to mitigate the problem?
Well I always had the feeling that somewhen from one 2.6.x to another I/O
latencies increased a lot. But first I thought I was just imaging this
and when I more and more thought that this is for real, I forgot since
when I observed these increased latencies.
This is on IBM ThinkPad T42 and T23 with XFS.
I/O latencies are pathetic when dpkg reads in the database or I do tar -xf
linux-x.y.z.tar.bz2.
I never got down to what is causing these higher latencies though also I
tried different I/O schedulers, tuned XFS options, used relatime.
What I found tough is that on XFS at least a tar -xf linux-kernel / rm -rf
linux-kernel operation is way slower with barriers and write cache
enabled that with no barriers and no write cache enabled. And frankly I
never got that.
XFS crawls to a stop on metadata operations when barriers are enabled.
According to the XFS FAQ disabling drive write cache should be as safe as
enabling barriers. And I always unterstood barriers as a feature to be
have *some* ordering contraints, i.e. write before barrier go before
barrier and writes after it after it - even when a drives hardware write
cache is involved. But when this cache is enabled ordering will always be
like issued from Linux block layer cause all I/Os issued to the drive are
write-through and synchron without write cache, versus only barrier
requests are synchron with barriers and write cache.
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 197 bytes --]
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-23 23:29 Linus Torvalds
2009-03-24 6:19 ` Jesper Krogh
@ 2009-03-27 13:35 ` Hans-Peter Jansen
2009-03-27 14:53 ` Geert Uytterhoeven
2009-03-27 16:49 ` Frans Pop
1 sibling, 2 replies; 419+ messages in thread
From: Hans-Peter Jansen @ 2009-03-27 13:35 UTC (permalink / raw)
To: linux-kernel; +Cc: Linus Torvalds
Am Dienstag, 24. März 2009 schrieb Linus Torvalds:
>
> This obviously starts the merge window for 2.6.30, although as usual,
> I'll probably wait a day or two before I start actively merging.
It would be very nice, if you could start with a commit to Makefile, that
reflects the new series: e.g.:
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 30
EXTRAVERSION = -pre
-pre for preparing state.
Thanks,
Pete
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 7:57 ` Jens Axboe
@ 2009-03-27 14:13 ` Theodore Tso
2009-03-27 14:35 ` Christoph Hellwig
2009-03-27 19:14 ` Chris Mason
1 sibling, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-03-27 14:13 UTC (permalink / raw)
To: Jens Axboe
Cc: Linus Torvalds, Jeff Garzik, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 08:57:23AM +0100, Jens Axboe wrote:
>
> Here's a simple patch that does that. Not even tested, it compiles. Note
> that file systems that currently do blkdev_issue_flush() in their
> ->sync() should then get it removed.
>
That's going to be a mess. Ext3 implements an fsync() by requesting a
journal commit, and then waiting for the commit to have taken place.
The commit happens in another thread, kjournald. Knowing when it's OK
not to do a blkdev_issue_flush() because the commit was triggered by
an fsync() is going to be really messy. Could we at least have a flag
in struct super which says, "We'll handle the flush correctly, please
don't try to do it for us?"
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 14:13 ` Theodore Tso
@ 2009-03-27 14:35 ` Christoph Hellwig
2009-03-27 15:03 ` Ric Wheeler
2009-03-27 20:38 ` Jeff Garzik
0 siblings, 2 replies; 419+ messages in thread
From: Christoph Hellwig @ 2009-03-27 14:35 UTC (permalink / raw)
To: Theodore Tso, Jens Axboe, Linus Torvalds, Jeff Garzik,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 10:13:33AM -0400, Theodore Tso wrote:
> On Fri, Mar 27, 2009 at 08:57:23AM +0100, Jens Axboe wrote:
> >
> > Here's a simple patch that does that. Not even tested, it compiles. Note
> > that file systems that currently do blkdev_issue_flush() in their
> > ->sync() should then get it removed.
> >
>
> That's going to be a mess. Ext3 implements an fsync() by requesting a
> journal commit, and then waiting for the commit to have taken place.
> The commit happens in another thread, kjournald. Knowing when it's OK
> not to do a blkdev_issue_flush() because the commit was triggered by
> an fsync() is going to be really messy. Could we at least have a flag
> in struct super which says, "We'll handle the flush correctly, please
> don't try to do it for us?"
Doing it in vfs_fsync also is completely wrong layering. If people want
it for simple filesystems add it to file_fsync instead of messing up
the generic helper. Removing well meaning but ill behaved policy from
the generic path has been costing me far too much time lately.
And please add a tuneable for the flush. Preferable a generic one at
the block device layer instead of the current mess where every
filesystem has a slightly different option for barrier usage.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 11:24 ` Theodore Tso
@ 2009-03-27 14:51 ` Matthew Garrett
2009-03-27 15:08 ` Alan Cox
2009-03-27 15:20 ` Giacomo A. Catenazzi
2009-03-27 21:11 ` Jeremy Fitzhardinge
1 sibling, 2 replies; 419+ messages in thread
From: Matthew Garrett @ 2009-03-27 14:51 UTC (permalink / raw)
To: Theodore Tso, Linus Torvalds, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 07:24:38AM -0400, Theodore Tso wrote:
> On Fri, Mar 27, 2009 at 06:21:14AM +0000, Matthew Garrett wrote:
> > And, hey, fsync didn't make POSIX proper until 1996. It's not like
> > authors were able to depend on it for a significant period of time
> > before ext3 hit the scene.
>
> Fsync() was in BSD 4.3 and it was in much earlier Unix specifications,
> such as SVID, well before it appeared in POSIX. If an interface was
> in both BSD and AT&T System V Unix, it was around everywhere.
And if a behaviour is in ext3, then for the vast majority of practical
purposes it exists everywere. Users of non-Linux POSIX operating systems
are niche. Users of non-ext3 filesystems on Linux are niche.
> > (It could be argued that most relevant Unices implemented fsync() even
> > before then, so its status in POSIX was broadly irrelevant. The obvious
> > counterargument is that most relevant Unix filesystems ensure that data
> > is written before a clobbering rename() is carried out, so POSIX is
> > again not especially releant)
>
> Nope, not true. Most relevant Unix file systems sync'ed data blocks
> on a 30 timer, and metadata on 5 second timers. They did *not* force
> data to be written before a clobbering rename() was carried you;
> you're rewriting history when you say that; it's simply not true.
> Rename was atomic *only* where metadata was concerned, and all the
> talk about rename being atomic was because back then we didn't have
> flock() and you built locking primitives open(O_CREAT) and rename();
> but that was only metadata, and that was only if the system didn't
> crash.
No, you're missing my point. The other Unix file systems are irrelevant.
The number of people running them and having any real risk of system
crash is small, and they're the ones with full system backups anyway.
> When I was growing up we were trained to *always* check error returns
> from *all* system calls, and to *always* fsync() if it was critical
> that the data survive a crash. That was what competent Unix
> programmers did. And if you are always checking error returns, the
> difference in the Lines of Code between doing it right and doing
> really wasn't that big --- and again, back then fsync() wan't
> expensive. Making fsync expensive was ext3's data=ordered mode's
> fault.
When my grandmother was growing up she had to use an outside toilet.
Sometimes the past sucked and we're glad of progress being made.
> Then again, most users or system administrators of Unix systems didn't
> tolerate device drivers that would crash your system when you exited a
> game, either.... and I've said that I recognize the world has changed
> and that crappy application programmers outnumber kernel programers,
> which is why I coded the workaround for ext4. That still doesn't make
> what they are doing correct.
No, look, you're blaming userspace again. Stop it.
--
Matthew Garrett | mjg59@srcf.ucam.org
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 13:35 ` Hans-Peter Jansen
@ 2009-03-27 14:53 ` Geert Uytterhoeven
2009-03-27 15:46 ` Mike Galbraith
2009-03-27 16:49 ` Frans Pop
1 sibling, 1 reply; 419+ messages in thread
From: Geert Uytterhoeven @ 2009-03-27 14:53 UTC (permalink / raw)
To: Hans-Peter Jansen; +Cc: linux-kernel, Linus Torvalds
On Fri, 27 Mar 2009, Hans-Peter Jansen wrote:
> Am Dienstag, 24. März 2009 schrieb Linus Torvalds:
> > This obviously starts the merge window for 2.6.30, although as usual,
> > I'll probably wait a day or two before I start actively merging.
>
> It would be very nice, if you could start with a commit to Makefile, that
> reflects the new series: e.g.:
>
> VERSION = 2
> PATCHLEVEL = 6
> SUBLEVEL = 30
> EXTRAVERSION = -pre
>
> -pre for preparing state.
If you're using the kernel-of-they-day, you're probably using git, and
CONFIG_LOCALVERSION_AUTO=y should be mandatory.
My kernel is called 2.6.29-03321-gbe0ea69...
With kind regards,
Geert Uytterhoeven
Software Architect
Sony Techsoft Centre Europe
The Corporate Village · Da Vincilaan 7-D1 · B-1935 Zaventem · Belgium
Phone: +32 (0)2 700 8453
Fax: +32 (0)2 700 8622
E-mail: Geert.Uytterhoeven@sonycom.com
Internet: http://www.sony-europe.com/
A division of Sony Europe (Belgium) N.V.
VAT BE 0413.825.160 · RPR Brussels
Fortis · BIC GEBABEBB · IBAN BE41293037680010
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 14:35 ` Christoph Hellwig
@ 2009-03-27 15:03 ` Ric Wheeler
2009-03-27 20:38 ` Jeff Garzik
1 sibling, 0 replies; 419+ messages in thread
From: Ric Wheeler @ 2009-03-27 15:03 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Theodore Tso, Jens Axboe, Linus Torvalds, Jeff Garzik,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Christoph Hellwig wrote:
> On Fri, Mar 27, 2009 at 10:13:33AM -0400, Theodore Tso wrote:
>
>> On Fri, Mar 27, 2009 at 08:57:23AM +0100, Jens Axboe wrote:
>>
>>> Here's a simple patch that does that. Not even tested, it compiles. Note
>>> that file systems that currently do blkdev_issue_flush() in their
>>> ->sync() should then get it removed.
>>>
>>>
>> That's going to be a mess. Ext3 implements an fsync() by requesting a
>> journal commit, and then waiting for the commit to have taken place.
>> The commit happens in another thread, kjournald. Knowing when it's OK
>> not to do a blkdev_issue_flush() because the commit was triggered by
>> an fsync() is going to be really messy. Could we at least have a flag
>> in struct super which says, "We'll handle the flush correctly, please
>> don't try to do it for us?"
>>
>
> Doing it in vfs_fsync also is completely wrong layering. If people want
> it for simple filesystems add it to file_fsync instead of messing up
> the generic helper. Removing well meaning but ill behaved policy from
> the generic path has been costing me far too much time lately.
>
> And please add a tuneable for the flush. Preferable a generic one at
> the block device layer instead of the current mess where every
> filesystem has a slightly different option for barrier usage.
>
I agree that we need to be careful not to put extra device flushes if
the file system handles this properly. They can be quite expensive (say
10-20ms on a busy s-ata disk).
I have also seen some SSD devices have performance that drops into the
toilet when you start flushing their volatile caches.
ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 14:51 ` Matthew Garrett
@ 2009-03-27 15:08 ` Alan Cox
2009-03-27 15:22 ` Matthew Garrett
2009-03-27 15:20 ` Giacomo A. Catenazzi
1 sibling, 1 reply; 419+ messages in thread
From: Alan Cox @ 2009-03-27 15:08 UTC (permalink / raw)
To: Matthew Garrett
Cc: Theodore Tso, Linus Torvalds, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
> And if a behaviour is in ext3, then for the vast majority of practical
> purposes it exists everywere. Users of non-Linux POSIX operating systems
> are niche. Users of non-ext3 filesystems on Linux are niche.
SuSE for years shipped reiserfs as a default.
> When my grandmother was growing up she had to use an outside toilet.
> Sometimes the past sucked and we're glad of progress being made.
Not checking for errors is not "progress" its indiscipline aided by
languages and tools that permit it to occur without issuing errors. It's
why software "engineering" is at best approaching early 1950's real
engineering practice ("hey gee we should test this stuff") and has yet to
grow up and get anywhere into the world of real engineering and quality.
Alan
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 14:51 ` Matthew Garrett
2009-03-27 15:08 ` Alan Cox
@ 2009-03-27 15:20 ` Giacomo A. Catenazzi
1 sibling, 0 replies; 419+ messages in thread
From: Giacomo A. Catenazzi @ 2009-03-27 15:20 UTC (permalink / raw)
To: Matthew Garrett
Cc: Theodore Tso, Linus Torvalds, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
Matthew Garrett wrote:
> On Fri, Mar 27, 2009 at 07:24:38AM -0400, Theodore Tso wrote:
>> On Fri, Mar 27, 2009 at 06:21:14AM +0000, Matthew Garrett wrote:
>>> (It could be argued that most relevant Unices implemented fsync() even
>>> before then, so its status in POSIX was broadly irrelevant. The obvious
>>> counterargument is that most relevant Unix filesystems ensure that data
>>> is written before a clobbering rename() is carried out, so POSIX is
>>> again not especially releant)
>> Nope, not true. Most relevant Unix file systems sync'ed data blocks
>> on a 30 timer, and metadata on 5 second timers. They did *not* force
>> data to be written before a clobbering rename() was carried you;
>> you're rewriting history when you say that; it's simply not true.
>> Rename was atomic *only* where metadata was concerned, and all the
>> talk about rename being atomic was because back then we didn't have
>> flock() and you built locking primitives open(O_CREAT) and rename();
>> but that was only metadata, and that was only if the system didn't
>> crash.
>
> No, you're missing my point. The other Unix file systems are irrelevant.
> The number of people running them and having any real risk of system
> crash is small, and they're the ones with full system backups anyway.
Are you telling us that the "Linux compatible" really means
"Linux compatible, but only on ext3, only on x86,
only on Ubuntu, only Gnome or KDE [1]"?
If a program crashes on other setups, is it not a
problem of the program but of the environment?
sigh
cate
[1]Yes, I just see a installation script that expect one of the
two environment.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 15:08 ` Alan Cox
@ 2009-03-27 15:22 ` Matthew Garrett
2009-03-27 16:15 ` Alan Cox
0 siblings, 1 reply; 419+ messages in thread
From: Matthew Garrett @ 2009-03-27 15:22 UTC (permalink / raw)
To: Alan Cox
Cc: Theodore Tso, Linus Torvalds, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 03:08:11PM +0000, Alan Cox wrote:
> Not checking for errors is not "progress" its indiscipline aided by
> languages and tools that permit it to occur without issuing errors. It's
> why software "engineering" is at best approaching early 1950's real
> engineering practice ("hey gee we should test this stuff") and has yet to
> grow up and get anywhere into the world of real engineering and quality.
No. Not *having* to check for errors in the cases that you care about is
progress. How much of the core kernel actually deals with kmalloc
failures sensibly? Some things just aren't worth it.
--
Matthew Garrett | mjg59@srcf.ucam.org
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 14:53 ` Geert Uytterhoeven
@ 2009-03-27 15:46 ` Mike Galbraith
2009-03-27 16:02 ` Linus Torvalds
0 siblings, 1 reply; 419+ messages in thread
From: Mike Galbraith @ 2009-03-27 15:46 UTC (permalink / raw)
To: Geert Uytterhoeven; +Cc: Hans-Peter Jansen, linux-kernel, Linus Torvalds
On Fri, 2009-03-27 at 15:53 +0100, Geert Uytterhoeven wrote:
> On Fri, 27 Mar 2009, Hans-Peter Jansen wrote:
> > Am Dienstag, 24. März 2009 schrieb Linus Torvalds:
> > > This obviously starts the merge window for 2.6.30, although as usual,
> > > I'll probably wait a day or two before I start actively merging.
> >
> > It would be very nice, if you could start with a commit to Makefile, that
> > reflects the new series: e.g.:
> >
> > VERSION = 2
> > PATCHLEVEL = 6
> > SUBLEVEL = 30
> > EXTRAVERSION = -pre
> >
> > -pre for preparing state.
>
> If you're using the kernel-of-they-day, you're probably using git, and
> CONFIG_LOCALVERSION_AUTO=y should be mandatory.
I sure hope it never becomes mandatory, I despise that thing. I don't
even do -rc tags. .nn is .nn until baked and nn.1 appears.
(would be nice if baked were immediately handed to stable .nn.0 instead
of being in limbo for a bit, but I don't drive the cart, just tag along
behind [w. shovel];)
-Mike
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 15:46 ` Mike Galbraith
@ 2009-03-27 16:02 ` Linus Torvalds
2009-03-28 7:50 ` Mike Galbraith
2009-03-30 22:00 ` Hans-Peter Jansen
0 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-27 16:02 UTC (permalink / raw)
To: Mike Galbraith; +Cc: Geert Uytterhoeven, Hans-Peter Jansen, linux-kernel
On Fri, 27 Mar 2009, Mike Galbraith wrote:
> >
> > If you're using the kernel-of-they-day, you're probably using git, and
> > CONFIG_LOCALVERSION_AUTO=y should be mandatory.
>
> I sure hope it never becomes mandatory, I despise that thing. I don't
> even do -rc tags. .nn is .nn until baked and nn.1 appears.
If you're a git user that changes kernels frequently, then enabling
CONFIG_LOCALVERSION_AUTO is _really_ convenient when you learn to use it.
This is quite common for me:
gitk v$(uname -r)..
and it works exactly due to CONFIG_LOCALVERSION_AUTO (and because git is
rather good at figuring out version numbers). It's a great way to say
"ok, what is in my git tree that I'm not actually running right now".
Another case where CONFIG_LOCALVERSION_AUTO is very useful is when you're
noticing some new broken behavior, but it took you a while to notice.
You've rebooted several times since, but you know it worked last Tuesday.
What do you do?
The thing to do is
grep "Linux version" /var/log/messages*
and figure out what the good version was, and then do
git bisect start
git bisect good ..that-version..
git bisect bad v$(uname -r)
and off you go. This is _very_ convenient if you are working with some
"random git kernel of the day" like I am (and like hopefully others are
too, in order to get test coverage).
> (would be nice if baked were immediately handed to stable .nn.0 instead
> of being in limbo for a bit, but I don't drive the cart, just tag along
> behind [w. shovel];)
Note that the "v2.6.29[-rcX" part is totally _useless_ in many cases,
because if you're working past merges, and especially if you end up doing
bisection, it is very possible that the main Makefile says "2.6.28-rc2",
but the code you're working on wasn't actually _merged_ until after
2.6.29.
In other words, the main Makefile version is totally useless in non-linear
development, and is meaningful _only_ at specific release times. In
between releases, it's essentially a random thing, since non-linear
development means that versioning simply fundamentally isn't some simple
monotonic numbering. And this is exactly when CONFIG_LOCALVERSION_AUTO is
a huge deal.
(It's even more so if you end up looking at "next" or merging other
peoples trees. If you only ever track my kernel, and you only ever
fast-forward - no bisection, no nothing - then the release numbering looks
"simple", and things like LOCALVERSION looks just like noise).
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 15:22 ` Matthew Garrett
@ 2009-03-27 16:15 ` Alan Cox
2009-03-27 16:28 ` Matthew Garrett
0 siblings, 1 reply; 419+ messages in thread
From: Alan Cox @ 2009-03-27 16:15 UTC (permalink / raw)
To: Matthew Garrett
Cc: Theodore Tso, Linus Torvalds, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
O> No. Not *having* to check for errors in the cases that you care about is
> progress. How much of the core kernel actually deals with kmalloc
> failures sensibly? Some things just aren't worth it.
I'm glad to know thats how you feel about my data, it explains a good
deal about the state of some of the desktop software. In kernel land we
actually have tools that go looking for kmalloc errors and missing tests
to try and check all the paths. We run kernels with kmalloc randomly
failing to make sure the box stays up: because at the end of the day
*kmalloc does fail*. The kernel also tries very hard to keep the fail
rate low - but this doesn't mean you don't check for errors.
Everything in other industry says not having to check for errors is
missing the point. You design systems so that they do not have error
cases when possible, and if they have error cases you handle them and
enforce a policy that prevents them not being handled.
Standard food safety rules include
Labelling food with dates
Having an electronic system so that any product with no label cannot
escape
Checking all labels to ensure nothing past the safe date is sold
Having rules at all stages that any item without a label is removed and
is flagged back so that it can be investigated
Now you are arguing for "not having to check for errors"
So I assume you wouldn't worry about food that ends up with no label on
it somehow ?
Or when you get a "permission denied" do you just assume it didn't
happen ? If the bank says someone has removed all your money do you
assume its an error you don't need to check for ?
The two are *not* the same thing.
You design failure out when possible
You implement systems which ensure all known failure cases must be handled
You track failure rates to prove your analysis
Where you don't handle a failure (because it is too hard) you have
detailed statistical and other analysis based on rigorous methodologies
as to whether not handling it is acceptable (eg ALARP)
and unfortunately at big name universities you can still get a degree or
masters even in software "engineering" without actually studying any of
this stuff, which any real engineering discipline would consider basic
essentials.
How do we design failure out
- One obvious one is to report out of disk space on write not close. At
the app level programmers need to actually check their I/O returns
because contrary to much of todays garbage software (open and
proprietary) or use languages which actually tell them off if each
exception case is not caught somewhere
- Use disk and file formats that ensure across a failure you don't
suddenly get random users medical data popping up post reboot in
index.html or motd. Hence ordered data writes by default (or the same
effect)
- Writing back data regularly to allow for the fact user space
programmers will make mistakes regardless. But this doesn't mean they
"don't check for errors"
And if you think an error check isn't worth making then I hope you can
provide the statistical data, based on there being millions of such
systems and in the case of sloppy application writing where the result
is "oh dear where did the data go" I don't think you can at the moment.
To be honest I don't see your problem. Surely well designed desktop
applications are already all using nice error handling, out of space and
fsync aware interfaces in the gnome library that do all the work for them
- "so they don't have to check for errors".
If not perhaps the desktop should start by putting their own house in
order ?
Alan
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 16:15 ` Alan Cox
@ 2009-03-27 16:28 ` Matthew Garrett
2009-03-27 16:51 ` Alan Cox
0 siblings, 1 reply; 419+ messages in thread
From: Matthew Garrett @ 2009-03-27 16:28 UTC (permalink / raw)
To: Alan Cox
Cc: Theodore Tso, Linus Torvalds, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 04:15:53PM +0000, Alan Cox wrote:
> To be honest I don't see your problem. Surely well designed desktop
> applications are already all using nice error handling, out of space and
> fsync aware interfaces in the gnome library that do all the work for them
> - "so they don't have to check for errors".
The context was situations like errors on close() not occuring unless
you've fsync()ed first. I don't think that error case is sufficiently
common to warrant the cost of an fsync() on every single close,
especially since doing so would cripple any application that ever tried
to run on ext3.
--
Matthew Garrett | mjg59@srcf.ucam.org
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 13:35 ` Hans-Peter Jansen
2009-03-27 14:53 ` Geert Uytterhoeven
@ 2009-03-27 16:49 ` Frans Pop
1 sibling, 0 replies; 419+ messages in thread
From: Frans Pop @ 2009-03-27 16:49 UTC (permalink / raw)
To: Hans-Peter Jansen; +Cc: linux-kernel
Hans-Peter Jansen wrote:
> Am Dienstag, 24. März 2009 schrieb Linus Torvalds:
>> This obviously starts the merge window for 2.6.30, although as usual,
>> I'll probably wait a day or two before I start actively merging.
>
> It would be very nice, if you could start with a commit to Makefile,
> that reflects the new series: e.g.:
If you have a git checkout, you can easily do this yourself:
git checkout -b 2.6.30-rc master
sed -i "/^SUBLEVEL/ s/29/30/; /^EXTRAVERSION/ s/$/ -rc0/" Makefile
git add Makefile
git commit -m "Mark as -rc0"
Then to get latest git head:
git checkout master
git pull
git rebase master 2.6.30-rc
When Linus releases -rc1, the rebase will signal a conflict on that commit
and you can just 'git rebase --skip' it.
Instead of sed you can also just edit the Makefile of course, or you can
go the other way and create a simple script that automatically increases
the existing sublevel by 1. I just do this manually, given that it's only
needed once per three months or so.
Using a branch is something I do anyway as I almost always have a few
minor patches on top of git head for various reasons.
Cheers,
FJP
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 16:28 ` Matthew Garrett
@ 2009-03-27 16:51 ` Alan Cox
2009-03-27 17:02 ` Matthew Garrett
0 siblings, 1 reply; 419+ messages in thread
From: Alan Cox @ 2009-03-27 16:51 UTC (permalink / raw)
To: Matthew Garrett
Cc: Theodore Tso, Linus Torvalds, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Fri, 27 Mar 2009 16:28:41 +0000
Matthew Garrett <mjg59@srcf.ucam.org> wrote:
> On Fri, Mar 27, 2009 at 04:15:53PM +0000, Alan Cox wrote:
>
> > To be honest I don't see your problem. Surely well designed desktop
> > applications are already all using nice error handling, out of space and
> > fsync aware interfaces in the gnome library that do all the work for them
> > - "so they don't have to check for errors".
>
> The context was situations like errors on close() not occuring unless
> you've fsync()ed first. I don't think that error case is sufficiently
> common to warrant the cost of an fsync() on every single close,
> especially since doing so would cripple any application that ever tried
> to run on ext3.
The fsync if you need to see all errors on close case has been true since
before V7 unix. Its the normal default behaviour on these systems so
anyone who assumes otherwise is just broken. There is a limit to the
extent the OS can clean up after completely broken user apps.
Besides which a properly designed desktop clearly has a single interface
of the form
happened = write_file_reliably(filename|NULL, buffer, len, flags)
happened = replace_file_reliably(filename|NULL, buffer, len,
flags (eg KEEP_BACKUP));
which internally does all the error handling, reporting to user, offering
to save elsewhere, ensuring that the user can switch app and make space
and checking for media errors. It probably also has an asynchronous
version you can bind event handlers to for completion, error, etc so that
you can override the default handling but can't fail to provide something
by default. That would be designing failure out of the system.
IMHO the real solution to a lot of this actually got proposed earlier in
the thread. Adding "fbarrier()" allows the expression of ordering without
blocking and provides something new apps can use to get best performance.
Old properly written apps continue to work and can be improved, and sloppy
garbage continues to mostly work.
The file system behaviour is constrained heavily by the hardware, which
at this point is constrained by the laws of physics and the limits
of materials.
Alan
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 16:51 ` Alan Cox
@ 2009-03-27 17:02 ` Matthew Garrett
2009-03-27 17:19 ` Alan Cox
2009-03-27 17:57 ` Linus Torvalds
0 siblings, 2 replies; 419+ messages in thread
From: Matthew Garrett @ 2009-03-27 17:02 UTC (permalink / raw)
To: Alan Cox
Cc: Theodore Tso, Linus Torvalds, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 04:51:50PM +0000, Alan Cox wrote:
> On Fri, 27 Mar 2009 16:28:41 +0000
> Matthew Garrett <mjg59@srcf.ucam.org> wrote:
> > The context was situations like errors on close() not occuring unless
> > you've fsync()ed first. I don't think that error case is sufficiently
> > common to warrant the cost of an fsync() on every single close,
> > especially since doing so would cripple any application that ever tried
> > to run on ext3.
>
> The fsync if you need to see all errors on close case has been true since
> before V7 unix. Its the normal default behaviour on these systems so
> anyone who assumes otherwise is just broken. There is a limit to the
> extent the OS can clean up after completely broken user apps.
If user applications should always check errors, and if errors can't be
reliably produced unless you fsync() before close(), then the correct
behaviour for the kernel is to always flush buffers to disk before
returning from close(). The reason we don't is that it would be an
unacceptable performance hit to take in return for an uncommon case - in
exactly the same way as always calling fsync() before close() is an
unacceptable performance hit to take in return for an uncommon case.
> IMHO the real solution to a lot of this actually got proposed earlier in
> the thread. Adding "fbarrier()" allows the expression of ordering without
> blocking and provides something new apps can use to get best performance.
If every application that does a clobbering rename has to call
fbarrier() first, then the kernel should just guarantee to do so on the
application's behalf. ext3, ext4 and btrfs all effectively do this, so
we should just make it explicit that Linux filesystems are expected to
behave this way. If people want to make their code Linux specific then
that's their problem, not the kernel's.
--
Matthew Garrett | mjg59@srcf.ucam.org
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 17:02 ` Matthew Garrett
@ 2009-03-27 17:19 ` Alan Cox
2009-03-27 18:05 ` Linus Torvalds
2009-03-27 18:36 ` Hua Zhong
2009-03-27 17:57 ` Linus Torvalds
1 sibling, 2 replies; 419+ messages in thread
From: Alan Cox @ 2009-03-27 17:19 UTC (permalink / raw)
To: Matthew Garrett
Cc: Theodore Tso, Linus Torvalds, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
O> If user applications should always check errors, and if errors can't be
> reliably produced unless you fsync() before close(), then the correct
> behaviour for the kernel is to always flush buffers to disk before
> returning from close(). The reason we don't is that it would be an
You make a few assumptions here
Unfortunately:
- close() occurs many times on a file
- the kernel cannot tell which close() calls need to commit data
- there are many cases where data is written and there is a genuine
situation where it is acceptable over a crash to lose data providing
media failure is rare (eg log files in many situations - not banks
obviously)
The kernel cannot tell them apart, while fsync/close() as a pair allows
the user to correctly indicate their requirements.
Even "fsync on last close" can backfire horribly if you happen to have a
handle that is inherited by a child task or kept for reading for a long
period.
For an event driven app you really want some kind of threaded or async
fsync then close (fbarrier isn't quite enough because you don't get told
when the barrier is passed). That could be implemented using threads in
the relevant desktops libraries with the thread doing
fsync()
poke event thread
exit
(or indeed for most cases as part of the more general
write-file-interact-with-user-etc call)
> If every application that does a clobbering rename has to call
> fbarrier() first, then the kernel should just guarantee to do so on the
Rename is a different problem - and a nastier one. Unfortunately even in
posix fsync says nothing about how metadata updating is handled or what
the ordering rules are between two fsync() calls on different files.
There were problems with trying to order rename against data writeback.
fsync ensures the file data and metadata is valid but doesn't (and
cannot) connect this with the directory state. So if you need to implement
write data
ensure it is committed
rename it
after the rename is committed then ...
you can't do that in POSIX. Linux extends fsync() so you can fsync a
directory handle but that is an extension to fix the problem rather than
a standard behaviour.
(Also helpful here would be fsync_range, fdatasync_range and
fbarrier_range)
> application's behalf. ext3, ext4 and btrfs all effectively do this, so
> we should just make it explicit that Linux filesystems are expected to
> behave this way.
> If people want to make their code Linux specific then that's their problem, not the kernel's.
Agreed - which is why close should not happen to do an fsync(). That's
their problem for writing code thats specific to some random may happen
behaviour on certain Linux releases - and unfortunately with no obvious
cheap cure.
--
"Alan, I'm getting a bit worried about you."
-- Linus Torvalds
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 17:02 ` Matthew Garrett
2009-03-27 17:19 ` Alan Cox
@ 2009-03-27 17:57 ` Linus Torvalds
2009-03-27 18:22 ` Linus Torvalds
` (2 more replies)
1 sibling, 3 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-27 17:57 UTC (permalink / raw)
To: Matthew Garrett
Cc: Alan Cox, Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Fri, 27 Mar 2009, Matthew Garrett wrote:
>
> If every application that does a clobbering rename has to call
> fbarrier() first, then the kernel should just guarantee to do so on the
> application's behalf. ext3, ext4 and btrfs all effectively do this, so
> we should just make it explicit that Linux filesystems are expected to
> behave this way. If people want to make their code Linux specific then
> that's their problem, not the kernel's.
It would probably be good to think about something like this, because
there are currently really two totally different cases of "fsync()" users.
(a) The "critical safety" kind (aka the "traditional" fsync user), where
there is a mail server or similar that will reply "all done" to the
sender, and has to _guarantee_ that the file is on disk in order for
data to simply not be lost.
This is a very different case from most desktop uses, and it's a evry
hard "we have to wait until the thing is physically on disk"
situation. And it's the only case where people really traditionally
used "fsync()".
(b) The non-traditional UNIX usage where people historically didn't use
fsync() for: people editing their config files either
programmatically or by hand.
And this one really doesn't need at all the same kind of hard "wait
for it to hit the disk" semantics. It may well want a much softer
kind of "at least don't delete the old version until the new version
is stable" kind of thing.
And Alan - you can argue that fsync() has been around forever, but you
cannot possibly argue that people have used fsync() for file editing.
That's simply not true. It has happened, but it has been very rare. Yes,
some editors (vi, emacs) do it, but even there it's configurable. And
outside of databases, server apps and big editors, fsync is virtually
unheard of. How many sed-scripts have you seen to edit files? None of them
ever used fsync.
And with the ext3 performance profile for it, it sure is not getting any
more common either. If you have a desktop app that uses fsync(), that
application is DEAD IN THE WATER if people are doing anything else on the
machine. Those multi-second pauses aren't going to make people happy.
So the fact is, "people should always use fsync" simply isn't a realistic
expectation, nor is it historically accurate. Claiming it is is just
obviously bogus. And claiming that people _should_ do it is crazy, since
it performs badly enough to simply not be realistic.
Alternatives should be looked at. For desktop apps, the best alternatives
are likely simply stronger default consistency guarantees. Exactly the
"we don't guarantee that your data hits the disk, but we do guarantee that
if you renamed on top of another file, you'll not have lost _both_
contents".
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 17:19 ` Alan Cox
@ 2009-03-27 18:05 ` Linus Torvalds
2009-03-27 18:35 ` Alan Cox
2009-03-27 19:03 ` Theodore Tso
2009-03-27 18:36 ` Hua Zhong
1 sibling, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-27 18:05 UTC (permalink / raw)
To: Alan Cox
Cc: Matthew Garrett, Theodore Tso, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Fri, 27 Mar 2009, Alan Cox wrote:
>
> The kernel cannot tell them apart, while fsync/close() as a pair allows
> the user to correctly indicate their requirements.
Alan. Repeat after me: "fsync()+close() is basically useless for any app
that expects user interaction under load".
That's a FACT, not an opinion.
> For an event driven app you really want some kind of threaded or async
> fsync then close
Don't be silly. If you want data corruption, then you make people write
threaded applications. Yes, you may work for Intel now, but that doesn't
mean that you have to drink the insane cool-aid. Threading is HARD. Async
stuff is HARD.
We kernel people really are special. Expecting normal apps to spend the
kind of effort we do (in scalability, in error handling, in security) is
just not realistic.
> Agreed - which is why close should not happen to do an fsync(). That's
> their problem for writing code thats specific to some random may happen
> behaviour on certain Linux releases - and unfortunately with no obvious
> cheap cure.
I do agree that close() shouldn't do an fsync - simply for performance
reasons.
But I also think that the "we write meta-data synchronously, but then the
actual data shows up at some random later time" is just crazy talk. That's
simply insane. It _guarantees_ that there will be huge windows of times
where data simply will be lost if something bad happens.
And expecting every app to do fsync() is also crazy talk, especially with
the major filesystems _sucking_ so bad at it (it's actually a lot more
realistic with ext2 than it is with ext3).
So look for a middle ground. Not this crazy militant "user apps must do
fsync()" crap. Because that is simply not a realistic scenario.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 17:57 ` Linus Torvalds
@ 2009-03-27 18:22 ` Linus Torvalds
2009-03-27 18:32 ` Alan Cox
2009-03-27 19:43 ` Jeff Garzik
2 siblings, 0 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-27 18:22 UTC (permalink / raw)
To: Matthew Garrett
Cc: Alan Cox, Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Fri, 27 Mar 2009, Linus Torvalds wrote:
>
> Yes, some editors (vi, emacs) do it, but even there it's configurable.
.. and looking at history, it's even pretty modern. From the vim logs:
Patch 6.2.499
Problem: When writing a file and halting the system, the file might be lost
when using a journalling file system.
Solution: Use fsync() to flush the file data to disk after writing a file.
(Radim Kolar)
Files: src/fileio.c
so it looks (assuming those patch numbers mean what they would seem to
mean) that 'fsync()' in vim is from after 6.2 was released. Some time in
2004.
So traditionally, even solid "good" programs like major editors never
tried to fsync() their files.
Btw, googling for that 6.2.499 patch also shows that people were rather
unhappy with it. Why? It causes disk spinups in laptop mode etc. Which is
very much not what you want to see for power reasons.
So there are other, really fundamental, reasons why applications that
don't have the "mailspool must not be lost" kind of critical issues to
absolutely NOT use fsync(). Those applications would be much better off
with some softer hint that can take things like laptop mode into account.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 17:57 ` Linus Torvalds
2009-03-27 18:22 ` Linus Torvalds
@ 2009-03-27 18:32 ` Alan Cox
2009-03-27 18:40 ` Linus Torvalds
2009-03-27 19:43 ` Jeff Garzik
2 siblings, 1 reply; 419+ messages in thread
From: Alan Cox @ 2009-03-27 18:32 UTC (permalink / raw)
To: Linus Torvalds
Cc: Matthew Garrett, Theodore Tso, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
> more common either. If you have a desktop app that uses fsync(), that
> application is DEAD IN THE WATER if people are doing anything else on the
> machine. Those multi-second pauses aren't going to make people happy.
We added threading about ten years ago.
> So the fact is, "people should always use fsync" simply isn't a realistic
> expectation, nor is it historically accurate.
Far too many people don't - and it is unfortunate but people should learn
to write quality software.
>
> Alternatives should be looked at. For desktop apps, the best alternatives
> are likely simply stronger default consistency guarantees. Exactly the
> "we don't guarantee that your data hits the disk, but we do guarantee that
> if you renamed on top of another file, you'll not have lost _both_
> contents".
Rename is a really nasty case and the standards don't help at all here so
I agree entirely. There *isn't* a way to write a correct portable
application that achieves that guarantee without the kernel making it for
you.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 18:05 ` Linus Torvalds
@ 2009-03-27 18:35 ` Alan Cox
2009-03-27 19:03 ` Theodore Tso
1 sibling, 0 replies; 419+ messages in thread
From: Alan Cox @ 2009-03-27 18:35 UTC (permalink / raw)
To: Linus Torvalds
Cc: Matthew Garrett, Theodore Tso, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
> Don't be silly. If you want data corruption, then you make people write
> threaded applications. Yes, you may work for Intel now, but that doesn't
> mean that you have to drink the insane cool-aid. Threading is HARD. Async
> stuff is HARD.
Which is why you do it once in a library and express it as events. The
gtk desktop already does this and the event model it provides is rather
elegant and can handle this neatly and cleanly for the user.
> But I also think that the "we write meta-data synchronously, but then the
> actual data shows up at some random later time" is just crazy talk. That's
> simply insane. It _guarantees_ that there will be huge windows of times
> where data simply will be lost if something bad happens.
Agreed - apps not checking for errors is sloppy programming however given
they make errors we don't want to make it worse. I wouldn't argue with
that - for the same reason that cars are designed on the basis that their
owners are not competent to operate them ;)
^ permalink raw reply [flat|nested] 419+ messages in thread
* RE: Linux 2.6.29
2009-03-27 17:19 ` Alan Cox
2009-03-27 18:05 ` Linus Torvalds
@ 2009-03-27 18:36 ` Hua Zhong
1 sibling, 0 replies; 419+ messages in thread
From: Hua Zhong @ 2009-03-27 18:36 UTC (permalink / raw)
To: 'Alan Cox', 'Matthew Garrett'
Cc: 'Theodore Tso', 'Linus Torvalds',
'Andrew Morton', 'David Rees',
'Jesper Krogh', 'Linux Kernel Mailing List'
Why are we even arguing about standards?
POSIX, as all other standards, is a common _denominator_ and absolutely the
_minimal_ requirement for a compliant operating system. It does not tell you
how to design the best systems in the real world. For God's sake, can't we
aim for something higher than a piece of literature written some 20 years
ago? And stop making excuses please?
The fact is, most software is crap, and most software developers are lazy
and stupid. Same as most customers are stupid too. A technically correct
operating system isn't necessarily the most successful and accepted
operating system. Have a sense of pragmatism if you are developing something
that is not just a fancy research project.
And it's especially true for ext4. I bet nobody would care about what it did
if it called itself bloody-fast-next-gen-fs, and of course probably nobody
would use it either. But since it's putting the "ext" and "next default
Linux filesystem in all distros" hat on, it'd better take both the glory and
the crap with it. So, no matter whether ext3 made some mistakes, you can't
just throw it all away while keeping its name to give people the false sense
of comfort.
I am really glad that Theodore changed ext4 to handle the common practice of
truncate/rename sequences. It's absolutely necessary. It's not a "favor for
stupid user space", but a mandatory requirement if you even remotely want it
to be a general-purpose file system. In the end, it doesn't matter how
standard compliant you are - people will only choose the filesystem that is
the most reliable, fastest, and works with the most number of applications.
Hua
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 18:32 ` Alan Cox
@ 2009-03-27 18:40 ` Linus Torvalds
2009-03-27 19:00 ` Alan Cox
2009-03-27 20:27 ` Felipe Contreras
0 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-27 18:40 UTC (permalink / raw)
To: Alan Cox
Cc: Matthew Garrett, Theodore Tso, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Fri, 27 Mar 2009, Alan Cox wrote:
>
> > So the fact is, "people should always use fsync" simply isn't a realistic
> > expectation, nor is it historically accurate.
>
> Far too many people don't - and it is unfortunate but people should learn
> to write quality software.
You're ignoring reality.
Your definition of "quality software" is PURE SH*T.
Look at that laptop disk spinup issue. Look at the performance issue. Look
at something as nebulous as "usability".
If adding fsync's makes software unusable (and it does), then you
shouldn't call that "quality software".
Alan, just please face that reality, and think about it for a moment. If
fsync() was instantaneous, this discussion wouldn't exist. But read the
thread. We're talking 3-5s under NORMAL load, with peaks of minutes.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 18:40 ` Linus Torvalds
@ 2009-03-27 19:00 ` Alan Cox
2009-03-29 9:15 ` Xavier Bestel
2009-03-27 20:27 ` Felipe Contreras
1 sibling, 1 reply; 419+ messages in thread
From: Alan Cox @ 2009-03-27 19:00 UTC (permalink / raw)
To: Linus Torvalds
Cc: Matthew Garrett, Theodore Tso, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
> > Far too many people don't - and it is unfortunate but people should learn
> > to write quality software.
>
> You're ignoring reality.
>
> Your definition of "quality software" is PURE SH*T.
Actually "pure sh*t" is most of the software currently written. The more
code I read the happier I get that the lawmakers are finally sick of it
and going to make damned sure software is subject to liability law. Boy
will that improve things.
> Alan, just please face that reality, and think about it for a moment. If
> fsync() was instantaneous, this discussion wouldn't exist. But read the
> thread. We're talking 3-5s under NORMAL load, with peaks of minutes.
The peaks of minutes is a bug. The 3-5 seconds is the thread discussion.
Alan
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 18:05 ` Linus Torvalds
2009-03-27 18:35 ` Alan Cox
@ 2009-03-27 19:03 ` Theodore Tso
2009-03-27 19:14 ` Alan Cox
2009-03-27 19:19 ` Gene Heskett
1 sibling, 2 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-27 19:03 UTC (permalink / raw)
To: Linus Torvalds
Cc: Alan Cox, Matthew Garrett, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 11:05:58AM -0700, Linus Torvalds wrote:
>
> Alan. Repeat after me: "fsync()+close() is basically useless for any app
> that expects user interaction under load".
>
> That's a FACT, not an opinion.
This is a fact for ext3 with data=ordered mode. Which is the default
and dominant filesystem today, yes. But it's not true for most other
filesystems. Hopefully at some point we will migrate people off of
ext3 to something better. Ext4 is available today, and is much better
at this than ext4. In the long run, btrfs will be better yet. The
issue then is how do we transition people away from making assumptions
that were essentially only true for ext3's data=ordered mode. Ext4,
btrfs, XFS, all will have the property that if you fsync() a small
file, it will be fast, and it won't inflict major delays for other
programs running on the same system.
You've said for a long that that ext3 is really bad in that it
inflicts this --- I agree with you. People should use other
filesystems which are better. This includes ext4, which is completely
format compatible with ext3. They don't even have to switch on
extents support to get better behaviour. Just mounting an ext3
filesystem with ext4 will result in better behaviour.
So maybe we can't tell application writers, *today*, that they should
use fsync(). But in the future, we should be able to tell them that.
Or maybe we can tell them that if they want, they can use some new
interface, such as a proposed fbarrier() that will do the right thing
(including perhaps being a no-op on ext3) no matter what the
filesystem might be.
I do believe that the last thing we should do is tell people that
because of the characteristics of ext3s, which you yourself have said
sucks, and which we've largely fixed for ext4, and which isn't a
problem with other filesystems, including some that may likely replace
ext3 *and* ext4, that we should give people advice that will lock
applications into doing some very bad things for the indefinite
future.
And I'm not blaming userspace; this is at least as much, if not
entirely, ext3's fault. What that means is we need to work on a way
of providing a transition path back to a better place for the overall
system, which includes both the kernel and userspace application
libraries, such as those found in GNOME, KDE, et. al.
> So look for a middle ground. Not this crazy militant "user apps must do
> fsync()" crap. Because that is simply not a realistic scenario.
Agreed, we need a middle ground. We need a transition path that
recognizes that ext3 won't be the dominant filesystem for Linux in
perpetuity, and that ext3's data=ordered semantics will someday no
longer be a major factor in application design. fbarrier() semantics
might be one approach; there may be others. It's something we need to
figure out.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 19:03 ` Theodore Tso
@ 2009-03-27 19:14 ` Alan Cox
2009-03-27 19:32 ` Theodore Tso
2009-03-27 19:19 ` Gene Heskett
1 sibling, 1 reply; 419+ messages in thread
From: Alan Cox @ 2009-03-27 19:14 UTC (permalink / raw)
To: Theodore Tso
Cc: Linus Torvalds, Matthew Garrett, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
> Agreed, we need a middle ground. We need a transition path that
> recognizes that ext3 won't be the dominant filesystem for Linux in
> perpetuity, and that ext3's data=ordered semantics will someday no
> longer be a major factor in application design. fbarrier() semantics
> might be one approach; there may be others. It's something we need to
> figure out.
Would making close imply fbarrier() rather than fsync() work for this ?
That would give people the ordering they want even if they are less
careful but wouldn't give the media error cases - which are less
interesting.
Alan
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 7:57 ` Jens Axboe
2009-03-27 14:13 ` Theodore Tso
@ 2009-03-27 19:14 ` Chris Mason
1 sibling, 0 replies; 419+ messages in thread
From: Chris Mason @ 2009-03-27 19:14 UTC (permalink / raw)
To: Jens Axboe
Cc: Linus Torvalds, Jeff Garzik, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Fri, 2009-03-27 at 08:57 +0100, Jens Axboe wrote:
> On Wed, Mar 25 2009, Linus Torvalds wrote:
> >
> >
> > On Wed, 25 Mar 2009, Jeff Garzik wrote:
> > >
> > > It is clearly possible to implement an fsync(2) that causes FLUSH CACHE to be
> > > issued, without adding full barrier support to a filesystem. It is likely
> > > doable to avoid touching per-filesystem code at all, if we issue the flush
> > > from a generic fsync(2) code path in the kernel.
> >
> > We could easily do that. It would even work for most cases. The
> > problematic ones are where filesystems do their own disk management, but I
> > guess those people can do their own fsync() management too.
> >
> > Somebody send me the patch, we can try it out.
>
> Here's a simple patch that does that. Not even tested, it compiles. Note
> that file systems that currently do blkdev_issue_flush() in their
> ->sync() should then get it removed.
>
The filesystems vary a bit, but in general the perfect fsync (in a mail
server workload) works something like this:
step1: write out and wait for any dirty data
step2: join the running transaction
step3: hang around a bit and wait for friends and neighbors
step4: commit the transaction
step4a: write the log blocks
step4b: barrier. This barrier also makes sure the data is on disk
step4c: write the commit block
step4d: barrier. This barrier makes sure the commit block is on disk.
For ext34 and reiserfs, steps 4b,c,d are actually one call to submit_bh
where two caches flushes are done for us, but they really are two cache
flushes.
During step 3, we collect a bunch of other procs who are hopefully also
running fsync. If we collect 50 procs, then single the barrier in step
5b does a cache flush on the data writes of all 50. 50 flushes this
patch does would be one flush if the FS did it right.
In a multi-process fsync heavy workload, every extra barrier is going to
have work to do because someone is always sending data down.
The flushes done by this patch also aren't helpful for the journaled
filesystem. If we remove the barriers from step 4b or 4d, we no longer
have a consistent FS on power failure. Log checksumming may allow us to
get rid of the barrier in step 4b, but then we wouldn't know the data
blocks were on disk before the transaction commit, and we've had a few
discussions on that already over the last two weeks.
The patch also assumes the FS has one bdev, which isn't true for btrfs.
xfs and btrfs at least want more control over that
filemap_fdatawrite/wait step because we have to repeat it inside the FS
anyway to make sure the inode properly updated before the commit. I'd
much rather see a dumb fsync helper that looks like Jens' vfs_fsync, and
then let the filesystems make their own replacement for the helper in a
new address space operation or super operation.
That way we could also run the fsync on directories without the
directory mutex held, which is much faster.
Also, the patch is sending the return value from blkdev_issue_flush out
through vfs_fsync, which means I think it'll send -EOPNOTSUPP out to
userland.
So, I should be able to run any benchmark that does an fsync with this
patch and find large regressions. It turns out it isn't quite that
easy.
First, I found that ext4 has a neat feature where it is already doing an
extra barrier on every fsync.
Even with that removed, the flushes made ext4 faster (doh!). Looking at
the traces, ext4 and btrfs (which is totally unchanged by this patch)
both do a good job of turning my simple fsync hammering programs into
mostly sequential IO.
The extra flushes are just writing mostly sequential IO, and so they
aren't really hurting overall tput. Plus, Ric reminded me the drive may
have some pass through for larger sequential writes, and ext4 and btrfs
may be doing enough to trigger that.
I should be able to run more complex benchmarks and get really bad
numbers out of this patch with ext4, but instead I'll try ext3...
This is a simple run with fs_mark using 64 threads to create 20k files
with fsync. I timed how long it took to create 900 files. Lower
numbers are better.
unpatched patched
ext3 236s 286s
-chris
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 19:03 ` Theodore Tso
2009-03-27 19:14 ` Alan Cox
@ 2009-03-27 19:19 ` Gene Heskett
2009-03-27 19:48 ` Theodore Tso
1 sibling, 1 reply; 419+ messages in thread
From: Gene Heskett @ 2009-03-27 19:19 UTC (permalink / raw)
To: Theodore Tso, Linus Torvalds, Alan Cox, Matthew Garrett,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Friday 27 March 2009, Theodore Tso wrote:
>On Fri, Mar 27, 2009 at 11:05:58AM -0700, Linus Torvalds wrote:
>> Alan. Repeat after me: "fsync()+close() is basically useless for any app
>> that expects user interaction under load".
>>
>> That's a FACT, not an opinion.
>
>This is a fact for ext3 with data=ordered mode. Which is the default
>and dominant filesystem today, yes. But it's not true for most other
>filesystems. Hopefully at some point we will migrate people off of
>ext3 to something better. Ext4 is available today, and is much better
>at this than ext4. In the long run, btrfs will be better yet. The
>issue then is how do we transition people away from making assumptions
>that were essentially only true for ext3's data=ordered mode. Ext4,
>btrfs, XFS, all will have the property that if you fsync() a small
>file, it will be fast, and it won't inflict major delays for other
>programs running on the same system.
>
>You've said for a long that that ext3 is really bad in that it
>inflicts this --- I agree with you. People should use other
>filesystems which are better. This includes ext4, which is completely
>format compatible with ext3. They don't even have to switch on
>extents support to get better behaviour. Just mounting an ext3
>filesystem with ext4 will result in better behaviour.
Ohkay. But in a 'make xconfig' of 2.6.28.9, how much of ext4 can be turned on
without rendering the old ext3 fstab defaults incompatible should I be forced
to boot a kernel with no ext4 support?
--
Cheers, Gene
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Never look a gift horse in the mouth.
-- Saint Jerome
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 19:14 ` Alan Cox
@ 2009-03-27 19:32 ` Theodore Tso
2009-03-27 20:11 ` Andreas T.Auer
2009-03-31 9:58 ` Neil Brown
0 siblings, 2 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-27 19:32 UTC (permalink / raw)
To: Alan Cox
Cc: Linus Torvalds, Matthew Garrett, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 07:14:26PM +0000, Alan Cox wrote:
> > Agreed, we need a middle ground. We need a transition path that
> > recognizes that ext3 won't be the dominant filesystem for Linux in
> > perpetuity, and that ext3's data=ordered semantics will someday no
> > longer be a major factor in application design. fbarrier() semantics
> > might be one approach; there may be others. It's something we need to
> > figure out.
>
> Would making close imply fbarrier() rather than fsync() work for this ?
> That would give people the ordering they want even if they are less
> careful but wouldn't give the media error cases - which are less
> interesting.
The thought that I had was to create a new system call, fbarrier()
which has the semantics that it will request the filesystem to make
sure that (at least) changes that have been made data blocks to date
should be forced out to disk when the next metadata operation is
committed. For ext3 in data=ordered mode, this would be a no-op. For
other filesystems that had fast/efficient fsync()'s, it could simply
be an fsync(). For other filesystems, it could trigger an
asynchronous writeout, if the journal commit will wait for the
writeout to complete. For yet other filesystems, it might set a flag
that will cause the filesystem to start a synchronous writeout of the
file as part of the commit operations. The bottom line was that what
we could *then* tell application programmers to do is
open/write/fbarrier/close/rename. (And for operating systems where
they don't have fbarrier, they can use autoconf magic to replace
fbarrier with fsync.)
We could potentially make close() imply fbarrier(), but there are
plenty of times when that might not be such a great idea. If we do
that, we're back to requiring synchronous data writes for all files on
close(), which might lead to huge latencies, just as ext3's
data=ordered mode did. And in many cases, where the files in
questions can be easily regenerated (such as object files in a kernel
tree build), there really is no reason why it's a good idea to force
the blocks to disk on close(). In the highly unusual case where we
crash in the middle of a kernel build; we can do a "make clean; make"
and regenerate the object files.
The fundamental idea here is not all files need to be forced to disk
on close. Not all files need fsync(), or even fbarrier(). We can
make the system go much more quickly if we can make a distinction
between these two cases. It can also make SSD drives last longer if
we don't force blocks to disk for non-precious files. If people
disagree with this premise, we can go back to something very much like
ext3's data=ordered mode; but then we get *all* of the problems of
ext3's data=ordered mode, including the unexpected filesystem
latencies that Linus and Ingo have been complaining about so much.
The two are very much related.
Anyway, this is just one idea; I'm not claiming that fbarrier() is the
perfect solution --- but it is one I plan to propose at the upcoming
Linux Storage and Filesystem workshop in San Francisco in a week or
so. Maybe someone else will have a better idea.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 17:57 ` Linus Torvalds
2009-03-27 18:22 ` Linus Torvalds
2009-03-27 18:32 ` Alan Cox
@ 2009-03-27 19:43 ` Jeff Garzik
2009-03-27 20:01 ` Theodore Tso
2009-03-27 21:46 ` Linus Torvalds
2 siblings, 2 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-03-27 19:43 UTC (permalink / raw)
To: Linus Torvalds
Cc: Matthew Garrett, Alan Cox, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
> So the fact is, "people should always use fsync" simply isn't a realistic
> expectation, nor is it historically accurate. Claiming it is is just
> obviously bogus. And claiming that people _should_ do it is crazy, since
> it performs badly enough to simply not be realistic.
>
> Alternatives should be looked at. For desktop apps, the best alternatives
> are likely simply stronger default consistency guarantees. Exactly the
> "we don't guarantee that your data hits the disk, but we do guarantee that
> if you renamed on top of another file, you'll not have lost _both_
> contents".
On the other side of the coin, major desktop apps Firefox and
Thunderbird already use it: Firefox uses sqlite to log open web pages
in case of a crash, and sqlite in turn sync's its journal as any good
database app should. [I think tytso just got them to use fdatasync and
a couple other improvements, to make this not-quite-so-bad]
Thunderbird hits the disk for each email received -- always wonderful
with those 1000-email git-commit-head downloads... :)
So, arguments about "people should..." aside, existing desktops apps
_do_ fsync and we get to deal with the bad performance :/
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 19:19 ` Gene Heskett
@ 2009-03-27 19:48 ` Theodore Tso
2009-03-27 20:02 ` Aaron Cohen
` (2 more replies)
0 siblings, 3 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-27 19:48 UTC (permalink / raw)
To: Gene Heskett
Cc: Linus Torvalds, Alan Cox, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 03:19:10PM -0400, Gene Heskett wrote:
> >You've said for a long that that ext3 is really bad in that it
> >inflicts this --- I agree with you. People should use other
> >filesystems which are better. This includes ext4, which is completely
> >format compatible with ext3. They don't even have to switch on
> >extents support to get better behaviour. Just mounting an ext3
> >filesystem with ext4 will result in better behaviour.
>
> Ohkay. But in a 'make xconfig' of 2.6.28.9, how much of ext4 can be turned on
> without rendering the old ext3 fstab defaults incompatible should I be forced
> to boot a kernel with no ext4 support?
Ext4 doesn't make any non-backwards compatible changes to the
filesystem. So if you just take an ext3 filesystem, and mount it as
ext4, it will work just fine; you will get delayed allocation, you
will get a slightly boosted write priority for kjournald, and then
when you unmount it, that filesystem will work *just* *fine* on a
kernel with no ext4 support. You can mount it as an ext3 filesystem.
If you use tune2fs to enable various ext4 features, such as extents,
etc., then when you mount the filesystem as ext4, you will get the
benefit of extents for any new files which are created, and once you
do that, the filesystem can't be mounted on an ext3-only system, since
ext3 doesn't know how to deal with extents.
And of course, if you want *all* of ext4's benefits, including the
full factor of 6-8 improvement in fsck times, then you will be best
served by creating a new ext4 filesystem from scratch and doing a
backup/reformat/restore pass.
But if you're just annoyed by the large latencies in Ingo's "make
-j32" example, simply taking the ext3 filesystem and mounting it as
ext4 should make those problems go away. And it won't make any
incompatible changes to the filesystem. (This didn't use to be true
in the pre-2.6.26 days, but I insisted on getting this fixed so people
could always mount an ext2 or ext3 filesystems using ext4 without the
kernel making any irreversible filesystem format changes behind the
user's back.)
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 19:43 ` Jeff Garzik
@ 2009-03-27 20:01 ` Theodore Tso
2009-03-27 22:20 ` Jeff Garzik
2009-03-27 21:46 ` Linus Torvalds
1 sibling, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-03-27 20:01 UTC (permalink / raw)
To: Jeff Garzik
Cc: Linus Torvalds, Matthew Garrett, Alan Cox, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 03:43:03PM -0400, Jeff Garzik wrote:
> On the other side of the coin, major desktop apps Firefox and
> Thunderbird already use it: Firefox uses sqlite to log open web pages
> in case of a crash, and sqlite in turn sync's its journal as any good
> database app should. [I think tytso just got them to use fdatasync and
> a couple other improvements, to make this not-quite-so-bad]
I spent a very productive hour-long conversation with the Sqlite
maintainer last weekend. He's already checked in a change to use
fdatasync() everywhere, and he's looking into other changes that would
help avoid needing to do a metadata sync because i_size has changed.
One thing that will definitely help is if applications send the
sqlite-specific SQL command "PRAGMA journal_mode = PERSIST;" when they
first startup the Sqlite database connection. This will cause Sqlite
to keep the rollback journal file to stick around instead of being
deleted and then recreated for each Sqlite transaction. This avoids
at least one fsync() of the directory containing the rollback journal
file. Combined with the change in Sqlite's development branch to use
fdatasync() everwhere that fsync() is used, this should definitely be
a huge improvement.
In addition, Firefox 3.1 is reportedly going to use an union of an
on-disk database and an in-memory database, and every 15 or 30 minutes
or so (presumably tunable via some config parameter), the in-memory
database changes will be synched out to the on-disk database. This
will *definitely* help a lot, and also help improve SSD endurance.
(Right now Firefox 3.0 writes 2.5 megabytes each time you click on a
URL, not counting the Firefox cache; I have my Firefox cache directory
symlinked to /tmp to save on unnecessary SSD writes, and I was still
recording 2600k written to the filesystem each time I clicked on a
HTML link. This means that for every 400 pages that I visit, Firefox
is currently generating a full gigabyte of (in my view, unnecessary)
writes to my SSD, all in the name of maintaining Firefox's "Awesome
Bar". This rather nasty behaviour should hopefully be significantly
improved with Firefox 3.1, or so the Sqlite maintainer tells me.)
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 19:48 ` Theodore Tso
@ 2009-03-27 20:02 ` Aaron Cohen
[not found] ` <727e50150903271301l36cff340l33e813bf6f77b4b@mail.gmail.com>
2009-03-27 22:37 ` Gene Heskett
2 siblings, 0 replies; 419+ messages in thread
From: Aaron Cohen @ 2009-03-27 20:02 UTC (permalink / raw)
To: Theodore Tso, Gene Heskett, Linus Torvalds, Alan Cox,
Matthew Garrett, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
> And of course, if you want *all* of ext4's benefits, including the
> full factor of 6-8 improvement in fsck times, then you will be best
> served by creating a new ext4 filesystem from scratch and doing a
> backup/reformat/restore pass.
>
Does a newly create ext4 partition have all the various goodies
enabled that I'd want, or do I also need to tune2fs some parameters to
get an "optimal" setup?
-- Aaron
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
[not found] ` <727e50150903271301l36cff340l33e813bf6f77b4b@mail.gmail.com>
@ 2009-03-27 20:04 ` Theodore Tso
0 siblings, 0 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-27 20:04 UTC (permalink / raw)
To: aaron
Cc: Gene Heskett, Linus Torvalds, Alan Cox, Matthew Garrett,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 04:01:50PM -0400, Aaron Cohen wrote:
> > And of course, if you want *all* of ext4's benefits, including the
> > full factor of 6-8 improvement in fsck times, then you will be best
> > served by creating a new ext4 filesystem from scratch and doing a
> > backup/reformat/restore pass.
> >
> >
> Does a newly create ext4 partition have all the various goodies enabled that
> I'd want, or do I also need to tune2fs some parameters to get an "optimal"
> setup?
A newly created ext4 partition created with e2fsprogs 1.41.x will have
all of the various goodies enabled. Note that some of what "goodies"
are enabled are controlled by the mke2fs.conf file, which some
distribution packages treat as a config file, so you need to make sure
it is appropriately updated when you update e2fsprogs.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 19:32 ` Theodore Tso
@ 2009-03-27 20:11 ` Andreas T.Auer
2009-03-27 22:01 ` Linus Torvalds
2009-03-31 9:58 ` Neil Brown
1 sibling, 1 reply; 419+ messages in thread
From: Andreas T.Auer @ 2009-03-27 20:11 UTC (permalink / raw)
To: Theodore Tso
Cc: Alan Cox, Linus Torvalds, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On 2009-03-27 20:32 Theodore Tso wrote:
> We could potentially make close() imply fbarrier(), but there are
> plenty of times when that might not be such a great idea. If we do
> that, we're back to requiring synchronous data writes for all files on
> close()
fbarrier() on close() would only mean, that the data shouldn't be
written after the metadata and new metadata shouldn't be written
_before_ old metadata, so you can also delay the committing of the
"dirty" metadata until the real data are written. You don't need
synchronous data writes necessarily.
> The fundamental idea here is not all files need to be forced to disk
> on close. Not all files need fsync(), or even fbarrier().
An fbarrier() on close() would reflect the thinking of a lot of
developers. You might call them stupid and incompetent, but they surely
are the majority. When closing A before creating B, they don't expect
seeing B without a completed A, even though they might expect that
neither A nor B may be written yet, if the system crashes.
If you have smart developers, you might give them something new, so they
could speed things up with some extra code, e.g. when they create data,
which may be restored by other means, but the default behavior of
automatic fbarrier() on close() would be better.
Andreas
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 18:40 ` Linus Torvalds
2009-03-27 19:00 ` Alan Cox
@ 2009-03-27 20:27 ` Felipe Contreras
1 sibling, 0 replies; 419+ messages in thread
From: Felipe Contreras @ 2009-03-27 20:27 UTC (permalink / raw)
To: Linus Torvalds
Cc: Alan Cox, Matthew Garrett, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 8:40 PM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
>
> On Fri, 27 Mar 2009, Alan Cox wrote:
>>
>> > So the fact is, "people should always use fsync" simply isn't a realistic
>> > expectation, nor is it historically accurate.
>>
>> Far too many people don't - and it is unfortunate but people should learn
>> to write quality software.
>
> You're ignoring reality.
>
> Your definition of "quality software" is PURE SH*T.
>
> Look at that laptop disk spinup issue. Look at the performance issue. Look
> at something as nebulous as "usability".
>
> If adding fsync's makes software unusable (and it does), then you
> shouldn't call that "quality software".
>
> Alan, just please face that reality, and think about it for a moment. If
> fsync() was instantaneous, this discussion wouldn't exist. But read the
> thread. We're talking 3-5s under NORMAL load, with peaks of minutes.
We are looking at the wrong problem, the problem is not "should
userspace apps do fsync", the problem is "how do we ensure reliable
data where it's needed".
It would be great if as a user I could have the option to set an fsync
level and say; look, I have a fast fs, and I really care about data
reliability in this server, so, level=0; or, hmm, what is this data
reliability thing? I just want my phone to don't be so damn slow,
level=5.
--
Felipe Contreras
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 14:35 ` Christoph Hellwig
2009-03-27 15:03 ` Ric Wheeler
@ 2009-03-27 20:38 ` Jeff Garzik
2009-03-28 0:14 ` Alan Cox
2009-03-29 8:25 ` Christoph Hellwig
1 sibling, 2 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-03-27 20:38 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Theodore Tso, Jens Axboe, Linus Torvalds, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Christoph Hellwig wrote:
> And please add a tuneable for the flush. Preferable a generic one at
> the block device layer instead of the current mess where every
> filesystem has a slightly different option for barrier usage.
At the very least, IMO the block layer should be able to notice when
barriers need not be translated into cache flushes. Most notably when
wb cache is disabled on the drive, something easy to auto-detect, but
probably a manual switch also, for people with enterprise battery-backed
storage and such.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 11:24 ` Theodore Tso
2009-03-27 14:51 ` Matthew Garrett
@ 2009-03-27 21:11 ` Jeremy Fitzhardinge
2009-03-28 7:45 ` Bojan Smojver
1 sibling, 1 reply; 419+ messages in thread
From: Jeremy Fitzhardinge @ 2009-03-27 21:11 UTC (permalink / raw)
To: Theodore Tso, Matthew Garrett, Linus Torvalds, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Theodore Tso wrote:
> When I was growing up we were trained to *always* check error returns
> from *all* system calls, and to *always* fsync() if it was critical
> that the data survive a crash. That was what competent Unix
> programmers did. And if you are always checking error returns, the
> difference in the Lines of Code between doing it right and doing
> really wasn't that big --- and again, back then fsync() wan't
> expensive. Making fsync expensive was ext3's data=ordered mode's
> fault.
This is a fairly narrow view of correct and possible. How can you make
"cat" fsync? grep? sort? How do they know they're not dealing with
critical data? Apps in general don't know, because "criticality" is a
property of the data itself and how its used, not the tools operating on it.
My point isn't that "there should be a way of doing fsync from a shell
script" (which is probably true anyway), but that authors can't
generally anticipate when their program is going to be dealing with
something important. The conservative approach would be to fsync all
data on every close, but that's almost certainly the wrong thing for
everyone.
If the filesystem has reasonably strong inherent data-preserving
properties, then that's much better than scattering fsync everywhere.
fsync obviously makes sense in specific applications; it makes sense to
fsync when you're guaranteeing that a database commit hits stable
storage, etc. But generic tools can't reasonably perform fsyncs, and
its not reasonable to say that "important data is always handled by
special important data tools".
J
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 19:43 ` Jeff Garzik
2009-03-27 20:01 ` Theodore Tso
@ 2009-03-27 21:46 ` Linus Torvalds
2009-03-27 22:06 ` Jeff Garzik
1 sibling, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-27 21:46 UTC (permalink / raw)
To: Jeff Garzik
Cc: Matthew Garrett, Alan Cox, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Fri, 27 Mar 2009, Jeff Garzik wrote:
>
> On the other side of the coin, major desktop apps Firefox and Thunderbird
> already use it: Firefox uses sqlite [...]
You do know that Firefox had to _disable_ fsync() exactly because not
disabling it was unacceptable? That whole "why does firefox stop for 5
seconds" thing created too many bug-reports.
> So, arguments about "people should..." aside, existing desktops apps _do_
> fsync and we get to deal with the bad performance :/
No they don't. Read up on it. Really.
Guys, I don't understand why you even argue. I've been complaining about
fsync() performance for the last five years or so. It's taken you a long
time to finally realize, and you still don't seem to "get it".
PEOPLE LITERALLY REMOVE 'fsync()' CALLS BECAUSE THEY ARE UNACCEPTABLE FOR
USERS.
It really is that simple.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
[not found] ` <ckcE8-845-9@gated-at.bofh.it>
@ 2009-03-27 21:53 ` Bodo Eggert
2009-03-28 6:51 ` Mike Galbraith
2009-03-28 12:12 ` Theodore Tso
[not found] ` <ckdgW-uA-1@gated-at.bofh.it>
1 sibling, 2 replies; 419+ messages in thread
From: Bodo Eggert @ 2009-03-27 21:53 UTC (permalink / raw)
To: Theodore Tso, Theodore Tso, Matthew Garrett, Linus Torvalds,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Theodore Tso <tytso@mit.edu> wrote:
> On Fri, Mar 27, 2009 at 03:47:05AM +0000, Matthew Garrett wrote:
>> Oh, for the love of a whole range of mythological figures. ext3 didn't
>> train application programmers that they could be careless about fsync().
>> It gave them functionality that they wanted, ie the ability to do things
>> like rename a file over another one with the expectation that these
>> operations would actually occur in the same order that they were
>> generated. More to the point, it let them do this *without* having to
>> call fsync(), resulting in a significant improvement in filesystem
>> usability.
> There were plenty of applications that were written for Unix *and*
> Linux systems before ext3 existed, and they worked just fine. Back
> then, people were drilled into the fact that they needed to use
> fsync(), and fsync() wan't expensive, so there wasn't a big deal in
> terms of usability. The fact that fsync() was expensive was precisely
> because of ext3's data=ordered problem. Writing files safely meant
> that you had to check error returns from fsync() *and* close().
> In fact, if you care about making sure that data doesn't get lost due
> to disk errors, you *must* call fsync().
People don't care about data getting lost if hell breaks lose, but they
care if you ensure that killing the data happens, while keeping the data
is delayed.
Or more simple: Old state: good. New state: good. Inbetween state: bad.
And journaling with delayed data is exposing the inbetween state for
a long period.
> Pavel may have complained
> that fsync() can sometimes drop errors if some other process also has
> the file open and calls fsync() --- but if you don't, and you rely on
> ext3 to magically write the data blocks out as a side effect of the
> commit in data=ordered mode, there's no way to signal the write error
> to the application, and you are *guaranteed * to lose the I/O error
> indication.
Fortunately, IO errors are not common, and errors=remount-ro will
prevent it from being fatal.
> I can tell you quite authoritatively that we didn't implement
> data=ordered to make life easier for application writers, and
> application writers didn't come to ext3 developers asking for this
> convenience. It may have **accidentally** given them convenience that
> they wanted, but it also made fsync() slow.
data=ordered is a sane way of handling data. Otherwise, the millions
would change their ext3 to data=writeback.
>> I'm utterly and screamingly bored of this "Blame userspace" attitude.
>
> I'm not blaming userspace. I'm blaming ourselves, for implementing an
> attractive nuisance, and not realizing that we had implemented an
> attractive nuisance; which years later, is also responsible for these
> latency problems, both with and without fsync() ---- *and* which have
> also traied people into believing that fsync() is always expensive,
> and must be avoided at all costs --- which had not previously been
> true!
I've been waiting ages for a sync() to complete long before reiserfs was
out to make ext2 jealous. Besides that, I don't need the data to be on disk,
I need the update to be mostly-atomic, leaving only small gaps to destroy my
data. Pure chance can (and usually will) give me a better guarantee than what
ext4 did.
I don't know about the logic you put into ext4 to work around the issue, but
I can imagine marking empty-file inodes (and O_APPEND or any i~?) as poisoned
if delayed blocks are appended, and if these poisoned inodes (and depending
operations) don't get played back, it might work acceptably.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 20:11 ` Andreas T.Auer
@ 2009-03-27 22:01 ` Linus Torvalds
0 siblings, 0 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-27 22:01 UTC (permalink / raw)
To: Andreas T.Auer
Cc: Theodore Tso, Alan Cox, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Fri, 27 Mar 2009, Andreas T.Auer wrote:
>
> > The fundamental idea here is not all files need to be forced to disk
> > on close. Not all files need fsync(), or even fbarrier().
>
> An fbarrier() on close() would reflect the thinking of a lot of
> developers.
It also happens to be what pretty much all network filesystems end up
implementing.
That said, there's a reason many people prefer local filesystems to even
high-performance NFS - latency (especially for metadata which even modern
versions of NFS cannot cache effectively) just sucks when you have to go
over the network. It pretty much doesn't matter _how_ fast your network or
server is.
One thing that might make sense is to make "close()" start background
writeout for that file (modulo issues like laptop mode) with low priority.
No, it obviously doesn't guarantee any kind of filesystem coherency, but
it _does_ mean that the window for the bad cases goes from potentially 30
seconds down to fractions of seconds. That's likely quite a bit of
improvement in practice.
IOW, no "hard barriers", but simply more of a "even in the absense of
fsync we simply aim for the user to have to be _really_ unlucky to ever
hit any bad cases".
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 21:46 ` Linus Torvalds
@ 2009-03-27 22:06 ` Jeff Garzik
2009-03-27 22:19 ` Linus Torvalds
0 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-03-27 22:06 UTC (permalink / raw)
To: Linus Torvalds
Cc: Matthew Garrett, Alan Cox, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
>
> On Fri, 27 Mar 2009, Jeff Garzik wrote:
>> On the other side of the coin, major desktop apps Firefox and Thunderbird
>> already use it: Firefox uses sqlite [...]
>
> You do know that Firefox had to _disable_ fsync() exactly because not
> disabling it was unacceptable? That whole "why does firefox stop for 5
> seconds" thing created too many bug-reports.
>
>> So, arguments about "people should..." aside, existing desktops apps _do_
>> fsync and we get to deal with the bad performance :/
>
> No they don't. Read up on it. Really.
What is in Fedora 10 and Debian lenny's iceweasel both definitely sync
to disk, as of today, according to my own tests.
I'm talking about what's in real world user's hands today, not some
hoped-for future version in developer CVS somewhere, depending on build
options and who knows what else...
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 22:06 ` Jeff Garzik
@ 2009-03-27 22:19 ` Linus Torvalds
2009-03-27 22:25 ` Linus Torvalds
` (2 more replies)
0 siblings, 3 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-27 22:19 UTC (permalink / raw)
To: Jeff Garzik
Cc: Matthew Garrett, Alan Cox, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Fri, 27 Mar 2009, Jeff Garzik wrote:
>
> What is in Fedora 10 and Debian lenny's iceweasel both definitely sync to
> disk, as of today, according to my own tests.
Hmm. Go to "about:config" and check your "toolkit.storage.synchronous"
setting.
It _should_ say
default integer 0
and that is what it says for me (yes, on Fedora 10).
The values are: 0 = off, 1 = normal, 2 = full.
If you don't have that "toolkit.storage.synchronous" entry, that means
that you have an older version of firefox-3. And if you have some other
value, it either means somebody changed it, or that Fedora is shipping
with multiple different versions (the "official" Firefox source code
defaults to 1, I think, but they suggested distributions change the
default to 0).
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 20:01 ` Theodore Tso
@ 2009-03-27 22:20 ` Jeff Garzik
0 siblings, 0 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-03-27 22:20 UTC (permalink / raw)
To: Theodore Tso, Jeff Garzik, Linus Torvalds, Matthew Garrett,
Alan Cox, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Theodore Tso wrote:
> On Fri, Mar 27, 2009 at 03:43:03PM -0400, Jeff Garzik wrote:
>> On the other side of the coin, major desktop apps Firefox and
>> Thunderbird already use it: Firefox uses sqlite to log open web pages
>> in case of a crash, and sqlite in turn sync's its journal as any good
>> database app should. [I think tytso just got them to use fdatasync and
>> a couple other improvements, to make this not-quite-so-bad]
>
> I spent a very productive hour-long conversation with the Sqlite
> maintainer last weekend. He's already checked in a change to use
> fdatasync() everywhere, and he's looking into other changes that would
> help avoid needing to do a metadata sync because i_size has changed.
> One thing that will definitely help is if applications send the
> sqlite-specific SQL command "PRAGMA journal_mode = PERSIST;" when they
> first startup the Sqlite database connection. This will cause Sqlite
> to keep the rollback journal file to stick around instead of being
> deleted and then recreated for each Sqlite transaction. This avoids
> at least one fsync() of the directory containing the rollback journal
> file. Combined with the change in Sqlite's development branch to use
> fdatasync() everwhere that fsync() is used, this should definitely be
> a huge improvement.
>
> In addition, Firefox 3.1 is reportedly going to use an union of an
> on-disk database and an in-memory database, and every 15 or 30 minutes
> or so (presumably tunable via some config parameter), the in-memory
> database changes will be synched out to the on-disk database. This
> will *definitely* help a lot, and also help improve SSD endurance.
Definitely, though it will be an interesting balance once user feedback
starts to roll in...
Firefox started doing this stuff because, when it or the window system
or OS crashed, users like my wife would not lose the 50+ tabs they've
opened and were actively using. :)
So it's hard to see how users will react to going back to the days when
firefox crashes once again mean lost work. [referring to the 15-30 min
delay, not fsync(2)]
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 22:19 ` Linus Torvalds
@ 2009-03-27 22:25 ` Linus Torvalds
2009-03-28 1:19 ` Jeff Garzik
2009-03-28 0:18 ` Jeff Garzik
2009-03-28 2:16 ` Mark Lord
2 siblings, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-27 22:25 UTC (permalink / raw)
To: Jeff Garzik
Cc: Matthew Garrett, Alan Cox, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Fri, 27 Mar 2009, Linus Torvalds wrote:
>
> The values are: 0 = off, 1 = normal, 2 = full.
Of course, I don't actually know that "off" really means "never fsync". It
may be that it only cuts down on the number of fsync's. I do know that
firefox with the original defaults ("fsync everywhere") was totally
unusable, and that got fixed.
But maybe it got fixed to "only pauses occasionally" rather than "every
single page load brings everything to a screetching halt".
Of course, your browsing history database is an excellent example of
something you should _not_ care about that much, and where performance is
a lot more important than "ooh, if the machine goes down suddenly, I need
to be 100% up-to-date". Using fsync on that thing was just stupid, even
regardless of any ext3 issues.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 19:48 ` Theodore Tso
2009-03-27 20:02 ` Aaron Cohen
[not found] ` <727e50150903271301l36cff340l33e813bf6f77b4b@mail.gmail.com>
@ 2009-03-27 22:37 ` Gene Heskett
2009-03-27 22:55 ` Theodore Tso
2 siblings, 1 reply; 419+ messages in thread
From: Gene Heskett @ 2009-03-27 22:37 UTC (permalink / raw)
To: Theodore Tso, Linus Torvalds, Alan Cox, Matthew Garrett,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Friday 27 March 2009, Theodore Tso wrote:
>On Fri, Mar 27, 2009 at 03:19:10PM -0400, Gene Heskett wrote:
>> >You've said for a long that that ext3 is really bad in that it
>> >inflicts this --- I agree with you. People should use other
>> >filesystems which are better. This includes ext4, which is completely
>> >format compatible with ext3. They don't even have to switch on
>> >extents support to get better behaviour. Just mounting an ext3
>> >filesystem with ext4 will result in better behaviour.
>>
>> Ohkay. But in a 'make xconfig' of 2.6.28.9, how much of ext4 can be
>> turned on without rendering the old ext3 fstab defaults incompatible
>> should I be forced to boot a kernel with no ext4 support?
>
>Ext4 doesn't make any non-backwards compatible changes to the
>filesystem. So if you just take an ext3 filesystem, and mount it as
>ext4, it will work just fine; you will get delayed allocation, you
>will get a slightly boosted write priority for kjournald, and then
>when you unmount it, that filesystem will work *just* *fine* on a
>kernel with no ext4 support. You can mount it as an ext3 filesystem.
>
>If you use tune2fs to enable various ext4 features, such as extents,
>etc., then when you mount the filesystem as ext4, you will get the
>benefit of extents for any new files which are created, and once you
>do that, the filesystem can't be mounted on an ext3-only system, since
>ext3 doesn't know how to deal with extents.
>
>And of course, if you want *all* of ext4's benefits, including the
>full factor of 6-8 improvement in fsck times, then you will be best
>served by creating a new ext4 filesystem from scratch and doing a
>backup/reformat/restore pass.
>
>But if you're just annoyed by the large latencies in Ingo's "make
>-j32" example, simply taking the ext3 filesystem and mounting it as
>ext4 should make those problems go away. And it won't make any
>incompatible changes to the filesystem. (This didn't use to be true
>in the pre-2.6.26 days, but I insisted on getting this fixed so people
>could always mount an ext2 or ext3 filesystems using ext4 without the
>kernel making any irreversible filesystem format changes behind the
>user's back.)
>
> - Ted
Thanks Ted, I will build 2.6.28.9 with this:
[root@coyote linux-2.6.28.9]# grep EXT .config
[...]
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_EXT2_FS=m
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
# CONFIG_EXT2_FS_SECURITY is not set
CONFIG_EXT2_FS_XIP=y
CONFIG_EXT3_FS=m
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_EXT4_FS=y
# CONFIG_EXT4DEV_COMPAT is not set
# CONFIG_EXT4_FS_XATTR is not set
CONFIG_GENERIC_FIND_NEXT_BIT=y
Anything there that isn't compatible?
I'll build that, but only switch the /amandatapes mount in fstab for testing
tonight unless you spot something above.
Thanks.
>To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
>the body of a message to majordomo@vger.kernel.org
>More majordomo info at http://vger.kernel.org/majordomo-info.html
>Please read the FAQ at http://www.tux.org/lkml/
--
Cheers, Gene
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Losing your drivers' license is just God's way of saying "BOOGA, BOOGA!"
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 22:37 ` Gene Heskett
@ 2009-03-27 22:55 ` Theodore Tso
2009-03-28 0:42 ` Gene Heskett
0 siblings, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-03-27 22:55 UTC (permalink / raw)
To: Gene Heskett
Cc: Linus Torvalds, Alan Cox, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 06:37:08PM -0400, Gene Heskett wrote:
> Thanks Ted, I will build 2.6.28.9 with this:
> [root@coyote linux-2.6.28.9]# grep EXT .config
> [...]
> CONFIG_PAGEFLAGS_EXTENDED=y
> CONFIG_EXT2_FS=m
> CONFIG_EXT2_FS_XATTR=y
> CONFIG_EXT2_FS_POSIX_ACL=y
> # CONFIG_EXT2_FS_SECURITY is not set
> CONFIG_EXT2_FS_XIP=y
> CONFIG_EXT3_FS=m
> CONFIG_EXT3_FS_XATTR=y
> CONFIG_EXT3_FS_POSIX_ACL=y
> CONFIG_EXT3_FS_SECURITY=y
> CONFIG_EXT4_FS=y
> # CONFIG_EXT4DEV_COMPAT is not set
> # CONFIG_EXT4_FS_XATTR is not set
> CONFIG_GENERIC_FIND_NEXT_BIT=y
>
> Anything there that isn't compatible?
Well, if you need extended attributes (if you are using SELinux, then
you need extended attributes) you'll want to enable
CONFIG_EXT4_FS_XATTR.
If you want to use ext4 on your root filesystem, you may need to take
some special measures depending on your distribution. Using the boot
command-line option rootfstype=ext4 will work on many distributions,
but I haven't tested all of them. It definitely works on Ubuntu, and
it should work if you're not using an initial ramdisk.
Oh yeah; the other thing I should warn you about is that 2.6.28.9
won't have the replace-via-rename and replace-via-truncate
workarounds. So if you crash and your applications aren't using
fsync(), you could end up seeing the zero-length files. I very much
doubt that will make a big difference for your /amandatapes partition,
but if you want to use this for the filesystem where you have home
directory, you'll probably want the workaround patches. I've heard
reports of KDE users telling me that when they initial start up their
desktop, literally hundreds of files are rewritten by their desktop,
just starting it up. (Why? Who knows? It's not good for SSD
endurance, in any case.) But if you crash while initially logging in,
your KDE configuration files might get wiped out w/o the
replace-via-rename and replace-via-truncate workaround patches.
> I'll build that, but only switch the /amandatapes mount in fstab for testing
> tonight unless you spot something above.
OK, so you're not worried about your root filesystem, and presumably
the issue with your home directory won't be an issue for you either.
The only question then is whether you need extended attribute support.
Regards,
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
[not found] ` <ck93x-2oJ-3@gated-at.bofh.it>
@ 2009-03-27 23:22 ` Bodo Eggert
0 siblings, 0 replies; 419+ messages in thread
From: Bodo Eggert @ 2009-03-27 23:22 UTC (permalink / raw)
To: Andrew Morton, Linus Torvalds, Theodore Tso, David Rees,
Jesper Krogh, Linux Kernel Mailing List
Andrew Morton <akpm@linux-foundation.org> wrote:
> On Thu, 26 Mar 2009 18:03:15 -0700 (PDT) Linus Torvalds
>> On Thu, 26 Mar 2009, Andrew Morton wrote:
>> > userspace can get closer than the kernel can.
>>
>> Andrew, that's SIMPLY NOT TRUE.
>>
>> You state that without any amount of data to back it up, as if it was some
>> kind of truism. It's not.
>
> I've seen you repeatedly fiddle the in-kernel defaults based on
> in-field experience. That could just as easily have been done in
> initscripts by distros, and much more effectively because it doesn't
> need a new kernel. That's data.
>
> The fact that this hasn't even been _attempted_ (afaik) is deplorable.
>
> Why does everyone just sit around waiting for the kernel to put a new
> value into two magic numbers which userspace scripts could have set?
Because the user controlling userspace does not understand your knobs.
I want to say "file cache minimum 128 MB (otherwise my system crawls),
max 1,5 GB (or the same happens due to swapping)". Maybe I'll want to say
"start writing if you have data for one second of max. transfer rate"
(obviously a per-device setting).
> My /etc/rc.local has been tweaking dirty_ratio, dirty_background_ratio
> and swappiness for many years. I guess I'm just incredibly advanced.
These settings are good, but they can't prevent the filecache from
growing until the mouse driver gets swapped out.
You happened to find good settings for your setup. Maybe I did once, too,
but it stopped working for the pathological cases in which I'd need
tweaking (which includes normal operation on my laptop), and having a
numeric swappiness without units or a guide did not help. And instead of
dedicating a week to loading NASA images in GIMP (which was a pathological
case on my old desktop) in order to find acceptable settings, I just didn't
do that then.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 20:38 ` Jeff Garzik
@ 2009-03-28 0:14 ` Alan Cox
2009-03-29 8:25 ` Christoph Hellwig
1 sibling, 0 replies; 419+ messages in thread
From: Alan Cox @ 2009-03-28 0:14 UTC (permalink / raw)
To: Jeff Garzik
Cc: Christoph Hellwig, Theodore Tso, Jens Axboe, Linus Torvalds,
Ingo Molnar, Arjan van de Ven, Andrew Morton, Peter Zijlstra,
Nick Piggin, David Rees, Jesper Krogh, Linux Kernel Mailing List
On Fri, 27 Mar 2009 16:38:35 -0400
Jeff Garzik <jeff@garzik.org> wrote:
> Christoph Hellwig wrote:
> > And please add a tuneable for the flush. Preferable a generic one at
> > the block device layer instead of the current mess where every
> > filesystem has a slightly different option for barrier usage.
>
> At the very least, IMO the block layer should be able to notice when
> barriers need not be translated into cache flushes. Most notably when
> wb cache is disabled on the drive, something easy to auto-detect, but
> probably a manual switch also, for people with enterprise battery-backed
> storage and such.
The storage drivers for those cases already generally know this and treat
cache flush requests as "has hit nvram", even the non enterprise ones.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 22:19 ` Linus Torvalds
2009-03-27 22:25 ` Linus Torvalds
@ 2009-03-28 0:18 ` Jeff Garzik
2009-03-28 1:45 ` Linus Torvalds
2009-03-28 2:16 ` Mark Lord
2 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-03-28 0:18 UTC (permalink / raw)
To: Linus Torvalds
Cc: Matthew Garrett, Alan Cox, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
>
> On Fri, 27 Mar 2009, Jeff Garzik wrote:
>> What is in Fedora 10 and Debian lenny's iceweasel both definitely sync to
>> disk, as of today, according to my own tests.
>
> Hmm. Go to "about:config" and check your "toolkit.storage.synchronous"
> setting.
>
> It _should_ say
>
> default integer 0
>
> and that is what it says for me (yes, on Fedora 10).
>
> The values are: 0 = off, 1 = normal, 2 = full.
Definitely a difference! 1 for both, here. Deb is a fresh OS install
and fresh homedir, but my F10 has been through many OS and ff config
upgrades over the years.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 22:55 ` Theodore Tso
@ 2009-03-28 0:42 ` Gene Heskett
0 siblings, 0 replies; 419+ messages in thread
From: Gene Heskett @ 2009-03-28 0:42 UTC (permalink / raw)
To: Theodore Tso, Linus Torvalds, Alan Cox, Matthew Garrett,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Friday 27 March 2009, Theodore Tso wrote:
>On Fri, Mar 27, 2009 at 06:37:08PM -0400, Gene Heskett wrote:
>> Thanks Ted, I will build 2.6.28.9 with this:
>> [root@coyote linux-2.6.28.9]# grep EXT .config
>> [...]
>> CONFIG_PAGEFLAGS_EXTENDED=y
>> CONFIG_EXT2_FS=m
>> CONFIG_EXT2_FS_XATTR=y
>> CONFIG_EXT2_FS_POSIX_ACL=y
>> # CONFIG_EXT2_FS_SECURITY is not set
>> CONFIG_EXT2_FS_XIP=y
>> CONFIG_EXT3_FS=m
>> CONFIG_EXT3_FS_XATTR=y
>> CONFIG_EXT3_FS_POSIX_ACL=y
>> CONFIG_EXT3_FS_SECURITY=y
>> CONFIG_EXT4_FS=y
>> # CONFIG_EXT4DEV_COMPAT is not set
>> # CONFIG_EXT4_FS_XATTR is not set
>> CONFIG_GENERIC_FIND_NEXT_BIT=y
>>
>> Anything there that isn't compatible?
>
>Well, if you need extended attributes (if you are using SELinux, then
>you need extended attributes) you'll want to enable
>CONFIG_EXT4_FS_XATTR.
>
>If you want to use ext4 on your root filesystem, you may need to take
>some special measures depending on your distribution. Using the boot
>command-line option rootfstype=ext4 will work on many distributions,
>but I haven't tested all of them. It definitely works on Ubuntu, and
>it should work if you're not using an initial ramdisk.
>
>Oh yeah; the other thing I should warn you about is that 2.6.28.9
>won't have the replace-via-rename and replace-via-truncate
>workarounds. So if you crash and your applications aren't using
>fsync(), you could end up seeing the zero-length files. I very much
>doubt that will make a big difference for your /amandatapes partition,
>but if you want to use this for the filesystem where you have home
>directory, you'll probably want the workaround patches. I've heard
>reports of KDE users telling me that when they initial start up their
>desktop, literally hundreds of files are rewritten by their desktop,
>just starting it up. (Why? Who knows? It's not good for SSD
>endurance, in any case.) But if you crash while initially logging in,
>your KDE configuration files might get wiped out w/o the
>replace-via-rename and replace-via-truncate workaround patches.
>
>> I'll build that, but only switch the /amandatapes mount in fstab for
>> testing tonight unless you spot something above.
>
>OK, so you're not worried about your root filesystem, and presumably
>the issue with your home directory won't be an issue for you either.
>The only question then is whether you need extended attribute support.
>
>Regards,
>
> - Ted
Thanks Ted, its building w/o the extra CONFIG_EXT4_FS_XATTR atm, but I'll
enable that and do it again before I reboot. I had just fired off the build
when I saw your answer. NBD, my 'makeit' script is pretty complete.
Thank you.
--
Cheers, Gene
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
The only rose without thorns is friendship.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 22:25 ` Linus Torvalds
@ 2009-03-28 1:19 ` Jeff Garzik
2009-03-28 1:30 ` David Miller
` (3 more replies)
0 siblings, 4 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-03-28 1:19 UTC (permalink / raw)
To: Linus Torvalds
Cc: Matthew Garrett, Alan Cox, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
> Of course, your browsing history database is an excellent example of
> something you should _not_ care about that much, and where performance is
> a lot more important than "ooh, if the machine goes down suddenly, I need
> to be 100% up-to-date". Using fsync on that thing was just stupid, even
If you are doing a ton of web-based work with a bunch of tabs or windows
open, you really like the post-crash restoration methods that Firefox
now employs. Some users actually do want to checkpoint/restore their
web work, regardless of whether it was the browser, the window system or
the OS that crashed.
You may not care about that, but others do care about the integrity of
the database that stores the active FF state (Web URLs currently open),
a database which necessarily changes for each URL visited.
As an aside, I find it highly ironic that Firefox gained useful session
management around the same time that some GNOME jarhead no-op'd GNOME
session management[1] in X.
Jeff
[1] http://np237.livejournal.com/22014.html
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 1:19 ` Jeff Garzik
@ 2009-03-28 1:30 ` David Miller
2009-03-28 2:19 ` Mark Lord
` (2 subsequent siblings)
3 siblings, 0 replies; 419+ messages in thread
From: David Miller @ 2009-03-28 1:30 UTC (permalink / raw)
To: jeff; +Cc: torvalds, mjg59, alan, tytso, akpm, drees76, jesper, linux-kernel
From: Jeff Garzik <jeff@garzik.org>
Date: Fri, 27 Mar 2009 21:19:12 -0400
> As an aside, I find it highly ironic that Firefox gained useful
> session management around the same time that some GNOME jarhead
> no-op'd GNOME session management[1] in X.
Great, now all the KDE boo-birds might have to switch back,
or even go to xfce4.
If KDE and GNOME both make a bad release at the same time,
then we'll really be in trouble. :-)
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 0:18 ` Jeff Garzik
@ 2009-03-28 1:45 ` Linus Torvalds
2009-03-28 2:53 ` Jeff Garzik
0 siblings, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-28 1:45 UTC (permalink / raw)
To: Jeff Garzik
Cc: Matthew Garrett, Alan Cox, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Fri, 27 Mar 2009, Jeff Garzik wrote:
>
> Definitely a difference! 1 for both, here. Deb is a fresh OS install and
> fresh homedir, but my F10 has been through many OS and ff config upgrades over
> the years.
Hmm. I wonder where firefox gets its defaults then.
I can well imagine that Debian has a different firefox build, with
different defaults. But if your F10 thing also is set to 1, and still
shows as "default", then that's odd, considering that mine shows 0.
I have 'rpm -q firefox': firefox-3.0.7-1.fc10.x86_64.
Is yours a 32-bit one? Maybe it comes with different defaults?
And maybe firefox just has a very odd config setup and I don't understand
what "default" means at all. Gene says he doesn't have that
toolkit.storage.synchronous thing at all.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 22:19 ` Linus Torvalds
2009-03-27 22:25 ` Linus Torvalds
2009-03-28 0:18 ` Jeff Garzik
@ 2009-03-28 2:16 ` Mark Lord
2009-03-28 2:38 ` Linus Torvalds
2 siblings, 1 reply; 419+ messages in thread
From: Mark Lord @ 2009-03-28 2:16 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jeff Garzik, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Linus Torvalds wrote:
>
> On Fri, 27 Mar 2009, Jeff Garzik wrote:
>> What is in Fedora 10 and Debian lenny's iceweasel both definitely sync to
>> disk, as of today, according to my own tests.
>
> Hmm. Go to "about:config" and check your "toolkit.storage.synchronous"
> setting.
..
> If you don't have that "toolkit.storage.synchronous" entry, that means
> that you have an older version of firefox-3.
..
Okay, I'll bite. Exactly which version of FF has that variable?
Cuz it ain't in the FF 3.0.8 that I'm running here.
Thanks
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 1:19 ` Jeff Garzik
2009-03-28 1:30 ` David Miller
@ 2009-03-28 2:19 ` Mark Lord
2009-03-28 2:49 ` Jeff Garzik
2009-03-29 0:33 ` david
2009-03-31 15:01 ` Thierry Vignaud
3 siblings, 1 reply; 419+ messages in thread
From: Mark Lord @ 2009-03-28 2:19 UTC (permalink / raw)
To: Jeff Garzik
Cc: Linus Torvalds, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Jeff Garzik wrote:
> Linus Torvalds wrote:
>> Of course, your browsing history database is an excellent example of
>> something you should _not_ care about that much, and where performance
>> is a lot more important than "ooh, if the machine goes down suddenly,
>> I need to be 100% up-to-date". Using fsync on that thing was just
>> stupid, even
>
> If you are doing a ton of web-based work with a bunch of tabs or windows
> open, you really like the post-crash restoration methods that Firefox
> now employs. Some users actually do want to checkpoint/restore their
> web work, regardless of whether it was the browser, the window system or
> the OS that crashed.
>
> You may not care about that, but others do care about the integrity of
> the database that stores the active FF state (Web URLs currently open),
> a database which necessarily changes for each URL visited.
..
fsync() isn't going to affect that one way or another
unless the entire kernel freezes and dies.
Firefox locks up the GUI here from time to time,
but the kernel still flushes pages to disk,
and even more quickly when alt-sysrq-s is used.
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 2:16 ` Mark Lord
@ 2009-03-28 2:38 ` Linus Torvalds
2009-03-28 11:57 ` Andreas T.Auer
0 siblings, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-28 2:38 UTC (permalink / raw)
To: Mark Lord
Cc: Jeff Garzik, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Fri, 27 Mar 2009, Mark Lord wrote:
>
> Okay, I'll bite. Exactly which version of FF has that variable?
> Cuz it ain't in the FF 3.0.8 that I'm running here.
I _thought_ it was there since rc2 of FF-3, but clearly there are odd
things afoot. You're the second person to report it not there.
I'd suspect that I mistyped it, but I just cut-and-pasted it from my email
to make sure. Maybe you did. What happens if you just write "sync" in the
Filter: box? Nothing matches?
Do you see firefox pausing a lot under disk load? If you just add that
"toolkit.storage.synchronous" value by hand (right-click in the preference
window, do "New" -> "Integer"), and write it in as zero, does it change
behavior?
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 2:19 ` Mark Lord
@ 2009-03-28 2:49 ` Jeff Garzik
2009-03-28 13:29 ` Stefan Richter
0 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-03-28 2:49 UTC (permalink / raw)
To: Mark Lord
Cc: Linus Torvalds, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Mark Lord wrote:
> fsync() isn't going to affect that one way or another
> unless the entire kernel freezes and dies.
...which is one of the three common crash scenarios listed (and
experienced in the field).
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 1:45 ` Linus Torvalds
@ 2009-03-28 2:53 ` Jeff Garzik
2009-03-28 2:56 ` Zid Null
2009-03-28 3:55 ` Gene Heskett
0 siblings, 2 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-03-28 2:53 UTC (permalink / raw)
To: Linus Torvalds
Cc: Matthew Garrett, Alan Cox, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
>
> On Fri, 27 Mar 2009, Jeff Garzik wrote:
>> Definitely a difference! 1 for both, here. Deb is a fresh OS install and
>> fresh homedir, but my F10 has been through many OS and ff config upgrades over
>> the years.
>
> Hmm. I wonder where firefox gets its defaults then.
>
> I can well imagine that Debian has a different firefox build, with
> different defaults. But if your F10 thing also is set to 1, and still
> shows as "default", then that's odd, considering that mine shows 0.
>
> I have 'rpm -q firefox': firefox-3.0.7-1.fc10.x86_64.
>
> Is yours a 32-bit one? Maybe it comes with different defaults?
>
> And maybe firefox just has a very odd config setup and I don't understand
> what "default" means at all. Gene says he doesn't have that
> toolkit.storage.synchronous thing at all.
In my case the toolkit.storage.synchronous is present in both, set to 1
in Deb and bolded and set to 1 in F10 (firefox-3.0.7-1.fc10.x86_64).
The latter's bold typeface makes me think my F10 FF
toolkit.storage.synchronous setting is NOT set to the F10 default --
although I have never heard of this setting, and have certainly not
manually tweaked it. The only FF setting I manually tweak is cache
directory.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 2:53 ` Jeff Garzik
@ 2009-03-28 2:56 ` Zid Null
2009-03-28 3:55 ` Gene Heskett
1 sibling, 0 replies; 419+ messages in thread
From: Zid Null @ 2009-03-28 2:56 UTC (permalink / raw)
To: Jeff Garzik
Cc: Linus Torvalds, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
2009/3/28 Jeff Garzik <jeff@garzik.org>:
> Linus Torvalds wrote:
>>
>> On Fri, 27 Mar 2009, Jeff Garzik wrote:
>>>
>>> Definitely a difference! 1 for both, here. Deb is a fresh OS install
>>> and
>>> fresh homedir, but my F10 has been through many OS and ff config upgrades
>>> over
>>> the years.
>>
>> Hmm. I wonder where firefox gets its defaults then.
>> I can well imagine that Debian has a different firefox build, with
>> different defaults. But if your F10 thing also is set to 1, and still shows
>> as "default", then that's odd, considering that mine shows 0.
>>
>> I have 'rpm -q firefox': firefox-3.0.7-1.fc10.x86_64.
>>
>> Is yours a 32-bit one? Maybe it comes with different defaults?
>>
>> And maybe firefox just has a very odd config setup and I don't understand
>> what "default" means at all. Gene says he doesn't have that
>> toolkit.storage.synchronous thing at all.
>
> In my case the toolkit.storage.synchronous is present in both, set to 1 in
> Deb and bolded and set to 1 in F10 (firefox-3.0.7-1.fc10.x86_64).
I compiled my own firefox under gentoo, not present.
Mozilla Firefox 3.0.7, Copyright (c) 1998 - 2009 mozilla.org
> The latter's bold typeface makes me think my F10 FF
> toolkit.storage.synchronous setting is NOT set to the F10 default --
> although I have never heard of this setting, and have certainly not manually
> tweaked it. The only FF setting I manually tweak is cache directory.
>
> Jeff
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 2:53 ` Jeff Garzik
2009-03-28 2:56 ` Zid Null
@ 2009-03-28 3:55 ` Gene Heskett
2009-03-28 11:29 ` Alejandro Riveira Fernández
1 sibling, 1 reply; 419+ messages in thread
From: Gene Heskett @ 2009-03-28 3:55 UTC (permalink / raw)
To: Jeff Garzik
Cc: Linus Torvalds, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Friday 27 March 2009, Jeff Garzik wrote:
>Linus Torvalds wrote:
>> On Fri, 27 Mar 2009, Jeff Garzik wrote:
>>> Definitely a difference! 1 for both, here. Deb is a fresh OS install
>>> and fresh homedir, but my F10 has been through many OS and ff config
>>> upgrades over the years.
>>
>> Hmm. I wonder where firefox gets its defaults then.
>>
>> I can well imagine that Debian has a different firefox build, with
>> different defaults. But if your F10 thing also is set to 1, and still
>> shows as "default", then that's odd, considering that mine shows 0.
>>
>> I have 'rpm -q firefox': firefox-3.0.7-1.fc10.x86_64.
>>
>> Is yours a 32-bit one? Maybe it comes with different defaults?
>>
>> And maybe firefox just has a very odd config setup and I don't understand
>> what "default" means at all. Gene says he doesn't have that
>> toolkit.storage.synchronous thing at all.
>
>In my case the toolkit.storage.synchronous is present in both, set to 1
>in Deb and bolded and set to 1 in F10 (firefox-3.0.7-1.fc10.x86_64).
>
>The latter's bold typeface makes me think my F10 FF
>toolkit.storage.synchronous setting is NOT set to the F10 default --
>although I have never heard of this setting, and have certainly not
>manually tweaked it. The only FF setting I manually tweak is cache
>directory.
>
> Jeff
I just let FF update itself to 3.0.8 (from mozilla, not fedora) and there is
no 'toolkit' stuff whatsoever in about:config.
Is this perchance some extension I don't have installed?
>
>--
>To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
>the body of a message to majordomo@vger.kernel.org
>More majordomo info at http://vger.kernel.org/majordomo-info.html
>Please read the FAQ at http://www.tux.org/lkml/
--
Cheers, Gene
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Jacquin's Postulate on Democratic Government:
No man's life, liberty, or property are safe while the
legislature is in session.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 1:25 ` Andrew Morton
` (2 preceding siblings ...)
2009-03-27 3:38 ` Linus Torvalds
@ 2009-03-28 5:06 ` Ingo Molnar
2009-04-01 21:03 ` Lennart Sorensen
4 siblings, 0 replies; 419+ messages in thread
From: Ingo Molnar @ 2009-03-28 5:06 UTC (permalink / raw)
To: Andrew Morton
Cc: Linus Torvalds, Theodore Tso, David Rees, Jesper Krogh,
Linux Kernel Mailing List
* Andrew Morton <akpm@linux-foundation.org> wrote:
> On Thu, 26 Mar 2009 18:03:15 -0700 (PDT) Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> > On Thu, 26 Mar 2009, Andrew Morton wrote:
> > >
> > > userspace can get closer than the kernel can.
> >
> > Andrew, that's SIMPLY NOT TRUE.
> >
> > You state that without any amount of data to back it up, as if it was some
> > kind of truism. It's not.
>
> I've seen you repeatedly fiddle the in-kernel defaults based on
> in-field experience. That could just as easily have been done in
> initscripts by distros, and much more effectively because it
> doesn't need a new kernel. That's data.
>
> The fact that this hasn't even been _attempted_ (afaik) is
> deplorable.
>
> Why does everyone just sit around waiting for the kernel to put a
> new value into two magic numbers which userspace scripts could
> have set?
Three reasons.
Firstly, this utterly does not scale.
Microsoft has built an empire on the 'power of the default settings'
- why cannot Linux kernel developers finally realize the obvious:
that setting defaults centrally is an incredibly efficient way of
shaping the end result?
The second reason is that in the past 10 years we have gone through
a couple of toxic cycles of distros trying to work around kernel
behavior by setting sysctls. That was done and then forgotten, and a
few years down the line some kernel maintainer found [related to a
bugreport] that distro X set that sysctl to value Y which now had a
different behavior and immediately chastised the distro broken and
refused to touch the bugreport and refused bugreports from that
distro from that point on.
We've seen this again, and again, and i remember 2-3 specific
examples and i know how badly this experience trickles down on the
distro side.
The end result: pretty much any tuning of kernel defaults is done
extremely reluctantly by distros. They consider kernel behavior a
domain of the kernel, and they dont generally risk going away from
the default. [ In other words, distro developers understand the
'power of defaults' a lot better than kernel developers ... ]
This is also true in the reverse direction: they dont actually mind
the kernel doing a central change of policy, if it's a general step
forward. Distro developers are very practical, and they are a lot
less hardline about the sacred Unix principle of separation of
kernel from policy.
Thirdly: the latency of getting changes to users. A new kernel is
released every 3 months. Distros are released every 6 months. A new
Firefox major version is released about once a year. A new major GCC
is released every three years.
Given the release frequency and given our goal to minimize the
latency of getting improvements to users, which of these projects is
best suited to introduce a new default value? [and no, such changes
are not generally done in minor package updates.]
Ingo
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 21:53 ` Bodo Eggert
@ 2009-03-28 6:51 ` Mike Galbraith
2009-03-28 12:12 ` Theodore Tso
1 sibling, 0 replies; 419+ messages in thread
From: Mike Galbraith @ 2009-03-28 6:51 UTC (permalink / raw)
To: 7eggert
Cc: Theodore Tso, Matthew Garrett, Linus Torvalds, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Fri, 2009-03-27 at 22:53 +0100, Bodo Eggert wrote:
> data=ordered is a sane way of handling data. Otherwise, the millions
> would change their ext3 to data=writeback.
This one of the millions did that quite a while ago. Sanity be damned,
my quality of life improved by doing so.
-Mike
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 21:11 ` Jeremy Fitzhardinge
@ 2009-03-28 7:45 ` Bojan Smojver
2009-03-28 8:43 ` Bojan Smojver
0 siblings, 1 reply; 419+ messages in thread
From: Bojan Smojver @ 2009-03-28 7:45 UTC (permalink / raw)
To: linux-kernel
Jeremy Fitzhardinge <jeremy <at> goop.org> writes:
> This is a fairly narrow view of correct and possible. How can you make
> "cat" fsync? grep? sort? How do they know they're not dealing with
> critical data? Apps in general don't know, because "criticality" is a
> property of the data itself and how its used, not the tools operating on it.
Isn't it possible to compile a program that simply calls open()/fsync()/close()
on a given file name? If yes, then in your scripts, you can do whatever you want
with existing tools on a _scratch_ file, then call your fsync program on that
scratch file and then rename it to the real file. No?
In other words, given that you know that your data is critical, you will write
processed data to another file, while preserving the original, store the new
file safely and then rename it to the original. Just like the apps that know
that their files are critical are supposed to do using the API.
--
Bojan
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 16:02 ` Linus Torvalds
@ 2009-03-28 7:50 ` Mike Galbraith
2009-03-30 22:00 ` Hans-Peter Jansen
1 sibling, 0 replies; 419+ messages in thread
From: Mike Galbraith @ 2009-03-28 7:50 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Geert Uytterhoeven, Hans-Peter Jansen, linux-kernel
On Fri, 2009-03-27 at 09:02 -0700, Linus Torvalds wrote:
>
> On Fri, 27 Mar 2009, Mike Galbraith wrote:
> > >
> > > If you're using the kernel-of-they-day, you're probably using git, and
> > > CONFIG_LOCALVERSION_AUTO=y should be mandatory.
> >
> > I sure hope it never becomes mandatory, I despise that thing. I don't
> > even do -rc tags. .nn is .nn until baked and nn.1 appears.
>
> If you're a git user that changes kernels frequently, then enabling
> CONFIG_LOCALVERSION_AUTO is _really_ convenient when you learn to use it.
>
> This is quite common for me:
>
> gitk v$(uname -r)..
>
> and it works exactly due to CONFIG_LOCALVERSION_AUTO (and because git is
> rather good at figuring out version numbers). It's a great way to say
> "ok, what is in my git tree that I'm not actually running right now".
>
> Another case where CONFIG_LOCALVERSION_AUTO is very useful is when you're
> noticing some new broken behavior, but it took you a while to notice.
> You've rebooted several times since, but you know it worked last Tuesday.
> What do you do?
>
> The thing to do is
>
> grep "Linux version" /var/log/messages*
>
> and figure out what the good version was, and then do
>
> git bisect start
> git bisect good ..that-version..
> git bisect bad v$(uname -r)
>
> and off you go. This is _very_ convenient if you are working with some
> "random git kernel of the day" like I am (and like hopefully others are
> too, in order to get test coverage).
That's why it irritates me. I build/test a lot, and do the occasional
bisection, which makes a mess in /boot and /lib/modules. I use a quilt
stack of git pull diffs as reference/rummage points. Awkward maybe, but
effective (so no need for autoversion), and no mess.
-Mike
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 7:45 ` Bojan Smojver
@ 2009-03-28 8:43 ` Bojan Smojver
2009-03-28 21:55 ` Bojan Smojver
0 siblings, 1 reply; 419+ messages in thread
From: Bojan Smojver @ 2009-03-28 8:43 UTC (permalink / raw)
To: linux-kernel
Bojan Smojver <bojan <at> rexursive.com> writes:
> Isn't it possible to compile a program that simply calls open()/fsync()/close()
> on a given file name?
That was stupid. Ignore me.
--
Bojan
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 3:55 ` Gene Heskett
@ 2009-03-28 11:29 ` Alejandro Riveira Fernández
0 siblings, 0 replies; 419+ messages in thread
From: Alejandro Riveira Fernández @ 2009-03-28 11:29 UTC (permalink / raw)
To: Gene Heskett
Cc: Jeff Garzik, Linus Torvalds, Matthew Garrett, Alan Cox,
Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
El Fri, 27 Mar 2009 23:55:06 -0400
Gene Heskett <gene.heskett@verizon.net> escribió:
>
> I just let FF update itself to 3.0.8 (from mozilla, not fedora) and there is
> no 'toolkit' stuff whatsoever in about:config.
I do not have it either FF 3.0.8 Ubuntu 8.10. it does not appear searching with
sync; not toolkit nor storage...
>
> Is this perchance some extension I don't have installed?
> >
>
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
[not found] ` <ckoYP-2DC-13@gated-at.bofh.it>
@ 2009-03-28 11:53 ` Bodo Eggert
2009-03-29 14:45 ` Pavel Machek
0 siblings, 1 reply; 419+ messages in thread
From: Bodo Eggert @ 2009-03-28 11:53 UTC (permalink / raw)
To: Linus Torvalds, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Fri, 27 Mar 2009, Linus Torvalds wrote:
>> Yes, some editors (vi, emacs) do it, but even there it's configurable.
>
> .. and looking at history, it's even pretty modern. From the vim logs:
>
> Patch 6.2.499
> Problem: When writing a file and halting the system, the file might be lost
> when using a journalling file system.
> Solution: Use fsync() to flush the file data to disk after writing a file.
> (Radim Kolar)
> Files: src/fileio.c
>
> so it looks (assuming those patch numbers mean what they would seem to
> mean) that 'fsync()' in vim is from after 6.2 was released. Some time in
> 2004.
Besides that, it's a fix specific for /journaled/ filesystems. It's easy to see
that the same journal that was supposed to increase filesystem reliability
is CAUSING more unreliable behavior.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 2:38 ` Linus Torvalds
@ 2009-03-28 11:57 ` Andreas T.Auer
0 siblings, 0 replies; 419+ messages in thread
From: Andreas T.Auer @ 2009-03-28 11:57 UTC (permalink / raw)
To: Linus Torvalds
Cc: Mark Lord, Jeff Garzik, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On 28.03.2009 03:38 Linus Torvalds wrote:
> On Fri, 27 Mar 2009, Mark Lord wrote:
>
>> Okay, I'll bite. Exactly which version of FF has that variable?
>> Cuz it ain't in the FF 3.0.8 that I'm running here.
>>
>
> I'd suspect that I mistyped it, but I just cut-and-pasted it from my
email
> to make sure. Maybe you did. What happens if you just write "sync" in the
> Filter: box? Nothing matches?
>
>
No, not with my iceweasel 3.0.7 (Debian/testing).
I couldn't find anything in the Debian patch to the source code, but the
source code contains
toolkit/components/contentprefs/src/nsContentPrefService.js 733-746:
// Turn off disk synchronization checking to reduce disk churn and
speed up
// operations when prefs are changed rapidly (such as when a user
repeatedly
// changes the value of the browser zoom setting for a site).
//
// Note: this could cause database corruption if the OS crashes or
machine
// loses power before the data gets written to disk, but this is
considered
// a reasonable risk for the not-so-critical data stored in this
database.
//
// If you really don't want to take this risk, however, just set the
// toolkit.storage.synchronous pref to 1 (NORMAL synchronization) or 2
// (FULL synchronization), in which case
mozStorageConnection::Initialize
// will use that value, and we won't override it here.
if (!this._prefSvc.prefHasUserValue("toolkit.storage.synchronous"))
dbConnection.executeSimpleSQL("PRAGMA synchronous = OFF");
Probably they preferred the default value "off" so much that they even
dropped the entry in standard configuration.
> Do you see firefox pausing a lot under disk load?
I see iceweasel pausing/blocking a lot when loading stalling webpages,
but that's a different topic.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 21:53 ` Bodo Eggert
2009-03-28 6:51 ` Mike Galbraith
@ 2009-03-28 12:12 ` Theodore Tso
1 sibling, 0 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-28 12:12 UTC (permalink / raw)
To: Bodo Eggert
Cc: Matthew Garrett, Linus Torvalds, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 10:53:26PM +0100, Bodo Eggert wrote:
> data=ordered is a sane way of handling data. Otherwise, the millions
> would change their ext3 to data=writeback.
See the discussion about defaulting to "relatime" (or my preferred,
"noatime") mount option. It's a very sane thing to do, yet most
people don't use anything other than the defaults. And now we're told
even most distro's are hesitant to tweak tuning parameters away from
the default.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 2:49 ` Jeff Garzik
@ 2009-03-28 13:29 ` Stefan Richter
2009-03-28 14:17 ` Jeff Garzik
0 siblings, 1 reply; 419+ messages in thread
From: Stefan Richter @ 2009-03-28 13:29 UTC (permalink / raw)
To: Jeff Garzik
Cc: Mark Lord, Linus Torvalds, Matthew Garrett, Alan Cox,
Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Jeff Garzik wrote:
> Mark Lord wrote:
[store browser session]
>> fsync() isn't going to affect that one way or another
>> unless the entire kernel freezes and dies.
>
> ...which is one of the three common crash scenarios listed (and
> experienced in the field).
To get work done which one really cares about, one can always choose a
system which does not crash frequently. Those who run unstable drivers
for thrills surely do it on boxes on which nothing important is being
done, one would think.
--
Stefan Richter
-=====-=-=== -=-= -==-=
http://arcgraph.de/sr/
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 13:29 ` Stefan Richter
@ 2009-03-28 14:17 ` Jeff Garzik
2009-03-28 14:35 ` Stefan Richter
0 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-03-28 14:17 UTC (permalink / raw)
To: Stefan Richter
Cc: Mark Lord, Linus Torvalds, Matthew Garrett, Alan Cox,
Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Stefan Richter wrote:
> Jeff Garzik wrote:
>> Mark Lord wrote:
> [store browser session]
>>> fsync() isn't going to affect that one way or another
>>> unless the entire kernel freezes and dies.
>> ...which is one of the three common crash scenarios listed (and
>> experienced in the field).
>
> To get work done which one really cares about, one can always choose a
> system which does not crash frequently. Those who run unstable drivers
> for thrills surely do it on boxes on which nothing important is being
> done, one would think.
Once software is perfect, there is definitely a lot of useless crash
protection code to remove.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 14:17 ` Jeff Garzik
@ 2009-03-28 14:35 ` Stefan Richter
2009-03-28 15:17 ` Mark Lord
2009-03-28 16:25 ` Alex Goebel
0 siblings, 2 replies; 419+ messages in thread
From: Stefan Richter @ 2009-03-28 14:35 UTC (permalink / raw)
To: Jeff Garzik
Cc: Mark Lord, Linus Torvalds, Matthew Garrett, Alan Cox,
Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Jeff Garzik wrote:
> Stefan Richter wrote:
>> Jeff Garzik wrote:
>>> Mark Lord wrote:
>> [store browser session]
>>>> fsync() isn't going to affect that one way or another
>>>> unless the entire kernel freezes and dies.
>>> ...which is one of the three common crash scenarios listed (and
>>> experienced in the field).
>>
>> To get work done which one really cares about, one can always choose a
>> system which does not crash frequently. Those who run unstable drivers
>> for thrills surely do it on boxes on which nothing important is being
>> done, one would think.
>
> Once software is perfect, there is definitely a lot of useless crash
> protection code to remove.
Well, for the time being, why not base considerations for performance,
interactivity, energy consumption, graceful restoration of application
state etc. on the assumption that kernel crashes are suitably rare? (At
least on systems where data loss would be of concern.)
--
Stefan Richter
-=====-=-=== -=-= -==-=
http://arcgraph.de/sr/
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 14:35 ` Stefan Richter
@ 2009-03-28 15:17 ` Mark Lord
2009-03-28 16:08 ` Stefan Richter
` (2 more replies)
2009-03-28 16:25 ` Alex Goebel
1 sibling, 3 replies; 419+ messages in thread
From: Mark Lord @ 2009-03-28 15:17 UTC (permalink / raw)
To: Stefan Richter
Cc: Jeff Garzik, Linus Torvalds, Matthew Garrett, Alan Cox,
Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
The better solution seems to be the rather obvious one:
the filesystem should commit data to disk before altering metadata.
Much easier and more reliable to centralize it there, rather than
rely (falsely) upon thousands of programs each performing numerous
performance-killing fsync's.
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 15:17 ` Mark Lord
@ 2009-03-28 16:08 ` Stefan Richter
2009-03-28 16:32 ` Linus Torvalds
2009-03-29 1:18 ` Jeff Garzik
2009-03-29 23:14 ` Dave Chinner
2 siblings, 1 reply; 419+ messages in thread
From: Stefan Richter @ 2009-03-28 16:08 UTC (permalink / raw)
To: Mark Lord
Cc: Jeff Garzik, Linus Torvalds, Matthew Garrett, Alan Cox,
Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
I wrote:
>> Well, for the time being, why not base considerations for performance,
>> interactivity, energy consumption, graceful restoration of application
>> state etc. on the assumption that kernel crashes are suitably rare? (At
>> least on systems where data loss would be of concern.)
In more general terms: If overall system reliability is known
insufficient, attempt to increase reliability of lower layers first. If
this approach alone would be too costly in implementation or use, then
also look at how to increase reliability of upper layers too.
(Example: Running a suitably reliable kernel on a desktop for
"mission-critical web browsing" is possible at low cost, at least if
early decisions, e.g. for well-supported video hardware, went right.)
Mark Lord wrote:
> The better solution seems to be the rather obvious one:
>
> the filesystem should commit data to disk before altering metadata.
>
> Much easier and more reliable to centralize it there, rather than
> rely (falsely) upon thousands of programs each performing numerous
> performance-killing fsync's.
>
> Cheers
Sure. I forgot: Not only the frequency of I/O disruption (e.g. due to
kernel crash) factors into system reliability; the particular impact of
such disruption is a factor too. (How hard is recovery? Will at least
old data remain available? ...)
--
Stefan Richter
-=====-=-=== -=-= -==-=
http://arcgraph.de/sr/
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 14:35 ` Stefan Richter
2009-03-28 15:17 ` Mark Lord
@ 2009-03-28 16:25 ` Alex Goebel
2009-03-28 21:12 ` Hua Zhong
1 sibling, 1 reply; 419+ messages in thread
From: Alex Goebel @ 2009-03-28 16:25 UTC (permalink / raw)
To: Stefan Richter
Cc: Jeff Garzik, Mark Lord, Linus Torvalds, Matthew Garrett, Alan Cox,
Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On 3/28/09, Stefan Richter <stefanr@s5r6.in-berlin.de> wrote:
> Well, for the time being, why not base considerations for performance,
> interactivity, energy consumption, graceful restoration of application
> state etc. on the assumption that kernel crashes are suitably rare? (At
> least on systems where data loss would be of concern.)
Absolutely! That's what I thought all the time when following this
(meanwhile quite grotesque) discussion. Even for ordinary
home/office/laptop/desktop users (!=kernel developers), kernel crashes
are simply not a realistic scenario any more to optimize anything for
(which is due to the good work you guys are doing in making/keeping
the kernel stable).
Alex
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 16:08 ` Stefan Richter
@ 2009-03-28 16:32 ` Linus Torvalds
2009-03-28 17:22 ` David Hagood
0 siblings, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-28 16:32 UTC (permalink / raw)
To: Stefan Richter
Cc: Mark Lord, Jeff Garzik, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Sat, 28 Mar 2009, Stefan Richter wrote:
>
> Sure. I forgot: Not only the frequency of I/O disruption (e.g. due to
> kernel crash) factors into system reliability; the particular impact of
> such disruption is a factor too. (How hard is recovery? Will at least
> old data remain available? ...)
I suspect (at least from my own anecdotal evidence) that a lot of system
crashes are basically X hanging. If you use the system as a desktop, at
that point it's basically dead - and the difference between an X hang and
a kernel crash is almost totally invisible to users.
Us kernel people may walk over to another machine and ping or ssh in to
see, but ask yourself how many normal users would do that - especially
since DOS and Windows has taught people that they need to power-cycle
(and, in all honesty, especially since there usually is very little else
you can do even under Linux if X gets confused).
And then part of the problem ends up being that while in theory the kernel
can continue to write out dirty stuff, in practice people press the power
button long before it can do so. The 30 second thing is really too long.
And don't tell me about sysrq. I know about sysrq. It's very convenient
for kernel people, but it's not like most people use it.
But I absolutely hear you - people seem to think that "correctness" trumps
all, but in reality, quite often users will be happier with a faster
system - even if they know that they may lose data. They may curse
themselves (or, more likely, the system) when they _do_ lose data, but
they'll make the same choice all over two months later.
Which is why I think that if the filesystem people think that the
"data=ordered" mode is too damn fundamentally hard to make fast in the
presense of "fsync", and all sane people (definition: me) think that the
30-second window for either "data=writeback" or the ext4 data writeout is
too fragile, then we should look into something in between.
Because, in the end, you do have to balance performance vs safety when it
comes to disk writes. You absolutely have to delay things for performance,
but it is always going to involve the risk of losing data that you do care
about, but that you aren't willing (or able - random apps and tons of
scripting comes to mind) to do a fsync over.
Which is why I, personally, would probably be perfectly happy with a
"async ordered" mode, for example. At least START the data writeback when
writing back metadata, but don't necessarily wait for it (and don't
necessarily make it go first). Turn the "30 second window of death" into
something much harder to hit.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 16:32 ` Linus Torvalds
@ 2009-03-28 17:22 ` David Hagood
0 siblings, 0 replies; 419+ messages in thread
From: David Hagood @ 2009-03-28 17:22 UTC (permalink / raw)
To: Linus Torvalds
Cc: Stefan Richter, Mark Lord, Jeff Garzik, Matthew Garrett, Alan Cox,
Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
What if you added another phase in the journaling, after the data is
written to the kernel, but before block allocation.
As I understand, the current scenario goes like this:
1) A program writes a bunch of data to a file.
2) The kernel holds the data in buffer cache, delaying allocation.
3) Kernel updates file metadata in journal.
4) Some time later, kernel allocates blocks and writes data.
If things go boom between 3 and 4, you have the files in an inconsistent
state. If the program does an fasync(), then the kernel has to write ALL
data out to be consistent.
What if you could do this:
1) A program writes a bunch of data to a file.
2) The kernel holds the data in buffer cache, delaying allocation.
3) The kernel writes a record to the journal saying "This data goes with
this file, but I've not allocated any blocks for it yet."
4) Kernel updates file metadata in journal.
5) Sometime later, kernel allocates blocks for data, and notes the
allocation in the journal.
6) Sometime later still the kernel commits the data to disk and update
the journal.
It seems to me this would be a not-unreasonable way to have both the
advantages of delayed allocation AND get the data onto disk quickly.
If the user wants to have speed over safety, you could skip steps 3 and
5 (data=ordered). You want safety, you force everything through steps 3
and 5 (data=journaled). You want a middle ground, you only do steps 3
and 5 for files where the program has done an fasync() (data=ordered +
program calls fasync()).
And if you want both speed and safety, you get a big battery-backed up
RAM disk as the journal device and journal everything.
^ permalink raw reply [flat|nested] 419+ messages in thread
* RE: Linux 2.6.29
2009-03-28 16:25 ` Alex Goebel
@ 2009-03-28 21:12 ` Hua Zhong
2009-03-29 8:22 ` Stefan Richter
0 siblings, 1 reply; 419+ messages in thread
From: Hua Zhong @ 2009-03-28 21:12 UTC (permalink / raw)
To: 'Alex Goebel', 'Stefan Richter'
Cc: 'Jeff Garzik', 'Mark Lord',
'Linus Torvalds', 'Matthew Garrett',
'Alan Cox', 'Theodore Tso',
'Andrew Morton', 'David Rees',
'Jesper Krogh', 'Linux Kernel Mailing List'
Good point. We should throw away all the journaling junk and just go back
to ext2. Why pay the extra cost for something we shouldn't optimize for?
It's not like the kernel every crashes.
> Absolutely! That's what I thought all the time when following this
> (meanwhile quite grotesque) discussion. Even for ordinary
> home/office/laptop/desktop users (!=kernel developers), kernel crashes
> are simply not a realistic scenario any more to optimize anything for
> (which is due to the good work you guys are doing in making/keeping
> the kernel stable).
>
> Alex
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 8:43 ` Bojan Smojver
@ 2009-03-28 21:55 ` Bojan Smojver
2009-03-31 21:51 ` Jeremy Fitzhardinge
0 siblings, 1 reply; 419+ messages in thread
From: Bojan Smojver @ 2009-03-28 21:55 UTC (permalink / raw)
To: linux-kernel
Bojan Smojver <bojan <at> rexursive.com> writes:
> That was stupid. Ignore me.
And yet, FreeBSD seems to have a command just like that:
http://www.freebsd.org/cgi/man.cgi?query=fsync&sektion=1&manpath=FreeBSD+7.1-RELEASE
--
Bojan
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 3:59 ` Linus Torvalds
@ 2009-03-28 23:52 ` david
0 siblings, 0 replies; 419+ messages in thread
From: david @ 2009-03-28 23:52 UTC (permalink / raw)
To: Linus Torvalds
Cc: Andrew Morton, Theodore Tso, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Thu, 26 Mar 2009, Linus Torvalds wrote:
> On Thu, 26 Mar 2009, Linus Torvalds wrote:
>>
>
> The only excuse _ever_ for user-land tweaking is if you do something
> really odd. Say that you want to get the absolutely best OLTP numbers you
> can possibly get - with no regards for _any_ other workload. In that case,
> you want to tweak the numbers for that exact load, and the exact machine
> that runs it - and the result is going to be a totally worthless number
> (since it's just benchmarketing and doesn't actually reflect any real
> world scenario), but hey, that's what benchmarketing is all about.
>
> Or say that you really are a very embedded environment, with a very
> specific load. A router, a cellphone, a base station, whatever - you do
> one thing, and you're not trying to be a general purpose machine. Then you
> can tweak for that load. But not otherwise.
>
> If you don't have any magical odd special workloads, you shouldn't need to
> tweak a single kernel knob. Because if you need to, then the kernel is
> doing something wrong to begin with.
while I agree with most of what you say, I'll point out that many
enterprise servers really do care about one particular workload to the
exclusion of everything else. if you can get another 10% performance by
tuning your box for an OLTP workload and make your cluster 9 boxes instead
of 10 it's well worth it (it ends up being better response time for users,
less hardware, and avoiding software license costs most of the time"
this is somewhere between benchmarking and embedded, but it is a valid
case.
most users (even most database users) don't need to go after that last
little bit of performance, the defalts should be good enough for most
users, no matter what workload they are running.
David Lang
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 1:19 ` Jeff Garzik
2009-03-28 1:30 ` David Miller
2009-03-28 2:19 ` Mark Lord
@ 2009-03-29 0:33 ` david
2009-03-29 1:24 ` Jeff Garzik
2009-03-31 15:01 ` Thierry Vignaud
3 siblings, 1 reply; 419+ messages in thread
From: david @ 2009-03-29 0:33 UTC (permalink / raw)
To: Jeff Garzik
Cc: Linus Torvalds, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Fri, 27 Mar 2009, Jeff Garzik wrote:
> Linus Torvalds wrote:
>> Of course, your browsing history database is an excellent example of
>> something you should _not_ care about that much, and where performance is a
>> lot more important than "ooh, if the machine goes down suddenly, I need to
>> be 100% up-to-date". Using fsync on that thing was just stupid, even
>
> If you are doing a ton of web-based work with a bunch of tabs or windows
> open, you really like the post-crash restoration methods that Firefox now
> employs. Some users actually do want to checkpoint/restore their web work,
> regardless of whether it was the browser, the window system or the OS that
> crashed.
>
> You may not care about that, but others do care about the integrity of the
> database that stores the active FF state (Web URLs currently open), a
> database which necessarily changes for each URL visited.
as one of those users with many windows tabs open (a couple hundred
normally), even the curent firefox behavior isn't good enough because it
doesn't let me _not_ load everything back in when a link I go to triggers
a crash in firefox every time it loads.
so what I do is do a git commit in cron every min of the history file. git
can do the fsync as needed to get it to disk reasonably without firefox
needing to do it _for_every_click_
like laptop mode, you need to be able to define "I'm willing to loose this
much activity in the name of performance/power"
ted's suggestion (in his blog) to tweak fsync to 'misbehave' when laptop
mode is enabled (only pushing data out to disk when the disk is awake
anyway, or the time has hit) would really work well for most users.
servers (where you have the data integrity fsync useage) don't use laptop
mode. desktops could use 'laptop mode' with a delay of 0.5 or 1 second and
get prety close the the guarentee that users want without a huge
performance hit.
David Lang
>
>
> As an aside, I find it highly ironic that Firefox gained useful session
> management around the same time that some GNOME jarhead no-op'd GNOME session
> management[1] in X.
>
> Jeff
>
>
>
> [1] http://np237.livejournal.com/22014.html
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 15:17 ` Mark Lord
2009-03-28 16:08 ` Stefan Richter
@ 2009-03-29 1:18 ` Jeff Garzik
2009-03-31 18:45 ` Jörn Engel
2009-03-29 23:14 ` Dave Chinner
2 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-03-29 1:18 UTC (permalink / raw)
To: Mark Lord
Cc: Stefan Richter, Linus Torvalds, Matthew Garrett, Alan Cox,
Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Mark Lord wrote:
> The better solution seems to be the rather obvious one:
>
> the filesystem should commit data to disk before altering metadata.
>
> Much easier and more reliable to centralize it there, rather than
> rely (falsely) upon thousands of programs each performing numerous
> performance-killing fsync's.
Firstly, the FS data/metadata write-out order says nothing about when
the write-out is started by the OS. It only implies consistency in the
face of a crash during write-out. Hooray for BSD soft-updates.
If the write-out is started immediately during or after write(2),
congratulations, you are on your way to reinventing synchronous writes.
If the write-out does not start immediately, then you have a
many-seconds window for data loss. And it should be self-evident that
userland application writers will have some situations where design
requirements dictate minimizing or eliminating that window.
Secondly, this email sub-thread is not talking about thousands of
programs, it is talking about Firefox behavior. Firefox is a multi-OS
portable application that has a design requirement that user data must
be protected against crashes. (same concept as your word processor's
auto-save feature)
The author of such a portable application must ensure their app saves
data against Windows Vista kernel crashes, HPUX kernel crashes, OS X
window system crashes, X11 window system crashes, application crashes, etc.
Can a portable app really rely on what Linux kernel hackers think the
underlying filesystem _should_ do?
No, it is either (a) not going to care at all, or (b) uses fsync(2) or
FlushFileBuffers() because if guarantees provided across the OS
spectrum, in light of the myriad OS filesystem caching, flushing, and
ordering algorithms.
Was the BSD soft-updates idea of FS data-before-metadata a good one?
Yes. Obviously.
It is the cornerstone of every SANE journalling-esque database or
filesystem out there -- don't leave a window where your metadata is
inconsistent. "Duh" :)
But that says nothing about when a userland app's design requirements
include ordered writes+flushes of its own application data. That is the
common case when a userland app like Firefox uses a transactional
database such as sqlite or db4.
Thus it is the height of silliness to think that FS data/metadata
write-out order permits elimination of fsync(2) for the class of
application that must care about ordered writes/flushes of its own
application data.
That upstream sqlite replaced fsync(2) with fdatasync(2) makes it
obvious that FS data/metadata write-out order is irrelevant to Firefox.
The issue with transactional databases is more simply a design tradeoff
-- level of fsync punishment versus performance etc. Tweaking the OS
filesystem doesn't help at all with those design choices.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-29 0:33 ` david
@ 2009-03-29 1:24 ` Jeff Garzik
2009-03-29 3:43 ` Theodore Tso
0 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-03-29 1:24 UTC (permalink / raw)
To: david
Cc: Linus Torvalds, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
david@lang.hm wrote:
> ted's suggestion (in his blog) to tweak fsync to 'misbehave' when laptop
> mode is enabled (only pushing data out to disk when the disk is awake
> anyway, or the time has hit) would really work well for most users.
> servers (where you have the data integrity fsync useage) don't use
> laptop mode. desktops could use 'laptop mode' with a delay of 0.5 or 1
> second and get prety close the the guarentee that users want without a
> huge performance hit.
The existential struggle is overall amusing:
Application writers start using userland transactional databases for
crash recovery and consistency, and in response, OS writers work to
undercut the consistency guarantees currently provided by the OS.
More seriously, if we get sqlite, db4 and a few others behaving sanely
WRT fsync, you cover a wide swath of apps all at once.
I absolutely agree that db4, sqlite and friends need to be smarter in
the case of laptop mode or overall power saving.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-29 1:24 ` Jeff Garzik
@ 2009-03-29 3:43 ` Theodore Tso
2009-03-29 4:53 ` Jeff Garzik
0 siblings, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-03-29 3:43 UTC (permalink / raw)
To: Jeff Garzik
Cc: david, Linus Torvalds, Matthew Garrett, Alan Cox, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Sat, Mar 28, 2009 at 09:24:59PM -0400, Jeff Garzik wrote:
>> ted's suggestion (in his blog) to tweak fsync to 'misbehave' when
>> laptop mode is enabled (only pushing data out to disk when the disk is
>> awake anyway, or the time has hit) would really work well for most
>> users. servers (where you have the data integrity fsync useage) don't
>> use laptop mode. desktops could use 'laptop mode' with a delay of 0.5
>> or 1 second and get prety close the the guarentee that users want
>> without a huge performance hit.
>
> The existential struggle is overall amusing:
>
> Application writers start using userland transactional databases for
> crash recovery and consistency, and in response, OS writers work to
> undercut the consistency guarantees currently provided by the OS.
Actually, it makes a lot of sense, if you think about it in this way.
The requirement is this; by default, data which is critical shouldn't
be lost. (Whether this should be done by the filesystem performing
magic, or the application/database programmer being careful about
using fsync --- and whether we should treat all files as critical and
to hell with performance, or only those which the application has
designated as precious or nonprecious --- there is some dispute.)
However, the system administrator should be able to say, "I want
laptop mode functionality", and with the turn of a single dial, be
able to say, "In order to save batteries, I'm OK with losing up to X
seconds/minutes worth of work." I would envision a control panel GUI
where there is one checkbox, "enable laptop mode", and another
checkbox, "enable laptop mode only when on battery" (which is greyed
out unless the first is checkbox is enabled), and then a slidebar
which allows the user to set how many seconds and/or minutes the user
is willing to lose if the system crashes.
At that point, it's up to the user. Maybe the defaults should be
something like 15 seconds; maybe the defaults should be 5 seconds.
Maybe the defaults should be automatically set to different values by
different distributions, depending on whether said distro is willing
to use badly unstable proprietary bindary video drivers that crash if
you look at them funny.
The advantage of such a scheme is that there's a single knob for the
user to control, instead one for each application. And fundamentally,
it should be OK for a user of the desktop and/or the system
administrator to make this tradeoff. That's where the choice belongs;
not to the application writer, and not to the filesystem maintainer,
or OS programmers in general.
If I have an Lenovo X61s which is rock solid stable, with Intel video
drivers, I might be willing to risk lose up to 10 minutes of work,
secure in the knowledge it's highly unlikely to happen. If I'm an
Ubuntu user with so super-unstable proprietary video driver, maybe I'd
be more comfortable with this being 5 or 10 seconds. But if we leave
it up to the user, and they have an easy-to-use control panel that
controls it, the user can decide for themself where they want to trade
off performance, battery life, and potential window for data loss.
So having some mode where we can suspend all writes to the disk for up
to a user-defined limit --- and then once the disk wakes up, for
reading or for writing, we flush out all dirty data --- makes a lot of
sense. Laptop mode does most of this already, except that it doesn't
intercept fsync() requests. And as long as the user has given
permission to the operating system to defer fsync() requests by up to
some user-specified time limit, IMHO that's completely fair game.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-29 3:43 ` Theodore Tso
@ 2009-03-29 4:53 ` Jeff Garzik
0 siblings, 0 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-03-29 4:53 UTC (permalink / raw)
To: Theodore Tso, Matthew Garrett, Alan Cox
Cc: david, Linus Torvalds, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Theodore Tso wrote:
> So having some mode where we can suspend all writes to the disk for up
> to a user-defined limit --- and then once the disk wakes up, for
> reading or for writing, we flush out all dirty data --- makes a lot of
> sense. Laptop mode does most of this already, except that it doesn't
> intercept fsync() requests. And as long as the user has given
> permission to the operating system to defer fsync() requests by up to
> some user-specified time limit, IMHO that's completely fair game.
Overall I agree, but I would rewrite that as: it's fair game as long as
the OS doesn't undercut the deliberate write ordering performed by the
userland application.
When the "laptop mode fsync plug" is uncorked, writes should not be
merged across an fsync(2) barrier; otherwise it becomes impossible to
build transactional databases with any consistency guarantees at all.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 21:12 ` Hua Zhong
@ 2009-03-29 8:22 ` Stefan Richter
0 siblings, 0 replies; 419+ messages in thread
From: Stefan Richter @ 2009-03-29 8:22 UTC (permalink / raw)
To: Hua Zhong
Cc: 'Alex Goebel', 'Jeff Garzik', 'Mark Lord',
'Linus Torvalds', 'Matthew Garrett',
'Alan Cox', 'Theodore Tso',
'Andrew Morton', 'David Rees',
'Jesper Krogh', 'Linux Kernel Mailing List'
Hua Zhong wrote:
> Good point. We should throw away all the journaling junk and just go back
> to ext2. Why pay the extra cost for something we shouldn't optimize for?
> It's not like the kernel every crashes.
The previous two posts were about assumptions at the level of
application software, not at the kernel level.
--
Stefan Richter
-=====-=-=== -=-= -==-=
http://arcgraph.de/sr/
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 20:38 ` Jeff Garzik
2009-03-28 0:14 ` Alan Cox
@ 2009-03-29 8:25 ` Christoph Hellwig
1 sibling, 0 replies; 419+ messages in thread
From: Christoph Hellwig @ 2009-03-29 8:25 UTC (permalink / raw)
To: Jeff Garzik
Cc: Christoph Hellwig, Theodore Tso, Jens Axboe, Linus Torvalds,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton,
Peter Zijlstra, Nick Piggin, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Fri, Mar 27, 2009 at 04:38:35PM -0400, Jeff Garzik wrote:
> At the very least, IMO the block layer should be able to notice when
> barriers need not be translated into cache flushes. Most notably when
> wb cache is disabled on the drive, something easy to auto-detect, but
> probably a manual switch also, for people with enterprise battery-backed
> storage and such.
Yeah, that's why I suggested to have the tuning knob in the block layer
and not in the fses when this came up last time.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 19:00 ` Alan Cox
@ 2009-03-29 9:15 ` Xavier Bestel
2009-03-29 20:16 ` Alan Cox
0 siblings, 1 reply; 419+ messages in thread
From: Xavier Bestel @ 2009-03-29 9:15 UTC (permalink / raw)
To: Alan Cox
Cc: Linus Torvalds, Matthew Garrett, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Le vendredi 27 mars 2009 à 19:00 +0000, Alan Cox a écrit :
> Actually "pure sh*t" is most of the software currently written. The more
> code I read the happier I get that the lawmakers are finally sick of it
> and going to make damned sure software is subject to liability law. Boy
> will that improve things.
Alan, you're getting old.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 11:53 ` Bodo Eggert
@ 2009-03-29 14:45 ` Pavel Machek
2009-03-29 15:47 ` Linus Torvalds
2009-03-30 14:22 ` Morten P.D. Stevens
0 siblings, 2 replies; 419+ messages in thread
From: Pavel Machek @ 2009-03-29 14:45 UTC (permalink / raw)
To: Bodo Eggert
Cc: Linus Torvalds, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Sat 2009-03-28 12:53:34, Bodo Eggert wrote:
> Linus Torvalds <torvalds@linux-foundation.org> wrote:
> > On Fri, 27 Mar 2009, Linus Torvalds wrote:
>
> >> Yes, some editors (vi, emacs) do it, but even there it's configurable.
> >
> > .. and looking at history, it's even pretty modern. From the vim logs:
> >
> > Patch 6.2.499
> > Problem: When writing a file and halting the system, the file might be lost
> > when using a journalling file system.
> > Solution: Use fsync() to flush the file data to disk after writing a file.
> > (Radim Kolar)
> > Files: src/fileio.c
> >
> > so it looks (assuming those patch numbers mean what they would seem to
> > mean) that 'fsync()' in vim is from after 6.2 was released. Some time in
> > 2004.
>
> Besides that, it's a fix specific for /journaled/ filesystems. It's easy to see
> that the same journal that was supposed to increase filesystem reliability
> is CAUSING more unreliable behavior.
Journaling is _not_ supposed to increase filesystem reliability.
It improves fsck time. That's it.
Actually ext2 is more reliable in ext3 -- fsck tells you
about errors on parts of disk that are not normallly used.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-29 14:45 ` Pavel Machek
@ 2009-03-29 15:47 ` Linus Torvalds
2009-03-29 19:15 ` Pavel Machek
2009-03-30 14:22 ` Morten P.D. Stevens
1 sibling, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-29 15:47 UTC (permalink / raw)
To: Pavel Machek
Cc: Bodo Eggert, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Sun, 29 Mar 2009, Pavel Machek wrote:
>
> Actually ext2 is more reliable in ext3 -- fsck tells you
> about errors on parts of disk that are not normallly used.
No. ext2 is not more reliable than ext3.
ext2 gets way more errors (that whole 5s + 30s thing), and has no
"data=ordered" mode to even ask for more reliable behavior.
And even if compared to "data=writeback" (which approximates the ext2
writeout ordering), and assuming that the errors are comparable, at least
ext3 ends up automatically fixing up a lot of the errors that cause
inabilities to boot etc.
So don't be silly. ext3 is way more reliable than ext2. In fact, ext3 with
"data=ordered" is rather hard to screw up (but not impossible), and the
only real complaint in this thread is just the fsync performance issue,
not the reliability.
So don't go overboard. Ext3 works perfectly well, and has just that one
(admittedly fairly annoying) major issue - and one that wasn't really
historically even a big deal. I mean, nobody really did fsync() all that
much, and traditionally people cared more about throughput than latency
(or at least that was what all the benchmarks are about, which sadly seems
to still continue).
I do agree that "data=writeback" is broken, but ext2 was equally broken.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-29 15:47 ` Linus Torvalds
@ 2009-03-29 19:15 ` Pavel Machek
0 siblings, 0 replies; 419+ messages in thread
From: Pavel Machek @ 2009-03-29 19:15 UTC (permalink / raw)
To: Linus Torvalds
Cc: Bodo Eggert, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Hi!
> > Actually ext2 is more reliable in ext3 -- fsck tells you
> > about errors on parts of disk that are not normallly used.
>
> No. ext2 is not more reliable than ext3.
>
> ext2 gets way more errors (that whole 5s + 30s thing), and has no
> "data=ordered" mode to even ask for more reliable behavior.
>
> And even if compared to "data=writeback" (which approximates the ext2
> writeout ordering), and assuming that the errors are comparable, at least
> ext3 ends up automatically fixing up a lot of the errors that cause
> inabilities to boot etc.
>
> So don't be silly. ext3 is way more reliable than ext2. In fact, ext3 with
> "data=ordered" is rather hard to screw up (but not impossible), and
> the
Well, ext3 is pretty good, and if you have reliable hardware&kernel,
so all your unclean reboots are due to powerfails, it is better.
If you have flakey ide cable, bad disk driver, non-intel flash storage
or memory with bit flips, you are better with ext2 -- it catches
problems faster. Periodic disk check makes ext3 pretty good,
unfortunately at least one distro silently disables. Pavel
--
(english) http://www.livejournal.com/~pavelmachek (cesky, pictures)
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-29 9:15 ` Xavier Bestel
@ 2009-03-29 20:16 ` Alan Cox
2009-03-29 21:07 ` Linus Torvalds
0 siblings, 1 reply; 419+ messages in thread
From: Alan Cox @ 2009-03-29 20:16 UTC (permalink / raw)
To: Xavier Bestel
Cc: Linus Torvalds, Matthew Garrett, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Sun, 29 Mar 2009 11:15:46 +0200
Xavier Bestel <xavier.bestel@free.fr> wrote:
> Le vendredi 27 mars 2009 à 19:00 +0000, Alan Cox a écrit :
> > Actually "pure sh*t" is most of the software currently written. The more
> > code I read the happier I get that the lawmakers are finally sick of it
> > and going to make damned sure software is subject to liability law. Boy
> > will that improve things.
>
> Alan, you're getting old.
Yep and twenty years on software hasn´t improved
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-29 20:16 ` Alan Cox
@ 2009-03-29 21:07 ` Linus Torvalds
2009-03-30 19:37 ` Jeremy Fitzhardinge
0 siblings, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-29 21:07 UTC (permalink / raw)
To: Alan Cox
Cc: Xavier Bestel, Matthew Garrett, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Sun, 29 Mar 2009, Alan Cox wrote:
>
> Yep and twenty years on software hasn´t improved
I really think you're gilding the edges of those old memories. The
software 20 years ago wasn't that great. I'd say it was on the whole a
whole lot crappier than it is today.
It's just that we have much higher expectations, and our problem sizes
have grown a _lot_ faster than rotating disk latencies have improved.
People didn't worry about having a hundred megs of dirty data and doing an
'fsync' twenty years ago. Even on big hardware (if you _had_ a hundred
megs of dirty data you didn't worry about latencies of a few seconds),
never mind in the Linux world.
This particular problem really largely boils down to "average memory
capacity has expanded a _lot_ more than harddisk speeds have gone up".
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 15:17 ` Mark Lord
2009-03-28 16:08 ` Stefan Richter
2009-03-29 1:18 ` Jeff Garzik
@ 2009-03-29 23:14 ` Dave Chinner
2009-03-30 0:39 ` Theodore Tso
` (2 more replies)
2 siblings, 3 replies; 419+ messages in thread
From: Dave Chinner @ 2009-03-29 23:14 UTC (permalink / raw)
To: Mark Lord
Cc: Stefan Richter, Jeff Garzik, Linus Torvalds, Matthew Garrett,
Alan Cox, Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Sat, Mar 28, 2009 at 11:17:08AM -0400, Mark Lord wrote:
> The better solution seems to be the rather obvious one:
>
> the filesystem should commit data to disk before altering metadata.
Generalities are bad. For example:
write();
unlink();
<do more stuff>
close();
This is a clear case where you want metadata changed before data is
committed to disk. In many cases, you don't even want the data to
hit the disk here.
Similarly, rsync does the magic open,write,close,rename sequence
without an fsync before the rename. And it doesn't need the fsync,
either. The proposed implicit fsync on rename will kill rsync
performance, and I think that may make many people unhappy....
> Much easier and more reliable to centralize it there, rather than
> rely (falsely) upon thousands of programs each performing numerous
> performance-killing fsync's.
The filesystem should batch the fsyncs efficiently. if the
filesystem doesn't handle fsync efficiently, then it is a bad
filesystem choice for that workload....
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-29 23:14 ` Dave Chinner
@ 2009-03-30 0:39 ` Theodore Tso
2009-03-30 1:29 ` Trenton Adams
` (2 more replies)
2009-03-30 3:01 ` Mark Lord
2009-03-30 12:55 ` Chris Mason
2 siblings, 3 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-30 0:39 UTC (permalink / raw)
To: Mark Lord, Stefan Richter, Jeff Garzik, Linus Torvalds,
Matthew Garrett, Alan Cox, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Mon, Mar 30, 2009 at 10:14:51AM +1100, Dave Chinner wrote:
> This is a clear case where you want metadata changed before data is
> committed to disk. In many cases, you don't even want the data to
> hit the disk here.
>
> Similarly, rsync does the magic open,write,close,rename sequence
> without an fsync before the rename. And it doesn't need the fsync,
> either. The proposed implicit fsync on rename will kill rsync
> performance, and I think that may make many people unhappy....
I agree. But unfortunately, I think we're going to be bullied into
data=ordered semantics for the open/write/close/rename sequence, at
least as the default. Ext4 has a noauto_da_alloc mount option (which
Eric Sandeen suggested we rename to "no_pony" :-), for people who
mostly run sane applications that use fsync().
For people who care about rsync's performance and who assume that they
can always restart rsync if the system crashes while the rsync is
running could, rsync could add Yet Another Rsync Option :-) which
explicitly unlinks the target file before the rename, which would
disable the implicit fsync().
> > Much easier and more reliable to centralize it there, rather than
> > rely (falsely) upon thousands of programs each performing numerous
> > performance-killing fsync's.
>
> The filesystem should batch the fsyncs efficiently. if the
> filesystem doesn't handle fsync efficiently, then it is a bad
> filesystem choice for that workload....
All I can do is apologize to all other filesystem developers profusely
for ext3's data=ordered semantics; at this point, I very much regret
that we made data=ordered the default for ext3. But the application
writers vastly outnumber us, and realistically we're not going to be
able to easily roll back eight years of application writers being
trained that fsync() is not necessary, and actually is detrimental for
ext3.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 0:39 ` Theodore Tso
@ 2009-03-30 1:29 ` Trenton Adams
2009-03-30 3:28 ` Theodore Tso
2009-03-30 6:31 ` Dave Chinner
2009-03-30 7:13 ` Andreas T.Auer
2 siblings, 1 reply; 419+ messages in thread
From: Trenton Adams @ 2009-03-30 1:29 UTC (permalink / raw)
To: Theodore Tso, Mark Lord, Stefan Richter, Jeff Garzik,
Linus Torvalds, Matthew Garrett, Alan Cox, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Sun, Mar 29, 2009 at 6:39 PM, Theodore Tso <tytso@mit.edu> wrote:
> On Mon, Mar 30, 2009 at 10:14:51AM +1100, Dave Chinner wrote:
> All I can do is apologize to all other filesystem developers profusely
> for ext3's data=ordered semantics; at this point, I very much regret
> that we made data=ordered the default for ext3. But the application
> writers vastly outnumber us, and realistically we're not going to be
> able to easily roll back eight years of application writers being
> trained that fsync() is not necessary, and actually is detrimental for
> ext3.
I am slightly confused by the "data=ordered" thing that everyone is
mentioning of late. In theory, it made sense to me before I tried it.
I switched to mounting my ext3 as ext4, and I'm still seeing
seriously delayed fsyncs. Theodore, I used a modified version of your
fsync-tester.c to bench 1M writes, while doing a dd, and I'm still
getting *almost* as bad of "fsync" performance as I was on ext3. On
ext3, the fsync would usually not finish until the dd was complete.
I am currently using Linus' tree at v2.6.29, in x86_64 mode. If you
need more info, let me know.
tdamac ~ # mount
/dev/mapper/s-sys on / type ext4 (rw)
dd if=/dev/zero of=/tmp/bigfile bs=1M count=2000
Your modified fsync test renamed to fs-bench...
tdamac kernel-sluggish # ./fs-bench --sync
write (sync: 1) time: 0.0301
write (sync: 1) time: 0.2098
write (sync: 1) time: 0.0291
write (sync: 1) time: 0.0264
write (sync: 1) time: 1.1664
write (sync: 1) time: 4.0421
write (sync: 1) time: 4.3212
write (sync: 1) time: 3.5316
write (sync: 1) time: 18.6760
write (sync: 1) time: 3.7851
write (sync: 1) time: 13.6281
write (sync: 1) time: 19.4889
write (sync: 1) time: 15.4923
write (sync: 1) time: 7.3491
write (sync: 1) time: 0.0269
write (sync: 1) time: 0.0275
...
This topic is important to me, as it has been affecting my home
machine quite a bit. I can test things as I have time.
Lastly, is there any way data=ordered could be re-written to be
"smart" about not making other processes wait on fsync? Or is that
sort of thing only handled in the scheduler? (not a kernel hacker
here)
Sorry if I'm interrupting. Perhaps I should even be starting another thread?
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-29 23:14 ` Dave Chinner
2009-03-30 0:39 ` Theodore Tso
@ 2009-03-30 3:01 ` Mark Lord
2009-03-30 6:41 ` Andreas T.Auer
2009-03-30 12:55 ` Chris Mason
2 siblings, 1 reply; 419+ messages in thread
From: Mark Lord @ 2009-03-30 3:01 UTC (permalink / raw)
To: Stefan Richter, Jeff Garzik, Linus Torvalds, Matthew Garrett,
Alan Cox, Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Dave Chinner wrote:
> On Sat, Mar 28, 2009 at 11:17:08AM -0400, Mark Lord wrote:
>> The better solution seems to be the rather obvious one:
>>
>> the filesystem should commit data to disk before altering metadata.
>
> Generalities are bad. For example:
>
> write();
> unlink();
> <do more stuff>
> close();
>
> This is a clear case where you want metadata changed before data is
> committed to disk. In many cases, you don't even want the data to
> hit the disk here.
..
Err, no actually. I want a consistent disk state,
either all old or all new data after a crash.
Not loss of BOTH new and old data.
And the example above is trying to show, what??
Looks like a temporary file case, except the code
is buggy and should be doing the unlink() before
the write() call.
But thanks for looking at this stuff!
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 1:29 ` Trenton Adams
@ 2009-03-30 3:28 ` Theodore Tso
2009-03-30 3:55 ` Trenton D. Adams
0 siblings, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-03-30 3:28 UTC (permalink / raw)
To: Trenton Adams
Cc: Mark Lord, Stefan Richter, Jeff Garzik, Linus Torvalds,
Matthew Garrett, Alan Cox, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Sun, Mar 29, 2009 at 07:29:09PM -0600, Trenton Adams wrote:
> I am slightly confused by the "data=ordered" thing that everyone is
> mentioning of late. In theory, it made sense to me before I tried it.
> I switched to mounting my ext3 as ext4, and I'm still seeing
> seriously delayed fsyncs. Theodore, I used a modified version of your
> fsync-tester.c to bench 1M writes, while doing a dd, and I'm still
> getting *almost* as bad of "fsync" performance as I was on ext3. On
> ext3, the fsync would usually not finish until the dd was complete.
How much memory do you have? On my 4gig X61 laptop, using a 5400 rpm
laptop drive, I see typical times of 1 to 1.5 seconds, with a few
outliers at 4-5 seconds. With ext3, the fsync times immediately
jumped up to 6-8 seconds, with the outliers in the 13-15 second range.
(This is with a filesystem formated as ext3, and mounted as either
ext3 or ext4; if the filesystem is formatted using "mke2fs -t ext4",
what you see is a very smooth 1.2-1.5 seconds fsync latency, indirect
blocks for very big files end up being quite inefficient.)
So I'm seeing a definite difference --- but also please remember that
"dd if=/dev/zero of=bigzero.img" really is an unfair, worst-case
scenario, since you are dirtying memory as fast as your CPU will dirty
pages. Normally, even if you are running distcc, the rate at which
you can dirty pages will be throttled at your local network speed.
You might want to try more normal workloads and see whether you are
seeing distinct fsync latency differences with ext4. Even with the
worst-case dd if=/dev/zero, I'm seeing major differences in my
testing.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 3:28 ` Theodore Tso
@ 2009-03-30 3:55 ` Trenton D. Adams
2009-03-30 13:45 ` Theodore Tso
0 siblings, 1 reply; 419+ messages in thread
From: Trenton D. Adams @ 2009-03-30 3:55 UTC (permalink / raw)
To: Theodore Tso, Trenton Adams, Mark Lord, Stefan Richter,
Jeff Garzik, Linus Torvalds, Matthew Garrett, Alan Cox,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Sun, Mar 29, 2009 at 9:28 PM, Theodore Tso <tytso@mit.edu> wrote:
> On Sun, Mar 29, 2009 at 07:29:09PM -0600, Trenton Adams wrote:
>> I am slightly confused by the "data=ordered" thing that everyone is
>> mentioning of late. In theory, it made sense to me before I tried it.
>> I switched to mounting my ext3 as ext4, and I'm still seeing
>> seriously delayed fsyncs. Theodore, I used a modified version of your
>> fsync-tester.c to bench 1M writes, while doing a dd, and I'm still
>> getting *almost* as bad of "fsync" performance as I was on ext3. On
>> ext3, the fsync would usually not finish until the dd was complete.
>
> How much memory do you have? On my 4gig X61 laptop, using a 5400 rpm
> laptop drive, I see typical times of 1 to 1.5 seconds, with a few
> outliers at 4-5 seconds. With ext3, the fsync times immediately
> jumped up to 6-8 seconds, with the outliers in the 13-15 second range.
2G, and I believe 5400rpm
>
> (This is with a filesystem formated as ext3, and mounted as either
> ext3 or ext4; if the filesystem is formatted using "mke2fs -t ext4",
> what you see is a very smooth 1.2-1.5 seconds fsync latency, indirect
> blocks for very big files end up being quite inefficient.)
Oh. I thought I had read somewhere that mounting ext4 over ext3 would
solve the problem. Not sure where I read that now. Sorry for wasting
your time.
>
> So I'm seeing a definite difference --- but also please remember that
> "dd if=/dev/zero of=bigzero.img" really is an unfair, worst-case
> scenario, since you are dirtying memory as fast as your CPU will dirty
> pages. Normally, even if you are running distcc, the rate at which
> you can dirty pages will be throttled at your local network speed.
Yes, I realize that. When trying to find performance problems I try
to be as *unfair* as possible. :D
Thanks Ted.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 0:39 ` Theodore Tso
2009-03-30 1:29 ` Trenton Adams
@ 2009-03-30 6:31 ` Dave Chinner
2009-03-30 13:55 ` Theodore Tso
2009-03-30 7:13 ` Andreas T.Auer
2 siblings, 1 reply; 419+ messages in thread
From: Dave Chinner @ 2009-03-30 6:31 UTC (permalink / raw)
To: Theodore Tso, Mark Lord, Stefan Richter, Jeff Garzik,
Linus Torvalds, Matthew Garrett, Alan Cox, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Sun, Mar 29, 2009 at 08:39:48PM -0400, Theodore Tso wrote:
> On Mon, Mar 30, 2009 at 10:14:51AM +1100, Dave Chinner wrote:
> > This is a clear case where you want metadata changed before data is
> > committed to disk. In many cases, you don't even want the data to
> > hit the disk here.
> >
> > Similarly, rsync does the magic open,write,close,rename sequence
> > without an fsync before the rename. And it doesn't need the fsync,
> > either. The proposed implicit fsync on rename will kill rsync
> > performance, and I think that may make many people unhappy....
>
> I agree. But unfortunately, I think we're going to be bullied into
> data=ordered semantics for the open/write/close/rename sequence, at
> least as the default. Ext4 has a noauto_da_alloc mount option (which
> Eric Sandeen suggested we rename to "no_pony" :-), for people who
> mostly run sane applications that use fsync().
>
> For people who care about rsync's performance and who assume that they
> can always restart rsync if the system crashes while the rsync is
> running could, rsync could add Yet Another Rsync Option :-) which
> explicitly unlinks the target file before the rename, which would
> disable the implicit fsync().
Pardon my french, but that is a fucking joke.
You are making a judgement call that one application is more
important than another application and trying to impose that on
everyone. You are saying that we should perturb a well designed and
written backup application that is embedded into critical scripts
all around the world for the sake of desktop application that has
developers that are too fucking lazy to fix their bugs.
If you want to trade rsync performance for desktop performance, do
it in the filesystem that is aimed at the desktop. Don't fuck rename
up for filesystems that are aimed at the server market and don't
want to implement performance sucking hacks to work around fucked up
desktop applications.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 3:01 ` Mark Lord
@ 2009-03-30 6:41 ` Andreas T.Auer
0 siblings, 0 replies; 419+ messages in thread
From: Andreas T.Auer @ 2009-03-30 6:41 UTC (permalink / raw)
To: Mark Lord
Cc: Stefan Richter, Jeff Garzik, Linus Torvalds, Matthew Garrett,
Alan Cox, Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On 30.03.2009 05:01 Mark Lord wrote:
> Dave Chinner wrote:
>> On Sat, Mar 28, 2009 at 11:17:08AM -0400, Mark Lord wrote:
>>> The better solution seems to be the rather obvious one:
>>>
>>> the filesystem should commit data to disk before altering metadata.
>>
>> Generalities are bad. For example:
>>
>> write();
>> unlink();
>> <do more stuff>
>> close();
>>
>> This is a clear case where you want metadata changed before data is
>> committed to disk. In many cases, you don't even want the data to
>> hit the disk here.
> ..
>
> Err, no actually. I want a consistent disk state,
> either all old or all new data after a crash.
>
>
Dave is right that if you write to a file and unlink the same file, so
that the data are orphaned. In that case you don't want the orphaned
data to be written on disk. But Mark is right, too. Because in that case
you probably also don't want any metadata to be written to the disk,
unless the open() was already commited. You might have to update
timestamps for the directory.
So rephrasing it:
The filesystem should not alter the metadata before writing the _linked_
data.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 0:39 ` Theodore Tso
2009-03-30 1:29 ` Trenton Adams
2009-03-30 6:31 ` Dave Chinner
@ 2009-03-30 7:13 ` Andreas T.Auer
2009-03-30 9:05 ` Alan Cox
2009-03-30 19:02 ` Bill Davidsen
2 siblings, 2 replies; 419+ messages in thread
From: Andreas T.Auer @ 2009-03-30 7:13 UTC (permalink / raw)
To: Theodore Tso, Mark Lord, Stefan Richter, Jeff Garzik,
Linus Torvalds, Matthew Garrett, Alan Cox, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On 30.03.2009 02:39 Theodore Tso wrote:
> All I can do is apologize to all other filesystem developers profusely
> for ext3's data=ordered semantics; at this point, I very much regret
> that we made data=ordered the default for ext3. But the application
> writers vastly outnumber us, and realistically we're not going to be
> able to easily roll back eight years of application writers being
> trained that fsync() is not necessary, and actually is detrimental for
> ext3.
>
>
It seems you still didn't get the point. ext3 data=ordered is not the
problem. The problem is that the average developer doesn't expect the fs
to _re-order_ stuff. This is how most common fs did work long before
ext3 has been introduced. They just know that there is a caching and
they might lose recent data, but they expect the fs on disk to be a
snapshot of the fs in memory at some time before the crash (except when
crashing while writing). But the re-ordering brings it to the state that
never has been in memory. data=ordered is just reflecting this thinking.
With data=writeback as the default the users would have lost data and
would have simply chosen a different fs instead of twisting the params.
Or the distros would have made data=ordered the default to prevent
beeing blamed for the data loss.
And still I don't know any reason, why it makes sense to write the
metadata to non-existing data immediately instead of delaying that, too.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 7:13 ` Andreas T.Auer
@ 2009-03-30 9:05 ` Alan Cox
2009-03-30 10:49 ` Andreas T.Auer
2009-03-30 19:02 ` Bill Davidsen
1 sibling, 1 reply; 419+ messages in thread
From: Alan Cox @ 2009-03-30 9:05 UTC (permalink / raw)
To: Andreas T.Auer
Cc: Theodore Tso, Mark Lord, Stefan Richter, Jeff Garzik,
Linus Torvalds, Matthew Garrett, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
> It seems you still didn't get the point. ext3 data=ordered is not the
> problem. The problem is that the average developer doesn't expect the fs
> to _re-order_ stuff. This is how most common fs did work long before
No it isn´t. Standard Unix file systems made no such guarantee and would
write out data out of order. The disk scheduler would then further
re-order things.
If you think the ¨guarantees¨ from before ext3 are normal defaults you´ve
been writing junk code
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 22:07 ` Arjan van de Ven
@ 2009-03-30 10:18 ` Pavel Machek
2009-03-31 13:33 ` Rafael J. Wysocki
` (2 subsequent siblings)
3 siblings, 0 replies; 419+ messages in thread
From: Pavel Machek @ 2009-03-30 10:18 UTC (permalink / raw)
To: Arjan van de Ven
Cc: Hans-Peter Jansen, Linus Torvalds, Mike Galbraith,
Geert Uytterhoeven, linux-kernel
Hi!
>> I always wonder, why Arjan does not intervene for his kerneloops.org
>> project, since your approach opens a window of uncertainty during the
>> merge window when simply using git as an efficient fetch tool.
>
> I would *love* it if Linus would, as first commit mark his tree as "-git0"
> (as per snapshots) or "-rc0". So that I can split the "final" versus
> "merge window" oopses.
Pretty please... I keep kernel binaries around and being able to tell
what it is when it boots is useful.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 9:05 ` Alan Cox
@ 2009-03-30 10:49 ` Andreas T.Auer
2009-03-30 11:12 ` Alan Cox
2009-03-30 11:17 ` Ric Wheeler
0 siblings, 2 replies; 419+ messages in thread
From: Andreas T.Auer @ 2009-03-30 10:49 UTC (permalink / raw)
To: Alan Cox
Cc: Andreas T.Auer, Theodore Tso, Mark Lord, Stefan Richter,
Jeff Garzik, Linus Torvalds, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On 30.03.2009 11:05 Alan Cox wrote:
>> It seems you still didn't get the point. ext3 data=ordered is not the
>> problem. The problem is that the average developer doesn't expect the fs
>> to _re-order_ stuff. This is how most common fs did work long before
>>
>
> No it isn´t. Standard Unix file systems made no such guarantee and would
> write out data out of order. The disk scheduler would then further
> re-order things.
>
>
You surely know that better: Did fs actually write "later" data quite
long before "earlier" data? During the flush data may be re-ordered, but
was it also _done_ outside of it?
> If you think the ¨guarantees¨ from before ext3 are normal defaults you´ve
> been writing junk code
>
>
I'm still on ReiserFS since it was considered stable in some SuSE 7.x.
And I expected it to be fairly ordered, but as a network protocol
programmer I didn't rely on the ordering of fs write-outs yet.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 10:49 ` Andreas T.Auer
@ 2009-03-30 11:12 ` Alan Cox
2009-03-30 11:17 ` Ric Wheeler
1 sibling, 0 replies; 419+ messages in thread
From: Alan Cox @ 2009-03-30 11:12 UTC (permalink / raw)
To: Andreas T.Auer
Cc: Andreas T.Auer, Theodore Tso, Mark Lord, Stefan Richter,
Jeff Garzik, Linus Torvalds, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
> You surely know that better: Did fs actually write "later" data quite
> long before "earlier" data? During the flush data may be re-ordered, but
> was it also _done_ outside of it?
BSD FFS/UFS and earlier file systems could leave you with all sorts of
ordering that was not guaranteed - you did get data written within about
30 seconds but no order guarantees and a crash/fsck could give you
interesting partial updates .. really interesting.
renaming was one fairly safe case as BSD FFS/UFS did rename synchronously
for the most part.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 10:49 ` Andreas T.Auer
2009-03-30 11:12 ` Alan Cox
@ 2009-03-30 11:17 ` Ric Wheeler
2009-03-30 13:48 ` Mark Lord
2009-03-30 15:34 ` Linus Torvalds
1 sibling, 2 replies; 419+ messages in thread
From: Ric Wheeler @ 2009-03-30 11:17 UTC (permalink / raw)
To: Andreas T.Auer
Cc: Alan Cox, Theodore Tso, Mark Lord, Stefan Richter, Jeff Garzik,
Linus Torvalds, Matthew Garrett, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
Andreas T.Auer wrote:
> On 30.03.2009 11:05 Alan Cox wrote:
>
>>> It seems you still didn't get the point. ext3 data=ordered is not the
>>> problem. The problem is that the average developer doesn't expect the fs
>>> to _re-order_ stuff. This is how most common fs did work long before
>>>
>>>
>> No it isn´t. Standard Unix file systems made no such guarantee and would
>> write out data out of order. The disk scheduler would then further
>> re-order things.
>>
>>
>>
> You surely know that better: Did fs actually write "later" data quite
> long before "earlier" data? During the flush data may be re-ordered, but
> was it also _done_ outside of it?
>
People keep forgetting that storage (even on your commodity s-ata class
of drives) has very large & volatile cache. The disk firmware can hold
writes in that cache as long as it wants, reorder its writes into
anything that makes sense and has no explicit ordering promises.
This is where the write barrier code comes in - for file systems that
care about ordering for data, we use barrier ops to impose the required
ordering.
In a similar way, fsync() gives applications the power to impose their
own ordering.
If we assume that we can "save" an fsync cost with ordering mode, we
have to keep in mind that the file system will need to do the expensive
cache flushes in order to preserve its internal ordering.
>
>> If you think the ¨guarantees¨ from before ext3 are normal defaults you´ve
>> been writing junk code
>>
>>
>>
> I'm still on ReiserFS since it was considered stable in some SuSE 7.x.
> And I expected it to be fairly ordered, but as a network protocol
> programmer I didn't rely on the ordering of fs write-outs yet.
>
With reiserfs, you will have barriers on by default in SLES/opensuse
which will keep (at least fs meta-data) properly ordered....
ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-29 23:14 ` Dave Chinner
2009-03-30 0:39 ` Theodore Tso
2009-03-30 3:01 ` Mark Lord
@ 2009-03-30 12:55 ` Chris Mason
2009-03-30 17:42 ` Theodore Tso
2009-03-31 23:55 ` Dave Chinner
2 siblings, 2 replies; 419+ messages in thread
From: Chris Mason @ 2009-03-30 12:55 UTC (permalink / raw)
To: Dave Chinner
Cc: Mark Lord, Stefan Richter, Jeff Garzik, Linus Torvalds,
Matthew Garrett, Alan Cox, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Mon, 2009-03-30 at 10:14 +1100, Dave Chinner wrote:
> On Sat, Mar 28, 2009 at 11:17:08AM -0400, Mark Lord wrote:
> > The better solution seems to be the rather obvious one:
> >
> > the filesystem should commit data to disk before altering metadata.
>
> Generalities are bad. For example:
>
> write();
> unlink();
> <do more stuff>
> close();
>
> This is a clear case where you want metadata changed before data is
> committed to disk. In many cases, you don't even want the data to
> hit the disk here.
>
> Similarly, rsync does the magic open,write,close,rename sequence
> without an fsync before the rename. And it doesn't need the fsync,
> either. The proposed implicit fsync on rename will kill rsync
> performance, and I think that may make many people unhappy....
>
Sorry, I'm afraid that rsync falls into the same category as the
kde/gnome apps here.
There are a lot of backup programs built around rsync, and every one of
them risks losing the old copy of the file by renaming an unflushed new
copy over it.
rsync needs the flushing about a million times more than gnome and kde,
and it doesn't have any option to do it automatically. It does have the
option to create backups, which is how a percentage of people are using
it, but I wouldn't call its current setup safe outside of ext3.
-chris
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 3:55 ` Trenton D. Adams
@ 2009-03-30 13:45 ` Theodore Tso
0 siblings, 0 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-30 13:45 UTC (permalink / raw)
To: Trenton D. Adams
Cc: Mark Lord, Stefan Richter, Jeff Garzik, Linus Torvalds,
Matthew Garrett, Alan Cox, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Sun, Mar 29, 2009 at 09:55:59PM -0600, Trenton D. Adams wrote:
> > (This is with a filesystem formated as ext3, and mounted as either
> > ext3 or ext4; if the filesystem is formatted using "mke2fs -t ext4",
> > what you see is a very smooth 1.2-1.5 seconds fsync latency, indirect
> > blocks for very big files end up being quite inefficient.)
>
> Oh. I thought I had read somewhere that mounting ext4 over ext3 would
> solve the problem. Not sure where I read that now. Sorry for wasting
> your time.
Well, I believe it should solve it for most realistic workloads (where
I don't think "dd if=/dev/zero of=bigzero.img" is realistic).
Looking more closely at the statistics, the delays aren't coming from
trying to flush the data blocks in data=ordered mode. If we disable
delayed allocation (mount -o nodelalloc), you'll see this when you
look at /proc/fs/jbd2/<dev>/history:
R/C tid wait run lock flush log hndls block inlog ctime write drop close
R 12 23 3836 0 1460 2563 50129 56 57
R 13 0 5023 0 1056 2100 64436 70 71
R 14 0 3156 0 1433 1803 40816 47 48
R 15 0 4250 0 1206 2473 57623 63 64
R 16 0 5000 0 1516 1136 61087 67 68
Note the amount of time in milliseconds in the flush column. That's
time spent flusing the allocated data blocks to disk. This goes away
once you enable delayed allocation:
R/C tid wait run lock flush log hndls block inlog ctime write drop close
R 56 0 2283 0 10 1250 32735 37 38
R 57 0 2463 0 13 1126 31297 38 39
R 58 0 2413 0 13 1243 35340 40 41
R 59 3 2383 0 20 1270 30760 38 39
R 60 0 2316 0 23 1176 33696 38 39
R 61 0 2266 0 23 1150 29888 37 38
R 62 0 2490 0 26 1140 35661 39 40
You may see slightly worse times since I'm running with a patch (which
will be pushed for 2.6.30) that makes sure that the blocks we are
writing during the "log" phase are written using WRITE_SYNC instead of
WRITE. (Without this patch, the huge amount of writes caused by the
VM trying to keep up with pages being dirtied at CPU speeds via "dd
if=/dev/zero..." will interfere with writes to the journal.)
During the log phase (which is averaging around 2 seconds for
nodealloc, and 1 seconds with delayed allocation enabled), we write
the metadata to the journal. The number of blocks that we are
actually writing to the journal is small (around 40 per transaction)
so I suspect we're seeing some lock contention or some accounting
overhead caused by the metadata blocks constantly getting dirtied by
dd if=/dev/zero task. We can look to see if this can be improved,
possibly by changing how we handle the locking, but it's no longer
being caused by the data=ordered flushing behaviour.
> Yes, I realize that. When trying to find performance problems I try
> to be as *unfair* as possible. :D
And that's a good thing from a development point of view when trying
to fix performance problems. When making statements about what people
are likely to find in real life, it's less useful.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 11:17 ` Ric Wheeler
@ 2009-03-30 13:48 ` Mark Lord
2009-03-30 14:00 ` Ric Wheeler
2009-03-30 15:34 ` Linus Torvalds
1 sibling, 1 reply; 419+ messages in thread
From: Mark Lord @ 2009-03-30 13:48 UTC (permalink / raw)
To: Ric Wheeler
Cc: Andreas T.Auer, Alan Cox, Theodore Tso, Stefan Richter,
Jeff Garzik, Linus Torvalds, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Ric Wheeler wrote:
>
> People keep forgetting that storage (even on your commodity s-ata class
> of drives) has very large & volatile cache. The disk firmware can hold
> writes in that cache as long as it wants, reorder its writes into
> anything that makes sense and has no explicit ordering promises.
..
Hi Ric,
No, we don't forget about those drive caches. But in practice,
for nearly everyone, they don't actually matter.
The kernel can crash, and the drives, in practice, will still
flush their caches to media by themselves. Within a second or two.
Sure, there are cases where this might not happen (total power fail),
but those are quite rare for desktop users -- and especially for the
most common variety of desktop user: notebook users (whose machines
have built-in UPSs).
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 6:31 ` Dave Chinner
@ 2009-03-30 13:55 ` Theodore Tso
0 siblings, 0 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-30 13:55 UTC (permalink / raw)
To: Mark Lord, Stefan Richter, Jeff Garzik, Linus Torvalds,
Matthew Garrett, Alan Cox, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Mon, Mar 30, 2009 at 05:31:10PM +1100, Dave Chinner wrote:
>
> Pardon my french, but that is a fucking joke.
>
> You are making a judgement call that one application is more
> important than another application and trying to impose that on
> everyone. You are saying that we should perturb a well designed and
> written backup application that is embedded into critical scripts
> all around the world for the sake of desktop application that has
> developers that are too fucking lazy to fix their bugs.
You are welcome to argue with the desktop application writers (and
Linus, who has sided with them). I *knew* this was a fight I was not
going to win, so I implemented the replace-via-rename workaround, even
before I started trying to convince applicaiton writers that they
should write more portable code that would be safe on filesystems such
as, say, XFS. And it looks like we're losing that battle as well;
it's hard to get people to write correct, portable code! (I *told*
the application writers that I was the moderate on this one, even as
they were flaming me to a crisp. Given that I'm taking flak from both
sides, it's to me a good indication that the design choices made for
ext4 was probably the right thing.)
> If you want to trade rsync performance for desktop performance, do
> it in the filesystem that is aimed at the desktop. Don't fuck rename
> up for filesystems that are aimed at the server market and don't
> want to implement performance sucking hacks to work around fucked up
> desktop applications.
What I did was create a mount option for system administrators
interested in the server market. And an rsync option that unlinks the
target filesystem first really isn't that big of a deal --- have you
seen how many options rsync already has? It's been a running joke
with the rsync developers. :-)
If XFS doesn't want to try to support the desktop market, that's fine
--- it's your choice. But at least as far as desktop application
programmers, this is not a fight we're going to win. It makes me sad,
but I'm enough of a realist to understand that.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 13:48 ` Mark Lord
@ 2009-03-30 14:00 ` Ric Wheeler
2009-03-30 14:44 ` Mark Lord
0 siblings, 1 reply; 419+ messages in thread
From: Ric Wheeler @ 2009-03-30 14:00 UTC (permalink / raw)
To: Mark Lord
Cc: Andreas T.Auer, Alan Cox, Theodore Tso, Stefan Richter,
Jeff Garzik, Linus Torvalds, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Mark Lord wrote:
> Ric Wheeler wrote:
>>
>> People keep forgetting that storage (even on your commodity s-ata
>> class of drives) has very large & volatile cache. The disk firmware
>> can hold writes in that cache as long as it wants, reorder its writes
>> into anything that makes sense and has no explicit ordering promises.
> ..
>
> Hi Ric,
>
> No, we don't forget about those drive caches. But in practice,
> for nearly everyone, they don't actually matter.
Here I disagree - nearly everyone has their critical data being manipulated in
large data centers on top of Linux servers. We all can routinely suffer when
linux crashes and loses data at big sites like google, amazon, hospitals or your
local bank.
It definitely does matter in practice, we usually just don't see it first hand :-)
>
> The kernel can crash, and the drives, in practice, will still
> flush their caches to media by themselves. Within a second or two.
Even with desktops, I am not positive that the drive write cache survives a
kernel crash without data loss. If I remember correctly, Chris's tests used
crashes (not power outages) to display the data corruption that happened without
barriers being enabled properly.
>
> Sure, there are cases where this might not happen (total power fail),
> but those are quite rare for desktop users -- and especially for the
> most common variety of desktop user: notebook users (whose machines
> have built-in UPSs).
>
> Cheers
Unless of course you push your luck with your battery and run it until really
out of power, but in general, I do agree that laptops and notebook users have a
reasonably robust built in UPS.
ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-29 14:45 ` Pavel Machek
2009-03-29 15:47 ` Linus Torvalds
@ 2009-03-30 14:22 ` Morten P.D. Stevens
1 sibling, 0 replies; 419+ messages in thread
From: Morten P.D. Stevens @ 2009-03-30 14:22 UTC (permalink / raw)
To: Pavel Machek
Cc: Bodo Eggert, Linus Torvalds, Matthew Garrett, Alan Cox,
Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
2009/3/29 Pavel Machek <pavel@ucw.cz>:
> Actually ext2 is more reliable in ext3 --
ext2 more reliable than ext3? Is this a joke?
ext2 is about 15 years and the worst linux file system ever. It´s
slow, no journaling...
ext3 is fast, rockstable and solid.
I think ext4 is more reliable than ext3.
-
Morten
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 14:00 ` Ric Wheeler
@ 2009-03-30 14:44 ` Mark Lord
2009-03-30 14:58 ` Ric Wheeler
2009-03-30 15:00 ` Jeff Garzik
0 siblings, 2 replies; 419+ messages in thread
From: Mark Lord @ 2009-03-30 14:44 UTC (permalink / raw)
To: Ric Wheeler
Cc: Andreas T.Auer, Alan Cox, Theodore Tso, Stefan Richter,
Jeff Garzik, Linus Torvalds, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Ric Wheeler wrote:
> Mark Lord wrote:
>> Ric Wheeler wrote:
..
>> The kernel can crash, and the drives, in practice, will still
>> flush their caches to media by themselves. Within a second or two.
>
> Even with desktops, I am not positive that the drive write cache
> survives a kernel crash without data loss. If I remember correctly,
> Chris's tests used crashes (not power outages) to display the data
> corruption that happened without barriers being enabled properly.
..
Linux f/s barriers != drive write caches.
Drive write caches are an almost total non-issue for desktop users,
except on the (very rare) event of a total, sudden power failure
during extended write outs.
Very rare. Yes, a huge problem for server farms. No question.
But the majority of Linux systems are probably (still) desktops/notebooks.
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 14:44 ` Mark Lord
@ 2009-03-30 14:58 ` Ric Wheeler
2009-03-30 15:21 ` Mark Lord
2009-03-30 15:00 ` Jeff Garzik
1 sibling, 1 reply; 419+ messages in thread
From: Ric Wheeler @ 2009-03-30 14:58 UTC (permalink / raw)
To: Mark Lord
Cc: Andreas T.Auer, Alan Cox, Theodore Tso, Stefan Richter,
Jeff Garzik, Linus Torvalds, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Mark Lord wrote:
> Ric Wheeler wrote:
>> Mark Lord wrote:
>>> Ric Wheeler wrote:
> ..
>>> The kernel can crash, and the drives, in practice, will still
>>> flush their caches to media by themselves. Within a second or two.
>>
>> Even with desktops, I am not positive that the drive write cache
>> survives a kernel crash without data loss. If I remember correctly,
>> Chris's tests used crashes (not power outages) to display the data
>> corruption that happened without barriers being enabled properly.
> ..
>
> Linux f/s barriers != drive write caches.
>
> Drive write caches are an almost total non-issue for desktop users,
> except on the (very rare) event of a total, sudden power failure
> during extended write outs.
>
> Very rare. Yes, a huge problem for server farms. No question.
> But the majority of Linux systems are probably (still) desktops/notebooks.
>
> Cheers
I am confused as to why you think that barriers (flush barriers specifically)
are not equivalent to drive write cache. We disable barriers when the write
cache is off, use them only to insure that our ordering for fs transactions
survives any power loss. No one should be enabling barriers on linux file
systems if your write cache is disabled or if you have a battery backed write
cache (say on an enterprise class disk array).
Chris' test of barriers (with write cache enabled) did show for desktop class
boxes that you would get file system corruption (i.e., need to fsck the disk) a
huge percentage of the time.
Sudden power failures are not rare for desktops in my personal experience, I see
them several times a year in New England both at home (ice, tree limbs, etc) or
at work (unplanned outages for repair, broken AC, etc).
Ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 14:44 ` Mark Lord
2009-03-30 14:58 ` Ric Wheeler
@ 2009-03-30 15:00 ` Jeff Garzik
1 sibling, 0 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-03-30 15:00 UTC (permalink / raw)
To: Mark Lord
Cc: Ric Wheeler, Andreas T.Auer, Alan Cox, Theodore Tso,
Stefan Richter, Linus Torvalds, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Mark Lord wrote:
> Ric Wheeler wrote:
>> Mark Lord wrote:
>>> Ric Wheeler wrote:
> ..
>>> The kernel can crash, and the drives, in practice, will still
>>> flush their caches to media by themselves. Within a second or two.
>>
>> Even with desktops, I am not positive that the drive write cache
>> survives a kernel crash without data loss. If I remember correctly,
>> Chris's tests used crashes (not power outages) to display the data
>> corruption that happened without barriers being enabled properly.
> ..
>
> Linux f/s barriers != drive write caches.
>
> Drive write caches are an almost total non-issue for desktop users,
> except on the (very rare) event of a total, sudden power failure
> during extended write outs.
>
> Very rare.
Heck, even I have lost power on a plane, while a laptop in laptop mode
was flushing out work. Not that rare.
> Yes, a huge problem for server farms. No question.
> But the majority of Linux systems are probably (still) desktops/notebooks.
But it doesn't really matter who is what majority, does it? At the
present time at least, we have not designated any filesystems "desktop
only", nor have we declared Linux a desktop-only OS.
Any generalized decision that hurts servers to help desktops would be
short-sighted. Robbing Peter, to pay Paul, is no formula for OS success.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 14:58 ` Ric Wheeler
@ 2009-03-30 15:21 ` Mark Lord
2009-03-30 15:27 ` Ric Wheeler
0 siblings, 1 reply; 419+ messages in thread
From: Mark Lord @ 2009-03-30 15:21 UTC (permalink / raw)
To: Ric Wheeler
Cc: Andreas T.Auer, Alan Cox, Theodore Tso, Stefan Richter,
Jeff Garzik, Linus Torvalds, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Ric Wheeler wrote:
>
> I am confused as to why you think that barriers (flush barriers
> specifically) are not equivalent to drive write cache. We disable
> barriers when the write cache is off, use them only to insure that our
> ordering for fs transactions survives any power loss. No one should be
> enabling barriers on linux file systems if your write cache is disabled
> or if you have a battery backed write cache (say on an enterprise class
> disk array).
>
> Chris' test of barriers (with write cache enabled) did show for desktop
> class boxes that you would get file system corruption (i.e., need to
> fsck the disk) a huge percentage of the time.
..
Sure, no doubt there. But it's due to the kernel crash,
not due to the write cache on the drive.
Anything in the drive's write cache very probably made it to the media
within a second or two of arriving there.
So with or without a write cache, the same result should happen
for those tests. Of course, if you disable barriers *and* write cache,
then you are no longer testing the same kernel code.
I'm not arguing against battery backup or UPSs,
or *for* blindly trusting write caches without reliable power.
Just pointing out that they're not the evil that some folks
seem to believe they are.
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 15:21 ` Mark Lord
@ 2009-03-30 15:27 ` Ric Wheeler
2009-03-30 16:13 ` Linus Torvalds
0 siblings, 1 reply; 419+ messages in thread
From: Ric Wheeler @ 2009-03-30 15:27 UTC (permalink / raw)
To: Mark Lord, Chris Mason
Cc: Andreas T.Auer, Alan Cox, Theodore Tso, Stefan Richter,
Jeff Garzik, Linus Torvalds, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Mark Lord wrote:
> Ric Wheeler wrote:
>>
>> I am confused as to why you think that barriers (flush barriers
>> specifically) are not equivalent to drive write cache. We disable
>> barriers when the write cache is off, use them only to insure that our
>> ordering for fs transactions survives any power loss. No one should be
>> enabling barriers on linux file systems if your write cache is
>> disabled or if you have a battery backed write cache (say on an
>> enterprise class disk array).
>>
>> Chris' test of barriers (with write cache enabled) did show for
>> desktop class boxes that you would get file system corruption (i.e.,
>> need to fsck the disk) a huge percentage of the time.
> ..
>
> Sure, no doubt there. But it's due to the kernel crash,
> not due to the write cache on the drive.
>
> Anything in the drive's write cache very probably made it to the media
> within a second or two of arriving there.
A modern S-ATA drive has up to 32MB of write cache. If you lose power or suffer
a sudden reboot (that can reset the bus at least), I am pretty sure that your
above assumption is simply not true.
>
> So with or without a write cache, the same result should happen
> for those tests. Of course, if you disable barriers *and* write cache,
> then you are no longer testing the same kernel code.
Here, I still disagree. All of the test that we have done have shown that write
cache enabled/barriers off will provably result in fs corruption.
It would be great to have Chris revise his earlier barrier/corruption test to
validate your assumption (not the test that he posted recently).
>
> I'm not arguing against battery backup or UPSs,
> or *for* blindly trusting write caches without reliable power.
>
> Just pointing out that they're not the evil that some folks
> seem to believe they are.
>
> Cheers
I run with write cache and barriers enabled routinely, but would not run without
working barriers on any desktop box when the drives have write cache enabled
having spent too many hours watching fsck churn :-)
ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 11:17 ` Ric Wheeler
2009-03-30 13:48 ` Mark Lord
@ 2009-03-30 15:34 ` Linus Torvalds
2009-03-30 16:11 ` Ric Wheeler
1 sibling, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-30 15:34 UTC (permalink / raw)
To: Ric Wheeler
Cc: Andreas T.Auer, Alan Cox, Theodore Tso, Mark Lord, Stefan Richter,
Jeff Garzik, Matthew Garrett, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Mon, 30 Mar 2009, Ric Wheeler wrote:
>
> People keep forgetting that storage (even on your commodity s-ata class of
> drives) has very large & volatile cache. The disk firmware can hold writes in
> that cache as long as it wants, reorder its writes into anything that makes
> sense and has no explicit ordering promises.
Well, when it comes to disk caches, it really does make sense to start
looking at what breaks.
For example, it is obviously true that any half-way modern disk has
megabytes of caches, and write caching is quite often enabled by default.
BUT!
The write-caches on disk are rather different in many very fundamental
ways from the kernel write caches.
One of the differences is that no disk I've ever heard of does write-
caching for long times, unless it has battery back-up. Yes, yes, you can
probably find firmware that has some odd starvation issue, and if the disk
is constantly busy and the access patterns are _just_ right the writes can
take a long time, but realistically we're talking delaying and re-ordering
things by milliseconds. We're not talking seconds or tens of seconds.
And that's really quite a _big_ difference in itself. It may not be
qualitatively all that different (re-ordering is re-ordering, delays are
delays), but IN PRACTICE there's an absolutely huge difference between
delaying and re-ordering writes over milliseconds and doing so over 30s.
The other (huge) difference is that the on-disk write caching generally
fails only if the drive power fails. Yes, there's a software component to
it (buggy firmware), but you can really approximate the whole "disk write
caches didn't get flushed" with "powerfail".
Kernel data caches? Let's be honest. The kernel can fail for a thousand
different reasons, including very much _any_ component failing, rather
than just the power supply. But also obviously including bugs.
So when people bring up on-disk caching, it really is a totally different
thing from the kernel delaying writes.
So it's entirely reasonable to say "leave the disk doing write caching,
and don't force flushing", while still saying "the kernel should order the
writes it does".
Thinking that this is somehow a black-and-white issue where "ordered
writes" always has to imply "cache flush commands" is simply wrong. It is
_not_ that black-and-white, and it should probably not even be a
filesystem decision to make (it's a "system" decision).
This, btw, is doubly true simply because if the disk really fails, it's
entirely possible that it fails in a really nasty way. As in "not only did
it not write the sector, but the whole track is now totally unreadable
because power failed while the write head was active".
Because that notion of "power" is not a digital thing - you have
capacitors, brown-outs, and generally nasty "oops, for a few milliseconds
the drive still had power, but it was way out of spec, and odd things
happened".
So quite frankly, if you start worrying about disk power failures, you
should also then worry about the disk failing in _way_ more spectacular
ways than just the simple "wrote or wrote not - that is the question".
And when was the last time you saw a "safe" logging filesystem that was
safe in the face of the log returning IO errors after power comes back on?
Sure, RAID is one answer. Except not so much in 99% of all desktops or
especially laptops.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 15:34 ` Linus Torvalds
@ 2009-03-30 16:11 ` Ric Wheeler
2009-03-30 16:34 ` Linus Torvalds
0 siblings, 1 reply; 419+ messages in thread
From: Ric Wheeler @ 2009-03-30 16:11 UTC (permalink / raw)
To: Linus Torvalds
Cc: Andreas T.Auer, Alan Cox, Theodore Tso, Mark Lord, Stefan Richter,
Jeff Garzik, Matthew Garrett, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
>
> On Mon, 30 Mar 2009, Ric Wheeler wrote:
>> People keep forgetting that storage (even on your commodity s-ata class of
>> drives) has very large & volatile cache. The disk firmware can hold writes in
>> that cache as long as it wants, reorder its writes into anything that makes
>> sense and has no explicit ordering promises.
>
> Well, when it comes to disk caches, it really does make sense to start
> looking at what breaks.
>
> For example, it is obviously true that any half-way modern disk has
> megabytes of caches, and write caching is quite often enabled by default.
>
> BUT!
>
> The write-caches on disk are rather different in many very fundamental
> ways from the kernel write caches.
>
> One of the differences is that no disk I've ever heard of does write-
> caching for long times, unless it has battery back-up. Yes, yes, you can
> probably find firmware that has some odd starvation issue, and if the disk
> is constantly busy and the access patterns are _just_ right the writes can
> take a long time, but realistically we're talking delaying and re-ordering
> things by milliseconds. We're not talking seconds or tens of seconds.
>
> And that's really quite a _big_ difference in itself. It may not be
> qualitatively all that different (re-ordering is re-ordering, delays are
> delays), but IN PRACTICE there's an absolutely huge difference between
> delaying and re-ordering writes over milliseconds and doing so over 30s.
>
> The other (huge) difference is that the on-disk write caching generally
> fails only if the drive power fails. Yes, there's a software component to
> it (buggy firmware), but you can really approximate the whole "disk write
> caches didn't get flushed" with "powerfail".
>
> Kernel data caches? Let's be honest. The kernel can fail for a thousand
> different reasons, including very much _any_ component failing, rather
> than just the power supply. But also obviously including bugs.
>
> So when people bring up on-disk caching, it really is a totally different
> thing from the kernel delaying writes.
>
> So it's entirely reasonable to say "leave the disk doing write caching,
> and don't force flushing", while still saying "the kernel should order the
> writes it does".
Largely correct above - most disks will gradually destage writes from their
cache. Large, sequential writes might entirely bypass the write cache and be
sent (more or less) immediately out to permanent storage.
I still disagree strongly with the don't force flush idea - we have an absolute
and critical need to have ordered writes that will survive a power failure for
any file system that is built on transactions (or data base).
The big issues are that for s-ata drives, our flush mechanism is really, really
primitive and brutal. We could/should try to validate a better and less onerous
mechanism (with ordering tags? experimental flush ranges? etc).
> Thinking that this is somehow a black-and-white issue where "ordered
> writes" always has to imply "cache flush commands" is simply wrong. It is
> _not_ that black-and-white, and it should probably not even be a
> filesystem decision to make (it's a "system" decision).
>
> This, btw, is doubly true simply because if the disk really fails, it's
> entirely possible that it fails in a really nasty way. As in "not only did
> it not write the sector, but the whole track is now totally unreadable
> because power failed while the write head was active".
I spent a very long time looking at huge numbers of installed systems (millions
of file systems deployed in the field), including taking part in weekly
analysis of why things failed, whether the rates of failure went up or down with
a given configuration, etc. so I can fully appreciate all of the ways drives (or
SSD's!) can magically eat your data.
What you have to keep in mind is the order of magnitude of various buckets of
failures - software crashes/code bugs tend to dominate, followed by drive
failures, followed by power supplies, etc.
I have personally seen a huge reduction in the "software" rate of failures when
you get the write barriers (forced write cache flushing) working properly with a
very large installed base, tested over many years :-)
>
> Because that notion of "power" is not a digital thing - you have
> capacitors, brown-outs, and generally nasty "oops, for a few milliseconds
> the drive still had power, but it was way out of spec, and odd things
> happened".
>
> So quite frankly, if you start worrying about disk power failures, you
> should also then worry about the disk failing in _way_ more spectacular
> ways than just the simple "wrote or wrote not - that is the question".
Again, you have to focus on the errors that happen in order of the prevalence.
The number of boxes, over a 3 year period, that have an unexpected power loss is
much, much higher than the number of boxes that have a disk head crash (probably
the number one cause of hard disk failure).
I do agree that we need to do other (background) tasks to detect things like the
that drives can have (lots of neat terms that give file system people
nightmare in the drive industry: "adjacent track erasures", "over powered
seeks", "hi fly writes" just to name my favourites).
Having full checksumming for data blocks and metadata blocks in btrfs will allow
us to do this kind of background scrubbing pretty naturally, a big win.
>
> And when was the last time you saw a "safe" logging filesystem that was
> safe in the face of the log returning IO errors after power comes back on?
This is pretty much a double failure - you need a bad write to the log (or
undetected media error like the ones I mentioned above) and a power failure/reboot.
As you say, most file systems or data bases will need manual repair or will get
restored from tape.
That is not the normal case, but we can do surface level scans to try and weed
out bad media continually during the healthy phase of a boxes life. This can be
relatively low impact and has a huge positive impact on system reliability.
Any engineer who designs storage system knows that you will have failures - we
just aim to get the rate of failures down to where you have a fighting chance of
recovery at a price you can afford...
>
> Sure, RAID is one answer. Except not so much in 99% of all desktops or
> especially laptops.
>
> Linus
If you only have one disk, you clearly need a good back up plan of some kind. I
try to treat my laptop as a carrying vessel for data that I have temporarily on
it, but is stored somewhere else more stable for when the disk breaks, some kid
steals it, etc :-)
Ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 15:27 ` Ric Wheeler
@ 2009-03-30 16:13 ` Linus Torvalds
2009-03-30 16:30 ` Mark Lord
0 siblings, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-03-30 16:13 UTC (permalink / raw)
To: Ric Wheeler
Cc: Mark Lord, Chris Mason, Andreas T.Auer, Alan Cox, Theodore Tso,
Stefan Richter, Jeff Garzik, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Mon, 30 Mar 2009, Ric Wheeler wrote:
>
> A modern S-ATA drive has up to 32MB of write cache. If you lose power or
> suffer a sudden reboot (that can reset the bus at least), I am pretty sure
> that your above assumption is simply not true.
At least traditionally, it's worth to note that 32MB of on-disk cache is
not the same as 32MB of kernel write cache.
The drive caches tend to be more like track caches - you tend to have a
few large cache entries (segments), not something like a sector cache. And
I seriously doubt the disk will let you fill them up with writes: it
likely has things like the sector remapping tables in those caches too.
It's hard to find information about the cache organization of modern
drives, but at least a few years ago, some of them literally had just a
single segment, or just a few segments (ie a "8MB cache" might be eight
segments of one megabyte each).
The reason that matters is that those disks are very good at linear
throughput.
The latency for writing out eight big segments is likely not really
noticeably different from the latency of writing out eight single sectors
spread out across the disk - they both do eight operations, and the
difference between an op that writes a big chunk of a track and writing a
single sector isn't necessarily all that noticeable.
So if you have a 8MB drive cache, it's very likely that the drive can
flush its cache in just a few seeks, and we're still talking milliseconds.
In contrast, even just 8MB of OS caches could have _hundreds_ of seeks and
take several seconds to write out.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 16:13 ` Linus Torvalds
@ 2009-03-30 16:30 ` Mark Lord
2009-03-30 16:58 ` Linus Torvalds
0 siblings, 1 reply; 419+ messages in thread
From: Mark Lord @ 2009-03-30 16:30 UTC (permalink / raw)
To: Linus Torvalds
Cc: Ric Wheeler, Chris Mason, Andreas T.Auer, Alan Cox, Theodore Tso,
Stefan Richter, Jeff Garzik, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
>
> On Mon, 30 Mar 2009, Ric Wheeler wrote:
>> A modern S-ATA drive has up to 32MB of write cache. If you lose power or
>> suffer a sudden reboot (that can reset the bus at least), I am pretty sure
>> that your above assumption is simply not true.
>
> At least traditionally, it's worth to note that 32MB of on-disk cache is
> not the same as 32MB of kernel write cache.
>
> The drive caches tend to be more like track caches - you tend to have a
> few large cache entries (segments), not something like a sector cache. And
> I seriously doubt the disk will let you fill them up with writes: it
> likely has things like the sector remapping tables in those caches too.
..
I spent an entire day recently, trying to see if I could significantly fill
up the 32MB cache on a 750GB Hitach SATA drive here.
With deliberate/random write patterns, big and small, near and far,
I could not fill the drive with anything approaching a full second
of latent write-cache flush time.
Not even close. Which is a pity, because I really wanted to do some testing
related to a deep write cache. But it just wouldn't happen.
I tried this again on a 16MB cache of a Seagate drive, no difference.
Bummer. :)
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 16:11 ` Ric Wheeler
@ 2009-03-30 16:34 ` Linus Torvalds
2009-03-30 17:11 ` Ric Wheeler
2009-03-31 21:10 ` Alan Cox
0 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-30 16:34 UTC (permalink / raw)
To: Ric Wheeler
Cc: Andreas T.Auer, Alan Cox, Theodore Tso, Mark Lord, Stefan Richter,
Jeff Garzik, Matthew Garrett, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Mon, 30 Mar 2009, Ric Wheeler wrote:
>
> I still disagree strongly with the don't force flush idea - we have an
> absolute and critical need to have ordered writes that will survive a power
> failure for any file system that is built on transactions (or data base).
Read that sentence of yours again.
In particular, read the "we" part, and ponder.
YOU have that absolute and critical need.
Others? Likely not so much. The reason people run "data=ordered" on their
laptops is not just because it's the default - rather, it's the default
_because_ it's the one that avoids most obvious problems. And for 99% of
all people, that's what they want.
And as mentioned, if you have to have absolute requirements, you
absolutely MUST be using real RAID with real protection (not just RAID0).
Not "should". MUST. If you don't do redundancy, your disk _will_
eventually eat your data. Not because the OS wrote in the wrong order, or
the disk cached writes, but simply because bad things do happen.
But turn that around, and say: if you don't have redundant disks, then
pretty much by definition those drive flushes won't be guaranteeing your
data _anyway_, so why pay the price?
> The big issues are that for s-ata drives, our flush mechanism is really,
> really primitive and brutal. We could/should try to validate a better and less
> onerous mechanism (with ordering tags? experimental flush ranges? etc).
That's one of the issues. The cost of those flushes can be really quite
high, and as mentioned, in the absense of redundancy you don't actually
get the guarantees that you seem to think that you get.
> I spent a very long time looking at huge numbers of installed systems
> (millions of file systems deployed in the field), including taking part in
> weekly analysis of why things failed, whether the rates of failure went up or
> down with a given configuration, etc. so I can fully appreciate all of the
> ways drives (or SSD's!) can magically eat your data.
Well, I can go mainly by my own anecdotal evidence, and so far I've
actually had more catastrophic data failure from failed drives than
anything else. OS crashes in the middle of a "yum update"? Yup, been
there, done that, it was really painful. But it was painful in a "damn, I
need to force a re-install of a couple of rpms".
Actual failed drives that got read errors? I seem to average almost one a
year. It's been overheating laptops, and it's been power outages that
apparently happened at really bad times. I have a UPS now.
> What you have to keep in mind is the order of magnitude of various buckets of
> failures - software crashes/code bugs tend to dominate, followed by drive
> failures, followed by power supplies, etc.
Sure. And those "write flushes" really only cover a rather small
percentage. For many setups, the other corruption issues (drive failure)
are not just more common, but generally more disastrous anyway. So why
would a person like that worry about the (rare) power failure?
> I have personally seen a huge reduction in the "software" rate of failures
> when you get the write barriers (forced write cache flushing) working properly
> with a very large installed base, tested over many years :-)
The software rate of failures should only care about the software write
barriers (ie the ones that order the OS elevator - NOT the ones that
actually tell the disk to flush itself).
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 16:30 ` Mark Lord
@ 2009-03-30 16:58 ` Linus Torvalds
2009-03-30 17:29 ` Mark Lord
2009-03-30 17:57 ` Chris Mason
0 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-30 16:58 UTC (permalink / raw)
To: Mark Lord
Cc: Ric Wheeler, Chris Mason, Andreas T.Auer, Alan Cox, Theodore Tso,
Stefan Richter, Jeff Garzik, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Mon, 30 Mar 2009, Mark Lord wrote:
>
> I spent an entire day recently, trying to see if I could significantly fill
> up the 32MB cache on a 750GB Hitach SATA drive here.
>
> With deliberate/random write patterns, big and small, near and far,
> I could not fill the drive with anything approaching a full second
> of latent write-cache flush time.
>
> Not even close. Which is a pity, because I really wanted to do some testing
> related to a deep write cache. But it just wouldn't happen.
>
> I tried this again on a 16MB cache of a Seagate drive, no difference.
>
> Bummer. :)
Try it with laptop drives. You might get to a second, or at least hundreds
of ms (not counting the spinup delay if it went to sleep, obviously). You
probably tested desktop drives (that 750GB Hitachi one is not a low end
one, and I assume the Seagate one isn't either).
You'll have a much easier time getting long latencies when seeks take tens
of ms, and the platter rotates at some pitiful 3600rpm (ok, I guess those
drives are hard to find these days - I guess 4200rpm is the norm even for
1.8" laptop harddrives).
And also - this is probably obvious to you, but it might not be
immediately obvious to everybody - make sure that you do have TCQ going,
and at full depth. If the drive supports TCQ (and they all do, these days)
it is quite possible that the drive firmware basically limits the write
caching to one segment per TCQ entry (or at least to something smallish).
Why? Because that really simplifies some of the problem space for the
firmware a _lot_ - if you have at least as many segments in your cache as
your max TCQ depth, it means that you always have one segment free to be
re-used without any physical IO when a new command comes in.
And if I were a disk firmware engineer, I'd try my damndest to keep my
problem space simple, so I would do exactly that kind of "limit the number
of dirty cache segments by the queue size" thing.
But I dunno. You may not want to touch those slow laptop drives with a
ten-foot pole. It's certainly not my favorite pastime.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 16:34 ` Linus Torvalds
@ 2009-03-30 17:11 ` Ric Wheeler
2009-03-30 17:39 ` Mark Lord
2009-03-30 17:51 ` Linus Torvalds
2009-03-31 21:10 ` Alan Cox
1 sibling, 2 replies; 419+ messages in thread
From: Ric Wheeler @ 2009-03-30 17:11 UTC (permalink / raw)
To: Linus Torvalds
Cc: Andreas T.Auer, Alan Cox, Theodore Tso, Mark Lord, Stefan Richter,
Jeff Garzik, Matthew Garrett, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
>
> On Mon, 30 Mar 2009, Ric Wheeler wrote:
>> I still disagree strongly with the don't force flush idea - we have an
>> absolute and critical need to have ordered writes that will survive a power
>> failure for any file system that is built on transactions (or data base).
>
> Read that sentence of yours again.
>
> In particular, read the "we" part, and ponder.
>
> YOU have that absolute and critical need.
>
> Others? Likely not so much. The reason people run "data=ordered" on their
> laptops is not just because it's the default - rather, it's the default
> _because_ it's the one that avoids most obvious problems. And for 99% of
> all people, that's what they want.
My "we" is meant to be the file system writers - we build our journalled file
systems on top of these assumptions about ordering. Not having them punts this
all to fsck running most likely in a manual repair.
>
> And as mentioned, if you have to have absolute requirements, you
> absolutely MUST be using real RAID with real protection (not just RAID0).
>
> Not "should". MUST. If you don't do redundancy, your disk _will_
> eventually eat your data. Not because the OS wrote in the wrong order, or
> the disk cached writes, but simply because bad things do happen.
Simply not true. To build reliable systems, you need reliable components.
It is perfectly normal to build non-raided systems that are components of a
larger storage pool that don't do raid.
Easy example would be two desktops using rsync, most "cloud" storage systems do
something similar at the whole file level (i.e., write out my file 3 times).
If you acknowledge back to a client a write, then have a power outage, the
client should reasonably be able to expect that the data survived the power outage.
>
> But turn that around, and say: if you don't have redundant disks, then
> pretty much by definition those drive flushes won't be guaranteeing your
> data _anyway_, so why pay the price?
They do in fact provide that promise for the extremely common case of power
outage and as such, can be used to build reliable storage if you need to.
>> The big issues are that for s-ata drives, our flush mechanism is really,
>> really primitive and brutal. We could/should try to validate a better and less
>> onerous mechanism (with ordering tags? experimental flush ranges? etc).
>
> That's one of the issues. The cost of those flushes can be really quite
> high, and as mentioned, in the absense of redundancy you don't actually
> get the guarantees that you seem to think that you get.
I have measured the costs of the write flushes on a variety of devices,
routinely, a cache flush is on the order of 10-20 ms with a healthy s-ata drive.
Compared to the write speed of writing any large file from DRAM to storage, one
20ms cost to make sure it is on disk is normally in the noise.
The trade off is clearly not as good for small files.
And I will add, my data is built on years of real data from commodity hardware
running normal Linux kernels - no special hardware. There are also a lot of good
papers that the USENIX FAST people have put out (looking at failures in NetApp
gear, the HPC servers in national labs and at google) that can help provide
realistic & accurate data.
>
>> I spent a very long time looking at huge numbers of installed systems
>> (millions of file systems deployed in the field), including taking part in
>> weekly analysis of why things failed, whether the rates of failure went up or
>> down with a given configuration, etc. so I can fully appreciate all of the
>> ways drives (or SSD's!) can magically eat your data.
>
> Well, I can go mainly by my own anecdotal evidence, and so far I've
> actually had more catastrophic data failure from failed drives than
> anything else. OS crashes in the middle of a "yum update"? Yup, been
> there, done that, it was really painful. But it was painful in a "damn, I
> need to force a re-install of a couple of rpms".
>
> Actual failed drives that got read errors? I seem to average almost one a
> year. It's been overheating laptops, and it's been power outages that
> apparently happened at really bad times. I have a UPS now.
Heat is a major killer of spinning drives (as is severe cold). A lot of times,
drives that have read errors only (not failed writes) might be fully recoverable
if you can re-write that injured sector. What you should look for is a peak in
the remapped sectors (via hdparm) - that usually is a moderately good indicator
(but note that it is normal to have some, just not 10-25% remapped!).
>
>> What you have to keep in mind is the order of magnitude of various buckets of
>> failures - software crashes/code bugs tend to dominate, followed by drive
>> failures, followed by power supplies, etc.
>
> Sure. And those "write flushes" really only cover a rather small
> percentage. For many setups, the other corruption issues (drive failure)
> are not just more common, but generally more disastrous anyway. So why
> would a person like that worry about the (rare) power failure?
This is simply not a true statement from what I have seen personally.
>
>> I have personally seen a huge reduction in the "software" rate of failures
>> when you get the write barriers (forced write cache flushing) working properly
>> with a very large installed base, tested over many years :-)
>
> The software rate of failures should only care about the software write
> barriers (ie the ones that order the OS elevator - NOT the ones that
> actually tell the disk to flush itself).
>
> Linus
The elevator does not issue write barriers on its own - those write barriers are
sent down by the file systems for transaction commits.
I could be totally confused at this point, but I don't know of any sequential
ordering needs that CFQ, etc have for their internal needs.
ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 16:58 ` Linus Torvalds
@ 2009-03-30 17:29 ` Mark Lord
2009-03-30 17:57 ` Chris Mason
1 sibling, 0 replies; 419+ messages in thread
From: Mark Lord @ 2009-03-30 17:29 UTC (permalink / raw)
To: Linus Torvalds
Cc: Ric Wheeler, Chris Mason, Andreas T.Auer, Alan Cox, Theodore Tso,
Stefan Richter, Jeff Garzik, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
>
> On Mon, 30 Mar 2009, Mark Lord wrote:
>> I spent an entire day recently, trying to see if I could significantly fill
>> up the 32MB cache on a 750GB Hitach SATA drive here.
>>
>> With deliberate/random write patterns, big and small, near and far,
>> I could not fill the drive with anything approaching a full second
>> of latent write-cache flush time.
>>
>> Not even close. Which is a pity, because I really wanted to do some testing
>> related to a deep write cache. But it just wouldn't happen.
>>
>> I tried this again on a 16MB cache of a Seagate drive, no difference.
>>
>> Bummer. :)
>
> Try it with laptop drives. You might get to a second, or at least hundreds
> of ms (not counting the spinup delay if it went to sleep, obviously). You
> probably tested desktop drives (that 750GB Hitachi one is not a low end
> one, and I assume the Seagate one isn't either).
>
> You'll have a much easier time getting long latencies when seeks take tens
> of ms, and the platter rotates at some pitiful 3600rpm (ok, I guess those
> drives are hard to find these days - I guess 4200rpm is the norm even for
> 1.8" laptop harddrives).
>
> And also - this is probably obvious to you, but it might not be
> immediately obvious to everybody - make sure that you do have TCQ going,
> and at full depth. If the drive supports TCQ (and they all do, these days)
> it is quite possible that the drive firmware basically limits the write
> caching to one segment per TCQ entry (or at least to something smallish).
..
Oh yes, absolute -- I tried with and without NCQ (the SATA replacement
for old-style TCQ), and with varying NCQ queue depths. No luck keeping
the darned thing busy flushing afterwards for anything more than
perhaps a few hundred millseconds. I wasn't really interested in anything
under a second, so I didn't measure it exactly though.
The older and/or slower notebook drives (4200rpm) tend to have smaller
onboard caches, too. Which makes them difficult to fill.
I suspect I'd have much better "luck" with a slow-ish SSD that has
a largish write cache. Dunno if those exist, and they'll have to get
cheaper before I pick one up to deliberately bash on. :)
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 17:11 ` Ric Wheeler
@ 2009-03-30 17:39 ` Mark Lord
2009-03-30 17:51 ` Linus Torvalds
1 sibling, 0 replies; 419+ messages in thread
From: Mark Lord @ 2009-03-30 17:39 UTC (permalink / raw)
To: Ric Wheeler
Cc: Linus Torvalds, Andreas T.Auer, Alan Cox, Theodore Tso,
Stefan Richter, Jeff Garzik, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Ric Wheeler wrote:
> Linus Torvalds wrote:
..
>> That's one of the issues. The cost of those flushes can be really
>> quite high, and as mentioned, in the absense of redundancy you don't
>> actually get the guarantees that you seem to think that you get.
>
> I have measured the costs of the write flushes on a variety of devices,
> routinely, a cache flush is on the order of 10-20 ms with a healthy
> s-ata drive.
..
Err, no. Yes, the flush itself will be very quick,
since the drive is nearly always keeping up with the I/O
already (as we are discussing in a separate subthread here!).
But.. the cost of that FLUSH_CACHE command can be quite significant.
To issue it, we first have to stop accepting R/W requests,
and then wait for up to 32 of them currently in-flight to complete.
Then issue the cache-flush, and wait for that to complete.
Then resume R/W again.
And FLUSH_CACHE is a PIO command for most libata hosts,
so it has a multi-microsecond CPU hit as well as the I/O hit,
whereas regular R/W commands will usually use less CPU because
they are usually done via an automated host command queue.
Tiny, but significant. And more so on smaller/slower end-user systems
like netbooks than on datacenter servers, perhaps.
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 12:55 ` Chris Mason
@ 2009-03-30 17:42 ` Theodore Tso
2009-03-31 23:55 ` Dave Chinner
1 sibling, 0 replies; 419+ messages in thread
From: Theodore Tso @ 2009-03-30 17:42 UTC (permalink / raw)
To: Chris Mason
Cc: Dave Chinner, Mark Lord, Stefan Richter, Jeff Garzik,
Linus Torvalds, Matthew Garrett, Alan Cox, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Mon, Mar 30, 2009 at 08:55:51AM -0400, Chris Mason wrote:
> Sorry, I'm afraid that rsync falls into the same category as the
> kde/gnome apps here.
>
> There are a lot of backup programs built around rsync, and every one of
> them risks losing the old copy of the file by renaming an unflushed new
> copy over it.
>
> rsync needs the flushing about a million times more than gnome and kde,
> and it doesn't have any option to do it automatically. It does have the
> option to create backups, which is how a percentage of people are using
> it, but I wouldn't call its current setup safe outside of ext3.
I wouldn't make it to be the default, but as an option, if the backup
script would take responsibility for restarting rsync if the server
crashes, and if the rsync process executes a global sync(2) call when
it is complete, an option to make rsync delete the target file before
doing the rename to defeat the replace-via-rename hueristic could be
justifiable.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 17:11 ` Ric Wheeler
2009-03-30 17:39 ` Mark Lord
@ 2009-03-30 17:51 ` Linus Torvalds
2009-03-30 18:15 ` Ric Wheeler
` (2 more replies)
1 sibling, 3 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-30 17:51 UTC (permalink / raw)
To: Ric Wheeler
Cc: Andreas T.Auer, Alan Cox, Theodore Tso, Mark Lord, Stefan Richter,
Jeff Garzik, Matthew Garrett, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Mon, 30 Mar 2009, Ric Wheeler wrote:
> >
> > But turn that around, and say: if you don't have redundant disks, then
> > pretty much by definition those drive flushes won't be guaranteeing your
> > data _anyway_, so why pay the price?
>
> They do in fact provide that promise for the extremely common case of power
> outage and as such, can be used to build reliable storage if you need to.
No they really effectively don't. Not if the end result is "oops, the
whole track is now unreadable" (regardless of whether it happened due to a
write durign power-out or during some entirely unrelated disk error). Your
"flush" didn't result in a stable filesystem at all, it just resulted in a
dead one.
That's my point. Disks simply aren't that reliable. Anything you do with
flushing and ordering won't make them magically not have errors any more.
> Heat is a major killer of spinning drives (as is severe cold). A lot of times,
> drives that have read errors only (not failed writes) might be fully
> recoverable if you can re-write that injured sector.
It's not worked for me, and yes, I've tried. Maybe I've been unlucky, but
every single case I can remember of having read failures, that drive has
been dead. Trying to re-write just the sectors with the error (and around
it) didn't do squat, and rewriting the whole disk didn't work either.
I'm sure it works for some "ok, the write just failed to take, and the CRC
was bad" case, but that's apparently not what I've had. I suspect either
the track markers got overwritten (and maybe a disk-specific low-level
reformat would have helped, but at that point I was not going to trust the
drive anyway, so I didn't care), or there was actual major physical damage
due to heat and/or head crash and remapping was just not able to cope.
> > Sure. And those "write flushes" really only cover a rather small percentage.
> > For many setups, the other corruption issues (drive failure) are not just
> > more common, but generally more disastrous anyway. So why would a person
> > like that worry about the (rare) power failure?
>
> This is simply not a true statement from what I have seen personally.
You yourself said that software errors were your biggest issue. The write
flush wouldn't matter for those (but the elevator barrier would)
> The elevator does not issue write barriers on its own - those write barriers
> are sent down by the file systems for transaction commits.
Right. But "elevator write barrier" vs "sending a drive flush command" are
two totally independent issues. You can do one without the other (although
doing a drive flush command without the write barrier is admittedly kind
of pointless ;^)
And my point is, IT MAKES SENSE to just do the elevator barrier, _without_
the drive command. If you worry much more about software (or non-disk
component) failure than about power failures, you're better off just doing
the software-level synchronization, and leaving the hardware alone.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 16:58 ` Linus Torvalds
2009-03-30 17:29 ` Mark Lord
@ 2009-03-30 17:57 ` Chris Mason
2009-03-30 18:39 ` Mark Lord
2009-03-30 18:54 ` Pasi Kärkkäinen
1 sibling, 2 replies; 419+ messages in thread
From: Chris Mason @ 2009-03-30 17:57 UTC (permalink / raw)
To: Linus Torvalds
Cc: Mark Lord, Ric Wheeler, Andreas T.Auer, Alan Cox, Theodore Tso,
Stefan Richter, Jeff Garzik, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
[-- Attachment #1: Type: text/plain, Size: 3440 bytes --]
On Mon, 2009-03-30 at 09:58 -0700, Linus Torvalds wrote:
>
> On Mon, 30 Mar 2009, Mark Lord wrote:
> >
> > I spent an entire day recently, trying to see if I could significantly fill
> > up the 32MB cache on a 750GB Hitach SATA drive here.
> >
> > With deliberate/random write patterns, big and small, near and far,
> > I could not fill the drive with anything approaching a full second
> > of latent write-cache flush time.
> >
> > Not even close. Which is a pity, because I really wanted to do some testing
> > related to a deep write cache. But it just wouldn't happen.
> >
> > I tried this again on a 16MB cache of a Seagate drive, no difference.
> >
> > Bummer. :)
>
> Try it with laptop drives. You might get to a second, or at least hundreds
> of ms (not counting the spinup delay if it went to sleep, obviously). You
> probably tested desktop drives (that 750GB Hitachi one is not a low end
> one, and I assume the Seagate one isn't either).
I had some fun trying things with this, and I've been able to reliably
trigger stalls in write cache of ~60 seconds on my seagate 500GB sata
drive. The worst I saw was 214 seconds.
It took a little experimentation, and I had to switch to the noop
scheduler (no idea why).
Also, I had to watch vmstat closely. When the test first started,
vmstat was reporting 500kb/s or so write throughput. After the test ran
for a few minutes, vmstat jumped up to 8MB/s.
My guess is that the drive has some internal threshold for when it
decides to only write in cache. The switch to 8MB/s is when it switched
to cache only goodness. Or perhaps the attached program is buggy and
I'll end up looking silly...it was some quick coding.
The test forks two procs. One proc does 4k writes to the first 26MB of
the test file (/dev/sdb for me). These writes are O_DIRECT, and use a
block size of 4k.
The idea is that we fill the cache with work that is very beneficial to
keep in cache, but that the drive will tend to flush out because it is
filling up tracks.
The second proc O_DIRECT writes to two adjacent sectors far away from
the hot writes from the first proc, and it puts in a timestamp from just
before the write. Every second or so, this timestamp is printed to
stderr. The drive will want to keep these two sectors in cache because
we are constantly overwriting them.
(It's worth mentioning this is a destructive test. Running it
on /dev/sdb will overwrite the first 64MB of the drive!!!!)
Sample output:
# ./wb-latency /dev/sdb
Found tv 1238434622.461527
starting hot writes run
starting tester run
current time 1238435045.529751
current time 1238435046.531250
...
current time 1238435063.772456
current time 1238435064.788639
current time 1238435065.814101
current time 1238435066.847704
Right here, I pull the power cord. The box comes back up, and I run:
# ./wb-latency -c /dev/sdb
Found tv 1238435067.347829
When -c is passed, it just reads the timestamp out of the timestamp
block and exits. You compare this value with the value printed just
before you pulled the block.
For the run here, the two values are within .5s of each other. The
tester only prints the time every one second, so anything that close is
very good. I had pulled the plug before the drive got into that fast
8MB/s mode, so the drive was doing a pretty good job of fairly servicing
the cache.
My drive has a cache of 32MB. Smaller caches probably need a smaller
hot zone.
-chris
[-- Attachment #2: wb-latency.c --]
[-- Type: text/x-csrc, Size: 4378 bytes --]
/*
* wb-latency.c
*
* This file may be redistributed under the terms of the GNU Public
* License, version 2.
*/
#define _FILE_OFFSET_BITS 64
#define _XOPEN_SOURCE 600
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <signal.h>
#include <time.h>
#include <fcntl.h>
#include <string.h>
#ifndef O_DIRECT
#define O_DIRECT 040000 /* direct disk access hint */
#endif
static int page_size = 4096;
static float timeval_subtract(struct timeval *tv1, struct timeval *tv2)
{
return ((tv1->tv_sec - tv2->tv_sec) +
((float) (tv1->tv_usec - tv2->tv_usec)) / 1000000);
}
/*
* the magic offset is where we write our timestamps.
* The idea is that we write constantly to the magic offset
* and then pull the power.
* After the OS comes back, we read the timestamp stored and compare
* it with the time stamp printed. Any difference over 1s is time the
* IO spent stalled in cache.
*/
static loff_t magic_offset(loff_t total)
{
loff_t cur = total - ((loff_t)64) * 1024;
cur = cur / page_size;
cur = cur * page_size;
return cur;
}
/*
* this function runs in a loop overwriting two nearby
* sectors. The idea is to create something the
* drive is likely to store in cache and not send down very often.
*
* It writes a timestamp to the sector and to stderr. After
* crashing, compare the output of wb-latency -c with the last
* thing printed on stderr.
*/
static void timestamp_io(int fd, char *buf, loff_t total)
{
loff_t cur = magic_offset(total);
struct timeval tv;
struct timeval print_tv;
int ret;
cur = cur / page_size;
cur = cur * page_size;
printf("starting tester run\n");
gettimeofday(&print_tv, NULL);
while(1) {
gettimeofday(&tv, NULL);
memcpy(buf, &tv, sizeof(tv));
if (timeval_subtract(&tv, &print_tv) >= 1) {
fprintf(stderr, "current time %lu.%lu\n",
tv.tv_sec, tv.tv_usec);
gettimeofday(&print_tv, NULL);
}
ret = pwrite(fd, buf, page_size, cur);
if (ret < page_size) {
fprintf(stderr, "short write ret %d cur %llu\n",
ret, (unsigned long long)cur);
exit(1);
}
ret = pwrite(fd, buf, page_size, cur + page_size * 2);
if (ret < page_size) {
fprintf(stderr, "short write ret %d cur %llu\n",
ret, (unsigned long long)cur);
exit(1);
}
}
}
/*
* just print out the timestamp in our magic sector
*/
static void check_timestamp_io(int fd, char *buf, loff_t total)
{
int ret;
struct timeval tv;
loff_t cur = magic_offset(total);
ret = pread(fd, buf, page_size, cur);
if (ret < page_size) {
perror("read");
exit(1);
}
memcpy(&tv, buf, sizeof(tv));
printf("Found tv %lu.%lu\n", tv.tv_sec, tv.tv_usec);
}
int main(int argc, char **argv)
{
int fd;
struct stat st;
pid_t pid;
int ret;
int i;
int status;
loff_t total_size = 128 * 1024 * 1024;
loff_t hot_size = 26 * 1024 * 1024;
loff_t cur;
char *buf;
char *filename = NULL;
int check_only = 0;
ret = posix_memalign((void *)(&buf), page_size, page_size);
if (ret) {
perror("memalign\n");
exit(1);
}
memset(buf, 0, page_size);
if (argc < 2) {
fprintf(stderr, "usage: wb-latency [-c] file\n");
exit(1);
}
for (i = 1; i < argc; i++) {
if (strcmp(argv[i], "-c") == 0)
check_only = 1;
else
filename = argv[i];
}
fd = open(filename, O_RDWR | O_DIRECT | O_CREAT);
if (fd < 0) {
perror("open");
exit(1);
}
ret = fstat(fd, &st);
if (ret < 0) {
perror("fstat");
exit(1);
}
check_timestamp_io(fd, buf, total_size);
if (check_only)
exit(0);
/* setup the file if we aren't doing a block device */
if (!S_ISBLK(st.st_mode) && st.st_size < total_size) {
printf("setting up file %s\n", filename);
while(cur < total_size) {
ret = write(fd, buf, page_size);
if (ret <= 0) {
fprintf(stderr, "short write\n");
exit(1);
}
cur += ret;
}
printf("done setting up %s\n", filename);
}
pid = fork();
if (pid == 0) {
timestamp_io(fd, buf, total_size);
exit(0);
}
waitpid(pid, &status, WNOHANG);
/*
* here we run the hot IO. This is something the drive isn't
* going to bypass the cache on, but something the drive will
* tend to allow to dominate the cache.
*/
printf("starting hot writes run\n");
cur = 0;
while(1) {
pwrite(fd, buf, page_size, cur);
cur += page_size;
if (cur > hot_size)
cur = 0;
}
return 0;
}
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 17:51 ` Linus Torvalds
@ 2009-03-30 18:15 ` Ric Wheeler
2009-03-30 19:08 ` Eric Sandeen
2009-03-30 19:22 ` Rik van Riel
2 siblings, 0 replies; 419+ messages in thread
From: Ric Wheeler @ 2009-03-30 18:15 UTC (permalink / raw)
To: Linus Torvalds
Cc: Andreas T.Auer, Alan Cox, Theodore Tso, Mark Lord, Stefan Richter,
Jeff Garzik, Matthew Garrett, Andrew Morton, David Rees,
Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
>
> On Mon, 30 Mar 2009, Ric Wheeler wrote:
>>> But turn that around, and say: if you don't have redundant disks, then
>>> pretty much by definition those drive flushes won't be guaranteeing your
>>> data _anyway_, so why pay the price?
>> They do in fact provide that promise for the extremely common case of power
>> outage and as such, can be used to build reliable storage if you need to.
>
> No they really effectively don't. Not if the end result is "oops, the
> whole track is now unreadable" (regardless of whether it happened due to a
> write durign power-out or during some entirely unrelated disk error). Your
> "flush" didn't result in a stable filesystem at all, it just resulted in a
> dead one.
>
> That's my point. Disks simply aren't that reliable. Anything you do with
> flushing and ordering won't make them magically not have errors any more.
They actually are reliable in this way, I have not seen disks fail as you seem
to think that they do after a simple power failure. With barriers (and barrier
flushes enabled), you don't get that kind of bad reads for tracks after a normal
power outage.
Some of the odd cases come from hot spotting of drives (say, rewriting the same
sector over and over again) which can over many, many writes impact the
integrity of the adjacent tracks. Or, you can get IO errors from temporary
vibration (dropped the laptop or rolled a new machine down the data center).
Those temporary errors are the ones that can be repaired.
I don't know how else to convince you (lots of good wine? beer? :-)), but I have
personally looked at this in depth. Certainly, "Trust me, I know disks" is not
really an argument that you have to buy...
>
>> Heat is a major killer of spinning drives (as is severe cold). A lot of times,
>> drives that have read errors only (not failed writes) might be fully
>> recoverable if you can re-write that injured sector.
>
> It's not worked for me, and yes, I've tried. Maybe I've been unlucky, but
> every single case I can remember of having read failures, that drive has
> been dead. Trying to re-write just the sectors with the error (and around
> it) didn't do squat, and rewriting the whole disk didn't work either.
Lap top drives are more likely to fail hard - you might have really just had a
bad head or similar issue.
Mark Lord hacked in support for doing low level writes into hdparm - might be
worth playing with that next time you get a dud disk.
>
> I'm sure it works for some "ok, the write just failed to take, and the CRC
> was bad" case, but that's apparently not what I've had. I suspect either
> the track markers got overwritten (and maybe a disk-specific low-level
> reformat would have helped, but at that point I was not going to trust the
> drive anyway, so I didn't care), or there was actual major physical damage
> due to heat and/or head crash and remapping was just not able to cope.
>
>>> Sure. And those "write flushes" really only cover a rather small percentage.
>>> For many setups, the other corruption issues (drive failure) are not just
>>> more common, but generally more disastrous anyway. So why would a person
>>> like that worry about the (rare) power failure?
>> This is simply not a true statement from what I have seen personally.
>
> You yourself said that software errors were your biggest issue. The write
> flush wouldn't matter for those (but the elevator barrier would)
How you bucket software issues in a hardware company (old job, not here at Red
Hat) would include things like "file system corrupt, but disk hardware good"
which results from improper barrier configuration.
A disk hardware failure would be something like the drive does not spin up, it
has bad memory in the write cache, a broken head (actually, one of the most
common errors). Those usually would result in the drive failing to mount.
>
>> The elevator does not issue write barriers on its own - those write barriers
>> are sent down by the file systems for transaction commits.
>
> Right. But "elevator write barrier" vs "sending a drive flush command" are
> two totally independent issues. You can do one without the other (although
> doing a drive flush command without the write barrier is admittedly kind
> of pointless ;^)
>
> And my point is, IT MAKES SENSE to just do the elevator barrier, _without_
> the drive command. If you worry much more about software (or non-disk
> component) failure than about power failures, you're better off just doing
> the software-level synchronization, and leaving the hardware alone.
>
> Linus
I guess we have to agree to disagree.
File systems need ordering for transactions and recoverability. Doing barriers
just in the elevator will appear to work well for casual users, but in any given
large population (including desktops here), will produce more corrupted file
systems, manual recoveries after power failure, etc.
File systems people can work harder to reduce fsync latency, but getting rid of
these fundamental building blocks is not really a good plan in my opinion. I am
pretty sure that we can get a safe and high performing file system balance here
that will not seem as bad as you have experienced.
Ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 17:57 ` Chris Mason
@ 2009-03-30 18:39 ` Mark Lord
2009-03-30 18:52 ` Chris Mason
2009-03-30 18:54 ` Pasi Kärkkäinen
1 sibling, 1 reply; 419+ messages in thread
From: Mark Lord @ 2009-03-30 18:39 UTC (permalink / raw)
To: Chris Mason
Cc: Linus Torvalds, Ric Wheeler, Andreas T.Auer, Alan Cox,
Theodore Tso, Stefan Richter, Jeff Garzik, Matthew Garrett,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Chris Mason wrote:
>
> I had some fun trying things with this, and I've been able to reliably
> trigger stalls in write cache of ~60 seconds on my seagate 500GB sata
> drive. The worst I saw was 214 seconds.
..
I'd be more interested in how you managed that (above),
than the quite different test you describe below.
Yes, different, I think. The test below just times how long a single
chunk of data might stay in-drive cache under constant load,
rather than how long it takes to flush the drive cache on command.
Right?
Still, useful for other stuff.
> It took a little experimentation, and I had to switch to the noop
> scheduler (no idea why).
>
> Also, I had to watch vmstat closely. When the test first started,
> vmstat was reporting 500kb/s or so write throughput. After the test ran
> for a few minutes, vmstat jumped up to 8MB/s.
>
> My guess is that the drive has some internal threshold for when it
> decides to only write in cache. The switch to 8MB/s is when it switched
> to cache only goodness. Or perhaps the attached program is buggy and
> I'll end up looking silly...it was some quick coding.
>
> The test forks two procs. One proc does 4k writes to the first 26MB of
> the test file (/dev/sdb for me). These writes are O_DIRECT, and use a
> block size of 4k.
>
> The idea is that we fill the cache with work that is very beneficial to
> keep in cache, but that the drive will tend to flush out because it is
> filling up tracks.
>
> The second proc O_DIRECT writes to two adjacent sectors far away from
> the hot writes from the first proc, and it puts in a timestamp from just
> before the write. Every second or so, this timestamp is printed to
> stderr. The drive will want to keep these two sectors in cache because
> we are constantly overwriting them.
>
> (It's worth mentioning this is a destructive test. Running it
> on /dev/sdb will overwrite the first 64MB of the drive!!!!)
>
> Sample output:
>
> # ./wb-latency /dev/sdb
> Found tv 1238434622.461527
> starting hot writes run
> starting tester run
> current time 1238435045.529751
> current time 1238435046.531250
> ...
> current time 1238435063.772456
> current time 1238435064.788639
> current time 1238435065.814101
> current time 1238435066.847704
>
> Right here, I pull the power cord. The box comes back up, and I run:
>
> # ./wb-latency -c /dev/sdb
> Found tv 1238435067.347829
>
> When -c is passed, it just reads the timestamp out of the timestamp
> block and exits. You compare this value with the value printed just
> before you pulled the block.
>
> For the run here, the two values are within .5s of each other. The
> tester only prints the time every one second, so anything that close is
> very good. I had pulled the plug before the drive got into that fast
> 8MB/s mode, so the drive was doing a pretty good job of fairly servicing
> the cache.
>
> My drive has a cache of 32MB. Smaller caches probably need a smaller
> hot zone.
>
> -chris
>
>
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 18:39 ` Mark Lord
@ 2009-03-30 18:52 ` Chris Mason
2009-03-30 20:19 ` Mark Lord
0 siblings, 1 reply; 419+ messages in thread
From: Chris Mason @ 2009-03-30 18:52 UTC (permalink / raw)
To: Mark Lord
Cc: Linus Torvalds, Ric Wheeler, Andreas T.Auer, Alan Cox,
Theodore Tso, Stefan Richter, Jeff Garzik, Matthew Garrett,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Mon, 2009-03-30 at 14:39 -0400, Mark Lord wrote:
> Chris Mason wrote:
> >
> > I had some fun trying things with this, and I've been able to reliably
> > trigger stalls in write cache of ~60 seconds on my seagate 500GB sata
> > drive. The worst I saw was 214 seconds.
> ..
>
> I'd be more interested in how you managed that (above),
> than the quite different test you describe below.
>
> Yes, different, I think. The test below just times how long a single
> chunk of data might stay in-drive cache under constant load,
> rather than how long it takes to flush the drive cache on command.
>
> Right?
>
> Still, useful for other stuff.
>
That's right, it is testing for starvation in a single sector, not for
how long the cache flush actually takes. But, your remark from higher
up in the thread was this:
>
> Anything in the drive's write cache very probably made
> it to the media within a second or two of arriving there.
>
Sorry if I misread things. But the goal is just to show that it really
does matter if we use a writeback cache with or without barriers. The
test has two datasets:
1) An area that is constantly overwritten sequentially
2) A single sector that stores a critical bit of data.
#1 is the filesystem log, #2 is the filesystem super. This isn't a
specialized workload ;)
-chris
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 17:57 ` Chris Mason
2009-03-30 18:39 ` Mark Lord
@ 2009-03-30 18:54 ` Pasi Kärkkäinen
1 sibling, 0 replies; 419+ messages in thread
From: Pasi Kärkkäinen @ 2009-03-30 18:54 UTC (permalink / raw)
To: Chris Mason
Cc: Linus Torvalds, Mark Lord, Ric Wheeler, Andreas T.Auer, Alan Cox,
Theodore Tso, Stefan Richter, Jeff Garzik, Matthew Garrett,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Mon, Mar 30, 2009 at 01:57:12PM -0400, Chris Mason wrote:
> On Mon, 2009-03-30 at 09:58 -0700, Linus Torvalds wrote:
> >
> > On Mon, 30 Mar 2009, Mark Lord wrote:
> > >
> > > I spent an entire day recently, trying to see if I could significantly fill
> > > up the 32MB cache on a 750GB Hitach SATA drive here.
> > >
> > > With deliberate/random write patterns, big and small, near and far,
> > > I could not fill the drive with anything approaching a full second
> > > of latent write-cache flush time.
> > >
> > > Not even close. Which is a pity, because I really wanted to do some testing
> > > related to a deep write cache. But it just wouldn't happen.
> > >
> > > I tried this again on a 16MB cache of a Seagate drive, no difference.
> > >
> > > Bummer. :)
> >
> > Try it with laptop drives. You might get to a second, or at least hundreds
> > of ms (not counting the spinup delay if it went to sleep, obviously). You
> > probably tested desktop drives (that 750GB Hitachi one is not a low end
> > one, and I assume the Seagate one isn't either).
>
> I had some fun trying things with this, and I've been able to reliably
> trigger stalls in write cache of ~60 seconds on my seagate 500GB sata
> drive. The worst I saw was 214 seconds.
>
> It took a little experimentation, and I had to switch to the noop
> scheduler (no idea why).
>
I remember cfq having a bug (or a feature?) that prevents queue depths
deeper than 1.. so with noop you get more ios to the queue.
-- Pasi
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 7:13 ` Andreas T.Auer
2009-03-30 9:05 ` Alan Cox
@ 2009-03-30 19:02 ` Bill Davidsen
2009-04-01 1:19 ` david
1 sibling, 1 reply; 419+ messages in thread
From: Bill Davidsen @ 2009-03-30 19:02 UTC (permalink / raw)
To: linux-kernel
Andreas T.Auer wrote:
> On 30.03.2009 02:39 Theodore Tso wrote:
>> All I can do is apologize to all other filesystem developers profusely
>> for ext3's data=ordered semantics; at this point, I very much regret
>> that we made data=ordered the default for ext3. But the application
>> writers vastly outnumber us, and realistically we're not going to be
>> able to easily roll back eight years of application writers being
>> trained that fsync() is not necessary, and actually is detrimental for
>> ext3.
> And still I don't know any reason, why it makes sense to write the
> metadata to non-existing data immediately instead of delaying that, too.
>
Here I have the same question, I don't expect or demand that anything be done in
a particular order unless I force it so, and I expect there to be some corner
case where the data is written and the metadata doesn't reflect that in the
event of a failure, but I can't see that it ever a good idea to have the
metadata reflect the future and describe what things will look like if
everything goes as planned. I have had enough of that BS from financial planners
and politicians, metadata shouldn't try to predict the future just to save a ms
here or there. It's also necessary to have the metadata match reality after
fsync(), of course, or even the well behaved applications mentioned in this
thread haven't a hope of staying consistent.
Feel free to clarify why clairvoyant metadata is ever a good thing...
--
Bill Davidsen <davidsen@tmr.com>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 17:51 ` Linus Torvalds
2009-03-30 18:15 ` Ric Wheeler
@ 2009-03-30 19:08 ` Eric Sandeen
2009-03-30 19:22 ` Rik van Riel
2 siblings, 0 replies; 419+ messages in thread
From: Eric Sandeen @ 2009-03-30 19:08 UTC (permalink / raw)
To: Linus Torvalds
Cc: Ric Wheeler, Andreas T.Auer, Alan Cox, Theodore Tso, Mark Lord,
Stefan Richter, Jeff Garzik, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
>
> On Mon, 30 Mar 2009, Ric Wheeler wrote:
>>> But turn that around, and say: if you don't have redundant disks, then
>>> pretty much by definition those drive flushes won't be guaranteeing your
>>> data _anyway_, so why pay the price?
>> They do in fact provide that promise for the extremely common case of power
>> outage and as such, can be used to build reliable storage if you need to.
>
> No they really effectively don't. Not if the end result is "oops, the
> whole track is now unreadable" (regardless of whether it happened due to a
> write durign power-out or during some entirely unrelated disk error). Your
> "flush" didn't result in a stable filesystem at all, it just resulted in a
> dead one.
>
> That's my point. Disks simply aren't that reliable. Anything you do with
> flushing and ordering won't make them magically not have errors any more.
But this is apples and oranges isn't it?
All of the effort that goes into metadata journalling in ext3, ext4,
xfs, reiserfs, jfs ... is to save us from the fsck time on restart, and
ensure a consistent filesystem framework (metadata, that is, in
general), after an unclean shutdown. That could be due to a system
crash or a power outage. This is much more common in my personal
experience than a drive failure.
That journalling requires ordering guarantees, and with large drive
write caches, and no ordering, it's not hard for it to go south to the
point where things *do* get corrupted when you lose power or the drive
resets in the middle of basically random write cache destaging. See
Chris Mason's tests from a year or so ago, proving that ext3 is quite
vulnerable to this - it likely explains some of the random htree
corruption that occasionally gets reported to us.
And yes, sometimes drives die, and then you are really screwed, but
that's orthogonal to all of the above, I think.
-Eric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 17:51 ` Linus Torvalds
2009-03-30 18:15 ` Ric Wheeler
2009-03-30 19:08 ` Eric Sandeen
@ 2009-03-30 19:22 ` Rik van Riel
2009-03-30 19:41 ` Jeff Garzik
` (3 more replies)
2 siblings, 4 replies; 419+ messages in thread
From: Rik van Riel @ 2009-03-30 19:22 UTC (permalink / raw)
To: Linus Torvalds
Cc: Ric Wheeler, Andreas T.Auer, Alan Cox, Theodore Tso, Mark Lord,
Stefan Richter, Jeff Garzik, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
> On Mon, 30 Mar 2009, Ric Wheeler wrote:
>> Heat is a major killer of spinning drives (as is severe cold). A lot of times,
>> drives that have read errors only (not failed writes) might be fully
>> recoverable if you can re-write that injured sector.
>
> It's not worked for me, and yes, I've tried.
It's worked here. It would be nice to have a device mapper module
that can just insert itself between the disk and the higher device
mapper layer and "scrub" the disk, fetching unreadable sectors from
the other RAID copy where required.
> I'm sure it works for some "ok, the write just failed to take, and the CRC
> was bad" case, but that's apparently not what I've had. I suspect either
> the track markers got overwritten (and maybe a disk-specific low-level
> reformat would have helped, but at that point I was not going to trust the
> drive anyway, so I didn't care), or there was actual major physical damage
> due to heat and/or head crash and remapping was just not able to cope.
Maybe a stupid question, but aren't tracks so small compared to
the disk head that a physical head crash would take out multiple
tracks at once? (the last on I experienced here took out a major
part of the disk)
Another case I have seen years ago was me writing data to a disk
while it was still cold (I brought it home, plugged it in and
started using it). Once the drive came up to temperature, it
could no longer read the tracks it just wrote - maybe the disk
expanded by more than it is willing to seek around for tracks
due to thermal correction? Low level formatting the drive
made it work perfectly and I kept using it until it was just
too small to be useful :)
> And my point is, IT MAKES SENSE to just do the elevator barrier, _without_
> the drive command.
No argument there. I have seen NCQ starvation on SATA disks,
with some requests sitting in the drive for seconds, while
the drive was busy handling hundreds of requests/second
elsewhere...
--
All rights reversed.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-29 21:07 ` Linus Torvalds
@ 2009-03-30 19:37 ` Jeremy Fitzhardinge
0 siblings, 0 replies; 419+ messages in thread
From: Jeremy Fitzhardinge @ 2009-03-30 19:37 UTC (permalink / raw)
To: Linus Torvalds
Cc: Alan Cox, Xavier Bestel, Matthew Garrett, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Linus Torvalds wrote:
> This particular problem really largely boils down to "average memory
> capacity has expanded a _lot_ more than harddisk speeds have gone up".
>
Yes, but clearly lawyers are better at fixing this kind of problem.
J
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 19:22 ` Rik van Riel
@ 2009-03-30 19:41 ` Jeff Garzik
2009-03-30 20:21 ` Michael Tokarev
2009-03-30 20:05 ` Linus Torvalds
` (2 subsequent siblings)
3 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-03-30 19:41 UTC (permalink / raw)
To: Rik van Riel
Cc: Linus Torvalds, Ric Wheeler, Andreas T.Auer, Alan Cox,
Theodore Tso, Mark Lord, Stefan Richter, Matthew Garrett,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Rik van Riel wrote:
> Linus Torvalds wrote:
>> And my point is, IT MAKES SENSE to just do the elevator barrier,
>> _without_ the drive command.
>
> No argument there. I have seen NCQ starvation on SATA disks,
> with some requests sitting in the drive for seconds, while
> the drive was busy handling hundreds of requests/second
> elsewhere...
If certain requests are hanging out in the drive's wbcache longer than
others, that increases the probability that OS filesystem-required,
elevator-provided ordering becomes skewed once requests are passed to
drive firmware.
The sad, sucky fact is that NCQ starvation implies FLUSH CACHE is more
important than ever, if filesystems want to get ordering correct.
IDEALLY, according to the SATA protocol spec, we could issue up to 32
NCQ commands to a SATA drive, each marked with the "FUA" bit to force
the command to hit permanent media before returning.
In theory, this NCQ+FUA mode gives the drive maximum ability to optimize
parallel in-progress commands, decoupling command completion and command
issue -- while also giving the OS complete control of ordering by virtue
of emptying the SATA tagged command queue.
In practice, NCQ+FUA flat out did not work on early drives, and
performance was way under what you would expect for parallel write-thru
command execution. I haven't benchmarked NCQ+FUA in a few years; it
might be worth revisiting.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 19:22 ` Rik van Riel
2009-03-30 19:41 ` Jeff Garzik
@ 2009-03-30 20:05 ` Linus Torvalds
2009-03-31 9:27 ` Neil Brown
2009-03-31 21:13 ` Alan Cox
3 siblings, 0 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-30 20:05 UTC (permalink / raw)
To: Rik van Riel
Cc: Ric Wheeler, Andreas T.Auer, Alan Cox, Theodore Tso, Mark Lord,
Stefan Richter, Jeff Garzik, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Mon, 30 Mar 2009, Rik van Riel wrote:
>
> Maybe a stupid question, but aren't tracks so small compared to
> the disk head that a physical head crash would take out multiple
> tracks at once? (the last on I experienced here took out a major
> part of the disk)
Probably. My experiences (not _that_ many drives, but more than one) have
certainly been that I've never seen a _single_ read error.
> Another case I have seen years ago was me writing data to a disk
> while it was still cold (I brought it home, plugged it in and
> started using it). Once the drive came up to temperature, it
> could no longer read the tracks it just wrote - maybe the disk
> expanded by more than it is willing to seek around for tracks
> due to thermal correction? Low level formatting the drive
> made it work perfectly and I kept using it until it was just
> too small to be useful :)
I've had one drive that just stopped spinning. On power-on, it would make
these pitiful noises trying to get the platters to move, but not actually
ever work. If I recall correctly, I got the data off it by letting it just
cool down, then powering up (successfully) and transferring all the data
I cared about off the disk. And then replacing the disk.
> > And my point is, IT MAKES SENSE to just do the elevator barrier, _without_
> > the drive command.
>
> No argument there. I have seen NCQ starvation on SATA disks,
> with some requests sitting in the drive for seconds, while
> the drive was busy handling hundreds of requests/second
> elsewhere...
I _thought_ we stopped feeding new requests while the flush was active, so
if you actually do a flush, that should never actually happen. But I
didn't check.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 18:52 ` Chris Mason
@ 2009-03-30 20:19 ` Mark Lord
0 siblings, 0 replies; 419+ messages in thread
From: Mark Lord @ 2009-03-30 20:19 UTC (permalink / raw)
To: Chris Mason
Cc: Linus Torvalds, Ric Wheeler, Andreas T.Auer, Alan Cox,
Theodore Tso, Stefan Richter, Jeff Garzik, Matthew Garrett,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Chris Mason wrote:
> On Mon, 2009-03-30 at 14:39 -0400, Mark Lord wrote:
>> Chris Mason wrote:
>>> I had some fun trying things with this, and I've been able to reliably
>>> trigger stalls in write cache of ~60 seconds on my seagate 500GB sata
>>> drive. The worst I saw was 214 seconds.
>> ..
>>
>> I'd be more interested in how you managed that (above),
>> than the quite different test you describe below.
>>
>> Yes, different, I think. The test below just times how long a single
>> chunk of data might stay in-drive cache under constant load,
>> rather than how long it takes to flush the drive cache on command.
>>
>> Right?
>>
>> Still, useful for other stuff.
>>
>
> That's right, it is testing for starvation in a single sector, not for
> how long the cache flush actually takes. But, your remark from higher
> up in the thread was this:
>
> >
> > Anything in the drive's write cache very probably made
> > it to the media within a second or two of arriving there.
..
Yeah, but that was in the context of how long the drive takes
to clear out it's cache when there's a (brief) break in the action.
Still, it's really good to see hard data on a drive that actually
starves itself for an extended period. Very handy insight, that!
> Sorry if I misread things. But the goal is just to show that it really
> does matter if we use a writeback cache with or without barriers. The
> test has two datasets:
>
> 1) An area that is constantly overwritten sequentially
> 2) A single sector that stores a critical bit of data.
>
> #1 is the filesystem log, #2 is the filesystem super. This isn't a
> specialized workload ;)
..
Good points.
I'm thinking of perhaps acquiring an OCZ Vertex SSD.
The 120GB ones apparently have 64MB of RAM inside,
much of which is used to cache data heading to the flash.
I wonder how long it takes to empty out that sucker!
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 19:41 ` Jeff Garzik
@ 2009-03-30 20:21 ` Michael Tokarev
2009-03-30 20:26 ` Mark Lord
2009-03-30 20:34 ` Jeff Garzik
0 siblings, 2 replies; 419+ messages in thread
From: Michael Tokarev @ 2009-03-30 20:21 UTC (permalink / raw)
To: Jeff Garzik
Cc: Rik van Riel, Linus Torvalds, Ric Wheeler, Andreas T.Auer,
Alan Cox, Theodore Tso, Mark Lord, Stefan Richter,
Matthew Garrett, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Jeff Garzik wrote:
[]
> IDEALLY, according to the SATA protocol spec, we could issue up to 32
> NCQ commands to a SATA drive, each marked with the "FUA" bit to force
> the command to hit permanent media before returning.
>
> In theory, this NCQ+FUA mode gives the drive maximum ability to optimize
> parallel in-progress commands, decoupling command completion and command
> issue -- while also giving the OS complete control of ordering by virtue
> of emptying the SATA tagged command queue.
>
> In practice, NCQ+FUA flat out did not work on early drives, and
> performance was way under what you would expect for parallel write-thru
> command execution. I haven't benchmarked NCQ+FUA in a few years; it
> might be worth revisiting.
But are there drives out there that actually supports FUA?
The only cases I've seen dmesg DIFFERENT from something like
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled,
doesn't support DPO or FUA
^^^^^^^^^^^^^^^^^^^^^^^^^^
is with SOME SCSI drives. Even most modern SAS drives I've seen
reports lack of support for DPO or FUA. Or at least kernel
reports that.
In the SATA world, I've seen no single case. Seagate (7200.9..7200.11,
Barracuda ES and ES2), WD (Caviar CE, Caviar Black, Caviar Green,
RE2 GP), Hitachi DeskStar and UltraStar (old and new), some others --
all the same, no DPO or FUA.
/mjt
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 20:21 ` Michael Tokarev
@ 2009-03-30 20:26 ` Mark Lord
2009-03-30 20:29 ` Mark Lord
2009-03-30 20:35 ` Jeff Garzik
2009-03-30 20:34 ` Jeff Garzik
1 sibling, 2 replies; 419+ messages in thread
From: Mark Lord @ 2009-03-30 20:26 UTC (permalink / raw)
To: Michael Tokarev
Cc: Jeff Garzik, Rik van Riel, Linus Torvalds, Ric Wheeler,
Andreas T.Auer, Alan Cox, Theodore Tso, Stefan Richter,
Matthew Garrett, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Michael Tokarev wrote:
>
> But are there drives out there that actually supports FUA?
..
Most (or all?) current model Hitachi Deskstar drives have it.
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 20:26 ` Mark Lord
@ 2009-03-30 20:29 ` Mark Lord
2009-03-30 20:35 ` Jeff Garzik
1 sibling, 0 replies; 419+ messages in thread
From: Mark Lord @ 2009-03-30 20:29 UTC (permalink / raw)
To: Michael Tokarev
Cc: Jeff Garzik, Rik van Riel, Linus Torvalds, Ric Wheeler,
Andreas T.Auer, Alan Cox, Theodore Tso, Stefan Richter,
Matthew Garrett, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Mark Lord wrote:
> Michael Tokarev wrote:
>>
>> But are there drives out there that actually supports FUA?
> ..
>
> Most (or all?) current model Hitachi Deskstar drives have it.
..
Mmmm.. so does my notebook's WD 250GB drive.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 20:21 ` Michael Tokarev
2009-03-30 20:26 ` Mark Lord
@ 2009-03-30 20:34 ` Jeff Garzik
1 sibling, 0 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-03-30 20:34 UTC (permalink / raw)
To: Michael Tokarev
Cc: Rik van Riel, Linus Torvalds, Ric Wheeler, Andreas T.Auer,
Alan Cox, Theodore Tso, Mark Lord, Stefan Richter,
Matthew Garrett, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Michael Tokarev wrote:
> In the SATA world, I've seen no single case. Seagate (7200.9..7200.11,
> Barracuda ES and ES2), WD (Caviar CE, Caviar Black, Caviar Green,
> RE2 GP), Hitachi DeskStar and UltraStar (old and new), some others --
> all the same, no DPO or FUA.
If your drive supports NCQ, it is highly likely it supports FUA.
By default, the libata driver _pretends_ your drive does not support FUA.
grep the kernel source for libata_fua and check out the module parameter
'fua'
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 20:26 ` Mark Lord
2009-03-30 20:29 ` Mark Lord
@ 2009-03-30 20:35 ` Jeff Garzik
2009-03-30 20:40 ` Mark Lord
1 sibling, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-03-30 20:35 UTC (permalink / raw)
To: Mark Lord
Cc: Michael Tokarev, Rik van Riel, Linus Torvalds, Ric Wheeler,
Andreas T.Auer, Alan Cox, Theodore Tso, Stefan Richter,
Matthew Garrett, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Mark Lord wrote:
> Michael Tokarev wrote:
>>
>> But are there drives out there that actually supports FUA?
> ..
>
> Most (or all?) current model Hitachi Deskstar drives have it.
Depends on your source of information: if you judge from probe
messages, libata_fua==0 will imply !FUA-support.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 20:35 ` Jeff Garzik
@ 2009-03-30 20:40 ` Mark Lord
0 siblings, 0 replies; 419+ messages in thread
From: Mark Lord @ 2009-03-30 20:40 UTC (permalink / raw)
To: Michael Tokarev
Cc: Jeff Garzik, Rik van Riel, Linus Torvalds, Ric Wheeler,
Andreas T.Auer, Alan Cox, Theodore Tso, Stefan Richter,
Matthew Garrett, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Jeff Garzik wrote:
> Mark Lord wrote:
>> Michael Tokarev wrote:
>>>
>>> But are there drives out there that actually supports FUA?
>> ..
>>
>> Most (or all?) current model Hitachi Deskstar drives have it.
>
> Depends on your source of information: if you judge from probe
> messages, libata_fua==0 will imply !FUA-support.
..
As your other post points out, lots of drives already support FUA,
but libata deliberately disables it by default (due to the performance
impact, similar to mounting a f/s with -osync).
For the curious, you can use this command to see if your hardware has FUA:
hdparm -I /dev/sd? | grep FUA
It will show lines like this for the drives that support it:
* WRITE_{DMA|MULTIPLE}_FUA_EXT
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 16:02 ` Linus Torvalds
2009-03-28 7:50 ` Mike Galbraith
@ 2009-03-30 22:00 ` Hans-Peter Jansen
2009-03-30 22:07 ` Arjan van de Ven
2009-04-02 19:01 ` Andreas T.Auer
1 sibling, 2 replies; 419+ messages in thread
From: Hans-Peter Jansen @ 2009-03-30 22:00 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Mike Galbraith, Geert Uytterhoeven, linux-kernel, arjan
Am Freitag, 27. März 2009 schrieb Linus Torvalds:
> In other words, the main Makefile version is totally useless in
> non-linear development, and is meaningful _only_ at specific release
> times. In between releases, it's essentially a random thing, since
> non-linear development means that versioning simply fundamentally isn't
> some simple monotonic numbering. And this is exactly when
> CONFIG_LOCALVERSION_AUTO is a huge deal.
Well, you guys always see things from a deeply involved kernel developer
_using git_ POV - which I do understand and accept (unlike hats nobody can
change his head after all ;-), but there are other approaches to kernel
source code, e.g. git is also really great for tracking the kernel
development without any further involvement apart from using the resulting
trees.
I build kernel rpms from your git tree, and have a bunch of BUILDs lying
around. Sure, I can always fetch the tarballs or fiddle with git, but why?
Having a Makefile start commit allows to make sure with simplest tools,
say "head Makefile" that a locally copied 2.6.29 tree is really a 2.6.29,
and not something moving towards the next release. That's all, nothing
less, nothing more, it's just a strong hint which blend is in the box.
I always wonder, why Arjan does not intervene for his kerneloops.org
project, since your approach opens a window of uncertainty during the merge
window when simply using git as an efficient fetch tool.
Ducks and hides now,
Pete
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 22:00 ` Hans-Peter Jansen
@ 2009-03-30 22:07 ` Arjan van de Ven
2009-03-30 10:18 ` Pavel Machek
` (3 more replies)
2009-04-02 19:01 ` Andreas T.Auer
1 sibling, 4 replies; 419+ messages in thread
From: Arjan van de Ven @ 2009-03-30 22:07 UTC (permalink / raw)
To: Hans-Peter Jansen
Cc: Linus Torvalds, Mike Galbraith, Geert Uytterhoeven, linux-kernel
Hans-Peter Jansen wrote:
> Am Freitag, 27. März 2009 schrieb Linus Torvalds:
>
> I always wonder, why Arjan does not intervene for his kerneloops.org
> project, since your approach opens a window of uncertainty during the merge
> window when simply using git as an efficient fetch tool.
I would *love* it if Linus would, as first commit mark his tree as "-git0"
(as per snapshots) or "-rc0". So that I can split the "final" versus
"merge window" oopses.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 19:22 ` Rik van Riel
2009-03-30 19:41 ` Jeff Garzik
2009-03-30 20:05 ` Linus Torvalds
@ 2009-03-31 9:27 ` Neil Brown
2009-03-31 21:13 ` Alan Cox
3 siblings, 0 replies; 419+ messages in thread
From: Neil Brown @ 2009-03-31 9:27 UTC (permalink / raw)
To: Rik van Riel
Cc: Linus Torvalds, Ric Wheeler, Andreas T.Auer, Alan Cox,
Theodore Tso, Mark Lord, Stefan Richter, Jeff Garzik,
Matthew Garrett, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Monday March 30, riel@redhat.com wrote:
> Linus Torvalds wrote:
> > On Mon, 30 Mar 2009, Ric Wheeler wrote:
>
> >> Heat is a major killer of spinning drives (as is severe cold). A lot of times,
> >> drives that have read errors only (not failed writes) might be fully
> >> recoverable if you can re-write that injured sector.
> >
> > It's not worked for me, and yes, I've tried.
>
> It's worked here. It would be nice to have a device mapper module
> that can just insert itself between the disk and the higher device
> mapper layer and "scrub" the disk, fetching unreadable sectors from
> the other RAID copy where required.
You want to start using 'md' :-)
With raid0,1,4,5,6,10, if it gets a read error, it find the data from
elsewhere and tries to over-write the read error and then read back.
If that all works, then it assume the drive is still good.
This happens during normal IO and all when you 'scrub' the array which
e.g. Debian does on the first Sunday of the month by default.
NeilBrown
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 19:32 ` Theodore Tso
2009-03-27 20:11 ` Andreas T.Auer
@ 2009-03-31 9:58 ` Neil Brown
1 sibling, 0 replies; 419+ messages in thread
From: Neil Brown @ 2009-03-31 9:58 UTC (permalink / raw)
To: Theodore Tso
Cc: Alan Cox, Linus Torvalds, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Friday March 27, tytso@mit.edu wrote:
> On Fri, Mar 27, 2009 at 07:14:26PM +0000, Alan Cox wrote:
> > > Agreed, we need a middle ground. We need a transition path that
> > > recognizes that ext3 won't be the dominant filesystem for Linux in
> > > perpetuity, and that ext3's data=ordered semantics will someday no
> > > longer be a major factor in application design. fbarrier() semantics
> > > might be one approach; there may be others. It's something we need to
> > > figure out.
> >
> > Would making close imply fbarrier() rather than fsync() work for this ?
> > That would give people the ordering they want even if they are less
> > careful but wouldn't give the media error cases - which are less
> > interesting.
>
> The thought that I had was to create a new system call, fbarrier()
> which has the semantics that it will request the filesystem to make
> sure that (at least) changes that have been made data blocks to date
> should be forced out to disk when the next metadata operation is
> committed.
I'm curious about the exact semantics that you are suggesting.
Do you mean that
1/ any data block in any file will be forced out before any metadata
for any file? or
2/ any data block for 'this' file will be forced out before any
metadata for any file? or
3/ any data block for 'this' file will be forced out before any
metadata for this file?
I assume the contents of directories are metadata. If 3 is that case
do we included the metadata of any directories known to contain this
file? Recursively?
I think that if we do introduce new semantics, they should be as weak
as possibly while still achieving the goal, so that fs designers have
as much freedom as possible. It should also be as expressive as
possible so that we don't find we want to extend it later.
What would you think of:
fcntl(fd, F_BEFORE, fd2)
with the semantics that it sets up a transaction dependency between fd
and fd2 and more particularly the operations requested through each
fd.
So if 'fd' is a file, and 'fd2' is the directory holding that file,
then
fcntl(fd, F_BEFORE, fd2)
write(fd, stuff)
renameat(fd2, 'file', fd2, 'newname')
would ensure that the writes to the file were visible on storage
before the rename.
You could also do
fd1 = open("afile", O_RDWR);
fd2 = open("afile", O_RDWR);
fcntl(fd1, F_BEFORE, fd2);
then use write(fd1) to write journal updates to one part of the
(database) file, and write(fd2) to write in-place updates,
and it would just "do the right thing". (You might want to call
fcntl(fd2, F_BEFORE, fd1) as well ... I haven't quite thought through
the details of that yet).
If you gave AT_FDCWD as the fd2 in the fcntl, then operations on fd1
would be ordered before any namespace operations which did not specify a
particular directory, which would be fairly close to option 2 above.
A minimal implementation could fsync fd1 before allowing any operation
on fd2. A more sophisticated implementation could record set up
dependencies in internal data structures and start writeout of the fd1
changes without actually waiting for them to complete.
Just a thought....
NeilBrown
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 22:07 ` Arjan van de Ven
2009-03-30 10:18 ` Pavel Machek
@ 2009-03-31 13:33 ` Rafael J. Wysocki
2009-03-31 15:30 ` Hans-Peter Jansen
2009-03-31 19:37 ` Jeff Garzik
3 siblings, 0 replies; 419+ messages in thread
From: Rafael J. Wysocki @ 2009-03-31 13:33 UTC (permalink / raw)
To: Arjan van de Ven
Cc: Hans-Peter Jansen, Linus Torvalds, Mike Galbraith,
Geert Uytterhoeven, linux-kernel
On Tuesday 31 March 2009, Arjan van de Ven wrote:
> Hans-Peter Jansen wrote:
> > Am Freitag, 27. März 2009 schrieb Linus Torvalds:
> >
> > I always wonder, why Arjan does not intervene for his kerneloops.org
> > project, since your approach opens a window of uncertainty during the merge
> > window when simply using git as an efficient fetch tool.
>
> I would *love* it if Linus would, as first commit mark his tree as "-git0"
> (as per snapshots) or "-rc0". So that I can split the "final" versus
> "merge window" oopses.
FWIW, that would also be useful for tracking regressions.
Thanks,
Rafael
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 1:19 ` Jeff Garzik
` (2 preceding siblings ...)
2009-03-29 0:33 ` david
@ 2009-03-31 15:01 ` Thierry Vignaud
3 siblings, 0 replies; 419+ messages in thread
From: Thierry Vignaud @ 2009-03-31 15:01 UTC (permalink / raw)
To: Jeff Garzik
Cc: Linus Torvalds, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
Jeff Garzik <jeff@garzik.org> writes:
> > Of course, your browsing history database is an excellent example of
> > something you should _not_ care about that much, and where
> > performance is a lot more important than "ooh, if the machine goes
> > down suddenly, I need to be 100% up-to-date". Using fsync on that
> > thing was just stupid, even
>
> If you are doing a ton of web-based work with a bunch of tabs or
> windows open, you really like the post-crash restoration methods that
> Firefox now employs. Some users actually do want to
> checkpoint/restore their web work, regardless of whether it was the
> browser, the window system or the OS that crashed.
This is all about tradeoff.
I guess everybody can afford loosing the last 30 seconds of history (or
5mn ...).
That's not that much of lost work...
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 22:07 ` Arjan van de Ven
2009-03-30 10:18 ` Pavel Machek
2009-03-31 13:33 ` Rafael J. Wysocki
@ 2009-03-31 15:30 ` Hans-Peter Jansen
2009-03-31 19:37 ` Jeff Garzik
3 siblings, 0 replies; 419+ messages in thread
From: Hans-Peter Jansen @ 2009-03-31 15:30 UTC (permalink / raw)
To: Arjan van de Ven, gitster
Cc: Linus Torvalds, Mike Galbraith, Geert Uytterhoeven, linux-kernel
Am Dienstag, 31. März 2009 schrieb Arjan van de Ven:
> Hans-Peter Jansen wrote:
> >
> > I always wonder, why Arjan does not intervene for his kerneloops.org
> > project, since your approach opens a window of uncertainty during the
> > merge window when simply using git as an efficient fetch tool.
>
> I would *love* it if Linus would, as first commit mark his tree as
> "-git0" (as per snapshots) or "-rc0". So that I can split the "final"
> versus "merge window" oopses.
..which is an important difference. I still vote for -pre for "preparation
state" as -git0 does imply some sort of versioning, which *is* meaningless
in this state.
Linus, this would be a small step for you, but makes a big difference for
those of us, that miss it sorely.
Junio: is it possible to automate this in git somehow: make sure, that the
first commit after a release really happens for a "new" version (e.g. a
version patch to Makefile)?
Pete
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-29 1:18 ` Jeff Garzik
@ 2009-03-31 18:45 ` Jörn Engel
0 siblings, 0 replies; 419+ messages in thread
From: Jörn Engel @ 2009-03-31 18:45 UTC (permalink / raw)
To: Jeff Garzik
Cc: Mark Lord, Stefan Richter, Linus Torvalds, Matthew Garrett,
Alan Cox, Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Sat, 28 March 2009 21:18:51 -0400, Jeff Garzik wrote:
>
> Was the BSD soft-updates idea of FS data-before-metadata a good one?
> Yes. Obviously.
>
> It is the cornerstone of every SANE journalling-esque database or
> filesystem out there -- don't leave a window where your metadata is
> inconsistent. "Duh" :)
Your idea of 'consistent' seems a bit fuzzy. Soft updates, afaiu, leave
plenty of windows and reasons to run fsck. They only guarantee that all
those windows result in lost space - data allocations without any
references. It certainly prevents the worst problems, but I would use a
different word for it. :)
Jörn
--
Don't worry about people stealing your ideas. If your ideas are any good,
you'll have to ram them down people's throats.
-- Howard Aiken quoted by Ken Iverson quoted by Jim Horning quoted by
Raph Levien, 1979
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 22:07 ` Arjan van de Ven
` (2 preceding siblings ...)
2009-03-31 15:30 ` Hans-Peter Jansen
@ 2009-03-31 19:37 ` Jeff Garzik
2009-03-31 19:47 ` Arjan van de Ven
3 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-03-31 19:37 UTC (permalink / raw)
To: Arjan van de Ven
Cc: Hans-Peter Jansen, Linus Torvalds, Mike Galbraith,
Geert Uytterhoeven, linux-kernel
Arjan van de Ven wrote:
> Hans-Peter Jansen wrote:
>> Am Freitag, 27. März 2009 schrieb Linus Torvalds:
>>
>> I always wonder, why Arjan does not intervene for his kerneloops.org
>> project, since your approach opens a window of uncertainty during the
>> merge window when simply using git as an efficient fetch tool.
>
> I would *love* it if Linus would, as first commit mark his tree as "-git0"
> (as per snapshots) or "-rc0". So that I can split the "final" versus
> "merge window" oopses.
Can't you discern that from the v$VERSION tag? According to your
definition, -git0 would simply be v2.6.29 commit + 1, correct?
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-31 19:37 ` Jeff Garzik
@ 2009-03-31 19:47 ` Arjan van de Ven
0 siblings, 0 replies; 419+ messages in thread
From: Arjan van de Ven @ 2009-03-31 19:47 UTC (permalink / raw)
To: Jeff Garzik
Cc: Hans-Peter Jansen, Linus Torvalds, Mike Galbraith,
Geert Uytterhoeven, linux-kernel
Jeff Garzik wrote:
> Arjan van de Ven wrote:
>> Hans-Peter Jansen wrote:
>>> Am Freitag, 27. März 2009 schrieb Linus Torvalds:
>>>
>>> I always wonder, why Arjan does not intervene for his kerneloops.org
>>> project, since your approach opens a window of uncertainty during the
>>> merge window when simply using git as an efficient fetch tool.
>>
>> I would *love* it if Linus would, as first commit mark his tree as
>> "-git0"
>> (as per snapshots) or "-rc0". So that I can split the "final" versus
>> "merge window" oopses.
>
> Can't you discern that from the v$VERSION tag? According to your
> definition, -git0 would simply be v2.6.29 commit + 1, correct?
it needs to be something that is shown in the oops output...
... basically version or extraversion in the Makefile.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-25 19:43 ` Jens Axboe
2009-03-25 19:49 ` Ric Wheeler
2009-03-25 20:25 ` Jeff Garzik
@ 2009-03-31 20:49 ` Jeff Garzik
2009-03-31 22:02 ` Ric Wheeler
2 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-03-31 20:49 UTC (permalink / raw)
To: Jens Axboe
Cc: Linus Torvalds, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Jens Axboe wrote:
> Another problem is that FLUSH_CACHE sucks. Really. And not just on
> ext3/ordered, generally. Write a 50 byte file, fsync, flush cache and
> wit for the world to finish. Pretty hard to teach people to use a nicer
> fdatasync(), when the majority of the cost now becomes flushing the
> cache of that 1TB drive you happen to have 8 partitions on. Good luck
> with that.
(responding to an email way back near the start of the thread)
I emailed Microsoft about their proposal to add a WRITE BARRIER command
to ATA, documented at
http://www.t13.org/Documents/UploadedDocuments/docs2007/e07174r0-Write_Barrier_Command_Proposal.doc
The MSFT engineer said they were definitely still pursuing this proposal.
IMO we could look at this too, or perhaps come up with an alternate
proposal like FLUSH CACHE RANGE(s).
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 16:34 ` Linus Torvalds
2009-03-30 17:11 ` Ric Wheeler
@ 2009-03-31 21:10 ` Alan Cox
2009-03-31 21:55 ` Linus Torvalds
1 sibling, 1 reply; 419+ messages in thread
From: Alan Cox @ 2009-03-31 21:10 UTC (permalink / raw)
To: Linus Torvalds
Cc: Ric Wheeler, Andreas T.Auer, Theodore Tso, Mark Lord,
Stefan Richter, Jeff Garzik, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
> percentage. For many setups, the other corruption issues (drive failure)
> are not just more common, but generally more disastrous anyway. So why
> would a person like that worry about the (rare) power failure?
How about the far more regular crash case ? We may be pretty reliable but
we are hardly indestructible especially on random boxes with funky BIOSes
or low grade hardware builds.
For the generic sane low end server/high end desktop build with at least
two drive software RAID the hardware failure for data loss case is
pretty rare. Crashes yes, having to reboot to recover from a RAID failure
sure but data loss far less so
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 19:22 ` Rik van Riel
` (2 preceding siblings ...)
2009-03-31 9:27 ` Neil Brown
@ 2009-03-31 21:13 ` Alan Cox
3 siblings, 0 replies; 419+ messages in thread
From: Alan Cox @ 2009-03-31 21:13 UTC (permalink / raw)
To: Rik van Riel
Cc: Linus Torvalds, Ric Wheeler, Andreas T.Auer, Theodore Tso,
Mark Lord, Stefan Richter, Jeff Garzik, Matthew Garrett,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
> No argument there. I have seen NCQ starvation on SATA disks,
> with some requests sitting in the drive for seconds, while
> the drive was busy handling hundreds of requests/second
> elsewhere...
The really sad thing about that one is that the SCSI vendors had this
problem over ten years ago with TCQ - and fixed it in the drives.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-28 21:55 ` Bojan Smojver
@ 2009-03-31 21:51 ` Jeremy Fitzhardinge
2009-03-31 22:30 ` Bojan Smojver
0 siblings, 1 reply; 419+ messages in thread
From: Jeremy Fitzhardinge @ 2009-03-31 21:51 UTC (permalink / raw)
To: Bojan Smojver; +Cc: linux-kernel
Bojan Smojver wrote:
> Bojan Smojver <bojan <at> rexursive.com> writes:
>
>
>> That was stupid. Ignore me.
>>
>
> And yet, FreeBSD seems to have a command just like that:
>
> http://www.freebsd.org/cgi/man.cgi?query=fsync&sektion=1&manpath=FreeBSD+7.1-RELEASE
>
I was thinking something like "munge_important_stuff | fsync > output" -
ie, cat which fsyncs on close. In fact, its vaguely surprising that GNU
cat doesn't have this already.
J
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-31 21:10 ` Alan Cox
@ 2009-03-31 21:55 ` Linus Torvalds
0 siblings, 0 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-03-31 21:55 UTC (permalink / raw)
To: Alan Cox
Cc: Ric Wheeler, Andreas T.Auer, Theodore Tso, Mark Lord,
Stefan Richter, Jeff Garzik, Matthew Garrett, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Tue, 31 Mar 2009, Alan Cox wrote:
>
> How about the far more regular crash case ? We may be pretty reliable but
> we are hardly indestructible especially on random boxes with funky BIOSes
> or low grade hardware builds.
The regular crash case doesn't need to care about the disk write-cache AT
ALL. The disk will finish the writes on its own long after the kernel
crashed.
That was my _point_. The write cache on the disk is generally a whole lot
safer than the OS data cache. If there's a catastrophic software failure
(outside of the disk firmware itself ;), then the OS data cache is gone.
But the disk write cache will be written back.
Of course, if you have an automatic and immediate "power-off-on-oops",
you're screwed, but if so, you have bigger problems anyway. You need to
wait at _least_ a second or two before you power off.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-31 20:49 ` Jeff Garzik
@ 2009-03-31 22:02 ` Ric Wheeler
2009-03-31 22:22 ` Jeff Garzik
0 siblings, 1 reply; 419+ messages in thread
From: Ric Wheeler @ 2009-03-31 22:02 UTC (permalink / raw)
To: Jeff Garzik
Cc: Jens Axboe, Linus Torvalds, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Peter Zijlstra, Nick Piggin,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Jeff Garzik wrote:
> Jens Axboe wrote:
>> Another problem is that FLUSH_CACHE sucks. Really. And not just on
>> ext3/ordered, generally. Write a 50 byte file, fsync, flush cache and
>> wit for the world to finish. Pretty hard to teach people to use a nicer
>> fdatasync(), when the majority of the cost now becomes flushing the
>> cache of that 1TB drive you happen to have 8 partitions on. Good luck
>> with that.
>
> (responding to an email way back near the start of the thread)
>
> I emailed Microsoft about their proposal to add a WRITE BARRIER
> command to ATA, documented at
> http://www.t13.org/Documents/UploadedDocuments/docs2007/e07174r0-Write_Barrier_Command_Proposal.doc
>
>
> The MSFT engineer said they were definitely still pursuing this proposal.
>
> IMO we could look at this too, or perhaps come up with an alternate
> proposal like FLUSH CACHE RANGE(s).
>
> Jeff
>
I agree that it is worth getting better mechanisms in place - the cache
flush is really primitive. Now we just need a victim to sit in on
T13/T10 standards meetings :-)
ric
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-31 22:02 ` Ric Wheeler
@ 2009-03-31 22:22 ` Jeff Garzik
2009-04-01 18:34 ` Mark Lord
0 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-03-31 22:22 UTC (permalink / raw)
To: Ric Wheeler
Cc: Jens Axboe, Linus Torvalds, Theodore Tso, Ingo Molnar, Alan Cox,
Arjan van de Ven, Andrew Morton, Mark Lord,
Linux Kernel Mailing List, Linux IDE mailing list
Ric Wheeler wrote:
> Jeff Garzik wrote:
>> Jens Axboe wrote:
>>> Another problem is that FLUSH_CACHE sucks. Really. And not just on
>>> ext3/ordered, generally. Write a 50 byte file, fsync, flush cache and
>>> wit for the world to finish. Pretty hard to teach people to use a nicer
>>> fdatasync(), when the majority of the cost now becomes flushing the
>>> cache of that 1TB drive you happen to have 8 partitions on. Good luck
>>> with that.
>>
>> (responding to an email way back near the start of the thread)
>>
>> I emailed Microsoft about their proposal to add a WRITE BARRIER
>> command to ATA, documented at
>> http://www.t13.org/Documents/UploadedDocuments/docs2007/e07174r0-Write_Barrier_Command_Proposal.doc
>> The MSFT engineer said they were definitely still pursuing this proposal.
>>
>> IMO we could look at this too, or perhaps come up with an alternate
>> proposal like FLUSH CACHE RANGE(s).
> I agree that it is worth getting better mechanisms in place - the cache
> flush is really primitive. Now we just need a victim to sit in on
> T13/T10 standards meetings :-)
Heck, we could even do a prototype implementation with the help of Mark
Lord's sata_mv target mode support...
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-31 21:51 ` Jeremy Fitzhardinge
@ 2009-03-31 22:30 ` Bojan Smojver
2009-04-01 5:26 ` Bojan Smojver
0 siblings, 1 reply; 419+ messages in thread
From: Bojan Smojver @ 2009-03-31 22:30 UTC (permalink / raw)
To: Jeremy Fitzhardinge; +Cc: linux-kernel
On Tue, 2009-03-31 at 14:51 -0700, Jeremy Fitzhardinge wrote:
> I was thinking something like "munge_important_stuff | fsync > output"
> - ie, cat which fsyncs on close.
Yeah, after I wrote my initial comment, I noticed you were saying
essentially the same thing in your original post. I know, I should
_read_ before posting. Sorry :-(
> In fact, its vaguely surprising that GNU cat doesn't have this
> already.
I have no idea why we don't have that either. FreeBSD code seems really
straightforward.
--
Bojan
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 12:55 ` Chris Mason
2009-03-30 17:42 ` Theodore Tso
@ 2009-03-31 23:55 ` Dave Chinner
2009-04-01 12:53 ` Chris Mason
1 sibling, 1 reply; 419+ messages in thread
From: Dave Chinner @ 2009-03-31 23:55 UTC (permalink / raw)
To: Chris Mason
Cc: Mark Lord, Stefan Richter, Jeff Garzik, Linus Torvalds,
Matthew Garrett, Alan Cox, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Mon, Mar 30, 2009 at 08:55:51AM -0400, Chris Mason wrote:
> On Mon, 2009-03-30 at 10:14 +1100, Dave Chinner wrote:
> > On Sat, Mar 28, 2009 at 11:17:08AM -0400, Mark Lord wrote:
> > > The better solution seems to be the rather obvious one:
> > >
> > > the filesystem should commit data to disk before altering metadata.
> >
> > Generalities are bad. For example:
> >
> > write();
> > unlink();
> > <do more stuff>
> > close();
> >
> > This is a clear case where you want metadata changed before data is
> > committed to disk. In many cases, you don't even want the data to
> > hit the disk here.
> >
> > Similarly, rsync does the magic open,write,close,rename sequence
> > without an fsync before the rename. And it doesn't need the fsync,
> > either. The proposed implicit fsync on rename will kill rsync
> > performance, and I think that may make many people unhappy....
> >
>
> Sorry, I'm afraid that rsync falls into the same category as the
> kde/gnome apps here.
I disagree.
> There are a lot of backup programs built around rsync, and every one of
> them risks losing the old copy of the file by renaming an unflushed new
> copy over it.
If you crash while rsync is running, then the state of the copy
is garbage anyway. You have to restart from scratch and rsync will
detect such failures and resync the file. gnome/kde have no
mechanism for such recovery.
> rsync needs the flushing about a million times more than gnome and kde,
> and it doesn't have any option to do it automatically.
And therein lies the problem with a "flush-before-rename"
semantic....
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 19:02 ` Bill Davidsen
@ 2009-04-01 1:19 ` david
2009-04-01 16:24 ` Bill Davidsen
0 siblings, 1 reply; 419+ messages in thread
From: david @ 2009-04-01 1:19 UTC (permalink / raw)
To: Bill Davidsen; +Cc: linux-kernel
On Mon, 30 Mar 2009, Bill Davidsen wrote:
> Andreas T.Auer wrote:
>> On 30.03.2009 02:39 Theodore Tso wrote:
>>> All I can do is apologize to all other filesystem developers profusely
>>> for ext3's data=ordered semantics; at this point, I very much regret
>>> that we made data=ordered the default for ext3. But the application
>>> writers vastly outnumber us, and realistically we're not going to be
>>> able to easily roll back eight years of application writers being
>>> trained that fsync() is not necessary, and actually is detrimental for
>>> ext3.
>
>> And still I don't know any reason, why it makes sense to write the
>> metadata to non-existing data immediately instead of delaying that, too.
>>
> Here I have the same question, I don't expect or demand that anything be done
> in a particular order unless I force it so, and I expect there to be some
> corner case where the data is written and the metadata doesn't reflect that
> in the event of a failure, but I can't see that it ever a good idea to have
> the metadata reflect the future and describe what things will look like if
> everything goes as planned. I have had enough of that BS from financial
> planners and politicians, metadata shouldn't try to predict the future just
> to save a ms here or there. It's also necessary to have the metadata match
> reality after fsync(), of course, or even the well behaved applications
> mentioned in this thread haven't a hope of staying consistent.
>
> Feel free to clarify why clairvoyant metadata is ever a good thing...
it's not that it's deliberatly pushing metadata out ahead of file data,
but say you have the following sequence
write to file1
update metadata for file1
write to file2
update metadata for file2
if file1 and file2 are in the same directory your software can finish all
four of these steps before _any_ of the data gets pushed to disk.
then when the system goes to write the metadata for file1 it is pushing
the then-current copy of that sector to disk, which includes the metadata
for file2, even though the data for file2 hasn't been written yet.
if you try to say 'flush all data blocks before metadata blocks' and have
a lot of activity going on in a directory, and have to wait until it all
stops before you write any of the metadata out, you could be blocked from
writing the metadata for a _long_ time.
Also, if somone does a fsync on any of those files you can end up waiting
a long time for all that other data to get written out (especially if the
files are still being modified while you are trying to do the fsync). As I
understand it, this is the fundamental cause of the slow fsync calls on
ext3 with data=ordered.
David Lang
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-31 22:30 ` Bojan Smojver
@ 2009-04-01 5:26 ` Bojan Smojver
2009-04-01 6:35 ` Jeremy Fitzhardinge
0 siblings, 1 reply; 419+ messages in thread
From: Bojan Smojver @ 2009-04-01 5:26 UTC (permalink / raw)
To: Jeremy Fitzhardinge; +Cc: linux-kernel
On Wed, 2009-04-01 at 09:30 +1100, Bojan Smojver wrote:
> I have no idea why we don't have that either. FreeBSD code seems
> really straightforward.
I just tried using dd with conv=fsync option and that kinda does what
you mentioned. I see this at the end of strace:
---------------------------------
write(1, "<some data...>"..., 512) = 512
read(0, ""..., 512) = 0
fsync(1) = 0
close(0) = 0
close(1) = 0
---------------------------------
So, maybe GNU folks just don't want to have yet another tool for this.
--
Bojan
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 5:26 ` Bojan Smojver
@ 2009-04-01 6:35 ` Jeremy Fitzhardinge
0 siblings, 0 replies; 419+ messages in thread
From: Jeremy Fitzhardinge @ 2009-04-01 6:35 UTC (permalink / raw)
To: Bojan Smojver; +Cc: linux-kernel
Bojan Smojver wrote:
> On Wed, 2009-04-01 at 09:30 +1100, Bojan Smojver wrote:
>
>> I have no idea why we don't have that either. FreeBSD code seems
>> really straightforward.
>>
>
> I just tried using dd with conv=fsync option and that kinda does what
> you mentioned. I see this at the end of strace:
> ---------------------------------
> write(1, "<some data...>"..., 512) = 512
> read(0, ""..., 512) = 0
> fsync(1) = 0
> close(0) = 0
> close(1) = 0
> ---------------------------------
>
> So, maybe GNU folks just don't want to have yet another tool for this.
>
Huh, didn't know dd had grown that. Confusingly similar to the
completely different conv=sync, so its a perfect dd addition. Ooh,
fdatasync too.
J
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-31 23:55 ` Dave Chinner
@ 2009-04-01 12:53 ` Chris Mason
2009-04-01 15:41 ` Andreas T.Auer
0 siblings, 1 reply; 419+ messages in thread
From: Chris Mason @ 2009-04-01 12:53 UTC (permalink / raw)
To: Dave Chinner
Cc: Mark Lord, Stefan Richter, Jeff Garzik, Linus Torvalds,
Matthew Garrett, Alan Cox, Theodore Tso, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
On Wed, 2009-04-01 at 10:55 +1100, Dave Chinner wrote:
> On Mon, Mar 30, 2009 at 08:55:51AM -0400, Chris Mason wrote:
> > On Mon, 2009-03-30 at 10:14 +1100, Dave Chinner wrote:
> > > On Sat, Mar 28, 2009 at 11:17:08AM -0400, Mark Lord wrote:
> > > > The better solution seems to be the rather obvious one:
> > > >
> > > > the filesystem should commit data to disk before altering metadata.
> > >
> > > Generalities are bad. For example:
> > >
> > > write();
> > > unlink();
> > > <do more stuff>
> > > close();
> > >
> > > This is a clear case where you want metadata changed before data is
> > > committed to disk. In many cases, you don't even want the data to
> > > hit the disk here.
> > >
> > > Similarly, rsync does the magic open,write,close,rename sequence
> > > without an fsync before the rename. And it doesn't need the fsync,
> > > either. The proposed implicit fsync on rename will kill rsync
> > > performance, and I think that may make many people unhappy....
> > >
> >
> > Sorry, I'm afraid that rsync falls into the same category as the
> > kde/gnome apps here.
>
> I disagree.
>
> > There are a lot of backup programs built around rsync, and every one of
> > them risks losing the old copy of the file by renaming an unflushed new
> > copy over it.
>
> If you crash while rsync is running, then the state of the copy
> is garbage anyway. You have to restart from scratch and rsync will
> detect such failures and resync the file. gnome/kde have no
> mechanism for such recovery.
>
If this were the recovery system they had in mind, then why use rename
at all? They could just as easily overwrite the original in place.
Using rename implies they want to replace the old with a complete new
version.
There's also the window where you crash after the rsync is done but
before all the new data safely makes it into the replacement files.
> > rsync needs the flushing about a million times more than gnome and kde,
> > and it doesn't have any option to do it automatically.
>
> And therein lies the problem with a "flush-before-rename"
> semantic....
Here I was just talking about a rsync --flush-after-rename or something,
not an option from the kernel.
-chris
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 12:53 ` Chris Mason
@ 2009-04-01 15:41 ` Andreas T.Auer
2009-04-01 16:02 ` Chris Mason
0 siblings, 1 reply; 419+ messages in thread
From: Andreas T.Auer @ 2009-04-01 15:41 UTC (permalink / raw)
To: Chris Mason
Cc: Dave Chinner, Mark Lord, Stefan Richter, Jeff Garzik,
Linus Torvalds, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On 01.04.2009 14:53 Chris Mason wrote:
> On Wed, 2009-04-01 at 10:55 +1100, Dave Chinner wrote:
>
>> If you crash while rsync is running, then the state of the copy
>> is garbage anyway. You have to restart from scratch and rsync will
>> detect such failures and resync the file. gnome/kde have no
>> mechanism for such recovery.
>>
> If this were the recovery system they had in mind, then why use rename
> at all? They could just as easily overwrite the original in place.
>
It is not a recovery system. The renaming procedure is almost atomic
with e.g. reiser or ext3 (ordered), but simple overwriting would always
leave a window between truncating and the complete rewrite of the file.
> Using rename implies they want to replace the old with a complete new
> version.
>
> There's also the window where you crash after the rsync is done but
> before all the new data safely makes it into the replacement files.
>
Sure, but in that case you have only lost some of your _mirrored_ data.
The original will usually be untouched by this. So after the restart you
just start the mirroring process again, and hopefully, this time you get
a perfect copy.
In KDE and lots of other apps the _original_ config files (and not any
copies) are "overlinked" with the new files by the rename. That's the
difference.
Andreas
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 15:41 ` Andreas T.Auer
@ 2009-04-01 16:02 ` Chris Mason
2009-04-01 18:37 ` Andreas T.Auer
2009-04-01 21:50 ` Theodore Tso
0 siblings, 2 replies; 419+ messages in thread
From: Chris Mason @ 2009-04-01 16:02 UTC (permalink / raw)
To: Andreas T.Auer
Cc: Dave Chinner, Mark Lord, Stefan Richter, Jeff Garzik,
Linus Torvalds, Matthew Garrett, Alan Cox, Theodore Tso,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, 2009-04-01 at 17:41 +0200, Andreas T.Auer wrote:
>
> On 01.04.2009 14:53 Chris Mason wrote:
> > On Wed, 2009-04-01 at 10:55 +1100, Dave Chinner wrote:
> >
> >> If you crash while rsync is running, then the state of the copy
> >> is garbage anyway. You have to restart from scratch and rsync will
> >> detect such failures and resync the file. gnome/kde have no
> >> mechanism for such recovery.
> >>
> > If this were the recovery system they had in mind, then why use rename
> > at all? They could just as easily overwrite the original in place.
> >
>
> It is not a recovery system. The renaming procedure is almost atomic
> with e.g. reiser or ext3 (ordered), but simple overwriting would always
> leave a window between truncating and the complete rewrite of the file.
>
Well, we're considering a future where ext3 and reiser are no longer
used, and applications are responsible for the flushing if they want
renames atomic for data as well as metadata.
In this case, rename without additional flush and truncate are the same.
> > Using rename implies they want to replace the old with a complete new
> > version.
> >
> > There's also the window where you crash after the rsync is done but
> > before all the new data safely makes it into the replacement files.
> >
>
> Sure, but in that case you have only lost some of your _mirrored_ data.
> The original will usually be untouched by this. So after the restart you
> just start the mirroring process again, and hopefully, this time you get
> a perfect copy.
>
If we crash during the rsync, the backup logs will yell. If we crash
just after the rsync, the backup logs won't know. The data could still
be gone.
> In KDE and lots of other apps the _original_ config files (and not any
> copies) are "overlinked" with the new files by the rename. That's the
> difference.
We don't run backup programs because we can use the original as a backup
for the backup ;) From an rsync-for-backup point of view, the backup is
the only copy.
Yes, rsync could easily be fixed. Or maybe people just aren't worried,
its hard to say. Having the ext3 style flush with the rename makes the
system easier to use, and easier to predict how it will react.
rsync was originally brought up when someone asked about applications
that do renames and don't care about atomic data replacement. If the
flushing is a horrible thing, there must be a lot more examples?
-chris
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 1:19 ` david
@ 2009-04-01 16:24 ` Bill Davidsen
2009-04-01 20:15 ` david
0 siblings, 1 reply; 419+ messages in thread
From: Bill Davidsen @ 2009-04-01 16:24 UTC (permalink / raw)
To: david; +Cc: linux-kernel
david@lang.hm wrote:
> On Mon, 30 Mar 2009, Bill Davidsen wrote:
>
>> Andreas T.Auer wrote:
>>> On 30.03.2009 02:39 Theodore Tso wrote:
>>>> All I can do is apologize to all other filesystem developers profusely
>>>> for ext3's data=ordered semantics; at this point, I very much regret
>>>> that we made data=ordered the default for ext3. But the application
>>>> writers vastly outnumber us, and realistically we're not going to be
>>>> able to easily roll back eight years of application writers being
>>>> trained that fsync() is not necessary, and actually is detrimental for
>>>> ext3.
>>
>>> And still I don't know any reason, why it makes sense to write the
>>> metadata to non-existing data immediately instead of delaying that,
>>> too.
>>>
>> Here I have the same question, I don't expect or demand that anything
>> be done in a particular order unless I force it so, and I expect
>> there to be some corner case where the data is written and the
>> metadata doesn't reflect that in the event of a failure, but I can't
>> see that it ever a good idea to have the metadata reflect the future
>> and describe what things will look like if everything goes as
>> planned. I have had enough of that BS from financial planners and
>> politicians, metadata shouldn't try to predict the future just to
>> save a ms here or there. It's also necessary to have the metadata
>> match reality after fsync(), of course, or even the well behaved
>> applications mentioned in this thread haven't a hope of staying
>> consistent.
>>
>> Feel free to clarify why clairvoyant metadata is ever a good thing...
>
> it's not that it's deliberatly pushing metadata out ahead of file
> data, but say you have the following sequence
>
> write to file1
> update metadata for file1
> write to file2
> update metadata for file2
>
Understood that it's not deliberate just careless. The two behaviors
which are reported are (a) updating a record in an existing file and
having the entire file content vanish, and (b) finding some one else's
old data in my file - a serious security issue. I haven't seen any
report of the case where a process unlinks or truncates a file, the disk
space gets reused, and then the systems fails before the metadata is
updated, leaving the data written by some other process in the file
where it can be read - another possible security issue.
> if file1 and file2 are in the same directory your software can finish
> all four of these steps before _any_ of the data gets pushed to disk.
>
> then when the system goes to write the metadata for file1 it is
> pushing the then-current copy of that sector to disk, which includes
> the metadata for file2, even though the data for file2 hasn't been
> written yet.
>
> if you try to say 'flush all data blocks before metadata blocks' and
> have a lot of activity going on in a directory, and have to wait until
> it all stops before you write any of the metadata out, you could be
> blocked from writing the metadata for a _long_ time.
>
If you mean "write all data for that file" before the metadata, it would
seem to behave the way an fsync would, and the metadata should go out in
some reasonable time.
> Also, if somone does a fsync on any of those files you can end up
> waiting a long time for all that other data to get written out
> (especially if the files are still being modified while you are trying
> to do the fsync). As I understand it, this is the fundamental cause of
> the slow fsync calls on ext3 with data=ordered.
Your analysis sounds right to me,
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
"You are disgraced professional losers. And by the way, give us our money back."
- Representative Earl Pomeroy, Democrat of North Dakota
on the A.I.G. executives who were paid bonuses after a federal bailout.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-31 22:22 ` Jeff Garzik
@ 2009-04-01 18:34 ` Mark Lord
0 siblings, 0 replies; 419+ messages in thread
From: Mark Lord @ 2009-04-01 18:34 UTC (permalink / raw)
To: Jeff Garzik
Cc: Ric Wheeler, Jens Axboe, Linus Torvalds, Theodore Tso,
Ingo Molnar, Alan Cox, Arjan van de Ven, Andrew Morton, Mark Lord,
Linux Kernel Mailing List, Linux IDE mailing list
Jeff Garzik wrote:
> Ric Wheeler wrote:
>> Jeff Garzik wrote:
>>> Jens Axboe wrote:
..
>>> IMO we could look at this too, or perhaps come up with an alternate
>>> proposal like FLUSH CACHE RANGE(s).
>
>> I agree that it is worth getting better mechanisms in place - the
>> cache flush is really primitive. Now we just need a victim to sit in
>> on T13/T10 standards meetings :-)
>
>
> Heck, we could even do a prototype implementation with the help of Mark
> Lord's sata_mv target mode support...
..
Speaking of which.. you probably won't see the preliminary rev
of sata_mv + target_mode until sometime this weekend.
It's going to be something quite simple for 2.6.30,
and we can expand on that in later kernels.
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 16:02 ` Chris Mason
@ 2009-04-01 18:37 ` Andreas T.Auer
2009-04-01 21:50 ` Theodore Tso
1 sibling, 0 replies; 419+ messages in thread
From: Andreas T.Auer @ 2009-04-01 18:37 UTC (permalink / raw)
To: Chris Mason
Cc: Andreas T.Auer, Dave Chinner, Mark Lord, Stefan Richter,
Jeff Garzik, Linus Torvalds, Matthew Garrett, Alan Cox,
Theodore Tso, Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On 01.04.2009 18:02 Chris Mason wrote:
> On Wed, 2009-04-01 at 17:41 +0200, Andreas T.Auer wrote:
>
>> On 01.04.2009 14:53 Chris Mason wrote:
>>
>> It is not a recovery system. The renaming procedure is almost atomic
>> with e.g. reiser or ext3 (ordered), but simple overwriting would always
>> leave a window between truncating and the complete rewrite of the file.
>>
>>
>
> Well, we're considering a future where ext3 and reiser are no longer
> used, and applications are responsible for the flushing if they want
> renames atomic for data as well as metadata.
>
As long as you only consider it, all will be fine ;-). As a user I don't
want to use a filesystem which leaves a long gap between renaming the
metadata and writing the data for it, that is having dirty, inconsistent
metadata overwriting clean metadata. So Ted's quick pragmatic approach
to patch it in the first step was good, even if it's possible that it's
not be the final solution.
Flushing in applications is not a suitable solution. Maybe barriers
could be a solution, but to get something like this into _all_ the
multitude of applications is very unlikely.
There might be filesystems which use a delayed, but ordered mode. They
could provide "atomic" renames, and perform much better, if applications
do not flush with every file update.
Andreas
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 16:24 ` Bill Davidsen
@ 2009-04-01 20:15 ` david
2009-04-01 21:33 ` Andreas T.Auer
2009-04-01 22:00 ` Harald Arnesen
0 siblings, 2 replies; 419+ messages in thread
From: david @ 2009-04-01 20:15 UTC (permalink / raw)
To: Bill Davidsen; +Cc: linux-kernel
On Wed, 1 Apr 2009, Bill Davidsen wrote:
> david@lang.hm wrote:
>> On Mon, 30 Mar 2009, Bill Davidsen wrote:
>>
>>> Andreas T.Auer wrote:
>>>> On 30.03.2009 02:39 Theodore Tso wrote:
>>>>> All I can do is apologize to all other filesystem developers profusely
>>>>> for ext3's data=ordered semantics; at this point, I very much regret
>>>>> that we made data=ordered the default for ext3. But the application
>>>>> writers vastly outnumber us, and realistically we're not going to be
>>>>> able to easily roll back eight years of application writers being
>>>>> trained that fsync() is not necessary, and actually is detrimental for
>>>>> ext3.
>>>
>>>> And still I don't know any reason, why it makes sense to write the
>>>> metadata to non-existing data immediately instead of delaying that, too.
>>>>
>>> Here I have the same question, I don't expect or demand that anything be
>>> done in a particular order unless I force it so, and I expect there to be
>>> some corner case where the data is written and the metadata doesn't
>>> reflect that in the event of a failure, but I can't see that it ever a
>>> good idea to have the metadata reflect the future and describe what things
>>> will look like if everything goes as planned. I have had enough of that BS
>>> from financial planners and politicians, metadata shouldn't try to predict
>>> the future just to save a ms here or there. It's also necessary to have
>>> the metadata match reality after fsync(), of course, or even the well
>>> behaved applications mentioned in this thread haven't a hope of staying
>>> consistent.
>>>
>>> Feel free to clarify why clairvoyant metadata is ever a good thing...
>>
>> it's not that it's deliberatly pushing metadata out ahead of file data, but
>> say you have the following sequence
>>
>> write to file1
>> update metadata for file1
>> write to file2
>> update metadata for file2
>>
> Understood that it's not deliberate just careless. The two behaviors which
> are reported are (a) updating a record in an existing file and having the
> entire file content vanish, and (b) finding some one else's old data in my
> file - a serious security issue. I haven't seen any report of the case where
> a process unlinks or truncates a file, the disk space gets reused, and then
> the systems fails before the metadata is updated, leaving the data written by
> some other process in the file where it can be read - another possible
> security issue.
ext3 eliminates this security issue by writing the data before the
metadata. ext4 (and I thing XFS) eliminate this security issue by not
allocating the blocks until it goes to write the data out. I don't know
how other filesystems deal with this.
>> if file1 and file2 are in the same directory your software can finish all
>> four of these steps before _any_ of the data gets pushed to disk.
>>
>> then when the system goes to write the metadata for file1 it is pushing the
>> then-current copy of that sector to disk, which includes the metadata for
>> file2, even though the data for file2 hasn't been written yet.
>>
>> if you try to say 'flush all data blocks before metadata blocks' and have a
>> lot of activity going on in a directory, and have to wait until it all
>> stops before you write any of the metadata out, you could be blocked from
>> writing the metadata for a _long_ time.
>>
> If you mean "write all data for that file" before the metadata, it would seem
> to behave the way an fsync would, and the metadata should go out in some
> reasonable time.
except if another file in the directory gets modified while it's writing
out the first two, that file now would need to get written out as well,
before the metadata for that directory can be written. if you have a busy
system (say a database or log server), where files are getting modified
pretty constantly, it can be a long time before all the file data is
written out and the system is idle enough to write the metadata.
David Lang
>> Also, if somone does a fsync on any of those files you can end up waiting a
>> long time for all that other data to get written out (especially if the
>> files are still being modified while you are trying to do the fsync). As I
>> understand it, this is the fundamental cause of the slow fsync calls on
>> ext3 with data=ordered.
>
> Your analysis sounds right to me,
>
>
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 1:25 ` Andrew Morton
` (3 preceding siblings ...)
2009-03-28 5:06 ` Ingo Molnar
@ 2009-04-01 21:03 ` Lennart Sorensen
2009-04-01 21:36 ` Andrew Morton
` (2 more replies)
4 siblings, 3 replies; 419+ messages in thread
From: Lennart Sorensen @ 2009-04-01 21:03 UTC (permalink / raw)
To: Andrew Morton
Cc: Linus Torvalds, Theodore Tso, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Thu, Mar 26, 2009 at 06:25:19PM -0700, Andrew Morton wrote:
> The JBD journal is a massive designed-in contention point. It's why
> for several years I've been telling anyone who will listen that we need
> a new fs. Hopefully our response to all these problems will soon be
> "did you try btrfs?".
Oh I look forward to the day when it will be safe to convert my mythtv
box from ext3 to btrfs. Current kernels just have too much IO latency
with ext3 it seems. Older kernels were more responsive, but probably
had other places they were less efficient.
--
Len Sorensen
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 20:15 ` david
@ 2009-04-01 21:33 ` Andreas T.Auer
2009-04-01 22:29 ` david
2009-04-01 22:00 ` Harald Arnesen
1 sibling, 1 reply; 419+ messages in thread
From: Andreas T.Auer @ 2009-04-01 21:33 UTC (permalink / raw)
To: david; +Cc: Bill Davidsen, linux-kernel
On 01.04.2009 22:15 david@lang.hm wrote:
> On Wed, 1 Apr 2009, Bill Davidsen wrote:
>
>> david@lang.hm wrote:
>>> it's not that it's deliberatly pushing metadata out ahead of file
>>> data, but say you have the following sequence
>>>
>>> write to file1
>>> update metadata for file1
>>> write to file2
>>> update metadata for file2
>>>
>>> if file1 and file2 are in the same directory your software can
>>> finish all four of these steps before _any_ of the data gets pushed
>>> to disk.
>>>
>>> then when the system goes to write the metadata for file1 it is
>>> pushing the then-current copy of that sector to disk, which includes
>>> the metadata for file2, even though the data for file2 hasn't been
>>> written yet.
>>>
>>> if you try to say 'flush all data blocks before metadata blocks' and
>>> have a lot of activity going on in a directory, and have to wait
>>> until it all stops before you write any of the metadata out, you
>>> could be blocked from writing the metadata for a _long_ time.
>>>
>> If you mean "write all data for that file" before the metadata, it
>> would seem to behave the way an fsync would, and the metadata should
>> go out in some reasonable time.
>
> except if another file in the directory gets modified while it's
> writing out the first two, that file now would need to get written out
> as well, before the metadata for that directory can be written. if you
> have a busy system (say a database or log server), where files are
> getting modified pretty constantly, it can be a long time before all
> the file data is written out and the system is idle enough to write
> the metadata.
Thank you, David, for this use case, but I think the problem could be
solved quite easily:
At any write-out time, e.g. after collecting enough data for delayed
allocation or at fsync()
1) copy the metadata in memory, i.e. snapshot it
2) write out the data corresponding to the metadata-snapshot
3) write out the snapshot of the metadata
In that way subsequent metadata changes should not interfere with the
metadata-update on disk.
Andreas
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 21:03 ` Lennart Sorensen
@ 2009-04-01 21:36 ` Andrew Morton
2009-04-01 22:57 ` Lennart Sorensen
2009-04-02 1:00 ` Ingo Molnar
2009-04-02 11:05 ` Janne Grunau
2009-04-02 12:17 ` Theodore Tso
2 siblings, 2 replies; 419+ messages in thread
From: Andrew Morton @ 2009-04-01 21:36 UTC (permalink / raw)
To: Lennart Sorensen; +Cc: torvalds, tytso, drees76, jesper, linux-kernel
On Wed, 1 Apr 2009 17:03:38 -0400
lsorense@csclub.uwaterloo.ca (Lennart Sorensen) wrote:
> On Thu, Mar 26, 2009 at 06:25:19PM -0700, Andrew Morton wrote:
> > The JBD journal is a massive designed-in contention point. It's why
> > for several years I've been telling anyone who will listen that we need
> > a new fs. Hopefully our response to all these problems will soon be
> > "did you try btrfs?".
>
> Oh I look forward to the day when it will be safe to convert my mythtv
> box from ext3 to btrfs. Current kernels just have too much IO latency
> with ext3 it seems. Older kernels were more responsive, but probably
> had other places they were less efficient.
Back in 2002ish I did a *lot* of work on IO latency, reads-vs-writes,
etc, etc (but not fsync - for practical purposes it's unfixable on
ext3-ordered)
Performance was pretty good. From some of the descriptions I'm seeing
get tossed around lately, I suspect that it has regressed.
It would be useful/interesting if people were to rerun some of these
tests with `echo anticipatory > /sys/block/sda/queue/scheduler'.
Or with linux-2.5.60 :(
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 16:02 ` Chris Mason
2009-04-01 18:37 ` Andreas T.Auer
@ 2009-04-01 21:50 ` Theodore Tso
2009-04-01 23:44 ` Matthew Garrett
1 sibling, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-04-01 21:50 UTC (permalink / raw)
To: Chris Mason
Cc: Andreas T.Auer, Dave Chinner, Mark Lord, Stefan Richter,
Jeff Garzik, Linus Torvalds, Matthew Garrett, Alan Cox,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Apr 01, 2009 at 12:02:26PM -0400, Chris Mason wrote:
>
> If we crash during the rsync, the backup logs will yell. If we crash
> just after the rsync, the backup logs won't know. The data could still
> be gone.
So have rsync call the sync() system call before it exits. Not a big
deal, and not all that costly. So basically what I would suggest
doing for people who are really worried about rsync performance with
flush-on-rename is to create a patch to rsync which creates a new
flag, --unlink-before-rename, which will defeat the flush-on-rename
hueristic; and if this patch also causes rsync to call sync() when it
is done, it should be quite safe.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 20:15 ` david
2009-04-01 21:33 ` Andreas T.Auer
@ 2009-04-01 22:00 ` Harald Arnesen
2009-04-01 22:09 ` Alejandro Riveira Fernández
2009-04-01 22:28 ` david
1 sibling, 2 replies; 419+ messages in thread
From: Harald Arnesen @ 2009-04-01 22:00 UTC (permalink / raw)
To: david; +Cc: Bill Davidsen, linux-kernel
david@lang.hm writes:
>> Understood that it's not deliberate just careless. The two behaviors
>> which are reported are (a) updating a record in an existing file and
>> having the entire file content vanish, and (b) finding some one
>> else's old data in my file - a serious security issue. I haven't
>> seen any report of the case where a process unlinks or truncates a
>> file, the disk space gets reused, and then the systems fails before
>> the metadata is updated, leaving the data written by some other
>> process in the file where it can be read - another possible security
>> issue.
>
> ext3 eliminates this security issue by writing the data before the
> metadata. ext4 (and I thing XFS) eliminate this security issue by not
> allocating the blocks until it goes to write the data out. I don't
> know how other filesystems deal with this.
I've been wondering about that during the last days. How abut JFS and
data loss (files containing zeroes after a crash), as compared to ext3,
ext4, ordered and writeback journal modes? Is is safe?
--
Hilsen Harald.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 22:00 ` Harald Arnesen
@ 2009-04-01 22:09 ` Alejandro Riveira Fernández
2009-04-01 22:28 ` david
1 sibling, 0 replies; 419+ messages in thread
From: Alejandro Riveira Fernández @ 2009-04-01 22:09 UTC (permalink / raw)
To: Harald Arnesen; +Cc: david, Bill Davidsen, linux-kernel
El Thu, 02 Apr 2009 00:00:04 +0200
Harald Arnesen <skogtun.harald@gmail.com> escribió:
>
> I've been wondering about that during the last days. How abut JFS and
> data loss (files containing zeroes after a crash), as compared to ext3,
> ext4, ordered and writeback journal modes? Is is safe?
i have had zeroed conf files with jfs (shell history) and corrupted firefox
history files too after power outages and the like.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 22:00 ` Harald Arnesen
2009-04-01 22:09 ` Alejandro Riveira Fernández
@ 2009-04-01 22:28 ` david
1 sibling, 0 replies; 419+ messages in thread
From: david @ 2009-04-01 22:28 UTC (permalink / raw)
To: Harald Arnesen; +Cc: Bill Davidsen, linux-kernel
On Thu, 2 Apr 2009, Harald Arnesen wrote:
> david@lang.hm writes:
>
>>> Understood that it's not deliberate just careless. The two behaviors
>>> which are reported are (a) updating a record in an existing file and
>>> having the entire file content vanish, and (b) finding some one
>>> else's old data in my file - a serious security issue. I haven't
>>> seen any report of the case where a process unlinks or truncates a
>>> file, the disk space gets reused, and then the systems fails before
>>> the metadata is updated, leaving the data written by some other
>>> process in the file where it can be read - another possible security
>>> issue.
>>
>> ext3 eliminates this security issue by writing the data before the
>> metadata. ext4 (and I thing XFS) eliminate this security issue by not
>> allocating the blocks until it goes to write the data out. I don't
>> know how other filesystems deal with this.
>
> I've been wondering about that during the last days. How abut JFS and
> data loss (files containing zeroes after a crash), as compared to ext3,
> ext4, ordered and writeback journal modes? Is is safe?
if you don't do a fsync you can (and will) loose data if there is a crash
period, end of statement, with all filesystems
for all filesystems except ext3 in data=ordered or data=journaled modes
journaling does _not_ mean that your files will have valid data in them.
all it means is that your metadata will not be inconsistant (things like
one block on disk showing up as being part of two different files)
this guarantee means that a crash is not likely to scramble your entire
disk, but any data written shortly before the crash may not have made it
to disk (and the files may contain garbage in the space that was allocated
but not written). as such it is not nessasary to do a fsck after every
crash (it's still a good idea to do so every once in a while)
that's _ALL_ that journaling is protecting you from.
delayed allocateion and data=ordered are ways to address the security
problem that the garbage data that could end up as part of the file could
contain sensitive data that had been part of other files in the past.
data=ordered and data=journaled address this security risk by writing the
data before they write the metadata (at the cost of long delays in writing
the metadata out, and therefor long fsync times)
XFS and ext4 solve the problem by not allocating the data blocks until
they are actually ready to write the data.
David Lang
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 21:33 ` Andreas T.Auer
@ 2009-04-01 22:29 ` david
2009-04-02 2:30 ` Bron Gondwana
2009-04-02 12:30 ` Bill Davidsen
0 siblings, 2 replies; 419+ messages in thread
From: david @ 2009-04-01 22:29 UTC (permalink / raw)
To: Andreas T.Auer; +Cc: Bill Davidsen, linux-kernel
On Wed, 1 Apr 2009, Andreas T.Auer wrote:
> On 01.04.2009 22:15 david@lang.hm wrote:
>> On Wed, 1 Apr 2009, Bill Davidsen wrote:
>>
>>> david@lang.hm wrote:
>>>> it's not that it's deliberatly pushing metadata out ahead of file
>>>> data, but say you have the following sequence
>>>>
>>>> write to file1
>>>> update metadata for file1
>>>> write to file2
>>>> update metadata for file2
>>>>
>>>> if file1 and file2 are in the same directory your software can
>>>> finish all four of these steps before _any_ of the data gets pushed
>>>> to disk.
>>>>
>>>> then when the system goes to write the metadata for file1 it is
>>>> pushing the then-current copy of that sector to disk, which includes
>>>> the metadata for file2, even though the data for file2 hasn't been
>>>> written yet.
>>>>
>>>> if you try to say 'flush all data blocks before metadata blocks' and
>>>> have a lot of activity going on in a directory, and have to wait
>>>> until it all stops before you write any of the metadata out, you
>>>> could be blocked from writing the metadata for a _long_ time.
>>>>
>>> If you mean "write all data for that file" before the metadata, it
>>> would seem to behave the way an fsync would, and the metadata should
>>> go out in some reasonable time.
>>
>> except if another file in the directory gets modified while it's
>> writing out the first two, that file now would need to get written out
>> as well, before the metadata for that directory can be written. if you
>> have a busy system (say a database or log server), where files are
>> getting modified pretty constantly, it can be a long time before all
>> the file data is written out and the system is idle enough to write
>> the metadata.
> Thank you, David, for this use case, but I think the problem could be
> solved quite easily:
>
> At any write-out time, e.g. after collecting enough data for delayed
> allocation or at fsync()
>
> 1) copy the metadata in memory, i.e. snapshot it
> 2) write out the data corresponding to the metadata-snapshot
> 3) write out the snapshot of the metadata
>
> In that way subsequent metadata changes should not interfere with the
> metadata-update on disk.
the problem with this approach is that the dcache has no provision for
there being two (or more) copies of the disk block in it's cache, adding
this would significantly complicate things (it was mentioned briefly a few
days ago in this thread)
David Lang
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 21:36 ` Andrew Morton
@ 2009-04-01 22:57 ` Lennart Sorensen
2009-04-03 14:46 ` Mark Lord
2009-04-02 1:00 ` Ingo Molnar
1 sibling, 1 reply; 419+ messages in thread
From: Lennart Sorensen @ 2009-04-01 22:57 UTC (permalink / raw)
To: Andrew Morton; +Cc: torvalds, tytso, drees76, jesper, linux-kernel
On Wed, Apr 01, 2009 at 02:36:22PM -0700, Andrew Morton wrote:
> Back in 2002ish I did a *lot* of work on IO latency, reads-vs-writes,
> etc, etc (but not fsync - for practical purposes it's unfixable on
> ext3-ordered)
>
> Performance was pretty good. From some of the descriptions I'm seeing
> get tossed around lately, I suspect that it has regressed.
>
> It would be useful/interesting if people were to rerun some of these
> tests with `echo anticipatory > /sys/block/sda/queue/scheduler'.
>
> Or with linux-2.5.60 :(
Well 2.6.18 seems to keep popping up as the last kernel with "sane"
behaviour, at least in terms of not causing huge delays under many
workloads. I currently run 2.6.26, although that could be updated as
soon as I get around to figuring out why lirc isn't working for me when
I move past 2.6.26.
I could certainly try changing the scheduler on my mythtv box and seeing
if that makes any difference to the behaviour. It is pretty darn obvious
whether it is responsive or not when starting to play back a video.
--
Len Sorensen
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 21:50 ` Theodore Tso
@ 2009-04-01 23:44 ` Matthew Garrett
0 siblings, 0 replies; 419+ messages in thread
From: Matthew Garrett @ 2009-04-01 23:44 UTC (permalink / raw)
To: Theodore Tso, Chris Mason, Andreas T.Auer, Dave Chinner,
Mark Lord, Stefan Richter, Jeff Garzik, Linus Torvalds, Alan Cox,
Andrew Morton, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Apr 01, 2009 at 05:50:40PM -0400, Theodore Tso wrote:
> On Wed, Apr 01, 2009 at 12:02:26PM -0400, Chris Mason wrote:
> >
> > If we crash during the rsync, the backup logs will yell. If we crash
> > just after the rsync, the backup logs won't know. The data could still
> > be gone.
>
> So have rsync call the sync() system call before it exits.
sync() isn't guaranteed to be synchronous. Treating it as such isn't
portable.
--
Matthew Garrett | mjg59@srcf.ucam.org
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 21:36 ` Andrew Morton
2009-04-01 22:57 ` Lennart Sorensen
@ 2009-04-02 1:00 ` Ingo Molnar
2009-04-03 4:06 ` Lennart Sorensen
1 sibling, 1 reply; 419+ messages in thread
From: Ingo Molnar @ 2009-04-02 1:00 UTC (permalink / raw)
To: Andrew Morton
Cc: Lennart Sorensen, torvalds, tytso, drees76, jesper, linux-kernel
* Andrew Morton <akpm@linux-foundation.org> wrote:
> On Wed, 1 Apr 2009 17:03:38 -0400
> lsorense@csclub.uwaterloo.ca (Lennart Sorensen) wrote:
>
> > On Thu, Mar 26, 2009 at 06:25:19PM -0700, Andrew Morton wrote:
> > > The JBD journal is a massive designed-in contention point. It's why
> > > for several years I've been telling anyone who will listen that we need
> > > a new fs. Hopefully our response to all these problems will soon be
> > > "did you try btrfs?".
> >
> > Oh I look forward to the day when it will be safe to convert my mythtv
> > box from ext3 to btrfs. Current kernels just have too much IO latency
> > with ext3 it seems. Older kernels were more responsive, but probably
> > had other places they were less efficient.
>
> Back in 2002ish I did a *lot* of work on IO latency,
> reads-vs-writes, etc, etc (but not fsync - for practical purposes
> it's unfixable on ext3-ordered)
>
> Performance was pretty good. From some of the descriptions I'm
> seeing get tossed around lately, I suspect that it has regressed.
>
> It would be useful/interesting if people were to rerun some of these
> tests with `echo anticipatory > /sys/block/sda/queue/scheduler'.
I'll test this (and the other suggestions) once i'm out of the merge
window.
> Or with linux-2.5.60 :(
I probably wont test that though ;-)
Going back to v2.6.14 to do pre-mutex-merge performance tests was
already quite a challenge on modern hardware.
Ingo
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 22:29 ` david
@ 2009-04-02 2:30 ` Bron Gondwana
2009-04-02 4:55 ` david
2009-04-02 12:30 ` Bill Davidsen
1 sibling, 1 reply; 419+ messages in thread
From: Bron Gondwana @ 2009-04-02 2:30 UTC (permalink / raw)
To: david; +Cc: Andreas T.Auer, Bill Davidsen, linux-kernel
On Wed, Apr 01, 2009 at 03:29:29PM -0700, david@lang.hm wrote:
> On Wed, 1 Apr 2009, Andreas T.Auer wrote:
>> On 01.04.2009 22:15 david@lang.hm wrote:
>>> except if another file in the directory gets modified while it's
>>> writing out the first two, that file now would need to get written out
>>> as well, before the metadata for that directory can be written. if you
>>> have a busy system (say a database or log server), where files are
>>> getting modified pretty constantly, it can be a long time before all
>>> the file data is written out and the system is idle enough to write
>>> the metadata.
>> Thank you, David, for this use case, but I think the problem could be
>> solved quite easily:
>>
>> At any write-out time, e.g. after collecting enough data for delayed
>> allocation or at fsync()
>>
>> 1) copy the metadata in memory, i.e. snapshot it
>> 2) write out the data corresponding to the metadata-snapshot
>> 3) write out the snapshot of the metadata
>>
>> In that way subsequent metadata changes should not interfere with the
>> metadata-update on disk.
>
> the problem with this approach is that the dcache has no provision for
> there being two (or more) copies of the disk block in it's cache, adding
> this would significantly complicate things (it was mentioned briefly a
> few days ago in this thread)
It seems that it's obviously the "right way" to solve the problem
though. How much does the dcache need to know about this "in flight"
block (ok, blocks - I can imagine a pathological case where there
were a stack of them all slightly different in the queue)?
You'd be basically reinventing MVCC-like database logic with
transactional commits at that point - so each fs "barrier" call
would COW all the affected pages and write them down to disk.
Bron.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 2:30 ` Bron Gondwana
@ 2009-04-02 4:55 ` david
2009-04-02 5:29 ` Bron Gondwana
2009-04-02 9:58 ` Andreas T.Auer
0 siblings, 2 replies; 419+ messages in thread
From: david @ 2009-04-02 4:55 UTC (permalink / raw)
To: Bron Gondwana; +Cc: Andreas T.Auer, Bill Davidsen, linux-kernel
On Thu, 2 Apr 2009, Bron Gondwana wrote:
> On Wed, Apr 01, 2009 at 03:29:29PM -0700, david@lang.hm wrote:
>> On Wed, 1 Apr 2009, Andreas T.Auer wrote:
>>> On 01.04.2009 22:15 david@lang.hm wrote:
>>>> except if another file in the directory gets modified while it's
>>>> writing out the first two, that file now would need to get written out
>>>> as well, before the metadata for that directory can be written. if you
>>>> have a busy system (say a database or log server), where files are
>>>> getting modified pretty constantly, it can be a long time before all
>>>> the file data is written out and the system is idle enough to write
>>>> the metadata.
>>> Thank you, David, for this use case, but I think the problem could be
>>> solved quite easily:
>>>
>>> At any write-out time, e.g. after collecting enough data for delayed
>>> allocation or at fsync()
>>>
>>> 1) copy the metadata in memory, i.e. snapshot it
>>> 2) write out the data corresponding to the metadata-snapshot
>>> 3) write out the snapshot of the metadata
>>>
>>> In that way subsequent metadata changes should not interfere with the
>>> metadata-update on disk.
>>
>> the problem with this approach is that the dcache has no provision for
>> there being two (or more) copies of the disk block in it's cache, adding
>> this would significantly complicate things (it was mentioned briefly a
>> few days ago in this thread)
>
> It seems that it's obviously the "right way" to solve the problem
> though. How much does the dcache need to know about this "in flight"
> block (ok, blocks - I can imagine a pathological case where there
> were a stack of them all slightly different in the queue)?
but if only one filesystem needs this caability is it really worth
complicating the dcache for the entire system?
> You'd be basically reinventing MVCC-like database logic with
> transactional commits at that point - so each fs "barrier" call
> would COW all the affected pages and write them down to disk.
one aspect of mvcc systems is that they eat up space and require 'garbage
collection' type functions. that could cause deadlocks if you aren't
careful.
David Lang
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 4:55 ` david
@ 2009-04-02 5:29 ` Bron Gondwana
2009-04-02 9:58 ` Andreas T.Auer
1 sibling, 0 replies; 419+ messages in thread
From: Bron Gondwana @ 2009-04-02 5:29 UTC (permalink / raw)
To: david; +Cc: Bron Gondwana, Andreas T.Auer, Bill Davidsen, linux-kernel
On Wed, Apr 01, 2009 at 09:55:18PM -0700, david@lang.hm wrote:
> On Thu, 2 Apr 2009, Bron Gondwana wrote:
>
>> On Wed, Apr 01, 2009 at 03:29:29PM -0700, david@lang.hm wrote:
>>> the problem with this approach is that the dcache has no provision for
>>> there being two (or more) copies of the disk block in it's cache, adding
>>> this would significantly complicate things (it was mentioned briefly a
>>> few days ago in this thread)
>>
>> It seems that it's obviously the "right way" to solve the problem
>> though. How much does the dcache need to know about this "in flight"
>> block (ok, blocks - I can imagine a pathological case where there
>> were a stack of them all slightly different in the queue)?
>
> but if only one filesystem needs this caability is it really worth
> complicating the dcache for the entire system?
Depends if that one filesystem is expected to have 90% of the
installed base or not, I guess. If not, then it's not worth
it. If having something like this makes that one filesystem
the best for the majority of workloads, then hell yes.
>> You'd be basically reinventing MVCC-like database logic with
>> transactional commits at that point - so each fs "barrier" call
>> would COW all the affected pages and write them down to disk.
>
> one aspect of mvcc systems is that they eat up space and require 'garbage
> collection' type functions. that could cause deadlocks if you aren't
> careful.
I guess the nice thing here is that the only consumer for the older
versions is the disk flushing thread, so figuring out when to cleanup
wouldn't be so hard as in a concurrent-users database.
But I'm speculating with no little hands-on experience with the
code. I just know I'd like the result...
Bron ( creating consistent pages on disk that never really
existed in memory sounds... exciting )
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 4:55 ` david
2009-04-02 5:29 ` Bron Gondwana
@ 2009-04-02 9:58 ` Andreas T.Auer
1 sibling, 0 replies; 419+ messages in thread
From: Andreas T.Auer @ 2009-04-02 9:58 UTC (permalink / raw)
To: david; +Cc: Bron Gondwana, Andreas T.Auer, Bill Davidsen, linux-kernel
On 02.04.2009 06:55 david@lang.hm wrote:
> On Thu, 2 Apr 2009, Bron Gondwana wrote:
>
>
>> On Wed, Apr 01, 2009 at 03:29:29PM -0700, david@lang.hm wrote:
>>
>>> On Wed, 1 Apr 2009, Andreas T.Auer wrote:
>>>
>>>> On 01.04.2009 22:15 david@lang.hm wrote:
>>>>
>>>>> except if another file in the directory gets modified while it's
>>>>> writing out the first two, that file now would need to get written out
>>>>> as well, before the metadata for that directory can be written. if you
>>>>> have a busy system (say a database or log server), where files are
>>>>> getting modified pretty constantly, it can be a long time before all
>>>>> the file data is written out and the system is idle enough to write
>>>>> the metadata.
>>>>>
>>>> Thank you, David, for this use case, but I think the problem could be
>>>> solved quite easily:
>>>>
>>>> At any write-out time, e.g. after collecting enough data for delayed
>>>> allocation or at fsync()
>>>>
>>>> 1) copy the metadata in memory, i.e. snapshot it
>>>> 2) write out the data corresponding to the metadata-snapshot
>>>> 3) write out the snapshot of the metadata
>>>>
>>>> In that way subsequent metadata changes should not interfere with the
>>>> metadata-update on disk.
>>>>
>>> the problem with this approach is that the dcache has no provision for
>>> there being two (or more) copies of the disk block in it's cache, adding
>>> this would significantly complicate things (it was mentioned briefly a
>>> few days ago in this thread)
>>>
I must have missed that message and can't find it.
>> It seems that it's obviously the "right way" to solve the problem
>> though. How much does the dcache need to know about this "in flight"
>> block (ok, blocks - I can imagine a pathological case where there
>> were a stack of them all slightly different in the queue)?
>>
>
> but if only one filesystem needs this caability is it really worth
> complicating the dcache for the entire system?
>
No, it's not necessary. It should be possible for the specific fs to
keep the metadata copy internally. And as long as these blocks are
written immediately after writing the data, there should be no "queue"
of copies, depending on how fsyncs are handled while the fs is
committing. There might be one copy for the current commit and (at most)
one copy corresponding to the most recent pending fsync. If there are
multiple fsyncs before the commit is finished, the "pending copy" could
simply be overwritten.
Andreas
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 21:03 ` Lennart Sorensen
2009-04-01 21:36 ` Andrew Morton
@ 2009-04-02 11:05 ` Janne Grunau
2009-04-02 16:09 ` Andrew Morton
` (3 more replies)
2009-04-02 12:17 ` Theodore Tso
2 siblings, 4 replies; 419+ messages in thread
From: Janne Grunau @ 2009-04-02 11:05 UTC (permalink / raw)
To: Lennart Sorensen
Cc: Andrew Morton, Linus Torvalds, Theodore Tso, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Wed, Apr 01, 2009 at 05:03:38PM -0400, Lennart Sorensen wrote:
> On Thu, Mar 26, 2009 at 06:25:19PM -0700, Andrew Morton wrote:
> > The JBD journal is a massive designed-in contention point. It's why
> > for several years I've been telling anyone who will listen that we need
> > a new fs. Hopefully our response to all these problems will soon be
> > "did you try btrfs?".
>
> Oh I look forward to the day when it will be safe to convert my mythtv
> box from ext3 to btrfs.
You could convert it to xfs now. xfs is probably the file system with
the lowest complaints usage ratio within the mythyv community.
Using distinct discs for system and recording storage helps too.
> Current kernels just have too much IO latency
> with ext3 it seems.
MythTV calls fsync every few seconds on ongoing recordings to prevent
stalls due to large cache writebacks on ext3.
cheers
Janne
(MythTV developer)
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 21:03 ` Lennart Sorensen
2009-04-01 21:36 ` Andrew Morton
2009-04-02 11:05 ` Janne Grunau
@ 2009-04-02 12:17 ` Theodore Tso
2009-04-02 21:54 ` Lennart Sorensen
2 siblings, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-04-02 12:17 UTC (permalink / raw)
To: Lennart Sorensen
Cc: Andrew Morton, Linus Torvalds, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Wed, Apr 01, 2009 at 05:03:38PM -0400, Lennart Sorensen wrote:
> On Thu, Mar 26, 2009 at 06:25:19PM -0700, Andrew Morton wrote:
> > The JBD journal is a massive designed-in contention point. It's why
> > for several years I've been telling anyone who will listen that we need
> > a new fs. Hopefully our response to all these problems will soon be
> > "did you try btrfs?".
>
> Oh I look forward to the day when it will be safe to convert my mythtv
> box from ext3 to btrfs. Current kernels just have too much IO latency
> with ext3 it seems. Older kernels were more responsive, but probably
> had other places they were less efficient.
Well, ext4 will be an interim solution you can convert to first. It
will be best with a backup/reformat/restore pass, better if you enable
extents (at least for new files, but then you won't be able to go back
to ext3), but you'll get improvements even if you just mount an ext3
filesystem as ext4.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 22:29 ` david
2009-04-02 2:30 ` Bron Gondwana
@ 2009-04-02 12:30 ` Bill Davidsen
1 sibling, 0 replies; 419+ messages in thread
From: Bill Davidsen @ 2009-04-02 12:30 UTC (permalink / raw)
To: david; +Cc: Andreas T.Auer, linux-kernel
david@lang.hm wrote:
> On Wed, 1 Apr 2009, Andreas T.Auer wrote:
>> Thank you, David, for this use case, but I think the problem could be
>> solved quite easily:
>>
>> At any write-out time, e.g. after collecting enough data for delayed
>> allocation or at fsync()
>>
>> 1) copy the metadata in memory, i.e. snapshot it
>> 2) write out the data corresponding to the metadata-snapshot
>> 3) write out the snapshot of the metadata
>>
>> In that way subsequent metadata changes should not interfere with the
>> metadata-update on disk.
>
> the problem with this approach is that the dcache has no provision for
> there being two (or more) copies of the disk block in it's cache,
> adding this would significantly complicate things (it was mentioned
> briefly a few days ago in this thread)
I think the sync point should be between the file system and the dcache,
with the data only going into the dcache when it's time to write it.
That also opens the door to doing atime better at no cost, atime changes
would be kept internal to the file system, and only be written at close
or fsync, even on a mount which does not use noatime or relatime. The
file system can keep that information and only write it when appropriate.
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
"You are disgraced professional losers. And by the way, give us our money back."
- Representative Earl Pomeroy, Democrat of North Dakota
on the A.I.G. executives who were paid bonuses after a federal bailout.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-24 6:19 ` Jesper Krogh
2009-03-24 6:46 ` David Rees
@ 2009-04-02 14:00 ` Mathieu Desnoyers
1 sibling, 0 replies; 419+ messages in thread
From: Mathieu Desnoyers @ 2009-04-02 14:00 UTC (permalink / raw)
To: Jesper Krogh
Cc: Linus Torvalds, Linux Kernel Mailing List, Theodore Tso,
Ingo Molnar, David Rees, Alan Cox
>
> Linus Torvalds wrote:
> > This obviously starts the merge window for 2.6.30, although as usual, I'll
> > probably wait a day or two before I start actively merging. I do that in
> > order to hopefully result in people testing the final plain 2.6.29 a bit
> > more before all the crazy changes start up again.
>
> I know this has been discussed before:
>
> [129401.996244] INFO: task updatedb.mlocat:31092 blocked for more than
> 480 seconds.
> [129402.084667] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [129402.179331] updatedb.mloc D 0000000000000000 0 31092 31091
> [129402.179335] ffff8805ffa1d900 0000000000000082 ffff8803ff5688a8
> 0000000000001000
> [129402.179338] ffffffff806cc000 ffffffff806cc000 ffffffff806d3e80
> ffffffff806d3e80
> [129402.179341] ffffffff806cfe40 ffffffff806d3e80 ffff8801fb9f87e0
> 000000000000ffff
> [129402.179343] Call Trace:
> [129402.179353] [<ffffffff802d3ff0>] sync_buffer+0x0/0x50
> [129402.179358] [<ffffffff80493a50>] io_schedule+0x20/0x30
> [129402.179360] [<ffffffff802d402b>] sync_buffer+0x3b/0x50
> [129402.179362] [<ffffffff80493d2f>] __wait_on_bit+0x4f/0x80
> [129402.179364] [<ffffffff802d3ff0>] sync_buffer+0x0/0x50
> [129402.179366] [<ffffffff80493dda>] out_of_line_wait_on_bit+0x7a/0xa0
> [129402.179369] [<ffffffff80252730>] wake_bit_function+0x0/0x30
> [129402.179396] [<ffffffffa0264346>] ext3_find_entry+0xf6/0x610 [ext3]
> [129402.179399] [<ffffffff802d3453>] __find_get_block+0x83/0x170
> [129402.179403] [<ffffffff802c4a90>] ifind_fast+0x50/0xa0
> [129402.179405] [<ffffffff802c5874>] iget_locked+0x44/0x180
> [129402.179412] [<ffffffffa0266435>] ext3_lookup+0x55/0x100 [ext3]
> [129402.179415] [<ffffffff802c32a7>] d_alloc+0x127/0x1c0
> [129402.179417] [<ffffffff802ba2a7>] do_lookup+0x1b7/0x250
> [129402.179419] [<ffffffff802bc51d>] __link_path_walk+0x76d/0xd60
> [129402.179421] [<ffffffff802ba17f>] do_lookup+0x8f/0x250
> [129402.179424] [<ffffffff802c8b37>] mntput_no_expire+0x27/0x150
> [129402.179426] [<ffffffff802bcb64>] path_walk+0x54/0xb0
> [129402.179428] [<ffffffff802bfd10>] filldir+0x0/0xf0
> [129402.179430] [<ffffffff802bcc8a>] do_path_lookup+0x7a/0x150
> [129402.179432] [<ffffffff802bbb55>] getname+0xe5/0x1f0
> [129402.179434] [<ffffffff802bd8d4>] user_path_at+0x44/0x80
> [129402.179437] [<ffffffff802b53b5>] cp_new_stat+0xe5/0x100
> [129402.179440] [<ffffffff802b56d0>] vfs_lstat_fd+0x20/0x60
> [129402.179442] [<ffffffff802b5737>] sys_newlstat+0x27/0x50
> [129402.179445] [<ffffffff8020c35b>] system_call_fastpath+0x16/0x1b
> Consensus seems to be something with large memory machines, lots of
> dirty pages and a long writeout time due to ext3.
>
> At the moment this the largest "usabillity" issue in the serversetup I'm
> working with. Can there be done something to "autotune" it .. or perhaps
> even fix it? .. or is it just to shift to xfs or wait for ext4?
>
Hi Jesper,
What you are seeing looks awefully like the bug I have spent some time
to try to figure out in this bugzilla thread :
[Bug 12309] Large I/O operations result in slow performance and high
iowait times
http://bugzilla.kernel.org/show_bug.cgi?id=12309
I created a fio test case out of a lttng trace to reproduce the problem
and created a patch to try to account the pages used by the i/o elevator
in the vm page count used to calculate memory pressure. Basically, the
behavior I was seeing is a constant increase of memory usage when doing
a dd-like write to disk until the memory fills up, which is indeed
wrong. The patch I posted in that thread seems to cause other problems
though, so probably we should teach kjournald to do better.
Here is the patch attempt :
http://bugzilla.kernel.org/attachment.cgi?id=20172
Here is the fio test case :
http://bugzilla.kernel.org/attachment.cgi?id=19894
My findings were this (I hope other people with deeper knowledge of
block layer/vm interaction can correct me) :
- Upon heavy and long disk writes, the pages used to back the buffers
continuously increase as if there was no memory pressure at all.
Therefore, I suspect they are held in a nowhere land that's unaccounted
for at the vm layer (not part of memory pressure). That would seem to
be the I/O elevator.
Can you give a try at the dd and fio test cases pointed out in the
bugzilla entry ? You may also want to see if my patch helps to partially
solve your problem. Another hint is to try to use the cgroups to
restrict you heavy I/O processes to a limited amount of memory;
although it does not solve the core of the problem, it made it disappear
for me. And of course trying to get a LTTng trace to get your head
around the problem can be very efficient. It's available as a git tree
over 2.6.29, and includes VFS, block I/O layer and vm instrumentation,
which helps looking at their interaction. All information is at
http://www.lttng.org.
Hoping this helps,
Mathieu
--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 11:05 ` Janne Grunau
@ 2009-04-02 16:09 ` Andrew Morton
2009-04-02 16:33 ` David Rees
2009-04-02 16:29 ` David Rees
` (2 subsequent siblings)
3 siblings, 1 reply; 419+ messages in thread
From: Andrew Morton @ 2009-04-02 16:09 UTC (permalink / raw)
To: Janne Grunau
Cc: Lennart Sorensen, Linus Torvalds, Theodore Tso, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Thu, 2 Apr 2009 13:05:32 +0200 Janne Grunau <j@jannau.net> wrote:
> MythTV calls fsync every few seconds on ongoing recordings to prevent
> stalls due to large cache writebacks on ext3.
It should use sync_file_range(SYNC_FILE_RANGE_WRITE). That will
- have minimum latency. It tries to avoid blocking at all.
- avoid writing metadata
- avoid syncing other unrelated files within ext3
- avoid waiting for the ext3 commit to complete.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 11:05 ` Janne Grunau
2009-04-02 16:09 ` Andrew Morton
@ 2009-04-02 16:29 ` David Rees
2009-04-02 16:42 ` Andrew Morton
2009-04-02 21:42 ` Theodore Tso
2009-04-02 21:50 ` Lennart Sorensen
2009-04-03 15:07 ` Mark Lord
3 siblings, 2 replies; 419+ messages in thread
From: David Rees @ 2009-04-02 16:29 UTC (permalink / raw)
To: Janne Grunau
Cc: Lennart Sorensen, Andrew Morton, Linus Torvalds, Theodore Tso,
Jesper Krogh, Linux Kernel Mailing List
On Thu, Apr 2, 2009 at 4:05 AM, Janne Grunau <j@jannau.net> wrote:
>> Current kernels just have too much IO latency
>> with ext3 it seems.
>
> MythTV calls fsync every few seconds on ongoing recordings to prevent
> stalls due to large cache writebacks on ext3.
Personally that is also one of my MythTV pet peeves. A hack added to
MythTV to work around a crappy ext3 latency bug that also causes these
large files to get heavily fragmented. That and the fact that yo have
to patch MythTV to eliminate those forced fdatasyncs - there is no
knob to turn it off if you're running MythTV on a filesystem which
doesn't suffer from ext3's data=ordered fsync stalls.
-Dave
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 16:09 ` Andrew Morton
@ 2009-04-02 16:33 ` David Rees
2009-04-02 16:46 ` Linus Torvalds
` (2 more replies)
0 siblings, 3 replies; 419+ messages in thread
From: David Rees @ 2009-04-02 16:33 UTC (permalink / raw)
To: Andrew Morton
Cc: Janne Grunau, Lennart Sorensen, Linus Torvalds, Theodore Tso,
Jesper Krogh, Linux Kernel Mailing List
On Thu, Apr 2, 2009 at 9:09 AM, Andrew Morton <akpm@linux-foundation.org> wrote:
> On Thu, 2 Apr 2009 13:05:32 +0200 Janne Grunau <j@jannau.net> wrote:
>> MythTV calls fsync every few seconds on ongoing recordings to prevent
>> stalls due to large cache writebacks on ext3.
>
> It should use sync_file_range(SYNC_FILE_RANGE_WRITE). That will
>
> - have minimum latency. It tries to avoid blocking at all.
> - avoid writing metadata
> - avoid syncing other unrelated files within ext3
> - avoid waiting for the ext3 commit to complete.
MythTV actually uses fdatasync, not fsync (or at least that's what it
did last time I looked at the source). Not sure how the behavior of
fdatasync compares to sync_file_range.
Either way - forcing the data to be synced to disk a couple times
every second is a hack and causes fragmentation in filesystems without
delayed allocation. Fragmentation really goes up if you are recording
multiple shows at once.
-Dave
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 16:29 ` David Rees
@ 2009-04-02 16:42 ` Andrew Morton
2009-04-02 16:57 ` Linus Torvalds
2009-04-02 18:52 ` David Rees
2009-04-02 21:42 ` Theodore Tso
1 sibling, 2 replies; 419+ messages in thread
From: Andrew Morton @ 2009-04-02 16:42 UTC (permalink / raw)
To: David Rees
Cc: Janne Grunau, Lennart Sorensen, Linus Torvalds, Theodore Tso,
Jesper Krogh, Linux Kernel Mailing List
On Thu, 2 Apr 2009 09:29:59 -0700 David Rees <drees76@gmail.com> wrote:
> On Thu, Apr 2, 2009 at 4:05 AM, Janne Grunau <j@jannau.net> wrote:
> >> Current kernels just have too much IO latency
> >> with ext3 it seems.
> >
> > MythTV calls fsync every few seconds on ongoing recordings to prevent
> > stalls due to large cache writebacks on ext3.
>
> Personally that is also one of my MythTV pet peeves. A hack added to
> MythTV to work around a crappy ext3 latency bug that also causes these
> large files to get heavily fragmented. That and the fact that yo have
> to patch MythTV to eliminate those forced fdatasyncs - there is no
> knob to turn it off if you're running MythTV on a filesystem which
> doesn't suffer from ext3's data=ordered fsync stalls.
>
For any filesystem it is quite sensible for an application to manage
the amount of dirty memory which the kernel is holding on its behalf,
and based upon the application's knowledge of its future access
patterns.
But MythTV did it the wrong way.
A suitable design for the streaming might be, every 4MB:
- run sync_file_range(SYNC_FILE_RANGE_WRITE) to get the 4MB underway
to the disk
- run fadvise(POSIX_FADV_DONTNEED) against the previous 4MB to
discard it from pagecache.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 16:33 ` David Rees
@ 2009-04-02 16:46 ` Linus Torvalds
2009-04-02 16:51 ` Andrew Morton
2009-04-02 21:56 ` Jeff Garzik
2 siblings, 0 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-04-02 16:46 UTC (permalink / raw)
To: David Rees
Cc: Andrew Morton, Janne Grunau, Lennart Sorensen, Theodore Tso,
Jesper Krogh, Linux Kernel Mailing List
On Thu, 2 Apr 2009, David Rees wrote:
>
> MythTV actually uses fdatasync, not fsync (or at least that's what it
> did last time I looked at the source). Not sure how the behavior of
> fdatasync compares to sync_file_range.
fdatasync() _waits_ for the data to hit the disk.
sync_file_range() just starts writeout.
It _can_ do more - you can also ask for it to wait for previous write-out
in order to start _new_ writeout, or wait for the result, but you wouldn't
want to, not for something like this.
sync_file_range() is really a much nicer interface, and is a more extended
fdatasync() that actually matches what the kernel does internally.
You can think of fdatasync(fd) as a
sync_file_range(fd, 0, ~0ull,
SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE | SYNC_FILE_RANGE_WAIT_AFTER);
and then you see why fdatasync is such a horrible interface.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 16:33 ` David Rees
2009-04-02 16:46 ` Linus Torvalds
@ 2009-04-02 16:51 ` Andrew Morton
2009-04-02 22:13 ` Jeff Garzik
2009-04-02 21:56 ` Jeff Garzik
2 siblings, 1 reply; 419+ messages in thread
From: Andrew Morton @ 2009-04-02 16:51 UTC (permalink / raw)
To: David Rees
Cc: Janne Grunau, Lennart Sorensen, Linus Torvalds, Theodore Tso,
Jesper Krogh, Linux Kernel Mailing List
On Thu, 2 Apr 2009 09:33:44 -0700 David Rees <drees76@gmail.com> wrote:
> On Thu, Apr 2, 2009 at 9:09 AM, Andrew Morton <akpm@linux-foundation.org> wrote:
> > On Thu, 2 Apr 2009 13:05:32 +0200 Janne Grunau <j@jannau.net> wrote:
> >> MythTV calls fsync every few seconds on ongoing recordings to prevent
> >> stalls due to large cache writebacks on ext3.
> >
> > It should use sync_file_range(SYNC_FILE_RANGE_WRITE). __That will
> >
> > - have minimum latency. __It tries to avoid blocking at all.
> > - avoid writing metadata
> > - avoid syncing other unrelated files within ext3
> > - avoid waiting for the ext3 commit to complete.
>
> MythTV actually uses fdatasync, not fsync (or at least that's what it
> did last time I looked at the source). Not sure how the behavior of
> fdatasync compares to sync_file_range.
fdatasync() will still trigger the bad ext3 behaviour.
> Either way - forcing the data to be synced to disk a couple times
> every second is a hack and causes fragmentation in filesystems without
> delayed allocation. Fragmentation really goes up if you are recording
> multiple shows at once.
The file layout issue is unrelated to the frequency of fdatasync() -
the block allocation is done at the time of write().
ext3 _should_ handle this case fairly well nowadays - I thought we fixed that.
However it would probably benefit from having the size of the block reservation
window increased - use ioctl(EXT3_IOC_SETRSVSZ). That way, each file gets a
decent-sized hunk of disk "reserved" for its ongoing appending. Other
files won't come in and intermingle their blocks with it.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 16:42 ` Andrew Morton
@ 2009-04-02 16:57 ` Linus Torvalds
2009-04-02 17:04 ` Linus Torvalds
2009-04-02 18:52 ` David Rees
1 sibling, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-04-02 16:57 UTC (permalink / raw)
To: Andrew Morton
Cc: David Rees, Janne Grunau, Lennart Sorensen, Theodore Tso,
Jesper Krogh, Linux Kernel Mailing List
On Thu, 2 Apr 2009, Andrew Morton wrote:
>
> A suitable design for the streaming might be, every 4MB:
>
> - run sync_file_range(SYNC_FILE_RANGE_WRITE) to get the 4MB underway
> to the disk
>
> - run fadvise(POSIX_FADV_DONTNEED) against the previous 4MB to
> discard it from pagecache.
Here's an example. I call it "overwrite.c" for obvious reasons.
Except I used 8MB ranges, and I "stream" random data. Very useful for
"secure delete" of harddisks. It gives pretty optimal speed, while not
destroying your system experience.
Of course, I do think the kernel could/should do this kind of thing
automatically. We really could do something like this with a "dirty LRU"
queue. Make the logic be:
- if you have more than "2*limit" pages in your dirty LRU queue, start
writeout on "limit" pages (default value: 8MB, tunable in /proc).
Remove from LRU queues.
- On writeback IO completion, if it's not on any LRU list, insert page
into "done_write" LRU list.
- if you have more than "2*limit" pages on the done_write LRU queue,
try to just get rid of the first "limit" pages.
It would probably work fine in general. Temp-files (smaller than 8MB
total) would go into the dirty LRU queue, but wouldn't be written out to
disk if they get deleted before you've generated 8MB of dirty data.
But this does the queue-handling by hand, and gives you a throughput
indicator. It should get fairly close to disk speeds.
Linus
---
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <linux/fs.h>
#define BUFSIZE (8*1024*1024ul)
int main(int argc, char **argv)
{
static char buffer[BUFSIZE];
struct timeval start, now;
unsigned int index;
int fd;
mlockall(MCL_CURRENT | MCL_FUTURE);
fd = open("/dev/urandom", O_RDONLY);
if (read(fd, buffer, BUFSIZE) != BUFSIZE) {
perror("/dev/urandom");
exit(1);
}
close(fd);
fd = open(argv[1], O_RDWR | O_CREAT, 0666);
if (fd < 0) {
perror(argv[1]);
exit(1);
}
gettimeofday(&start, NULL);
for (index = 0; ;index++) {
double s;
unsigned long MBps;
unsigned long MB;
if (write(fd, buffer, BUFSIZE) != BUFSIZE)
break;
sync_file_range(fd, index*BUFSIZE, BUFSIZE, SYNC_FILE_RANGE_WRITE);
if (index)
sync_file_range(fd, (index-1)*BUFSIZE, BUFSIZE, SYNC_FILE_RANGE_WAIT_BEFORE|SYNC_FILE_RANGE_WRITE|SYNC_FILE_RANGE_WAIT_AFTER);
gettimeofday(&now, NULL);
s = (now.tv_sec - start.tv_sec) + ((double) now.tv_usec - start.tv_usec)/ 1000000;
MB = index * (BUFSIZE >> 20);
MBps = MB;
if (s > 1)
MBps = MBps / s;
printf("%8lu.%03lu GB written in %5.2f (%lu MB/s) \r",
MB >> 10, (MB & 1023) * 1000 >> 10, s, MBps);
fflush(stdout);
}
close(fd);
printf("\n");
return 0;
}
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 16:57 ` Linus Torvalds
@ 2009-04-02 17:04 ` Linus Torvalds
2009-04-02 22:09 ` Jeff Garzik
2009-04-03 15:14 ` Mark Lord
0 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-04-02 17:04 UTC (permalink / raw)
To: Andrew Morton
Cc: David Rees, Janne Grunau, Lennart Sorensen, Theodore Tso,
Jesper Krogh, Linux Kernel Mailing List
On Thu, 2 Apr 2009, Linus Torvalds wrote:
>
> On Thu, 2 Apr 2009, Andrew Morton wrote:
> >
> > A suitable design for the streaming might be, every 4MB:
> >
> > - run sync_file_range(SYNC_FILE_RANGE_WRITE) to get the 4MB underway
> > to the disk
> >
> > - run fadvise(POSIX_FADV_DONTNEED) against the previous 4MB to
> > discard it from pagecache.
>
> Here's an example. I call it "overwrite.c" for obvious reasons.
Oh, except my example doesn't do the fadvise. Instead, I make sure to
throttle the writes and the old range with
SYNC_FILE_RANGE_WAIT_BEFORE|SYNC_FILE_RANGE_WRITE|SYNC_FILE_RANGE_WAIT_AFTER
which makes sure that the old pages are easily dropped by the VM - and
they will be, since they end up always being on the cold list.
I _wanted_ to add a SYNC_FILE_RANGE_DROP but I never bothered because this
particular load it didn't matter. The system was perfectly usable while
overwriting even huge disks because there was never more than 8MB of dirty
data in flight in the IO queues at any time.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 16:42 ` Andrew Morton
2009-04-02 16:57 ` Linus Torvalds
@ 2009-04-02 18:52 ` David Rees
1 sibling, 0 replies; 419+ messages in thread
From: David Rees @ 2009-04-02 18:52 UTC (permalink / raw)
To: Andrew Morton
Cc: Janne Grunau, Lennart Sorensen, Linus Torvalds, Theodore Tso,
Jesper Krogh, Linux Kernel Mailing List
On Thu, Apr 2, 2009 at 9:42 AM, Andrew Morton <akpm@linux-foundation.org> wrote:
> For any filesystem it is quite sensible for an application to manage
> the amount of dirty memory which the kernel is holding on its behalf,
> and based upon the application's knowledge of its future access
> patterns.
>
> But MythTV did it the wrong way.
>
> A suitable design for the streaming might be, every 4MB:
>
> - run sync_file_range(SYNC_FILE_RANGE_WRITE) to get the 4MB underway
> to the disk
>
> - run fadvise(POSIX_FADV_DONTNEED) against the previous 4MB to
> discard it from pagecache.
Yep, you're right. sync_file_range is perfect for what MythTV wants to do.
Though there are cases where MythTV can read data it wrote out not too
long ago, for example, when commercial flagging, so
fadvise(POSIX_FADV_DONTNEED) may not be warranted.
-Dave
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-30 22:00 ` Hans-Peter Jansen
2009-03-30 22:07 ` Arjan van de Ven
@ 2009-04-02 19:01 ` Andreas T.Auer
1 sibling, 0 replies; 419+ messages in thread
From: Andreas T.Auer @ 2009-04-02 19:01 UTC (permalink / raw)
To: Hans-Peter Jansen
Cc: Linus Torvalds, Mike Galbraith, Geert Uytterhoeven, linux-kernel,
arjan
On 31.03.2009 00:00 Hans-Peter Jansen wrote:
> I build kernel rpms from your git tree, and have a bunch of BUILDs lying
> around.
So you have a place where you have a git repository from which you copy
the source tree to rpms, which have no connection to the git anymore?
> Sure, I can always fetch the tarballs or fiddle with git, but why?
You may add a small script like this into .git/hooks/post-checkout:
-----
#!/bin/bash
if [ "$3" == 1 ]; then # don't do it for file checkouts
sed -ri "s/^(EXTRAVERSION =.*)/\1$(scripts/setlocalversion)/" Makefile
fi
-----
That will append the EXTRAVERSION automatically with what
CONFIG_LOCALVERSION_AUTO=y would append to the version string.
> Having a Makefile start commit allows to make sure with simplest tools,
> say "head Makefile" that a locally copied 2.6.29 tree is really a 2.6.29,
> and not something moving towards the next release. That's all, nothing
> less, nothing more, it's just a strong hint which blend is in the box.
If you are working on a tagged version, the EXTRAVERSION won't be
extended, on an untagged version it will have some ident for that
intermediate version
e.g.
git checkout master -> EXTRAVERSION =-07100-g833bb30
git checkout HEAD~1 -> EXTRAVERSION =-07099-g8b53ef3
git checkout v2.6.29 -> EXTRAVERSION =
git checkout HEAD~1 -> EXTRAVERSION = -rc8-00303-g0030864
git checkout v2.6.29-rc8 -> EXTRAVERSION = -rc8
In that way your copies of the source tree will have the EXTRAVERSION
set in the Makefile. You can detect an intermediate version easily in
the Makefile and you even can checkout that exact version from the git
tree later, if you need to. Or just make an diff between two rpms by
diffing the versions taken from the Makefiles e.g.
git diff 07099-g8b53ef3..07100-g833bb30
or
git diff 00303-g0030864..v2.6.29
Attention:
Of course, the Makefile is changed in your working tree as if you had
changed it yourself. Therefore you have to use "git checkout Makefile"
to revert the changes before you can checkout a different version from
the git tree.
This is only a hack and there might be a better way to do it, but maybe
it helps as a starting point in your special situation.
Andreas
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 16:29 ` David Rees
2009-04-02 16:42 ` Andrew Morton
@ 2009-04-02 21:42 ` Theodore Tso
1 sibling, 0 replies; 419+ messages in thread
From: Theodore Tso @ 2009-04-02 21:42 UTC (permalink / raw)
To: David Rees
Cc: Janne Grunau, Lennart Sorensen, Andrew Morton, Linus Torvalds,
Jesper Krogh, Linux Kernel Mailing List
On Thu, Apr 02, 2009 at 09:29:59AM -0700, David Rees wrote:
> On Thu, Apr 2, 2009 at 4:05 AM, Janne Grunau <j@jannau.net> wrote:
> >> Current kernels just have too much IO latency
> >> with ext3 it seems.
> >
> > MythTV calls fsync every few seconds on ongoing recordings to prevent
> > stalls due to large cache writebacks on ext3.
>
> Personally that is also one of my MythTV pet peeves. A hack added to
> MythTV to work around a crappy ext3 latency bug that also causes these
> large files to get heavily fragmented. That and the fact that yo have
> to patch MythTV to eliminate those forced fdatasyncs - there is no
> knob to turn it off if you're running MythTV on a filesystem which
> doesn't suffer from ext3's data=ordered fsync stalls.
So use XFS or ext4, and use fallocate() to get the disk blocks
allocated ahead of time. That completely avoids the fragmentation
problem, altogether. If you are using ext3 on a dedicated MythTV box,
I would certainly advise mounting with data=writeback, which will also
avoid the latency bug.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 11:05 ` Janne Grunau
2009-04-02 16:09 ` Andrew Morton
2009-04-02 16:29 ` David Rees
@ 2009-04-02 21:50 ` Lennart Sorensen
2009-04-03 15:07 ` Mark Lord
3 siblings, 0 replies; 419+ messages in thread
From: Lennart Sorensen @ 2009-04-02 21:50 UTC (permalink / raw)
To: Janne Grunau
Cc: Andrew Morton, Linus Torvalds, Theodore Tso, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Thu, Apr 02, 2009 at 01:05:32PM +0200, Janne Grunau wrote:
> You could convert it to xfs now. xfs is probably the file system with
> the lowest complaints usage ratio within the mythyv community.
> Using distinct discs for system and recording storage helps too.
Yeah, but I am not ready to give xfs another change yet. The nasty bugs
back in 2.6.8ish days still hurt. Locking up the filesystem when doing
rm -rf gcc-4.0 and having to repair it after a reboot was not fun.
> MythTV calls fsync every few seconds on ongoing recordings to prevent
> stalls due to large cache writebacks on ext3.
Yeah. What I have been seeing since 2.6.24 or 2.6.25 or so is that it
sometimes simply doesn't start playback on a file, and after 15 seconds
or so times out, and then you ask it to try again and it works the next
time just fine. Then at times it will stop responding to the keyboard or
remote in mythtv for up to 2 minutes, and then suddenly it will respond
to whatever you hit 2 minutes ago. Fortunately that doesn't seem to
happen that often. I was hoping to see if 2.6.28 helped that, but lirc
didn't seem to work on my remote with that version, so I went back to
2.6.26 again. I haven't tried 2.6.29 on it yet since I am currently
trying to fix the debian nvidia-driver build against the new kbuild only
much too clever linux-headers-2.6.29 package they have come up with.
I think I have got that figured out though so I should be able to upgrade
that now.
--
Len Sorensen
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 12:17 ` Theodore Tso
@ 2009-04-02 21:54 ` Lennart Sorensen
2009-04-02 23:27 ` Theodore Tso
0 siblings, 1 reply; 419+ messages in thread
From: Lennart Sorensen @ 2009-04-02 21:54 UTC (permalink / raw)
To: Theodore Tso, Andrew Morton, Linus Torvalds, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Thu, Apr 02, 2009 at 08:17:35AM -0400, Theodore Tso wrote:
> Well, ext4 will be an interim solution you can convert to first. It
> will be best with a backup/reformat/restore pass, better if you enable
> extents (at least for new files, but then you won't be able to go back
> to ext3), but you'll get improvements even if you just mount an ext3
> filesystem as ext4.
Well I did pickup a 1TB external USB/eSATA drive for pretty much such
a task. I wasn't sure if ext4 was ready or stable enough to play
with yet though.
--
Len Sorensen
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 16:33 ` David Rees
2009-04-02 16:46 ` Linus Torvalds
2009-04-02 16:51 ` Andrew Morton
@ 2009-04-02 21:56 ` Jeff Garzik
2 siblings, 0 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-04-02 21:56 UTC (permalink / raw)
To: David Rees
Cc: Andrew Morton, Janne Grunau, Lennart Sorensen, Linus Torvalds,
Theodore Tso, Jesper Krogh, Linux Kernel Mailing List
David Rees wrote:
> Either way - forcing the data to be synced to disk a couple times
> every second is a hack and causes fragmentation in filesystems without
> delayed allocation. Fragmentation really goes up if you are recording
> multiple shows at once.
Check out posix_fallocate(3). Not appropriate for every situation,
might eat additional disk bandwidth...
But if you are looking to combat fragmentation, pre-allocation (manual
or kernel-assisted) is a relevant technique. Plus, overwriting existing
data blocks is a LOT cheaper than appending to a file. fsync's more
quickly to disk, too.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 17:04 ` Linus Torvalds
@ 2009-04-02 22:09 ` Jeff Garzik
2009-04-02 22:42 ` Linus Torvalds
2009-04-03 15:14 ` Mark Lord
1 sibling, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-04-02 22:09 UTC (permalink / raw)
To: Linus Torvalds
Cc: Andrew Morton, David Rees, Janne Grunau, Lennart Sorensen,
Theodore Tso, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
>
> On Thu, 2 Apr 2009, Linus Torvalds wrote:
>> On Thu, 2 Apr 2009, Andrew Morton wrote:
>>> A suitable design for the streaming might be, every 4MB:
>>>
>>> - run sync_file_range(SYNC_FILE_RANGE_WRITE) to get the 4MB underway
>>> to the disk
>>>
>>> - run fadvise(POSIX_FADV_DONTNEED) against the previous 4MB to
>>> discard it from pagecache.
>> Here's an example. I call it "overwrite.c" for obvious reasons.
>
> Oh, except my example doesn't do the fadvise. Instead, I make sure to
> throttle the writes and the old range with
>
> SYNC_FILE_RANGE_WAIT_BEFORE|SYNC_FILE_RANGE_WRITE|SYNC_FILE_RANGE_WAIT_AFTER
>
> which makes sure that the old pages are easily dropped by the VM - and
> they will be, since they end up always being on the cold list.
Dumb VM question, then: I understand the logic behind the
write-throttling part (some of my own userland code does something
similar), but,
Does this imply adding fadvise to your overwrite.c example is (a) not
noticable, (b) potentially less efficient, (c) potentially more efficient?
Or IOW, does fadvise purely put pages on the cold list as your
sync_file_range incantation does, or something different?
Thanks,
Jeff, who is already using sync_file_range in
some server-esque userland projects
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 16:51 ` Andrew Morton
@ 2009-04-02 22:13 ` Jeff Garzik
0 siblings, 0 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-04-02 22:13 UTC (permalink / raw)
To: Andrew Morton
Cc: David Rees, Janne Grunau, Lennart Sorensen, Linus Torvalds,
Theodore Tso, Jesper Krogh, Linux Kernel Mailing List
Andrew Morton wrote:
> ext3 _should_ handle this case fairly well nowadays - I thought we fixed that.
> However it would probably benefit from having the size of the block reservation
> window increased - use ioctl(EXT3_IOC_SETRSVSZ). That way, each file gets a
> decent-sized hunk of disk "reserved" for its ongoing appending. Other
> files won't come in and intermingle their blocks with it.
How big of a chore would it be, to use this code to implement
i_op->fallocate() for ext3, I wonder?
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 22:09 ` Jeff Garzik
@ 2009-04-02 22:42 ` Linus Torvalds
2009-04-02 22:51 ` Andrew Morton
2009-04-03 2:01 ` Jeff Garzik
0 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-04-02 22:42 UTC (permalink / raw)
To: Jeff Garzik
Cc: Andrew Morton, David Rees, Janne Grunau, Lennart Sorensen,
Theodore Tso, Jesper Krogh, Linux Kernel Mailing List
On Thu, 2 Apr 2009, Jeff Garzik wrote:
>
> Dumb VM question, then: I understand the logic behind the write-throttling
> part (some of my own userland code does something similar), but,
>
> Does this imply adding fadvise to your overwrite.c example is (a) not
> noticable, (b) potentially less efficient, (c) potentially more efficient?
For _that_ particular load it was more of a "it wasn't the issue". I
wanted to get timely writeouts, because otherwise they bunch up and become
unmanageable (with even the people who are not actually writing end up
waiting for the writeouts).
Once the pages are clean, it just didn't matter. The VM did the balancing
right enough that I stopped caring. With other access patterns (ie if the
pages ended up on the active list) the situation might have been
different.
> Or IOW, does fadvise purely put pages on the cold list as your
> sync_file_range incantation does, or something different?
sync_file_range() doesn't actually put the pages on the inactive list, but
since the program was just a streaming one, they never even left it.
But no, fadvise actually tries to actually invalidate the pages (ie gets
rid of them, as opposed to moving them to the inactive list).
Another note: I literally used that program just for whole-disk testing,
so the behavior on an actual filesystem may or may not match. But I just
tested on ext3 on my desktop, and got
1.734 GB written in 30.38 (58 MB/s)
until I ^C'd it, and I didn't have any sound skipping or anything like
that. Of course, that's with those nice Intel SSD's, so that doesn't
really say anything.
Feel free to give it a try. It _should_ maintain good write speed while
not disturbing the system much. But I bet if you added the "fadvise()" it
would disturb things even _less_.
My only point is really that you _can_ do streaming writes well, but at
the same time I do think the kernel makes it too hard to do it with
"simple" applications. I'd love to get the same kind of high-speed
streaming behavior by just doing a simple "dd if=/dev/zero of=bigfile"
And I really think we should be able to.
And no, we clearly are _not_ able to do that now. I just tried with "dd",
and created a 1.7G file that way, and it was stuttering - even with my
nice SSD setup. I'm in my MUA writing this email (obviously), and in the
middle it just totally hung for about half a minute - because it was
obviously doing some fsync() for temporary saving etc while the "sync" was
going on.
With the "overwrite.c" thing, I do get short pauses when my MUA does
something, but they are not the kind of "oops, everything hung for several
seconds" kind.
(Full disclosure: 'alpine' with the local mbox on one disk - I _think_
that what alpine does is fsync() temporary save-files, but it might also
be checking email in the background - I have not looked at _why_ alpine
does an fsync, but it definitely does. And 5+ second delays are very
annoying when writing emails - much less half a minute).
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 22:42 ` Linus Torvalds
@ 2009-04-02 22:51 ` Andrew Morton
2009-04-02 23:00 ` Linus Torvalds
2009-04-03 2:01 ` Jeff Garzik
1 sibling, 1 reply; 419+ messages in thread
From: Andrew Morton @ 2009-04-02 22:51 UTC (permalink / raw)
To: Linus Torvalds; +Cc: jeff, drees76, j, lsorense, tytso, jesper, linux-kernel
On Thu, 2 Apr 2009 15:42:51 -0700 (PDT)
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> My only point is really that you _can_ do streaming writes well, but at
> the same time I do think the kernel makes it too hard to do it with
> "simple" applications. I'd love to get the same kind of high-speed
> streaming behavior by just doing a simple "dd if=/dev/zero of=bigfile"
>
> And I really think we should be able to.
The thing which has always worried me about trying to do smart
drop-behind is the cost of getting it wrong - and sometimes it _will_
get it wrong.
Someone out there will have an important application which linearly
writes a 1G file and then reads it all back in again. They will get
really upset when their runtime doubles.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 22:51 ` Andrew Morton
@ 2009-04-02 23:00 ` Linus Torvalds
0 siblings, 0 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-04-02 23:00 UTC (permalink / raw)
To: Andrew Morton; +Cc: jeff, drees76, j, lsorense, tytso, jesper, linux-kernel
On Thu, 2 Apr 2009, Andrew Morton wrote:
>
> The thing which has always worried me about trying to do smart
> drop-behind is the cost of getting it wrong - and sometimes it _will_
> get it wrong.
>
> Someone out there will have an important application which linearly
> writes a 1G file and then reads it all back in again. They will get
> really upset when their runtime doubles.
Yes. The good news is that it would be a pretty easy tunable to have a
"how soon do we writeback and how soon would we drop". And I do suspect
that _dropping_ should default to off (exactly because of the kind of
situation you bring up).
As mentioned, at least in my experience the VM is pretty good at dropping
the right pages anyway. It's when they are dirty or locked that we end up
stuttering (or when we do fsync). And "start background writeout earlier"
improves that case regardless of drop-behind.
But at the same time it is also unquestionably true that the current
behavior tends to maximize throughput performance. Delaying the writes as
long as possible is almost always the right thing for througput.
In my experience, at least on desktops, latency is a lot more important
than throughput is. And I don't think anybody wants to start the writes
_immediately_.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 21:54 ` Lennart Sorensen
@ 2009-04-02 23:27 ` Theodore Tso
2009-04-03 0:32 ` Lennart Sorensen
0 siblings, 1 reply; 419+ messages in thread
From: Theodore Tso @ 2009-04-02 23:27 UTC (permalink / raw)
To: Lennart Sorensen
Cc: Andrew Morton, Linus Torvalds, David Rees, Jesper Krogh,
Linux Kernel Mailing List
On Thu, Apr 02, 2009 at 05:54:42PM -0400, Lennart Sorensen wrote:
> On Thu, Apr 02, 2009 at 08:17:35AM -0400, Theodore Tso wrote:
> > Well, ext4 will be an interim solution you can convert to first. It
> > will be best with a backup/reformat/restore pass, better if you enable
> > extents (at least for new files, but then you won't be able to go back
> > to ext3), but you'll get improvements even if you just mount an ext3
> > filesystem as ext4.
>
> Well I did pickup a 1TB external USB/eSATA drive for pretty much such
> a task. I wasn't sure if ext4 was ready or stable enough to play
> with yet though.
To play with, definitely. For production use, I'll have to let you
make your own judgements. I've been using it on my laptop since July.
At the moment, there's only one bug which I'm very concerned about,
being worked here:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/330824
But a number of community distro's will be supporting it within the
next month or two. So it's definitely getting there. As we increase
the user base, we'll turn up more of the harder-to-reproduce bugs, but
hopefully we'll get them fixed quickly.
- Ted
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 23:27 ` Theodore Tso
@ 2009-04-03 0:32 ` Lennart Sorensen
0 siblings, 0 replies; 419+ messages in thread
From: Lennart Sorensen @ 2009-04-03 0:32 UTC (permalink / raw)
To: Theodore Tso, Andrew Morton, Linus Torvalds, David Rees,
Jesper Krogh, Linux Kernel Mailing List
On Thu, Apr 02, 2009 at 07:27:15PM -0400, Theodore Tso wrote:
> To play with, definitely. For production use, I'll have to let you
> make your own judgements. I've been using it on my laptop since July.
Well I made a 75GB ext4 just to store temporary virtual machine images
to play with. I won't be upset if I loose those.
> At the moment, there's only one bug which I'm very concerned about,
> being worked here:
>
> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/330824
>
> But a number of community distro's will be supporting it within the
> next month or two. So it's definitely getting there. As we increase
> the user base, we'll turn up more of the harder-to-reproduce bugs, but
> hopefully we'll get them fixed quickly.
Well pretty soon I will probably consider switching to that. btrfs sounds
neat and all, but I will wait for the disk format to get finalized first.
--
Len Sorensen
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 22:42 ` Linus Torvalds
2009-04-02 22:51 ` Andrew Morton
@ 2009-04-03 2:01 ` Jeff Garzik
2009-04-03 2:16 ` Linus Torvalds
2009-04-03 2:38 ` Trenton D. Adams
1 sibling, 2 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-04-03 2:01 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Andrew Morton, David Rees, Linux Kernel Mailing List
[-- Attachment #1: Type: text/plain, Size: 2707 bytes --]
Linus Torvalds wrote:
> Feel free to give it a try. It _should_ maintain good write speed while
> not disturbing the system much. But I bet if you added the "fadvise()" it
> would disturb things even _less_.
>
> My only point is really that you _can_ do streaming writes well, but at
> the same time I do think the kernel makes it too hard to do it with
> "simple" applications. I'd love to get the same kind of high-speed
> streaming behavior by just doing a simple "dd if=/dev/zero of=bigfile"
>
> And I really think we should be able to.
>
> And no, we clearly are _not_ able to do that now. I just tried with "dd",
> and created a 1.7G file that way, and it was stuttering - even with my
> nice SSD setup. I'm in my MUA writing this email (obviously), and in the
> middle it just totally hung for about half a minute - because it was
> obviously doing some fsync() for temporary saving etc while the "sync" was
> going on.
>
> With the "overwrite.c" thing, I do get short pauses when my MUA does
> something, but they are not the kind of "oops, everything hung for several
> seconds" kind.
Attached is my slightly-modified version of overwrite.c, modded to bound
the file size and to use fadvise().
On a 128GB, 3.0 Gbps no-name SATA SSD, x86-64, ext3, 2.6.29 vanilla kernel:
+ ./overwrite -b 3000 /spare/tmp/test.dat
writing 3000 buffers of size 8m
23.429 GB written in 1019.25 (23 MB/s)
real 17m0.211s
user 0m0.028s
sys 1m5.800s
+ ./overwrite -b 3000 -f /spare/tmp/test.dat
using fadvise()
writing 3000 buffers of size 8m
23.429 GB written in 1060.54 (22 MB/s)
real 17m41.446s
user 0m0.036s
sys 1m9.016s
The most interesting thing I found: the SSD does 80 MB/s for the first
~1 GB or so, then slows down dramatically. After ~2GB, it is down to 32
MB/s. After ~4GB, it reaches a steady speed around 23 MB/s.
--------------------------------------------------
On a 500GB, 3.0Gbps Seagate SATA drive, x86-64, ext3, 2.6.29 vanilla kernel:
+ ./overwrite -b 3000 /garz/tmp/test.dat
writing 3000 buffers of size 8m
23.429 GB written in 539.06 (44 MB/s)
real 9m0.348s
user 0m0.064s
sys 1m2.704s
+ ./overwrite -b 3000 -f /garz/tmp/test.dat
using fadvise()
writing 3000 buffers of size 8m
23.429 GB written in 535.08 (44 MB/s)
real 8m55.971s
user 0m0.044s
sys 1m6.600s
There is a similar performance fall-off for the Seagate, but much less
pronounced:
After 1GB: 52 MB/s
After 2GB: 44 MB/s
After 3GB: steady state
There appears to be a small increase in system time with "-f" (use
fadvise), but I'm guessing time(1) does not really give a good picture
of overall system time used, when you include background VM activity.
Jeff
[-- Attachment #2: overwrite.c --]
[-- Type: text/plain, Size: 2187 bytes --]
#define _GNU_SOURCE
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <ctype.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <linux/fs.h>
#define BUFSIZE (8*1024*1024ul)
static unsigned int maxbuf = ~0U;
static int do_fadvise;
static void parse_opt(int argc, char **argv)
{
int ch;
while (1) {
ch = getopt(argc, argv, "fb:");
if (ch == -1)
break;
switch (ch) {
case 'f':
do_fadvise = 1;
fprintf(stderr, "using fadvise()\n");
break;
case 'b':
if (atoi(optarg) > 1)
maxbuf = atoi(optarg);
else
fprintf(stderr, "invalid bufcount '%s'\n",
optarg);
break;
default:
fprintf(stderr, "invalid option 0%o (%c)\n",
ch,
isprint(ch) ? ch : '-');
break;
}
}
}
int main(int argc, char **argv)
{
static char buffer[BUFSIZE];
struct timeval start, now;
unsigned int index;
int fd;
parse_opt(argc, argv);
mlockall(MCL_CURRENT | MCL_FUTURE);
fd = open("/dev/urandom", O_RDONLY);
if (read(fd, buffer, BUFSIZE) != BUFSIZE) {
perror("/dev/urandom");
exit(1);
}
close(fd);
fd = open(argv[optind], O_RDWR | O_CREAT, 0666);
if (fd < 0) {
perror(argv[optind]);
exit(1);
}
if (maxbuf != ~0U)
fprintf(stderr, "writing %u buffers of size %lum\n",
maxbuf, BUFSIZE / (1024 * 1024ul));
gettimeofday(&start, NULL);
for (index = 0; index < maxbuf; index++) {
double s;
unsigned long MBps;
unsigned long MB;
if (write(fd, buffer, BUFSIZE) != BUFSIZE)
break;
sync_file_range(fd, index*BUFSIZE, BUFSIZE, SYNC_FILE_RANGE_WRITE);
if (index)
sync_file_range(fd, (index-1)*BUFSIZE, BUFSIZE, SYNC_FILE_RANGE_WAIT_BEFORE|SYNC_FILE_RANGE_WRITE|SYNC_FILE_RANGE_WAIT_AFTER);
if (do_fadvise)
posix_fadvise(fd, (index-1)*BUFSIZE, BUFSIZE,
POSIX_FADV_DONTNEED);
gettimeofday(&now, NULL);
s = (now.tv_sec - start.tv_sec) + ((double) now.tv_usec - start.tv_usec)/ 1000000;
MB = index * (BUFSIZE >> 20);
MBps = MB;
if (s > 1)
MBps = MBps / s;
printf("%8lu.%03lu GB written in %5.2f (%lu MB/s) \r",
MB >> 10, (MB & 1023) * 1000 >> 10, s, MBps);
fflush(stdout);
}
close(fd);
printf("\n");
return 0;
}
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 2:01 ` Jeff Garzik
@ 2009-04-03 2:16 ` Linus Torvalds
2009-04-03 3:05 ` Jeff Garzik
` (2 more replies)
2009-04-03 2:38 ` Trenton D. Adams
1 sibling, 3 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-04-03 2:16 UTC (permalink / raw)
To: Jeff Garzik; +Cc: Andrew Morton, David Rees, Linux Kernel Mailing List
On Thu, 2 Apr 2009, Jeff Garzik wrote:
>
> The most interesting thing I found: the SSD does 80 MB/s for the first ~1 GB
> or so, then slows down dramatically. After ~2GB, it is down to 32 MB/s.
> After ~4GB, it reaches a steady speed around 23 MB/s.
Are you sure that isn't an effect of double and triple indirect blocks
etc? The metadata updates get more complex for the deeper indirect blocks.
Or just our page cache lookup? Maybe our radix tree thing hits something
stupid. Although it sure shouldn't be _that_ noticeable.
> There is a similar performance fall-off for the Seagate, but much less
> pronounced:
> After 1GB: 52 MB/s
> After 2GB: 44 MB/s
> After 3GB: steady state
That would seem to indicate that it's something else than the disk speed.
> There appears to be a small increase in system time with "-f" (use fadvise),
> but I'm guessing time(1) does not really give a good picture of overall system
> time used, when you include background VM activity.
It would also be good to just compare it to something like
time sh -c "dd + sync"
(Which in my experience tends to fluctuate much more than the steady state
thing, so I suspect you'd need to do a few runs to make sure the numbers
are stable).
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 2:01 ` Jeff Garzik
2009-04-03 2:16 ` Linus Torvalds
@ 2009-04-03 2:38 ` Trenton D. Adams
2009-04-03 2:54 ` Jeff Garzik
1 sibling, 1 reply; 419+ messages in thread
From: Trenton D. Adams @ 2009-04-03 2:38 UTC (permalink / raw)
To: Jeff Garzik
Cc: Linus Torvalds, Andrew Morton, David Rees,
Linux Kernel Mailing List
On Thu, Apr 2, 2009 at 8:01 PM, Jeff Garzik <jeff@garzik.org> wrote:
> Linus Torvalds wrote:
> The most interesting thing I found: the SSD does 80 MB/s for the first ~1
> GB or so, then slows down dramatically. After ~2GB, it is down to 32 MB/s.
> After ~4GB, it reaches a steady speed around 23 MB/s.
Isn't that the kernel IO queue, and the dd averaging of transfer
speed? For example, once you hit the dirty ratio limit, that is when
it starts writing to disk. So, the first bit you'll see really fast
speeds, as it goes to memory, but it averages out over time to a
slower speed. As an example...
tdamac ~ # dd if=/dev/zero of=/tmp/bigfile bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00489853 s, 214 MB/s
tdamac ~ # dd if=/dev/zero of=/tmp/bigfile bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.242217 s, 43.3 MB/s
Those are with /proc/sys/vm/dirty_bytes set to 1M...
echo $((1024*1024*1)) > /proc/sys/vm/dirty_bytes
It's probably better to set it much higher though.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 2:38 ` Trenton D. Adams
@ 2009-04-03 2:54 ` Jeff Garzik
0 siblings, 0 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-04-03 2:54 UTC (permalink / raw)
To: Trenton D. Adams
Cc: Linus Torvalds, Andrew Morton, David Rees,
Linux Kernel Mailing List
Trenton D. Adams wrote:
> On Thu, Apr 2, 2009 at 8:01 PM, Jeff Garzik <jeff@garzik.org> wrote:
>> Linus Torvalds wrote:
>> The most interesting thing I found: the SSD does 80 MB/s for the first ~1
>> GB or so, then slows down dramatically. After ~2GB, it is down to 32 MB/s.
>> After ~4GB, it reaches a steady speed around 23 MB/s.
>
> Isn't that the kernel IO queue, and the dd averaging of transfer
> speed? For example, once you hit the dirty ratio limit, that is when
> it starts writing to disk. So, the first bit you'll see really fast
> speeds, as it goes to memory, but it averages out over time to a
> slower speed. As an example...
overwrite.c is a special program that does this, in a loop:
write(buffer-N) data to pagecache
start buffer-N write-out to storage
wait for buffer-(N-1) write-out to complete
It uses the sync_file_range() system call, which is like fsync() on
steroids, wearing cool sunglasses.
Regards,
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 2:16 ` Linus Torvalds
@ 2009-04-03 3:05 ` Jeff Garzik
2009-04-03 3:34 ` Linus Torvalds
2009-04-03 5:05 ` Nick Piggin
2009-04-03 8:31 ` Jeff Garzik
2 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-04-03 3:05 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Andrew Morton, David Rees, Linux Kernel Mailing List
Linus Torvalds wrote:
>
> On Thu, 2 Apr 2009, Jeff Garzik wrote:
>> The most interesting thing I found: the SSD does 80 MB/s for the first ~1 GB
>> or so, then slows down dramatically. After ~2GB, it is down to 32 MB/s.
>> After ~4GB, it reaches a steady speed around 23 MB/s.
>
> Are you sure that isn't an effect of double and triple indirect blocks
> etc? The metadata updates get more complex for the deeper indirect blocks.
> Or just our page cache lookup? Maybe our radix tree thing hits something
> stupid. Although it sure shouldn't be _that_ noticeable.
Indirect block overhead increased as the file grew to 23 GB, I'm sure...
I should probably re-test pre-creating the file, _then_ running
overwrite.c. That would at least guarantee the filesystem isn't
allocating new blocks and metadata.
I was really surprised the performance was so high at first, then fell
off so dramatically, on the SSD here.
Unfortunately I cannot trash these blkdevs, so the raw blkdev numbers
are not immediately measurable.
>> There is a similar performance fall-off for the Seagate, but much less
>> pronounced:
>> After 1GB: 52 MB/s
>> After 2GB: 44 MB/s
>> After 3GB: steady state
>
> That would seem to indicate that it's something else than the disk speed.
>
>> There appears to be a small increase in system time with "-f" (use fadvise),
>> but I'm guessing time(1) does not really give a good picture of overall system
>> time used, when you include background VM activity.
>
> It would also be good to just compare it to something like
>
> time sh -c "dd + sync"
I'll add that to the next run...
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 3:05 ` Jeff Garzik
@ 2009-04-03 3:34 ` Linus Torvalds
2009-04-03 11:32 ` Chris Mason
0 siblings, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-04-03 3:34 UTC (permalink / raw)
To: Jeff Garzik; +Cc: Andrew Morton, David Rees, Linux Kernel Mailing List
On Thu, 2 Apr 2009, Jeff Garzik wrote:
>
> I was really surprised the performance was so high at first, then fell off so
> dramatically, on the SSD here.
Well, one rather simple explanation is that if you hadn't been doing lots
of writes, then the background garbage collection on the Intel SSD gets
ahead of the game, and gives you lots of bursty nice write bandwidth due
to having a nicely compacted and pre-erased blocks.
Then, after lots of writing, all the pre-erased blocks are gone, and you
are down to a steady state where it needs to GC and erase blocks to make
room for new writes.
So that part doesn't suprise me per se. The Intel SSD's definitely
flucutate a bit timing-wise (but I love how they never degenerate to the
"ooh, that _really_ sucks" case that the other SSD's and the rotational
media I've seen does when you do random writes).
The fact that it also happens for the regular disk does imply that it's
not the _only_ thing going on, though.
> Unfortunately I cannot trash these blkdevs, so the raw blkdev numbers are not
> immediately measurable.
Hey, understood. I don't think raw block accesses are even all that
interesting. But you might try to write the file backwards, and see if you
see the same pattern.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 1:00 ` Ingo Molnar
@ 2009-04-03 4:06 ` Lennart Sorensen
2009-04-03 4:13 ` Linus Torvalds
2009-04-03 22:28 ` Jeff Moyer
0 siblings, 2 replies; 419+ messages in thread
From: Lennart Sorensen @ 2009-04-03 4:06 UTC (permalink / raw)
To: Ingo Molnar; +Cc: Andrew Morton, torvalds, tytso, drees76, jesper, linux-kernel
On Thu, Apr 02, 2009 at 03:00:44AM +0200, Ingo Molnar wrote:
> I'll test this (and the other suggestions) once i'm out of the merge
> window.
>
> I probably wont test that though ;-)
>
> Going back to v2.6.14 to do pre-mutex-merge performance tests was
> already quite a challenge on modern hardware.
Well after a day of running my mythtv box with anticipatiry rather than
the default cfq scheduler, it certainly looks a lot better. I haven't
seen any slowdowns, the disk activity light isn't on solidly (it just
flashes every couple of seconds instead), and it doesn't even mind
me lanuching bittornado on multiple torrents at the same time as two
recordings are taking place and some commercial flagging is taking place.
With cfq this would usually make the system unusable (and a Q6600 with
6GB ram should never be unresponsive in my opinion).
So so far I would rank anticipatory at about 1000x better than cfq for
my work load. It sure acts a lot more like it used to back in 2.6.18
times.
--
Len Sorensen
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 4:06 ` Lennart Sorensen
@ 2009-04-03 4:13 ` Linus Torvalds
2009-04-03 7:25 ` Jens Axboe
2009-04-03 22:28 ` Jeff Moyer
1 sibling, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-04-03 4:13 UTC (permalink / raw)
To: Lennart Sorensen, Jens Axboe
Cc: Ingo Molnar, Andrew Morton, tytso, drees76, jesper,
Linux Kernel Mailing List
Jens - remind us what the problem with AS was wrt CFQ?
There's some write throttling in CFQ, maybe it has some really broken
case?
Linus
On Fri, 3 Apr 2009, Lennart Sorensen wrote:
> On Thu, Apr 02, 2009 at 03:00:44AM +0200, Ingo Molnar wrote:
> > I'll test this (and the other suggestions) once i'm out of the merge
> > window.
> >
> > I probably wont test that though ;-)
> >
> > Going back to v2.6.14 to do pre-mutex-merge performance tests was
> > already quite a challenge on modern hardware.
>
> Well after a day of running my mythtv box with anticipatiry rather than
> the default cfq scheduler, it certainly looks a lot better. I haven't
> seen any slowdowns, the disk activity light isn't on solidly (it just
> flashes every couple of seconds instead), and it doesn't even mind
> me lanuching bittornado on multiple torrents at the same time as two
> recordings are taking place and some commercial flagging is taking place.
> With cfq this would usually make the system unusable (and a Q6600 with
> 6GB ram should never be unresponsive in my opinion).
>
> So so far I would rank anticipatory at about 1000x better than cfq for
> my work load. It sure acts a lot more like it used to back in 2.6.18
> times.
>
> --
> Len Sorensen
>
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 2:16 ` Linus Torvalds
2009-04-03 3:05 ` Jeff Garzik
@ 2009-04-03 5:05 ` Nick Piggin
2009-04-03 8:31 ` Jeff Garzik
2 siblings, 0 replies; 419+ messages in thread
From: Nick Piggin @ 2009-04-03 5:05 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jeff Garzik, Andrew Morton, David Rees, Linux Kernel Mailing List
On Friday 03 April 2009 13:16:08 Linus Torvalds wrote:
>
> On Thu, 2 Apr 2009, Jeff Garzik wrote:
> >
> > The most interesting thing I found: the SSD does 80 MB/s for the first ~1 GB
> > or so, then slows down dramatically. After ~2GB, it is down to 32 MB/s.
> > After ~4GB, it reaches a steady speed around 23 MB/s.
>
> Are you sure that isn't an effect of double and triple indirect blocks
> etc? The metadata updates get more complex for the deeper indirect blocks.
>
> Or just our page cache lookup? Maybe our radix tree thing hits something
> stupid. Although it sure shouldn't be _that_ noticeable.
Hmm, I don't know what you have in mind. page cache lookup should be
several orders of magnitude faster than a disk can write the pages out?
Dirty/writeout/clean cycle still has to lock the radix tree to change
tags, but that's really not going to be significantly contended (nor
does it synchronise with simple lookups).
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 4:13 ` Linus Torvalds
@ 2009-04-03 7:25 ` Jens Axboe
2009-04-03 8:15 ` Ingo Molnar
2009-04-03 14:21 ` Lennart Sorensen
0 siblings, 2 replies; 419+ messages in thread
From: Jens Axboe @ 2009-04-03 7:25 UTC (permalink / raw)
To: Linus Torvalds
Cc: Lennart Sorensen, Ingo Molnar, Andrew Morton, tytso, drees76,
jesper, Linux Kernel Mailing List
On Thu, Apr 02 2009, Linus Torvalds wrote:
>
> Jens - remind us what the problem with AS was wrt CFQ?
CFQ was just faster, plus it supported things like io priorities that AS
does not.
> There's some write throttling in CFQ, maybe it has some really broken
> case?
Who knows, it's definitely interesting and something to look into why AS
performs that differently to CFQ on his box. Lennart, can you give some
information on what file system + mount options, disk drive(s), etc? A
full dmesg would be good, too.
>
> Linus
>
> On Fri, 3 Apr 2009, Lennart Sorensen wrote:
>
> > On Thu, Apr 02, 2009 at 03:00:44AM +0200, Ingo Molnar wrote:
> > > I'll test this (and the other suggestions) once i'm out of the merge
> > > window.
> > >
> > > I probably wont test that though ;-)
> > >
> > > Going back to v2.6.14 to do pre-mutex-merge performance tests was
> > > already quite a challenge on modern hardware.
> >
> > Well after a day of running my mythtv box with anticipatiry rather than
> > the default cfq scheduler, it certainly looks a lot better. I haven't
> > seen any slowdowns, the disk activity light isn't on solidly (it just
> > flashes every couple of seconds instead), and it doesn't even mind
> > me lanuching bittornado on multiple torrents at the same time as two
> > recordings are taking place and some commercial flagging is taking place.
> > With cfq this would usually make the system unusable (and a Q6600 with
> > 6GB ram should never be unresponsive in my opinion).
> >
> > So so far I would rank anticipatory at about 1000x better than cfq for
> > my work load. It sure acts a lot more like it used to back in 2.6.18
> > times.
> >
> > --
> > Len Sorensen
> >
--
Jens Axboe
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 7:25 ` Jens Axboe
@ 2009-04-03 8:15 ` Ingo Molnar
2009-04-06 21:46 ` Bill Davidsen
2009-04-03 14:21 ` Lennart Sorensen
1 sibling, 1 reply; 419+ messages in thread
From: Ingo Molnar @ 2009-04-03 8:15 UTC (permalink / raw)
To: Jens Axboe, Nick Piggin
Cc: Linus Torvalds, Lennart Sorensen, Andrew Morton, tytso, drees76,
jesper, Linux Kernel Mailing List, Peter Zijlstra
* Jens Axboe <jens.axboe@oracle.com> wrote:
> On Thu, Apr 02 2009, Linus Torvalds wrote:
> > On Fri, 3 Apr 2009, Lennart Sorensen wrote:
> > > So so far I would rank anticipatory at about 1000x better than
> > > cfq for my work load. It sure acts a lot more like it used to
> > > back in 2.6.18 times.
[...]
> > Jens - remind us what the problem with AS was wrt CFQ?
>
> CFQ was just faster, plus it supported things like io priorities
> that AS does not.
btw., while pluggable IO schedulers have their upsides:
- They are easier to test during development and deployment.
- The uptick of a new, experimental IO scheduler is faster due to
easier availability.
- Regressions in the primary IO scheduler are easier to prove.
And the technical case for pluggable IO schedulers is much stronger
than the case for pluggable process schedulers:
- Persistent media has persistent workloads - and each workload has
different access patterns.
- The inefficiencies of mixed workloads on the same rotating media
have forced a clear separation of the 'one disk, one workload'
usage model, and has hammered this down people's minds. (Nobody
in their right mind is going to put a big Oracle and SAP
installation on the same [rotating] disk.)
- the 'NOP' scheduler makes sense on media with RAM-like
properties. 90% of CFQ's overhead is useless fluff on such media.
- [ These properties are not there for CPU schedulers: CPUs are
data processors not persistent data storage so they are
fundamentally shared by all workloads and have a lot less
persistent state - so mixing workloads on CPUs is common and
having one good scheduler is paramount. ]
At the risk of restarting the "to plug or not to plug" scheduler
flamewars ;-), the pluggable IO scheduler design has its very clear
downsides as well:
- 99% of users use CFQ, so any bugs in it will hit 99% of the Linux
community and we have not actually won much in terms of helping
real people out in the field.
- We are many years down the road of having replaced AS with the
supposedly better CFQ - and AS is still (or again?) markedly
better for some common tests.
- The 1% of testers/users who find that CFQ sucks and track it down
to CFQ can easily switch back to another IO scheduler: NOP or AS.
This dillutes the quality of _CFQ_, our crown jewel IO scheduler:
as it removes critical participiants from the pool of testers.
They might be only 1% of all Linux users, but they are the 1% who
make things happen upstream.
The result: even if CFQ sucks for some important workloads, the
combined social pressure is IMO never strong enough on upstream
to get our act together. While we might fix the bugs reported
here, the time to realize and address these bugs was way too
long. Power-users configure they way out and go the path of least
resistance and the rest suffers in silence.
- There's not even any feedback in the common case: people think
"hey, what I'm doing must be some oddball thing" and leave it at
that. Even if that oddball thing is not odd at all. Furthermore,
getting feedback _after_ someone has solved their problems by
switching to AS is a lot harder than getting feedback while they
are still hurting and cursing. Yesterday's solved problem is
boring and a lot less worthy to report than today's high-prio
ticket.
- It is _too easy_ to switch to AS, and shops with critical data
will not be as eager to report CFQ problems, and will not be as
eager to test experimental kernel patches that fix CFQ problems,
if they can switch to AS at the flip of a switch.
Ergo, i think pluggable designs for something as critical and as
central as IO scheduling has its clear downsides as it created two
mediocre schedulers:
- CFQ with all the modern features but performance problems on
certain workloads
- Anticipatory with legacy features only but works (much!) better
on some workloads.
... instead of giving us just a single well-working CFQ scheduler.
This, IMHO, in its current form, seems to trump the upsides of IO
schedulers.
So i do think that late during development (i.e. now), _years_ down
the line, we should make it gradually harder for people to use AS.
I'd not remove the AS code per se (it _is_ convenient to test it
without having to patch the kernel - especially now that we _know_
that there is a common problem, and there _are_ genuinely oddball
workloads where it might work better due to luck or design), but
still we should:
- Make it harder to configure in.
- Change the /sys switch-to-AS method to break any existing scripts
that switched CFQ to AS. Add a warning to the syslog if an old
script uses the old method and document the change prominetly but
do _not_ switch the IO scheduler to AS.
- If the user still switched to AS, emit some scary warning about
this being an obsolete IO scheduler, that it is not being tested
as widely as CFQ and hence might have bugs, and that if the user
still feels absolutely compelled to use it, to report his problem
to the appropriate mailing lists so that upstream can fix CFQ
instead.
By splintering the pool of testers and by removing testers from that
pool who are the most important in getting our default IO scheduler
tested we are not doing ourselves any favors.
Btw., my personal opinion is that even such extreme measures dont
work fully right due to social factors, so _my_ preferred choice for
doing such things is well known: to implement one good default
scheduler and to fix all bugs in it ;-)
For IO schedulers i think there's just two sane technical choices
for plugins: one good default scheduler (CFQ) or no IO scheduler at
all (NOP).
The rest is development fuzz or migration fuzz - and such fuzz needs
to be forced to zero after years of stabilization.
What do you think?
Ingo
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 2:16 ` Linus Torvalds
2009-04-03 3:05 ` Jeff Garzik
2009-04-03 5:05 ` Nick Piggin
@ 2009-04-03 8:31 ` Jeff Garzik
2009-04-03 8:35 ` Jeff Garzik
2 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-04-03 8:31 UTC (permalink / raw)
To: Linux Kernel Mailing List; +Cc: Linus Torvalds, Andrew Morton, David Rees
[-- Attachment #1: Type: text/plain, Size: 1901 bytes --]
Linus Torvalds wrote:
>
> On Thu, 2 Apr 2009, Jeff Garzik wrote:
>> The most interesting thing I found: the SSD does 80 MB/s for the first ~1 GB
>> or so, then slows down dramatically. After ~2GB, it is down to 32 MB/s.
>> After ~4GB, it reaches a steady speed around 23 MB/s.
>
> Are you sure that isn't an effect of double and triple indirect blocks
> etc? The metadata updates get more complex for the deeper indirect blocks.
>
> Or just our page cache lookup? Maybe our radix tree thing hits something
> stupid. Although it sure shouldn't be _that_ noticeable.
>
>> There is a similar performance fall-off for the Seagate, but much less
>> pronounced:
>> After 1GB: 52 MB/s
>> After 2GB: 44 MB/s
>> After 3GB: steady state
>
> That would seem to indicate that it's something else than the disk speed.
Attached are some additional tests using sync_file_range, dd, an SSD and
a normal SATA disk. The test program -- overwrite.c -- is unchanged
from my last posting, basically the same as Linus's except with
posix_fadvise()
Observations:
* the no-name SSD does seem to burst the first ~1GB of writes rapidly,
but degrades to a much lower sustained level, as observed before.
Repeated tests do not produce ~80 MB/s, only the first test, which lends
credence to the theory about background activity.
* For the SSD, overwrite is noticeably faster than dd.
* For the Seagate NCQ hard drive, dd is noticeably faster than overwrite.
* fadvise() appears to help, but mostly the results are either
inconclusive or lost in the noise: A slight increase in throughput, and
a slight increase in system time.
The test sequence for both SATA devices was the following:
3 x dd
3 x overwrite
3 x overwrite w/ fadvise(don't need)
System setup: Intel Nahalem(sp?) x86-64, ICH10, Fedora 10, ext3
filesystem (mounted defaults + noatime), 2.6.29 vanilla kernel.
Regards,
Jeff
[-- Attachment #2: test-output.txt --]
[-- Type: text/plain, Size: 2804 bytes --]
=======================================================
128GB, 3.0 Gbps no-name SATA SSD, x86-64, ext3, 2.6.29 vanilla
First dd(1) creates the file, others simply rewrite it.
=======================================================
24000+0 records in
24000+0 records out
25165824000 bytes (25 GB) copied, 917.599 s, 27.4 MB/s)
real 15m30.928s
user 0m0.016s
sys 1m3.924s
24000+0 records in
24000+0 records out
25165824000 bytes (25 GB) copied, 1056.92 s, 23.8 MB/s)
real 18m1.686s
user 0m0.016s
sys 1m4.816s
24000+0 records in
24000+0 records out
25165824000 bytes (25 GB) copied, 1044.25 s, 24.1 MB/s)
real 17m37.884s
user 0m0.020s
sys 1m4.300s
writing 2800 buffers of size 8m
21.867 GB written in 645.56 (34 MB/s)
real 10m46.502s
user 0m0.044s
sys 0m35.990s
writing 2800 buffers of size 8m
21.867 GB written in 634.55 (35 MB/s)
real 10m35.448s
user 0m0.036s
sys 0m36.466s
writing 2800 buffers of size 8m
21.867 GB written in 642.00 (34 MB/s)
real 10m42.890s
user 0m0.044s
sys 0m34.930s
using fadvise()
writing 2800 buffers of size 8m
21.867 GB written in 639.49 (35 MB/s)
real 10m40.384s
user 0m0.036s
sys 0m38.582s
using fadvise()
writing 2800 buffers of size 8m
21.867 GB written in 636.17 (35 MB/s)
real 10m37.061s
user 0m0.024s
sys 0m39.146s
using fadvise()
writing 2800 buffers of size 8m
21.867 GB written in 636.07 (35 MB/s)
real 10m37.003s
user 0m0.060s
sys 0m39.174s
=======================================================
500GB, 3.0Gbps Seagate SATA drive, x86-64, ext3, 2.6.29 vanilla
First dd(1) creates the file, others simply rewrite it.
=======================================================
24000+0 records in
24000+0 records out
25165824000 bytes (25 GB) copied, 494.797 s, 50.9 MB/s)
real 8m42.680s
user 0m0.016s
sys 0m58.176s
24000+0 records in
24000+0 records out
25165824000 bytes (25 GB) copied, 498.295 s, 50.5 MB/s)
real 8m27.505s
user 0m0.016s
sys 0m58.744s
24000+0 records in
24000+0 records out
25165824000 bytes (25 GB) copied, 492.145 s, 51.1 MB/s)
real 8m23.616s
user 0m0.016s
sys 0m59.064s
writing 2800 buffers of size 8m
21.867 GB written in 478.41 (46 MB/s)
real 7m59.690s
user 0m0.032s
sys 0m33.210s
writing 2800 buffers of size 8m
21.867 GB written in 513.54 (43 MB/s)
real 8m34.461s
user 0m0.048s
sys 0m33.342s
writing 2800 buffers of size 8m
21.867 GB written in 471.38 (47 MB/s)
real 7m52.641s
user 0m0.020s
sys 0m33.486s
using fadvise()
writing 2800 buffers of size 8m
21.867 GB written in 467.67 (47 MB/s)
real 7m48.756s
user 0m0.048s
sys 0m36.838s
using fadvise()
writing 2800 buffers of size 8m
21.867 GB written in 462.69 (48 MB/s)
real 7m43.597s
user 0m0.020s
sys 0m37.462s
using fadvise()
writing 2800 buffers of size 8m
21.867 GB written in 463.56 (48 MB/s)
real 7m44.472s
user 0m0.036s
sys 0m37.342s
[-- Attachment #3: run-test.sh --]
[-- Type: application/x-sh, Size: 481 bytes --]
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 8:31 ` Jeff Garzik
@ 2009-04-03 8:35 ` Jeff Garzik
0 siblings, 0 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-04-03 8:35 UTC (permalink / raw)
To: Linux Kernel Mailing List; +Cc: Linus Torvalds, Andrew Morton, David Rees
Jeff Garzik wrote:
> Attached are some additional tests using sync_file_range, dd, an SSD and
> a normal SATA disk. The test program -- overwrite.c -- is unchanged
> from my last posting, basically the same as Linus's except with
> posix_fadvise()
Oh, and, as run-test.sh shows, these tests were done with the file
pre-allocated and sync'd to disk.
The dd and overwrite invocations that follow the first dd invocation do
/not/ require the fs to allocate new blocks.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 3:34 ` Linus Torvalds
@ 2009-04-03 11:32 ` Chris Mason
2009-04-03 15:07 ` Linus Torvalds
0 siblings, 1 reply; 419+ messages in thread
From: Chris Mason @ 2009-04-03 11:32 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jeff Garzik, Andrew Morton, David Rees, Linux Kernel Mailing List
On Thu, 2009-04-02 at 20:34 -0700, Linus Torvalds wrote:
>
> On Thu, 2 Apr 2009, Jeff Garzik wrote:
> >
> > I was really surprised the performance was so high at first, then fell off so
> > dramatically, on the SSD here.
>
> Well, one rather simple explanation is that if you hadn't been doing lots
> of writes, then the background garbage collection on the Intel SSD gets
> ahead of the game, and gives you lots of bursty nice write bandwidth due
> to having a nicely compacted and pre-erased blocks.
>
> Then, after lots of writing, all the pre-erased blocks are gone, and you
> are down to a steady state where it needs to GC and erase blocks to make
> room for new writes.
>
> So that part doesn't suprise me per se. The Intel SSD's definitely
> flucutate a bit timing-wise (but I love how they never degenerate to the
> "ooh, that _really_ sucks" case that the other SSD's and the rotational
> media I've seen does when you do random writes).
>
23MB/s seems a bit low though, I'd try with O_DIRECT. ext3 doesn't do
writepages, and the ssd may be very sensitive to smaller writes (what
brand?)
> The fact that it also happens for the regular disk does imply that it's
> not the _only_ thing going on, though.
>
Jeff if you blktrace it I can make up a seekwatcher graph. My bet is
that pdflush is stuck writing the indirect blocks, and doing a ton of
seeks.
You could change the overwrite program to also do sync_file_range on the
block device ;)
-chris
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-03-27 5:13 ` Theodore Tso
2009-03-27 5:57 ` Matthew Garrett
@ 2009-04-03 12:39 ` Pavel Machek
1 sibling, 0 replies; 419+ messages in thread
From: Pavel Machek @ 2009-04-03 12:39 UTC (permalink / raw)
To: Theodore Tso, Matthew Garrett, Linus Torvalds, Andrew Morton,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Hi!
> > I'm utterly and screamingly bored of this "Blame userspace" attitude.
>
> I'm not blaming userspace. I'm blaming ourselves, for implementing an
> attractive nuisance, and not realizing that we had implemented an
> attractive nuisance; which years later, is also responsible for these
> latency problems, both with and without fsync() ---- *and* which have
> also traied people into believing that fsync() is always expensive,
> and must be avoided at all costs --- which had not previously been
> true!
Well... fsync is quite expensive. If your disk is down, it costs 3+
and 3J+. If your disk is up, it will only take 20msec+.
OTOH the rename trick on ext3 costs approximately nothing...
Imagine those desktops where they want windows layout
preserved. Having 30 second old layout is acceptable, loosing layout
altogether is not. If you add fsync to the window manager, user will
see those 3seconds+ delays, unless window manager gets multithreaded.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 7:25 ` Jens Axboe
2009-04-03 8:15 ` Ingo Molnar
@ 2009-04-03 14:21 ` Lennart Sorensen
2009-04-03 15:05 ` Mark Lord
1 sibling, 1 reply; 419+ messages in thread
From: Lennart Sorensen @ 2009-04-03 14:21 UTC (permalink / raw)
To: Jens Axboe
Cc: Linus Torvalds, Ingo Molnar, Andrew Morton, tytso, drees76,
jesper, Linux Kernel Mailing List
On Fri, Apr 03, 2009 at 09:25:07AM +0200, Jens Axboe wrote:
> CFQ was just faster, plus it supported things like io priorities that AS
> does not.
Faster at what? I am now wondering if switching the servers at work to
anticipatory will make them be more responsive when an rsnapshot run is
done (which it is every 3 hours). That would provide another data point.
It is currently very easy to tell when 10:00, 13:00, 16:00 and 19:00
comes around.
> Who knows, it's definitely interesting and something to look into why AS
> performs that differently to CFQ on his box. Lennart, can you give some
> information on what file system + mount options, disk drive(s), etc? A
> full dmesg would be good, too.
Well the system is setup like this:
Core 2 Quad Q6600 CPU (2.4GHz quad core).
Asus P5K mainboard (Intel P35 chipset)
6GB of ram
PVR500 dual NTSC tuner pci card
4 x 500GB WD5000AAKS SATA drives
25GB sda1 + sdb1 raid1 for /
25GB sdc1 + sdd1 raid1 for /home
remaining as sda2 + sdb2 + sdc2 + sdd2 raid5 for LVM.
1.2TB /var uses most of the LVM for mythtv storage and other data
6GB swap on LVM
94GB test volume on LVM (this uses ext4 but is hardly ever used)
all filesystems other than the test one are ext3
I run the ICH9 in AHCI mode since in IDE mode it doesn't do 64bit DMA
and the bounce buffers seemed to be having issues keeping up.
So normal use of the machine is:
mythtv-backend + mysql takes care of the mythtv recording work.
mythtv-frontend with output on an nvidia 8600GT (with proprietary drivers
in use)
commercial flagging and some transcode to mpeg4 for shows I keep for a
while run in parallel, since after all there are 4 cores to use.
folding@home running smp (using all 4 cores) at idle priority
birtornado running with many torrents running slowly seeding (I limit it
to 5kb/s up at all times due to monthly caps on transfers from my ISP,
s this way it can do something consistently without going over).
It probably has 300GB worth of files being seeded at the moment.
So when I first built the machine it ran really nicely, but I think that
was with 2.6.16 or 2.6.18 or so. It was a while ago. It worked quite
well, responsiveness was good, etc. 2.6.24 - 2.6.26 has been not
so great. Well until I switched the ioscheduler a couple of days ago.
So the behaviour with cfq is:
Disk light seems to be constantly on if there is any disk activity. iotop
can show a total io of maybe 1MB/s and the disk light is on constantly.
Rarely does it seem to make it over about 15MB/s of total io. Running a
sync command in the hopes that it will get a clue usually takes 20 to
30 seconds to complete. Running it again right after it completes can
take another 5 seconds if something is recording at the time. That seems
like a long time to handle a few seconds worth of MPEG2 NTSC video.
Starting playback on mythtv most often fails on the first attempt with a
15 second timeout, and then on the second attempt it will usually manage
to start playback. Sometimes it takes a 3rd attempt.
The behaviour with anticipatory is:
Disk light flashes every second or two for a moment, iotop shows much
faster completion of io, and I have even seen 75MB/s worth of IO when
restarting the bittornado client and having it hash check the data
of files. With cfq that never seemed to get over about 15MB/s and the
system would become unusable while doing it. mythtv reponsiveness is
instant like it used to be in the past. Running sync returns practically
immediately. Maybe 1second in worst case (which was with 2 shows
recording, 2 transcoding, and bittorrent hash checking).
Anyhow here is dmesg from bootup. The only thing I have seen in it since
boot (I unfortunately cleared it yesterday to see if any more messages
were happening since the change) are messages about mpeg2 data dropped
by ivtv because the system wasn't keeping up. I haven't seen any of
those messages since the switch. They were happening a lot before.
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version 2.6.26-1-amd64 (Debian 2.6.26-13) (waldi@debian.org) (gcc version 4.1.3 20080704 (prerelease) (Debian 4.1.2-24)) #1 SMP Sat Jan 10 17:57:00 UTC 2009
[ 0.000000] Command line: root=/dev/md0 ro quiet
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: 0000000000000000 - 000000000009ec00 (usable)
[ 0.000000] BIOS-e820: 000000000009ec00 - 00000000000a0000 (reserved)
[ 0.000000] BIOS-e820: 00000000000e4000 - 0000000000100000 (reserved)
[ 0.000000] BIOS-e820: 0000000000100000 - 00000000cff80000 (usable)
[ 0.000000] BIOS-e820: 00000000cff80000 - 00000000cff8e000 (ACPI data)
[ 0.000000] BIOS-e820: 00000000cff8e000 - 00000000cffe0000 (ACPI NVS)
[ 0.000000] BIOS-e820: 00000000cffe0000 - 00000000d0000000 (reserved)
[ 0.000000] BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
[ 0.000000] BIOS-e820: 00000000fff00000 - 0000000100000000 (reserved)
[ 0.000000] BIOS-e820: 0000000100000000 - 00000001b0000000 (usable)
[ 0.000000] Entering add_active_range(0, 0, 158) 0 entries of 3200 used
[ 0.000000] Entering add_active_range(0, 256, 851840) 1 entries of 3200 used
[ 0.000000] Entering add_active_range(0, 1048576, 1769472) 2 entries of 3200 used
[ 0.000000] max_pfn_mapped = 1769472
[ 0.000000] init_memory_mapping
[ 0.000000] DMI 2.4 present.
[ 0.000000] ACPI: RSDP 000FBCD0, 0024 (r2 ACPIAM)
[ 0.000000] ACPI: XSDT CFF80100, 0054 (r1 A_M_I_ OEMXSDT 3000805 MSFT 97)
[ 0.000000] ACPI: FACP CFF80290, 00F4 (r3 A_M_I_ OEMFACP 3000805 MSFT 97)
[ 0.000000] ACPI: DSDT CFF80440, 90AB (r1 A0871 A0871018 18 INTL 20060113)
[ 0.000000] ACPI: FACS CFF8E000, 0040
[ 0.000000] ACPI: APIC CFF80390, 006C (r1 A_M_I_ OEMAPIC 3000805 MSFT 97)
[ 0.000000] ACPI: MCFG CFF80400, 003C (r1 A_M_I_ OEMMCFG 3000805 MSFT 97)
[ 0.000000] ACPI: OEMB CFF8E040, 0081 (r1 A_M_I_ AMI_OEM 3000805 MSFT 97)
[ 0.000000] ACPI: HPET CFF894F0, 0038 (r1 A_M_I_ OEMHPET 3000805 MSFT 97)
[ 0.000000] ACPI: OSFR CFF89530, 00B0 (r1 A_M_I_ OEMOSFR 3000805 MSFT 97)
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at 0000000000000000-00000001b0000000
[ 0.000000] Entering add_active_range(0, 0, 158) 0 entries of 3200 used
[ 0.000000] Entering add_active_range(0, 256, 851840) 1 entries of 3200 used
[ 0.000000] Entering add_active_range(0, 1048576, 1769472) 2 entries of 3200 used
[ 0.000000] Bootmem setup node 0 0000000000000000-00000001b0000000
[ 0.000000] NODE_DATA [0000000000010000 - 0000000000014fff]
[ 0.000000] bootmap [0000000000015000 - 000000000004afff] pages 36
[ 0.000000] early res: 0 [0-fff] BIOS data page
[ 0.000000] early res: 1 [6000-7fff] TRAMPOLINE
[ 0.000000] early res: 2 [200000-675397] TEXT DATA BSS
[ 0.000000] early res: 3 [37775000-37fef581] RAMDISK
[ 0.000000] early res: 4 [9ec00-fffff] BIOS reserved
[ 0.000000] early res: 5 [8000-ffff] PGTABLE
[ 0.000000] [ffffe20000000000-ffffe20002dfffff] PMD -> [ffff810001200000-ffff810003ffffff] on node 0
[ 0.000000] [ffffe20003800000-ffffe20005ffffff] PMD -> [ffff81000c000000-ffff81000e7fffff] on node 0
[ 0.000000] Zone PFN ranges:
[ 0.000000] DMA 0 -> 4096
[ 0.000000] DMA32 4096 -> 1048576
[ 0.000000] Normal 1048576 -> 1769472
[ 0.000000] Movable zone start PFN for each node
[ 0.000000] early_node_map[3] active PFN ranges
[ 0.000000] 0: 0 -> 158
[ 0.000000] 0: 256 -> 851840
[ 0.000000] 0: 1048576 -> 1769472
[ 0.000000] On node 0 totalpages: 1572638
[ 0.000000] DMA zone: 56 pages used for memmap
[ 0.000000] DMA zone: 1251 pages reserved
[ 0.000000] DMA zone: 2691 pages, LIFO batch:0
[ 0.000000] DMA32 zone: 14280 pages used for memmap
[ 0.000000] DMA32 zone: 833464 pages, LIFO batch:31
[ 0.000000] Normal zone: 9856 pages used for memmap
[ 0.000000] Normal zone: 711040 pages, LIFO batch:31
[ 0.000000] Movable zone: 0 pages used for memmap
[ 0.000000] ACPI: PM-Timer IO Port: 0x808
[ 0.000000] ACPI: Local APIC address 0xfee00000
[ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
[ 0.000000] ACPI: IOAPIC (id[0x04] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 4, version 0, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.000000] ACPI: IRQ0 used by override.
[ 0.000000] ACPI: IRQ2 used by override.
[ 0.000000] ACPI: IRQ9 used by override.
[ 0.000000] Setting APIC routing to flat
[ 0.000000] ACPI: HPET id: 0xffffffff base: 0xfed00000
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] PM: Registered nosave memory: 000000000009e000 - 000000000009f000
[ 0.000000] PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
[ 0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000e4000
[ 0.000000] PM: Registered nosave memory: 00000000000e4000 - 0000000000100000
[ 0.000000] PM: Registered nosave memory: 00000000cff80000 - 00000000cff8e000
[ 0.000000] PM: Registered nosave memory: 00000000cff8e000 - 00000000cffe0000
[ 0.000000] PM: Registered nosave memory: 00000000cffe0000 - 00000000d0000000
[ 0.000000] PM: Registered nosave memory: 00000000d0000000 - 00000000fee00000
[ 0.000000] PM: Registered nosave memory: 00000000fee00000 - 00000000fee01000
[ 0.000000] PM: Registered nosave memory: 00000000fee01000 - 00000000fff00000
[ 0.000000] PM: Registered nosave memory: 00000000fff00000 - 0000000100000000
[ 0.000000] Allocating PCI resources starting at d4000000 (gap: d0000000:2ee00000)
[ 0.000000] SMP: Allowing 4 CPUs, 0 hotplug CPUs
[ 0.000000] PERCPU: Allocating 37168 bytes of per cpu data
[ 0.000000] NR_CPUS: 32, nr_cpu_ids: 4
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1547195
[ 0.000000] Policy zone: Normal
[ 0.000000] Kernel command line: root=/dev/md0 ro quiet
[ 0.000000] Initializing CPU#0
[ 0.000000] PID hash table entries: 4096 (order: 12, 32768 bytes)
[ 0.000000] Extended CMOS year: 2000
[ 0.000000] TSC calibrated against PM_TIMER
[ 0.000000] time.c: Detected 2405.452 MHz processor.
[ 0.004000] Console: colour VGA+ 80x25
[ 0.004000] console [tty0] enabled
[ 0.004000] Checking aperture...
[ 0.004000] Calgary: detecting Calgary via BIOS EBDA area
[ 0.004000] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
[ 0.004000] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[ 0.004000] Placing software IO TLB between 0x4000000 - 0x8000000
[ 0.004000] Memory: 6122760k/7077888k available (2226k kernel code, 167792k reserved, 1082k data, 392k init)
[ 0.004000] CPA: page pool initialized 1 of 1 pages preallocated
[ 0.004000] hpet clockevent registered
[ 0.083783] Calibrating delay using timer specific routine.. 4895.00 BogoMIPS (lpj=9790007)
[ 0.083821] Security Framework initialized
[ 0.083826] SELinux: Disabled at boot.
[ 0.083829] Capability LSM initialized
[ 0.084005] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)
[ 0.088005] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)
[ 0.088005] Mount-cache hash table entries: 256
[ 0.088185] Initializing cgroup subsys ns
[ 0.088190] Initializing cgroup subsys cpuacct
[ 0.088192] Initializing cgroup subsys devices
[ 0.088211] CPU: L1 I cache: 32K, L1 D cache: 32K
[ 0.088213] CPU: L2 cache: 4096K
[ 0.088215] CPU 0/0 -> Node 0
[ 0.088216] CPU: Physical Processor ID: 0
[ 0.088217] CPU: Processor Core ID: 0
[ 0.088224] CPU0: Thermal monitoring enabled (TM2)
[ 0.088225] using mwait in idle threads.
[ 0.089074] ACPI: Core revision 20080321
[ 0.148009] CPU0: Intel(R) Core(TM)2 Quad CPU @ 2.40GHz stepping 07
[ 0.148009] Using local APIC timer interrupts.
[ 0.152009] APIC timer calibration result 16704557
[ 0.152009] Detected 16.704 MHz APIC timer.
[ 0.152009] Booting processor 1/1 ip 6000
[ 0.160010] Initializing CPU#1
[ 0.160010] Calibrating delay using timer specific routine.. 4767.29 BogoMIPS (lpj=9534596)
[ 0.160010] CPU: L1 I cache: 32K, L1 D cache: 32K
[ 0.160010] CPU: L2 cache: 4096K
[ 0.160010] CPU 1/1 -> Node 0
[ 0.160010] CPU: Physical Processor ID: 0
[ 0.160010] CPU: Processor Core ID: 1
[ 0.160010] CPU1: Thermal monitoring enabled (TM2)
[ 0.240015] CPU1: Intel(R) Core(TM)2 Quad CPU @ 2.40GHz stepping 07
[ 0.240015] checking TSC synchronization [CPU#0 -> CPU#1]: passed.
[ 0.244015] Booting processor 2/2 ip 6000
[ 0.252015] Initializing CPU#2
[ 0.252015] Calibrating delay using timer specific routine.. 4811.00 BogoMIPS (lpj=9622005)
[ 0.252015] CPU: L1 I cache: 32K, L1 D cache: 32K
[ 0.252015] CPU: L2 cache: 4096K
[ 0.252015] CPU 2/2 -> Node 0
[ 0.252015] CPU: Physical Processor ID: 0
[ 0.252015] CPU: Processor Core ID: 2
[ 0.252015] CPU2: Thermal monitoring enabled (TM2)
[ 0.331645] CPU2: Intel(R) Core(TM)2 Quad CPU @ 2.40GHz stepping 07
[ 0.331664] checking TSC synchronization [CPU#0 -> CPU#2]: passed.
[ 0.336021] Booting processor 3/3 ip 6000
[ 0.347625] Initializing CPU#3
[ 0.347625] Calibrating delay using timer specific routine.. 4853.64 BogoMIPS (lpj=9707299)
[ 0.347625] CPU: L1 I cache: 32K, L1 D cache: 32K
[ 0.347625] CPU: L2 cache: 4096K
[ 0.347625] CPU 3/3 -> Node 0
[ 0.347625] CPU: Physical Processor ID: 0
[ 0.347625] CPU: Processor Core ID: 3
[ 0.347625] CPU3: Thermal monitoring enabled (TM2)
[ 0.424043] CPU3: Intel(R) Core(TM)2 Quad CPU @ 2.40GHz stepping 07
[ 0.424061] checking TSC synchronization [CPU#0 -> CPU#3]: passed.
[ 0.428290] Brought up 4 CPUs
[ 0.428290] Total of 4 processors activated (19326.95 BogoMIPS).
[ 0.428290] CPU0 attaching sched-domain:
[ 0.428290] domain 0: span 0-1
[ 0.428290] groups: 0 1
[ 0.428290] domain 1: span 0-3
[ 0.428290] groups: 0-1 2-3
[ 0.428290] domain 2: span 0-3
[ 0.428290] groups: 0-3
[ 0.428290] CPU1 attaching sched-domain:
[ 0.428290] domain 0: span 0-1
[ 0.428290] groups: 1 0
[ 0.428290] domain 1: span 0-3
[ 0.428290] groups: 0-1 2-3
[ 0.428290] domain 2: span 0-3
[ 0.428290] groups: 0-3
[ 0.428290] CPU2 attaching sched-domain:
[ 0.428290] domain 0: span 2-3
[ 0.428290] groups: 2 3
[ 0.428290] domain 1: span 0-3
[ 0.428290] groups: 2-3 0-1
[ 0.428290] domain 2: span 0-3
[ 0.428290] groups: 0-3
[ 0.428290] CPU3 attaching sched-domain:
[ 0.428290] domain 0: span 2-3
[ 0.428290] groups: 3 2
[ 0.428290] domain 1: span 0-3
[ 0.428290] groups: 2-3 0-1
[ 0.428290] domain 2: span 0-3
[ 0.428290] groups: 0-3
[ 0.428290] net_namespace: 1224 bytes
[ 0.428290] Booting paravirtualized kernel on bare hardware
[ 0.428290] NET: Registered protocol family 16
[ 0.428290] ACPI: bus type pci registered
[ 0.428290] PCI: MCFG configuration 0: base e0000000 segment 0 buses 0 - 255
[ 0.428290] PCI: Not using MMCONFIG.
[ 0.428290] PCI: Using configuration type 1 for base access
[ 0.428290] ACPI: EC: Look up EC in DSDT
[ 0.442132] ACPI: Interpreter enabled
[ 0.442134] ACPI: (supports S0 S1 S3 S4 S5)
[ 0.442171] ACPI: Using IOAPIC for interrupt routing
[ 0.442223] PCI: MCFG configuration 0: base e0000000 segment 0 buses 0 - 255
[ 0.445747] PCI: MCFG area at e0000000 reserved in ACPI motherboard resources
[ 0.457787] PCI: Using MMCONFIG at e0000000 - efffffff
[ 0.468107] ACPI: PCI Root Bridge [PCI0] (0000:00)
[ 0.468851] pci 0000:00:1f.0: quirk: region 0800-087f claimed by ICH6 ACPI/GPIO/TCO
[ 0.468855] pci 0000:00:1f.0: quirk: region 0480-04bf claimed by ICH6 GPIO
[ 0.470581] PCI: Transparent bridge - 0000:00:1e.0
[ 0.470809] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
[ 0.471412] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P2._PRT]
[ 0.471519] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P1._PRT]
[ 0.471717] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P8._PRT]
[ 0.471826] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P9._PRT]
[ 0.471946] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P4._PRT]
[ 0.489478] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 10 *11 12 14 15)
[ 0.489649] ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 *10 11 12 14 15)
[ 0.489793] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 *5 6 7 10 11 12 14 15)
[ 0.489907] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 *4 5 6 7 10 11 12 14 15)
[ 0.490021] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.
[ 0.490135] ACPI: PCI Interrupt Link [LNKF] (IRQs *3 4 5 6 7 10 11 12 14 15)
[ 0.491031] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 7 10 11 12 14 *15)
[ 0.491145] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 *7 10 11 12 14 15)
[ 0.494260] Linux Plug and Play Support v0.97 (c) Adam Belay
[ 0.494260] pnp: PnP ACPI init
[ 0.494260] ACPI: bus type pnp registered
[ 0.494260] pnp 00:00: parse allocated resources
[ 0.494260] pnp 00:00: add io 0xcf8-0xcff flags 0x1
[ 0.494260] pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active)
[ 0.494260] pnp 00:01: parse allocated resources
[ 0.494260] pnp 00:01: add mem 0xfed14000-0xfed19fff flags 0x1
[ 0.494260] pnp 00:01: PNP0c01: calling quirk_system_pci_resources+0x0/0x15c
[ 0.494260] pnp 00:01: Plug and Play ACPI device, IDs PNP0c01 (active)
[ 0.494260] pnp 00:02: parse allocated resources
[ 0.494260] pnp 00:02: add dma 4 flags 0x4
[ 0.494260] pnp 00:02: add io 0x0-0xf flags 0x1
[ 0.494260] pnp 00:02: add io 0x81-0x83 flags 0x1
[ 0.494260] pnp 00:02: add io 0x87-0x87 flags 0x1
[ 0.494260] pnp 00:02: add io 0x89-0x8b flags 0x1
[ 0.494260] pnp 00:02: add io 0x8f-0x8f flags 0x1
[ 0.494260] pnp 00:02: add io 0xc0-0xdf flags 0x1
[ 0.494260] pnp 00:02: Plug and Play ACPI device, IDs PNP0200 (active)
[ 0.494260] pnp 00:03: parse allocated resources
[ 0.494260] pnp 00:03: add io 0x70-0x71 flags 0x1
[ 0.494260] pnp 00:03: add irq 8 flags 0x1
[ 0.494260] pnp 00:03: Plug and Play ACPI device, IDs PNP0b00 (active)
[ 0.494260] pnp 00:04: parse allocated resources
[ 0.494260] pnp 00:04: add io 0x61-0x61 flags 0x1
[ 0.494260] pnp 00:04: Plug and Play ACPI device, IDs PNP0800 (active)
[ 0.494260] pnp 00:05: parse allocated resources
[ 0.494260] pnp 00:05: add io 0xf0-0xff flags 0x1
[ 0.494260] pnp 00:05: add irq 13 flags 0x1
[ 0.494260] pnp 00:05: Plug and Play ACPI device, IDs PNP0c04 (active)
[ 0.494260] pnp 00:06: parse allocated resources
[ 0.494260] pnp 00:06: add io 0x0-0xffffffffffffffff flags 0x10000001
[ 0.494260] pnp 00:06: add io 0x0-0xffffffffffffffff flags 0x10000001
[ 0.494260] pnp 00:06: add io 0x290-0x297 flags 0x1
[ 0.494260] pnp 00:06: PNP0c02: calling quirk_system_pci_resources+0x0/0x15c
[ 0.494260] pnp 00:06: Plug and Play ACPI device, IDs PNP0c02 (active)
[ 0.494260] pnp 00:07: parse allocated resources
[ 0.494260] pnp 00:07: add io 0x10-0x1f flags 0x1
[ 0.494260] pnp 00:07: add io 0x22-0x3f flags 0x1
[ 0.494260] pnp 00:07: add io 0x44-0x4d flags 0x1
[ 0.494260] pnp 00:07: add io 0x50-0x5f flags 0x1
[ 0.494260] pnp 00:07: add io 0x62-0x63 flags 0x1
[ 0.494260] pnp 00:07: add io 0x65-0x6f flags 0x1
[ 0.494260] pnp 00:07: add io 0x72-0x7f flags 0x1
[ 0.494260] pnp 00:07: add io 0x80-0x80 flags 0x1
[ 0.494260] pnp 00:07: add io 0x84-0x86 flags 0x1
[ 0.494260] pnp 00:07: add io 0x88-0x88 flags 0x1
[ 0.494260] pnp 00:07: add io 0x8c-0x8e flags 0x1
[ 0.494260] pnp 00:07: add io 0x90-0x9f flags 0x1
[ 0.494260] pnp 00:07: add io 0xa2-0xbf flags 0x1
[ 0.494260] pnp 00:07: add io 0xe0-0xef flags 0x1
[ 0.494260] pnp 00:07: add io 0x4d0-0x4d1 flags 0x1
[ 0.494260] pnp 00:07: add io 0x800-0x87f flags 0x1
[ 0.494260] pnp 00:07: add io 0x400-0x3ff flags 0x10000001
[ 0.494260] pnp 00:07: add io 0x480-0x4bf flags 0x1
[ 0.494260] pnp 00:07: add mem 0xfed1c000-0xfed1ffff flags 0x1
[ 0.494260] pnp 00:07: add mem 0xfed20000-0xfed3ffff flags 0x1
[ 0.494260] pnp 00:07: add mem 0xfed50000-0xfed8ffff flags 0x1
[ 0.494260] pnp 00:07: add mem 0xffa00000-0xffafffff flags 0x1
[ 0.494260] pnp 00:07: add mem 0xffb00000-0xffbfffff flags 0x1
[ 0.494260] pnp 00:07: add mem 0xffe00000-0xffefffff flags 0x1
[ 0.494260] pnp 00:07: add mem 0xfff00000-0xfffffffe flags 0x1
[ 0.494260] pnp 00:07: PNP0c02: calling quirk_system_pci_resources+0x0/0x15c
[ 0.494260] pnp 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
[ 0.494260] pnp 00:08: parse allocated resources
[ 0.494260] pnp 00:08: add mem 0xfed00000-0xfed003ff flags 0x0
[ 0.494260] pnp 00:08: Plug and Play ACPI device, IDs PNP0103 (active)
[ 0.494260] pnp 00:09: parse allocated resources
[ 0.494260] pnp 00:09: add mem 0xfec00000-0xfec00fff flags 0x0
[ 0.494260] pnp 00:09: add mem 0xfee00000-0xfee00fff flags 0x0
[ 0.494260] pnp 00:09: PNP0c02: calling quirk_system_pci_resources+0x0/0x15c
[ 0.494260] pnp 00:09: Plug and Play ACPI device, IDs PNP0c02 (active)
[ 0.494260] pnp 00:0a: parse allocated resources
[ 0.494260] pnp 00:0a: add io 0x60-0x60 flags 0x1
[ 0.494260] pnp 00:0a: add io 0x64-0x64 flags 0x1
[ 0.494260] pnp 00:0a: add irq 1 flags 0x1
[ 0.494260] pnp 00:0a: Plug and Play ACPI device, IDs PNP0303 PNP030b (active)
[ 0.494260] pnp 00:0b: parse allocated resources
[ 0.494260] pnp 00:0b: add mem 0xe0000000-0xefffffff flags 0x0
[ 0.494260] pnp 00:0b: PNP0c02: calling quirk_system_pci_resources+0x0/0x15c
[ 0.494260] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c02 (active)
[ 0.494260] pnp 00:0c: parse allocated resources
[ 0.494260] pnp 00:0c: add mem 0x0-0x9ffff flags 0x1
[ 0.494260] pnp 00:0c: add mem 0xc0000-0xcffff flags 0x0
[ 0.494260] pnp 00:0c: add mem 0xe0000-0xfffff flags 0x0
[ 0.494260] pnp 00:0c: add mem 0x100000-0xcfffffff flags 0x1
[ 0.494260] pnp 00:0c: add mem 0x0-0xffffffffffffffff flags 0x10000000
[ 0.494260] pnp 00:0c: PNP0c01: calling quirk_system_pci_resources+0x0/0x15c
[ 0.494260] pnp 00:0c: Plug and Play ACPI device, IDs PNP0c01 (active)
[ 0.494580] pnp: PnP ACPI: found 13 devices
[ 0.494582] ACPI: ACPI bus type pnp unregistered
[ 0.498261] usbcore: registered new interface driver usbfs
[ 0.498261] usbcore: registered new interface driver hub
[ 0.498261] usbcore: registered new device driver usb
[ 0.498261] PCI: Using ACPI for IRQ routing
[ 0.517244] PCI-GART: No AMD northbridge found.
[ 0.517250] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0
[ 0.517255] hpet0: 4 64-bit timers, 14318180 Hz
[ 0.518262] ACPI: RTC can wake from S4
[ 0.521171] Switched to high resolution mode on CPU 0
[ 0.521836] Switched to high resolution mode on CPU 1
[ 0.523590] Switched to high resolution mode on CPU 2
[ 0.524768] Switched to high resolution mode on CPU 3
[ 0.529149] pnp: the driver 'system' has been registered
[ 0.529161] system 00:01: iomem range 0xfed14000-0xfed19fff has been reserved
[ 0.529164] system 00:01: driver attached
[ 0.529173] system 00:06: ioport range 0x290-0x297 has been reserved
[ 0.529176] system 00:06: driver attached
[ 0.529182] system 00:07: ioport range 0x4d0-0x4d1 has been reserved
[ 0.529185] system 00:07: ioport range 0x800-0x87f has been reserved
[ 0.529188] system 00:07: ioport range 0x480-0x4bf has been reserved
[ 0.529192] system 00:07: iomem range 0xfed1c000-0xfed1ffff has been reserved
[ 0.529196] system 00:07: iomem range 0xfed20000-0xfed3ffff has been reserved
[ 0.529199] system 00:07: iomem range 0xfed50000-0xfed8ffff has been reserved
[ 0.529202] system 00:07: iomem range 0xffa00000-0xffafffff has been reserved
[ 0.529206] system 00:07: iomem range 0xffb00000-0xffbfffff has been reserved
[ 0.529209] system 00:07: iomem range 0xffe00000-0xffefffff has been reserved
[ 0.529213] system 00:07: iomem range 0xfff00000-0xfffffffe could not be reserved
[ 0.529215] system 00:07: driver attached
[ 0.529222] system 00:09: iomem range 0xfec00000-0xfec00fff has been reserved
[ 0.529226] system 00:09: iomem range 0xfee00000-0xfee00fff could not be reserved
[ 0.529229] system 00:09: driver attached
[ 0.529236] system 00:0b: iomem range 0xe0000000-0xefffffff could not be reserved
[ 0.529239] system 00:0b: driver attached
[ 0.529245] system 00:0c: iomem range 0x0-0x9ffff could not be reserved
[ 0.529249] system 00:0c: iomem range 0xc0000-0xcffff has been reserved
[ 0.529252] system 00:0c: iomem range 0xe0000-0xfffff could not be reserved
[ 0.529255] system 00:0c: iomem range 0x100000-0xcfffffff could not be reserved
[ 0.529258] system 00:0c: driver attached
[ 0.530253] PCI: Bridge: 0000:00:01.0
[ 0.530253] IO window: c000-cfff
[ 0.530253] MEM window: 0xfa000000-0xfe8fffff
[ 0.530253] PREFETCH window: 0x00000000d0000000-0x00000000dfffffff
[ 0.530253] PCI: Bridge: 0000:00:1c.0
[ 0.530253] IO window: disabled.
[ 0.530253] MEM window: disabled.
[ 0.530253] PREFETCH window: 0x00000000eff00000-0x00000000efffffff
[ 0.530253] PCI: Bridge: 0000:00:1c.4
[ 0.530253] IO window: d000-dfff
[ 0.530253] MEM window: 0xfea00000-0xfeafffff
[ 0.530253] PREFETCH window: disabled.
[ 0.530253] PCI: Bridge: 0000:00:1c.5
[ 0.530253] IO window: disabled.
[ 0.530253] MEM window: 0xfe900000-0xfe9fffff
[ 0.530253] PREFETCH window: disabled.
[ 0.530253] PCI: Bridge: 0000:05:02.0
[ 0.530253] IO window: disabled.
[ 0.530253] MEM window: disabled.
[ 0.530253] PREFETCH window: 0x00000000f0000000-0x00000000f7ffffff
[ 0.530253] PCI: Bridge: 0000:00:1e.0
[ 0.530253] IO window: e000-efff
[ 0.530253] MEM window: 0xfeb00000-0xfebfffff
[ 0.530253] PREFETCH window: 0x00000000f0000000-0x00000000f7ffffff
[ 0.530253] ACPI: PCI Interrupt 0000:00:01.0[A] -> GSI 16 (level, low) -> IRQ 16
[ 0.530253] PCI: Setting latency timer of device 0000:00:01.0 to 64
[ 0.530253] ACPI: PCI Interrupt 0000:00:1c.0[A] -> GSI 17 (level, low) -> IRQ 17
[ 0.530253] PCI: Setting latency timer of device 0000:00:1c.0 to 64
[ 0.530253] ACPI: PCI Interrupt 0000:00:1c.4[A] -> GSI 17 (level, low) -> IRQ 17
[ 0.530253] PCI: Setting latency timer of device 0000:00:1c.4 to 64
[ 0.530253] ACPI: PCI Interrupt 0000:00:1c.5[B] -> GSI 16 (level, low) -> IRQ 16
[ 0.530253] PCI: Setting latency timer of device 0000:00:1c.5 to 64
[ 0.530253] PCI: Setting latency timer of device 0000:00:1e.0 to 64
[ 0.530253] NET: Registered protocol family 2
[ 0.577123] IP route cache hash table entries: 262144 (order: 9, 2097152 bytes)
[ 0.577123] TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
[ 0.581992] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
[ 0.581992] TCP: Hash tables configured (established 524288 bind 65536)
[ 0.581992] TCP reno registered
[ 0.595229] NET: Registered protocol family 1
[ 0.595229] checking if image is initramfs... it is
[ 1.187643] Freeing initrd memory: 8681k freed
[ 1.199197] audit: initializing netlink socket (disabled)
[ 1.199197] type=2000 audit(1234114909.180:1): initialized
[ 1.199326] Total HugeTLB memory allocated, 0
[ 1.199326] VFS: Disk quotas dquot_6.5.1
[ 1.199326] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 1.199326] msgmni has been set to 11975
[ 1.199326] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
[ 1.199326] io scheduler noop registered
[ 1.199326] io scheduler anticipatory registered
[ 1.199326] io scheduler deadline registered
[ 1.199326] io scheduler cfq registered (default)
[ 1.199326] pci 0000:01:00.0: Boot video device
[ 1.199326] PCI: Setting latency timer of device 0000:00:01.0 to 64
[ 1.199326] assign_interrupt_mode Found MSI capability
[ 1.199326] Allocate Port Service[0000:00:01.0:pcie00]
[ 1.199326] Allocate Port Service[0000:00:01.0:pcie03]
[ 1.199326] PCI: Setting latency timer of device 0000:00:1c.0 to 64
[ 1.199326] assign_interrupt_mode Found MSI capability
[ 1.199326] Allocate Port Service[0000:00:1c.0:pcie00]
[ 1.199326] Allocate Port Service[0000:00:1c.0:pcie02]
[ 1.199326] Allocate Port Service[0000:00:1c.0:pcie03]
[ 1.199326] PCI: Setting latency timer of device 0000:00:1c.4 to 64
[ 1.199326] assign_interrupt_mode Found MSI capability
[ 1.199326] Allocate Port Service[0000:00:1c.4:pcie00]
[ 1.199326] Allocate Port Service[0000:00:1c.4:pcie02]
[ 1.199326] Allocate Port Service[0000:00:1c.4:pcie03]
[ 1.199326] PCI: Setting latency timer of device 0000:00:1c.5 to 64
[ 1.199326] assign_interrupt_mode Found MSI capability
[ 1.199326] Allocate Port Service[0000:00:1c.5:pcie00]
[ 1.199326] Allocate Port Service[0000:00:1c.5:pcie02]
[ 1.199326] Allocate Port Service[0000:00:1c.5:pcie03]
[ 1.376076] hpet_resources: 0xfed00000 is busy
[ 1.376076] Linux agpgart interface v0.103
[ 1.376076] Serial: 8250/16550 driver $Revision: 1.90 $ 4 ports, IRQ sharing enabled
[ 1.376076] pnp: the driver 'serial' has been registered
[ 1.376077] brd: module loaded
[ 1.376077] input: Macintosh mouse button emulation as /class/input/input0
[ 1.376077] pnp: the driver 'i8042 kbd' has been registered
[ 1.376077] i8042 kbd 00:0a: driver attached
[ 1.376077] pnp: the driver 'i8042 aux' has been registered
[ 1.376077] PNP: PS/2 Controller [PNP0303:PS2K] at 0x60,0x64 irq 1
[ 1.376077] PNP: PS/2 appears to have AUX port disabled, if this is incorrect please boot with i8042.nopnp
[ 1.376077] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 1.399968] mice: PS/2 mouse device common for all mice
[ 1.399968] pnp: the driver 'rtc_cmos' has been registered
[ 1.399968] rtc_cmos 00:03: rtc core: registered rtc_cmos as rtc0
[ 1.399968] rtc0: alarms up to one month, y3k
[ 1.399968] rtc_cmos 00:03: driver attached
[ 1.399968] cpuidle: using governor ladder
[ 1.399968] cpuidle: using governor menu
[ 1.399968] No iBFT detected.
[ 1.399968] TCP cubic registered
[ 1.399968] NET: Registered protocol family 17
[ 1.399968] registered taskstats version 1
[ 1.399968] rtc_cmos 00:03: setting system clock to 2009-02-08 17:41:49 UTC (1234114909)
[ 1.399968] Freeing unused kernel memory: 392k freed
[ 1.424678] input: AT Translated Set 2 keyboard as /class/input/input1
[ 1.477565] ACPI: SSDT CFF8E0D0, 01D2 (r1 AMI CPU1PM 1 INTL 20060113)
[ 1.480491] ACPI: ACPI0007:00 is registered as cooling_device0
[ 1.480491] ACPI: SSDT CFF8E2B0, 0143 (r1 AMI CPU2PM 1 INTL 20060113)
[ 1.480491] ACPI: ACPI0007:01 is registered as cooling_device1
[ 1.480491] ACPI: SSDT CFF8E400, 0143 (r1 AMI CPU3PM 1 INTL 20060113)
[ 1.480491] ACPI: ACPI0007:02 is registered as cooling_device2
[ 1.480491] ACPI: SSDT CFF8E550, 0143 (r1 AMI CPU4PM 1 INTL 20060113)
[ 1.480491] ACPI: ACPI0007:03 is registered as cooling_device3
[ 1.596989] USB Universal Host Controller Interface driver v3.0
[ 1.596989] ACPI: PCI Interrupt 0000:00:1a.0[A] -> GSI 16 (level, low) -> IRQ 16
[ 1.596989] PCI: Setting latency timer of device 0000:00:1a.0 to 64
[ 1.596989] uhci_hcd 0000:00:1a.0: UHCI Host Controller
[ 1.596989] uhci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 1
[ 1.596989] uhci_hcd 0000:00:1a.0: irq 16, io base 0x0000b800
[ 1.596989] usb usb1: configuration #1 chosen from 1 choice
[ 1.596989] hub 1-0:1.0: USB hub found
[ 1.596989] hub 1-0:1.0: 2 ports detected
[ 1.653713] Uniform Multi-Platform E-IDE driver
[ 1.653713] ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
[ 1.653713] No dock devices found.
[ 1.657713] SCSI subsystem initialized
[ 1.661847] libata version 3.00 loaded.
[ 1.704762] usb usb1: New USB device found, idVendor=1d6b, idProduct=0001
[ 1.704765] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 1.704767] usb usb1: Product: UHCI Host Controller
[ 1.704769] usb usb1: Manufacturer: Linux 2.6.26-1-amd64 uhci_hcd
[ 1.704770] usb usb1: SerialNumber: 0000:00:1a.0
[ 1.706649] ACPI: PCI Interrupt 0000:00:1a.1[B] -> GSI 21 (level, low) -> IRQ 21
[ 1.706649] PCI: Setting latency timer of device 0000:00:1a.1 to 64
[ 1.706649] uhci_hcd 0000:00:1a.1: UHCI Host Controller
[ 1.706649] uhci_hcd 0000:00:1a.1: new USB bus registered, assigned bus number 2
[ 1.706649] uhci_hcd 0000:00:1a.1: irq 21, io base 0x0000b880
[ 1.706649] usb usb2: configuration #1 chosen from 1 choice
[ 1.706649] hub 2-0:1.0: USB hub found
[ 1.706649] hub 2-0:1.0: 2 ports detected
[ 1.812149] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
[ 1.812153] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 1.812156] usb usb2: Product: UHCI Host Controller
[ 1.812158] usb usb2: Manufacturer: Linux 2.6.26-1-amd64 uhci_hcd
[ 1.812161] usb usb2: SerialNumber: 0000:00:1a.1
[ 1.813962] ACPI: PCI Interrupt 0000:00:1a.2[C] -> GSI 18 (level, low) -> IRQ 18
[ 1.813962] PCI: Setting latency timer of device 0000:00:1a.2 to 64
[ 1.813962] uhci_hcd 0000:00:1a.2: UHCI Host Controller
[ 1.813962] uhci_hcd 0000:00:1a.2: new USB bus registered, assigned bus number 3
[ 1.813962] uhci_hcd 0000:00:1a.2: irq 18, io base 0x0000bc00
[ 1.813962] usb usb3: configuration #1 chosen from 1 choice
[ 1.813962] hub 3-0:1.0: USB hub found
[ 1.813962] hub 3-0:1.0: 2 ports detected
[ 1.944886] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001
[ 1.944889] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 1.944892] usb usb3: Product: UHCI Host Controller
[ 1.944894] usb usb3: Manufacturer: Linux 2.6.26-1-amd64 uhci_hcd
[ 1.944896] usb usb3: SerialNumber: 0000:00:1a.2
[ 1.949961] ACPI: PCI Interrupt 0000:00:1a.7[C] -> GSI 18 (level, low) -> IRQ 18
[ 1.949961] PCI: Setting latency timer of device 0000:00:1a.7 to 64
[ 1.949961] ehci_hcd 0000:00:1a.7: EHCI Host Controller
[ 1.949961] ehci_hcd 0000:00:1a.7: new USB bus registered, assigned bus number 4
[ 1.957946] ehci_hcd 0000:00:1a.7: debug port 1
[ 1.957946] PCI: cache line size of 32 is not supported by device 0000:00:1a.7
[ 1.957946] ehci_hcd 0000:00:1a.7: irq 18, io mem 0xf9fffc00
[ 1.969966] ehci_hcd 0000:00:1a.7: USB 2.0 started, EHCI 1.00, driver 10 Dec 2004
[ 1.969966] usb usb4: configuration #1 chosen from 1 choice
[ 1.969966] hub 4-0:1.0: USB hub found
[ 1.969966] hub 4-0:1.0: 6 ports detected
[ 2.120490] usb usb4: New USB device found, idVendor=1d6b, idProduct=0002
[ 2.120494] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 2.120497] usb usb4: Product: EHCI Host Controller
[ 2.120499] usb usb4: Manufacturer: Linux 2.6.26-1-amd64 ehci_hcd
[ 2.120501] usb usb4: SerialNumber: 0000:00:1a.7
[ 2.122100] ACPI: PCI Interrupt 0000:00:1d.0[A] -> GSI 23 (level, low) -> IRQ 23
[ 2.122100] PCI: Setting latency timer of device 0000:00:1d.0 to 64
[ 2.122100] uhci_hcd 0000:00:1d.0: UHCI Host Controller
[ 2.122100] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 5
[ 2.122100] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000b080
[ 2.122100] usb usb5: configuration #1 chosen from 1 choice
[ 2.122100] hub 5-0:1.0: USB hub found
[ 2.122100] hub 5-0:1.0: 2 ports detected
[ 2.248513] usb usb5: New USB device found, idVendor=1d6b, idProduct=0001
[ 2.248516] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 2.248519] usb usb5: Product: UHCI Host Controller
[ 2.248521] usb usb5: Manufacturer: Linux 2.6.26-1-amd64 uhci_hcd
[ 2.248524] usb usb5: SerialNumber: 0000:00:1d.0
[ 2.250070] ACPI: PCI Interrupt 0000:00:1d.1[B] -> GSI 19 (level, low) -> IRQ 19
[ 2.250070] PCI: Setting latency timer of device 0000:00:1d.1 to 64
[ 2.250070] uhci_hcd 0000:00:1d.1: UHCI Host Controller
[ 2.250070] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 6
[ 2.250070] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000b400
[ 2.250070] usb usb6: configuration #1 chosen from 1 choice
[ 2.250070] hub 6-0:1.0: USB hub found
[ 2.250070] hub 6-0:1.0: 2 ports detected
[ 2.378035] usb usb6: New USB device found, idVendor=1d6b, idProduct=0001
[ 2.378039] usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 2.378042] usb usb6: Product: UHCI Host Controller
[ 2.378044] usb usb6: Manufacturer: Linux 2.6.26-1-amd64 uhci_hcd
[ 2.378046] usb usb6: SerialNumber: 0000:00:1d.1
[ 2.379606] ACPI: PCI Interrupt 0000:00:1d.2[C] -> GSI 18 (level, low) -> IRQ 18
[ 2.379606] PCI: Setting latency timer of device 0000:00:1d.2 to 64
[ 2.379606] uhci_hcd 0000:00:1d.2: UHCI Host Controller
[ 2.379606] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 7
[ 2.379606] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000b480
[ 2.379606] usb usb7: configuration #1 chosen from 1 choice
[ 2.379606] hub 7-0:1.0: USB hub found
[ 2.379606] hub 7-0:1.0: 2 ports detected
[ 2.515666] usb usb7: New USB device found, idVendor=1d6b, idProduct=0001
[ 2.515669] usb usb7: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 2.515672] usb usb7: Product: UHCI Host Controller
[ 2.515674] usb usb7: Manufacturer: Linux 2.6.26-1-amd64 uhci_hcd
[ 2.515677] usb usb7: SerialNumber: 0000:00:1d.2
[ 2.515688] ACPI: PCI Interrupt 0000:00:1d.7[A] -> GSI 23 (level, low) -> IRQ 23
[ 2.515688] PCI: Setting latency timer of device 0000:00:1d.7 to 64
[ 2.515688] ehci_hcd 0000:00:1d.7: EHCI Host Controller
[ 2.515688] ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 8
[ 2.531823] ehci_hcd 0000:00:1d.7: debug port 1
[ 2.531823] PCI: cache line size of 32 is not supported by device 0000:00:1d.7
[ 2.531823] ehci_hcd 0000:00:1d.7: irq 23, io mem 0xf9fff800
[ 2.567648] ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00, driver 10 Dec 2004
[ 2.567648] usb usb8: configuration #1 chosen from 1 choice
[ 2.567648] hub 8-0:1.0: USB hub found
[ 2.567648] hub 8-0:1.0: 6 ports detected
[ 2.671676] usb usb8: New USB device found, idVendor=1d6b, idProduct=0002
[ 2.671679] usb usb8: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 2.671682] usb usb8: Product: EHCI Host Controller
[ 2.671684] usb usb8: Manufacturer: Linux 2.6.26-1-amd64 ehci_hcd
[ 2.671687] usb usb8: SerialNumber: 0000:00:1d.7
[ 2.671863] ahci 0000:00:1f.2: version 3.0
[ 2.671863] ACPI: PCI Interrupt 0000:00:1f.2[B] -> GSI 22 (level, low) -> IRQ 22
[ 2.834274] usb 2-1: new low speed USB device using uhci_hcd and address 2
[ 2.968763] usb 2-1: configuration #1 chosen from 1 choice
[ 3.234256] usb 2-1: New USB device found, idVendor=051d, idProduct=0002
[ 3.234256] usb 2-1: New USB device strings: Mfr=3, Product=1, SerialNumber=2
[ 3.234256] usb 2-1: Product: Smart-UPS 1500 FW:601.3.D USB FW:8.1
[ 3.234256] usb 2-1: Manufacturer: American Power Conversion
[ 3.234256] usb 2-1: SerialNumber: AS0719222574
[ 3.897902] usbcore: registered new interface driver hiddev
[ 3.907306] ahci 0000:00:1f.2: AHCI 0001.0200 32 slots 4 ports 3 Gbps 0x33 impl SATA mode
[ 3.907311] ahci 0000:00:1f.2: flags: 64bit ncq sntf stag pm led clo pmp pio slum part
[ 3.907316] PCI: Setting latency timer of device 0000:00:1f.2 to 64
[ 3.911275] scsi0 : ahci
[ 3.911275] scsi1 : ahci
[ 3.911275] scsi2 : ahci
[ 3.911275] scsi3 : ahci
[ 3.911275] scsi4 : ahci
[ 3.911275] scsi5 : ahci
[ 3.911275] ata1: SATA max UDMA/133 abar m2048@0xf9ffe800 port 0xf9ffe900 irq 1275
[ 3.911275] ata2: SATA max UDMA/133 abar m2048@0xf9ffe800 port 0xf9ffe980 irq 1275
[ 3.911275] ata3: DUMMY
[ 3.911275] ata4: DUMMY
[ 3.911275] ata5: SATA max UDMA/133 abar m2048@0xf9ffe800 port 0xf9ffeb00 irq 1275
[ 3.911275] ata6: SATA max UDMA/133 abar m2048@0xf9ffe800 port 0xf9ffeb80 irq 1275
[ 4.522988] usb 7-1: new low speed USB device using uhci_hcd and address 2
[ 4.837395] usb 7-1: configuration #1 chosen from 1 choice
[ 4.837395] usb 7-1: New USB device found, idVendor=15c2, idProduct=ffdc
[ 4.837395] usb 7-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[ 5.029494] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 5.041541] ata1.00: HPA detected: current 976771055, native 976773168
[ 5.041541] ata1.00: ATA-7: WDC WD5000AAKS-00TMA0, 12.01C01, max UDMA/133
[ 5.041541] ata1.00: 976771055 sectors, multi 0: LBA48 NCQ (depth 31/32)
[ 5.041541] ata1.00: configured for UDMA/133
[ 5.188625] usb 7-2: new low speed USB device using uhci_hcd and address 3
[ 5.365313] usb 7-2: configuration #1 chosen from 1 choice
[ 5.368267] usb 7-2: New USB device found, idVendor=0b38, idProduct=0010
[ 5.368270] usb 7-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[ 5.871651] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 5.871651] ata2.00: ATA-8: WDC WD5000AAKS-00YGA0, 12.01C02, max UDMA/133
[ 5.871651] ata2.00: 976773168 sectors, multi 0: LBA48 NCQ (depth 31/32)
[ 5.875683] ata2.00: configured for UDMA/133
[ 6.378722] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 5.871651] ata5.00: ATA-8: WDC WD5000AAKS-00YGA0, 12.01C02, max UDMA/133
[ 5.871651] ata5.00: 976773168 sectors, multi 0: LBA48 NCQ (depth 31/32)
[ 6.391641] ata5.00: configured for UDMA/133
[ 6.942956] ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 6.947471] ata6.00: ATA-8: WDC WD5000AAKS-00YGA0, 12.01C02, max UDMA/133
[ 6.947471] ata6.00: 976773168 sectors, multi 0: LBA48 NCQ (depth 31/32)
[ 6.947471] ata6.00: configured for UDMA/133
[ 6.947471] scsi 0:0:0:0: Direct-Access ATA WDC WD5000AAKS-0 12.0 PQ: 0 ANSI: 5
[ 6.947471] scsi 1:0:0:0: Direct-Access ATA WDC WD5000AAKS-0 12.0 PQ: 0 ANSI: 5
[ 6.947471] scsi 4:0:0:0: Direct-Access ATA WDC WD5000AAKS-0 12.0 PQ: 0 ANSI: 5
[ 6.947471] scsi 5:0:0:0: Direct-Access ATA WDC WD5000AAKS-0 12.0 PQ: 0 ANSI: 5
[ 7.190467] JMB: IDE controller (0x197b:0x2363 rev 0x03) at PCI slot 0000:03:00.1
[ 7.190467] ACPI: PCI Interrupt 0000:03:00.1[B] -> GSI 17 (level, low) -> IRQ 17
[ 7.190467] JMB: 100% native mode on irq 17
[ 7.190467] ide0: BM-DMA at 0xd400-0xd407
[ 7.190467] ide1: BM-DMA at 0xd408-0xd40f
[ 7.190467] Probing IDE interface ide0...
[ 7.339303] hiddev96hidraw0: USB HID v1.10 Device [American Power Conversion Smart-UPS 1500 FW:601.3.D USB FW:8.1] on usb-0000:00:1a.1-1
[ 7.351770] input: HID 0b38:0010 as /class/input/input2
[ 7.371588] input,hidraw1: USB HID v1.10 Keyboard [HID 0b38:0010] on usb-0000:00:1d.2-2
[ 7.392467] input: HID 0b38:0010 as /class/input/input3
[ 7.408058] input,hidraw2: USB HID v1.10 Device [HID 0b38:0010] on usb-0000:00:1d.2-2
[ 7.408082] usbcore: registered new interface driver usbhid
[ 7.408085] usbhid: v2.6:USB HID core driver
[ 7.967911] hda: PLEXTOR DVDR PX-760A, ATAPI CD/DVD-ROM drive
[ 8.303604] hda: host max PIO5 wanted PIO255(auto-tune) selected PIO4
[ 8.303673] hda: UDMA/66 mode selected
[ 8.303744] Probing IDE interface ide1...
[ 9.232484] ide0 at 0xdc00-0xdc07,0xd882 on irq 17
[ 9.241583] ide1 at 0xd800-0xd807,0xd482 on irq 17
[ 9.241583] ACPI: PCI Interrupt 0000:03:00.0[A] -> GSI 16 (level, low) -> IRQ 16
[ 10.241657] ahci 0000:03:00.0: AHCI 0001.0000 32 slots 2 ports 3 Gbps 0x3 impl SATA mode
[ 10.241661] ahci 0000:03:00.0: flags: 64bit ncq pm led clo pmp pio slum part
[ 10.241668] PCI: Setting latency timer of device 0000:03:00.0 to 64
[ 10.245557] scsi6 : ahci
[ 10.245557] scsi7 : ahci
[ 10.245557] ata7: SATA max UDMA/133 abar m8192@0xfeafe000 port 0xfeafe100 irq 16
[ 10.245557] ata8: SATA max UDMA/133 abar m8192@0xfeafe000 port 0xfeafe180 irq 16
[ 10.906187] ata7: SATA link down (SStatus 0 SControl 300)
[ 11.292125] ata8: SATA link down (SStatus 0 SControl 300)
[ 11.320150] ACPI: PCI Interrupt 0000:02:00.0[A] -> GSI 17 (level, low) -> IRQ 17
[ 11.320150] PCI: Setting latency timer of device 0000:02:00.0 to 64
[ 11.320150] atl1 0000:02:00.0: version 2.1.3
[ 11.904008] Driver 'sd' needs updating - please use bus_type methods
[ 11.904008] sd 0:0:0:0: [sda] 976771055 512-byte hardware sectors (500107 MB)
[ 11.904009] sd 0:0:0:0: [sda] Write Protect is off
[ 11.904009] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[ 11.904009] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 11.904009] sd 0:0:0:0: [sda] 976771055 512-byte hardware sectors (500107 MB)
[ 11.904009] sd 0:0:0:0: [sda] Write Protect is off
[ 11.904009] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[ 11.904009] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 11.904009] sda:<6>ACPI: PCI Interrupt 0000:05:03.0[A] -> GSI 16 (level, low) -> IRQ 16
[ 11.918139] sda1 sda2
[ 11.918251] sd 0:0:0:0: [sda] Attached SCSI disk
[ 11.918316] sd 1:0:0:0: [sdb] 976773168 512-byte hardware sectors (500108 MB)
[ 11.918329] sd 1:0:0:0: [sdb] Write Protect is off
[ 11.918331] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[ 11.918350] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 11.918391] sd 1:0:0:0: [sdb] 976773168 512-byte hardware sectors (500108 MB)
[ 11.918402] sd 1:0:0:0: [sdb] Write Protect is off
[ 11.918404] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[ 11.918423] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 11.918425] sdb: sdb1 sdb2
[ 11.935044] sd 1:0:0:0: [sdb] Attached SCSI disk
[ 11.935096] sd 4:0:0:0: [sdc] 976773168 512-byte hardware sectors (500108 MB)
[ 11.935108] sd 4:0:0:0: [sdc] Write Protect is off
[ 11.935110] sd 4:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[ 11.935129] sd 4:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 11.935164] sd 4:0:0:0: [sdc] 976773168 512-byte hardware sectors (500108 MB)
[ 11.935175] sd 4:0:0:0: [sdc] Write Protect is off
[ 11.935177] sd 4:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[ 11.935196] sd 4:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 11.935198] sdc: sdc1 sdc2
[ 11.943259] sd 4:0:0:0: [sdc] Attached SCSI disk
[ 11.943259] sd 5:0:0:0: [sdd] 976773168 512-byte hardware sectors (500108 MB)
[ 11.943259] sd 5:0:0:0: [sdd] Write Protect is off
[ 11.943259] sd 5:0:0:0: [sdd] Mode Sense: 00 3a 00 00
[ 11.943259] sd 5:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 11.943259] sd 5:0:0:0: [sdd] 976773168 512-byte hardware sectors (500108 MB)
[ 11.943259] sd 5:0:0:0: [sdd] Write Protect is off
[ 11.943259] sd 5:0:0:0: [sdd] Mode Sense: 00 3a 00 00
[ 11.943259] sd 5:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 11.943259] sdd: sdd1 sdd2
[ 11.960964] sd 5:0:0:0: [sdd] Attached SCSI disk
[ 11.967109] ohci1394: fw-host0: OHCI-1394 1.1 (PCI): IRQ=[16] MMIO=[febff800-febfffff] Max Packet=[2048] IR/IT contexts=[4/8]
[ 11.994057] hda: ATAPI 40X DVD-ROM DVD-R CD-R/RW drive, 2048kB Cache
[ 11.994057] Uniform CD-ROM driver Revision: 3.20
[ 12.132930] md: raid1 personality registered for level 1
[ 12.136816] xor: automatically using best checksumming function: generic_sse
[ 12.157605] generic_sse: 8587.000 MB/sec
[ 12.157605] xor: using function: generic_sse (8587.000 MB/sec)
[ 12.157605] async_tx: api initialized (async)
[ 12.226046] raid6: int64x1 2255 MB/s
[ 12.294046] raid6: int64x2 3042 MB/s
[ 12.362086] raid6: int64x4 2669 MB/s
[ 12.430086] raid6: int64x8 1725 MB/s
[ 12.498087] raid6: sse2x1 3823 MB/s
[ 12.566087] raid6: sse2x2 4355 MB/s
[ 12.634088] raid6: sse2x4 7250 MB/s
[ 12.634088] raid6: using algorithm sse2x4 (7250 MB/s)
[ 12.634088] md: raid6 personality registered for level 6
[ 12.634088] md: raid5 personality registered for level 5
[ 12.634088] md: raid4 personality registered for level 4
[ 12.638108] md: md0 stopped.
[ 12.647060] md: bind<sdb1>
[ 12.647060] md: bind<sda1>
[ 12.658281] raid1: raid set md0 active with 2 out of 2 mirrors
[ 12.658281] md: md1 stopped.
[ 12.707292] md: bind<sdd1>
[ 12.707129] md: bind<sdc1>
[ 12.716865] raid1: raid set md1 active with 2 out of 2 mirrors
[ 12.716865] md: md2 stopped.
[ 12.778826] md: bind<sdb2>
[ 12.779362] md: bind<sdc2>
[ 12.781148] md: bind<sdd2>
[ 12.781148] md: bind<sda2>
[ 12.789087] raid5: device sda2 operational as raid disk 0
[ 12.789089] raid5: device sdd2 operational as raid disk 3
[ 12.789091] raid5: device sdc2 operational as raid disk 2
[ 12.789092] raid5: device sdb2 operational as raid disk 1
[ 12.791697] raid5: allocated 4274kB for md2
[ 12.791697] raid5: raid level 5 set md2 active with 4 out of 4 devices, algorithm 2
[ 12.791697] RAID5 conf printout:
[ 12.791697] --- rd:4 wd:4
[ 12.791697] disk 0, o:1, dev:sda2
[ 12.791697] disk 1, o:1, dev:sdb2
[ 12.791697] disk 2, o:1, dev:sdc2
[ 12.791697] disk 3, o:1, dev:sdd2
[ 12.903039] device-mapper: uevent: version 1.0.3
[ 12.903039] device-mapper: ioctl: 4.13.0-ioctl (2007-10-18) initialised: dm-devel@redhat.com
[ 12.964067] PM: Starting manual resume from disk
[ 13.055693] kjournald starting. Commit interval 5 seconds
[ 13.055693] EXT3-fs: mounted filesystem with ordered data mode.
[ 13.286980] ieee1394: Host added: ID:BUS[0-00:1023] GUID[0011d800018f90fd]
[ 14.954097] udev: starting version 136
[ 14.954097] udev: deprecated sysfs layout; update the kernel or disable CONFIG_SYSFS_DEPRECATED; some udev features will not work correctly
[ 15.403737] input: Power Button (FF) as /class/input/input4
[ 15.448661] ACPI: Power Button (FF) [PWRF]
[ 15.448758] input: Power Button (CM) as /class/input/input5
[ 15.515258] ACPI: Power Button (CM) [PWRB]
[ 15.557826] input: PC Speaker as /class/input/input6
[ 15.585410] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 15.588707] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
[ 15.640968] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.03 (30-Apr-2008)
[ 15.641047] iTCO_wdt: Found a ICH9 TCO device (Version=2, TCOBASE=0x0860)
[ 15.641078] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
[ 15.676124] ACPI: PCI Interrupt 0000:00:1f.3[C] -> GSI 18 (level, low) -> IRQ 18
[ 15.837745] Linux video capture interface: v2.00
[ 15.918394] ivtv: Start initialization, version 1.3.0
[ 15.918394] ivtv0: Initializing card #0
[ 15.918394] ivtv0: Autodetected Hauppauge card (cx23416 based)
[ 15.918394] ACPI: PCI Interrupt 0000:06:08.0[A] -> GSI 18 (level, low) -> IRQ 18
[ 15.970787] tveeprom 1-0050: Hauppauge model 23552, rev D492, serial# 9396298
[ 15.970790] tveeprom 1-0050: tuner model is Philips FQ1236A MK4 (idx 92, type 57)
[ 15.970792] tveeprom 1-0050: TV standards NTSC(M) (eeprom 0x08)
[ 15.970794] tveeprom 1-0050: second tuner model is Philips TEA5768HL FM Radio (idx 101, type 62)
[ 15.970796] tveeprom 1-0050: audio processor is CX25843 (idx 37)
[ 15.970798] tveeprom 1-0050: decoder processor is CX25843 (idx 30)
[ 15.970800] tveeprom 1-0050: has radio, has no IR receiver, has no IR transmitter
[ 15.970802] ivtv0: Autodetected WinTV PVR 500 (unit #1)
[ 16.025414] cx25840 1-0044: cx25843-23 found @ 0x88 (ivtv i2c driver #0)
[ 16.075968] ACPI: PCI Interrupt 0000:00:1b.0[A] -> GSI 22 (level, low) -> IRQ 22
[ 16.075968] PCI: Setting latency timer of device 0000:00:1b.0 to 64
[ 16.085710] tuner 1-0060: chip found @ 0xc0 (ivtv i2c driver #0)
[ 16.085710] tea5767 1-0060: type set to Philips TEA5767HN FM Radio
[ 16.107789] tuner 1-0043: chip found @ 0x86 (ivtv i2c driver #0)
[ 16.111830] hda_codec: Unknown model for ALC883, trying auto-probe from BIOS...
[ 16.128506] tda9887 1-0043: creating new instance
[ 16.128506] tda9887 1-0043: tda988[5/6/7] found
[ 16.128506] tuner 1-0061: chip found @ 0xc2 (ivtv i2c driver #0)
[ 16.128506] wm8775 1-001b: chip found @ 0x36 (ivtv i2c driver #0)
[ 16.154237] tuner-simple 1-0061: creating new instance
[ 16.154237] tuner-simple 1-0061: type set to 57 (Philips FQ1236A MK4)
[ 16.162279] ivtv0: Registered device video0 for encoder MPG (4096 kB)
[ 16.162279] ivtv0: Registered device video32 for encoder YUV (2048 kB)
[ 16.162279] ivtv0: Registered device vbi0 for encoder VBI (1024 kB)
[ 16.162279] ivtv0: Registered device video24 for encoder PCM (320 kB)
[ 16.162279] ivtv0: Registered device radio0 for encoder radio
[ 16.162279] ivtv0: Initialized card #0: WinTV PVR 500 (unit #1)
[ 16.162279] ivtv1: Initializing card #1
[ 16.162279] ivtv1: Autodetected Hauppauge card (cx23416 based)
[ 16.162279] ACPI: PCI Interrupt 0000:06:09.0[A] -> GSI 19 (level, low) -> IRQ 19
[ 16.214206] tveeprom 2-0050: Hauppauge model 23552, rev D492, serial# 9396298
[ 16.214206] tveeprom 2-0050: tuner model is Philips FQ1236A MK4 (idx 92, type 57)
[ 16.214206] tveeprom 2-0050: TV standards NTSC(M) (eeprom 0x08)
[ 16.214206] tveeprom 2-0050: second tuner model is Philips TEA5768HL FM Radio (idx 101, type 62)
[ 16.214206] tveeprom 2-0050: audio processor is CX25843 (idx 37)
[ 16.214206] tveeprom 2-0050: decoder processor is CX25843 (idx 30)
[ 16.214206] tveeprom 2-0050: has radio, has no IR receiver, has no IR transmitter
[ 16.214206] ivtv1: Correcting tveeprom data: no radio present on second unit
[ 16.214206] ivtv1: Autodetected WinTV PVR 500 (unit #2)
[ 16.244731] cx25840 2-0044: cx25843-23 found @ 0x88 (ivtv i2c driver #1)
[ 16.254001] tuner 2-0043: chip found @ 0x86 (ivtv i2c driver #1)
[ 16.254018] tda9887 2-0043: creating new instance
[ 16.254020] tda9887 2-0043: tda988[5/6/7] found
[ 16.257949] tuner 2-0061: chip found @ 0xc2 (ivtv i2c driver #1)
[ 16.257969] wm8775 2-001b: chip found @ 0x36 (ivtv i2c driver #1)
[ 16.266692] tuner-simple 2-0061: creating new instance
[ 16.266694] tuner-simple 2-0061: type set to 57 (Philips FQ1236A MK4)
[ 16.276481] ivtv1: Registered device video1 for encoder MPG (4096 kB)
[ 16.276508] ivtv1: Registered device video33 for encoder YUV (2048 kB)
[ 16.276525] ivtv1: Registered device vbi1 for encoder VBI (1024 kB)
[ 16.276542] ivtv1: Registered device video25 for encoder PCM (320 kB)
[ 16.276544] ivtv1: Initialized card #1: WinTV PVR 500 (unit #2)
[ 16.276570] ivtv: End initialization
[ 297.202646] EXT3 FS on md0, internal journal
[ 297.938039] loop: module loaded
[ 298.001375] w83627ehf: Found W83627DHG chip at 0x290
[ 298.020007] coretemp coretemp.0: Using relative temperature scale!
[ 298.020045] coretemp coretemp.1: Using relative temperature scale!
[ 298.020079] coretemp coretemp.2: Using relative temperature scale!
[ 298.020110] coretemp coretemp.3: Using relative temperature scale!
[ 4599.205244] fuse init (API version 7.9)
[ 4599.623123] kjournald starting. Commit interval 5 seconds
[ 4599.652504] EXT3 FS on md1, internal journal
[ 4599.652504] EXT3-fs: mounted filesystem with ordered data mode.
[ 4599.749170] kjournald starting. Commit interval 5 seconds
[ 4599.782877] EXT3 FS on dm-1, internal journal
[ 4599.782877] EXT3-fs: mounted filesystem with ordered data mode.
[ 4600.001137] kjournald2 starting. Commit interval 5 seconds
[ 4600.029133] EXT4 FS on dm-2, internal journal
[ 4600.029133] EXT4-fs: mounted filesystem with ordered data mode.
[ 4600.029133] EXT4-fs: file extents enabled
[ 4600.033741] EXT4-fs: mballoc enabled
[ 4600.152047] Adding 6291448k swap on /dev/mapper/MainVG-Swap. Priority:-1 extents:1 across:6291448k
[ 4611.159518] atl1 0000:02:00.0: eth0 link is up 100 Mbps full duplex
[ 4611.159518] atl1 0000:02:00.0: eth0 link is up 1000 Mbps full duplex
[ 4612.636342] NET: Registered protocol family 10
[ 4612.636342] lo: Disabled Privacy Extensions
[ 4613.573786] RPC: Registered udp transport module.
[ 4613.573786] RPC: Registered tcp transport module.
[ 4613.757806] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[ 4614.522350] ttyS0: LSR safety check engaged!
[ 4614.532858] ttyS0: LSR safety check engaged!
So anything else I can provide?
--
Len Sorensen
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-01 22:57 ` Lennart Sorensen
@ 2009-04-03 14:46 ` Mark Lord
2009-04-03 15:16 ` Lennart Sorensen
0 siblings, 1 reply; 419+ messages in thread
From: Mark Lord @ 2009-04-03 14:46 UTC (permalink / raw)
To: Lennart Sorensen
Cc: Andrew Morton, torvalds, tytso, drees76, jesper, linux-kernel
Lennart Sorensen wrote:
> On Wed, Apr 01, 2009 at 02:36:22PM -0700, Andrew Morton wrote:
>> Back in 2002ish I did a *lot* of work on IO latency, reads-vs-writes,
>> etc, etc (but not fsync - for practical purposes it's unfixable on
>> ext3-ordered)
>>
>> Performance was pretty good. From some of the descriptions I'm seeing
>> get tossed around lately, I suspect that it has regressed.
>>
>> It would be useful/interesting if people were to rerun some of these
>> tests with `echo anticipatory > /sys/block/sda/queue/scheduler'.
>>
>> Or with linux-2.5.60 :(
>
> Well 2.6.18 seems to keep popping up as the last kernel with "sane"
> behaviour, at least in terms of not causing huge delays under many
> workloads. I currently run 2.6.26, although that could be updated as
> soon as I get around to figuring out why lirc isn't working for me when
> I move past 2.6.26.
>
> I could certainly try changing the scheduler on my mythtv box and seeing
> if that makes any difference to the behaviour. It is pretty darn obvious
> whether it is responsive or not when starting to play back a video.
..
My Myth box here was running 2.6.18 when originally set up,
and even back then it still took *minutes* to delete large files.
So that part hasn't really changed much in the interim.
Because of the multi-minute deletes, the distro shutdown scripts
would fails, and power off the box while it was still writing
to the drives. Ouch.
That system has had XFS on it for the past year and a half now,
and for Myth, there's no reason not to use XFS. It's great!
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 14:21 ` Lennart Sorensen
@ 2009-04-03 15:05 ` Mark Lord
2009-04-03 15:14 ` Lennart Sorensen
2009-04-03 19:57 ` Jeff Garzik
0 siblings, 2 replies; 419+ messages in thread
From: Mark Lord @ 2009-04-03 15:05 UTC (permalink / raw)
To: Lennart Sorensen
Cc: Jens Axboe, Linus Torvalds, Ingo Molnar, Andrew Morton, tytso,
drees76, jesper, Linux Kernel Mailing List
Lennart Sorensen wrote:
>
> Well the system is setup like this:
>
> Core 2 Quad Q6600 CPU (2.4GHz quad core).
> Asus P5K mainboard (Intel P35 chipset)
> 6GB of ram
> PVR500 dual NTSC tuner pci card
..
> So the behaviour with cfq is:
> Disk light seems to be constantly on if there is any disk activity. iotop
> can show a total io of maybe 1MB/s and the disk light is on constantly.
..
Lennart,
I wonder if the problem with your system is really a Myth/driver issue?
Curiously, I have a HVR-1600 card here, and when recording analog TV with
it the disk lights are on constantly. The problem with it turns out to
be mythbackend doing fsync() calls ten times a second.
My other tuner cards don't have this problem.
So perhaps the PVR-500 triggers the same buggy behaviour as the HVR-1600?
To work around it here, I decided to use a preload library that replaces
the frequent fsync() calls with a more moderated behaviour:
http://rtr.ca/hvr1600/libfsync.tar.gz
Grab that file and try it out. Instructions are included within.
Report back again and let us know if it makes any difference.
Someday I may try and chase down the exact bug that causes mythbackend
to go fsyncing berserk like that, but for now this workaround is fine.
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 11:32 ` Chris Mason
@ 2009-04-03 15:07 ` Linus Torvalds
2009-04-03 15:40 ` Chris Mason
2009-04-03 20:05 ` Jeff Garzik
0 siblings, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-04-03 15:07 UTC (permalink / raw)
To: Chris Mason
Cc: Jeff Garzik, Andrew Morton, David Rees, Linux Kernel Mailing List
On Fri, 3 Apr 2009, Chris Mason wrote:
> On Thu, 2009-04-02 at 20:34 -0700, Linus Torvalds wrote:
> >
> > Well, one rather simple explanation is that if you hadn't been doing lots
> > of writes, then the background garbage collection on the Intel SSD gets
> > ahead of the game, and gives you lots of bursty nice write bandwidth due
> > to having a nicely compacted and pre-erased blocks.
> >
> > Then, after lots of writing, all the pre-erased blocks are gone, and you
> > are down to a steady state where it needs to GC and erase blocks to make
> > room for new writes.
> >
> > So that part doesn't suprise me per se. The Intel SSD's definitely
> > flucutate a bit timing-wise (but I love how they never degenerate to the
> > "ooh, that _really_ sucks" case that the other SSD's and the rotational
> > media I've seen does when you do random writes).
> >
>
> 23MB/s seems a bit low though, I'd try with O_DIRECT. ext3 doesn't do
> writepages, and the ssd may be very sensitive to smaller writes (what
> brand?)
I didn't realize that Jeff had a non-Intel SSD.
THAT sure explains the huge drop-off. I do see Intel SSD's fluctuating
too, but the Intel ones tend to be _fairly_ stable.
> > The fact that it also happens for the regular disk does imply that it's
> > not the _only_ thing going on, though.
>
> Jeff if you blktrace it I can make up a seekwatcher graph. My bet is
> that pdflush is stuck writing the indirect blocks, and doing a ton of
> seeks.
>
> You could change the overwrite program to also do sync_file_range on the
> block device ;)
Actually, that won't help. 'sync_file_range()' works only on the virtually
indexed page cache, and I think ext3 uses "struct buffer_head *" for all
it's metadata updates (due to how JBD works). So sync_file_range() will do
nothing at all to the metadata, regardless of what mapping you execute it
on.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 11:05 ` Janne Grunau
` (2 preceding siblings ...)
2009-04-02 21:50 ` Lennart Sorensen
@ 2009-04-03 15:07 ` Mark Lord
3 siblings, 0 replies; 419+ messages in thread
From: Mark Lord @ 2009-04-03 15:07 UTC (permalink / raw)
To: Janne Grunau
Cc: Lennart Sorensen, Andrew Morton, Linus Torvalds, Theodore Tso,
David Rees, Jesper Krogh, Linux Kernel Mailing List
Janne Grunau wrote:
>..
> MythTV calls fsync every few seconds on ongoing recordings to prevent
> stalls due to large cache writebacks on ext3.
>
> Janne
> (MythTV developer)
..
Oooh.. a myth dev!
With the HVR-1600 card, myth calls fsync() *ten* times a second
while recording analog TV (digital is fine).
Any chance you could track down and fix that ?
It might be the same thing that's biting Lennart's system
with his PVR-500 card.
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-02 17:04 ` Linus Torvalds
2009-04-02 22:09 ` Jeff Garzik
@ 2009-04-03 15:14 ` Mark Lord
2009-04-03 15:18 ` Lennart Sorensen
2009-04-03 15:28 ` Linus Torvalds
1 sibling, 2 replies; 419+ messages in thread
From: Mark Lord @ 2009-04-03 15:14 UTC (permalink / raw)
To: Linus Torvalds
Cc: Andrew Morton, David Rees, Janne Grunau, Lennart Sorensen,
Theodore Tso, Jesper Krogh, Linux Kernel Mailing List
Linus Torvalds wrote:
>
> On Thu, 2 Apr 2009, Linus Torvalds wrote:
>> On Thu, 2 Apr 2009, Andrew Morton wrote:
>>> A suitable design for the streaming might be, every 4MB:
>>>
>>> - run sync_file_range(SYNC_FILE_RANGE_WRITE) to get the 4MB underway
>>> to the disk
>>>
>>> - run fadvise(POSIX_FADV_DONTNEED) against the previous 4MB to
>>> discard it from pagecache.
>> Here's an example. I call it "overwrite.c" for obvious reasons.
>
> Oh, except my example doesn't do the fadvise. Instead, I make sure to
> throttle the writes and the old range with
>
> SYNC_FILE_RANGE_WAIT_BEFORE|SYNC_FILE_RANGE_WRITE|SYNC_FILE_RANGE_WAIT_AFTER
>
> which makes sure that the old pages are easily dropped by the VM - and
> they will be, since they end up always being on the cold list.
>
> I _wanted_ to add a SYNC_FILE_RANGE_DROP but I never bothered because this
> particular load it didn't matter. The system was perfectly usable while
> overwriting even huge disks because there was never more than 8MB of dirty
> data in flight in the IO queues at any time.
..
Note that for mythtv, this may not be the best behaviour.
A common use scenario is "watching live TV", a few minutes behind
real-time so that the commercial-skipping can work its magic.
In that scenario, those pages are going to be needed again
within a short while, and it might be useful to keep them around.
But then Myth itself could probably decide whether to discard them
or not, not based upon that kind of knowledge.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 15:05 ` Mark Lord
@ 2009-04-03 15:14 ` Lennart Sorensen
2009-04-03 19:57 ` Jeff Garzik
1 sibling, 0 replies; 419+ messages in thread
From: Lennart Sorensen @ 2009-04-03 15:14 UTC (permalink / raw)
To: Mark Lord
Cc: Jens Axboe, Linus Torvalds, Ingo Molnar, Andrew Morton, tytso,
drees76, jesper, Linux Kernel Mailing List
On Fri, Apr 03, 2009 at 11:05:04AM -0400, Mark Lord wrote:
> I wonder if the problem with your system is really a Myth/driver issue?
Could be. That is the point of the box after all.
> Curiously, I have a HVR-1600 card here, and when recording analog TV with
> it the disk lights are on constantly. The problem with it turns out to
> be mythbackend doing fsync() calls ten times a second.
But why would anticipatory help that?
> My other tuner cards don't have this problem.
>
> So perhaps the PVR-500 triggers the same buggy behaviour as the HVR-1600?
> To work around it here, I decided to use a preload library that replaces
> the frequent fsync() calls with a more moderated behaviour:
>
> http://rtr.ca/hvr1600/libfsync.tar.gz
>
> Grab that file and try it out. Instructions are included within.
> Report back again and let us know if it makes any difference.
I can have a try at that. I will see how cfq behaves with that installed.
> Someday I may try and chase down the exact bug that causes mythbackend
> to go fsyncing berserk like that, but for now this workaround is fine.
Well if it is the real cause of the bad behaviour then it would certainly
be good to track down.
--
Len Sorensen
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 14:46 ` Mark Lord
@ 2009-04-03 15:16 ` Lennart Sorensen
2009-04-03 15:42 ` Mark Lord
2009-04-03 18:59 ` Jeff Garzik
0 siblings, 2 replies; 419+ messages in thread
From: Lennart Sorensen @ 2009-04-03 15:16 UTC (permalink / raw)
To: Mark Lord; +Cc: Andrew Morton, torvalds, tytso, drees76, jesper, linux-kernel
On Fri, Apr 03, 2009 at 10:46:34AM -0400, Mark Lord wrote:
> My Myth box here was running 2.6.18 when originally set up,
> and even back then it still took *minutes* to delete large files.
> So that part hasn't really changed much in the interim.
>
> Because of the multi-minute deletes, the distro shutdown scripts
> would fails, and power off the box while it was still writing
> to the drives. Ouch.
>
> That system has had XFS on it for the past year and a half now,
> and for Myth, there's no reason not to use XFS. It's great!
Mythtv has a 'slow delete' option that I believe works by slowly
truncating the file. Seems they believe that ext3 is bad at handling
large file deletes, so they try to spread out the pain. I don't remember
if that option is on by default or not. I turned it off.
--
Len Sorensen
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 15:14 ` Mark Lord
@ 2009-04-03 15:18 ` Lennart Sorensen
2009-04-03 15:46 ` Mark Lord
2009-04-03 15:28 ` Linus Torvalds
1 sibling, 1 reply; 419+ messages in thread
From: Lennart Sorensen @ 2009-04-03 15:18 UTC (permalink / raw)
To: Mark Lord
Cc: Linus Torvalds, Andrew Morton, David Rees, Janne Grunau,
Theodore Tso, Jesper Krogh, Linux Kernel Mailing List
On Fri, Apr 03, 2009 at 11:14:39AM -0400, Mark Lord wrote:
> Note that for mythtv, this may not be the best behaviour.
>
> A common use scenario is "watching live TV", a few minutes behind
> real-time so that the commercial-skipping can work its magic.
Well I really never watch live TV. I watch shows when I want to, not
when they happen to be on the air. So I certainly couldn't care less
if they were no longer cached.
> In that scenario, those pages are going to be needed again
> within a short while, and it might be useful to keep them around.
Within 1 minute might be a lot of data for an MPEG2 stream.
> But then Myth itself could probably decide whether to discard them
> or not, not based upon that kind of knowledge.
--
Len Sorensen
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 15:14 ` Mark Lord
2009-04-03 15:18 ` Lennart Sorensen
@ 2009-04-03 15:28 ` Linus Torvalds
1 sibling, 0 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-04-03 15:28 UTC (permalink / raw)
To: Mark Lord
Cc: Andrew Morton, David Rees, Janne Grunau, Lennart Sorensen,
Theodore Tso, Jesper Krogh, Linux Kernel Mailing List
On Fri, 3 Apr 2009, Mark Lord wrote:
>
> Note that for mythtv, this may not be the best behaviour.
>
> A common use scenario is "watching live TV", a few minutes behind
> real-time so that the commercial-skipping can work its magic.
>
> In that scenario, those pages are going to be needed again
> within a short while, and it might be useful to keep them around.
>
> But then Myth itself could probably decide whether to discard them
> or not, not based upon that kind of knowledge.
Yes. I suspect that Myth could do heuristics like "when watching live TV,
do drop-behind about 30s after the currently showing stream". That still
allows for replay, but older stuff you've watched really likely isn't all
that interesting and migth be worth dropping in order to make room for
more data.
And you can use posix_fadvise() for that, since it's now no longer
connected with "wait for background IO to complete" at all.
The reason for wanting "SYNC_FILE_RANGE_DROP" was simply that I was doing
the "wait after write" anyway, and thinking I wanted to get rid of the
pages while I was already handling them. But that was for an app where I
_new_ the data was uninteresting as soon as it was on disk. Doing a secure
delete is different from recording video ;)
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 15:07 ` Linus Torvalds
@ 2009-04-03 15:40 ` Chris Mason
2009-04-03 20:05 ` Jeff Garzik
1 sibling, 0 replies; 419+ messages in thread
From: Chris Mason @ 2009-04-03 15:40 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jeff Garzik, Andrew Morton, David Rees, Linux Kernel Mailing List
On Fri, 2009-04-03 at 08:07 -0700, Linus Torvalds wrote:
>
> On Fri, 3 Apr 2009, Chris Mason wrote:
>
> > On Thu, 2009-04-02 at 20:34 -0700, Linus Torvalds wrote:
> > >
> > > Well, one rather simple explanation is that if you hadn't been doing lots
> > > of writes, then the background garbage collection on the Intel SSD gets
> > > ahead of the game, and gives you lots of bursty nice write bandwidth due
> > > to having a nicely compacted and pre-erased blocks.
> > >
> > > Then, after lots of writing, all the pre-erased blocks are gone, and you
> > > are down to a steady state where it needs to GC and erase blocks to make
> > > room for new writes.
> > >
> > > So that part doesn't suprise me per se. The Intel SSD's definitely
> > > flucutate a bit timing-wise (but I love how they never degenerate to the
> > > "ooh, that _really_ sucks" case that the other SSD's and the rotational
> > > media I've seen does when you do random writes).
> > >
> >
> > 23MB/s seems a bit low though, I'd try with O_DIRECT. ext3 doesn't do
> > writepages, and the ssd may be very sensitive to smaller writes (what
> > brand?)
>
> I didn't realize that Jeff had a non-Intel SSD.
>
> THAT sure explains the huge drop-off. I do see Intel SSD's fluctuating
> too, but the Intel ones tend to be _fairly_ stable.
Even the intel ones have cliffs for long running random io workloads
(where the bottom of the cliff is still very fast), but something like
this should be stable.
>
> > > The fact that it also happens for the regular disk does imply that it's
> > > not the _only_ thing going on, though.
> >
> > Jeff if you blktrace it I can make up a seekwatcher graph. My bet is
> > that pdflush is stuck writing the indirect blocks, and doing a ton of
> > seeks.
> >
> > You could change the overwrite program to also do sync_file_range on the
> > block device ;)
>
> Actually, that won't help. 'sync_file_range()' works only on the virtually
> indexed page cache, and I think ext3 uses "struct buffer_head *" for all
> it's metadata updates (due to how JBD works). So sync_file_range() will do
> nothing at all to the metadata, regardless of what mapping you execute it
> on.
The buffer heads do end up on the block device inode's pages, and ext3
is letting pdflush do some of the writeback. Its hard to say if the
sync_file_range is going to help, the IO on the metadata may be random
enough for that ssd that it won't really matter who writes it or when.
Spinning disks might suck, but at least they all suck in the same
way...tuning for all these different ssds isn't going to be fun at all.
-chris
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 15:16 ` Lennart Sorensen
@ 2009-04-03 15:42 ` Mark Lord
2009-04-03 18:59 ` Jeff Garzik
1 sibling, 0 replies; 419+ messages in thread
From: Mark Lord @ 2009-04-03 15:42 UTC (permalink / raw)
To: Lennart Sorensen
Cc: Andrew Morton, torvalds, tytso, drees76, jesper, linux-kernel
Lennart Sorensen wrote:
> On Fri, Apr 03, 2009 at 10:46:34AM -0400, Mark Lord wrote:
>> My Myth box here was running 2.6.18 when originally set up,
>> and even back then it still took *minutes* to delete large files.
>> So that part hasn't really changed much in the interim.
>>
>> Because of the multi-minute deletes, the distro shutdown scripts
>> would fails, and power off the box while it was still writing
>> to the drives. Ouch.
>>
>> That system has had XFS on it for the past year and a half now,
>> and for Myth, there's no reason not to use XFS. It's great!
>
> Mythtv has a 'slow delete' option that I believe works by slowly
> truncating the file. Seems they believe that ext3 is bad at handling
> large file deletes, so they try to spread out the pain. I don't remember
> if that option is on by default or not. I turned it off.
..
That option doesn't make much difference for the shutdown failure.
And with XFS there's no need for it, so I now have it "off".
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 15:18 ` Lennart Sorensen
@ 2009-04-03 15:46 ` Mark Lord
0 siblings, 0 replies; 419+ messages in thread
From: Mark Lord @ 2009-04-03 15:46 UTC (permalink / raw)
To: Lennart Sorensen
Cc: Linus Torvalds, Andrew Morton, David Rees, Janne Grunau,
Theodore Tso, Jesper Krogh, Linux Kernel Mailing List
Lennart Sorensen wrote:
> On Fri, Apr 03, 2009 at 11:14:39AM -0400, Mark Lord wrote:
>> Note that for mythtv, this may not be the best behaviour.
>>
>> A common use scenario is "watching live TV", a few minutes behind
>> real-time so that the commercial-skipping can work its magic.
>
> Well I really never watch live TV.
..
A *true* myth dev! (pretenders use LiveTV, *real* devs don't!)
But mythcommflag also benefits from having the pages
hang around for an extra short time.
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 15:16 ` Lennart Sorensen
2009-04-03 15:42 ` Mark Lord
@ 2009-04-03 18:59 ` Jeff Garzik
2009-04-04 8:18 ` Andrew Morton
1 sibling, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-04-03 18:59 UTC (permalink / raw)
To: Lennart Sorensen
Cc: Mark Lord, Andrew Morton, torvalds, tytso, drees76, jesper,
linux-kernel
Lennart Sorensen wrote:
> On Fri, Apr 03, 2009 at 10:46:34AM -0400, Mark Lord wrote:
>> My Myth box here was running 2.6.18 when originally set up,
>> and even back then it still took *minutes* to delete large files.
>> So that part hasn't really changed much in the interim.
>>
>> Because of the multi-minute deletes, the distro shutdown scripts
>> would fails, and power off the box while it was still writing
>> to the drives. Ouch.
>>
>> That system has had XFS on it for the past year and a half now,
>> and for Myth, there's no reason not to use XFS. It's great!
>
> Mythtv has a 'slow delete' option that I believe works by slowly
> truncating the file. Seems they believe that ext3 is bad at handling
> large file deletes, so they try to spread out the pain. I don't remember
> if that option is on by default or not. I turned it off.
It's pretty painful for super-large files with lots of metadata.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 15:05 ` Mark Lord
2009-04-03 15:14 ` Lennart Sorensen
@ 2009-04-03 19:57 ` Jeff Garzik
2009-04-03 21:28 ` Janne Grunau
1 sibling, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-04-03 19:57 UTC (permalink / raw)
To: Mark Lord
Cc: Lennart Sorensen, Jens Axboe, Linus Torvalds, Ingo Molnar,
Andrew Morton, tytso, drees76, jesper, Linux Kernel Mailing List
Mark Lord wrote:
> Lennart Sorensen wrote:
>>
>> Well the system is setup like this:
>>
>> Core 2 Quad Q6600 CPU (2.4GHz quad core).
>> Asus P5K mainboard (Intel P35 chipset)
>> 6GB of ram
>> PVR500 dual NTSC tuner pci card
> ..
>> So the behaviour with cfq is:
>> Disk light seems to be constantly on if there is any disk activity.
>> iotop
>> can show a total io of maybe 1MB/s and the disk light is on constantly.
> ..
>
> Lennart,
>
> I wonder if the problem with your system is really a Myth/driver issue?
>
> Curiously, I have a HVR-1600 card here, and when recording analog TV with
> it the disk lights are on constantly. The problem with it turns out to
> be mythbackend doing fsync() calls ten times a second.
>
> My other tuner cards don't have this problem.
>
> So perhaps the PVR-500 triggers the same buggy behaviour as the HVR-1600?
> To work around it here, I decided to use a preload library that replaces
> the frequent fsync() calls with a more moderated behaviour:
>
> http://rtr.ca/hvr1600/libfsync.tar.gz
>
> Grab that file and try it out. Instructions are included within.
> Report back again and let us know if it makes any difference.
>
> Someday I may try and chase down the exact bug that causes mythbackend
> to go fsyncing berserk like that, but for now this workaround is fine.
mythtv/libs/libmythtv/ThreadedFileWriter.cpp is a good place to start
(Sync method... uses fdatasync if available, fsync if not).
mythtv is definitely a candidate for sync_file_range() style output, IMO.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 15:07 ` Linus Torvalds
2009-04-03 15:40 ` Chris Mason
@ 2009-04-03 20:05 ` Jeff Garzik
2009-04-03 20:14 ` Linus Torvalds
2009-04-04 12:44 ` Mark Lord
1 sibling, 2 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-04-03 20:05 UTC (permalink / raw)
To: Linus Torvalds
Cc: Chris Mason, Andrew Morton, David Rees, Linux Kernel Mailing List
[-- Attachment #1: Type: text/plain, Size: 342 bytes --]
Linus Torvalds wrote:
> I didn't realize that Jeff had a non-Intel SSD.
> THAT sure explains the huge drop-off. I do see Intel SSD's fluctuating
> too, but the Intel ones tend to be _fairly_ stable.
Yeah, it's a no-name SSD.
I've attached 'hdparm -I' in case anyone is curious. It's from
newegg.com, so nothing NDA'd or sekrit.
Jeff
[-- Attachment #2: ssd-hdparm.txt --]
[-- Type: text/plain, Size: 1335 bytes --]
/dev/sdb:
ATA device, with non-removable media
Model Number: G.SKILL 128GB SSD
Serial Number: MK0108480A545003B
Firmware Revision: 02.10104
Standards:
Used: ATA/ATAPI-7 T13 1532D revision 4a
Supported: 8 7 6 5 & some of 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 250445824
device size with M = 1024*1024: 122288 MBytes
device size with M = 1000*1000: 128228 MBytes (128 GB)
Capabilities:
LBA, IORDY(can be disabled)
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 1 Current = ?
Recommended acoustic management value: 128, current value: 254
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
* Power Management feature set
Write cache
Look-ahead
* Mandatory FLUSH_CACHE
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
* Host-initiated interface power management
* Phy event counters
Checksum: correct
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 20:05 ` Jeff Garzik
@ 2009-04-03 20:14 ` Linus Torvalds
2009-04-03 21:48 ` Jeff Garzik
2009-04-03 23:35 ` Dave Jones
2009-04-04 12:44 ` Mark Lord
1 sibling, 2 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-04-03 20:14 UTC (permalink / raw)
To: Jeff Garzik
Cc: Chris Mason, Andrew Morton, David Rees, Linux Kernel Mailing List
On Fri, 3 Apr 2009, Jeff Garzik wrote:
>
> Yeah, it's a no-name SSD.
>
> I've attached 'hdparm -I' in case anyone is curious. It's from newegg.com, so
> nothing NDA'd or sekrit.
Hmm. Does it do ok on the "random write" test? There's a few non-intel
controllers that are fine - apparently the newer samsung ones, and the one
from Indilinx.
But I _think_ G.SKILL uses those horribly broken JMicron controllers.
Judging by your performance numbers, it's the slightly fancier double
controller version (ie basically an internal RAID0 of two identical
JMicron controllers, each handling half of the flash chips).
Try a random write test. If it's the JMicron controllers, performance will
plummet to a few tens of kilobytes per second.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 19:57 ` Jeff Garzik
@ 2009-04-03 21:28 ` Janne Grunau
2009-04-03 21:57 ` Jeff Garzik
0 siblings, 1 reply; 419+ messages in thread
From: Janne Grunau @ 2009-04-03 21:28 UTC (permalink / raw)
To: Jeff Garzik
Cc: Mark Lord, Lennart Sorensen, Jens Axboe, Linus Torvalds,
Ingo Molnar, Andrew Morton, tytso, drees76, jesper,
Linux Kernel Mailing List
On Fri, Apr 03, 2009 at 03:57:52PM -0400, Jeff Garzik wrote:
> Mark Lord wrote:
> >
> > I wonder if the problem with your system is really a Myth/driver issue?
> >
> > Curiously, I have a HVR-1600 card here, and when recording analog TV with
> > it the disk lights are on constantly. The problem with it turns out to
> > be mythbackend doing fsync() calls ten times a second.
> >
> > My other tuner cards don't have this problem.
> >
> > So perhaps the PVR-500 triggers the same buggy behaviour as the HVR-1600?
> > To work around it here, I decided to use a preload library that replaces
> > the frequent fsync() calls with a more moderated behaviour:
> >
> > http://rtr.ca/hvr1600/libfsync.tar.gz
> >
> > Grab that file and try it out. Instructions are included within.
> > Report back again and let us know if it makes any difference.
> >
> > Someday I may try and chase down the exact bug that causes mythbackend
> > to go fsyncing berserk like that, but for now this workaround is fine.
that sounds if it indeed syncs every 100ms instead of once per second
over the whole recording. It's inteneded behaviour for the first 64K.
> mythtv/libs/libmythtv/ThreadedFileWriter.cpp is a good place to start
> (Sync method... uses fdatasync if available, fsync if not).
>
> mythtv is definitely a candidate for sync_file_range() style output, IMO.
yeah, I'm on it.
Janne
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 20:14 ` Linus Torvalds
@ 2009-04-03 21:48 ` Jeff Garzik
2009-04-03 22:06 ` Linus Torvalds
2009-04-03 23:35 ` Dave Jones
1 sibling, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-04-03 21:48 UTC (permalink / raw)
To: Linus Torvalds
Cc: Chris Mason, Andrew Morton, David Rees, Linux Kernel Mailing List
Linus Torvalds wrote:
>
> On Fri, 3 Apr 2009, Jeff Garzik wrote:
>> Yeah, it's a no-name SSD.
>>
>> I've attached 'hdparm -I' in case anyone is curious. It's from newegg.com, so
>> nothing NDA'd or sekrit.
>
> Hmm. Does it do ok on the "random write" test? There's a few non-intel
> controllers that are fine - apparently the newer samsung ones, and the one
> from Indilinx.
>
> But I _think_ G.SKILL uses those horribly broken JMicron controllers.
> Judging by your performance numbers, it's the slightly fancier double
> controller version (ie basically an internal RAID0 of two identical
> JMicron controllers, each handling half of the flash chips).
Quoting from the review at
http://www.bit-tech.net/hardware/storage/2008/12/03/g-skill-patriot-and-intel-ssd-test/2
"Cracking the drive open reveals the PCB fitted with sixteen Samsung
840, 8GB MLC NAND flash memory modules, linked to a J-Micron JMF 602
storage controller chip."
> Try a random write test. If it's the JMicron controllers, performance will
> plummet to a few tens of kilobytes per second.
Since I am hacking on osdblk currently, I was too slack to code up a
test. This is what bonnie++ says, at least...
> Version 1.03c ------Sequential Output------ --Sequential Input- --Random-
> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
> bd.yyz.us 8000M 28678 6 27836 5 133246 12 5237 10
> ------Sequential Create------ --------Random Create--------
> -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
> 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
> bd.yyz.us,8000M,,,28678,6,27836,5,,,133246,12,5236.6,10,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
But I guess seeks are not very helpful on an SSD :) Any pre-built
random write tests out there?
Regards,
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 21:28 ` Janne Grunau
@ 2009-04-03 21:57 ` Jeff Garzik
2009-04-03 22:32 ` Janne Grunau
2009-04-03 22:53 ` David Rees
0 siblings, 2 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-04-03 21:57 UTC (permalink / raw)
To: Janne Grunau
Cc: Mark Lord, Lennart Sorensen, Jens Axboe, Linus Torvalds,
Ingo Molnar, Andrew Morton, tytso, drees76, jesper,
Linux Kernel Mailing List
Janne Grunau wrote:
> On Fri, Apr 03, 2009 at 03:57:52PM -0400, Jeff Garzik wrote:
>> Mark Lord wrote:
>>> Grab that file and try it out. Instructions are included within.
>>> Report back again and let us know if it makes any difference.
>>>
>>> Someday I may try and chase down the exact bug that causes mythbackend
>>> to go fsyncing berserk like that, but for now this workaround is fine.
>
> that sounds if it indeed syncs every 100ms instead of once per second
> over the whole recording. It's inteneded behaviour for the first 64K.
>
>> mythtv/libs/libmythtv/ThreadedFileWriter.cpp is a good place to start
>> (Sync method... uses fdatasync if available, fsync if not).
>>
>> mythtv is definitely a candidate for sync_file_range() style output, IMO.
> yeah, I'm on it.
Just curious, does MythTV need fsync(), or merely to tell the kernel to
begin asynchronously writing data to storage?
sync_file_range(..., SYNC_FILE_RANGE_WRITE) might be enough, if you do
not need to actually wait for completion.
This may be the case, if the idea behind MythTV's fsync(2) is simply to
prevent the kernel from building up a huge amount of dirty pages in the
pagecache [which, in turn, produces bursty write-out behavior].
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 21:48 ` Jeff Garzik
@ 2009-04-03 22:06 ` Linus Torvalds
2009-04-03 23:48 ` Jeff Garzik
0 siblings, 1 reply; 419+ messages in thread
From: Linus Torvalds @ 2009-04-03 22:06 UTC (permalink / raw)
To: Jeff Garzik
Cc: Chris Mason, Andrew Morton, David Rees, Linux Kernel Mailing List
On Fri, 3 Apr 2009, Jeff Garzik wrote:
>
> Since I am hacking on osdblk currently, I was too slack to code up a test.
> This is what bonnie++ says, at least...
Afaik, bonnie does it all in the page cache, and only tests random reads
(in the "random seek" test), not random writes.
> But I guess seeks are not very helpful on an SSD :) Any pre-built random
> write tests out there?
"fio" does well:
http://git.kernel.dk/?p=fio.git;a=summary
and I think it comes with a few example files. Here's the random write
file that Jens suggested, and that works pretty well..
It first creates a 2GB file to do the IO on, then does random 4k writes to
it with O_DIRECT.
If your SSD does badly at it, you'll just want to kill it, but it shows
you how many MB/s it's doing (or, in the sucky case, how many kB/s).
Linus
---
[global]
filename=testfile
size=2g
create_fsync=1
overwrite=1
[randwrites]
# make rw= 'randread' for random reads, 'read' for reads, etc
rw=randwrite
bs=4k
direct=1
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 4:06 ` Lennart Sorensen
2009-04-03 4:13 ` Linus Torvalds
@ 2009-04-03 22:28 ` Jeff Moyer
2009-04-06 14:15 ` Lennart Sorensen
1 sibling, 1 reply; 419+ messages in thread
From: Jeff Moyer @ 2009-04-03 22:28 UTC (permalink / raw)
To: Lennart Sorensen
Cc: Ingo Molnar, Andrew Morton, torvalds, tytso, drees76, jesper,
linux-kernel, jens.axboe
lsorense@csclub.uwaterloo.ca (Lennart Sorensen) writes:
> On Thu, Apr 02, 2009 at 03:00:44AM +0200, Ingo Molnar wrote:
>> I'll test this (and the other suggestions) once i'm out of the merge
>> window.
>>
>> I probably wont test that though ;-)
>>
>> Going back to v2.6.14 to do pre-mutex-merge performance tests was
>> already quite a challenge on modern hardware.
>
> Well after a day of running my mythtv box with anticipatiry rather than
> the default cfq scheduler, it certainly looks a lot better. I haven't
> seen any slowdowns, the disk activity light isn't on solidly (it just
> flashes every couple of seconds instead), and it doesn't even mind
> me lanuching bittornado on multiple torrents at the same time as two
> recordings are taking place and some commercial flagging is taking place.
> With cfq this would usually make the system unusable (and a Q6600 with
> 6GB ram should never be unresponsive in my opinion).
>
> So so far I would rank anticipatory at about 1000x better than cfq for
> my work load. It sure acts a lot more like it used to back in 2.6.18
> times.
Hi, Lennart,
Could you try one more test, please? Switch back to CFQ and set
/sys/block/sdX/queue/iosched/slice_idle to 0?
I'm not sure how the applications you are running write to disk, but if
they interleave I/O between processes, this could help. I'm not too
confident that this will make a difference, though, since CFQ changed to
time-slice based instead of quantum based before 2.6.18. Still, it
would be another data point if you have the time.
Thanks in advance!
-Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 21:57 ` Jeff Garzik
@ 2009-04-03 22:32 ` Janne Grunau
2009-04-03 22:57 ` David Rees
2009-04-03 23:29 ` Jeff Garzik
2009-04-03 22:53 ` David Rees
1 sibling, 2 replies; 419+ messages in thread
From: Janne Grunau @ 2009-04-03 22:32 UTC (permalink / raw)
To: Jeff Garzik
Cc: Mark Lord, Lennart Sorensen, Jens Axboe, Linus Torvalds,
Ingo Molnar, Andrew Morton, tytso, drees76, jesper,
Linux Kernel Mailing List
On Fri, Apr 03, 2009 at 05:57:05PM -0400, Jeff Garzik wrote:
> Janne Grunau wrote:
> > On Fri, Apr 03, 2009 at 03:57:52PM -0400, Jeff Garzik wrote:
> >> mythtv/libs/libmythtv/ThreadedFileWriter.cpp is a good place to start
> >> (Sync method... uses fdatasync if available, fsync if not).
> >>
> >> mythtv is definitely a candidate for sync_file_range() style output, IMO.
>
> > yeah, I'm on it.
>
> Just curious, does MythTV need fsync(), or merely to tell the kernel to
> begin asynchronously writing data to storage?
quoting the TheadedFileWriter comments
/*
* NOTE: This doesn't even try flush our queue of data.
* This only ensures that data which has already been sent
* to the kernel for this file is written to disk. This
* means that if this backend is writing the data over a
* network filesystem like NFS, then the data will be visible
* to the NFS server after this is called. It is also useful
* in preventing the kernel from buffering up so many writes
* that they steal the CPU for a long time when the write
* to disk actually occurs.
*/
> sync_file_range(..., SYNC_FILE_RANGE_WRITE) might be enough, if you do
> not need to actually wait for completion.
>
> This may be the case, if the idea behind MythTV's fsync(2) is simply to
> prevent the kernel from building up a huge amount of dirty pages in the
> pagecache [which, in turn, produces bursty write-out behavior].
see above, we care only about the write-out. The f{data}*sync calls are
already in a seperate thread doing nothing else.
Janne
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 21:57 ` Jeff Garzik
2009-04-03 22:32 ` Janne Grunau
@ 2009-04-03 22:53 ` David Rees
2009-04-03 23:30 ` Jeff Garzik
1 sibling, 1 reply; 419+ messages in thread
From: David Rees @ 2009-04-03 22:53 UTC (permalink / raw)
To: Jeff Garzik
Cc: Janne Grunau, Mark Lord, Lennart Sorensen, Jens Axboe,
Linus Torvalds, Ingo Molnar, Andrew Morton, tytso, jesper,
Linux Kernel Mailing List
On Fri, Apr 3, 2009 at 2:57 PM, Jeff Garzik <jeff@garzik.org> wrote:
> Just curious, does MythTV need fsync(), or merely to tell the kernel to
> begin asynchronously writing data to storage?
>
> sync_file_range(..., SYNC_FILE_RANGE_WRITE) might be enough, if you do not
> need to actually wait for completion.
>
> This may be the case, if the idea behind MythTV's fsync(2) is simply to
> prevent the kernel from building up a huge amount of dirty pages in the
> pagecache [which, in turn, produces bursty write-out behavior].
The *only* reason MythTV fsyncs (or fdatasyncs) the data to disk all
the time is to keep a large amount of dirty pages from building up and
then causing horrible latencies when that data starts getting flushed
to disk.
A typical example of this would be that MythTV is recording a show in
the background while playing back another show.
When the dirty limit is hit and data gets flushed to disk, this would
keep the read buffer on the player from happening fast enough and then
playback would stutter.
Instead of telling people ext3 sucks - mount it in writeback or use
xfs or tweak your vm knobs, they simply put a hack in there instead
which largely eliminates the effect.
I don't think many people would care too much if they lost 30-60
seconds of their recorded TV show if the system crashes for whatever
reason.
-Dave
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 22:32 ` Janne Grunau
@ 2009-04-03 22:57 ` David Rees
2009-04-03 23:29 ` Jeff Garzik
1 sibling, 0 replies; 419+ messages in thread
From: David Rees @ 2009-04-03 22:57 UTC (permalink / raw)
To: Janne Grunau
Cc: Jeff Garzik, Mark Lord, Lennart Sorensen, Jens Axboe,
Linus Torvalds, Ingo Molnar, Andrew Morton, tytso, jesper,
Linux Kernel Mailing List
On Fri, Apr 3, 2009 at 3:32 PM, Janne Grunau <j@jannau.net> wrote:
> On Fri, Apr 03, 2009 at 05:57:05PM -0400, Jeff Garzik wrote:
>> Just curious, does MythTV need fsync(), or merely to tell the kernel to
>> begin asynchronously writing data to storage?
>
> quoting the TheadedFileWriter comments
>
> /*
> * NOTE: This doesn't even try flush our queue of data.
> * This only ensures that data which has already been sent
> * to the kernel for this file is written to disk. This
> * means that if this backend is writing the data over a
> * network filesystem like NFS, then the data will be visible
> * to the NFS server after this is called. It is also useful
> * in preventing the kernel from buffering up so many writes
> * that they steal the CPU for a long time when the write
> * to disk actually occurs.
> */
There is no need to fsync data on a NFS mount in Linux anymore. All
NFS mounts are mounted sync by default now unless you explicitly
specify otherwise (and then you should then know what you're getting
in to).
-Dave
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 22:32 ` Janne Grunau
2009-04-03 22:57 ` David Rees
@ 2009-04-03 23:29 ` Jeff Garzik
2009-04-03 23:52 ` Linus Torvalds
1 sibling, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-04-03 23:29 UTC (permalink / raw)
Cc: Mark Lord, Lennart Sorensen, Jens Axboe, Linus Torvalds,
Ingo Molnar, Andrew Morton, tytso, drees76, jesper,
Linux Kernel Mailing List
Janne Grunau wrote:
> On Fri, Apr 03, 2009 at 05:57:05PM -0400, Jeff Garzik wrote:
>> Janne Grunau wrote:
>>> On Fri, Apr 03, 2009 at 03:57:52PM -0400, Jeff Garzik wrote:
>>>> mythtv/libs/libmythtv/ThreadedFileWriter.cpp is a good place to start
>>>> (Sync method... uses fdatasync if available, fsync if not).
>>>>
>>>> mythtv is definitely a candidate for sync_file_range() style output, IMO.
>>> yeah, I'm on it.
>> Just curious, does MythTV need fsync(), or merely to tell the kernel to
>> begin asynchronously writing data to storage?
>
> quoting the TheadedFileWriter comments
>
> /*
> * NOTE: This doesn't even try flush our queue of data.
> * This only ensures that data which has already been sent
> * to the kernel for this file is written to disk. This
> * means that if this backend is writing the data over a
> * network filesystem like NFS, then the data will be visible
> * to the NFS server after this is called. It is also useful
> * in preventing the kernel from buffering up so many writes
> * that they steal the CPU for a long time when the write
> * to disk actually occurs.
> */
>
>> sync_file_range(..., SYNC_FILE_RANGE_WRITE) might be enough, if you do
>> not need to actually wait for completion.
>>
>> This may be the case, if the idea behind MythTV's fsync(2) is simply to
>> prevent the kernel from building up a huge amount of dirty pages in the
>> pagecache [which, in turn, produces bursty write-out behavior].
>
> see above, we care only about the write-out. The f{data}*sync calls are
> already in a seperate thread doing nothing else.
If all you want to do is _start_ the write-out from kernel to disk, and
let the kernel handle it asynchronously, SYNC_FILE_RANGE_WRITE will do
that for you, eliminating the need for a separate thread.
If you need to wait for the data to hit disk, you will need the other
SYNC_FILE_RANGE_xxx bits.
On a related subject, reads: consider
posix_fadvise(POSIX_FADV_SEQUENTIAL) and/or readahead(2) for optimizing
the reading side of things.
Jeff
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 22:53 ` David Rees
@ 2009-04-03 23:30 ` Jeff Garzik
2009-04-04 16:29 ` Janne Grunau
0 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-04-03 23:30 UTC (permalink / raw)
To: David Rees
Cc: Janne Grunau, Mark Lord, Lennart Sorensen, Jens Axboe,
Linus Torvalds, Ingo Molnar, Andrew Morton, tytso, jesper,
Linux Kernel Mailing List
David Rees wrote:
> The *only* reason MythTV fsyncs (or fdatasyncs) the data to disk all
> the time is to keep a large amount of dirty pages from building up and
> then causing horrible latencies when that data starts getting flushed
> to disk.
sync_file_range() will definitely help that situation.
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 20:14 ` Linus Torvalds
2009-04-03 21:48 ` Jeff Garzik
@ 2009-04-03 23:35 ` Dave Jones
1 sibling, 0 replies; 419+ messages in thread
From: Dave Jones @ 2009-04-03 23:35 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jeff Garzik, Chris Mason, Andrew Morton, David Rees,
Linux Kernel Mailing List
On Fri, Apr 03, 2009 at 01:14:00PM -0700, Linus Torvalds wrote:
> But I _think_ G.SKILL uses those horribly broken JMicron controllers.
> Judging by your performance numbers, it's the slightly fancier double
> controller version (ie basically an internal RAID0 of two identical
> JMicron controllers, each handling half of the flash chips).
>
> Try a random write test. If it's the JMicron controllers, performance will
> plummet to a few tens of kilobytes per second.
I got the 64GB variant of Jeff's g-skill SSD. When I first got it,
I ran aio-stress on it. The numbers from the smaller blocksize tests
are pitiful. To the extent that after running for 24hrs, I ctrl-c'd
the test. Really, really abysmal.
Dave
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 22:06 ` Linus Torvalds
@ 2009-04-03 23:48 ` Jeff Garzik
2009-04-04 12:46 ` Mark Lord
0 siblings, 1 reply; 419+ messages in thread
From: Jeff Garzik @ 2009-04-03 23:48 UTC (permalink / raw)
To: Linus Torvalds
Cc: Chris Mason, Andrew Morton, Jens Axboe, Linux Kernel Mailing List
Linus Torvalds wrote:
> "fio" does well:
>
> http://git.kernel.dk/?p=fio.git;a=summary
Neat tool, Jens...
> and I think it comes with a few example files. Here's the random write
> file that Jens suggested, and that works pretty well..
>
> It first creates a 2GB file to do the IO on, then does random 4k writes to
> it with O_DIRECT.
>
> If your SSD does badly at it, you'll just want to kill it, but it shows
> you how many MB/s it's doing (or, in the sucky case, how many kB/s).
heh, so far, the SSD is poking along...
Jobs: 1 (f=1): [w] [2.5% done] [ 0/ 282 kb/s] [eta 02h:24m:59s]
Compared to the same job file, started at the same time, on the Seagate
500GB SATA:
Jobs: 1 (f=1): [w] [9.9% done] [ 0/ 1204 kb/s] [eta 26m:28s]
Regards,
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 23:29 ` Jeff Garzik
@ 2009-04-03 23:52 ` Linus Torvalds
0 siblings, 0 replies; 419+ messages in thread
From: Linus Torvalds @ 2009-04-03 23:52 UTC (permalink / raw)
To: Jeff Garzik
Cc: Mark Lord, Lennart Sorensen, Jens Axboe, Ingo Molnar,
Andrew Morton, tytso, drees76, jesper, Linux Kernel Mailing List
On Fri, 3 Apr 2009, Jeff Garzik wrote:
>
> If all you want to do is _start_ the write-out from kernel to disk, and let
> the kernel handle it asynchronously, SYNC_FILE_RANGE_WRITE will do that for
> you, eliminating the need for a separate thread.
It may not eliminate the need for a separate thread.
SYNC_FILE_RANGE_WRITE will still block on things. It just will block on
_much_ less than fsync.
In particular, it will block on:
- actually queuing up the IO (ie we need to get the bio, request etc all
allocated and queued up)
- if a page is under writeback, and has been marked dirty since that
writeback started, we'll wait for that IO to finish in order to start a
new one.
and depending on load, both of these things _can_ be issues and you might
still want to do the SYNC_FILE_RANGE_WRITE as a async thread separate
from the main loop so that the latency of the main loop is not
affected by that.
But the latencies will be _much_ smaller issues than with f[data]sync(),
though, especially if you're not ever really hitting the limits on the
disk subsystem. Because those will additionally
- wait for all old writeback to complete (whether the page was dirtied
after the writeback started or not)
- additionally, wait for all the new writeback it started.
- wait for the metadata too (fsync()).
so they are pretty much _guaranteed_ to sleep for actual IO to complete
(unless you didn't write anything at all to the file ;)
> On a related subject, reads: consider posix_fadvise(POSIX_FADV_SEQUENTIAL)
> and/or readahead(2) for optimizing the reading side of things.
I doubt POSIX_FADV_SEQUENTIAL will do very much. The kernel tends to
figure out the read patterns on its own pretty well. Of course, explicit
readahead() can be noticeable for the right patterns.
Linus
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 18:59 ` Jeff Garzik
@ 2009-04-04 8:18 ` Andrew Morton
2009-04-04 12:40 ` Mark Lord
2009-04-05 1:57 ` David Newall
0 siblings, 2 replies; 419+ messages in thread
From: Andrew Morton @ 2009-04-04 8:18 UTC (permalink / raw)
To: Jeff Garzik
Cc: Lennart Sorensen, Mark Lord, torvalds, tytso, drees76, jesper,
linux-kernel
On Fri, 03 Apr 2009 14:59:12 -0400 Jeff Garzik <jeff@garzik.org> wrote:
> Lennart Sorensen wrote:
> > On Fri, Apr 03, 2009 at 10:46:34AM -0400, Mark Lord wrote:
> >> My Myth box here was running 2.6.18 when originally set up,
> >> and even back then it still took *minutes* to delete large files.
> >> So that part hasn't really changed much in the interim.
> >>
> >> Because of the multi-minute deletes, the distro shutdown scripts
> >> would fails, and power off the box while it was still writing
> >> to the drives. Ouch.
> >>
> >> That system has had XFS on it for the past year and a half now,
> >> and for Myth, there's no reason not to use XFS. It's great!
> >
> > Mythtv has a 'slow delete' option that I believe works by slowly
> > truncating the file. Seems they believe that ext3 is bad at handling
> > large file deletes, so they try to spread out the pain. I don't remember
> > if that option is on by default or not. I turned it off.
>
> It's pretty painful for super-large files with lots of metadata.
>
yeah.
There's a dirty hack you can do where you append one byte to the file
every 4MB, across 1GB (say). That will then lay the file out on-disk as
one bitmap block
one data block
one bitmap block
one data block
one bitmap block
one data block
one bitmap block
one data block
<etc>
lots-of-data-blocks
So when the time comes to delete that gigabyte, the bitmaps blocks are
only one block apart, and reading them is much faster.
That was one of the gruesome hacks I did way back when I was in the
streaming video recording game.
Another was the slow-delete thing.
- open the file
- unlink the file
- now sit in a loop, slowly nibbling away at the tail with
ftruncate() until the file is gone.
The open/unlink was there so that if the system were to crash midway,
ext3 orphan recovery at reboot time would fully delete the remainder of
the file.
Another was to add an ioctl to ext3 to extend the file outside EOF, but
only metadata - the corresponding data blocks are left uninitialised.
That permitted large amount of data blocks to be allocated to the file
with high contiguity, fixing the block-intermingling problems when ext3
is writing multiple files (which reservations later addressed).
This is of course insecure, but that isn't a problem on an
embedded/consumer black box device.
ext3 sucks less nowadays, but it's still a hard vacuum.
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-04 8:18 ` Andrew Morton
@ 2009-04-04 12:40 ` Mark Lord
2009-04-05 1:57 ` David Newall
1 sibling, 0 replies; 419+ messages in thread
From: Mark Lord @ 2009-04-04 12:40 UTC (permalink / raw)
To: Andrew Morton
Cc: Jeff Garzik, Lennart Sorensen, torvalds, tytso, drees76, jesper,
linux-kernel
Andrew Morton wrote:
> On Fri, 03 Apr 2009 14:59:12 -0400 Jeff Garzik <jeff@garzik.org> wrote:
>
>> Lennart Sorensen wrote:
>>> On Fri, Apr 03, 2009 at 10:46:34AM -0400, Mark Lord wrote:
>>>> My Myth box here was running 2.6.18 when originally set up,
>>>> and even back then it still took *minutes* to delete large files.
>>>> So that part hasn't really changed much in the interim.
>>>>
>>>> Because of the multi-minute deletes, the distro shutdown scripts
>>>> would fails, and power off the box while it was still writing
>>>> to the drives. Ouch.
>>>>
>>>> That system has had XFS on it for the past year and a half now,
>>>> and for Myth, there's no reason not to use XFS. It's great!
>>> Mythtv has a 'slow delete' option that I believe works by slowly
>>> truncating the file. Seems they believe that ext3 is bad at handling
>>> large file deletes, so they try to spread out the pain. I don't remember
>>> if that option is on by default or not. I turned it off.
>> It's pretty painful for super-large files with lots of metadata.
>>
>
> yeah.
>
> There's a dirty hack you can do where you append one byte to the file
> every 4MB, across 1GB (say). That will then lay the file out on-disk as
>
> one bitmap block
> one data block
> one bitmap block
> one data block
> one bitmap block
> one data block
> one bitmap block
> one data block
> <etc>
> lots-of-data-blocks
>
> So when the time comes to delete that gigabyte, the bitmaps blocks are
> only one block apart, and reading them is much faster.
>
> That was one of the gruesome hacks I did way back when I was in the
> streaming video recording game.
>
> Another was the slow-delete thing.
>
> - open the file
>
> - unlink the file
>
> - now sit in a loop, slowly nibbling away at the tail with
> ftruncate() until the file is gone.
>
> The open/unlink was there so that if the system were to crash midway,
> ext3 orphan recovery at reboot time would fully delete the remainder of
> the file.
..
That's similar to what Mythtv currently does.
Except it nibbles away in painfully tiny chunks,
so deleting takes hours that way.
Which means it's still in progress when the system
auto-shutdowns between uses. So the delete process
gets killed, and the subsequent remount,ro and umount
calls simply fail (fs is still busy), and it then
powers off while the drive light is still solidly busy.
That's where I modified the shutdown script to check
the result code, sleep, and loop again, for up to five
minutes before pulling the plug.
But switching to xfs cured all of that. :)
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 20:05 ` Jeff Garzik
2009-04-03 20:14 ` Linus Torvalds
@ 2009-04-04 12:44 ` Mark Lord
2009-04-04 21:10 ` Jeff Garzik
1 sibling, 1 reply; 419+ messages in thread
From: Mark Lord @ 2009-04-04 12:44 UTC (permalink / raw)
To: Jeff Garzik
Cc: Linus Torvalds, Chris Mason, Andrew Morton, David Rees,
Linux Kernel Mailing List
Jeff Garzik wrote:
>
> I've attached 'hdparm -I' in case anyone is curious. It's from
> newegg.com, so nothing NDA'd or sekrit.
>
> ATA device, with non-removable media
> Model Number: G.SKILL 128GB SSD
> Serial Number: MK0108480A545003B
> Firmware Revision: 02.10104
> Standards:
> Used: ATA/ATAPI-7 T13 1532D revision 4a
> Supported: 8 7 6 5 & some of 8
> Configuration:
> Logical max current
> cylinders 16383 16383
> heads 16 16
> sectors/track 63 63
> --
> CHS current addressable sectors: 16514064
> LBA user addressable sectors: 250445824
> device size with M = 1024*1024: 122288 MBytes
> device size with M = 1000*1000: 128228 MBytes (128 GB)
..
That's odd. I kind of expected to see the sector size,
cache size, and perhaps media rotation rate reported there..
Can you update your hdparm (sourceforge) and repost?
There might be other useful features of that drive,
which some of us are quite curious to know about! :)
Thanks
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 23:48 ` Jeff Garzik
@ 2009-04-04 12:46 ` Mark Lord
2009-04-04 12:52 ` Huang Yuntao
0 siblings, 1 reply; 419+ messages in thread
From: Mark Lord @ 2009-04-04 12:46 UTC (permalink / raw)
To: Jeff Garzik
Cc: Linus Torvalds, Chris Mason, Andrew Morton, Jens Axboe,
Linux Kernel Mailing List
Jeff Garzik wrote:
> Linus Torvalds wrote:
>> "fio" does well:
>>
>> http://git.kernel.dk/?p=fio.git;a=summary
>
> Neat tool, Jens...
>
>
>> and I think it comes with a few example files. Here's the random write
>> file that Jens suggested, and that works pretty well..
>>
>> It first creates a 2GB file to do the IO on, then does random 4k
>> writes to it with O_DIRECT.
>>
>> If your SSD does badly at it, you'll just want to kill it, but it
>> shows you how many MB/s it's doing (or, in the sucky case, how many
>> kB/s).
>
>
> heh, so far, the SSD is poking along...
>
> Jobs: 1 (f=1): [w] [2.5% done] [ 0/ 282 kb/s] [eta 02h:24m:59s]
..
Try turning on the drive write cache? hdparm -W1
^ permalink raw reply [flat|nested] 419+ messages in thread
* Linux 2.6.29
2009-04-04 12:46 ` Mark Lord
@ 2009-04-04 12:52 ` Huang Yuntao
0 siblings, 0 replies; 419+ messages in thread
From: Huang Yuntao @ 2009-04-04 12:52 UTC (permalink / raw)
To: 'Linux Kernel Mailing List'
unsubscribe linux-kernel
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 23:30 ` Jeff Garzik
@ 2009-04-04 16:29 ` Janne Grunau
2009-04-04 23:02 ` Jeff Garzik
0 siblings, 1 reply; 419+ messages in thread
From: Janne Grunau @ 2009-04-04 16:29 UTC (permalink / raw)
To: Jeff Garzik
Cc: David Rees, Mark Lord, Lennart Sorensen, Jens Axboe,
Linus Torvalds, Ingo Molnar, Andrew Morton, tytso, jesper,
Linux Kernel Mailing List
On Fri, Apr 03, 2009 at 07:30:37PM -0400, Jeff Garzik wrote:
> David Rees wrote:
> > The *only* reason MythTV fsyncs (or fdatasyncs) the data to disk all
> > the time is to keep a large amount of dirty pages from building up and
> > then causing horrible latencies when that data starts getting flushed
> > to disk.
>
> sync_file_range() will definitely help that situation.
Jeff, could you please try following patch for 0.21 or update to the
latest trunk revision. I don't have a way to reproduce the high
latencies with fdatasync on ext3, data=ordered. Doing a parallel
"dd if=/dev/zero of=file" on the same partition introduces even with
sync_file_range latencies over 1 second.
Janne
---
Index: configure
===================================================================
--- configure (revision 20302)
+++ configure (working copy)
@@ -873,6 +873,7 @@
sdl_video_size
soundcard_h
stdint_h
+ sync_file_range
sys_poll_h
sys_soundcard_h
termios_h
@@ -2413,6 +2414,17 @@
int main( void ) { return (round(3.999f) > 0)?0:1; }
EOF
+# test for sync_file_range (linux only system call since 2.6.17)
+check_ld <<EOF && enable sync_file_range
+#define _GNU_SOURCE
+#include <fcntl.h>
+
+int main(int argc, char **argv){
+ sync_file_range(0,0,0,0);
+ return 0;
+}
+EOF
+
# test for sizeof(int)
for sizeof in 1 2 4 8 16; do
check_cc <<EOF && _sizeof_int=$sizeof && break
Index: libs/libmythtv/ThreadedFileWriter.cpp
===================================================================
--- libs/libmythtv/ThreadedFileWriter.cpp (revision 20302)
+++ libs/libmythtv/ThreadedFileWriter.cpp (working copy)
@@ -18,6 +18,7 @@
#include "ThreadedFileWriter.h"
#include "mythcontext.h"
#include "compat.h"
+#include "mythconfig.h"
#if defined(_POSIX_SYNCHRONIZED_IO) && _POSIX_SYNCHRONIZED_IO > 0
#define HAVE_FDATASYNC
@@ -122,6 +123,7 @@
// file stuff
filename(QDeepCopy<QString>(fname)), flags(pflags),
mode(pmode), fd(-1),
+ m_file_sync(0), m_file_wpos(0),
// state
no_writes(false), flush(false),
in_dtor(false), ignore_writes(false),
@@ -154,6 +156,8 @@
buf = new char[TFW_DEF_BUF_SIZE + 1024];
bzero(buf, TFW_DEF_BUF_SIZE + 64);
+ m_file_sync = m_file_wpos = 0;
+
tfw_buf_size = TFW_DEF_BUF_SIZE;
tfw_min_write_size = TFW_MIN_WRITE_SIZE;
pthread_create(&writer, NULL, boot_writer, this);
@@ -292,7 +296,22 @@
{
if (fd >= 0)
{
-#ifdef HAVE_FDATASYNC
+#ifdef HAVE_SYNC_FILE_RANGE
+ uint64_t write_position;
+
+ buflock.lock();
+ write_position = m_file_wpos;
+ buflock.unlock();
+
+ if ((write_position - m_file_sync) > TFW_MAX_WRITE_SIZE ||
+ (write_position && m_file_sync < (uint64_t)tfw_min_write_size))
+ {
+ sync_file_range(fd, m_file_sync, write_position - m_file_sync,
+ SYNC_FILE_RANGE_WRITE);
+ m_file_sync = write_position;
+ }
+
+#elif defined(HAVE_FDATASYNC)
fdatasync(fd);
#else
fsync(fd);
@@ -414,6 +433,7 @@
buflock.lock();
rpos = (rpos + size) % tfw_buf_size;
+ m_file_wpos += size;
buflock.unlock();
bufferWroteData.wakeAll();
Index: libs/libmythtv/ThreadedFileWriter.h
===================================================================
--- libs/libmythtv/ThreadedFileWriter.h (revision 20302)
+++ libs/libmythtv/ThreadedFileWriter.h (working copy)
@@ -40,6 +40,8 @@
int flags;
mode_t mode;
int fd;
+ uint64_t m_file_sync; ///< offset synced to disk
+ uint64_t m_file_wpos; ///< offset written to disk
// state
bool no_writes;
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-04 12:44 ` Mark Lord
@ 2009-04-04 21:10 ` Jeff Garzik
0 siblings, 0 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-04-04 21:10 UTC (permalink / raw)
To: Mark Lord
Cc: Linus Torvalds, Chris Mason, Andrew Morton, David Rees,
Linux Kernel Mailing List
[-- Attachment #1: Type: text/plain, Size: 1364 bytes --]
Mark Lord wrote:
> Jeff Garzik wrote:
>>
>> I've attached 'hdparm -I' in case anyone is curious. It's from
>> newegg.com, so nothing NDA'd or sekrit.
>>
>> ATA device, with non-removable media
>> Model Number: G.SKILL 128GB SSD
>> Serial Number: MK0108480A545003B Firmware Revision:
>> 02.10104
>> Standards:
>> Used: ATA/ATAPI-7 T13 1532D revision 4a Supported: 8 7 6 5 &
>> some of 8
>> Configuration:
>> Logical max current
>> cylinders 16383 16383
>> heads 16 16
>> sectors/track 63 63
>> --
>> CHS current addressable sectors: 16514064
>> LBA user addressable sectors: 250445824
>> device size with M = 1024*1024: 122288 MBytes
>> device size with M = 1000*1000: 128228 MBytes (128 GB)
> ..
>
> That's odd. I kind of expected to see the sector size,
> cache size, and perhaps media rotation rate reported there..
> Can you update your hdparm (sourceforge) and repost?
>
> There might be other useful features of that drive,
> which some of us are quite curious to know about! :)
Here's output of hdparm 9.12, from Fedora rawhide.
I was unaware that both read-ahead and writeback caching were disabling
on this drive, until that was pointed out to me in email. huh.
I'll have to redo my tests...
Jeff
[-- Attachment #2: ssd-hdparm.txt --]
[-- Type: text/plain, Size: 1411 bytes --]
/dev/sdb:
ATA device, with non-removable media
Model Number: G.SKILL 128GB SSD
Serial Number: MK0108480A545003B
Firmware Revision: 02.10104
Standards:
Used: ATA/ATAPI-7 T13 1532D revision 4a
Supported: 8 7 6 5 & some of 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 250445824
Logical/Physical Sector size: 512 bytes
device size with M = 1024*1024: 122288 MBytes
device size with M = 1000*1000: 128228 MBytes (128 GB)
cache/buffer size = unknown
Capabilities:
LBA, IORDY(can be disabled)
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 1 Current = ?
Recommended acoustic management value: 128, current value: 254
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
* Power Management feature set
* Write cache
Look-ahead
* Mandatory FLUSH_CACHE
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Host-initiated interface power management
* Phy event counters
Checksum: correct
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-04 16:29 ` Janne Grunau
@ 2009-04-04 23:02 ` Jeff Garzik
2009-04-05 14:20 ` Janne Grunau
2009-04-06 14:06 ` Lennart Sorensen
0 siblings, 2 replies; 419+ messages in thread
From: Jeff Garzik @ 2009-04-04 23:02 UTC (permalink / raw)
To: Janne Grunau
Cc: David Rees, Mark Lord, Lennart Sorensen, Jens Axboe,
Linus Torvalds, Ingo Molnar, Andrew Morton, tytso, jesper,
Linux Kernel Mailing List
Janne Grunau wrote:
> On Fri, Apr 03, 2009 at 07:30:37PM -0400, Jeff Garzik wrote:
>> David Rees wrote:
>>> The *only* reason MythTV fsyncs (or fdatasyncs) the data to disk all
>>> the time is to keep a large amount of dirty pages from building up and
>>> then causing horrible latencies when that data starts getting flushed
>>> to disk.
>> sync_file_range() will definitely help that situation.
>
> Jeff, could you please try following patch for 0.21 or update to the
> latest trunk revision. I don't have a way to reproduce the high
> latencies with fdatasync on ext3, data=ordered. Doing a parallel
> "dd if=/dev/zero of=file" on the same partition introduces even with
> sync_file_range latencies over 1 second.
Is dd + sync_file_range really a realistic comparison? dd is streaming
as fast as the disk can output data, whereas MythTV is streaming as fast
as video is being recorded. If you are maxing out your disk throughput,
there will be obvious impact no matter what.
I would think a more accurate comparison would be recording multiple
video streams in parallel, comparing fsync/fdatasync/sync_file_range?
IOW, what is an average MythTV setup -- what processes are actively
reading/writing storage? Where are you noticing latencies, and does
sync_file_range decrease those areas of high latency?
Jeff
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-04 8:18 ` Andrew Morton
2009-04-04 12:40 ` Mark Lord
@ 2009-04-05 1:57 ` David Newall
2009-04-05 3:46 ` Mark Lord
1 sibling, 1 reply; 419+ messages in thread
From: David Newall @ 2009-04-05 1:57 UTC (permalink / raw)
To: Andrew Morton
Cc: Jeff Garzik, Lennart Sorensen, Mark Lord, torvalds, tytso,
drees76, jesper, linux-kernel
Andrew Morton wrote:
> - open the file
>
> - unlink the file
>
> - now sit in a loop, slowly nibbling away at the tail with
> ftruncate() until the file is gone.
Why not fork and unlink in the child?
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-05 1:57 ` David Newall
@ 2009-04-05 3:46 ` Mark Lord
0 siblings, 0 replies; 419+ messages in thread
From: Mark Lord @ 2009-04-05 3:46 UTC (permalink / raw)
To: David Newall
Cc: Andrew Morton, Jeff Garzik, Lennart Sorensen, torvalds, tytso,
drees76, jesper, linux-kernel
David Newall wrote:
> Andrew Morton wrote:
>> - open the file
>>
>> - unlink the file
>>
>> - now sit in a loop, slowly nibbling away at the tail with
>> ftruncate() until the file is gone.
>
> Why not fork and unlink in the child?
..
I think it does the equivalent of that today.
Problem is, if you do the unlink without the nibbling,
then the disk locks up the system cold for 2-3 minutes
until the disk delete actually completes.
-ml
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-04 23:02 ` Jeff Garzik
@ 2009-04-05 14:20 ` Janne Grunau
2009-04-06 14:06 ` Lennart Sorensen
1 sibling, 0 replies; 419+ messages in thread
From: Janne Grunau @ 2009-04-05 14:20 UTC (permalink / raw)
To: Jeff Garzik
Cc: David Rees, Mark Lord, Lennart Sorensen, Jens Axboe,
Linus Torvalds, Ingo Molnar, Andrew Morton, tytso, jesper,
Linux Kernel Mailing List
On Sat, Apr 04, 2009 at 07:02:51PM -0400, Jeff Garzik wrote:
> Janne Grunau wrote:
> >
> > Jeff, could you please try following patch for 0.21 or update to the
> > latest trunk revision. I don't have a way to reproduce the high
> > latencies with fdatasync on ext3, data=ordered. Doing a parallel
> > "dd if=/dev/zero of=file" on the same partition introduces even with
> > sync_file_range latencies over 1 second.
>
> Is dd + sync_file_range really a realistic comparison? dd is streaming
> as fast as the disk can output data, whereas MythTV is streaming as fast
> as video is being recorded. If you are maxing out your disk throughput,
> there will be obvious impact no matter what.
sure, I tried simulating a case where the fsync/fdatasync from mythtv
impose high latencies on other processes due to syncing other big writes
too.
> I would think a more accurate comparison would be recording multiple
> video streams in parallel, comparing fsync/fdatasync/sync_file_range?
I tested 3 simultaneous recordings and haven't noticed a difference. I'm
even sure if I should. With multiple recording at the same time mythtv
would also call fdatasync multiple times per second.
I guess I could compare how long fdatasync and sync_file_range with
SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE |
SYNC_FILE_RANGE_WAIT_AFTER are blocking. Not that mythtv would care
since they are called in it's own thread.
> IOW, what is an average MythTV setup -- what processes are actively
> reading/writing storage?
writing:
mythbackend - recordings and preview images (2-20mbps)
reading:
mythfrontend - viewing (2-20mbps)
mythcommflag - faster than viewing, maybe up to 50mbps (depending on cpu)
writing+reading:
mythtranscode - combined rate less than 50mbps, usually more reads than
writes (depending on cpu)
mythtranscode (lossless) - around maximal disk throughput
> Where are you noticing latencies, and does
> sync_file_range decrease those areas of high latency?
I don't notice latencies in mythtv, at least no for which file systems
or the block layer can be blamed for. But my setup is build to avoid
these. Mythtv records to it's own disks formatted with xfs. Mythtv
generally tries to spread simultaneous recodings over different file
systems. The tests were on a different system though.
Janne
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-04 23:02 ` Jeff Garzik
2009-04-05 14:20 ` Janne Grunau
@ 2009-04-06 14:06 ` Lennart Sorensen
1 sibling, 0 replies; 419+ messages in thread
From: Lennart Sorensen @ 2009-04-06 14:06 UTC (permalink / raw)
To: Jeff Garzik
Cc: Janne Grunau, David Rees, Mark Lord, Jens Axboe, Linus Torvalds,
Ingo Molnar, Andrew Morton, tytso, jesper,
Linux Kernel Mailing List
On Sat, Apr 04, 2009 at 07:02:51PM -0400, Jeff Garzik wrote:
> Is dd + sync_file_range really a realistic comparison? dd is streaming
> as fast as the disk can output data, whereas MythTV is streaming as fast
> as video is being recorded. If you are maxing out your disk throughput,
> there will be obvious impact no matter what.
>
> I would think a more accurate comparison would be recording multiple
> video streams in parallel, comparing fsync/fdatasync/sync_file_range?
>
> IOW, what is an average MythTV setup -- what processes are actively
> reading/writing storage? Where are you noticing latencies, and does
> sync_file_range decrease those areas of high latency?
I am going to give the patch a shot. I run dual tuners after all, so
I do get multiple streams recording while doing playback at the same time.
--
Len Sorensen
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 22:28 ` Jeff Moyer
@ 2009-04-06 14:15 ` Lennart Sorensen
2009-04-06 21:27 ` Mark Lord
0 siblings, 1 reply; 419+ messages in thread
From: Lennart Sorensen @ 2009-04-06 14:15 UTC (permalink / raw)
To: Jeff Moyer
Cc: Ingo Molnar, Andrew Morton, torvalds, tytso, drees76, jesper,
linux-kernel, jens.axboe
On Fri, Apr 03, 2009 at 06:28:36PM -0400, Jeff Moyer wrote:
> Could you try one more test, please? Switch back to CFQ and set
> /sys/block/sdX/queue/iosched/slice_idle to 0?
I actually am running cfq at the moment, but with Mark's (I think it
was) preload library to change fsync calls to at most one per 5 seconds
instead of 10 per second. So far that has certainly made things a lot
better as far as I can tell. Maybe not as good as anticipatory seemed
to be but certainly better.
I can try your suggestion too.
I set sda-sdd to 0. I removed the preload library from the mythbackend.
> I'm not sure how the applications you are running write to disk, but if
> they interleave I/O between processes, this could help. I'm not too
> confident that this will make a difference, though, since CFQ changed to
> time-slice based instead of quantum based before 2.6.18. Still, it
> would be another data point if you have the time.
Well when recording two shows at once, there will be two processes
streaming to seperate files, and usually there will be two commercial
flagging processes following behind reading those files and doing mysql
updates as they go.
> Thanks in advance!
No problem. If it solves this bad behaviour, it will be all worth it.
--
Len Sorensen
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-06 14:15 ` Lennart Sorensen
@ 2009-04-06 21:27 ` Mark Lord
2009-04-06 21:56 ` Lennart Sorensen
0 siblings, 1 reply; 419+ messages in thread
From: Mark Lord @ 2009-04-06 21:27 UTC (permalink / raw)
To: Lennart Sorensen
Cc: Jeff Moyer, Ingo Molnar, Andrew Morton, torvalds, tytso, drees76,
jesper, linux-kernel, jens.axboe
Lennart Sorensen wrote:
> On Fri, Apr 03, 2009 at 06:28:36PM -0400, Jeff Moyer wrote:
>> Could you try one more test, please? Switch back to CFQ and set
>> /sys/block/sdX/queue/iosched/slice_idle to 0?
>
> I actually am running cfq at the moment, but with Mark's (I think it
> was) preload library to change fsync calls to at most one per 5 seconds
> instead of 10 per second. So far that has certainly made things a lot
> better as far as I can tell. Maybe not as good as anticipatory seemed
> to be but certainly better.
>
> I can try your suggestion too.
..
Yeah, I think the sync_file_range() patch is the way to go.
It seems to be smooth enough here with four or five simultaneous recordings,
a couple of commflaggers, and an HD playback all happening at once.
Cheers
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-03 8:15 ` Ingo Molnar
@ 2009-04-06 21:46 ` Bill Davidsen
0 siblings, 0 replies; 419+ messages in thread
From: Bill Davidsen @ 2009-04-06 21:46 UTC (permalink / raw)
To: linux-kernel
Cc: Jens Axboe, Nick Piggin, Linus Torvalds, Lennart Sorensen,
Andrew Morton, tytso, drees76, jesper, Linux Kernel Mailing List,
Peter Zijlstra
Ingo Molnar wrote:
> Ergo, i think pluggable designs for something as critical and as
> central as IO scheduling has its clear downsides as it created two
> mediocre schedulers:
>
> - CFQ with all the modern features but performance problems on
> certain workloads
>
> - Anticipatory with legacy features only but works (much!) better
> on some workloads.
>
> ... instead of giving us just a single well-working CFQ scheduler.
>
> This, IMHO, in its current form, seems to trump the upsides of IO
> schedulers.
>
> So i do think that late during development (i.e. now), _years_ down
> the line, we should make it gradually harder for people to use AS.
>
I rarely disagree with you, and more rarely feel like arguing a point in public,
but you are basing your whole opinion on the premise that it is possible to have
one io scheduler which handles all cases. And that seems obviously wrong,
because you address different types of activity with tuning or adapting, in some
cases you need a whole different approach, and you need to lock in that approach
even if some metric says something else would be better for the "better" seen by
the developer rather than the user.
> What do you think?
>
I think that by trying to create "one size fits all" you will hit a significant
number of cases where it really doesn't fit well and you have so many tuning
features both automatic and manual that you wind up with code which is big,
inefficient, confusing to tune, hard to maintain, and generally not optimal for
any one thing.
What we have is easy to test and the behavior is different enough in most cases
that you can tell which is best, or at least that a change didn't help. I have
watched long threads and chats about tuning VM (dirty_*, swappiness, etc) to be
aware that in most cases either faster disk or more memory is the answer, not
tuning to be "less unsatisfactory." Several distinct io schedulers is good, one
complex bland one would not be.
--
Bill Davidsen <davidsen@tmr.com>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot
^ permalink raw reply [flat|nested] 419+ messages in thread
* Re: Linux 2.6.29
2009-04-06 21:27 ` Mark Lord
@ 2009-04-06 21:56 ` Lennart Sorensen
0 siblings, 0 replies; 419+ messages in thread
From: Lennart Sorensen @ 2009-04-06 21:56 UTC (permalink / raw)
To: Mark Lord
Cc: Jeff Moyer, Ingo Molnar, Andrew Morton, torvalds, tytso, drees76,
jesper, linux-kernel, jens.axboe
On Mon, Apr 06, 2009 at 05:27:10PM -0400, Mark Lord wrote:
> Yeah, I think the sync_file_range() patch is the way to go.
> It seems to be smooth enough here with four or five simultaneous recordings,
> a couple of commflaggers, and an HD playback all happening at once.
Well would be worth a try. So far I am not sure if the slice_idle works
or not. I will have to try playback when I get home and see how it feels.
--
Len Sorensen
^ permalink raw reply [flat|nested] 419+ messages in thread
end of thread, other threads:[~2009-04-06 22:02 UTC | newest]
Thread overview: 419+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
[not found] <cj1Ut-1i2-7@gated-at.bofh.it>
[not found] ` <cj8ji-35a-5@gated-at.bofh.it>
[not found] ` <cj8CC-3uO-5@gated-at.bofh.it>
[not found] ` <cjaXP-7vB-17@gated-at.bofh.it>
[not found] ` <cjbhc-7VP-13@gated-at.bofh.it>
[not found] ` <cjbTS-AR-11@gated-at.bofh.it>
[not found] ` <cjcdl-10E-31@gated-at.bofh.it>
[not found] ` <cjeS0-5nC-33@gated-at.bofh.it>
2009-03-25 15:19 ` Linux 2.6.29 Bodo Eggert
[not found] ` <cj9oW-4PO-1@gated-at.bofh.it>
[not found] ` <cjkaL-5xg-15@gated-at.bofh.it>
[not found] ` <cjGbn-6Xx-35@gated-at.bofh.it>
[not found] ` <cjGl4-7ak-61@gated-at.bofh.it>
[not found] ` <cjJsv-3Uu-5@gated-at.bofh.it>
[not found] ` <cjKHX-5MF-19@gated-at.bofh.it>
[not found] ` <ck7XP-JA-13@gated-at.bofh.it>
[not found] ` <ck8h7-18H-1@gated-at.bofh.it>
[not found] ` <ck8AA-1y0-7@gated-at.bofh.it>
[not found] ` <ck8Kd-20o-7@gated-at.bofh.it>
[not found] ` <ckaVR-5o8-9@gated-at.bofh.it>
[not found] ` <ckbf4-5M8-7@gated-at.bofh.it>
[not found] ` <ckcE8-845-9@gated-at.bofh.it>
2009-03-27 21:53 ` Bodo Eggert
2009-03-28 6:51 ` Mike Galbraith
2009-03-28 12:12 ` Theodore Tso
[not found] ` <ckdgW-uA-1@gated-at.bofh.it>
[not found] ` <ckdK0-1om-1@gated-at.bofh.it>
[not found] ` <ckiqk-uZ-13@gated-at.bofh.it>
[not found] ` <cklHL-5IW-23@gated-at.bofh.it>
[not found] ` <ckm0Q-6o4-3@gated-at.bofh.it>
[not found] ` <ckmaG-6AC-5@gated-at.bofh.it>
[not found] ` <ckmWO-7Ro-5@gated-at.bofh.it>
[not found] ` <ckn6L-84h-19@gated-at.bofh.it>
[not found] ` <cknzG-ff-7@gated-at.bofh.it>
[not found] ` <cknJk-IB-9@gated-at.bofh.it>
[not found] ` <ckoFm-2eg-3@gated-at.bofh.it>
[not found] ` <ckoYP-2DC-13@gated-at.bofh.it>
2009-03-28 11:53 ` Bodo Eggert
2009-03-29 14:45 ` Pavel Machek
2009-03-29 15:47 ` Linus Torvalds
2009-03-29 19:15 ` Pavel Machek
2009-03-30 14:22 ` Morten P.D. Stevens
[not found] ` <ck93x-2oJ-3@gated-at.bofh.it>
2009-03-27 23:22 ` Bodo Eggert
2009-03-23 23:29 Linus Torvalds
2009-03-24 6:19 ` Jesper Krogh
2009-03-24 6:46 ` David Rees
2009-03-24 7:32 ` Jesper Krogh
2009-03-24 8:16 ` Ingo Molnar
2009-03-24 11:10 ` Jesper Krogh
2009-03-24 19:00 ` David Rees
2009-03-25 17:42 ` Jesper Krogh
2009-03-25 18:16 ` David Rees
2009-03-25 18:46 ` Jesper Krogh
2009-03-25 18:30 ` Theodore Tso
2009-03-25 18:40 ` Linus Torvalds
2009-03-25 22:05 ` Theodore Tso
2009-03-25 23:23 ` Linus Torvalds
2009-03-25 23:46 ` Bron Gondwana
2009-03-26 0:32 ` Ric Wheeler
2009-03-27 0:11 ` Andrew Morton
2009-03-27 0:27 ` Linus Torvalds
2009-03-27 0:47 ` Andrew Morton
2009-03-27 1:03 ` Linus Torvalds
2009-03-27 1:25 ` Andrew Morton
2009-03-27 2:21 ` David Rees
2009-03-27 3:03 ` Matthew Garrett
2009-03-27 3:36 ` Dave Jones
2009-03-27 3:01 ` Matthew Garrett
2009-03-27 3:38 ` Linus Torvalds
2009-03-27 3:59 ` Linus Torvalds
2009-03-28 23:52 ` david
2009-03-28 5:06 ` Ingo Molnar
2009-04-01 21:03 ` Lennart Sorensen
2009-04-01 21:36 ` Andrew Morton
2009-04-01 22:57 ` Lennart Sorensen
2009-04-03 14:46 ` Mark Lord
2009-04-03 15:16 ` Lennart Sorensen
2009-04-03 15:42 ` Mark Lord
2009-04-03 18:59 ` Jeff Garzik
2009-04-04 8:18 ` Andrew Morton
2009-04-04 12:40 ` Mark Lord
2009-04-05 1:57 ` David Newall
2009-04-05 3:46 ` Mark Lord
2009-04-02 1:00 ` Ingo Molnar
2009-04-03 4:06 ` Lennart Sorensen
2009-04-03 4:13 ` Linus Torvalds
2009-04-03 7:25 ` Jens Axboe
2009-04-03 8:15 ` Ingo Molnar
2009-04-06 21:46 ` Bill Davidsen
2009-04-03 14:21 ` Lennart Sorensen
2009-04-03 15:05 ` Mark Lord
2009-04-03 15:14 ` Lennart Sorensen
2009-04-03 19:57 ` Jeff Garzik
2009-04-03 21:28 ` Janne Grunau
2009-04-03 21:57 ` Jeff Garzik
2009-04-03 22:32 ` Janne Grunau
2009-04-03 22:57 ` David Rees
2009-04-03 23:29 ` Jeff Garzik
2009-04-03 23:52 ` Linus Torvalds
2009-04-03 22:53 ` David Rees
2009-04-03 23:30 ` Jeff Garzik
2009-04-04 16:29 ` Janne Grunau
2009-04-04 23:02 ` Jeff Garzik
2009-04-05 14:20 ` Janne Grunau
2009-04-06 14:06 ` Lennart Sorensen
2009-04-03 22:28 ` Jeff Moyer
2009-04-06 14:15 ` Lennart Sorensen
2009-04-06 21:27 ` Mark Lord
2009-04-06 21:56 ` Lennart Sorensen
2009-04-02 11:05 ` Janne Grunau
2009-04-02 16:09 ` Andrew Morton
2009-04-02 16:33 ` David Rees
2009-04-02 16:46 ` Linus Torvalds
2009-04-02 16:51 ` Andrew Morton
2009-04-02 22:13 ` Jeff Garzik
2009-04-02 21:56 ` Jeff Garzik
2009-04-02 16:29 ` David Rees
2009-04-02 16:42 ` Andrew Morton
2009-04-02 16:57 ` Linus Torvalds
2009-04-02 17:04 ` Linus Torvalds
2009-04-02 22:09 ` Jeff Garzik
2009-04-02 22:42 ` Linus Torvalds
2009-04-02 22:51 ` Andrew Morton
2009-04-02 23:00 ` Linus Torvalds
2009-04-03 2:01 ` Jeff Garzik
2009-04-03 2:16 ` Linus Torvalds
2009-04-03 3:05 ` Jeff Garzik
2009-04-03 3:34 ` Linus Torvalds
2009-04-03 11:32 ` Chris Mason
2009-04-03 15:07 ` Linus Torvalds
2009-04-03 15:40 ` Chris Mason
2009-04-03 20:05 ` Jeff Garzik
2009-04-03 20:14 ` Linus Torvalds
2009-04-03 21:48 ` Jeff Garzik
2009-04-03 22:06 ` Linus Torvalds
2009-04-03 23:48 ` Jeff Garzik
2009-04-04 12:46 ` Mark Lord
2009-04-04 12:52 ` Huang Yuntao
2009-04-03 23:35 ` Dave Jones
2009-04-04 12:44 ` Mark Lord
2009-04-04 21:10 ` Jeff Garzik
2009-04-03 5:05 ` Nick Piggin
2009-04-03 8:31 ` Jeff Garzik
2009-04-03 8:35 ` Jeff Garzik
2009-04-03 2:38 ` Trenton D. Adams
2009-04-03 2:54 ` Jeff Garzik
2009-04-03 15:14 ` Mark Lord
2009-04-03 15:18 ` Lennart Sorensen
2009-04-03 15:46 ` Mark Lord
2009-04-03 15:28 ` Linus Torvalds
2009-04-02 18:52 ` David Rees
2009-04-02 21:42 ` Theodore Tso
2009-04-02 21:50 ` Lennart Sorensen
2009-04-03 15:07 ` Mark Lord
2009-04-02 12:17 ` Theodore Tso
2009-04-02 21:54 ` Lennart Sorensen
2009-04-02 23:27 ` Theodore Tso
2009-04-03 0:32 ` Lennart Sorensen
2009-03-27 3:23 ` Theodore Tso
2009-03-27 3:47 ` Matthew Garrett
2009-03-27 5:13 ` Theodore Tso
2009-03-27 5:57 ` Matthew Garrett
2009-03-27 6:21 ` Matthew Garrett
2009-03-27 11:24 ` Theodore Tso
2009-03-27 14:51 ` Matthew Garrett
2009-03-27 15:08 ` Alan Cox
2009-03-27 15:22 ` Matthew Garrett
2009-03-27 16:15 ` Alan Cox
2009-03-27 16:28 ` Matthew Garrett
2009-03-27 16:51 ` Alan Cox
2009-03-27 17:02 ` Matthew Garrett
2009-03-27 17:19 ` Alan Cox
2009-03-27 18:05 ` Linus Torvalds
2009-03-27 18:35 ` Alan Cox
2009-03-27 19:03 ` Theodore Tso
2009-03-27 19:14 ` Alan Cox
2009-03-27 19:32 ` Theodore Tso
2009-03-27 20:11 ` Andreas T.Auer
2009-03-27 22:01 ` Linus Torvalds
2009-03-31 9:58 ` Neil Brown
2009-03-27 19:19 ` Gene Heskett
2009-03-27 19:48 ` Theodore Tso
2009-03-27 20:02 ` Aaron Cohen
[not found] ` <727e50150903271301l36cff340l33e813bf6f77b4b@mail.gmail.com>
2009-03-27 20:04 ` Theodore Tso
2009-03-27 22:37 ` Gene Heskett
2009-03-27 22:55 ` Theodore Tso
2009-03-28 0:42 ` Gene Heskett
2009-03-27 18:36 ` Hua Zhong
2009-03-27 17:57 ` Linus Torvalds
2009-03-27 18:22 ` Linus Torvalds
2009-03-27 18:32 ` Alan Cox
2009-03-27 18:40 ` Linus Torvalds
2009-03-27 19:00 ` Alan Cox
2009-03-29 9:15 ` Xavier Bestel
2009-03-29 20:16 ` Alan Cox
2009-03-29 21:07 ` Linus Torvalds
2009-03-30 19:37 ` Jeremy Fitzhardinge
2009-03-27 20:27 ` Felipe Contreras
2009-03-27 19:43 ` Jeff Garzik
2009-03-27 20:01 ` Theodore Tso
2009-03-27 22:20 ` Jeff Garzik
2009-03-27 21:46 ` Linus Torvalds
2009-03-27 22:06 ` Jeff Garzik
2009-03-27 22:19 ` Linus Torvalds
2009-03-27 22:25 ` Linus Torvalds
2009-03-28 1:19 ` Jeff Garzik
2009-03-28 1:30 ` David Miller
2009-03-28 2:19 ` Mark Lord
2009-03-28 2:49 ` Jeff Garzik
2009-03-28 13:29 ` Stefan Richter
2009-03-28 14:17 ` Jeff Garzik
2009-03-28 14:35 ` Stefan Richter
2009-03-28 15:17 ` Mark Lord
2009-03-28 16:08 ` Stefan Richter
2009-03-28 16:32 ` Linus Torvalds
2009-03-28 17:22 ` David Hagood
2009-03-29 1:18 ` Jeff Garzik
2009-03-31 18:45 ` Jörn Engel
2009-03-29 23:14 ` Dave Chinner
2009-03-30 0:39 ` Theodore Tso
2009-03-30 1:29 ` Trenton Adams
2009-03-30 3:28 ` Theodore Tso
2009-03-30 3:55 ` Trenton D. Adams
2009-03-30 13:45 ` Theodore Tso
2009-03-30 6:31 ` Dave Chinner
2009-03-30 13:55 ` Theodore Tso
2009-03-30 7:13 ` Andreas T.Auer
2009-03-30 9:05 ` Alan Cox
2009-03-30 10:49 ` Andreas T.Auer
2009-03-30 11:12 ` Alan Cox
2009-03-30 11:17 ` Ric Wheeler
2009-03-30 13:48 ` Mark Lord
2009-03-30 14:00 ` Ric Wheeler
2009-03-30 14:44 ` Mark Lord
2009-03-30 14:58 ` Ric Wheeler
2009-03-30 15:21 ` Mark Lord
2009-03-30 15:27 ` Ric Wheeler
2009-03-30 16:13 ` Linus Torvalds
2009-03-30 16:30 ` Mark Lord
2009-03-30 16:58 ` Linus Torvalds
2009-03-30 17:29 ` Mark Lord
2009-03-30 17:57 ` Chris Mason
2009-03-30 18:39 ` Mark Lord
2009-03-30 18:52 ` Chris Mason
2009-03-30 20:19 ` Mark Lord
2009-03-30 18:54 ` Pasi Kärkkäinen
2009-03-30 15:00 ` Jeff Garzik
2009-03-30 15:34 ` Linus Torvalds
2009-03-30 16:11 ` Ric Wheeler
2009-03-30 16:34 ` Linus Torvalds
2009-03-30 17:11 ` Ric Wheeler
2009-03-30 17:39 ` Mark Lord
2009-03-30 17:51 ` Linus Torvalds
2009-03-30 18:15 ` Ric Wheeler
2009-03-30 19:08 ` Eric Sandeen
2009-03-30 19:22 ` Rik van Riel
2009-03-30 19:41 ` Jeff Garzik
2009-03-30 20:21 ` Michael Tokarev
2009-03-30 20:26 ` Mark Lord
2009-03-30 20:29 ` Mark Lord
2009-03-30 20:35 ` Jeff Garzik
2009-03-30 20:40 ` Mark Lord
2009-03-30 20:34 ` Jeff Garzik
2009-03-30 20:05 ` Linus Torvalds
2009-03-31 9:27 ` Neil Brown
2009-03-31 21:13 ` Alan Cox
2009-03-31 21:10 ` Alan Cox
2009-03-31 21:55 ` Linus Torvalds
2009-03-30 19:02 ` Bill Davidsen
2009-04-01 1:19 ` david
2009-04-01 16:24 ` Bill Davidsen
2009-04-01 20:15 ` david
2009-04-01 21:33 ` Andreas T.Auer
2009-04-01 22:29 ` david
2009-04-02 2:30 ` Bron Gondwana
2009-04-02 4:55 ` david
2009-04-02 5:29 ` Bron Gondwana
2009-04-02 9:58 ` Andreas T.Auer
2009-04-02 12:30 ` Bill Davidsen
2009-04-01 22:00 ` Harald Arnesen
2009-04-01 22:09 ` Alejandro Riveira Fernández
2009-04-01 22:28 ` david
2009-03-30 3:01 ` Mark Lord
2009-03-30 6:41 ` Andreas T.Auer
2009-03-30 12:55 ` Chris Mason
2009-03-30 17:42 ` Theodore Tso
2009-03-31 23:55 ` Dave Chinner
2009-04-01 12:53 ` Chris Mason
2009-04-01 15:41 ` Andreas T.Auer
2009-04-01 16:02 ` Chris Mason
2009-04-01 18:37 ` Andreas T.Auer
2009-04-01 21:50 ` Theodore Tso
2009-04-01 23:44 ` Matthew Garrett
2009-03-28 16:25 ` Alex Goebel
2009-03-28 21:12 ` Hua Zhong
2009-03-29 8:22 ` Stefan Richter
2009-03-29 0:33 ` david
2009-03-29 1:24 ` Jeff Garzik
2009-03-29 3:43 ` Theodore Tso
2009-03-29 4:53 ` Jeff Garzik
2009-03-31 15:01 ` Thierry Vignaud
2009-03-28 0:18 ` Jeff Garzik
2009-03-28 1:45 ` Linus Torvalds
2009-03-28 2:53 ` Jeff Garzik
2009-03-28 2:56 ` Zid Null
2009-03-28 3:55 ` Gene Heskett
2009-03-28 11:29 ` Alejandro Riveira Fernández
2009-03-28 2:16 ` Mark Lord
2009-03-28 2:38 ` Linus Torvalds
2009-03-28 11:57 ` Andreas T.Auer
2009-03-27 15:20 ` Giacomo A. Catenazzi
2009-03-27 21:11 ` Jeremy Fitzhardinge
2009-03-28 7:45 ` Bojan Smojver
2009-03-28 8:43 ` Bojan Smojver
2009-03-28 21:55 ` Bojan Smojver
2009-03-31 21:51 ` Jeremy Fitzhardinge
2009-03-31 22:30 ` Bojan Smojver
2009-04-01 5:26 ` Bojan Smojver
2009-04-01 6:35 ` Jeremy Fitzhardinge
2009-04-03 12:39 ` Pavel Machek
2009-03-27 0:51 ` Linus Torvalds
2009-03-27 1:03 ` Andrew Morton
2009-03-27 9:58 ` Alan Cox
2009-03-26 2:50 ` Neil Brown
2009-03-26 3:13 ` Theodore Tso
2009-03-24 9:15 ` Alan Cox
2009-03-24 9:32 ` Ingo Molnar
2009-03-24 10:10 ` Alan Cox
2009-03-24 10:31 ` Ingo Molnar
2009-03-24 11:12 ` Andrew Morton
2009-03-24 12:23 ` Alan Cox
2009-03-24 13:37 ` Theodore Tso
2009-03-25 12:37 ` Jan Kara
2009-03-25 15:00 ` Theodore Tso
2009-03-25 17:29 ` Linus Torvalds
2009-03-25 17:57 ` Alan Cox
2009-03-25 18:09 ` David Rees
2009-03-25 18:21 ` Linus Torvalds
2009-03-25 18:26 ` Linus Torvalds
2009-03-25 18:48 ` Ric Wheeler
2009-03-25 18:49 ` Alan Cox
2009-03-25 18:55 ` Ric Wheeler
2009-03-25 18:58 ` Theodore Tso
2009-03-25 19:48 ` Christoph Hellwig
2009-03-25 21:50 ` Theodore Tso
2009-03-26 2:10 ` Matthew Garrett
2009-03-26 2:36 ` Jeff Garzik
2009-03-26 2:42 ` Matthew Garrett
[not found] ` <f73f7ab80903251944s581166bbk31c26db50750814a@mail.gmail.com>
2009-03-26 2:46 ` Kyle Moffett
2009-03-26 2:51 ` Jeff Garzik
2009-03-26 3:03 ` Kyle Moffett
2009-03-26 3:40 ` Linus Torvalds
2009-03-26 3:57 ` David Miller
2009-03-26 4:58 ` Kyle Moffett
2009-03-26 6:24 ` Jeff Garzik
2009-03-26 12:49 ` Kyle Moffett
2009-03-26 2:47 ` Matthew Garrett
2009-03-26 2:54 ` Kyle Moffett
2009-03-25 20:45 ` Linus Torvalds
2009-03-25 21:51 ` Theodore Tso
2009-03-25 23:21 ` Linus Torvalds
2009-03-25 23:50 ` Jan Kara
2009-03-26 0:04 ` Linus Torvalds
2009-03-25 23:57 ` Linus Torvalds
2009-03-26 0:22 ` Jan Kara
2009-03-26 1:34 ` Linus Torvalds
2009-03-26 2:59 ` Theodore Tso
2009-03-26 16:24 ` Jan Kara
2009-03-24 13:20 ` Theodore Tso
2009-03-24 13:30 ` Ingo Molnar
2009-03-24 13:51 ` Theodore Tso
2009-03-24 16:34 ` Jesper Krogh
2009-03-24 17:32 ` Linus Torvalds
2009-03-24 18:20 ` Mark Lord
2009-03-24 18:41 ` Eric Sandeen
2009-03-24 13:52 ` Alan Cox
2009-03-24 14:28 ` Theodore Tso
2009-03-24 15:18 ` Alan Cox
2009-03-24 17:55 ` Jan Kara
2009-03-24 17:55 ` Linus Torvalds
2009-03-24 18:41 ` Kyle Moffett
2009-03-24 19:17 ` Linus Torvalds
2009-03-24 18:45 ` Theodore Tso
2009-03-24 19:21 ` Linus Torvalds
2009-03-24 19:40 ` Ric Wheeler
2009-03-24 19:55 ` Jeff Garzik
2009-03-25 9:34 ` Benny Halevy
2009-03-25 9:39 ` Jens Axboe
2009-03-25 19:32 ` Jeff Garzik
2009-03-25 19:43 ` Christoph Hellwig
2009-03-25 19:43 ` Jens Axboe
2009-03-25 19:49 ` Ric Wheeler
2009-03-25 19:57 ` Jens Axboe
2009-03-25 20:41 ` Hugh Dickins
2009-03-26 8:57 ` Jens Axboe
2009-03-26 14:47 ` Hugh Dickins
2009-03-26 15:46 ` Jens Axboe
2009-03-26 18:21 ` Hugh Dickins
2009-03-26 18:32 ` Jens Axboe
2009-03-26 19:00 ` Hugh Dickins
2009-03-26 19:03 ` Jens Axboe
2009-03-25 20:16 ` Jeff Garzik
2009-03-25 20:25 ` Ric Wheeler
2009-03-25 21:22 ` James Bottomley
2009-03-26 8:59 ` Jens Axboe
2009-03-25 21:27 ` Benny Halevy
2009-03-25 20:25 ` Jeff Garzik
2009-03-25 20:40 ` Linus Torvalds
2009-03-25 20:57 ` Ric Wheeler
2009-03-25 23:02 ` Linus Torvalds
2009-03-26 0:28 ` Ric Wheeler
2009-03-26 1:36 ` Linus Torvalds
2009-03-25 21:33 ` Jeff Garzik
2009-03-27 7:57 ` Jens Axboe
2009-03-27 14:13 ` Theodore Tso
2009-03-27 14:35 ` Christoph Hellwig
2009-03-27 15:03 ` Ric Wheeler
2009-03-27 20:38 ` Jeff Garzik
2009-03-28 0:14 ` Alan Cox
2009-03-29 8:25 ` Christoph Hellwig
2009-03-27 19:14 ` Chris Mason
2009-03-27 7:46 ` Jens Axboe
2009-03-31 20:49 ` Jeff Garzik
2009-03-31 22:02 ` Ric Wheeler
2009-03-31 22:22 ` Jeff Garzik
2009-04-01 18:34 ` Mark Lord
2009-03-24 20:24 ` David Rees
2009-03-25 7:30 ` David Rees
2009-03-24 23:03 ` Jesse Barnes
2009-03-25 0:05 ` Arjan van de Ven
2009-03-25 17:59 ` David Rees
2009-03-25 18:40 ` Stephen Clark
2009-03-26 23:53 ` Mark Lord
2009-03-25 2:09 ` Theodore Tso
2009-03-25 3:57 ` Jesse Barnes
2009-03-27 11:27 ` Martin Steigerwald
2009-03-24 12:27 ` Andi Kleen
2009-04-02 14:00 ` Mathieu Desnoyers
2009-03-27 13:35 ` Hans-Peter Jansen
2009-03-27 14:53 ` Geert Uytterhoeven
2009-03-27 15:46 ` Mike Galbraith
2009-03-27 16:02 ` Linus Torvalds
2009-03-28 7:50 ` Mike Galbraith
2009-03-30 22:00 ` Hans-Peter Jansen
2009-03-30 22:07 ` Arjan van de Ven
2009-03-30 10:18 ` Pavel Machek
2009-03-31 13:33 ` Rafael J. Wysocki
2009-03-31 15:30 ` Hans-Peter Jansen
2009-03-31 19:37 ` Jeff Garzik
2009-03-31 19:47 ` Arjan van de Ven
2009-04-02 19:01 ` Andreas T.Auer
2009-03-27 16:49 ` Frans Pop
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox