linux-ext4.vger.kernel.org archive mirror
From: bugzilla-daemon@bugzilla.kernel.org
To: linux-ext4@vger.kernel.org
Subject: [Bug 186551] New: mkfs.ext4 tries to discard sector beyond the end of the device
Date: Tue, 01 Nov 2016 17:30:21 +0000
Message-ID: <bug-186551-13602@https.bugzilla.kernel.org/>

https://bugzilla.kernel.org/show_bug.cgi?id=186551

            Bug ID: 186551
           Summary: mkfs.ext4 tries to discard sector beyond the end of
                    the device
           Product: File System
           Version: 2.5
    Kernel Version: 3.10.0-327.36.3.el7.x86_64
          Hardware: x86-64
                OS: Linux
              Tree: Mainline
            Status: NEW
          Severity: normal
          Priority: P1
         Component: ext4
          Assignee: fs_ext4@kernel-bugs.osdl.org
          Reporter: tom@vincze.org
        Regression: No

An 800 GB Intel P3700 SSD has 195,352,576 4k sectors, but mkfs.ext4 tried to
discard sector 1,530,955,776, which is beyond the end of the device.
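
One thing worth checking here (a note added alongside the report, not part of it): the blk_update_request message below gives its sector number in the kernel block layer's 512-byte units, while the 195,352,576 count above is in the drive's 4 KiB logical sectors, so the two are only comparable after converting to a common unit, e.g. with shell arithmetic:

# echo $(( 195352576 * 8 ))
1562820608
# echo $(( 1530955776 < 195352576 * 8 ))
1

1562820608 is the quoted capacity expressed in 512-byte sectors; the comparison evaluating to 1 means the reported sector number is still below that value if it is indeed in 512-byte units.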

# mkfs.ext4 /dev/nvme0n1p1
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: failed - Input/output error
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
48840704 inodes, 195352576 blocks
9767628 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2344615936
5962 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
    102400000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done     

[61825.159172] blk_update_request: I/O error, dev nvme0n1, sector 1530955776
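
Not part of the report above, but two ways to narrow this down would be to issue the discard on its own with util-linux blkdiscard, or to skip the discard pass entirely so the rest of mkfs can be verified:

# blkdiscard /dev/nvme0n1p1
# mkfs.ext4 -E nodiscard /dev/nvme0n1p1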

# parted /dev/nvme0n1 "unit s" "print"
Model: Unknown (unknown)
Disk /dev/nvme0n1: 195353046s
Sector size (logical/physical): 4096B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start  End         Size        File system  Name     Flags
 1      256s   195352831s  195352576s               primary
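
Note that parted is printing these figures in the drive's 4096-byte logical sectors; rescaled to the kernel's 512-byte sectors, the partition boundaries are the range the dmesg sector number should be checked against (again a check added here, not taken from the report):

# echo $(( 256 * 8 )) $(( 195352831 * 8 + 7 ))
2048 1562822655
# echo $(( 1530955776 >= 256 * 8 && 1530955776 <= 195352831 * 8 + 7 ))
1

The second command evaluating to 1 means sector 1,530,955,776 falls within the partition's 512-byte sector range.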

-- 
You are receiving this mail because:
You are watching the assignee of the bug.


Thread overview: 12+ messages
2016-11-01 17:30 bugzilla-daemon [this message]
2016-11-01 17:40 ` [Bug 186551] mkfs.ext4 tries to discard sector beyond the end of the device bugzilla-daemon
2016-11-01 17:48 ` bugzilla-daemon
2016-11-01 17:50 ` bugzilla-daemon
2016-11-01 17:51 ` bugzilla-daemon
2016-11-01 19:04 ` bugzilla-daemon
2016-11-01 19:14 ` bugzilla-daemon
2016-11-01 19:41 ` bugzilla-daemon
2016-11-01 19:42 ` bugzilla-daemon
2016-11-01 19:47 ` bugzilla-daemon
2016-11-01 19:54 ` bugzilla-daemon
2016-11-02 14:30 ` bugzilla-daemon
