* [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests
@ 2024-05-26 9:56 Amjad Alsharafi
2024-05-26 9:56 ` [PATCH v3 1/6] vvfat: Fix bug in writing to middle of file Amjad Alsharafi
` (6 more replies)
0 siblings, 7 replies; 11+ messages in thread
From: Amjad Alsharafi @ 2024-05-26 9:56 UTC (permalink / raw)
To: qemu-devel; +Cc: Hanna Reitz, Kevin Wolf, open list:vvfat, Amjad Alsharafi
These patches fix some bugs found when modifying files in vvfat.
First, there was a bug when writing to cluster 2 or above of a file: the
cluster before it would be copied instead, so when writing to cluster=2,
the content of cluster=1 would end up on disk in its place.
Modifying the clusters of a file and adding new clusters exposed two
more issues:
- If the new cluster is not immediately after the last cluster, reading
from this file later returns wrong data.
- More generally, the usage of info.file.offset was incorrect, and the
system would crash on abort() when a file was modified and a new
cluster was added.
Also added some iotests for vvfat, covering these fixes as well as
general behavior such as reading, writing, and creating files on the
filesystem, including tests for reading/writing the first cluster, which
would pass even before this series.
v3:
Added test for creating new files in vvfat.
v2:
Added iotests for `vvfat` driver along with a simple `fat16` module to run the tests.
v1:
https://patchew.org/QEMU/20240327201231.31046-1-amjadsharafi10@gmail.com/
Fix the issue of writing to the middle of the file in vvfat
Amjad Alsharafi (6):
vvfat: Fix bug in writing to middle of file
vvfat: Fix usage of `info.file.offset`
vvfat: Fix reading files with non-continuous clusters
iotests: Add `vvfat` tests
iotests: Filter out `vvfat` fmt from failing tests
iotests: Add `create_file` test for `vvfat` driver
.gitlab-ci.d/buildtest.yml | 1 +
block/vvfat.c | 32 +-
tests/qemu-iotests/001 | 1 +
tests/qemu-iotests/002 | 1 +
tests/qemu-iotests/003 | 1 +
tests/qemu-iotests/005 | 1 +
tests/qemu-iotests/008 | 1 +
tests/qemu-iotests/009 | 1 +
tests/qemu-iotests/010 | 1 +
tests/qemu-iotests/011 | 1 +
tests/qemu-iotests/012 | 1 +
tests/qemu-iotests/021 | 1 +
tests/qemu-iotests/032 | 1 +
tests/qemu-iotests/033 | 1 +
tests/qemu-iotests/052 | 1 +
tests/qemu-iotests/094 | 1 +
tests/qemu-iotests/120 | 2 +-
tests/qemu-iotests/140 | 1 +
tests/qemu-iotests/145 | 1 +
tests/qemu-iotests/157 | 1 +
tests/qemu-iotests/159 | 2 +-
tests/qemu-iotests/170 | 2 +-
tests/qemu-iotests/192 | 1 +
tests/qemu-iotests/197 | 2 +-
tests/qemu-iotests/208 | 2 +-
tests/qemu-iotests/215 | 2 +-
tests/qemu-iotests/236 | 2 +-
tests/qemu-iotests/251 | 1 +
tests/qemu-iotests/307 | 2 +-
tests/qemu-iotests/308 | 2 +-
tests/qemu-iotests/check | 2 +-
tests/qemu-iotests/fat16.py | 619 ++++++++++++++++++
tests/qemu-iotests/meson.build | 3 +-
.../tests/export-incoming-iothread | 2 +-
tests/qemu-iotests/tests/fuse-allow-other | 1 +
.../tests/mirror-ready-cancel-error | 2 +-
tests/qemu-iotests/tests/regression-vhdx-log | 1 +
tests/qemu-iotests/tests/vvfat | 419 ++++++++++++
tests/qemu-iotests/tests/vvfat.out | 5 +
39 files changed, 1098 insertions(+), 26 deletions(-)
create mode 100644 tests/qemu-iotests/fat16.py
create mode 100755 tests/qemu-iotests/tests/vvfat
create mode 100755 tests/qemu-iotests/tests/vvfat.out
--
2.45.0
* [PATCH v3 1/6] vvfat: Fix bug in writing to middle of file
2024-05-26 9:56 [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests Amjad Alsharafi
@ 2024-05-26 9:56 ` Amjad Alsharafi
2024-05-31 16:15 ` Kevin Wolf
2024-05-26 9:56 ` [PATCH v3 2/6] vvfat: Fix usage of `info.file.offset` Amjad Alsharafi
` (5 subsequent siblings)
6 siblings, 1 reply; 11+ messages in thread
From: Amjad Alsharafi @ 2024-05-26 9:56 UTC (permalink / raw)
To: qemu-devel; +Cc: Hanna Reitz, Kevin Wolf, open list:vvfat, Amjad Alsharafi
Before this commit, when calling `commit_one_file` with, for example,
`offset=0x2000` (the second cluster), we would not fetch the next
cluster from the FAT and would instead use the first cluster for the
read operation.
This is due to an off-by-one error in the loop condition: `i < offset`
is false for `i = offset = 0x2000`, so the next cluster is never
fetched.
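To illustrate the loop semantics outside the C code, here is a minimal
Python sketch of the cluster walk (dict-based FAT and illustrative
cluster numbers, not the vvfat structures):

CLUSTER_SIZE = 0x2000
fat = {5: 9, 9: 12, 12: None}  # hypothetical chain: 5 -> 9 -> 12 -> EOF

def cluster_for_offset(first_cluster, offset):
    # Mirrors the fixed loop: advance while i <= offset, so offset=0x2000
    # lands on the second cluster instead of staying on the first.
    c = first_cluster
    i = CLUSTER_SIZE
    while i <= offset:
        c = fat[c]
        i += CLUSTER_SIZE
    return c

assert cluster_for_offset(5, 0x0000) == 5   # first cluster
assert cluster_for_offset(5, 0x2000) == 9   # second cluster (the old `<` kept 5)
assert cluster_for_offset(5, 0x4000) == 12  # third cluster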
Signed-off-by: Amjad Alsharafi <amjadsharafi10@gmail.com>
---
block/vvfat.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/vvfat.c b/block/vvfat.c
index 9d050ba3ae..ab342f0743 100644
--- a/block/vvfat.c
+++ b/block/vvfat.c
@@ -2525,7 +2525,7 @@ commit_one_file(BDRVVVFATState* s, int dir_index, uint32_t offset)
return -1;
}
- for (i = s->cluster_size; i < offset; i += s->cluster_size)
+ for (i = s->cluster_size; i <= offset; i += s->cluster_size)
c = modified_fat_get(s, c);
fd = qemu_open_old(mapping->path, O_RDWR | O_CREAT | O_BINARY, 0666);
--
2.45.0
* [PATCH v3 2/6] vvfat: Fix usage of `info.file.offset`
2024-05-26 9:56 [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests Amjad Alsharafi
2024-05-26 9:56 ` [PATCH v3 1/6] vvfat: Fix bug in writing to middle of file Amjad Alsharafi
@ 2024-05-26 9:56 ` Amjad Alsharafi
2024-05-26 9:56 ` [PATCH v3 3/6] vvfat: Fix reading files with non-continuous clusters Amjad Alsharafi
` (4 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Amjad Alsharafi @ 2024-05-26 9:56 UTC (permalink / raw)
To: qemu-devel; +Cc: Hanna Reitz, Kevin Wolf, open list:vvfat, Amjad Alsharafi
The field is documented as "the offset in the file (in clusters)", but it
was being used as if it were a byte offset, like
`cluster_size*(nums)+mapping->info.file.offset`, which is incorrect.
Additionally, the `abort` when `first_mapping_index` does not match is
removed. That case occurs when adding new clusters to a file, and it is
inevitable that we reach this condition when the new clusters are not
contiguous with the old ones, so there is no reason to `abort` here;
execution continues and the new clusters are written to disk correctly.
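For clarity, a minimal Python sketch of the wrong and the corrected
formula, with made-up numbers (not the vvfat structures):

cluster_size = 0x2000
begin = 503                # first disk cluster covered by this mapping
file_offset_clusters = 2   # mapping starts at the file's third cluster
cluster_num = 503          # disk cluster we want to read

# Treating the cluster-based field as a byte offset is wrong:
wrong = cluster_size * (cluster_num - begin) + file_offset_clusters    # 2
# The field is in clusters, so it belongs inside the multiplication:
right = cluster_size * ((cluster_num - begin) + file_offset_clusters)  # 0x4000
assert (wrong, right) == (2, 0x4000)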
Signed-off-by: Amjad Alsharafi <amjadsharafi10@gmail.com>
---
block/vvfat.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/block/vvfat.c b/block/vvfat.c
index ab342f0743..cb3ab81e29 100644
--- a/block/vvfat.c
+++ b/block/vvfat.c
@@ -1408,7 +1408,7 @@ read_cluster_directory:
assert(s->current_fd);
- offset=s->cluster_size*(cluster_num-s->current_mapping->begin)+s->current_mapping->info.file.offset;
+ offset=s->cluster_size*((cluster_num - s->current_mapping->begin) + s->current_mapping->info.file.offset);
if(lseek(s->current_fd, offset, SEEK_SET)!=offset)
return -3;
s->cluster=s->cluster_buffer;
@@ -1929,8 +1929,8 @@ get_cluster_count_for_direntry(BDRVVVFATState* s, direntry_t* direntry, const ch
(mapping->mode & MODE_DIRECTORY) == 0) {
/* was modified in qcow */
- if (offset != mapping->info.file.offset + s->cluster_size
- * (cluster_num - mapping->begin)) {
+ if (offset != s->cluster_size
+ * ((cluster_num - mapping->begin) + mapping->info.file.offset)) {
/* offset of this cluster in file chain has changed */
abort();
copy_it = 1;
@@ -1944,7 +1944,6 @@ get_cluster_count_for_direntry(BDRVVVFATState* s, direntry_t* direntry, const ch
if (mapping->first_mapping_index != first_mapping_index
&& mapping->info.file.offset > 0) {
- abort();
copy_it = 1;
}
@@ -2404,7 +2403,7 @@ static int commit_mappings(BDRVVVFATState* s,
(mapping->end - mapping->begin);
} else
next_mapping->info.file.offset = mapping->info.file.offset +
- mapping->end - mapping->begin;
+ (mapping->end - mapping->begin);
mapping = next_mapping;
}
--
2.45.0
* [PATCH v3 3/6] vvfat: Fix reading files with non-continuous clusters
2024-05-26 9:56 [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests Amjad Alsharafi
2024-05-26 9:56 ` [PATCH v3 1/6] vvfat: Fix bug in writing to middle of file Amjad Alsharafi
2024-05-26 9:56 ` [PATCH v3 2/6] vvfat: Fix usage of `info.file.offset` Amjad Alsharafi
@ 2024-05-26 9:56 ` Amjad Alsharafi
2024-05-26 9:56 ` [PATCH v3 4/6] iotests: Add `vvfat` tests Amjad Alsharafi
` (3 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Amjad Alsharafi @ 2024-05-26 9:56 UTC (permalink / raw)
To: qemu-devel; +Cc: Hanna Reitz, Kevin Wolf, open list:vvfat, Amjad Alsharafi
When reading with `read_cluster`, we get the `mapping` with
`find_mapping_for_cluster` and then call `open_file` for this
mapping.
The issue appears when it is the same file, but a second cluster that is
not immediately after the first. Imagine the cluster chain `500 -> 503`:
this gives us two mappings, one with the range `500..501` and another
with `503..504`; both point to the same file, but at different offsets.
Since the path is the same, we do not reopen the file, but we also do
not update `s->current_mapping`, and thus end up accessing far out of
bounds of the file.
From the example above, after `open_file` (which did not open anything),
we compute the offset into the file with
`s->cluster_size*(cluster_num-s->current_mapping->begin)`, which gives
us `0x2000 * (504-500)`, which is out of bounds for this mapping and
produces wrong reads.
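A minimal Python sketch of the intended lookup, reusing the cluster
numbers from the example above (illustrative names, not the vvfat
structures):

cluster_size = 0x2000
mappings = [
    {"begin": 500, "end": 501, "file_offset_clusters": 0},  # file cluster 0
    {"begin": 503, "end": 504, "file_offset_clusters": 1},  # file cluster 1
]

def file_offset_for_cluster(cluster_num):
    # Pick the mapping whose range contains the cluster and use that
    # mapping's own begin and in-file offset (in clusters).
    m = next(m for m in mappings if m["begin"] <= cluster_num < m["end"])
    return cluster_size * ((cluster_num - m["begin"]) + m["file_offset_clusters"])

assert file_offset_for_cluster(500) == 0x0000
assert file_offset_for_cluster(503) == 0x2000  # the stale mapping would give 0x6000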
Signed-off-by: Amjad Alsharafi <amjadsharafi10@gmail.com>
---
block/vvfat.c | 21 ++++++++++++++-------
1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/block/vvfat.c b/block/vvfat.c
index cb3ab81e29..87165abc26 100644
--- a/block/vvfat.c
+++ b/block/vvfat.c
@@ -1360,15 +1360,22 @@ static int open_file(BDRVVVFATState* s,mapping_t* mapping)
{
if(!mapping)
return -1;
+ int new_path = 1;
if(!s->current_mapping ||
- strcmp(s->current_mapping->path,mapping->path)) {
- /* open file */
- int fd = qemu_open_old(mapping->path,
+ s->current_mapping->first_mapping_index!=mapping->first_mapping_index ||
+ (new_path = strcmp(s->current_mapping->path,mapping->path))) {
+
+ if (new_path) {
+ /* open file */
+ int fd = qemu_open_old(mapping->path,
O_RDONLY | O_BINARY | O_LARGEFILE);
- if(fd<0)
- return -1;
- vvfat_close_current_file(s);
- s->current_fd = fd;
+ if(fd<0)
+ return -1;
+ vvfat_close_current_file(s);
+
+ s->current_fd = fd;
+ }
+ assert(s->current_fd);
s->current_mapping = mapping;
}
return 0;
--
2.45.0
* [PATCH v3 4/6] iotests: Add `vvfat` tests
2024-05-26 9:56 [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests Amjad Alsharafi
` (2 preceding siblings ...)
2024-05-26 9:56 ` [PATCH v3 3/6] vvfat: Fix reading files with non-continuous clusters Amjad Alsharafi
@ 2024-05-26 9:56 ` Amjad Alsharafi
2024-05-26 9:56 ` [PATCH v3 5/6] iotests: Filter out `vvfat` fmt from failing tests Amjad Alsharafi
` (2 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Amjad Alsharafi @ 2024-05-26 9:56 UTC (permalink / raw)
To: qemu-devel; +Cc: Hanna Reitz, Kevin Wolf, open list:vvfat, Amjad Alsharafi
Added several tests to verify the implementation of the vvfat driver.
We needed a way to interact with it, so a basic `fat16.py` driver was
created that handles reading and writing the correct sectors for us.
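The tests drive it roughly like this (a condensed sketch of what
`init_fat16` below does; here the sector callbacks read a hypothetical
raw image file, while the actual test routes them through qemu-io over
NBD):

from fat16 import MBR, Fat16

SECTOR_SIZE = 512
img = open("fat16.img", "rb+")  # hypothetical raw image of the vvfat disk

def read_sectors(sector, num=1):
    img.seek(sector * SECTOR_SIZE)
    return img.read(num * SECTOR_SIZE)

def write_sectors(sector, data):
    img.seek(sector * SECTOR_SIZE)
    img.write(data)

mbr = MBR(read_sectors(0))
fat16 = Fat16(
    mbr.partition_table[0]["start_lba"],
    mbr.partition_table[0]["size"],
    read_sectors,
    write_sectors,
)
print(fat16.read_file(fat16.find_direntry("/FILE0.TXT")))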
Signed-off-by: Amjad Alsharafi <amjadsharafi10@gmail.com>
---
tests/qemu-iotests/check | 2 +-
tests/qemu-iotests/fat16.py | 507 +++++++++++++++++++++++++++++
tests/qemu-iotests/tests/vvfat | 400 +++++++++++++++++++++++
tests/qemu-iotests/tests/vvfat.out | 5 +
4 files changed, 913 insertions(+), 1 deletion(-)
create mode 100644 tests/qemu-iotests/fat16.py
create mode 100755 tests/qemu-iotests/tests/vvfat
create mode 100755 tests/qemu-iotests/tests/vvfat.out
diff --git a/tests/qemu-iotests/check b/tests/qemu-iotests/check
index 56d88ca423..545f9ec7bd 100755
--- a/tests/qemu-iotests/check
+++ b/tests/qemu-iotests/check
@@ -84,7 +84,7 @@ def make_argparser() -> argparse.ArgumentParser:
p.set_defaults(imgfmt='raw', imgproto='file')
format_list = ['raw', 'bochs', 'cloop', 'parallels', 'qcow', 'qcow2',
- 'qed', 'vdi', 'vpc', 'vhdx', 'vmdk', 'luks', 'dmg']
+ 'qed', 'vdi', 'vpc', 'vhdx', 'vmdk', 'luks', 'dmg', 'vvfat']
g_fmt = p.add_argument_group(
' image format options',
'The following options set the IMGFMT environment variable. '
diff --git a/tests/qemu-iotests/fat16.py b/tests/qemu-iotests/fat16.py
new file mode 100644
index 0000000000..6ac5508d8d
--- /dev/null
+++ b/tests/qemu-iotests/fat16.py
@@ -0,0 +1,507 @@
+# A simple FAT16 driver that is used to test the `vvfat` driver in QEMU.
+#
+# Copyright (C) 2024 Amjad Alsharafi <amjadsharafi10@gmail.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+from typing import List
+
+SECTOR_SIZE = 512
+DIRENTRY_SIZE = 32
+
+
+class MBR:
+ def __init__(self, data: bytes):
+ assert len(data) == 512
+ self.partition_table = []
+ for i in range(4):
+ partition = data[446 + i * 16 : 446 + (i + 1) * 16]
+ self.partition_table.append(
+ {
+ "status": partition[0],
+ "start_head": partition[1],
+ "start_sector": partition[2] & 0x3F,
+ "start_cylinder": ((partition[2] & 0xC0) << 2) | partition[3],
+ "type": partition[4],
+ "end_head": partition[5],
+ "end_sector": partition[6] & 0x3F,
+ "end_cylinder": ((partition[6] & 0xC0) << 2) | partition[7],
+ "start_lba": int.from_bytes(partition[8:12], "little"),
+ "size": int.from_bytes(partition[12:16], "little"),
+ }
+ )
+
+ def __str__(self):
+ return "\n".join(
+ [f"{i}: {partition}" for i, partition in enumerate(self.partition_table)]
+ )
+
+
+class FatBootSector:
+ def __init__(self, data: bytes):
+ assert len(data) == 512
+ self.bytes_per_sector = int.from_bytes(data[11:13], "little")
+ self.sectors_per_cluster = data[13]
+ self.reserved_sectors = int.from_bytes(data[14:16], "little")
+ self.fat_count = data[16]
+ self.root_entries = int.from_bytes(data[17:19], "little")
+ self.media_descriptor = data[21]
+ self.fat_size = int.from_bytes(data[22:24], "little")
+ self.sectors_per_fat = int.from_bytes(data[22:24], "little")
+ self.sectors_per_track = int.from_bytes(data[24:26], "little")
+ self.heads = int.from_bytes(data[26:28], "little")
+ self.hidden_sectors = int.from_bytes(data[28:32], "little")
+ self.total_sectors = int.from_bytes(data[32:36], "little")
+ self.drive_number = data[36]
+ self.volume_id = int.from_bytes(data[39:43], "little")
+ self.volume_label = data[43:54].decode("ascii").strip()
+ self.fs_type = data[54:62].decode("ascii").strip()
+
+ def root_dir_start(self):
+ """
+ Calculate the start sector of the root directory.
+ """
+ return self.reserved_sectors + self.fat_count * self.sectors_per_fat
+
+ def root_dir_size(self):
+ """
+ Calculate the size of the root directory in sectors.
+ """
+ return (
+ self.root_entries * DIRENTRY_SIZE + self.bytes_per_sector - 1
+ ) // self.bytes_per_sector
+
+ def data_sector_start(self):
+ """
+ Calculate the start sector of the data region.
+ """
+ return self.root_dir_start() + self.root_dir_size()
+
+ def first_sector_of_cluster(self, cluster: int):
+ """
+ Calculate the first sector of the given cluster.
+ """
+ return self.data_sector_start() + (cluster - 2) * self.sectors_per_cluster
+
+ def cluster_bytes(self):
+ """
+ Calculate the number of bytes in a cluster.
+ """
+ return self.bytes_per_sector * self.sectors_per_cluster
+
+ def __str__(self):
+ return (
+ f"Bytes per sector: {self.bytes_per_sector}\n"
+ f"Sectors per cluster: {self.sectors_per_cluster}\n"
+ f"Reserved sectors: {self.reserved_sectors}\n"
+ f"FAT count: {self.fat_count}\n"
+ f"Root entries: {self.root_entries}\n"
+ f"Total sectors: {self.total_sectors}\n"
+ f"Media descriptor: {self.media_descriptor}\n"
+ f"Sectors per FAT: {self.sectors_per_fat}\n"
+ f"Sectors per track: {self.sectors_per_track}\n"
+ f"Heads: {self.heads}\n"
+ f"Hidden sectors: {self.hidden_sectors}\n"
+ f"Drive number: {self.drive_number}\n"
+ f"Volume ID: {self.volume_id}\n"
+ f"Volume label: {self.volume_label}\n"
+ f"FS type: {self.fs_type}\n"
+ )
+
+
+class FatDirectoryEntry:
+ def __init__(self, data: bytes, sector: int, offset: int):
+ self.name = data[0:8].decode("ascii").strip()
+ self.ext = data[8:11].decode("ascii").strip()
+ self.attributes = data[11]
+ self.reserved = data[12]
+ self.create_time_tenth = data[13]
+ self.create_time = int.from_bytes(data[14:16], "little")
+ self.create_date = int.from_bytes(data[16:18], "little")
+ self.last_access_date = int.from_bytes(data[18:20], "little")
+ high_cluster = int.from_bytes(data[20:22], "little")
+ self.last_mod_time = int.from_bytes(data[22:24], "little")
+ self.last_mod_date = int.from_bytes(data[24:26], "little")
+ low_cluster = int.from_bytes(data[26:28], "little")
+ self.cluster = (high_cluster << 16) | low_cluster
+ self.size_bytes = int.from_bytes(data[28:32], "little")
+
+ # extra (to help write back to disk)
+ self.sector = sector
+ self.offset = offset
+
+ def as_bytes(self) -> bytes:
+ return (
+ self.name.ljust(8, " ").encode("ascii")
+ + self.ext.ljust(3, " ").encode("ascii")
+ + self.attributes.to_bytes(1, "little")
+ + self.reserved.to_bytes(1, "little")
+ + self.create_time_tenth.to_bytes(1, "little")
+ + self.create_time.to_bytes(2, "little")
+ + self.create_date.to_bytes(2, "little")
+ + self.last_access_date.to_bytes(2, "little")
+ + (self.cluster >> 16).to_bytes(2, "little")
+ + self.last_mod_time.to_bytes(2, "little")
+ + self.last_mod_date.to_bytes(2, "little")
+ + (self.cluster & 0xFFFF).to_bytes(2, "little")
+ + self.size_bytes.to_bytes(4, "little")
+ )
+
+ def whole_name(self):
+ if self.ext:
+ return f"{self.name}.{self.ext}"
+ else:
+ return self.name
+
+ def __str__(self):
+ return (
+ f"Name: {self.name}\n"
+ f"Ext: {self.ext}\n"
+ f"Attributes: {self.attributes}\n"
+ f"Reserved: {self.reserved}\n"
+ f"Create time tenth: {self.create_time_tenth}\n"
+ f"Create time: {self.create_time}\n"
+ f"Create date: {self.create_date}\n"
+ f"Last access date: {self.last_access_date}\n"
+ f"Last mod time: {self.last_mod_time}\n"
+ f"Last mod date: {self.last_mod_date}\n"
+ f"Cluster: {self.cluster}\n"
+ f"Size: {self.size_bytes}\n"
+ )
+
+ def __repr__(self):
+ # convert to dict
+ return str(vars(self))
+
+
+class Fat16:
+ def __init__(
+ self,
+ start_sector: int,
+ size: int,
+ sector_reader: callable,
+ sector_writer: callable,
+ ):
+ self.start_sector = start_sector
+ self.size_in_sectors = size
+ self.sector_reader = sector_reader
+ self.sector_writer = sector_writer
+
+ self.boot_sector = FatBootSector(self.sector_reader(start_sector))
+
+ fat_size_in_sectors = self.boot_sector.fat_size * self.boot_sector.fat_count
+ self.fats = self.read_sectors(
+ self.boot_sector.reserved_sectors, fat_size_in_sectors
+ )
+ self.fats_dirty_sectors = set()
+
+ def read_sectors(self, start_sector: int, num_sectors: int) -> bytes:
+ return self.sector_reader(start_sector + self.start_sector, num_sectors)
+
+ def write_sectors(self, start_sector: int, data: bytes):
+ return self.sector_writer(start_sector + self.start_sector, data)
+
+ def directory_from_bytes(
+ self, data: bytes, start_sector: int
+ ) -> List[FatDirectoryEntry]:
+ """
+ Convert `bytes` into a list of `FatDirectoryEntry` objects.
+ Will ignore long file names.
+ Will stop when it encounters a 0x00 byte.
+ """
+
+ entries = []
+ for i in range(0, len(data), DIRENTRY_SIZE):
+ entry = data[i : i + DIRENTRY_SIZE]
+
+ current_sector = start_sector + (i // SECTOR_SIZE)
+ current_offset = i % SECTOR_SIZE
+
+ if entry[0] == 0:
+ break
+ elif entry[0] == 0xE5:
+ # Deleted file
+ continue
+
+ if entry[11] & 0xF == 0xF:
+ # Long file name
+ continue
+
+ entries.append(FatDirectoryEntry(entry, current_sector, current_offset))
+ return entries
+
+ def read_root_directory(self) -> List[FatDirectoryEntry]:
+ root_dir = self.read_sectors(
+ self.boot_sector.root_dir_start(), self.boot_sector.root_dir_size()
+ )
+ return self.directory_from_bytes(root_dir, self.boot_sector.root_dir_start())
+
+ def read_fat_entry(self, cluster: int) -> int:
+ """
+ Read the FAT entry for the given cluster.
+ """
+ fat_offset = cluster * 2 # FAT16
+ return int.from_bytes(self.fats[fat_offset : fat_offset + 2], "little")
+
+ def write_fat_entry(self, cluster: int, value: int):
+ """
+ Write the FAT entry for the given cluster.
+ """
+ fat_offset = cluster * 2
+ self.fats = (
+ self.fats[:fat_offset]
+ + value.to_bytes(2, "little")
+ + self.fats[fat_offset + 2 :]
+ )
+ self.fats_dirty_sectors.add(fat_offset // SECTOR_SIZE)
+
+ def flush_fats(self):
+ """
+ Write the FATs back to the disk.
+ """
+ for sector in self.fats_dirty_sectors:
+ data = self.fats[sector * SECTOR_SIZE : (sector + 1) * SECTOR_SIZE]
+ sector = self.boot_sector.reserved_sectors + sector
+ self.write_sectors(sector, data)
+ self.fats_dirty_sectors = set()
+
+ def next_cluster(self, cluster: int) -> int | None:
+ """
+ Get the next cluster in the chain.
+ If its `None`, then its the last cluster.
+ The function will crash if the next cluster is `FREE` (unexpected) or invalid entry.
+ """
+ fat_entry = self.read_fat_entry(cluster)
+ if fat_entry == 0:
+ raise Exception("Unexpected: FREE cluster")
+ elif fat_entry == 1:
+ raise Exception("Unexpected: RESERVED cluster")
+ elif fat_entry >= 0xFFF8:
+ return None
+ elif fat_entry >= 0xFFF7:
+ raise Exception("Invalid FAT entry")
+ else:
+ return fat_entry
+
+ def next_free_cluster(self) -> int:
+ """
+ Find the next free cluster.
+ """
+ # simple linear search
+ for i in range(2, 0xFFFF):
+ if self.read_fat_entry(i) == 0:
+ return i
+ raise Exception("No free clusters")
+
+ def read_cluster(self, cluster: int) -> bytes:
+ """
+ Read the cluster at the given cluster.
+ """
+ return self.read_sectors(
+ self.boot_sector.first_sector_of_cluster(cluster),
+ self.boot_sector.sectors_per_cluster,
+ )
+
+ def write_cluster(self, cluster: int, data: bytes):
+ """
+ Write the cluster at the given cluster.
+ """
+ assert len(data) == self.boot_sector.cluster_bytes()
+ return self.write_sectors(
+ self.boot_sector.first_sector_of_cluster(cluster),
+ data,
+ )
+
+ def read_directory(self, cluster: int) -> List[FatDirectoryEntry]:
+ """
+ Read the directory at the given cluster.
+ """
+ entries = []
+ while cluster is not None:
+ data = self.read_cluster(cluster)
+ entries.extend(
+ self.directory_from_bytes(
+ data, self.boot_sector.first_sector_of_cluster(cluster)
+ )
+ )
+ cluster = self.next_cluster(cluster)
+ return entries
+
+ def update_direntry(self, entry: FatDirectoryEntry):
+ """
+ Write the directory entry back to the disk.
+ """
+ sector = self.read_sectors(entry.sector, 1)
+ sector = (
+ sector[: entry.offset]
+ + entry.as_bytes()
+ + sector[entry.offset + DIRENTRY_SIZE :]
+ )
+ self.write_sectors(entry.sector, sector)
+
+ def find_direntry(self, path: str) -> FatDirectoryEntry | None:
+ """
+ Find the directory entry for the given path.
+ """
+ assert path[0] == "/", "Path must start with /"
+
+ path = path[1:] # remove the leading /
+ parts = path.split("/")
+ directory = self.read_root_directory()
+
+ current_entry = None
+
+ for i, part in enumerate(parts):
+ is_last = i == len(parts) - 1
+
+ for entry in directory:
+ if entry.whole_name() == part:
+ current_entry = entry
+ break
+ if current_entry is None:
+ return None
+
+ if is_last:
+ return current_entry
+ else:
+ if current_entry.attributes & 0x10 == 0:
+ raise Exception(f"{current_entry.whole_name()} is not a directory")
+ else:
+ directory = self.read_directory(current_entry.cluster)
+
+ def read_file(self, entry: FatDirectoryEntry) -> bytes:
+ """
+ Read the content of the file at the given path.
+ """
+ if entry is None:
+ return None
+ if entry.attributes & 0x10 != 0:
+ raise Exception(f"{entry.whole_name()} is a directory")
+
+ data = b""
+ cluster = entry.cluster
+ while cluster is not None and len(data) <= entry.size_bytes:
+ data += self.read_cluster(cluster)
+ cluster = self.next_cluster(cluster)
+ return data[: entry.size_bytes]
+
+ def truncate_file(self, entry: FatDirectoryEntry, new_size: int):
+ """
+ Truncate the file at the given path to the new size.
+ """
+ if entry is None:
+ return Exception("entry is None")
+ if entry.attributes & 0x10 != 0:
+ raise Exception(f"{entry.whole_name()} is a directory")
+
+ def clusters_from_size(size: int):
+ return (size + self.boot_sector.cluster_bytes() - 1) // self.boot_sector.cluster_bytes()
+
+
+ # First, allocate new FATs if we need to
+ required_clusters = clusters_from_size(new_size)
+ current_clusters = clusters_from_size(entry.size_bytes)
+
+ affected_clusters = set()
+
+ # Keep at least one cluster, easier to manage this way
+ if required_clusters == 0:
+ required_clusters = 1
+ if current_clusters == 0:
+ current_clusters = 1
+
+ if required_clusters > current_clusters:
+ # Allocate new clusters
+ cluster = entry.cluster
+ to_add = required_clusters
+ for _ in range(current_clusters - 1):
+ to_add -= 1
+ cluster = self.next_cluster(cluster)
+ assert required_clusters > 0, "No new clusters to allocate"
+ assert cluster is not None, "Cluster is None"
+ assert self.next_cluster(cluster) is None, "Cluster is not the last cluster"
+
+ # Allocate new clusters
+ for _ in range(to_add - 1):
+ new_cluster = self.next_free_cluster()
+ self.write_fat_entry(cluster, new_cluster)
+ self.write_fat_entry(new_cluster, 0xFFFF)
+ cluster = new_cluster
+
+ elif required_clusters < current_clusters:
+ # Truncate the file
+ cluster = entry.cluster
+ for _ in range(required_clusters - 1):
+ cluster = self.next_cluster(cluster)
+ assert cluster is not None, "Cluster is None"
+
+ next_cluster = self.next_cluster(cluster)
+ # mark last as EOF
+ self.write_fat_entry(cluster, 0xFFFF)
+ # free the rest
+ while next_cluster is not None:
+ cluster = next_cluster
+ next_cluster = self.next_cluster(next_cluster)
+ self.write_fat_entry(cluster, 0)
+
+ self.flush_fats()
+
+ # verify number of clusters
+ cluster = entry.cluster
+ count = 0
+ while cluster is not None:
+ count += 1
+ affected_clusters.add(cluster)
+ cluster = self.next_cluster(cluster)
+ assert count == required_clusters, f"Expected {required_clusters} clusters, got {count}"
+
+ # update the size
+ entry.size_bytes = new_size
+ self.update_direntry(entry)
+
+ # trigger every affected cluster
+ for cluster in affected_clusters:
+ first_sector = self.boot_sector.first_sector_of_cluster(cluster)
+ first_sector_data = self.read_sectors(first_sector, 1)
+ self.write_sectors(first_sector, first_sector_data)
+
+ def write_file(self, entry: FatDirectoryEntry, data: bytes):
+ """
+ Write the content of the file at the given path.
+ """
+ if entry is None:
+ return Exception("entry is None")
+ if entry.attributes & 0x10 != 0:
+ raise Exception(f"{entry.whole_name()} is a directory")
+
+ data_len = len(data)
+
+ self.truncate_file(entry, data_len)
+
+ cluster = entry.cluster
+ while cluster is not None:
+ data_to_write = data[: self.boot_sector.cluster_bytes()]
+ last_data = False
+ if len(data_to_write) < self.boot_sector.cluster_bytes():
+ last_data = True
+ old_data = self.read_cluster(cluster)
+ data_to_write += old_data[len(data_to_write) :]
+
+ self.write_cluster(cluster, data_to_write)
+ data = data[self.boot_sector.cluster_bytes() :]
+ if len(data) == 0:
+ break
+ cluster = self.next_cluster(cluster)
+
+ assert len(data) == 0, "Data was not written completely, clusters missing"
diff --git a/tests/qemu-iotests/tests/vvfat b/tests/qemu-iotests/tests/vvfat
new file mode 100755
index 0000000000..e0e23d1ab8
--- /dev/null
+++ b/tests/qemu-iotests/tests/vvfat
@@ -0,0 +1,400 @@
+#!/usr/bin/env python3
+# group: rw vvfat
+#
+# Test vvfat driver implementation
+# Here, we use a simple FAT16 implementation and check the behavior of the vvfat driver.
+#
+# Copyright (C) 2024 Amjad Alsharafi <amjadsharafi10@gmail.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import os, shutil
+import iotests
+from iotests import imgfmt, QMPTestCase
+from fat16 import MBR, Fat16, DIRENTRY_SIZE
+
+filesystem = os.path.join(iotests.test_dir, "filesystem")
+
+nbd_sock = iotests.file_path("nbd.sock", base_dir=iotests.sock_dir)
+nbd_uri = "nbd+unix:///disk?socket=" + nbd_sock
+
+SECTOR_SIZE = 512
+
+
+class TestVVFatDriver(QMPTestCase):
+ def setUp(self) -> None:
+ if os.path.exists(filesystem):
+ if os.path.isdir(filesystem):
+ shutil.rmtree(filesystem)
+ else:
+ print(f"Error: {filesystem} exists and is not a directory")
+ exit(1)
+ os.mkdir(filesystem)
+
+ # Add some text files to the filesystem
+ for i in range(10):
+ with open(os.path.join(filesystem, f"file{i}.txt"), "w") as f:
+ f.write(f"Hello, world! {i}\n")
+
+ # Add 2 large files, above the cluster size (8KB)
+ with open(os.path.join(filesystem, "large1.txt"), "wb") as f:
+ # write 'A' * 1KB, 'B' * 1KB, 'C' * 1KB, ...
+ for i in range(8 * 2): # two clusters
+ f.write(bytes([0x41 + i] * 1024))
+
+ with open(os.path.join(filesystem, "large2.txt"), "wb") as f:
+ # write 'A' * 1KB, 'B' * 1KB, 'C' * 1KB, ...
+ for i in range(8 * 3): # 3 clusters
+ f.write(bytes([0x41 + i] * 1024))
+
+ self.vm = iotests.VM()
+
+ self.vm.add_blockdev(
+ self.vm.qmp_to_opts(
+ {
+ "driver": imgfmt,
+ "node-name": "disk",
+ "rw": "true",
+ "fat-type": "16",
+ "dir": filesystem,
+ }
+ )
+ )
+
+ self.vm.launch()
+
+ self.vm.qmp_log("block-dirty-bitmap-add", **{"node": "disk", "name": "bitmap0"})
+
+ # attach nbd server
+ self.vm.qmp_log(
+ "nbd-server-start",
+ **{"addr": {"type": "unix", "data": {"path": nbd_sock}}},
+ filters=[],
+ )
+
+ self.vm.qmp_log(
+ "nbd-server-add",
+ **{"device": "disk", "writable": True, "bitmap": "bitmap0"},
+ )
+
+ self.qio = iotests.QemuIoInteractive("-f", "raw", nbd_uri)
+
+ def tearDown(self) -> None:
+ self.qio.close()
+ self.vm.shutdown()
+ # print(self.vm.get_log())
+ shutil.rmtree(filesystem)
+
+ def read_sectors(self, sector: int, num: int = 1) -> bytes:
+ """
+ Read `num` sectors starting from `sector` from the `disk`.
+ This uses `QemuIoInteractive` to read the sectors into `stdout` and then parse the output.
+ """
+ self.assertGreater(num, 0)
+ # The output contains the content of the sector in hex dump format
+ # We need to extract the content from it
+ output = self.qio.cmd(f"read -v {sector * SECTOR_SIZE} {num * SECTOR_SIZE}")
+ # Each row is 16 bytes long, and we are writing `num` sectors
+ rows = num * SECTOR_SIZE // 16
+ output_rows = output.split("\n")[:rows]
+
+ hex_content = "".join(
+ [(row.split(": ")[1]).split(" ")[0] for row in output_rows]
+ )
+ bytes_content = bytes.fromhex(hex_content)
+
+ self.assertEqual(len(bytes_content), num * SECTOR_SIZE)
+
+ return bytes_content
+
+ def write_sectors(self, sector: int, data: bytes):
+ """
+ Write `data` to the `disk` starting from `sector`.
+ This uses `QemuIoInteractive` to write the data into the disk.
+ """
+
+ self.assertGreater(len(data), 0)
+ self.assertEqual(len(data) % SECTOR_SIZE, 0)
+
+ temp_file = os.path.join(iotests.test_dir, "temp.bin")
+ with open(temp_file, "wb") as f:
+ f.write(data)
+
+ self.qio.cmd(f"write -s {temp_file} {sector * SECTOR_SIZE} {len(data)}")
+
+ os.remove(temp_file)
+
+ def init_fat16(self):
+ mbr = MBR(self.read_sectors(0))
+ return Fat16(
+ mbr.partition_table[0]["start_lba"],
+ mbr.partition_table[0]["size"],
+ self.read_sectors,
+ self.write_sectors,
+ )
+
+ # Tests
+
+ def test_fat_filesystem(self):
+ """
+ Test that vvfat produce a valid FAT16 and MBR sectors
+ """
+ mbr = MBR(self.read_sectors(0))
+
+ self.assertEqual(mbr.partition_table[0]["status"], 0x80)
+ self.assertEqual(mbr.partition_table[0]["type"], 6)
+
+ fat16 = Fat16(
+ mbr.partition_table[0]["start_lba"],
+ mbr.partition_table[0]["size"],
+ self.read_sectors,
+ self.write_sectors,
+ )
+ self.assertEqual(fat16.boot_sector.bytes_per_sector, 512)
+ self.assertEqual(fat16.boot_sector.volume_label, "QEMU VVFAT")
+
+ def test_read_root_directory(self):
+ """
+ Test the content of the root directory
+ """
+ fat16 = self.init_fat16()
+
+ root_dir = fat16.read_root_directory()
+
+ self.assertEqual(len(root_dir), 13) # 12 + 1 special file
+
+ files = {
+ "QEMU VVF.AT": 0, # special empty file
+ "FILE0.TXT": 16,
+ "FILE1.TXT": 16,
+ "FILE2.TXT": 16,
+ "FILE3.TXT": 16,
+ "FILE4.TXT": 16,
+ "FILE5.TXT": 16,
+ "FILE6.TXT": 16,
+ "FILE7.TXT": 16,
+ "FILE8.TXT": 16,
+ "FILE9.TXT": 16,
+ "LARGE1.TXT": 0x2000 * 2,
+ "LARGE2.TXT": 0x2000 * 3,
+ }
+
+ for entry in root_dir:
+ self.assertIn(entry.whole_name(), files)
+ self.assertEqual(entry.size_bytes, files[entry.whole_name()])
+
+ def test_direntry_as_bytes(self):
+ """
+ Test if we can convert Direntry back to bytes, so that we can write it back to the disk safely.
+ """
+ fat16 = self.init_fat16()
+
+ root_dir = fat16.read_root_directory()
+ first_entry_bytes = fat16.read_sectors(fat16.boot_sector.root_dir_start(), 1)
+ # The first entry won't be deleted, so we can compare it with the first entry in the root directory
+ self.assertEqual(root_dir[0].as_bytes(), first_entry_bytes[:DIRENTRY_SIZE])
+
+ def test_read_files(self):
+ """
+ Test reading the content of the files
+ """
+ fat16 = self.init_fat16()
+
+ for i in range(10):
+ file = fat16.find_direntry(f"/FILE{i}.TXT")
+ self.assertIsNotNone(file)
+ self.assertEqual(
+ fat16.read_file(file), f"Hello, world! {i}\n".encode("ascii")
+ )
+
+ # test large files
+ large1 = fat16.find_direntry("/LARGE1.TXT")
+ with open(os.path.join(filesystem, "large1.txt"), "rb") as f:
+ self.assertEqual(fat16.read_file(large1), f.read())
+
+ large2 = fat16.find_direntry("/LARGE2.TXT")
+ self.assertIsNotNone(large2)
+ with open(os.path.join(filesystem, "large2.txt"), "rb") as f:
+ self.assertEqual(fat16.read_file(large2), f.read())
+
+ def test_write_file_same_content_direct(self):
+ """
+ Similar to `test_write_file_in_same_content`, but we write the file directly clusters
+ and thus we don't go through the modification of direntry.
+ """
+ fat16 = self.init_fat16()
+
+ file = fat16.find_direntry("/FILE0.TXT")
+ self.assertIsNotNone(file)
+
+ data = fat16.read_cluster(file.cluster)
+ fat16.write_cluster(file.cluster, data)
+
+ with open(os.path.join(filesystem, "file0.txt"), "rb") as f:
+ self.assertEqual(fat16.read_file(file), f.read())
+
+ def test_write_file_in_same_content(self):
+ """
+ Test writing the same content to the file back to it
+ """
+ fat16 = self.init_fat16()
+
+ file = fat16.find_direntry("/FILE0.TXT")
+ self.assertIsNotNone(file)
+
+ self.assertEqual(fat16.read_file(file), b"Hello, world! 0\n")
+
+ fat16.write_file(file, b"Hello, world! 0\n")
+
+ self.assertEqual(fat16.read_file(file), b"Hello, world! 0\n")
+
+ with open(os.path.join(filesystem, "file0.txt"), "rb") as f:
+ self.assertEqual(f.read(), b"Hello, world! 0\n")
+
+ def test_modify_content_same_clusters(self):
+ """
+ Test modifying the content of the file without changing the number of clusters
+ """
+ fat16 = self.init_fat16()
+
+ file = fat16.find_direntry("/FILE0.TXT")
+ self.assertIsNotNone(file)
+
+ new_content = b"Hello, world! Modified\n"
+ self.assertEqual(fat16.read_file(file), b"Hello, world! 0\n")
+
+ fat16.write_file(file, new_content)
+
+ self.assertEqual(fat16.read_file(file), new_content)
+ with open(os.path.join(filesystem, "file0.txt"), "rb") as f:
+ self.assertEqual(f.read(), new_content)
+
+ def test_truncate_file_same_clusters_less(self):
+ """
+ Test truncating the file without changing number of clusters
+ Test decreasing the file size
+ """
+ fat16 = self.init_fat16()
+
+ file = fat16.find_direntry("/FILE0.TXT")
+ self.assertIsNotNone(file)
+
+ self.assertEqual(fat16.read_file(file), b"Hello, world! 0\n")
+
+ fat16.truncate_file(file, 5)
+
+ new_content = fat16.read_file(file)
+
+ self.assertEqual(new_content, b"Hello")
+
+ with open(os.path.join(filesystem, "file0.txt"), "rb") as f:
+ self.assertEqual(f.read(), new_content)
+
+ def test_truncate_file_same_clusters_more(self):
+ """
+ Test truncating the file without changing number of clusters
+ Test increase the file size
+ """
+ fat16 = self.init_fat16()
+
+ file = fat16.find_direntry("/FILE0.TXT")
+ self.assertIsNotNone(file)
+
+ self.assertEqual(fat16.read_file(file), b"Hello, world! 0\n")
+
+ fat16.truncate_file(file, 20)
+
+ new_content = fat16.read_file(file)
+
+ # random pattern will be appended to the file, and its not always the same
+ self.assertEqual(new_content[:16], b"Hello, world! 0\n")
+ self.assertEqual(len(new_content), 20)
+
+ with open(os.path.join(filesystem, "file0.txt"), "rb") as f:
+ self.assertEqual(f.read(), new_content)
+
+ def test_write_large_file(self):
+ """
+ Test writing a large file
+ """
+ fat16 = self.init_fat16()
+
+ file = fat16.find_direntry("/LARGE1.TXT")
+ self.assertIsNotNone(file)
+
+ # The content of LARGE1 is A * 1KB, B * 1KB, C * 1KB, ..., P * 1KB
+ # Lets change it to be Z * 1KB, Y * 1KB, X * 1KB, ..., K * 1KB
+ # without changing the number of clusters or filesize
+ new_content = b"".join([bytes([0x5A - i] * 1024) for i in range(16)])
+
+ fat16.write_file(file, new_content)
+
+ with open(os.path.join(filesystem, "large1.txt"), "rb") as f:
+ self.assertEqual(f.read(), new_content)
+
+ def test_truncate_file_change_clusters_less(self):
+ """
+ Test truncating a file by reducing the number of clusters
+ """
+ fat16 = self.init_fat16()
+
+ file = fat16.find_direntry("/LARGE1.TXT")
+ self.assertIsNotNone(file)
+
+ fat16.truncate_file(file, 1)
+
+ self.assertEqual(fat16.read_file(file), b"A")
+
+ with open(os.path.join(filesystem, "large1.txt"), "rb") as f:
+ self.assertEqual(f.read(), b"A")
+
+
+ def test_write_file_change_clusters_less(self):
+ """
+ Test writing to a file while reducing the number of clusters
+ """
+ fat16 = self.init_fat16()
+
+ file = fat16.find_direntry("/LARGE2.TXT")
+ self.assertIsNotNone(file)
+
+ new_content = b"Hello, world! This was a large file\n"
+ new_content = b"Z" * 8 * 1024 * 2
+
+ fat16.write_file(file, new_content)
+
+ with open(os.path.join(filesystem, "large2.txt"), "rb") as f:
+ self.assertEqual(f.read(), new_content)
+
+ def test_write_file_change_clusters_more(self):
+ """
+ Test writing to a file while increasing the number of clusters
+ """
+ fat16 = self.init_fat16()
+
+ file = fat16.find_direntry("/LARGE2.TXT")
+ self.assertIsNotNone(file)
+
+ new_content = b"Z" * 8 * 1024 * 4
+
+ fat16.write_file(file, new_content)
+
+ with open(os.path.join(filesystem, "large2.txt"), "rb") as f:
+ self.assertEqual(f.read(), new_content)
+
+
+
+if __name__ == "__main__":
+ # This is a specific test for vvfat driver
+ iotests.main(supported_fmts=["vvfat"], supported_protocols=["file"])
diff --git a/tests/qemu-iotests/tests/vvfat.out b/tests/qemu-iotests/tests/vvfat.out
new file mode 100755
index 0000000000..fa16b5ccef
--- /dev/null
+++ b/tests/qemu-iotests/tests/vvfat.out
@@ -0,0 +1,5 @@
+.............
+----------------------------------------------------------------------
+Ran 13 tests
+
+OK
--
2.45.0
* [PATCH v3 5/6] iotests: Filter out `vvfat` fmt from failing tests
2024-05-26 9:56 [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests Amjad Alsharafi
` (3 preceding siblings ...)
2024-05-26 9:56 ` [PATCH v3 4/6] iotests: Add `vvfat` tests Amjad Alsharafi
@ 2024-05-26 9:56 ` Amjad Alsharafi
2024-05-31 17:29 ` Kevin Wolf
2024-05-26 9:56 ` [PATCH v3 6/6] iotests: Add `create_file` test for `vvfat` driver Amjad Alsharafi
2024-05-31 17:22 ` [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests Kevin Wolf
6 siblings, 1 reply; 11+ messages in thread
From: Amjad Alsharafi @ 2024-05-26 9:56 UTC (permalink / raw)
To: qemu-devel; +Cc: Hanna Reitz, Kevin Wolf, open list:vvfat, Amjad Alsharafi
`vvfat` is a special format, and not all tests (even generic ones) can run
against it without failing.
So, `vvfat` was added to the unsupported formats of all failing tests.
Also added the `vvfat` format to `meson.build`, so vvfat tests can be run
as part of the `block-thorough` suite.
Signed-off-by: Amjad Alsharafi <amjadsharafi10@gmail.com>
---
.gitlab-ci.d/buildtest.yml | 1 +
tests/qemu-iotests/001 | 1 +
tests/qemu-iotests/002 | 1 +
tests/qemu-iotests/003 | 1 +
tests/qemu-iotests/005 | 1 +
tests/qemu-iotests/008 | 1 +
tests/qemu-iotests/009 | 1 +
tests/qemu-iotests/010 | 1 +
tests/qemu-iotests/011 | 1 +
tests/qemu-iotests/012 | 1 +
tests/qemu-iotests/021 | 1 +
tests/qemu-iotests/032 | 1 +
tests/qemu-iotests/033 | 1 +
tests/qemu-iotests/052 | 1 +
tests/qemu-iotests/094 | 1 +
tests/qemu-iotests/120 | 2 +-
tests/qemu-iotests/140 | 1 +
tests/qemu-iotests/145 | 1 +
tests/qemu-iotests/157 | 1 +
tests/qemu-iotests/159 | 2 +-
tests/qemu-iotests/170 | 2 +-
tests/qemu-iotests/192 | 1 +
tests/qemu-iotests/197 | 2 +-
tests/qemu-iotests/208 | 2 +-
tests/qemu-iotests/215 | 2 +-
tests/qemu-iotests/236 | 2 +-
tests/qemu-iotests/251 | 1 +
tests/qemu-iotests/307 | 2 +-
tests/qemu-iotests/308 | 2 +-
tests/qemu-iotests/meson.build | 3 ++-
tests/qemu-iotests/tests/export-incoming-iothread | 2 +-
tests/qemu-iotests/tests/fuse-allow-other | 1 +
tests/qemu-iotests/tests/mirror-ready-cancel-error | 2 +-
tests/qemu-iotests/tests/regression-vhdx-log | 1 +
34 files changed, 35 insertions(+), 12 deletions(-)
diff --git a/.gitlab-ci.d/buildtest.yml b/.gitlab-ci.d/buildtest.yml
index cfdff175c3..a46c179a6b 100644
--- a/.gitlab-ci.d/buildtest.yml
+++ b/.gitlab-ci.d/buildtest.yml
@@ -347,6 +347,7 @@ build-tcg-disabled:
124 132 139 142 144 145 151 152 155 157 165 194 196 200 202
208 209 216 218 227 234 246 247 248 250 254 255 257 258
260 261 262 263 264 270 272 273 277 279 image-fleecing
+ - ./check -vvfat vvfat
build-user:
extends: .native_build_job_template
diff --git a/tests/qemu-iotests/001 b/tests/qemu-iotests/001
index 6f980fd34d..cf905b5d00 100755
--- a/tests/qemu-iotests/001
+++ b/tests/qemu-iotests/001
@@ -38,6 +38,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto generic
diff --git a/tests/qemu-iotests/002 b/tests/qemu-iotests/002
index 5ce1647531..1e557fad8c 100755
--- a/tests/qemu-iotests/002
+++ b/tests/qemu-iotests/002
@@ -38,6 +38,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto generic
_unsupported_imgopts "subformat=streamOptimized"
diff --git a/tests/qemu-iotests/003 b/tests/qemu-iotests/003
index 03f902a83c..6e74f1faeb 100755
--- a/tests/qemu-iotests/003
+++ b/tests/qemu-iotests/003
@@ -38,6 +38,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto generic
_unsupported_imgopts "subformat=streamOptimized"
diff --git a/tests/qemu-iotests/005 b/tests/qemu-iotests/005
index ba377543b0..28ae66bfcd 100755
--- a/tests/qemu-iotests/005
+++ b/tests/qemu-iotests/005
@@ -41,6 +41,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto generic
_supported_os Linux
_unsupported_imgopts "subformat=twoGbMaxExtentFlat" \
diff --git a/tests/qemu-iotests/008 b/tests/qemu-iotests/008
index fa4990b513..80850ecf12 100755
--- a/tests/qemu-iotests/008
+++ b/tests/qemu-iotests/008
@@ -38,6 +38,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto generic
diff --git a/tests/qemu-iotests/009 b/tests/qemu-iotests/009
index efa852bad3..408617b0bc 100755
--- a/tests/qemu-iotests/009
+++ b/tests/qemu-iotests/009
@@ -38,6 +38,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto generic
_unsupported_imgopts "subformat=streamOptimized"
diff --git a/tests/qemu-iotests/010 b/tests/qemu-iotests/010
index 4ae9027b47..c9f6279255 100755
--- a/tests/qemu-iotests/010
+++ b/tests/qemu-iotests/010
@@ -38,6 +38,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto generic
_unsupported_imgopts "subformat=streamOptimized"
diff --git a/tests/qemu-iotests/011 b/tests/qemu-iotests/011
index 5c99ac987f..92039fa949 100755
--- a/tests/qemu-iotests/011
+++ b/tests/qemu-iotests/011
@@ -38,6 +38,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto generic
_unsupported_imgopts "subformat=streamOptimized"
diff --git a/tests/qemu-iotests/012 b/tests/qemu-iotests/012
index 3a24d2ca8d..5b0f1338e6 100755
--- a/tests/qemu-iotests/012
+++ b/tests/qemu-iotests/012
@@ -40,6 +40,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto file
# Remove once all tests are fixed to use TEST_IMG_FILE
diff --git a/tests/qemu-iotests/021 b/tests/qemu-iotests/021
index 0fc89df2fe..475f9b2116 100755
--- a/tests/qemu-iotests/021
+++ b/tests/qemu-iotests/021
@@ -38,6 +38,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto generic
diff --git a/tests/qemu-iotests/032 b/tests/qemu-iotests/032
index ebbe7cb0ba..b58141f132 100755
--- a/tests/qemu-iotests/032
+++ b/tests/qemu-iotests/032
@@ -42,6 +42,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
# This works for any image format (though unlikely to segfault for raw)
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto generic
_unsupported_imgopts "subformat=streamOptimized"
diff --git a/tests/qemu-iotests/033 b/tests/qemu-iotests/033
index 4bc7a071bd..6410c8717e 100755
--- a/tests/qemu-iotests/033
+++ b/tests/qemu-iotests/033
@@ -38,6 +38,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto generic
_unsupported_imgopts "subformat=streamOptimized"
diff --git a/tests/qemu-iotests/052 b/tests/qemu-iotests/052
index 2f23ac9b65..5b3545c8b9 100755
--- a/tests/qemu-iotests/052
+++ b/tests/qemu-iotests/052
@@ -38,6 +38,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto file
# Don't do O_DIRECT on tmpfs
diff --git a/tests/qemu-iotests/094 b/tests/qemu-iotests/094
index 4766e9a458..d8da955c1b 100755
--- a/tests/qemu-iotests/094
+++ b/tests/qemu-iotests/094
@@ -42,6 +42,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.qemu
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto nbd
_unsupported_imgopts "subformat=monolithicFlat" "subformat=twoGbMaxExtentFlat"
diff --git a/tests/qemu-iotests/120 b/tests/qemu-iotests/120
index ac7bd8c4e3..d8e5f4241a 100755
--- a/tests/qemu-iotests/120
+++ b/tests/qemu-iotests/120
@@ -40,7 +40,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
_supported_fmt generic
_supported_proto file fuse
-_unsupported_fmt luks
+_unsupported_fmt luks vvfat
_require_drivers raw
_make_test_img 64M
diff --git a/tests/qemu-iotests/140 b/tests/qemu-iotests/140
index d923b777e2..42a96d9097 100755
--- a/tests/qemu-iotests/140
+++ b/tests/qemu-iotests/140
@@ -45,6 +45,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.qemu
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto file fuse
_supported_os Linux
diff --git a/tests/qemu-iotests/145 b/tests/qemu-iotests/145
index a2ce92516d..ff9c6ff54f 100755
--- a/tests/qemu-iotests/145
+++ b/tests/qemu-iotests/145
@@ -39,6 +39,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto generic
_make_test_img 1M
diff --git a/tests/qemu-iotests/157 b/tests/qemu-iotests/157
index aa2ebbfb4b..419d3b8b7a 100755
--- a/tests/qemu-iotests/157
+++ b/tests/qemu-iotests/157
@@ -38,6 +38,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto file
_require_devices virtio-blk
diff --git a/tests/qemu-iotests/159 b/tests/qemu-iotests/159
index 4eb476d3a8..70a1079ae5 100755
--- a/tests/qemu-iotests/159
+++ b/tests/qemu-iotests/159
@@ -39,7 +39,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
_supported_fmt generic
_supported_proto file
-_unsupported_fmt luks
+_unsupported_fmt luks vvfat
TEST_SIZES="5 512 1024 1999 1K 64K 1M"
diff --git a/tests/qemu-iotests/170 b/tests/qemu-iotests/170
index 41387e4d66..f08fb0e8bd 100755
--- a/tests/qemu-iotests/170
+++ b/tests/qemu-iotests/170
@@ -39,7 +39,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
_supported_fmt generic
_supported_proto file
-_unsupported_fmt luks
+_unsupported_fmt luks vvfat
echo
echo "== Creating image =="
diff --git a/tests/qemu-iotests/192 b/tests/qemu-iotests/192
index e66e1a4f06..ca72b0b7c8 100755
--- a/tests/qemu-iotests/192
+++ b/tests/qemu-iotests/192
@@ -42,6 +42,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.qemu
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto file
if [ "$QEMU_DEFAULT_MACHINE" != "pc" ]; then
diff --git a/tests/qemu-iotests/197 b/tests/qemu-iotests/197
index 69849c800e..76b30672d9 100755
--- a/tests/qemu-iotests/197
+++ b/tests/qemu-iotests/197
@@ -53,7 +53,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
_supported_fmt generic
_supported_proto generic
# LUKS support may be possible, but it complicates things.
-_unsupported_fmt luks
+_unsupported_fmt luks vvfat
_unsupported_imgopts "subformat=streamOptimized"
echo
diff --git a/tests/qemu-iotests/208 b/tests/qemu-iotests/208
index 6117f165fa..f08c83c0c1 100755
--- a/tests/qemu-iotests/208
+++ b/tests/qemu-iotests/208
@@ -23,7 +23,7 @@
import iotests
-iotests.script_initialize(supported_fmts=['generic'])
+iotests.script_initialize(supported_fmts=['generic'], unsupported_fmts=['vvfat'])
with iotests.FilePath('disk.img') as disk_img_path, \
iotests.FilePath('disk-snapshot.img') as disk_snapshot_img_path, \
diff --git a/tests/qemu-iotests/215 b/tests/qemu-iotests/215
index 6babbcdc1f..3bd03c741e 100755
--- a/tests/qemu-iotests/215
+++ b/tests/qemu-iotests/215
@@ -50,7 +50,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
_supported_fmt generic
_supported_proto generic
# LUKS support may be possible, but it complicates things.
-_unsupported_fmt luks
+_unsupported_fmt luks vvfat
_unsupported_imgopts "subformat=streamOptimized"
echo
diff --git a/tests/qemu-iotests/236 b/tests/qemu-iotests/236
index 20419bbb9e..4bcca355ab 100755
--- a/tests/qemu-iotests/236
+++ b/tests/qemu-iotests/236
@@ -23,7 +23,7 @@
import iotests
from iotests import log
-iotests.script_initialize(supported_fmts=['generic'])
+iotests.script_initialize(supported_fmts=['generic'], unsupported_fmts=['vvfat'])
size = 64 * 1024 * 1024
granularity = 64 * 1024
diff --git a/tests/qemu-iotests/251 b/tests/qemu-iotests/251
index 794cad58b2..ac83f69d9a 100755
--- a/tests/qemu-iotests/251
+++ b/tests/qemu-iotests/251
@@ -39,6 +39,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ./common.qemu
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto file
_supported_os Linux
_unsupported_imgopts "subformat=streamOptimized"
diff --git a/tests/qemu-iotests/307 b/tests/qemu-iotests/307
index b429b5aa50..548d2f040d 100755
--- a/tests/qemu-iotests/307
+++ b/tests/qemu-iotests/307
@@ -27,7 +27,7 @@ import os
# luks which requires special command lines)
iotests.script_initialize(
supported_fmts=['generic'],
- unsupported_fmts=['luks', 'vpc'],
+ unsupported_fmts=['luks', 'vpc', 'vvfat'],
supported_platforms=['linux'],
)
diff --git a/tests/qemu-iotests/308 b/tests/qemu-iotests/308
index ea81dc496a..993dd2a5f9 100755
--- a/tests/qemu-iotests/308
+++ b/tests/qemu-iotests/308
@@ -47,7 +47,7 @@ if [ "$IMGOPTSSYNTAX" = "true" ]; then
fi
# We need the image to have exactly the specified size, and VPC does
# not allow that by default
-_unsupported_fmt vpc
+_unsupported_fmt vpc vvfat
_supported_proto file # We create the FUSE export manually
_supported_os Linux # We need /dev/urandom
diff --git a/tests/qemu-iotests/meson.build b/tests/qemu-iotests/meson.build
index fad340ad59..e87cf71fc4 100644
--- a/tests/qemu-iotests/meson.build
+++ b/tests/qemu-iotests/meson.build
@@ -23,7 +23,8 @@ qemu_iotests_formats = {
'raw': 'slow',
'qed': 'thorough',
'vmdk': 'thorough',
- 'vpc': 'thorough'
+ 'vpc': 'thorough',
+ 'vvfat': 'thorough',
}
foreach k, v : emulators
diff --git a/tests/qemu-iotests/tests/export-incoming-iothread b/tests/qemu-iotests/tests/export-incoming-iothread
index d36d6194e0..9535046dfd 100755
--- a/tests/qemu-iotests/tests/export-incoming-iothread
+++ b/tests/qemu-iotests/tests/export-incoming-iothread
@@ -75,5 +75,5 @@ class TestExportIncomingIothread(iotests.QMPTestCase):
if __name__ == '__main__':
iotests.main(supported_fmts=['generic'],
- unsupported_fmts=['luks'], # Would need a secret
+ unsupported_fmts=['luks', 'vvfat'], # Would need a secret
supported_protocols=['file'])
diff --git a/tests/qemu-iotests/tests/fuse-allow-other b/tests/qemu-iotests/tests/fuse-allow-other
index 19f494aefb..6cfbe9ef1f 100755
--- a/tests/qemu-iotests/tests/fuse-allow-other
+++ b/tests/qemu-iotests/tests/fuse-allow-other
@@ -38,6 +38,7 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
. ../common.qemu
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto file # We create the FUSE export manually
diff --git a/tests/qemu-iotests/tests/mirror-ready-cancel-error b/tests/qemu-iotests/tests/mirror-ready-cancel-error
index ed2e46447e..3b36764ecb 100755
--- a/tests/qemu-iotests/tests/mirror-ready-cancel-error
+++ b/tests/qemu-iotests/tests/mirror-ready-cancel-error
@@ -138,5 +138,5 @@ class TestMirrorReadyCancelError(iotests.QMPTestCase):
if __name__ == '__main__':
# LUKS would require special key-secret handling in add_blockdevs()
iotests.main(supported_fmts=['generic'],
- unsupported_fmts=['luks'],
+ unsupported_fmts=['luks', 'vvfat'],
supported_protocols=['file'])
diff --git a/tests/qemu-iotests/tests/regression-vhdx-log b/tests/qemu-iotests/tests/regression-vhdx-log
index ca264e93d6..eb216c27dd 100755
--- a/tests/qemu-iotests/tests/regression-vhdx-log
+++ b/tests/qemu-iotests/tests/regression-vhdx-log
@@ -40,6 +40,7 @@ cd ..
. ./common.filter
_supported_fmt generic
+_unsupported_fmt vvfat
_supported_proto generic
_unsupported_imgopts "subformat=streamOptimized"
--
2.45.0
* [PATCH v3 6/6] iotests: Add `create_file` test for `vvfat` driver
2024-05-26 9:56 [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests Amjad Alsharafi
` (4 preceding siblings ...)
2024-05-26 9:56 ` [PATCH v3 5/6] iotests: Filter out `vvfat` fmt from failing tests Amjad Alsharafi
@ 2024-05-26 9:56 ` Amjad Alsharafi
2024-05-31 17:22 ` [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests Kevin Wolf
6 siblings, 0 replies; 11+ messages in thread
From: Amjad Alsharafi @ 2024-05-26 9:56 UTC (permalink / raw)
To: qemu-devel; +Cc: Hanna Reitz, Kevin Wolf, open list:vvfat, Amjad Alsharafi
We test the ability to create new files on the filesystem; this is done
by adding an entry to the desired directory.
The file will then also be created in the host filesystem with a matching
filename.
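A condensed sketch of how the new helper can be exercised (continuing
from a `Fat16` instance set up as in the earlier patch; not the exact
test code):

# Create "NEWFILE.TXT" in the root directory (cluster=None means root);
# 0x20 is the regular "archive" attribute for a plain file.
entry = fat16.add_direntry(None, "NEWFILE", "TXT", 0x20)
fat16.write_file(entry, b"hello from the guest side\n")
# vvfat should then create a matching NEWFILE.TXT in the host directory.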
Signed-off-by: Amjad Alsharafi <amjadsharafi10@gmail.com>
---
tests/qemu-iotests/fat16.py | 124 +++++++++++++++++++++++++++--
tests/qemu-iotests/tests/vvfat | 29 +++++--
tests/qemu-iotests/tests/vvfat.out | 4 +-
3 files changed, 144 insertions(+), 13 deletions(-)
diff --git a/tests/qemu-iotests/fat16.py b/tests/qemu-iotests/fat16.py
index 6ac5508d8d..e86bdd0b10 100644
--- a/tests/qemu-iotests/fat16.py
+++ b/tests/qemu-iotests/fat16.py
@@ -16,9 +16,11 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from typing import List
+import string
SECTOR_SIZE = 512
DIRENTRY_SIZE = 32
+ALLOWED_FILE_CHARS = set("!#$%&'()-@^_`{}~" + string.digits + string.ascii_uppercase)
class MBR:
@@ -265,7 +267,7 @@ def write_fat_entry(self, cluster: int, value: int):
+ self.fats[fat_offset + 2 :]
)
self.fats_dirty_sectors.add(fat_offset // SECTOR_SIZE)
-
+
def flush_fats(self):
"""
Write the FATs back to the disk.
@@ -293,7 +295,7 @@ def next_cluster(self, cluster: int) -> int | None:
raise Exception("Invalid FAT entry")
else:
return fat_entry
-
+
def next_free_cluster(self) -> int:
"""
Find the next free cluster.
@@ -338,6 +340,67 @@ def read_directory(self, cluster: int) -> List[FatDirectoryEntry]:
cluster = self.next_cluster(cluster)
return entries
+ def add_direntry(self, cluster: int | None, name: str, ext: str, attributes: int):
+ """
+ Add a new directory entry to the given cluster.
+ If the cluster is `None`, then it will be added to the root directory.
+ """
+
+ def find_free_entry(data: bytes):
+ for i in range(0, len(data), DIRENTRY_SIZE):
+ entry = data[i : i + DIRENTRY_SIZE]
+ if entry[0] == 0 or entry[0] == 0xE5:
+ return i
+ return None
+
+ assert len(name) <= 8, "Name must be 8 characters or less"
+ assert len(ext) <= 3, "Ext must be 3 characters or less"
+ assert attributes % 0x15 != 0x15, "Invalid attributes"
+
+ # initial dummy data
+ new_entry = FatDirectoryEntry(b"\0" * 32, 0, 0)
+ new_entry.name = name.ljust(8, " ")
+ new_entry.ext = ext.ljust(3, " ")
+ new_entry.attributes = attributes
+ new_entry.reserved = 0
+ new_entry.create_time_tenth = 0
+ new_entry.create_time = 0
+ new_entry.create_date = 0
+ new_entry.last_access_date = 0
+ new_entry.last_mod_time = 0
+ new_entry.last_mod_date = 0
+ new_entry.cluster = self.next_free_cluster()
+ new_entry.size_bytes = 0
+
+ # mark as EOF
+ self.write_fat_entry(new_entry.cluster, 0xFFFF)
+
+ if cluster is None:
+ for i in range(self.boot_sector.root_dir_size()):
+ sector_data = self.read_sectors(
+ self.boot_sector.root_dir_start() + i, 1
+ )
+ offset = find_free_entry(sector_data)
+ if offset is not None:
+ new_entry.sector = self.boot_sector.root_dir_start() + i
+ new_entry.offset = offset
+ self.update_direntry(new_entry)
+ return new_entry
+ else:
+ while cluster is not None:
+ data = self.read_cluster(cluster)
+ offset = find_free_entry(data)
+ if offset is not None:
+ new_entry.sector = self.boot_sector.first_sector_of_cluster(
+ cluster
+ ) + (offset // SECTOR_SIZE)
+ new_entry.offset = offset % SECTOR_SIZE
+ self.update_direntry(new_entry)
+ return new_entry
+ cluster = self.next_cluster(cluster)
+
+ raise Exception("No free directory entries")
+
def update_direntry(self, entry: FatDirectoryEntry):
"""
Write the directory entry back to the disk.
@@ -406,9 +469,10 @@ def truncate_file(self, entry: FatDirectoryEntry, new_size: int):
raise Exception(f"{entry.whole_name()} is a directory")
def clusters_from_size(size: int):
- return (size + self.boot_sector.cluster_bytes() - 1) // self.boot_sector.cluster_bytes()
+ return (
+ size + self.boot_sector.cluster_bytes() - 1
+ ) // self.boot_sector.cluster_bytes()
-
# First, allocate new FATs if we need to
required_clusters = clusters_from_size(new_size)
current_clusters = clusters_from_size(entry.size_bytes)
@@ -438,7 +502,7 @@ def clusters_from_size(size: int):
self.write_fat_entry(cluster, new_cluster)
self.write_fat_entry(new_cluster, 0xFFFF)
cluster = new_cluster
-
+
elif required_clusters < current_clusters:
# Truncate the file
cluster = entry.cluster
@@ -464,7 +528,9 @@ def clusters_from_size(size: int):
count += 1
affected_clusters.add(cluster)
cluster = self.next_cluster(cluster)
- assert count == required_clusters, f"Expected {required_clusters} clusters, got {count}"
+ assert (
+ count == required_clusters
+ ), f"Expected {required_clusters} clusters, got {count}"
# update the size
entry.size_bytes = new_size
@@ -505,3 +571,49 @@ def write_file(self, entry: FatDirectoryEntry, data: bytes):
cluster = self.next_cluster(cluster)
assert len(data) == 0, "Data was not written completely, clusters missing"
+
+ def create_file(self, path: str):
+ """
+ Create a new file at the given path.
+ """
+ assert path[0] == "/", "Path must start with /"
+
+ path = path[1:] # remove the leading /
+
+ parts = path.split("/")
+
+ directory_cluster = None
+ directory = self.read_root_directory()
+
+ parts, filename = parts[:-1], parts[-1]
+
+ for i, part in enumerate(parts):
+ current_entry = None
+ for entry in directory:
+ if entry.whole_name() == part:
+ current_entry = entry
+ break
+ if current_entry is None:
+ return None
+
+ if current_entry.attributes & 0x10 == 0:
+ raise Exception(f"{current_entry.whole_name()} is not a directory")
+ else:
+ directory = self.read_directory(current_entry.cluster)
+ directory_cluster = current_entry.cluster
+
+ # add new entry to the directory
+
+ filename, ext = filename.split(".")
+
+ if len(ext) > 3:
+ raise Exception("Ext must be 3 characters or less")
+ if len(filename) > 8:
+ raise Exception("Name must be 8 characters or less")
+
+ for c in filename + ext:
+
+ if c not in ALLOWED_FILE_CHARS:
+ raise Exception("Invalid character in filename")
+
+ return self.add_direntry(directory_cluster, filename, ext, 0)
diff --git a/tests/qemu-iotests/tests/vvfat b/tests/qemu-iotests/tests/vvfat
index e0e23d1ab8..d8d802589d 100755
--- a/tests/qemu-iotests/tests/vvfat
+++ b/tests/qemu-iotests/tests/vvfat
@@ -323,7 +323,7 @@ class TestVVFatDriver(QMPTestCase):
with open(os.path.join(filesystem, "file0.txt"), "rb") as f:
self.assertEqual(f.read(), new_content)
-
+
def test_write_large_file(self):
"""
Test writing a large file
@@ -342,7 +342,7 @@ class TestVVFatDriver(QMPTestCase):
with open(os.path.join(filesystem, "large1.txt"), "rb") as f:
self.assertEqual(f.read(), new_content)
-
+
def test_truncate_file_change_clusters_less(self):
"""
Test truncating a file by reducing the number of clusters
@@ -359,7 +359,6 @@ class TestVVFatDriver(QMPTestCase):
with open(os.path.join(filesystem, "large1.txt"), "rb") as f:
self.assertEqual(f.read(), b"A")
-
def test_write_file_change_clusters_less(self):
"""
Test truncating a file by reducing the number of clusters
@@ -376,7 +375,7 @@ class TestVVFatDriver(QMPTestCase):
with open(os.path.join(filesystem, "large2.txt"), "rb") as f:
self.assertEqual(f.read(), new_content)
-
+
def test_write_file_change_clusters_more(self):
"""
Test truncating a file by increasing the number of clusters
@@ -392,7 +391,27 @@ class TestVVFatDriver(QMPTestCase):
with open(os.path.join(filesystem, "large2.txt"), "rb") as f:
self.assertEqual(f.read(), new_content)
-
+
+ def test_create_file(self):
+ """
+ Test creating a new file
+ """
+ fat16 = self.init_fat16()
+
+ new_file = fat16.create_file("/NEWFILE.TXT")
+
+ self.assertIsNotNone(new_file)
+ self.assertEqual(new_file.size_bytes, 0)
+
+ new_content = b"Hello, world! New file\n"
+ fat16.write_file(new_file, new_content)
+
+ self.assertEqual(fat16.read_file(new_file), new_content)
+
+ with open(os.path.join(filesystem, "newfile.txt"), "rb") as f:
+ self.assertEqual(f.read(), new_content)
+
+ # TODO: support deleting files
if __name__ == "__main__":
diff --git a/tests/qemu-iotests/tests/vvfat.out b/tests/qemu-iotests/tests/vvfat.out
index fa16b5ccef..6323079e08 100755
--- a/tests/qemu-iotests/tests/vvfat.out
+++ b/tests/qemu-iotests/tests/vvfat.out
@@ -1,5 +1,5 @@
-.............
+..............
----------------------------------------------------------------------
-Ran 13 tests
+Ran 14 tests
OK
--
2.45.0
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH v3 1/6] vvfat: Fix bug in writing to middle of file
2024-05-26 9:56 ` [PATCH v3 1/6] vvfat: Fix bug in writing to middle of file Amjad Alsharafi
@ 2024-05-31 16:15 ` Kevin Wolf
0 siblings, 0 replies; 11+ messages in thread
From: Kevin Wolf @ 2024-05-31 16:15 UTC (permalink / raw)
To: Amjad Alsharafi; +Cc: qemu-devel, Hanna Reitz, open list:vvfat
Am 26.05.2024 um 11:56 hat Amjad Alsharafi geschrieben:
> Before this commit, when calling `commit_one_file` with, for example,
> `offset=0x2000` (second cluster), we would not fetch the next cluster
> from the FAT and would instead use the first cluster for the read
> operation.
>
> This is due to an off-by-one error here, where `i=0x2000 !< offset=0x2000`,
> thus not fetching the next cluster.
>
> Signed-off-by: Amjad Alsharafi <amjadsharafi10@gmail.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Kevin Wolf <kwolf@redhat.com>
> diff --git a/block/vvfat.c b/block/vvfat.c
> index 9d050ba3ae..ab342f0743 100644
> --- a/block/vvfat.c
> +++ b/block/vvfat.c
> @@ -2525,7 +2525,7 @@ commit_one_file(BDRVVVFATState* s, int dir_index, uint32_t offset)
> return -1;
> }
>
> - for (i = s->cluster_size; i < offset; i += s->cluster_size)
> + for (i = s->cluster_size; i <= offset; i += s->cluster_size)
> c = modified_fat_get(s, c);
While your change results in the correct behaviour, I think I would
prefer the code to be changed like this so that at the start of each
loop iteration, 'i' always refers to the offset that matches 'c':
for (i = 0; i < offset; i += s->cluster_size) {
c = modified_fat_get(s, c);
}
I'm also adding braces here to make the code conform with the QEMU
coding style. You can use scripts/checkpatch.pl to make sure that all
code you add has the correct style. Much of the vvfat code predates the
coding style, so you'll often have to change style when you touch a
line. (Which is good, because it slowly fixes the existing mess.)
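For example, you can point it at the patch files that git format-patch
produced (the file name below is just an illustration):
    ./scripts/checkpatch.pl 0001-vvfat-Fix-bug-in-writing-to-middle-of-file.patch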
You can keep my Reviewed/Tested-by if you change this.
Kevin
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests
2024-05-26 9:56 [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests Amjad Alsharafi
` (5 preceding siblings ...)
2024-05-26 9:56 ` [PATCH v3 6/6] iotests: Add `create_file` test for `vvfat` driver Amjad Alsharafi
@ 2024-05-31 17:22 ` Kevin Wolf
2024-06-05 0:38 ` Amjad Alsharafi
6 siblings, 1 reply; 11+ messages in thread
From: Kevin Wolf @ 2024-05-31 17:22 UTC (permalink / raw)
To: Amjad Alsharafi; +Cc: qemu-devel, Hanna Reitz, open list:vvfat
Am 26.05.2024 um 11:56 hat Amjad Alsharafi geschrieben:
> These patches fix some bugs found when modifying files in vvfat.
> First, there was a bug when writing to the cluster 2 or above of a file, it
> will copy the cluster before it instead, so, when writing to cluster=2, the
> content of cluster=1 will be copied into disk instead in its place.
>
> Another issue was modifying the clusters of a file and adding new
> clusters, this showed 2 issues:
> - If the new cluster is not immediately after the last cluster, it will
> cause issues when reading from this file in the future.
> - Generally, the usage of info.file.offset was incorrect, and the
> system would crash on abort() when the file is modified and a new
> cluster was added.
>
> Also, added some iotests for vvfat, covering the this fix and also
> general behavior such as reading, writing, and creating files on the filesystem.
> Including tests for reading/writing the first cluster which
> would pass even before this patch.
I was wondering how to reproduce the bugs that patches 2 and 3 fix. So I
tried to run your iotests case, and while it does catch the bug that
patch 1 fixes, it passes even without the other two fixes.
Is this expected? If so, can we add more tests that trigger the problems
the other two patches address?
Kevin
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v3 5/6] iotests: Filter out `vvfat` fmt from failing tests
2024-05-26 9:56 ` [PATCH v3 5/6] iotests: Filter out `vvfat` fmt from failing tests Amjad Alsharafi
@ 2024-05-31 17:29 ` Kevin Wolf
0 siblings, 0 replies; 11+ messages in thread
From: Kevin Wolf @ 2024-05-31 17:29 UTC (permalink / raw)
To: Amjad Alsharafi; +Cc: qemu-devel, Hanna Reitz, open list:vvfat
Am 26.05.2024 um 11:56 hat Amjad Alsharafi geschrieben:
> `vvfat` is a special format and not all tests (even generic) can run
> without crashing. So, added `unsupported_fmt: vvfat` to all failing
> tests.
>
> Also added the `vvfat` format to `meson.build`, so that vvfat tests can
> be run in the `block-thorough` suite.
>
> Signed-off-by: Amjad Alsharafi <amjadsharafi10@gmail.com>
I think the better approach is just not counting vvfat as generic. It's
technically not even a format, but a protocol, though I think I agree
with adding it as a format anyway because you can't store a normal image
inside of it.
This should do the trick and avoid most of the changes in this patch:
diff --git a/tests/qemu-iotests/testenv.py b/tests/qemu-iotests/testenv.py
index 588f30a4f1..4053d29de4 100644
--- a/tests/qemu-iotests/testenv.py
+++ b/tests/qemu-iotests/testenv.py
@@ -250,7 +250,7 @@ def __init__(self, source_dir: str, build_dir: str,
self.qemu_img_options = os.getenv('QEMU_IMG_OPTIONS')
self.qemu_nbd_options = os.getenv('QEMU_NBD_OPTIONS')
- is_generic = self.imgfmt not in ['bochs', 'cloop', 'dmg']
+ is_generic = self.imgfmt not in ['bochs', 'cloop', 'dmg', 'vvfat']
self.imgfmt_generic = 'true' if is_generic else 'false'
self.qemu_io_options = f'--cache {self.cachemode} --aio {self.aiomode}'
Kevin
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests
2024-05-31 17:22 ` [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests Kevin Wolf
@ 2024-06-05 0:38 ` Amjad Alsharafi
0 siblings, 0 replies; 11+ messages in thread
From: Amjad Alsharafi @ 2024-06-05 0:38 UTC (permalink / raw)
To: Kevin Wolf; +Cc: qemu-devel, Hanna Reitz, open list:vvfat
On Fri, May 31, 2024 at 07:22:49PM +0200, Kevin Wolf wrote:
> Am 26.05.2024 um 11:56 hat Amjad Alsharafi geschrieben:
> > These patches fix some bugs found when modifying files in vvfat.
> > First, there was a bug when writing to the cluster 2 or above of a file, it
> > will copy the cluster before it instead, so, when writing to cluster=2, the
> > content of cluster=1 will be copied into disk instead in its place.
> >
> > Another issue was modifying the clusters of a file and adding new
> > clusters, this showed 2 issues:
> > - If the new cluster is not immediately after the last cluster, it will
> > cause issues when reading from this file in the future.
> > - Generally, the usage of info.file.offset was incorrect, and the
> > system would crash on abort() when the file is modified and a new
> > cluster was added.
> >
> > Also, added some iotests for vvfat, covering the this fix and also
> > general behavior such as reading, writing, and creating files on the filesystem.
> > Including tests for reading/writing the first cluster which
> > would pass even before this patch.
>
> I was wondering how to reproduce the bugs that patches 2 and 3 fix. So I
> tried to run your iotests case, and while it does catch the bug that
> patch 1 fixes, it passes even without the other two fixes.
>
> Is this expected? If so, can we add more tests that trigger the problems
> the other two patches address?
>
> Kevin
>
Thanks for checking. This bug happens when there is a mapping for a file
and its clusters are not contiguous.
For example, take a file with clusters `12, 13, 15`: when trying to read
from cluster 15, the offset in the file is computed with the formula
`cluster_size * (15-12)` (`12` being the first cluster).
This is of course not correct, and results in reading the file outside
its valid range.
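To illustrate (this is just a sketch, not the actual vvfat code;
`first_cluster`, `cluster_size` and `fat_next` are placeholder names):

    # Broken: assumes the chain 12 -> 13 -> 15 is contiguous, so cluster 15
    # is mapped to offset cluster_size * (15 - 12), i.e. 3 clusters into the
    # file, although it really holds the file's third cluster (2 clusters in).
    def offset_assuming_contiguous(cluster, first_cluster, cluster_size):
        return cluster_size * (cluster - first_cluster)

    # Correct: walk the FAT chain and count the hops until 'cluster' is
    # reached, which also works for non-contiguous chains.
    def offset_via_fat_chain(cluster, first_cluster, cluster_size, fat_next):
        offset = 0
        c = first_cluster
        while c != cluster:
            c = fat_next(c)          # e.g. 12 -> 13 -> 15
            offset += cluster_size
        return offset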
The reason it wasn't visible when you tested it is that I'm modifying
`large2.txt`, which is the last file on the disk, so when new clusters
are allocated they coincidentally end up right after that file's last
cluster, and the issue isn't triggered.
I'll modify the test to use the other file so that we can trigger the
issue.
I'll also address the suggestions you had on the other patches and
submit a new version.
Amjad
^ permalink raw reply [flat|nested] 11+ messages in thread
End of thread (newest message: 2024-06-05 0:39 UTC)
Thread overview: 11+ messages
2024-05-26 9:56 [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests Amjad Alsharafi
2024-05-26 9:56 ` [PATCH v3 1/6] vvfat: Fix bug in writing to middle of file Amjad Alsharafi
2024-05-31 16:15 ` Kevin Wolf
2024-05-26 9:56 ` [PATCH v3 2/6] vvfat: Fix usage of `info.file.offset` Amjad Alsharafi
2024-05-26 9:56 ` [PATCH v3 3/6] vvfat: Fix reading files with non-continuous clusters Amjad Alsharafi
2024-05-26 9:56 ` [PATCH v3 4/6] iotests: Add `vvfat` tests Amjad Alsharafi
2024-05-26 9:56 ` [PATCH v3 5/6] iotests: Filter out `vvfat` fmt from failing tests Amjad Alsharafi
2024-05-31 17:29 ` Kevin Wolf
2024-05-26 9:56 ` [PATCH v3 6/6] iotests: Add `create_file` test for `vvfat` driver Amjad Alsharafi
2024-05-31 17:22 ` [PATCH v3 0/6] vvfat: Fix write bugs for large files and add iotests Kevin Wolf
2024-06-05 0:38 ` Amjad Alsharafi