* [PATCH v1 00/13] Block device provisioning for storage nodes
@ 2025-03-10 14:18 cel
From: cel @ 2025-03-10 14:18 UTC (permalink / raw)
To: Luis Chamberlain, Chandan Babu R, Jeff Layton; +Cc: kdevops, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
Hi -
Sorry for the length of the series. I'm posting the series as a
whole to provide context for each of the individual changes. Feel
free to focus on whichever patch or patches in this series are most
interesting to you. All review comments are welcome.
The high-level goal is to enable testing NFS / SMB / iSCSI with
kdevops in the cloud. These workflows are already operational for
guestfs. The basic challenge is that each cloud provider has a
distinct way of provisioning and attaching block devices.
This series can be considered in two sections:
- the first four patches wrangle the terraform for some of the
cloud providers to make them provision a set of extra block
volumes, each of the same size, just as guestfs and AWS
currently do.
- the second half of the series adds a new playbook that:
- de-duplicates the scripting that creates an LVM volume group,
because three different roles each implement this the same way
- replaces the "skip one device" method for determining which
extra block volume the /data partition should live on. AWS still
needs some work here because NVMe devices can get renamed after
every instance reboot
- adds LVM support that handles the differences amongst the cloud
providers, tucked away in the new playbook
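To illustrate the AWS renaming problem mentioned above: EBS NVMe volumes keep a stable udev symlink under /dev/disk/by-id even when the kernel's /dev/nvmeXnY name changes across reboots, so a role can resolve devices by volume id instead of by kernel name. A sketch only; the ebs_to_dev helper is hypothetical and not part of this series, though the symlink prefix is the one udev creates on EC2 instances:

```shell
# Map an EBS volume id to its current kernel device node.
# udev links EBS NVMe namespaces as:
#   /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0123456789abcdef0
# (the dash in "vol-..." is dropped in the link name)
ebs_to_dev() {
	vol_id=$(printf '%s' "$1" | sed 's/^vol-/vol/')
	by_id="${2:-/dev/disk/by-id}"
	readlink -f "${by_id}/nvme-Amazon_Elastic_Block_Store_${vol_id}"
}
```

A playbook task could then look up the device for each volume id reported by terraform, rather than assuming /dev/nvme1n1 survives a reboot.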
GCE and OpenStack are not yet updated, but support for them is planned.
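The per-provider dispatch in the new volume_group role can be pictured roughly like this. The task-file layout comes from the diffstat; the variable names are assumptions for illustration, not the series' actual variables:

```yaml
# playbooks/roles/volume_group/tasks/main.yml (sketch)
# Pick the provider-specific task file; each one ends by handing
# a uniform list of physical volumes to the LVM setup tasks.
- name: Create volume group on guestfs nodes
  ansible.builtin.include_tasks: guestfs.yml
  when: kdevops_enable_guestfs | bool

- name: Create volume group on terraform-provisioned nodes
  ansible.builtin.include_tasks: "terraform/{{ kdevops_terraform_provider }}.yml"
  when: kdevops_enable_terraform | bool
```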
Chuck Lever (13):
terraform/AWS: Upgrade the EBS volume type to "gp3"
terraform/Azure: Remove managed_disk_type selection
terraform/Azure: Create a set of multiple generic block devices
terraform/OCI: Create a set of multiple generic block devices
guestfs: Set storage options consistently
playbooks: Add a role to create an LVM volume group
volume_group: Detect the /data partition directly
volume_group: Prepare to support cloud providers
volume_group: Create volume group on terraform/AWS nodes
volume_group: Create a volume group on Azure nodes
volume_group: Create a volume group on GCE nodes
volume_group: Create a volume group on OCI nodes
volume_group: Create a volume group on OpenStack public clouds
kconfigs/Kconfig.iscsi | 22 ---
kconfigs/Kconfig.libvirt | 3 +
kconfigs/Kconfig.nfsd | 27 +--
kconfigs/Kconfig.smbd | 17 --
playbooks/roles/gen_nodes/defaults/main.yml | 2 +-
playbooks/roles/gen_tfvars/defaults/main.yml | 1 -
.../templates/azure/terraform.tfvars.j2 | 4 +-
.../templates/oci/terraform.tfvars.j2 | 6 +
playbooks/roles/iscsi/defaults/main.yml | 2 -
playbooks/roles/iscsi/tasks/main.yml | 23 +--
playbooks/roles/iscsi/vars/Debian.yml | 1 -
playbooks/roles/iscsi/vars/RedHat.yml | 1 -
playbooks/roles/iscsi/vars/Suse.yml | 1 -
playbooks/roles/nfsd/defaults/main.yml | 3 -
playbooks/roles/nfsd/tasks/main.yml | 21 +--
playbooks/roles/smbd/defaults/main.yml | 3 -
playbooks/roles/smbd/tasks/main.yml | 21 +--
playbooks/roles/volume_group/README.md | 42 +++++
.../roles/volume_group/defaults/main.yml | 4 +
.../roles/volume_group/tasks/guestfs.yml | 59 ++++++
playbooks/roles/volume_group/tasks/main.yml | 42 +++++
.../volume_group/tasks/terraform/aws.yml | 54 ++++++
.../volume_group/tasks/terraform/azure.yml | 40 +++++
.../volume_group/tasks/terraform/gce.yml | 4 +
.../volume_group/tasks/terraform/oci.yml | 38 ++++
.../tasks/terraform/openstack.yml | 4 +
scripts/gen-nodes.Makefile | 10 --
scripts/iscsi.Makefile | 2 -
scripts/nfsd.Makefile | 2 -
scripts/smbd.Makefile | 2 -
scripts/terraform.Makefile | 10 +-
terraform/aws/main.tf | 3 +-
terraform/azure/Kconfig | 168 ++++++++++++++++--
terraform/azure/main.tf | 48 ++---
terraform/azure/managed_disks/main.tf | 20 +++
terraform/azure/managed_disks/vars.tf | 29 +++
terraform/azure/vars.tf | 17 +-
terraform/oci/Kconfig | 153 ++++++++++++++++
terraform/oci/main.tf | 28 ++-
terraform/oci/vars.tf | 17 ++
40 files changed, 744 insertions(+), 210 deletions(-)
create mode 100644 playbooks/roles/volume_group/README.md
create mode 100644 playbooks/roles/volume_group/defaults/main.yml
create mode 100644 playbooks/roles/volume_group/tasks/guestfs.yml
create mode 100644 playbooks/roles/volume_group/tasks/main.yml
create mode 100644 playbooks/roles/volume_group/tasks/terraform/aws.yml
create mode 100644 playbooks/roles/volume_group/tasks/terraform/azure.yml
create mode 100644 playbooks/roles/volume_group/tasks/terraform/gce.yml
create mode 100644 playbooks/roles/volume_group/tasks/terraform/oci.yml
create mode 100644 playbooks/roles/volume_group/tasks/terraform/openstack.yml
create mode 100644 terraform/azure/managed_disks/main.tf
create mode 100644 terraform/azure/managed_disks/vars.tf
--
2.48.1
* [PATCH v1 01/13] terraform/AWS: Upgrade the EBS volume type to "gp3"
From: cel @ 2025-03-10 14:18 UTC (permalink / raw)
To: Luis Chamberlain, Chandan Babu R, Jeff Layton; +Cc: kdevops, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
The default is "gp2". We get better throughput with the newer
device type, helping tests run to completion more quickly.
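Not part of this patch, but worth noting for follow-on work: unlike gp2, the gp3 type decouples IOPS and throughput from volume size, so both can be raised explicitly on the same resource. A sketch with illustrative, untested values:

```hcl
resource "aws_ebs_volume" "kdevops_vols" {
  availability_zone = var.aws_availability_region
  size              = var.aws_ebs_volume_size
  type              = "gp3"
  throughput        = 250  # MiB/s; gp3 baseline is 125
  iops              = 4000 # gp3 baseline is 3000
}
```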
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
terraform/aws/main.tf | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/terraform/aws/main.tf b/terraform/aws/main.tf
index a9407a745bcc..4679ca79159c 100644
--- a/terraform/aws/main.tf
+++ b/terraform/aws/main.tf
@@ -145,7 +145,8 @@ resource "aws_instance" "kdevops_instance" {
resource "aws_ebs_volume" "kdevops_vols" {
count = var.aws_enable_ebs == "true" ? local.kdevops_num_boxes * var.aws_ebs_num_volumes_per_instance : 0
availability_zone = var.aws_availability_region
- size = var.aws_ebs_volume_size
+ size = var.aws_ebs_volume_size
+ type = "gp3"
}
resource "aws_volume_attachment" "kdevops_att" {
--
2.48.1
* [PATCH v1 02/13] terraform/Azure: Remove managed_disk_type selection
From: cel @ 2025-03-10 14:18 UTC (permalink / raw)
To: Luis Chamberlain, Chandan Babu R, Jeff Layton; +Cc: kdevops, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
Using anything but "Premium_LRS" does not seem sensible, so remove
the choice from Kconfig to keep things simple.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
playbooks/roles/gen_tfvars/defaults/main.yml | 1 -
.../templates/azure/terraform.tfvars.j2 | 1 -
scripts/terraform.Makefile | 1 -
terraform/azure/Kconfig | 17 -----------------
terraform/azure/main.tf | 6 +++---
terraform/azure/vars.tf | 5 -----
6 files changed, 3 insertions(+), 28 deletions(-)
diff --git a/playbooks/roles/gen_tfvars/defaults/main.yml b/playbooks/roles/gen_tfvars/defaults/main.yml
index 8d13e04bd33a..c14ff59c90df 100644
--- a/playbooks/roles/gen_tfvars/defaults/main.yml
+++ b/playbooks/roles/gen_tfvars/defaults/main.yml
@@ -30,7 +30,6 @@ terraform_aws_ebs_volume_size: 0
terraform_azure_resource_location: "invalid"
terraform_azure_vm_size: "invalid"
-terraform_azure_managed_disk_type: "invalid"
terraform_azure_image_publisher: "invalid"
terraform_azure_image_offer: "invalid"
terraform_azure_image_sku: "invalid"
diff --git a/playbooks/roles/gen_tfvars/templates/azure/terraform.tfvars.j2 b/playbooks/roles/gen_tfvars/templates/azure/terraform.tfvars.j2
index 278101cf4cb1..37db35d2cbed 100644
--- a/playbooks/roles/gen_tfvars/templates/azure/terraform.tfvars.j2
+++ b/playbooks/roles/gen_tfvars/templates/azure/terraform.tfvars.j2
@@ -6,7 +6,6 @@ tenant_id = "{{ terraform_azure_tenant_id }}"
resource_location = "{{ terraform_azure_resource_location }}"
vmsize = "{{ terraform_azure_vm_size }}"
-managed_disk_type = "{{ terraform_azure_managed_disk_type }}"
image_publisher = "{{ terraform_azure_image_publisher }}"
image_offer = "{{ terraform_azure_image_offer }}"
image_sku = "{{ terraform_azure_image_sku }}"
diff --git a/scripts/terraform.Makefile b/scripts/terraform.Makefile
index 6543da89a17f..19c2384fb2ad 100644
--- a/scripts/terraform.Makefile
+++ b/scripts/terraform.Makefile
@@ -66,7 +66,6 @@ endif
ifeq (y,$(CONFIG_TERRAFORM_AZURE))
TERRAFORM_EXTRA_VARS += terraform_azure_resource_location=$(subst ",,$(CONFIG_TERRAFORM_AZURE_RESOURCE_LOCATION))
TERRAFORM_EXTRA_VARS += terraform_azure_vm_size=$(subst ",,$(CONFIG_TERRAFORM_AZURE_VM_SIZE))
-TERRAFORM_EXTRA_VARS += terraform_azure_managed_disk_type=$(subst ",,$(CONFIG_TERRAFORM_AZURE_MANAGED_DISK_TYPE))
TERRAFORM_EXTRA_VARS += terraform_azure_image_publisher=$(subst ",,$(CONFIG_TERRAFORM_AZURE_IMAGE_PUBLISHER))
TERRAFORM_EXTRA_VARS += terraform_azure_image_offer=$(subst ",,$(CONFIG_TERRAFORM_AZURE_IMAGE_OFFER))
TERRAFORM_EXTRA_VARS += terraform_azure_image_sku=$(subst ",,$(CONFIG_TERRAFORM_AZURE_IMAGE_SKU))
diff --git a/terraform/azure/Kconfig b/terraform/azure/Kconfig
index 30acefd301db..0c5a0df9fbc5 100644
--- a/terraform/azure/Kconfig
+++ b/terraform/azure/Kconfig
@@ -42,23 +42,6 @@ config TERRAFORM_AZURE_VM_SIZE
help
This option will set the azure vm image size.
-choice
- prompt "Azure managed disk type"
- default TERRAFORM_AZURE_MANAGED_DISK_PREMIUM_LRS
-
-config TERRAFORM_AZURE_MANAGED_DISK_PREMIUM_LRS
- bool "Premium_LRS"
- help
- This option will set the azure vm image size to Standard_DS1_v2.
-
-endchoice
-
-config TERRAFORM_AZURE_MANAGED_DISK_TYPE
- string "Azure managed disk type"
- default "Premium_LRS" if TERRAFORM_AZURE_MANAGED_DISK_PREMIUM_LRS
- help
- This option will set azure managed disk type.
-
choice
prompt "Azure image publisher"
default TERRAFORM_AZURE_IMAGE_PUBLISHER_DEBIAN
diff --git a/terraform/azure/main.tf b/terraform/azure/main.tf
index 55c66b458a92..d2e90ff7f7f0 100644
--- a/terraform/azure/main.tf
+++ b/terraform/azure/main.tf
@@ -140,7 +140,7 @@ resource "azurerm_linux_virtual_machine" "kdevops_vm" {
#name = "${format("kdevops-main-disk-%s", element(azurerm_virtual_machine.kdevops_vm.*.name, count.index))}"
name = format("kdevops-main-disk-%02d", count.index + 1)
caching = "ReadWrite"
- storage_account_type = var.managed_disk_type
+ storage_account_type = "Premium_LRS"
#disk_size_gb = 64
}
@@ -174,7 +174,7 @@ resource "azurerm_managed_disk" "kdevops_data_disk" {
location = var.resource_location
resource_group_name = azurerm_resource_group.kdevops_group.name
create_option = "Empty"
- storage_account_type = var.managed_disk_type
+ storage_account_type = "Premium_LRS"
disk_size_gb = 100
}
@@ -193,7 +193,7 @@ resource "azurerm_managed_disk" "kdevops_scratch_disk" {
location = var.resource_location
resource_group_name = azurerm_resource_group.kdevops_group.name
create_option = "Empty"
- storage_account_type = var.managed_disk_type
+ storage_account_type = "Premium_LRS"
disk_size_gb = 100
}
diff --git a/terraform/azure/vars.tf b/terraform/azure/vars.tf
index 0a7f9585f66b..3981ccb01faf 100644
--- a/terraform/azure/vars.tf
+++ b/terraform/azure/vars.tf
@@ -40,11 +40,6 @@ variable "vmsize" {
default = "Standard_DS3_v2"
}
-variable "managed_disk_type" {
- description = "Managed disk type"
- default = "Premium_LRS"
-}
-
variable "image_publisher" {
description = "Storage image publisher"
default = "Debian"
--
2.48.1
* [PATCH v1 03/13] terraform/Azure: Create a set of multiple generic block devices
From: cel @ 2025-03-10 14:18 UTC (permalink / raw)
To: Luis Chamberlain, Chandan Babu R, Jeff Layton; +Cc: kdevops, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
When provisioning on Azure, terraform creates one block device for
the /data file system, and one for an unnamed device. This is unlike
other provisioning methods (guestfs and AWS being the primary
examples) which instead create a set of generic block devices and
then set up the sparse files or /data file system on one of those.
Luis has agreed to changing Azure to work like the other terraform
providers and guestfs.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
.../templates/azure/terraform.tfvars.j2 | 3 +
terraform/azure/Kconfig | 151 +++++++++++++++++-
terraform/azure/main.tf | 46 ++----
terraform/azure/managed_disks/main.tf | 20 +++
terraform/azure/managed_disks/vars.tf | 29 ++++
terraform/azure/vars.tf | 12 ++
6 files changed, 224 insertions(+), 37 deletions(-)
create mode 100644 terraform/azure/managed_disks/main.tf
create mode 100644 terraform/azure/managed_disks/vars.tf
diff --git a/playbooks/roles/gen_tfvars/templates/azure/terraform.tfvars.j2 b/playbooks/roles/gen_tfvars/templates/azure/terraform.tfvars.j2
index 37db35d2cbed..117c9b7c49e5 100644
--- a/playbooks/roles/gen_tfvars/templates/azure/terraform.tfvars.j2
+++ b/playbooks/roles/gen_tfvars/templates/azure/terraform.tfvars.j2
@@ -11,6 +11,9 @@ image_offer = "{{ terraform_azure_image_offer }}"
image_sku = "{{ terraform_azure_image_sku }}"
image_version = "{{ terraform_azure_image_version }}"
+managed_disks_per_instance = {{ terraform_azure_managed_disks_per_instance }}
+managed_disks_size = {{ terraform_azure_managed_disks_size }}
+
ssh_config_pubkey_file = "{{ kdevops_terraform_ssh_config_pubkey_file }}"
ssh_config_user = "{{ kdevops_terraform_ssh_config_user }}"
ssh_config = "{{ sshconfig }}"
diff --git a/terraform/azure/Kconfig b/terraform/azure/Kconfig
index 0c5a0df9fbc5..18062ddf7cb2 100644
--- a/terraform/azure/Kconfig
+++ b/terraform/azure/Kconfig
@@ -155,7 +155,156 @@ config TERRAFORM_AZURE_APPLICATION_ID
help
The application ID to use.
+choice
+ prompt "Count of extra managed disks"
+ default TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_4
+ help
+ The count of managed disks attached to each target node.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_2
+ bool "2"
+ help
+ Provision 2 extra managed disks per target node.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_3
+ bool "3"
+ help
+ Provision 3 extra managed disks per target node.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_4
+ bool "4"
+ help
+ Provision 4 extra managed disks per target node.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_5
+ bool "5"
+ help
+ Provision 5 extra managed disks per target node.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_6
+ bool "6"
+ help
+ Provision 6 extra managed disks per target node.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_7
+ bool "7"
+ help
+ Provision 7 extra managed disks per target node.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_8
+ bool "8"
+ help
+ Provision 8 extra managed disks per target node.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_9
+ bool "9"
+ help
+ Provision 9 extra managed disks per target node.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_10
+ bool "10"
+ help
+ Provision 10 extra managed disks per target node.
+
+endchoice
+
+config TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE
+ int
+ output yaml
+ default 2 if TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_2
+ default 3 if TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_3
+ default 4 if TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_4
+ default 5 if TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_5
+ default 6 if TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_6
+ default 7 if TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_7
+ default 8 if TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_8
+ default 9 if TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_9
+ default 10 if TERRAFORM_AZURE_MANAGED_DISKS_PER_INSTANCE_10
+
+choice
+ prompt "Volume size for each additional volume"
+ default TERRAFORM_AZURE_MANAGED_DISKS_SIZE_64G
+ help
+ This option selects the size (in gibibytes) of managed
+ disks created for the target nodes.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_SIZE_4G
+ bool "4G"
+ help
+ Managed disks are 4 GiB in size.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_SIZE_8G
+ bool "8G"
+ help
+ Managed disks are 8 GiB in size.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_SIZE_16G
+ bool "16G"
+ help
+ Managed disks are 16 GiB in size.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_SIZE_32G
+ bool "32G"
+ help
+ Managed disks are 32 GiB in size.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_SIZE_64G
+ bool "64G"
+ help
+ Managed disks are 64 GiB in size.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_SIZE_128G
+ bool "128G"
+ help
+ Managed disks are 128 GiB in size.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_SIZE_256G
+ bool "256G"
+ help
+ Managed disks are 256 GiB in size.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_SIZE_512G
+ bool "512G"
+ help
+ Managed disks are 512 GiB in size.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_SIZE_1024G
+ bool "1024G"
+ help
+ Managed disks are 1024 GiB in size.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_SIZE_2048G
+ bool "2048G"
+ help
+ Managed disks are 2048 GiB in size.
+
+config TERRAFORM_AZURE_MANAGED_DISKS_SIZE_4096G
+ bool "4096G"
+ help
+ Managed disks are 4096 GiB in size.
+
+endchoice
+
+config TERRAFORM_AZURE_MANAGED_DISKS_SIZE
+ int
+ output yaml
+ default 4 if TERRAFORM_AZURE_MANAGED_DISKS_SIZE_4G
+ default 8 if TERRAFORM_AZURE_MANAGED_DISKS_SIZE_8G
+ default 16 if TERRAFORM_AZURE_MANAGED_DISKS_SIZE_16G
+ default 32 if TERRAFORM_AZURE_MANAGED_DISKS_SIZE_32G
+ default 64 if TERRAFORM_AZURE_MANAGED_DISKS_SIZE_64G
+ default 128 if TERRAFORM_AZURE_MANAGED_DISKS_SIZE_128G
+ default 256 if TERRAFORM_AZURE_MANAGED_DISKS_SIZE_256G
+ default 512 if TERRAFORM_AZURE_MANAGED_DISKS_SIZE_512G
+ default 1024 if TERRAFORM_AZURE_MANAGED_DISKS_SIZE_1024G
+ default 2048 if TERRAFORM_AZURE_MANAGED_DISKS_SIZE_2048G
+ default 4096 if TERRAFORM_AZURE_MANAGED_DISKS_SIZE_4096G
+
config TERRAFORM_AZURE_DATA_VOLUME_DEVICE_FILE_NAME
- string "/dev/sdc"
+ string "Device name for the /data file system"
+ default "/dev/sdc"
+ help
+ This option sets the name of the block device on each target
+ node that is to be used for the /data file system.
endif # TERRAFORM_AZURE
diff --git a/terraform/azure/main.tf b/terraform/azure/main.tf
index d2e90ff7f7f0..9b7b9228eb0e 100644
--- a/terraform/azure/main.tf
+++ b/terraform/azure/main.tf
@@ -10,7 +10,7 @@ resource "azurerm_resource_group" "kdevops_group" {
}
locals {
- kdevops_private_net = format("%s/%d", var.private_net_prefix, var.private_net_mask)
+ kdevops_private_net = format("%s/%d", var.private_net_prefix, var.private_net_mask)
}
resource "azurerm_virtual_network" "kdevops_network" {
@@ -168,40 +168,14 @@ resource "azurerm_linux_virtual_machine" "kdevops_vm" {
}
}
-resource "azurerm_managed_disk" "kdevops_data_disk" {
- count = local.kdevops_num_boxes
- name = format("kdevops-data-disk-%02d", count.index + 1)
- location = var.resource_location
- resource_group_name = azurerm_resource_group.kdevops_group.name
- create_option = "Empty"
- storage_account_type = "Premium_LRS"
- disk_size_gb = 100
-}
+module "managed_disks" {
+ count = local.kdevops_num_boxes
+ source = "./managed_disks"
-resource "azurerm_virtual_machine_data_disk_attachment" "kdevops_data_disk" {
- count = local.kdevops_num_boxes
- managed_disk_id = azurerm_managed_disk.kdevops_data_disk[count.index].id
- virtual_machine_id = element(azurerm_linux_virtual_machine.kdevops_vm.*.id, count.index)
- caching = "None"
- write_accelerator_enabled = false
- lun = 0
-}
-
-resource "azurerm_managed_disk" "kdevops_scratch_disk" {
- count = local.kdevops_num_boxes
- name = format("kdevops-scratch-disk-%02d", count.index + 1)
- location = var.resource_location
- resource_group_name = azurerm_resource_group.kdevops_group.name
- create_option = "Empty"
- storage_account_type = "Premium_LRS"
- disk_size_gb = 100
-}
-
-resource "azurerm_virtual_machine_data_disk_attachment" "kdevops_scratch_disk" {
- count = local.kdevops_num_boxes
- managed_disk_id = azurerm_managed_disk.kdevops_scratch_disk[count.index].id
- virtual_machine_id = element(azurerm_linux_virtual_machine.kdevops_vm.*.id, count.index)
- caching = "None"
- write_accelerator_enabled = false
- lun = 1
+ md_disk_size = var.managed_disks_size
+ md_disk_count = var.managed_disks_per_instance
+ md_location = var.resource_location
+ md_resource_group_name = azurerm_resource_group.kdevops_group.name
+ md_virtual_machine_id = element(azurerm_linux_virtual_machine.kdevops_vm.*.id, count.index)
+ md_virtual_machine_name = element(azurerm_linux_virtual_machine.kdevops_vm.*.name, count.index)
}
diff --git a/terraform/azure/managed_disks/main.tf b/terraform/azure/managed_disks/main.tf
new file mode 100644
index 000000000000..503c782662fc
--- /dev/null
+++ b/terraform/azure/managed_disks/main.tf
@@ -0,0 +1,20 @@
+resource "azurerm_managed_disk" "kdevops_managed_disk" {
+ count = var.md_disk_count
+
+ name = format("kdevops_%s_disk%02d", var.md_virtual_machine_name, count.index + 1)
+ location = var.md_location
+ resource_group_name = var.md_resource_group_name
+ create_option = "Empty"
+ storage_account_type = "Premium_LRS"
+ disk_size_gb = var.md_disk_size
+}
+
+resource "azurerm_virtual_machine_data_disk_attachment" "kdevops_disk_attachment" {
+ count = var.md_disk_count
+
+ managed_disk_id = azurerm_managed_disk.kdevops_managed_disk[count.index].id
+ virtual_machine_id = var.md_virtual_machine_id
+ caching = "None"
+ write_accelerator_enabled = false
+ lun = count.index
+}
diff --git a/terraform/azure/managed_disks/vars.tf b/terraform/azure/managed_disks/vars.tf
new file mode 100644
index 000000000000..568df7c6fc41
--- /dev/null
+++ b/terraform/azure/managed_disks/vars.tf
@@ -0,0 +1,29 @@
+variable "md_disk_count" {
+ type = number
+ description = "Count of managed disks to attach to the virtual machine"
+}
+
+variable "md_disk_size" {
+ type = number
+ description = "Size of each managed disk, in gibibytes"
+}
+
+variable "md_location" {
+ type = string
+ description = "Azure resource location"
+}
+
+variable "md_resource_group_name" {
+ type = string
+ description = "Azure resource group name"
+}
+
+variable "md_virtual_machine_id" {
+ type = string
+ description = "Azure ID of the virtual machine to attach the disks to"
+}
+
+variable "md_virtual_machine_name" {
+ type = string
+ description = "Name of the virtual machine to attach the disks to"
+}
diff --git a/terraform/azure/vars.tf b/terraform/azure/vars.tf
index 3981ccb01faf..bf20adf813e0 100644
--- a/terraform/azure/vars.tf
+++ b/terraform/azure/vars.tf
@@ -59,3 +59,15 @@ variable "image_version" {
description = "Storage image version"
default = "latest"
}
+
+variable "managed_disks_per_instance" {
+ description = "Count of managed disks per VM instance"
+ type = number
+ default = 0
+}
+
+variable "managed_disks_size" {
+ description = "Size of each managed disk, in gibibytes"
+ type = number
+ default = 0
+}
--
2.48.1
* [PATCH v1 04/13] terraform/OCI: Create a set of multiple generic block devices
From: cel @ 2025-03-10 14:18 UTC (permalink / raw)
To: Luis Chamberlain, Chandan Babu R, Jeff Layton; +Cc: kdevops, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
When provisioning on OCI, terraform creates one block device for
the /data file system, and one for sparse files. This is unlike
other provisioning methods (guestfs and AWS being the primary
examples) which instead create a set of generic block devices and
then set up the sparse files or /data file system on one of those.
Luis and Chandan have agreed to changing OCI to work like the
other terraform providers and guestfs.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
.../templates/oci/terraform.tfvars.j2 | 6 +
scripts/terraform.Makefile | 9 +-
terraform/oci/Kconfig | 153 ++++++++++++++++++
terraform/oci/main.tf | 28 +++-
terraform/oci/vars.tf | 17 ++
5 files changed, 208 insertions(+), 5 deletions(-)
diff --git a/playbooks/roles/gen_tfvars/templates/oci/terraform.tfvars.j2 b/playbooks/roles/gen_tfvars/templates/oci/terraform.tfvars.j2
index 6429c7289f52..2d45fd77d510 100644
--- a/playbooks/roles/gen_tfvars/templates/oci/terraform.tfvars.j2
+++ b/playbooks/roles/gen_tfvars/templates/oci/terraform.tfvars.j2
@@ -12,10 +12,16 @@ oci_os_image_ocid = "{{ terraform_oci_os_image_ocid }}"
oci_assign_public_ip = "{{ terraform_oci_assign_public_ip | lower }}"
oci_instance_display_name = "{{ terraform_oci_instance_display_name }}"
oci_subnet_ocid = "{{ terraform_oci_subnet_ocid }}"
+oci_volumes_enable_extra = "{{ terraform_oci_volumes_enable_extra | lower }}"
+{% if terraform_oci_volumes_enable_extra %}
+oci_volumes_per_instance = {{ terraform_oci_volumes_per_instance }}
+oci_volumes_size = {{ terraform_oci_volumes_size }}
+{% else %}
oci_data_volume_display_name = "{{ terraform_oci_data_volume_display_name }}"
oci_data_volume_device_file_name = "{{ terraform_oci_data_volume_device_file_name }}"
oci_sparse_volume_display_name = "{{ terraform_oci_sparse_volume_display_name }}"
oci_sparse_volume_device_file_name = "{{ terraform_oci_sparse_volume_device_file_name }}"
+{% endif %}
ssh_config_pubkey_file = "{{ kdevops_terraform_ssh_config_pubkey_file }}"
ssh_config_user = "{{ kdevops_terraform_ssh_config_user }}"
diff --git a/scripts/terraform.Makefile b/scripts/terraform.Makefile
index 19c2384fb2ad..e3d8c6b003ce 100644
--- a/scripts/terraform.Makefile
+++ b/scripts/terraform.Makefile
@@ -104,10 +104,17 @@ else
TERRAFORM_EXTRA_VARS += terraform_oci_assign_public_ip=false
endif
TERRAFORM_EXTRA_VARS += terraform_oci_subnet_ocid=$(subst ",,$(CONFIG_TERRAFORM_OCI_SUBNET_OCID))
+
+ifeq (y, $(CONFIG_TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA))
+TERRAFORM_EXTRA_VARS += terraform_oci_volumes_enable_extra=true
+else
+TERRAFORM_EXTRA_VARS += terraform_oci_volumes_enable_extra=false
TERRAFORM_EXTRA_VARS += terraform_oci_data_volume_display_name=$(subst ",,$(CONFIG_TERRAFORM_OCI_DATA_VOLUME_DISPLAY_NAME))
-TERRAFORM_EXTRA_VARS += terraform_oci_data_volume_device_file_name=$(subst ",,$(CONFIG_TERRAFORM_OCI_DATA_VOLUME_DEVICE_FILE_NAME))
TERRAFORM_EXTRA_VARS += terraform_oci_sparse_volume_display_name=$(subst ",,$(CONFIG_TERRAFORM_OCI_SPARSE_VOLUME_DISPLAY_NAME))
+endif
+TERRAFORM_EXTRA_VARS += terraform_oci_data_volume_device_file_name=$(subst ",,$(CONFIG_TERRAFORM_OCI_DATA_VOLUME_DEVICE_FILE_NAME))
TERRAFORM_EXTRA_VARS += terraform_oci_sparse_volume_device_file_name=$(subst ",,$(CONFIG_TERRAFORM_OCI_SPARSE_VOLUME_DEVICE_FILE_NAME))
+
endif
ifeq (y,$(CONFIG_TERRAFORM_OPENSTACK))
diff --git a/terraform/oci/Kconfig b/terraform/oci/Kconfig
index 4b37ad91d4b9..00f03163ed83 100644
--- a/terraform/oci/Kconfig
+++ b/terraform/oci/Kconfig
@@ -90,6 +90,153 @@ config TERRAFORM_OCI_SUBNET_OCID
Read this:
https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformproviderconfiguration.htm
+config TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA
+ bool "Enable additional block devices"
+ default n
+ help
+ Enable this to provision up to 10 extra block devices
+ on each target node.
+
+if TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA
+
+choice
+ prompt "Count of extra block volumes"
+ default TERRAFORM_OCI_VOLUMES_PER_INSTANCE_4
+ help
+ The count of extra block devices attached to each target
+ node.
+
+config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_2
+ bool "2"
+ help
+ Provision 2 extra volumes per target node.
+
+config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_3
+ bool "3"
+ help
+ Provision 3 extra volumes per target node.
+
+config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_4
+ bool "4"
+ help
+ Provision 4 extra volumes per target node.
+
+config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_5
+ bool "5"
+ help
+ Provision 5 extra volumes per target node.
+
+config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_6
+ bool "6"
+ help
+ Provision 6 extra volumes per target node.
+
+config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_7
+ bool "7"
+ help
+ Provision 7 extra volumes per target node.
+
+config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_8
+ bool "8"
+ help
+ Provision 8 extra volumes per target node.
+
+config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_9
+ bool "9"
+ help
+ Provision 9 extra volumes per target node.
+
+config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_10
+ bool "10"
+ help
+ Provision 10 extra volumes per target node.
+
+endchoice
+
+config TERRAFORM_OCI_VOLUMES_PER_INSTANCE
+ int
+ output yaml
+ default 2 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_2
+ default 3 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_3
+ default 4 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_4
+ default 5 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_5
+ default 6 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_6
+ default 7 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_7
+ default 8 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_8
+ default 9 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_9
+ default 10 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_10
+
+choice
+ prompt "Volume size for each additional volume"
+ default TERRAFORM_OCI_VOLUMES_SIZE_50G
+ help
+ OCI implements volume sizes between 50G and 32T. In some
+ cases, 50G volumes are in the free tier.
+
+config TERRAFORM_OCI_VOLUMES_SIZE_50G
+ bool "50G"
+ help
+ Extra block volumes are 50 GiB in size.
+
+config TERRAFORM_OCI_VOLUMES_SIZE_64G
+ bool "64G"
+ help
+ Extra block volumes are 64 GiB in size.
+
+config TERRAFORM_OCI_VOLUMES_SIZE_128G
+ bool "128G"
+ help
+ Extra block volumes are 128 GiB in size.
+
+config TERRAFORM_OCI_VOLUMES_SIZE_256G
+ bool "256G"
+ help
+ Extra block volumes are 256 GiB in size.
+
+config TERRAFORM_OCI_VOLUMES_SIZE_512G
+ bool "512G"
+ help
+ Extra block volumes are 512 GiB in size.
+
+config TERRAFORM_OCI_VOLUMES_SIZE_1024G
+ bool "1024G"
+ help
+ Extra block volumes are 1024 GiB in size.
+
+config TERRAFORM_OCI_VOLUMES_SIZE_2048G
+ bool "2048G"
+ help
+ Extra block volumes are 2048 GiB in size.
+
+config TERRAFORM_OCI_VOLUMES_SIZE_4096G
+ bool "4096G"
+ help
+ Extra block volumes are 4096 GiB in size.
+
+config TERRAFORM_OCI_VOLUMES_SIZE_8192G
+ bool "8192G"
+ help
+ Extra block volumes are 8192 GiB in size.
+
+endchoice
+
+config TERRAFORM_OCI_VOLUMES_SIZE
+ int
+ output yaml
+ default 50 if TERRAFORM_OCI_VOLUMES_SIZE_50G
+ default 64 if TERRAFORM_OCI_VOLUMES_SIZE_64G
+ default 128 if TERRAFORM_OCI_VOLUMES_SIZE_128G
+ default 256 if TERRAFORM_OCI_VOLUMES_SIZE_256G
+ default 512 if TERRAFORM_OCI_VOLUMES_SIZE_512G
+ default 1024 if TERRAFORM_OCI_VOLUMES_SIZE_1024G
+ default 2048 if TERRAFORM_OCI_VOLUMES_SIZE_2048G
+ default 4096 if TERRAFORM_OCI_VOLUMES_SIZE_4096G
+ default 8192 if TERRAFORM_OCI_VOLUMES_SIZE_8192G
+
+endif # TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA
+
+if !TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA
+
config TERRAFORM_OCI_DATA_VOLUME_DISPLAY_NAME
string "Display name to use for the data volume"
default "data"
@@ -98,6 +245,8 @@ config TERRAFORM_OCI_DATA_VOLUME_DISPLAY_NAME
Read this:
https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformproviderconfiguration.htm
+endif # !TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA
+
config TERRAFORM_OCI_DATA_VOLUME_DEVICE_FILE_NAME
string "Data volume's device file name"
default "/dev/oracleoci/oraclevdb"
@@ -106,6 +255,8 @@ config TERRAFORM_OCI_DATA_VOLUME_DEVICE_FILE_NAME
Read this:
https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformproviderconfiguration.htm
+if !TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA
+
config TERRAFORM_OCI_SPARSE_VOLUME_DISPLAY_NAME
string "Display name to use for the sparse volume"
default "sparse"
@@ -114,6 +265,8 @@ config TERRAFORM_OCI_SPARSE_VOLUME_DISPLAY_NAME
Read this:
https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformproviderconfiguration.htm
+endif # !TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA
+
config TERRAFORM_OCI_SPARSE_VOLUME_DEVICE_FILE_NAME
string "Sparse volume's device file name"
default "/dev/oracleoci/oraclevdc"
diff --git a/terraform/oci/main.tf b/terraform/oci/main.tf
index 033f821d9502..c3c477a6b4bd 100644
--- a/terraform/oci/main.tf
+++ b/terraform/oci/main.tf
@@ -28,7 +28,7 @@ resource "oci_core_instance" "kdevops_instance" {
}
resource "oci_core_volume" "kdevops_data_disk" {
- count = local.kdevops_num_boxes
+ count = var.oci_volumes_enable_extra == "true" ? 0 : local.kdevops_num_boxes
compartment_id = var.oci_compartment_ocid
@@ -38,7 +38,7 @@ resource "oci_core_volume" "kdevops_data_disk" {
}
resource "oci_core_volume" "kdevops_sparse_disk" {
- count = local.kdevops_num_boxes
+ count = var.oci_volumes_enable_extra == "true" ? 0 : local.kdevops_num_boxes
compartment_id = var.oci_compartment_ocid
@@ -48,7 +48,7 @@ resource "oci_core_volume" "kdevops_sparse_disk" {
}
resource "oci_core_volume_attachment" "kdevops_data_volume_attachment" {
- count = local.kdevops_num_boxes
+ count = var.oci_volumes_enable_extra == "true" ? 0 : local.kdevops_num_boxes
attachment_type = "paravirtualized"
instance_id = element(oci_core_instance.kdevops_instance.*.id, count.index)
@@ -58,7 +58,7 @@ resource "oci_core_volume_attachment" "kdevops_data_volume_attachment" {
}
resource "oci_core_volume_attachment" "kdevops_sparse_disk_attachment" {
- count = local.kdevops_num_boxes
+ count = var.oci_volumes_enable_extra == "true" ? 0 : local.kdevops_num_boxes
attachment_type = "paravirtualized"
instance_id = element(oci_core_instance.kdevops_instance.*.id, count.index)
@@ -66,3 +66,23 @@ resource "oci_core_volume_attachment" "kdevops_sparse_disk_attachment" {
device = var.oci_sparse_volume_device_file_name
}
+
+resource "oci_core_volume" "kdevops_volume_extra" {
+ count = var.oci_volumes_enable_extra == "false" ? 0 : local.kdevops_num_boxes * var.oci_volumes_per_instance
+ availability_domain = var.oci_availablity_domain
+ display_name = format("kdevops_volume%02d", count.index + 1)
+ compartment_id = var.oci_compartment_ocid
+ size_in_gbs = var.oci_volumes_size
+}
+
+locals {
+ volume_name_suffixes = [ "b", "c", "d", "e", "f", "g", "h", "i", "j", "k" ]
+}
+
+resource "oci_core_volume_attachment" "kdevops_volume_extra_att" {
+ count = var.oci_volumes_enable_extra == "false" ? 0 : local.kdevops_num_boxes * var.oci_volumes_per_instance
+ attachment_type = "paravirtualized"
+ instance_id = element(oci_core_instance.kdevops_instance.*.id, count.index)
+ volume_id = element(oci_core_volume.kdevops_volume_extra.*.id, count.index)
+ device = format("/dev/oracleoci/oraclevd%s", element(local.volume_name_suffixes, count.index))
+}
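
The attachment resource above leans on Terraform's element(), which wraps
its index modulo the list length; with multiple instances this yields a
round-robin pairing of volumes to instances. A small Python sketch (with
hypothetical counts) of the resulting mapping:

```python
# Sketch of the element() wrap-around as used in the
# kdevops_volume_extra_att resource. Counts here are hypothetical.
num_boxes = 2
volumes_per_instance = 2
suffixes = ["b", "c", "d", "e", "f", "g", "h", "i", "j", "k"]

attachments = []
for idx in range(num_boxes * volumes_per_instance):
    # element(list, index) wraps: index % length(list)
    instance = idx % num_boxes
    suffix = suffixes[idx % len(suffixes)]
    attachments.append((instance, f"/dev/oracleoci/oraclevd{suffix}"))

# Round-robin result: instance 0 gets vdb and vdd,
# instance 1 gets vdc and vde.
```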
diff --git a/terraform/oci/vars.tf b/terraform/oci/vars.tf
index b02e79c597ec..077a9a4afdaa 100644
--- a/terraform/oci/vars.tf
+++ b/terraform/oci/vars.tf
@@ -70,6 +70,23 @@ variable "oci_subnet_ocid" {
default = ""
}
+variable "oci_volumes_enable_extra" {
+ description = "Create additional block volumes per instance"
+ default = false
+}
+
+variable "oci_volumes_per_instance" {
+ description = "The count of additional block volumes per instance"
+ type = number
+ default = 0
+}
+
+variable "oci_volumes_size" {
+ description = "The size of additional block volumes, in gibibytes"
+ type = number
+ default = 0
+}
+
variable "oci_data_volume_display_name" {
description = "Display name to use for the data volume"
default = "data"
--
2.48.1
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v1 05/13] guestfs: Set storage options consistently
2025-03-10 14:18 [PATCH v1 00/13] Block device provisioning for storage nodes cel
` (3 preceding siblings ...)
2025-03-10 14:18 ` [PATCH v1 04/13] terraform/OCI: " cel
@ 2025-03-10 14:18 ` cel
2025-03-10 14:18 ` [PATCH v1 06/13] playbooks: Add a role to create an LVM volume group cel
` (8 subsequent siblings)
13 siblings, 0 replies; 17+ messages in thread
From: cel @ 2025-03-10 14:18 UTC (permalink / raw)
To: Luis Chamberlain, Chandan Babu R, Jeff Layton; +Cc: kdevops, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
Clean up: The set-up for guestfs extra storage options is done
differently for each of the three libvirt storage choices. Ensure
that all three set the global variables consistently.
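A sketch of the invariant this change enforces: for any given Kconfig
choice, exactly one of the three drive-type variables is true (the
drive_vars() helper below is hypothetical, not kdevops code):

```python
# Sketch: with "output yaml", each drive-type Kconfig symbol lands in
# extra_vars.yaml directly; exactly one should be true per choice.
def drive_vars(choice):
    # choice is one of "nvme", "virtio", "ide"
    return {
        f"libvirt_extra_storage_drive_{name}": name == choice
        for name in ("nvme", "virtio", "ide")
    }

v = drive_vars("virtio")
# Only libvirt_extra_storage_drive_virtio is True; the other two
# variables are False.
```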
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
kconfigs/Kconfig.libvirt | 3 +++
playbooks/roles/gen_nodes/defaults/main.yml | 2 +-
scripts/gen-nodes.Makefile | 10 ----------
3 files changed, 4 insertions(+), 11 deletions(-)
diff --git a/kconfigs/Kconfig.libvirt b/kconfigs/Kconfig.libvirt
index e64db2b64c89..1ed967423096 100644
--- a/kconfigs/Kconfig.libvirt
+++ b/kconfigs/Kconfig.libvirt
@@ -528,6 +528,7 @@ choice
config LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
bool "NVMe"
+ output yaml
help
Use the QEMU NVMe driver for extra storage drives. We expect this to
be the common choice, as it could outperform the virtio driver.
@@ -537,6 +538,7 @@ config LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
config LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
bool "virtio"
+ output yaml
help
Use the QEMU virtio driver for extra storage drives. Use this if you
are seeing "NVMe timeout" issues when testing in a loop
@@ -545,6 +547,7 @@ config LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
config LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
bool "ide"
+ output yaml
help
Use the QEMU ide driver for extra storage drives. This is useful for
really old Linux distributions that lack the virtio backend driver.
diff --git a/playbooks/roles/gen_nodes/defaults/main.yml b/playbooks/roles/gen_nodes/defaults/main.yml
index 00971e293d12..8ff9b87993a7 100644
--- a/playbooks/roles/gen_nodes/defaults/main.yml
+++ b/playbooks/roles/gen_nodes/defaults/main.yml
@@ -64,7 +64,7 @@ libvirt_enable_cxl_demo_topo2: False
libvirt_enable_cxl_switch_topo1: False
libvirt_enable_cxl_dcd_topo1: False
libvirt_extra_drive_id_prefix: 'drv'
-libvirt_extra_storage_drive_nvme: True
+libvirt_extra_storage_drive_nvme: False
libvirt_extra_storage_drive_virtio: False
libvirt_extra_storage_drive_ide: False
libvirt_extra_storage_aio_mode: "native"
diff --git a/scripts/gen-nodes.Makefile b/scripts/gen-nodes.Makefile
index ce6b794f1fb1..8bee2db57591 100644
--- a/scripts/gen-nodes.Makefile
+++ b/scripts/gen-nodes.Makefile
@@ -48,16 +48,6 @@ GEN_NODES_EXTRA_ARGS += libvirt_storage_pool_name='$(subst ",,$(CONFIG_LIBVIRT_S
GEN_NODES_EXTRA_ARGS += libvirt_storage_pool_path='$(subst ",,$(CONFIG_KDEVOPS_STORAGE_POOL_PATH))'
endif
-ifeq (y,$(CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_IDE))
-GEN_NODES_EXTRA_ARGS += libvirt_extra_storage_drive_nvme='False'
-GEN_NODES_EXTRA_ARGS += libvirt_extra_storage_drive_ide='True'
-endif
-
-ifeq (y,$(CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO))
-GEN_NODES_EXTRA_ARGS += libvirt_extra_storage_drive_nvme='False'
-GEN_NODES_EXTRA_ARGS += libvirt_extra_storage_drive_virtio='True'
-endif
-
GEN_NODES_EXTRA_ARGS += libvirt_extra_storage_aio_mode='$(subst ",,$(CONFIG_LIBVIRT_AIO_MODE))'
GEN_NODES_EXTRA_ARGS += libvirt_extra_storage_aio_cache_mode='$(subst ",,$(CONFIG_LIBVIRT_AIO_CACHE_MODE))'
--
2.48.1

* [PATCH v1 06/13] playbooks: Add a role to create an LVM volume group
2025-03-10 14:18 [PATCH v1 00/13] Block device provisioning for storage nodes cel
` (4 preceding siblings ...)
2025-03-10 14:18 ` [PATCH v1 05/13] guestfs: Set storage options consistently cel
@ 2025-03-10 14:18 ` cel
2025-03-10 14:18 ` [PATCH v1 07/13] volume_group: Detect the /data partition directly cel
` (7 subsequent siblings)
13 siblings, 0 replies; 17+ messages in thread
From: cel @ 2025-03-10 14:18 UTC (permalink / raw)
To: Luis Chamberlain, Chandan Babu R, Jeff Layton; +Cc: kdevops, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
There are currently three playbooks that need to set up a volume
group: nfsd, smbd, and iscsi. All three need to steer their way
around the physical root and data partitions, in addition to
managing the differences between cloud providers.
Refactor (de-duplicate) the LVM-related tasks in these three
playbooks into a new role that can be shared and then later
updated to avoid already in-use physical block devices.
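The de-duplicated enumeration the new role performs can be sketched as
follows (the prefix and count values here are hypothetical):

```python
# Sketch of how the shared volume_group role builds its PV list from a
# device prefix and a count before handing it to community.general.lvg.
volume_device_prefix = "/dev/disk/by-id/virtio-kdevops"
volume_device_count = 3

physical_volumes = [
    f"{volume_device_prefix}{i}"
    for i in range(1, volume_device_count + 1)
]
pvs_arg = ",".join(physical_volumes)  # the "pvs" argument to lvg
```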
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
kconfigs/Kconfig.nfsd | 6 +--
playbooks/roles/iscsi/tasks/main.yml | 25 +++-------
playbooks/roles/iscsi/vars/Debian.yml | 1 -
playbooks/roles/iscsi/vars/RedHat.yml | 1 -
playbooks/roles/iscsi/vars/Suse.yml | 1 -
playbooks/roles/nfsd/defaults/main.yml | 1 -
playbooks/roles/nfsd/tasks/main.yml | 23 +++------
playbooks/roles/smbd/defaults/main.yml | 1 -
playbooks/roles/smbd/tasks/main.yml | 23 +++------
playbooks/roles/volume_group/README.md | 48 +++++++++++++++++++
.../roles/volume_group/defaults/main.yml | 2 +
playbooks/roles/volume_group/tasks/main.yml | 31 ++++++++++++
12 files changed, 105 insertions(+), 58 deletions(-)
create mode 100644 playbooks/roles/volume_group/README.md
create mode 100644 playbooks/roles/volume_group/defaults/main.yml
create mode 100644 playbooks/roles/volume_group/tasks/main.yml
diff --git a/kconfigs/Kconfig.nfsd b/kconfigs/Kconfig.nfsd
index d071f5fba278..60e9da1aba2d 100644
--- a/kconfigs/Kconfig.nfsd
+++ b/kconfigs/Kconfig.nfsd
@@ -67,14 +67,14 @@ config NFSD_LEASE_TIME
complete faster.
choice
- prompt "Local or external physical storage"
+ prompt "Persistent storage for exported file systems"
default NFSD_EXPORT_STORAGE_LOCAL
config NFSD_EXPORT_STORAGE_LOCAL
bool "Local"
help
- Exported file systems will reside on physical storage
- local to the NFS server itself.
+ Exported file systems will reside on block devices local
+ to the NFS server itself.
config NFSD_EXPORT_STORAGE_ISCSI
bool "iSCSI"
diff --git a/playbooks/roles/iscsi/tasks/main.yml b/playbooks/roles/iscsi/tasks/main.yml
index bbb49756f4da..2638bacc882b 100644
--- a/playbooks/roles/iscsi/tasks/main.yml
+++ b/playbooks/roles/iscsi/tasks/main.yml
@@ -18,24 +18,13 @@
name: "{{ iscsi_target_packages }}"
state: present
-- name: Initialize the list of local physical volumes
- ansible.builtin.set_fact:
- iscsi_lvm_pvs: []
-
-- name: Expand the list of local physical volumes
- ansible.builtin.set_fact:
- iscsi_lvm_pvs: "{{ iscsi_lvm_pvs + [iscsi_target_pv_prefix + item | string] }}"
- with_items: "{{ range(1, iscsi_target_pv_count + 1) }}"
- loop_control:
- label: "Adding {{ iscsi_target_pv_prefix + item | string }} ..."
-
-- name: Create LVM volume group {{ iscsi_target_vg_name }}
- become: true
- become_flags: 'su - -c'
- become_method: ansible.builtin.sudo
- community.general.lvg:
- vg: "{{ iscsi_target_vg_name }}"
- pvs: "{{ iscsi_lvm_pvs | join(',') }}"
+- name: Set up a volume group on local block devices
+ ansible.builtin.include_role:
+ name: volume_group
+ vars:
+ volume_group_name: "{{ iscsi_target_vg_name }}"
+ volume_device_prefix: "{{ iscsi_target_pv_prefix }}"
+ volume_device_count: "{{ iscsi_target_pv_count }}"
- name: Create a directory for storing iSCSI persistent reservations
become: true
diff --git a/playbooks/roles/iscsi/vars/Debian.yml b/playbooks/roles/iscsi/vars/Debian.yml
index 9571f1de98c2..e353c17ee568 100644
--- a/playbooks/roles/iscsi/vars/Debian.yml
+++ b/playbooks/roles/iscsi/vars/Debian.yml
@@ -1,5 +1,4 @@
---
iscsi_target_packages:
- - lvm2
- targetcli-fb
- sg3_utils
diff --git a/playbooks/roles/iscsi/vars/RedHat.yml b/playbooks/roles/iscsi/vars/RedHat.yml
index 6082344368d6..b5376613d55e 100644
--- a/playbooks/roles/iscsi/vars/RedHat.yml
+++ b/playbooks/roles/iscsi/vars/RedHat.yml
@@ -1,6 +1,5 @@
---
iscsi_target_packages:
- - lvm2
- targetcli
- sg3_utils
diff --git a/playbooks/roles/iscsi/vars/Suse.yml b/playbooks/roles/iscsi/vars/Suse.yml
index 63393d988f1e..1b1c0b25a8aa 100644
--- a/playbooks/roles/iscsi/vars/Suse.yml
+++ b/playbooks/roles/iscsi/vars/Suse.yml
@@ -1,6 +1,5 @@
---
iscsi_target_packages:
- - lvm2
- targetcli-fb
- sg3_utils
diff --git a/playbooks/roles/nfsd/defaults/main.yml b/playbooks/roles/nfsd/defaults/main.yml
index 271d2d1d8912..ccdee468bd02 100644
--- a/playbooks/roles/nfsd/defaults/main.yml
+++ b/playbooks/roles/nfsd/defaults/main.yml
@@ -1,7 +1,6 @@
# SPDX-License-Identifier: GPL-2.0+
---
# Our sensible defaults for the nfsd role.
-nfsd_lvm_pvs: []
nfsd_export_device_prefix: ""
nfsd_export_device_count: 0
nfsd_export_label: "export"
diff --git a/playbooks/roles/nfsd/tasks/main.yml b/playbooks/roles/nfsd/tasks/main.yml
index 7e18c3813900..144ecd86686e 100644
--- a/playbooks/roles/nfsd/tasks/main.yml
+++ b/playbooks/roles/nfsd/tasks/main.yml
@@ -30,22 +30,13 @@
- nfsd_export_storage_iscsi|bool
- nfsd_export_fstype != "tmpfs"
-- name: Build string of devices to use as PVs
- set_fact:
- nfsd_lvm_pvs: "{{ nfsd_lvm_pvs + [ nfsd_export_device_prefix + item|string ] }}"
- with_items: "{{ range(1, nfsd_export_device_count + 1) }}"
- loop_control:
- label: "Physical volume: {{ nfsd_export_device_prefix + item|string }}"
- when:
- - nfsd_export_storage_local|bool
-
-- name: Create a new LVM VG
- become: yes
- become_flags: 'su - -c'
- become_method: sudo
- community.general.lvg:
- vg: "exports"
- pvs: "{{ nfsd_lvm_pvs | join(',') }}"
+- name: Set up a volume group on local block devices
+ ansible.builtin.include_role:
+ name: volume_group
+ vars:
+ volume_group_name: "exports"
+ volume_device_prefix: "{{ nfsd_export_device_prefix }}"
+ volume_device_count: "{{ nfsd_export_device_count }}"
when:
- nfsd_export_storage_local|bool
- nfsd_export_fstype != "tmpfs"
diff --git a/playbooks/roles/smbd/defaults/main.yml b/playbooks/roles/smbd/defaults/main.yml
index d75cc0b4ce75..4161224f27b9 100644
--- a/playbooks/roles/smbd/defaults/main.yml
+++ b/playbooks/roles/smbd/defaults/main.yml
@@ -1,5 +1,4 @@
---
-smbd_lvm_pvs: []
smbd_share_device_prefix: ""
smbd_share_device_count: 0
smbd_share_label: "share"
diff --git a/playbooks/roles/smbd/tasks/main.yml b/playbooks/roles/smbd/tasks/main.yml
index bb19009606f7..486ccdc613bf 100644
--- a/playbooks/roles/smbd/tasks/main.yml
+++ b/playbooks/roles/smbd/tasks/main.yml
@@ -22,22 +22,13 @@
group: root
mode: 0644
-- name: Build string of devices to use as PVs
- set_fact:
- smbd_lvm_pvs: "{{ smbd_lvm_pvs + [ smbd_share_device_prefix + item|string ] }}"
- with_items: "{{ range(1, smbd_share_device_count + 1) }}"
-
-- name: Print the PV list
- ansible.builtin.debug:
- var: smbd_lvm_pvs
-
-- name: Create a new LVM VG
- become: yes
- become_flags: 'su - -c'
- become_method: sudo
- community.general.lvg:
- vg: "shares"
- pvs: "{{ smbd_lvm_pvs | join(',') }}"
+- name: Set up a volume group on local block devices
+ ansible.builtin.include_role:
+ name: volume_group
+ vars:
+ volume_group_name: "shares"
+ volume_device_prefix: "{{ smbd_share_device_prefix }}"
+ volume_device_count: "{{ smbd_share_device_count }}"
- name: Create {{ smbd_share_path }}
become: yes
diff --git a/playbooks/roles/volume_group/README.md b/playbooks/roles/volume_group/README.md
new file mode 100644
index 000000000000..56cb7c55f9f4
--- /dev/null
+++ b/playbooks/roles/volume_group/README.md
@@ -0,0 +1,48 @@
+volume_group
+============
+
+The volume_group role creates an LVM volume group
+on a target node using unused block devices.
+
+Requirements
+------------
+
+The Ansible community.general collection must be installed on the
+control host.
+
+Role Variables
+--------------
+
+ * volume_group_name: The name for the new volume group (string)
+ * volume_device_prefix: The pathname prefix for block devices to
+ consider for the new volume group (string)
+ * volume_device_count: The number of block devices to include in
+ the new volume group (int)
+
+Dependencies
+------------
+
+None.
+
+Example Playbook
+----------------
+
+Below is an example playbook task:
+
+```
+- name: Create a volume group for NFSD exports
+ ansible.builtin.include_role:
+ name: volume_group
+ vars:
+ volume_group_name: "exports"
+ volume_device_prefix: "/dev/disk/by-id/virtio-kdevops"
+ volume_device_count: 3
+```
+
+For further examples refer to one of this role's users, the
+[kdevops](https://github.com/linux-kdevops/kdevops) project.
+
+License
+-------
+
+copyleft-next-0.3.1
diff --git a/playbooks/roles/volume_group/defaults/main.yml b/playbooks/roles/volume_group/defaults/main.yml
new file mode 100644
index 000000000000..68f3fb3f5775
--- /dev/null
+++ b/playbooks/roles/volume_group/defaults/main.yml
@@ -0,0 +1,2 @@
+---
+physical_volumes: []
diff --git a/playbooks/roles/volume_group/tasks/main.yml b/playbooks/roles/volume_group/tasks/main.yml
new file mode 100644
index 000000000000..086c86454893
--- /dev/null
+++ b/playbooks/roles/volume_group/tasks/main.yml
@@ -0,0 +1,31 @@
+---
+- name: Gather hardware facts
+ ansible.builtin.gather_facts:
+ gather_subset:
+ - "!all"
+ - "!min"
+ - "hardware"
+
+- name: Install dependencies for LVM support
+ become: true
+ become_flags: 'su - -c'
+ become_method: ansible.builtin.sudo
+ ansible.builtin.package:
+ name:
+ - lvm2
+ state: present
+
+- name: Enumerate block devices to provision as physical volumes
+ ansible.builtin.set_fact:
+ physical_volumes: "{{ physical_volumes + [volume_device_prefix + item | string] }}"
+ with_items: "{{ range(1, volume_device_count + 1) }}"
+ loop_control:
+ label: "Block device: {{ volume_device_prefix + item | string }}"
+
+- name: Create an LVM Volume Group
+ become: true
+ become_flags: "su - -c"
+ become_method: ansible.builtin.sudo
+ community.general.lvg:
+ vg: "{{ volume_group_name }}"
+ pvs: "{{ physical_volumes | join(',') }}"
--
2.48.1
* [PATCH v1 07/13] volume_group: Detect the /data partition directly
2025-03-10 14:18 [PATCH v1 00/13] Block device provisioning for storage nodes cel
` (5 preceding siblings ...)
2025-03-10 14:18 ` [PATCH v1 06/13] playbooks: Add a role to create an LVM volume group cel
@ 2025-03-10 14:18 ` cel
2025-03-10 14:18 ` [PATCH v1 08/13] volume_group: Prepare to support cloud providers cel
` (6 subsequent siblings)
13 siblings, 0 replies; 17+ messages in thread
From: cel @ 2025-03-10 14:18 UTC (permalink / raw)
To: Luis Chamberlain, Chandan Babu R, Jeff Layton; +Cc: kdevops, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
The volume_group role currently runs during "bringup", but the
/data partition is set up later by subsequent make targets (eg, when
the test kernel is built or when workflows are set up).
This means the volume_group role has to avoid using one of the block
devices so that later steps can put the /data partition on the
skipped device.
Instead of counting the number of available block devices and just
skipping one of them, explicitly avoid using the device listed in
{{ data_device }}. This is a more specific and reliable check.
Works only for guestfs. Support for terraform providers is next.
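
The selection done by the new tasks/guestfs.yml (match a by-id
pattern, then exclude the /data device and partition links) behaves
roughly like this sketch; the device names below are hypothetical:

```python
import fnmatch

# Sketch of the filtering ansible.builtin.find performs in
# tasks/guestfs.yml with its patterns/excludes arguments.
candidates = [
    "virtio-kdevops0",        # reserved for /data, excluded by name
    "virtio-kdevops1",
    "virtio-kdevops1-part1",  # partition link, excluded by "*-part?"
    "virtio-kdevops2",
]
data_device = "/dev/disk/by-id/virtio-kdevops0"
pattern = "virtio-kdevops*"
excludes = [data_device.rsplit("/", 1)[-1], "*_?", "*-part?"]

usable = [
    name for name in candidates
    if fnmatch.fnmatch(name, pattern)
    and not any(fnmatch.fnmatch(name, ex) for ex in excludes)
]
# usable keeps only the unclaimed whole-disk links
```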
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
kconfigs/Kconfig.iscsi | 22 -------
kconfigs/Kconfig.nfsd | 21 -------
kconfigs/Kconfig.smbd | 17 ------
playbooks/roles/iscsi/defaults/main.yml | 2 -
playbooks/roles/iscsi/tasks/main.yml | 2 -
playbooks/roles/nfsd/defaults/main.yml | 2 -
playbooks/roles/nfsd/tasks/main.yml | 2 -
playbooks/roles/smbd/defaults/main.yml | 2 -
playbooks/roles/smbd/tasks/main.yml | 2 -
playbooks/roles/volume_group/README.md | 6 --
.../roles/volume_group/defaults/main.yml | 1 +
.../roles/volume_group/tasks/guestfs.yml | 59 +++++++++++++++++++
playbooks/roles/volume_group/tasks/main.yml | 17 ++++--
scripts/iscsi.Makefile | 2 -
scripts/nfsd.Makefile | 2 -
scripts/smbd.Makefile | 2 -
16 files changed, 71 insertions(+), 90 deletions(-)
create mode 100644 playbooks/roles/volume_group/tasks/guestfs.yml
diff --git a/kconfigs/Kconfig.iscsi b/kconfigs/Kconfig.iscsi
index d95c82ac34c7..2c40372621b8 100644
--- a/kconfigs/Kconfig.iscsi
+++ b/kconfigs/Kconfig.iscsi
@@ -19,26 +19,4 @@ config ISCSI_TARGET_WWN
If you do not know what this means, the default is safe
to use.
-config ISCSI_TARGET_PV_PREFIX
- string "Prefix to use for iSCSI target LVM physical volume names"
- default "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
- default "/dev/disk/by-id/virtio-kdevops" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
- default "/dev/disk/by-id/ata-QEMU_HARDDISK_kdevops" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
- default ""
- help
- This string is the prefix for LVM physical volume names.
-
- If you do not know what this means, the default is safe
- to use.
-
-config ISCSI_TARGET_PV_COUNT
- int "Number of devices to add as LVM physical volumes"
- default 3 if LIBVIRT
- help
- The number of physical devices on the iSCSI target node to
- dedicate as LVM physical volumes.
-
- If you do not know what this means, the default is safe
- to use.
-
endif
diff --git a/kconfigs/Kconfig.nfsd b/kconfigs/Kconfig.nfsd
index 60e9da1aba2d..69fc9d2e38d8 100644
--- a/kconfigs/Kconfig.nfsd
+++ b/kconfigs/Kconfig.nfsd
@@ -85,27 +85,6 @@ config NFSD_EXPORT_STORAGE_ISCSI
endchoice
-if NFSD_EXPORT_STORAGE_LOCAL
-
-config NFSD_EXPORT_DEVICE_PREFIX
- string "The device prefix to use for LVM PVs"
- default "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
- default "/dev/disk/by-id/virtio-kdevops" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
- default "/dev/disk/by-id/ata-QEMU_HARDDISK_kdevops" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
- default ""
- help
- To set up nfsd for testing, we give it filesystems to export. This string
- will be the prefix for the block devices used as PVs for LVM.
-
-config NFSD_EXPORT_DEVICE_COUNT
- int "Number of devices to add as LVM PVs"
- default 3 if LIBVIRT
- help
- The number of disk devices to dedicate as LVM PVs. In general, we
- avoid using device index 0 as that is used for /data.
-
-endif
-
endmenu
endif
diff --git a/kconfigs/Kconfig.smbd b/kconfigs/Kconfig.smbd
index 251327d67376..b7c33d180abd 100644
--- a/kconfigs/Kconfig.smbd
+++ b/kconfigs/Kconfig.smbd
@@ -11,23 +11,6 @@ if KDEVOPS_SETUP_SMBD
menu "Configure the Samba SMB server"
-config SMBD_SHARE_DEVICE_PREFIX
- string "The device prefix to use for LVM PVs"
- default "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
- default "/dev/disk/by-id/virtio-kdevops" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
- default "/dev/disk/by-id/ata-QEMU_HARDDISK_kdevops" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
- default ""
- help
- To set up smbd for testing, we give it filesystems to share. This string
- will be the prefix for the block devices used as PVs for LVM.
-
-config SMBD_SHARE_DEVICE_COUNT
- int "Number of devices to add as LVM PVs"
- default 3 if LIBVIRT
- help
- The number of disk devices to dedicate as LVM PVs. In general, we
- avoid using device index 0 as that is used for /data.
-
choice
prompt "Type of filesystem to share"
default SMBD_SHARE_FSTYPE_BTRFS
diff --git a/playbooks/roles/iscsi/defaults/main.yml b/playbooks/roles/iscsi/defaults/main.yml
index 5219a2f3ba30..3617ac0d333a 100644
--- a/playbooks/roles/iscsi/defaults/main.yml
+++ b/playbooks/roles/iscsi/defaults/main.yml
@@ -1,6 +1,4 @@
---
# Our sensible defaults for the iscsi role.
iscsi_target_hostname: "{{ kdevops_host_prefix }}-iscsi"
-iscsi_target_pv_prefix: ""
-iscsi_target_pv_count: 0
iscsi_target_vg_name: "iscsi_luns"
diff --git a/playbooks/roles/iscsi/tasks/main.yml b/playbooks/roles/iscsi/tasks/main.yml
index 2638bacc882b..a95351cca5a7 100644
--- a/playbooks/roles/iscsi/tasks/main.yml
+++ b/playbooks/roles/iscsi/tasks/main.yml
@@ -23,8 +23,6 @@
name: volume_group
vars:
volume_group_name: "{{ iscsi_target_vg_name }}"
- volume_device_prefix: "{{ iscsi_target_pv_prefix }}"
- volume_device_count: "{{ iscsi_target_pv_count }}"
- name: Create a directory for storing iSCSI persistent reservations
become: true
diff --git a/playbooks/roles/nfsd/defaults/main.yml b/playbooks/roles/nfsd/defaults/main.yml
index ccdee468bd02..788d26224266 100644
--- a/playbooks/roles/nfsd/defaults/main.yml
+++ b/playbooks/roles/nfsd/defaults/main.yml
@@ -1,8 +1,6 @@
# SPDX-License-Identifier: GPL-2.0+
---
# Our sensible defaults for the nfsd role.
-nfsd_export_device_prefix: ""
-nfsd_export_device_count: 0
nfsd_export_label: "export"
nfsd_export_fs_opts: ""
nfsd_lease_time: "90"
diff --git a/playbooks/roles/nfsd/tasks/main.yml b/playbooks/roles/nfsd/tasks/main.yml
index 144ecd86686e..21446f224a08 100644
--- a/playbooks/roles/nfsd/tasks/main.yml
+++ b/playbooks/roles/nfsd/tasks/main.yml
@@ -35,8 +35,6 @@
name: volume_group
vars:
volume_group_name: "exports"
- volume_device_prefix: "{{ nfsd_export_device_prefix }}"
- volume_device_count: "{{ nfsd_export_device_count }}"
when:
- nfsd_export_storage_local|bool
- nfsd_export_fstype != "tmpfs"
diff --git a/playbooks/roles/smbd/defaults/main.yml b/playbooks/roles/smbd/defaults/main.yml
index 4161224f27b9..cbb4974ed3bc 100644
--- a/playbooks/roles/smbd/defaults/main.yml
+++ b/playbooks/roles/smbd/defaults/main.yml
@@ -1,4 +1,2 @@
---
-smbd_share_device_prefix: ""
-smbd_share_device_count: 0
smbd_share_label: "share"
diff --git a/playbooks/roles/smbd/tasks/main.yml b/playbooks/roles/smbd/tasks/main.yml
index 486ccdc613bf..926358b3f3f6 100644
--- a/playbooks/roles/smbd/tasks/main.yml
+++ b/playbooks/roles/smbd/tasks/main.yml
@@ -27,8 +27,6 @@
name: volume_group
vars:
volume_group_name: "shares"
- volume_device_prefix: "{{ smbd_share_device_prefix }}"
- volume_device_count: "{{ smbd_share_device_count }}"
- name: Create {{ smbd_share_path }}
become: yes
diff --git a/playbooks/roles/volume_group/README.md b/playbooks/roles/volume_group/README.md
index 56cb7c55f9f4..cd1ab48ce3ba 100644
--- a/playbooks/roles/volume_group/README.md
+++ b/playbooks/roles/volume_group/README.md
@@ -14,10 +14,6 @@ Role Variables
--------------
* volume_group_name: The name for the new volume group (string)
- * volume_device_prefix: The pathname prefix for block devices to
- consider for the new volume group (string)
- * volume_device_count: The number of block devices to include in
- the new volume group (int)
Dependencies
------------
@@ -35,8 +31,6 @@ Below is an example playbook task:
name: volume_group
vars:
volume_group_name: "exports"
- volume_device_prefix: "/dev/disk/by-id/virtio-kdevops"
- volume_device_count: 3
```
For further examples refer to one of this role's users, the
diff --git a/playbooks/roles/volume_group/defaults/main.yml b/playbooks/roles/volume_group/defaults/main.yml
index 68f3fb3f5775..b7707cab59d5 100644
--- a/playbooks/roles/volume_group/defaults/main.yml
+++ b/playbooks/roles/volume_group/defaults/main.yml
@@ -1,2 +1,3 @@
---
physical_volumes: []
+kdevops_enable_guestfs: false
diff --git a/playbooks/roles/volume_group/tasks/guestfs.yml b/playbooks/roles/volume_group/tasks/guestfs.yml
new file mode 100644
index 000000000000..a5536159a1b6
--- /dev/null
+++ b/playbooks/roles/volume_group/tasks/guestfs.yml
@@ -0,0 +1,59 @@
+---
+# Select unused block devices under /dev/disk/by-id/ .
+#
+# Avoid these two block devices:
+# 1. The block device where the root partition resides. For
+# guestfs, this device is /dev/vda and is not listed under
+# /dev/disk/by-id/
+# 2. The block device where the /data partition resides. For
+# guestfs, this device is named by a path to a symlink
+# under /dev/disk/by-id/
+#
+
+- name: Set the NVMe device search pattern
+ ansible.builtin.set_fact:
+ by_id_pattern: "nvme-QEMU_NVMe_Ctrl_kdevops*"
+ when:
+ - libvirt_extra_storage_drive_nvme is defined
+ - libvirt_extra_storage_drive_nvme|bool
+
+- name: Set the virtio block device search pattern
+ ansible.builtin.set_fact:
+ by_id_pattern: "virtio-kdevops*"
+ when:
+ - libvirt_extra_storage_drive_virtio is defined
+ - libvirt_extra_storage_drive_virtio|bool
+
+- name: Set the IDE device search pattern
+ ansible.builtin.set_fact:
+ by_id_pattern: "ata-QEMU_HARDDISK_kdevops*"
+ when:
+ - libvirt_extra_storage_drive_ide is defined
+ - libvirt_extra_storage_drive_ide|bool
+
+- name: Verify there are block devices to search
+ ansible.builtin.fail:
+ msg: No supported block devices are available for the volume group.
+ when:
+ - by_id_pattern is not defined
+
+- name: Show the pathname of the data device
+ ansible.builtin.debug:
+ msg: "Reserved device for /data: {{ data_device }}"
+
+- name: Discover usable block devices
+ ansible.builtin.find:
+ paths: /dev/disk/by-id
+ file_type: link
+ patterns: "{{ by_id_pattern }}"
+ excludes: "{{ data_device | basename }},*_?,*-part?"
+ register: device_ids
+ failed_when:
+ - device_ids.failed or device_ids.matched == 0
+
+- name: Build a list of block devices to provision as PVs
+ ansible.builtin.set_fact:
+ physical_volumes: "{{ physical_volumes + [item.path] }}"
+ loop: "{{ device_ids.files }}"
+ loop_control:
+ label: "Block device: {{ item.path }}"
diff --git a/playbooks/roles/volume_group/tasks/main.yml b/playbooks/roles/volume_group/tasks/main.yml
index 086c86454893..bc0fbcd8c578 100644
--- a/playbooks/roles/volume_group/tasks/main.yml
+++ b/playbooks/roles/volume_group/tasks/main.yml
@@ -15,12 +15,17 @@
- lvm2
state: present
-- name: Enumerate block devices to provision as physical volumes
- ansible.builtin.set_fact:
- physical_volumes: "{{ physical_volumes + [volume_device_prefix + item | string] }}"
- with_items: "{{ range(1, volume_device_count + 1) }}"
- loop_control:
- label: "Block device: {{ volume_device_prefix + item | string }}"
+- name: Enumerate block devices on the target nodes
+ ansible.builtin.include_tasks:
+ file: "guestfs.yml"
+ when:
+ - kdevops_enable_guestfs|bool
+
+- name: Verify there are remaining candidates to use for physical volumes
+ ansible.builtin.fail:
+ msg: No local block devices are available for an LVM volume group.
+ when:
+ - physical_volumes|length == 0
- name: Create an LVM Volume Group
become: true
diff --git a/scripts/iscsi.Makefile b/scripts/iscsi.Makefile
index 9ea524fd4417..893183721dfc 100644
--- a/scripts/iscsi.Makefile
+++ b/scripts/iscsi.Makefile
@@ -1,8 +1,6 @@
ifeq (y,$(CONFIG_KDEVOPS_ENABLE_ISCSI))
ISCSI_EXTRA_ARGS += iscsi_target_wwn='$(subst ",,$(CONFIG_ISCSI_TARGET_WWN))'
-ISCSI_EXTRA_ARGS += iscsi_target_pv_prefix='$(subst ",,$(CONFIG_ISCSI_TARGET_PV_PREFIX))'
-ISCSI_EXTRA_ARGS += iscsi_target_pv_count='$(subst ",,$(CONFIG_ISCSI_TARGET_PV_COUNT))'
ISCSI_EXTRA_ARGS += kdevops_enable_iscsi=true
ANSIBLE_EXTRA_ARGS += $(ISCSI_EXTRA_ARGS)
diff --git a/scripts/nfsd.Makefile b/scripts/nfsd.Makefile
index 959cc4b7652d..39e4a817421a 100644
--- a/scripts/nfsd.Makefile
+++ b/scripts/nfsd.Makefile
@@ -2,8 +2,6 @@ ifeq (y,$(CONFIG_KDEVOPS_SETUP_NFSD))
ifeq (y,$(CONFIG_NFSD_EXPORT_STORAGE_LOCAL))
NFSD_EXTRA_ARGS += nfsd_export_storage_local=true
-NFSD_EXTRA_ARGS += nfsd_export_device_prefix='$(subst ",,$(CONFIG_NFSD_EXPORT_DEVICE_PREFIX))'
-NFSD_EXTRA_ARGS += nfsd_export_device_count='$(subst ",,$(CONFIG_NFSD_EXPORT_DEVICE_COUNT))'
endif
ifeq (y,$(CONFIG_NFSD_EXPORT_STORAGE_ISCSI))
diff --git a/scripts/smbd.Makefile b/scripts/smbd.Makefile
index ae23497d29f0..77d7727b137a 100644
--- a/scripts/smbd.Makefile
+++ b/scripts/smbd.Makefile
@@ -1,7 +1,5 @@
ifeq (y,$(CONFIG_KDEVOPS_SETUP_SMBD))
-SMBD_EXTRA_ARGS += smbd_share_device_prefix='$(subst ",,$(CONFIG_SMBD_SHARE_DEVICE_PREFIX))'
-SMBD_EXTRA_ARGS += smbd_share_device_count='$(subst ",,$(CONFIG_SMBD_SHARE_DEVICE_COUNT))'
SMBD_EXTRA_ARGS += smbd_share_fstype='$(subst ",,$(CONFIG_SMBD_SHARE_FSTYPE))'
SMBD_EXTRA_ARGS += smbd_share_path='$(subst ",,$(CONFIG_SMBD_SHARE_PATH))'
SMBD_EXTRA_ARGS += smb_root_pw='$(subst ",,$(CONFIG_SMB_ROOT_PW))'
--
2.48.1
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v1 08/13] volume_group: Prepare to support cloud providers
2025-03-10 14:18 [PATCH v1 00/13] Block device provisioning for storage nodes cel
` (6 preceding siblings ...)
2025-03-10 14:18 ` [PATCH v1 07/13] volume_group: Detect the /data partition directly cel
@ 2025-03-10 14:18 ` cel
2025-03-10 14:18 ` [PATCH v1 09/13] volume_group: Create volume group on terraform/AWS nodes cel
` (5 subsequent siblings)
13 siblings, 0 replies; 17+ messages in thread
From: cel @ 2025-03-10 14:18 UTC (permalink / raw)
To: Luis Chamberlain, Chandan Babu R, Jeff Layton; +Cc: kdevops, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
Each cloud provider takes a different approach to extra block
storage. The provider-specific steps will be included in separate
.yml files.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
playbooks/roles/volume_group/defaults/main.yml | 1 +
playbooks/roles/volume_group/tasks/main.yml | 6 ++++++
2 files changed, 7 insertions(+)
diff --git a/playbooks/roles/volume_group/defaults/main.yml b/playbooks/roles/volume_group/defaults/main.yml
index b7707cab59d5..a092c5149bcc 100644
--- a/playbooks/roles/volume_group/defaults/main.yml
+++ b/playbooks/roles/volume_group/defaults/main.yml
@@ -1,3 +1,4 @@
---
physical_volumes: []
kdevops_enable_guestfs: false
+kdevops_enable_terraform: false
diff --git a/playbooks/roles/volume_group/tasks/main.yml b/playbooks/roles/volume_group/tasks/main.yml
index bc0fbcd8c578..4cafe15017fe 100644
--- a/playbooks/roles/volume_group/tasks/main.yml
+++ b/playbooks/roles/volume_group/tasks/main.yml
@@ -21,6 +21,12 @@
when:
- kdevops_enable_guestfs|bool
+- name: Enumerate block devices on the target nodes
+ ansible.builtin.include_tasks:
+ file: "terraform/{{ kdevops_terraform_provider }}.yml"
+ when:
+ - kdevops_enable_terraform|bool
+
- name: Verify there are remaining candidates to use for physical volumes
ansible.builtin.fail:
msg: No local block devices are available for an LVM volume group.
--
2.48.1
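The include_tasks dispatch above amounts to constructing a task-file
path from the provider name. A minimal shell sketch of that lookup
(the helper name is made up for illustration; the real selection is
done by Ansible's templating of kdevops_terraform_provider):

```shell
# Build the provider-specific task file path, mirroring the
# "terraform/{{ kdevops_terraform_provider }}.yml" template above.
provider_taskfile() {
    printf 'terraform/%s.yml\n' "$1"
}

provider_taskfile aws     # -> terraform/aws.yml
provider_taskfile azure   # -> terraform/azure.yml
```

Adding a new cloud provider then only requires dropping a new
terraform/<provider>.yml into the role's tasks directory.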
* [PATCH v1 09/13] volume_group: Create volume group on terraform/AWS nodes
2025-03-10 14:18 [PATCH v1 00/13] Block device provisioning for storage nodes cel
` (7 preceding siblings ...)
2025-03-10 14:18 ` [PATCH v1 08/13] volume_group: Prepare to support cloud providers cel
@ 2025-03-10 14:18 ` cel
2025-03-10 14:18 ` [PATCH v1 10/13] volume_group: Create a volume group on Azure nodes cel
` (4 subsequent siblings)
13 siblings, 0 replies; 17+ messages in thread
From: cel @ 2025-03-10 14:18 UTC (permalink / raw)
To: Luis Chamberlain, Chandan Babu R, Jeff Layton; +Cc: kdevops, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
.../volume_group/tasks/terraform/aws.yml | 54 +++++++++++++++++++
1 file changed, 54 insertions(+)
create mode 100644 playbooks/roles/volume_group/tasks/terraform/aws.yml
diff --git a/playbooks/roles/volume_group/tasks/terraform/aws.yml b/playbooks/roles/volume_group/tasks/terraform/aws.yml
new file mode 100644
index 000000000000..e7cca3e259b0
--- /dev/null
+++ b/playbooks/roles/volume_group/tasks/terraform/aws.yml
@@ -0,0 +1,54 @@
+---
+#
+# To guarantee idempotency, these steps have to generate the exact
+# same physical_volumes list every time they are run.
+#
+# Skip the block device on which the root filesystem resides, and
+# skip the device that is to be used for /data.
+#
+# On AWS, normally the root device is /dev/nvme0n1 and the data
+# device is /dev/nvme1n1. However, this is not always the case:
+# block volumes can be attached to an instance in any order, thus
+# may appear as any device named /dev/nvmeNn1.
+#
+# So, extract the block device names, which should remain fixed for
+# the lifetime of the instance and its block devices. Use these to
+# avoid using the root and data devices as LVM physical volumes.
+#
+
+- name: Gather AWS instance information about the target node
+ delegate_to: localhost
+ amazon.aws.ec2_instance_info:
+ region: "{{ terraform_aws_region }}"
+ filters:
+ "tag:Name": "{{ inventory_hostname }}"
+ instance-state-name: ["running"]
+ register: instance_info
+
+# bdm is a list of dictionaries -- one dictionary per block device
+- name: Extract the block device mappings dictionary
+ ansible.builtin.set_fact:
+ bdm: "{{ instance_info.instances[0].block_device_mappings }}"
+
+- name: Discover the root device
+ ansible.builtin.set_fact:
+ root_ebs_volume: "{{ bdm | selectattr('device_name', 'match', '/dev/sda1') | first }}"
+
+# FIXME: Stuff "/dev/sdf" into the data_device variable for AWS
+- name: Discover the data device
+ ansible.builtin.set_fact:
+ data_ebs_volume: "{{ bdm | selectattr('device_name', 'match', '/dev/sdf') | first }}"
+
+- name: Add unused EBS volumes to the volume list
+ vars:
+ root_volume_id: "{{ root_ebs_volume.ebs.volume_id | string | regex_replace('-', '') }}"
+ data_volume_id: "{{ data_ebs_volume.ebs.volume_id | string | regex_replace('-', '') }}"
+ ansible.builtin.set_fact:
+ physical_volumes: "{{ physical_volumes + ['/dev/' + item.key] }}"
+ when:
+ - item.value.model == "Amazon Elastic Block Store"
+ - item.value.serial != root_volume_id
+ - item.value.serial != data_volume_id
+ loop_control:
+ label: "Adding block device: /dev/{{ item.key }}"
+ with_dict: "{{ ansible_devices }}"
--
2.48.1
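On Nitro-based AWS instances, the NVMe controller serial reported for
an EBS volume is the volume ID with its hyphen removed, which is what
the regex_replace('-', '') filters above rely on when matching against
/sys/block/nvmeXn1/device/serial. A rough shell sketch of that
normalization (the helper name is fabricated for illustration):

```shell
# Normalize an EBS volume ID the way the playbook's
# regex_replace('-', '') filter does, so it can be compared against
# the serial Linux reports for the NVMe namespace.
ebs_serial() {
    printf '%s\n' "$1" | tr -d '-'
}

ebs_serial vol-0123456789abcdef0   # -> vol0123456789abcdef0
```

Because the volume ID is stable for the lifetime of the volume, this
comparison survives the device renaming that can happen across
instance reboots.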
* [PATCH v1 10/13] volume_group: Create a volume group on Azure nodes
2025-03-10 14:18 [PATCH v1 00/13] Block device provisioning for storage nodes cel
` (8 preceding siblings ...)
2025-03-10 14:18 ` [PATCH v1 09/13] volume_group: Create volume group on terraform/AWS nodes cel
@ 2025-03-10 14:18 ` cel
2025-03-10 14:18 ` [PATCH v1 11/13] volume_group: Create a volume group on GCE nodes cel
` (3 subsequent siblings)
13 siblings, 0 replies; 17+ messages in thread
From: cel @ 2025-03-10 14:18 UTC (permalink / raw)
To: Luis Chamberlain, Chandan Babu R, Jeff Layton; +Cc: kdevops, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
.../volume_group/tasks/terraform/azure.yml | 40 +++++++++++++++++++
1 file changed, 40 insertions(+)
create mode 100644 playbooks/roles/volume_group/tasks/terraform/azure.yml
diff --git a/playbooks/roles/volume_group/tasks/terraform/azure.yml b/playbooks/roles/volume_group/tasks/terraform/azure.yml
new file mode 100644
index 000000000000..698b4925a327
--- /dev/null
+++ b/playbooks/roles/volume_group/tasks/terraform/azure.yml
@@ -0,0 +1,40 @@
+---
+#
+# To guarantee idempotency, these steps have to generate the exact
+# same physical_volumes list every time they are run.
+#
+# Skip the block devices on which the root and temporary file
+# systems reside. These devices appear under /dev/disk/cloud/ .
+#
+# Also skip the device to be used for the /data file system.
+#
+
+- name: Detect the root device
+ ansible.builtin.stat:
+ path: "/dev/disk/cloud/azure_root"
+ register: stat_output
+
+- name: Save the name of the root device
+ ansible.builtin.set_fact:
+ root_device: "{{ stat_output.stat.lnk_source.split('/dev/').1 }}"
+
+- name: Detect the temporary device
+ ansible.builtin.stat:
+ path: "/dev/disk/cloud/azure_resource"
+ register: stat_output
+
+- name: Save the name of the temporary device
+ ansible.builtin.set_fact:
+ tmp_device: "{{ stat_output.stat.lnk_source.split('/dev/').1 }}"
+
+- name: Add unused extra managed disks to the volume list
+ ansible.builtin.set_fact:
+ physical_volumes: "{{ physical_volumes + ['/dev/' + item.key] }}"
+ when:
+ - item.value.model == "Virtual Disk"
+ - root_device != item.key
+ - tmp_device != item.key
+ - data_device != "/dev/" + item.key
+ loop_control:
+ label: "Adding block device: /dev/{{ item.key }}"
+ with_dict: "{{ ansible_devices }}"
--
2.48.1
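The stat / lnk_source / split('/dev/') pairing above is just resolving
a stable symlink to its kernel device name. An equivalent sketch in
shell, exercised on a throwaway symlink rather than a real Azure node:

```shell
# Resolve a /dev/disk/cloud-style symlink to its target's base name,
# e.g. /dev/disk/cloud/azure_root -> "sda".
resolve_cloud_device() {
    basename "$(readlink -f "$1")"
}

# Demonstrate with a fabricated symlink; on a real Azure node the
# argument would be /dev/disk/cloud/azure_root.
tmp=$(mktemp -d)
mkdir -p "$tmp/dev"
touch "$tmp/dev/sdc"
ln -s "$tmp/dev/sdc" "$tmp/azure_root"
resolve_cloud_device "$tmp/azure_root"   # -> sdc
rm -rf "$tmp"
```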
* [PATCH v1 11/13] volume_group: Create a volume group on GCE nodes
2025-03-10 14:18 [PATCH v1 00/13] Block device provisioning for storage nodes cel
` (9 preceding siblings ...)
2025-03-10 14:18 ` [PATCH v1 10/13] volume_group: Create a volume group on Azure nodes cel
@ 2025-03-10 14:18 ` cel
2025-03-10 14:18 ` [PATCH v1 12/13] volume_group: Create a volume group on OCI nodes cel
` (2 subsequent siblings)
13 siblings, 0 replies; 17+ messages in thread
From: cel @ 2025-03-10 14:18 UTC (permalink / raw)
To: Luis Chamberlain, Chandan Babu R, Jeff Layton; +Cc: kdevops, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
Not yet supported.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
playbooks/roles/volume_group/tasks/terraform/gce.yml | 4 ++++
1 file changed, 4 insertions(+)
create mode 100644 playbooks/roles/volume_group/tasks/terraform/gce.yml
diff --git a/playbooks/roles/volume_group/tasks/terraform/gce.yml b/playbooks/roles/volume_group/tasks/terraform/gce.yml
new file mode 100644
index 000000000000..b3484177db4b
--- /dev/null
+++ b/playbooks/roles/volume_group/tasks/terraform/gce.yml
@@ -0,0 +1,4 @@
+---
+- name: Google Compute Engine
+ ansible.builtin.fail:
+ msg: Support for Google Compute Engine has not yet been implemented
--
2.48.1
* [PATCH v1 12/13] volume_group: Create a volume group on OCI nodes
2025-03-10 14:18 [PATCH v1 00/13] Block device provisioning for storage nodes cel
` (10 preceding siblings ...)
2025-03-10 14:18 ` [PATCH v1 11/13] volume_group: Create a volume group on GCE nodes cel
@ 2025-03-10 14:18 ` cel
2025-03-13 6:29 ` Chandan Babu R
2025-03-10 14:18 ` [PATCH v1 13/13] volume_group: Create a volume group on OpenStack public clouds cel
2025-03-11 3:29 ` [PATCH v1 00/13] Block device provisioning for storage nodes Luis Chamberlain
13 siblings, 1 reply; 17+ messages in thread
From: cel @ 2025-03-10 14:18 UTC (permalink / raw)
To: Luis Chamberlain, Chandan Babu R, Jeff Layton; +Cc: kdevops, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
On storage nodes (e.g., nfsd, iscsi, smbd), pick up the extra block
volumes, avoiding the root and data devices, and toss them into a
volume group to use for shared storage.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
.../volume_group/tasks/terraform/oci.yml | 38 +++++++++++++++++++
1 file changed, 38 insertions(+)
create mode 100644 playbooks/roles/volume_group/tasks/terraform/oci.yml
diff --git a/playbooks/roles/volume_group/tasks/terraform/oci.yml b/playbooks/roles/volume_group/tasks/terraform/oci.yml
new file mode 100644
index 000000000000..219e3d7edbfd
--- /dev/null
+++ b/playbooks/roles/volume_group/tasks/terraform/oci.yml
@@ -0,0 +1,38 @@
+---
+#
+# To guarantee idempotency, these steps have to generate the exact
+# same physical_volumes list every time they are run.
+#
+# Skip the block device on which the root filesystem resides, and
+# skip the device that is to be used for /data. These devices all
+# show up under /dev/oracleoci/ .
+#
+
+- name: Detect the root device
+ ansible.builtin.stat:
+ path: "/dev/oracleoci/oraclevda"
+ register: stat_output
+
+- name: Save the name of the root device
+ ansible.builtin.set_fact:
+ instance_root_device: "{{ stat_output.stat.lnk_source.split('/dev/').1 }}"
+
+- name: Detect the data device
+ ansible.builtin.stat:
+ path: "{{ data_device }}"
+ register: stat_output
+
+- name: Save the name of the data device
+ ansible.builtin.set_fact:
+ instance_data_device: "{{ stat_output.stat.lnk_source.split('/dev/').1 }}"
+
+- name: Add unused extra volumes to the volume list
+ ansible.builtin.set_fact:
+ physical_volumes: "{{ physical_volumes + ['/dev/' + item.key] }}"
+ when:
+ - item.value.model == "BlockVolume"
+ - item.key != instance_root_device
+ - item.key != instance_data_device
+ loop_control:
+ label: "Adding block device: /dev/{{ item.key }}"
+ with_dict: "{{ ansible_devices }}"
--
2.48.1
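The selection loop above keeps a device only when its model matches
"BlockVolume" and it is neither the root nor the data device. A toy
shell version of the same filter, with hard-coded sample data (all
device names and models here are fabricated):

```shell
# Filter candidate devices the way the with_dict loop does: the model
# must be "BlockVolume", and the root and data devices are skipped.
# Entries are passed as "name:model" pairs for simplicity.
select_physical_volumes() {
    root_dev=$1 data_dev=$2
    shift 2
    for entry in "$@"; do
        name=${entry%%:*}
        model=${entry#*:}
        [ "$model" = BlockVolume ] || continue
        [ "$name" = "$root_dev" ] && continue
        [ "$name" = "$data_dev" ] && continue
        printf '/dev/%s\n' "$name"
    done
}

select_physical_volumes sda sdb \
    sda:BlockVolume sdb:BlockVolume sdc:BlockVolume sdd:QEMU
# -> /dev/sdc
```

Because the inputs are derived from stable symlinks and volume models,
the resulting list is the same on every run, which is what keeps the
role idempotent.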
* [PATCH v1 13/13] volume_group: Create a volume group on OpenStack public clouds
2025-03-10 14:18 [PATCH v1 00/13] Block device provisioning for storage nodes cel
` (11 preceding siblings ...)
2025-03-10 14:18 ` [PATCH v1 12/13] volume_group: Create a volume group on OCI nodes cel
@ 2025-03-10 14:18 ` cel
2025-03-11 3:29 ` [PATCH v1 00/13] Block device provisioning for storage nodes Luis Chamberlain
13 siblings, 0 replies; 17+ messages in thread
From: cel @ 2025-03-10 14:18 UTC (permalink / raw)
To: Luis Chamberlain, Chandan Babu R, Jeff Layton; +Cc: kdevops, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
Not yet supported.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
playbooks/roles/volume_group/tasks/terraform/openstack.yml | 4 ++++
1 file changed, 4 insertions(+)
create mode 100644 playbooks/roles/volume_group/tasks/terraform/openstack.yml
diff --git a/playbooks/roles/volume_group/tasks/terraform/openstack.yml b/playbooks/roles/volume_group/tasks/terraform/openstack.yml
new file mode 100644
index 000000000000..f5b6fbdf527d
--- /dev/null
+++ b/playbooks/roles/volume_group/tasks/terraform/openstack.yml
@@ -0,0 +1,4 @@
+---
+- name: OpenStack
+ ansible.builtin.fail:
+ msg: Support for OpenStack has not yet been implemented
--
2.48.1
* Re: [PATCH v1 00/13] Block device provisioning for storage nodes
2025-03-10 14:18 [PATCH v1 00/13] Block device provisioning for storage nodes cel
` (12 preceding siblings ...)
2025-03-10 14:18 ` [PATCH v1 13/13] volume_group: Create a volume group on OpenStack public clouds cel
@ 2025-03-11 3:29 ` Luis Chamberlain
13 siblings, 0 replies; 17+ messages in thread
From: Luis Chamberlain @ 2025-03-11 3:29 UTC (permalink / raw)
To: cel; +Cc: Chandan Babu R, Jeff Layton, kdevops, Chuck Lever
On Mon, Mar 10, 2025 at 10:18:00AM -0400, cel@kernel.org wrote:
> From: Chuck Lever <chuck.lever@oracle.com>
>
> Hi -
>
> Sorry for the length of the series. I'm posting the series as a
> whole to provide context for each of the individual changes. Feel
> free to focus on whichever patch or patches in this series are most
> interesting to you. All review comments are welcome.
>
> The high-level goal is to enable testing NFS / SMB / iSCSI with
> kdevops in the cloud. These workflows are already operational for
> guestfs. The basic challenge is each cloud provider has a distinct
> way of provisioning and attaching block devices.
>
> This series can be considered in two sections:
>
> - the first four patches wrangle the terraform for some of the
> cloud providers to make them provision a set of extra block
> volumes, each of the same size, just as guestfs and AWS
> currently do.
>
> - the second half of the series adds a new playbook that:
>
> - de-duplicates the scripting that creates an LVM volume group,
> because three different roles each implement this the same way
>
> - replaces the "skip one device" method for determining which
> extra block volume the /data partition should live on. AWS still
> needs some work here because NVMe devices can get renamed after
> every instance reboot
>
> - adds LVM support that handles the differences amongst the cloud
> providers, tucked away in the new playbook
>
>
> GCE and OpenStack are not yet updated, but they are in plan.
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Luis
* Re: [PATCH v1 04/13] terraform/OCI: Create a set of multiple generic block devices
2025-03-10 14:18 ` [PATCH v1 04/13] terraform/OCI: " cel
@ 2025-03-13 5:56 ` Chandan Babu R
0 siblings, 0 replies; 17+ messages in thread
From: Chandan Babu R @ 2025-03-13 5:56 UTC (permalink / raw)
To: cel; +Cc: Luis Chamberlain, Jeff Layton, kdevops, Chuck Lever
On Mon, Mar 10, 2025 at 10:18:04 AM -0400, cel@kernel.org wrote:
> From: Chuck Lever <chuck.lever@oracle.com>
>
> When provisioning on OCI, terraform creates one block device for
> the /data file system, and one for sparse files. This is unlike
> other provisioning methods (guestfs and AWS being the primary
> examples) which instead create a set of generic block devices and
> then set up the sparse files or /data file system on one of those.
>
> Luis and Chandan have agreed to changing OCI to work like the
> other terraform providers and guestfs.
>
Looks good to me.
Reviewed-by: Chandan Babu R <chandanbabu@kernel.org>
> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> ---
> .../templates/oci/terraform.tfvars.j2 | 6 +
> scripts/terraform.Makefile | 9 +-
> terraform/oci/Kconfig | 153 ++++++++++++++++++
> terraform/oci/main.tf | 28 +++-
> terraform/oci/vars.tf | 17 ++
> 5 files changed, 208 insertions(+), 5 deletions(-)
>
> diff --git a/playbooks/roles/gen_tfvars/templates/oci/terraform.tfvars.j2 b/playbooks/roles/gen_tfvars/templates/oci/terraform.tfvars.j2
> index 6429c7289f52..2d45fd77d510 100644
> --- a/playbooks/roles/gen_tfvars/templates/oci/terraform.tfvars.j2
> +++ b/playbooks/roles/gen_tfvars/templates/oci/terraform.tfvars.j2
> @@ -12,10 +12,16 @@ oci_os_image_ocid = "{{ terraform_oci_os_image_ocid }}"
> oci_assign_public_ip = "{{ terraform_oci_assign_public_ip | lower }}"
> oci_instance_display_name = "{{ terraform_oci_instance_display_name }}"
> oci_subnet_ocid = "{{ terraform_oci_subnet_ocid }}"
> +oci_volumes_enable_extra = "{{ terraform_oci_volumes_enable_extra | lower }}"
> +{% if terraform_oci_volumes_enable_extra %}
> +oci_volumes_per_instance = {{ terraform_oci_volumes_per_instance }}
> +oci_volumes_size = {{ terraform_oci_volumes_size }}
> +{% else %}
> oci_data_volume_display_name = "{{ terraform_oci_data_volume_display_name }}"
> oci_data_volume_device_file_name = "{{ terraform_oci_data_volume_device_file_name }}"
> oci_sparse_volume_display_name = "{{ terraform_oci_sparse_volume_display_name }}"
> oci_sparse_volume_device_file_name = "{{ terraform_oci_sparse_volume_device_file_name }}"
> +{% endif %}
>
> ssh_config_pubkey_file = "{{ kdevops_terraform_ssh_config_pubkey_file }}"
> ssh_config_user = "{{ kdevops_terraform_ssh_config_user }}"
> diff --git a/scripts/terraform.Makefile b/scripts/terraform.Makefile
> index 19c2384fb2ad..e3d8c6b003ce 100644
> --- a/scripts/terraform.Makefile
> +++ b/scripts/terraform.Makefile
> @@ -104,10 +104,17 @@ else
> TERRAFORM_EXTRA_VARS += terraform_oci_assign_public_ip=false
> endif
> TERRAFORM_EXTRA_VARS += terraform_oci_subnet_ocid=$(subst ",,$(CONFIG_TERRAFORM_OCI_SUBNET_OCID))
> +
> +ifeq (y, $(CONFIG_TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA))
> +TERRAFORM_EXTRA_VARS += terraform_oci_volumes_enable_extra=true
> +else
> +TERRAFORM_EXTRA_VARS += terraform_oci_volumes_enable_extra=false
> TERRAFORM_EXTRA_VARS += terraform_oci_data_volume_display_name=$(subst ",,$(CONFIG_TERRAFORM_OCI_DATA_VOLUME_DISPLAY_NAME))
> -TERRAFORM_EXTRA_VARS += terraform_oci_data_volume_device_file_name=$(subst ",,$(CONFIG_TERRAFORM_OCI_DATA_VOLUME_DEVICE_FILE_NAME))
> TERRAFORM_EXTRA_VARS += terraform_oci_sparse_volume_display_name=$(subst ",,$(CONFIG_TERRAFORM_OCI_SPARSE_VOLUME_DISPLAY_NAME))
> +endif
> +TERRAFORM_EXTRA_VARS += terraform_oci_data_volume_device_file_name=$(subst ",,$(CONFIG_TERRAFORM_OCI_DATA_VOLUME_DEVICE_FILE_NAME))
> TERRAFORM_EXTRA_VARS += terraform_oci_sparse_volume_device_file_name=$(subst ",,$(CONFIG_TERRAFORM_OCI_SPARSE_VOLUME_DEVICE_FILE_NAME))
> +
> endif
>
> ifeq (y,$(CONFIG_TERRAFORM_OPENSTACK))
> diff --git a/terraform/oci/Kconfig b/terraform/oci/Kconfig
> index 4b37ad91d4b9..00f03163ed83 100644
> --- a/terraform/oci/Kconfig
> +++ b/terraform/oci/Kconfig
> @@ -90,6 +90,153 @@ config TERRAFORM_OCI_SUBNET_OCID
> Read this:
> https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformproviderconfiguration.htm
>
> +config TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA
> + bool "Enable additional block devices"
> + default n
> + help
> + Enable this to provision up to 10 extra block devices
> + on each target node.
> +
> +if TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA
> +
> +choice
> + prompt "Count of extra block volumes"
> + default TERRAFORM_OCI_VOLUMES_PER_INSTANCE_4
> + help
> + The count of extra block devices attached to each target
> + node.
> +
> +config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_2
> + bool "2"
> + help
> + Provision 2 extra volumes per target node.
> +
> +config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_3
> + bool "3"
> + help
> + Provision 3 extra volumes per target node.
> +
> +config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_4
> + bool "4"
> + help
> + Provision 4 extra volumes per target node.
> +
> +config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_5
> + bool "5"
> + help
> + Provision 5 extra volumes per target node.
> +
> +config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_6
> + bool "6"
> + help
> + Provision 6 extra volumes per target node.
> +
> +config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_7
> + bool "7"
> + help
> + Provision 7 extra volumes per target node.
> +
> +config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_8
> + bool "8"
> + help
> + Provision 8 extra volumes per target node.
> +
> +config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_9
> + bool "9"
> + help
> + Provision 9 extra volumes per target node.
> +
> +config TERRAFORM_OCI_VOLUMES_PER_INSTANCE_10
> + bool "10"
> + help
> + Provision 10 extra volumes per target node.
> +
> +endchoice
> +
> +config TERRAFORM_OCI_VOLUMES_PER_INSTANCE
> + int
> + output yaml
> + default 2 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_2
> + default 3 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_3
> + default 4 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_4
> + default 5 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_5
> + default 6 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_6
> + default 7 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_7
> + default 8 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_8
> + default 9 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_9
> + default 10 if TERRAFORM_OCI_VOLUMES_PER_INSTANCE_10
> +
> +choice
> + prompt "Volume size for each additional volume"
> + default TERRAFORM_OCI_VOLUMES_SIZE_50G
> + help
> + OCI implements volume sizes between 50G and 32T. In some
> + cases, 50G volumes are in the free tier.
> +
> +config TERRAFORM_OCI_VOLUMES_SIZE_50G
> + bool "50G"
> + help
> + Extra block volumes are 50 GiB in size.
> +
> +config TERRAFORM_OCI_VOLUMES_SIZE_64G
> + bool "64G"
> + help
> + Extra block volumes are 64 GiB in size.
> +
> +config TERRAFORM_OCI_VOLUMES_SIZE_128G
> + bool "128G"
> + help
> + Extra block volumes are 128 GiB in size.
> +
> +config TERRAFORM_OCI_VOLUMES_SIZE_256G
> + bool "256G"
> + help
> + Extra block volumes are 256 GiB in size.
> +
> +config TERRAFORM_OCI_VOLUMES_SIZE_512G
> + bool "512G"
> + help
> + Extra block volumes are 512 GiB in size.
> +
> +config TERRAFORM_OCI_VOLUMES_SIZE_1024G
> + bool "1024G"
> + help
> + Extra block volumes are 1024 GiB in size.
> +
> +config TERRAFORM_OCI_VOLUMES_SIZE_2048G
> + bool "2048G"
> + help
> + Extra block volumes are 2048 GiB in size.
> +
> +config TERRAFORM_OCI_VOLUMES_SIZE_4096G
> + bool "4096G"
> + help
> + Extra block volumes are 4096 GiB in size.
> +
> +config TERRAFORM_OCI_VOLUMES_SIZE_8192G
> + bool "8192G"
> + help
> + Extra block volumes are 8192 GiB in size.
> +
> +endchoice
> +
> +config TERRAFORM_OCI_VOLUMES_SIZE
> + int
> + output yaml
> + default 50 if TERRAFORM_OCI_VOLUMES_SIZE_50G
> + default 64 if TERRAFORM_OCI_VOLUMES_SIZE_64G
> + default 128 if TERRAFORM_OCI_VOLUMES_SIZE_128G
> + default 256 if TERRAFORM_OCI_VOLUMES_SIZE_256G
> + default 512 if TERRAFORM_OCI_VOLUMES_SIZE_512G
> + default 1024 if TERRAFORM_OCI_VOLUMES_SIZE_1024G
> + default 2048 if TERRAFORM_OCI_VOLUMES_SIZE_2048G
> + default 4096 if TERRAFORM_OCI_VOLUMES_SIZE_4096G
> + default 8192 if TERRAFORM_OCI_VOLUMES_SIZE_8192G
> +
> +endif # TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA
> +
> +if !TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA
> +
> config TERRAFORM_OCI_DATA_VOLUME_DISPLAY_NAME
> string "Display name to use for the data volume"
> default "data"
> @@ -98,6 +245,8 @@ config TERRAFORM_OCI_DATA_VOLUME_DISPLAY_NAME
> Read this:
> https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformproviderconfiguration.htm
>
> +endif # !TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA
> +
> config TERRAFORM_OCI_DATA_VOLUME_DEVICE_FILE_NAME
> string "Data volume's device file name"
> default "/dev/oracleoci/oraclevdb"
> @@ -106,6 +255,8 @@ config TERRAFORM_OCI_DATA_VOLUME_DEVICE_FILE_NAME
> Read this:
> https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformproviderconfiguration.htm
>
> +if !TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA
> +
> config TERRAFORM_OCI_SPARSE_VOLUME_DISPLAY_NAME
> string "Display name to use for the sparse volume"
> default "sparse"
> @@ -114,6 +265,8 @@ config TERRAFORM_OCI_SPARSE_VOLUME_DISPLAY_NAME
> Read this:
> https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformproviderconfiguration.htm
>
> +endif # !TERRAFORM_OCI_VOLUMES_ENABLE_EXTRA
> +
> config TERRAFORM_OCI_SPARSE_VOLUME_DEVICE_FILE_NAME
> string "Sparse volume's device file name"
> default "/dev/oracleoci/oraclevdc"
> diff --git a/terraform/oci/main.tf b/terraform/oci/main.tf
> index 033f821d9502..c3c477a6b4bd 100644
> --- a/terraform/oci/main.tf
> +++ b/terraform/oci/main.tf
> @@ -28,7 +28,7 @@ resource "oci_core_instance" "kdevops_instance" {
> }
>
> resource "oci_core_volume" "kdevops_data_disk" {
> - count = local.kdevops_num_boxes
> + count = var.oci_volumes_enable_extra == "true" ? 0 : local.kdevops_num_boxes
>
> compartment_id = var.oci_compartment_ocid
>
> @@ -38,7 +38,7 @@ resource "oci_core_volume" "kdevops_data_disk" {
> }
>
> resource "oci_core_volume" "kdevops_sparse_disk" {
> - count = local.kdevops_num_boxes
> + count = var.oci_volumes_enable_extra == "true" ? 0 : local.kdevops_num_boxes
>
> compartment_id = var.oci_compartment_ocid
>
> @@ -48,7 +48,7 @@ resource "oci_core_volume" "kdevops_sparse_disk" {
> }
>
> resource "oci_core_volume_attachment" "kdevops_data_volume_attachment" {
> - count = local.kdevops_num_boxes
> + count = var.oci_volumes_enable_extra == "true" ? 0 : local.kdevops_num_boxes
>
> attachment_type = "paravirtualized"
> instance_id = element(oci_core_instance.kdevops_instance.*.id, count.index)
> @@ -58,7 +58,7 @@ resource "oci_core_volume_attachment" "kdevops_data_volume_attachment" {
> }
>
> resource "oci_core_volume_attachment" "kdevops_sparse_disk_attachment" {
> - count = local.kdevops_num_boxes
> + count = var.oci_volumes_enable_extra == "true" ? 0 : local.kdevops_num_boxes
>
> attachment_type = "paravirtualized"
> instance_id = element(oci_core_instance.kdevops_instance.*.id, count.index)
> @@ -66,3 +66,23 @@ resource "oci_core_volume_attachment" "kdevops_sparse_disk_attachment" {
>
> device = var.oci_sparse_volume_device_file_name
> }
> +
> +resource "oci_core_volume" "kdevops_volume_extra" {
> + count = var.oci_volumes_enable_extra == "false" ? 0 : local.kdevops_num_boxes * var.oci_volumes_per_instance
> + availability_domain = var.oci_availablity_domain
> + display_name = format("kdevops_volume%02d", count.index + 1)
> + compartment_id = var.oci_compartment_ocid
> + size_in_gbs = var.oci_volumes_size
> +}
> +
> +locals {
> + volume_name_suffixes = [ "b", "c", "d", "e", "f", "g", "h", "i", "j", "k" ]
> +}
> +
> +resource "oci_core_volume_attachment" "kdevops_volume_extra_att" {
> + count = var.oci_volumes_enable_extra == "false" ? 0 : local.kdevops_num_boxes * var.oci_volumes_per_instance
> + attachment_type = "paravirtualized"
> + instance_id = element(oci_core_instance.kdevops_instance.*.id, count.index)
> + volume_id = element(oci_core_volume.kdevops_volume_extra.*.id, count.index)
> + device = format("/dev/oracleoci/oraclevd%s", element(local.volume_name_suffixes, count.index))
> +}
> diff --git a/terraform/oci/vars.tf b/terraform/oci/vars.tf
> index b02e79c597ec..077a9a4afdaa 100644
> --- a/terraform/oci/vars.tf
> +++ b/terraform/oci/vars.tf
> @@ -70,6 +70,23 @@ variable "oci_subnet_ocid" {
> default = ""
> }
>
> +variable "oci_volumes_enable_extra" {
> + description = "Create additional block volumes per instance"
> + default = false
> +}
> +
> +variable "oci_volumes_per_instance" {
> + description = "The count of additional block volumes per instance"
> + type = number
> + default = 0
> +}
> +
> +variable "oci_volumes_size" {
> + description = "The size of additional block volumes, in gibibytes"
> + type = number
> + default = 0
> +}
> +
> variable "oci_data_volume_display_name" {
> description = "Display name to use for the data volume"
> default = "data"
--
Chandan
* Re: [PATCH v1 12/13] volume_group: Create a volume group on OCI nodes
2025-03-10 14:18 ` [PATCH v1 12/13] volume_group: Create a volume group on OCI nodes cel
@ 2025-03-13 6:29 ` Chandan Babu R
0 siblings, 0 replies; 17+ messages in thread
From: Chandan Babu R @ 2025-03-13 6:29 UTC (permalink / raw)
To: cel; +Cc: Luis Chamberlain, Jeff Layton, kdevops, Chuck Lever
On Mon, Mar 10, 2025 at 10:18:12 AM -0400, cel@kernel.org wrote:
> From: Chuck Lever <chuck.lever@oracle.com>
>
> On storage nodes (e.g., nfsd, iscsi, smbd), pick up the extra block
> volumes, avoiding the root and data devices, and toss them into a
> volume group to use for shared storage.
Looks good to me.
Reviewed-by: Chandan Babu R <chandanbabu@kernel.org>
>
> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> ---
> .../volume_group/tasks/terraform/oci.yml | 38 +++++++++++++++++++
> 1 file changed, 38 insertions(+)
> create mode 100644 playbooks/roles/volume_group/tasks/terraform/oci.yml
>
> diff --git a/playbooks/roles/volume_group/tasks/terraform/oci.yml b/playbooks/roles/volume_group/tasks/terraform/oci.yml
> new file mode 100644
> index 000000000000..219e3d7edbfd
> --- /dev/null
> +++ b/playbooks/roles/volume_group/tasks/terraform/oci.yml
> @@ -0,0 +1,38 @@
> +---
> +#
> +# To guarantee idempotency, these steps have to generate the exact
> +# same physical_volumes list every time they are run.
> +#
> +# Skip the block device on which the root filesystem resides, and
> +# skip the device that is to be used for /data. These devices all
> +# show up under /dev/oracleoci/ .
> +#
> +
> +- name: Detect the root device
> + ansible.builtin.stat:
> + path: "/dev/oracleoci/oraclevda"
> + register: stat_output
> +
> +- name: Save the name of the root device
> + ansible.builtin.set_fact:
> + instance_root_device: "{{ stat_output.stat.lnk_source.split('/dev/').1 }}"
> +
> +- name: Detect the data device
> + ansible.builtin.stat:
> + path: "{{ data_device }}"
> + register: stat_output
> +
> +- name: Save the name of the data device
> + ansible.builtin.set_fact:
> + instance_data_device: "{{ stat_output.stat.lnk_source.split('/dev/').1 }}"
> +
> +- name: Add unused extra volumes to the volume list
> + ansible.builtin.set_fact:
> + physical_volumes: "{{ physical_volumes + ['/dev/' + item.key] }}"
> + when:
> + - item.value.model == "BlockVolume"
> + - item.key != instance_root_device
> + - item.key != instance_data_device
> + loop_control:
> + label: "Adding block device: /dev/{{ item.key }}"
> + with_dict: "{{ ansible_devices }}"
--
Chandan