Resize Amazon EBS volumes without a reboot
Sooner or later a disk fills up, and it usually happens at the worst possible time. No worries: AWS supports expanding EBS volumes on the fly when they are attached to modern instances. The manual process splits into two phases:
- change the volume size via the web console or the command line;
- SSH to the instance and update the partition information.
I want to automate these steps a bit and keep a record of my changes. Terraform is one of the tools that helps me with this task.
What do we have: a volume formatted with XFS, attached to a Linux instance, and mounted somewhere. Let’s begin the work!
Step 1: Create a Terraform resource
Check the documentation of the resource related to EBS volumes and declare what we know about the volume:
// volumes.tf
resource "aws_ebs_volume" "mysql" {
  availability_zone = "us-east-1a"
  size              = 1000
  type              = "gp2"

  tags {
    Name      = "mysql"
    Role      = "db"
    Terraform = "true"
    FS        = "xfs"
  }
}
Import the existing AWS resource into our state:
$ terraform import aws_ebs_volume.mysql vol-0123456789abcdef0
Import successful!
Check for values missing from the resource, such as tags, and fix any conflicts:
$ terraform plan -target=aws_ebs_volume.mysql
~ aws_ebs_volume.mysql
    tags.%:         "1" => "2"
    tags.Name:      "Created for Mysql" => "mysql"
    tags.Terraform: "" => "true"
$ terraform apply -target=aws_ebs_volume.mysql
Update the size field to 2000 and apply the changes:
// volumes.tf
resource "aws_ebs_volume" "mysql" {
  availability_zone = "us-east-1a"
  size              = 2000
  type              = "gp2"

  tags {
    Name      = "mysql"
    Role      = "db"
    Terraform = "true"
    FS        = "xfs"
  }
}
$ terraform apply -target=aws_ebs_volume.mysql
~ aws_ebs_volume.mysql
    size: "1000" => "2000"
Step 2: Detect the instance IP and volume device
Search for the instance that has this volume attached. Create a data source and output the instance ID:
// volumes.tf
// ...
data "aws_instance" "mysql" {
  filter {
    name   = "block-device-mapping.volume-id"
    values = ["${aws_ebs_volume.mysql.id}"]
  }
}

output "instance_id" {
  value = "${data.aws_instance.mysql.id}"
}
Update the state and check the results:
$ terraform refresh
aws_ebs_volume.mysql: Refreshing state... (ID: vol-0123456789abcdef0)
data.aws_instance.mysql: Refreshing state...
Outputs:
instance_id = i-0123456789abcdef0
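Since a later step connects to the instance over SSH, it can be convenient to expose the address as well. An optional addition, using an exported attribute of the same data source:

```hcl
// Optional: expose the address that the SSH connection will use later.
// public_ip is an exported attribute of the aws_instance data source.
output "instance_public_ip" {
  value = "${data.aws_instance.mysql.public_ip}"
}
```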
It is time to get the device name or mount point of our volume inside the instance. Here is one example of how to get this information:
// volumes.tf
// ...
locals {
  mount_point = "${data.aws_instance.mysql.ebs_block_device.0.device_name}"
}
If you manage volumes with OpsWorks, it can be done via tags:
// volumes.tf
// ...
locals {
  mount_point = "${aws_ebs_volume.mysql.tags["opsworks:mount_point"]}"
}
Step 3: Execute a script
The last step is to update the partition to use the whole new disk size. The resource has two blocks: a connection manifest and a script:
// volumes.tf
// ...
resource "null_resource" "expand_disk" {
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = "${file("~/.ssh/id_rsa")}"
    host        = "${data.aws_instance.mysql.public_ip}"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo lsblk",
      "sudo xfs_growfs ${local.mount_point}",
    ]
  }
}
And our last command is to execute it:
$ terraform apply -target=null_resource.expand_disk
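Note that if the filesystem sits on a partition rather than directly on the device, the partition itself has to be grown first. A sketch of a variant provisioner, assuming partition number 1, the growpart tool from cloud-guest-utils being installed, and a hypothetical /mnt/mysql mount point:

```hcl
// Variant for a partitioned volume (assumptions: the filesystem is on
// partition 1, cloud-guest-utils provides growpart on the instance, and
// the XFS filesystem is mounted at the hypothetical path /mnt/mysql).
provisioner "remote-exec" {
  inline = [
    "sudo growpart ${local.mount_point} 1", // grow partition 1 to fill the disk
    "sudo xfs_growfs /mnt/mysql",           // then grow the mounted XFS filesystem
  ]
}
```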
Summary
In this article I presented a small case; in the real world these resources would be bigger and have dependencies on other resources. I will leave a small tricky exercise on your shoulders: how to run the script on each change of the volume’s size. Here is the final version of the document:
resource "aws_ebs_volume" "mysql" {
  availability_zone = "us-east-1a"
  size              = 2000
  type              = "gp2"

  tags {
    Name      = "mysql"
    Role      = "db"
    Terraform = "true"
    FS        = "xfs"
  }
}

data "aws_instance" "mysql" {
  filter {
    name   = "block-device-mapping.volume-id"
    values = ["${aws_ebs_volume.mysql.id}"]
  }
}

output "instance_id" {
  value = "${data.aws_instance.mysql.id}"
}

locals {
  mount_point = "${data.aws_instance.mysql.ebs_block_device.0.device_name}"
}

resource "null_resource" "expand_disk" {
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = "${file("~/.ssh/id_rsa")}"
    host        = "${data.aws_instance.mysql.public_ip}"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo lsblk",
      "sudo xfs_growfs ${local.mount_point}",
    ]
  }
}
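As a hint for the exercise above, one possible approach (a sketch, not the only solution) is a triggers map on the null_resource, so Terraform recreates it, and thus re-runs the provisioner, whenever the size changes:

```hcl
// Hint for the exercise: key the null_resource on the volume size so the
// remote script re-runs each time the size changes.
resource "null_resource" "expand_disk" {
  triggers = {
    volume_size = "${aws_ebs_volume.mysql.size}"
  }

  // ... the same connection and remote-exec blocks as above ...
}
```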
P.S.: Of course, you can accomplish the same with:
$ aws ec2 modify-volume --region us-east-1 --volume-id vol-11111111111111111 --size 2000 --volume-type gp2 --iops 100
$ ssh -t 58.18.28.18 "sudo xfs_growfs /ebs" # for an XFS volume
$ ssh -t 58.18.28.18 "sudo resize2fs /dev/xvdn" # for an EXT4 volume