ReleaseEngineering/How To/Add Disk to an EC2 Instance
If you have an instance that doesn't have enough disk, despair not! You can add disk!
If the volume you need more space on is an LVM volume:
 [email@example.com ~]# df -h
 Filesystem                      Size  Used Avail Use% Mounted on
 /dev/mapper/cloud_root-lv_root   35G  1.5G   32G   5% /
then you can expand the volume in-place. If not, you'll need to create a new mountpoint.
Either way, start by adding a new volume to the instance:
- In the EC2 Console, go to "Volumes" and create a new general-purpose volume of the desired additional size.
- Find the volume in the list of volumes, and choose Actions -> Attach Volume
- Choose your instance, and accept the default device name
- On the instance, run `lsblk` and you should see your new disk:
 [firstname.lastname@example.org ~]# lsblk
 NAME                          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
 xvda                          202:0    0  35G  0 disk
 ├─xvda1                       202:1    0  61M  0 part /boot
 └─xvda2                       202:2    0  35G  0 part
   └─cloud_root-lv_root (dm-0) 253:0    0  35G  0 lvm  /
 xvdb                          202:16   0   4G  0 disk /mnt
 xvdf                          202:80   0  36G  0 disk
The new disk is the last one, which isn't otherwise used. Note that you may also see ephemeral disks here -- that's the 4G xvdb in this case, which cloud-init helpfully formatted and mounted at /mnt. You can use that disk, too, but it is ephemeral -- if you stop the instance, the data on that disk goes away. If it's part of your LVM root volume, then your instance is kaput. No bueno. Still, that disk is fast, and if you just need a few gigs for a few minutes, go ahead and use it.
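If you'd rather script the create-and-attach steps than click through the console, the AWS CLI can do the same thing. A sketch, with made-up volume and instance IDs -- substitute your own, and note that the volume must be created in the same availability zone as the instance:

```shell
# Create a general-purpose volume (size and AZ here are examples).
aws ec2 create-volume --size 36 --volume-type gp2 --availability-zone us-east-1a

# Attach it, using the volume ID returned above and your instance ID.
# A device named /dev/sdf typically appears on the instance as /dev/xvdf.
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf
```
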
Expand In Place
This is really easy -- add a new physical volume, extend the volume group to include it, and then resize the logical volume to use the new space.
 [email@example.com ~]# pvcreate /dev/xvdf
   Physical volume "/dev/xvdf" successfully created
 [firstname.lastname@example.org ~]# vgextend cloud_root /dev/xvdf
   Volume group "cloud_root" successfully extended
 [email@example.com ~]# lvresize -L+35G -r /dev/mapper/cloud_root-lv_root
   Extending logical volume lv_root to 69.94 GiB
   Logical volume lv_root successfully resized
 resize2fs 1.41.12 (17-May-2010)
 Filesystem at /dev/mapper/cloud_root-lv_root is mounted on /; on-line resizing required
 old desc_blocks = 3, new_desc_blocks = 5
 Performing an on-line resize of /dev/mapper/cloud_root-lv_root to 18333696 (4k) blocks.
The value passed to `-L` is generally a bit smaller than the size of the volume you just created, since LVM metadata takes up some of the space.
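If you'd rather not guess at the overhead, you can tell LVM to use every free extent in the volume group instead of an absolute size. A sketch, assuming the same volume names as above:

```shell
# Grow lv_root by all remaining free extents in the cloud_root volume group,
# resizing the filesystem at the same time (-r).
lvresize -l +100%FREE -r /dev/mapper/cloud_root-lv_root
```
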
Add New Volume
This is pretty easy, too.
 [firstname.lastname@example.org ~]# mkfs.ext4 /dev/xvdg
 mke2fs 1.41.12 (17-May-2010)
 Filesystem label=
 OS type: Linux
 Block size=4096 (log=2)
 Fragment size=4096 (log=2)
 Stride=0 blocks, Stripe width=0 blocks
 655360 inodes, 2621440 blocks
 131072 blocks (5.00%) reserved for the super user
 First data block=0
 Maximum filesystem blocks=2684354560
 80 block groups
 32768 blocks per group, 32768 fragments per group
 8192 inodes per group
 Superblock backups stored on blocks:
         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
 
 Writing inode tables: done
 Creating journal (32768 blocks): done
 Writing superblocks and filesystem accounting information: done
 
 This filesystem will be automatically checked every 24 mounts or
 180 days, whichever comes first.  Use tune2fs -c or -i to override.
 [email@example.com ~]# echo '/dev/xvdg /builds auto defaults,noatime 0 2' >> /etc/fstab
 [firstname.lastname@example.org ~]# mount /builds
 [email@example.com ~]# df -h /builds
 Filesystem      Size  Used Avail Use% Mounted on
 /dev/xvdg       9.8G   23M  9.2G   1% /builds
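One caveat: xvd* device names can shift when volumes are later attached or detached, so it's a bit more robust to mount by filesystem UUID in /etc/fstab. A sketch, with a placeholder UUID -- get the real one from blkid:

```shell
# Look up the new filesystem's UUID.
blkid /dev/xvdg

# Reference the UUID in /etc/fstab instead of the device name.
echo 'UUID=<uuid-from-blkid> /builds ext4 defaults,noatime 0 2' >> /etc/fstab
```
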