Two volumes on same EC2 instance

7 comments, last by hplus0603 5 years, 5 months ago

I made a mistake and ended up with a non-working EC2 LAMP/game server. Fortunately I had made a snapshot, and I used these instructions to restore my EC2 server: https://www.techrepublic.com/blog/the-enterprise-cloud/backing-up-and-restoring-snapshots-on-amazon-ec2-machines/.

I followed the steps in "Restore a backup snapshot by mounting on an existing EC2 machine", as I didn't want to launch another EC2 instance.

My EC2 LAMP server now works again, but I'm running with two volumes. I haven't done the final steps, "copy files" and "clean up", because I didn't need to copy any files.

My old, non-functional volume is still there, but I guess that I'm running on the snapshot volume, because my server works again. I tried to remove the lower (old) volume, but I couldn't, I think because it is a boot partition (if I remember correctly).

This is from my console:

df /

Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvdf1       8123812 1931360   6092204  25% /

I don't know what's going on. Can someone help me out?

[Attachment: volumes.png]


Note that "allocated to instance in EC2" and "mounted by the Linux kernel as a file system" are different states.

A volume may still be allocated to the instance, but not visible through "df". However, you should not pass "/" as argument to "df" if you want to see all the volumes that are mounted.

You can also find unmounted disks (usually) using the "lsblk" command.
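For example (a minimal sketch; both commands are standard on pretty much any Linux instance):

# show every mounted file system, not just the one that holds "/"
df -h

# list every block device the kernel can see, mounted or not
lsblk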

 

enum Bool { True, False, FileNotFound };


Ah, you're right, of course. This is the correct df.


df

Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvdf1       8123812 1948064   6075500  25% /
devtmpfs          494232      36    494196   1% /dev
tmpfs             504720       0    504720   0% /dev/shm

But I still don't get it. My attached png shows /dev/sdf and /dev/xvda.

And maybe my first post was a bit unclear. As the server now works again, I guess that it runs on the volume that was made from the snapshot, but how do I find that out from those /dev paths?

And my goal was to run the EC2 instance with only one volume, the backup. But I can't remove the old volume, as it is a boot volume. Can I boot from the new (backup) volume instead, and then remove the failing volume?

What (do you think) is the current state? Is my EC2 instance running two volumes at the same time? One failing, and one working, where the working one is handling the web requests?

I'm quite confused...

 

You cannot find out which EBS volume is used from "df." You have to check in the EBS control panel to see which device the volume is attached to.

Separately, if two volumes are attached to your instance, but only one is mounted, you should still be able to see both volumes with "lsblk."
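If you have the AWS CLI configured, here is a rough sketch of the same check from the command line (the instance ID below is a placeholder; use your own):

# list the EBS volumes attached to one instance, with the device name each is attached as
aws ec2 describe-volumes \
    --filters Name=attachment.instance-id,Values=i-0123456789abcdef0 \
    --query 'Volumes[*].{ID:VolumeId,Device:Attachments[0].Device,State:Attachments[0].State}' \
    --output table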

 

enum Bool { True, False, FileNotFound };

Do you mean the information in the attached PNG, which I've copied from the instance page? What does that mean? Am I still running on the old volume, or am I running on two volumes simultaneously? I'm confused. I really appreciate your help.

This is lsblk from the console BTW:


lsblk

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
`-xvda1 202:1    0   8G  0 part
xvdf    202:80   0   8G  0 disk
`-xvdf1 202:81   0   8G  0 part /

 

[Attachment: rootdevice.png]

The information from lsblk doesn't seem to match the information in the console.

Specifically, the console says "root" is xvda, but your instance mounts xvdf1 on "/".

My guess is that the instance early boots from xvda (the original image) but mounts xvdf (the second volume) on the "/" file system. So you are, indeed, running on two volumes, but the first volume is only used when the instance is first "turned on."
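One way to check that guess from inside the instance (what you'll actually see depends on the AMI):

# the device or UUID the kernel was told to use as root at boot time
cat /proc/cmdline

# the file systems the OS mounts once it's running
cat /etc/fstab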

I imagine you could get rid of the xvda volume by shutting down the instance, then re-configuring it to use xvdf as the "root" and detaching the xvda volume. (This may end up renaming the volume known as xvdf to xvda from the point of view of the instance, btw; I'm not sure what EC2 does in that case.)
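Roughly, a sketch of those steps with the AWS CLI (the instance and volume IDs are placeholders, and double-check whether your AMI expects the root device to be named /dev/xvda or /dev/sda1 before trying this):

# stop the instance; you can't swap its root volume while it's running
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# detach the old (broken) root volume and the restored volume
aws ec2 detach-volume --volume-id vol-OLDROOT
aws ec2 detach-volume --volume-id vol-RESTORED

# re-attach the restored volume under the root device name, then start again
aws ec2 attach-volume --volume-id vol-RESTORED \
    --instance-id i-0123456789abcdef0 --device /dev/xvda
aws ec2 start-instances --instance-ids i-0123456789abcdef0

Take another snapshot before you try any of that, of course.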

There really is a four-level mapping that you need to follow here:

1) EBS volumes contain "blocks" of data. Raw data. Much like a raw hard disk, or a stick of RAM for that matter. EBS volumes have "ARN" identifiers, and perhaps user-given labels/names for ease of access, but they aren't "devices" by themselves.

2) EC2 instances configure specific EBS volumes (by ARN) to specific local device names (like xvda or xvdf). For historical reasons, the configuration panel may refer to devices as "sd<y>" when the instance actually sees the name "xvd<y>."

3) Data on the raw block device is structured by a partition table, which defines ranges of the raw device as "smaller raw devices" -- these are called xvda1 and xvdf1 on your instance, and in this case probably just say "map the entire raw volume as a single partition."

4) Within a partition, raw data is arranged in various ways based on the partition type. For a "swap" partition, for example, it's raw storage that holds a copy of what's in RAM when the RAM is needed for something else. For file systems on Linux, you typically make an ext4 file system on the device. File systems are then made available to the OS by "mounting" them at some path. Some file system needs to be mounted on the "/" path, but you can mount additional devices on other paths to separate out different parts of the system onto different volumes. This is useful for growing particular parts of the file system, or for making sure that, say, usage in the "/var/log" directory can't consume all storage so that there's no space left in "/home" or some such. (There's a small sketch of steps 3 and 4 below.)
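As a concrete (hypothetical) example of steps 3 and 4, for a blank data volume that shows up as /dev/xvdg, skipping a partition table and using the whole device, which is common for EBS data volumes:

# the kernel sees the raw attached device
lsblk /dev/xvdg

# make an ext4 file system on it (this erases anything already on the device)
sudo mkfs.ext4 /dev/xvdg

# mount it at a path so the OS can use it
sudo mkdir -p /mnt/data
sudo mount /dev/xvdg /mnt/data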

 

enum Bool { True, False, FileNotFound };

OK, but then you don't think that the volumes run simultaneously. I was a bit worried that it was possible to, in some way, attach many volumes to one EC2 instance, where the (virtualised) EC2 instance provided processing power and RAM to serve more than one volume, swapping out volumes that were waiting for file accesses, etc. (or something like that).

I really want to focus on releasing my game now. As the setup seems to work, I think I will leave it as it is. I don't want to mess something up right now. Please shout if you think that is a very bad idea!

Thank you so much, hplus! I really appreciate your wisdom.

Quote: "I was a bit worried that it was possible to, in some way, attach many volumes to one EC2 instance."

You can actually do that. You have attached two volumes to your instance. However, only one of them is mounted as a block device with a file system. Please read my description again, because "attaching volumes" is totally different from "mounting file systems." Volumes do not necessarily contain file systems (think of swap volumes.) Even if they do, they may not be mounted to the OS. (Think of an "ejected" flash disk, which is still in the USB port but unmounted from the OS.) File systems do not necessarily live on block volumes (consider "/proc" file systems or network file systems like nfs.)

You can attach as many volumes as you want, just like you can plug in as many hard drives as you want to a desktop computer. Whether the computer does anything with those plugged-in hard-drives, or whether your EC2 instance does anything with those attached volumes, depends on whether there's a file system on them, and whether that file system is mounted by the OS running inside the image.
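For instance, if you ever want to peek at what's on the attached-but-unmounted old volume, you could mount its partition read-only somewhere. The mount point below is just an example, and since that partition is a twin of your running root, some file systems will complain about a duplicate UUID and need extra mount options:

# mount the old root partition read-only at a temporary path and look around
sudo mkdir -p /mnt/oldroot
sudo mount -o ro /dev/xvda1 /mnt/oldroot
ls /mnt/oldroot

# unmount it again when you're done
sudo umount /mnt/oldroot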

You need to understand this difference, or you will make mistakes that may cause you to lose a lot of data in the future.

enum Bool { True, False, FileNotFound };

