
I wanted to mount the root EBS volume of one instance on another instance in the same region (both running Amazon Linux).
Consider two instances in the same region, say server 1 and server 2. I stopped server 1, detached its root EBS volume, and attached that volume to server 2. Then I logged in to server 2 and tried to mount the attached volume, but the mount failed with an error.

The command I used to mount the volume was 

mount /dev/xvdf /home/ec2-user/storage 

Here storage is the directory where I want to mount the volume.

What I want is to attach the root EBS volume to the other server without losing any data.

2 Answers


  1. Can you add the error you got to your question?

    If I were to mount the root EBS volume from server 1 to server 2 without losing data, I would follow these steps:

    1. Stop Server 1:

      aws ec2 stop-instances --instance-ids i-1234567890abcdef0
      
    2. Detach the Root EBS Volume from Server 1:

      aws ec2 detach-volume --volume-id vol-1234567890abcdef0
      
    3. Attach the EBS Volume to Server 2:

      aws ec2 attach-volume --volume-id vol-1234567890abcdef0 --instance-id i-0987654321fedcba0 --device /dev/xvdf
      
    4. Identify the Attached Volume on Server 2:
      Log in to Server 2 and list the block devices to confirm the attached volume:

      lsblk
      
    5. Create a Mount Point:
      Create a directory to serve as the mount point if it doesn’t already exist:

      mkdir -p /home/ec2-user/storage
      
    6. Mount the Volume:
      Mount the volume to the created directory:

      sudo mount /dev/xvdf1 /home/ec2-user/storage
      

      Note: Sometimes the device might appear as /dev/xvdf1 instead of /dev/xvdf. Verify the exact device name using lsblk.

    7. Verify the Mount:
      Check if the volume is mounted correctly:

      df -h
      

    Make sure that the device name is correct (/dev/xvdf or /dev/xvdf1). Use lsblk to find the exact device name.
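    To also see the filesystem type on each device (useful for picking the right mount command), the -f flag of lsblk or the blkid utility can help. This is a sketch; it assumes the volume appeared as /dev/xvdf with a single partition /dev/xvdf1, which may differ on your instance:

```shell
# List all block devices along with filesystem type, label, and UUID
lsblk -f

# Or inspect just the attached partition (assumed name)
sudo blkid /dev/xvdf1
```

    On newer Nitro-based instance types the device may instead appear as an NVMe device such as /dev/nvme1n1, so checking lsblk output first is the safest approach.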

    If the file system is not recognized, you may need to run a file system check (note that for XFS, the default on Amazon Linux 2 root volumes, fsck is a no-op; use xfs_repair on the unmounted device instead):

    sudo fsck /dev/xvdf1
    

    By following these steps, you should be able to successfully mount the root EBS volume of Server 1 to Server 2 without losing any data.
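    One common cause of a mount error in exactly this scenario: if both instances were launched from the same AMI, the two XFS root filesystems carry the same UUID, and the kernel refuses to mount the second one (dmesg typically shows a duplicate-UUID message). As a sketch, assuming the device is /dev/xvdf1 and the filesystem is XFS, mounting with the nouuid option works around this:

```shell
# Check the kernel log for the reason the mount failed
sudo dmesg | tail

# For XFS only: ignore the duplicate filesystem UUID when mounting
sudo mount -o nouuid /dev/xvdf1 /home/ec2-user/storage
```

    Since the actual error message was not included in the question, this is a guess at the likely cause rather than a confirmed diagnosis.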

  2. You mentioned that the two EC2 instances are in the same region, but you didn’t say whether they are in the same Availability Zone.
    EBS volumes are essentially disks in a data center and cannot be attached across Availability Zones.

    If you are looking for cross-AZ storage, you might consider a centralized AWS storage option such as EFS, though EFS is NFS only, not block storage.
    Another option is FSx for NetApp ONTAP, which supports multi-AZ deployment as well as both block (iSCSI) and NFS access.
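    To confirm whether the volume and the target instance are actually in the same Availability Zone, the AWS CLI can report the AZ of each; the volume and instance IDs below are placeholders:

```shell
# AZ where the EBS volume lives
aws ec2 describe-volumes --volume-ids vol-1234567890abcdef0 \
  --query 'Volumes[0].AvailabilityZone' --output text

# AZ of the target instance
aws ec2 describe-instances --instance-ids i-0987654321fedcba0 \
  --query 'Reservations[0].Instances[0].Placement.AvailabilityZone' --output text
```

    If the two values differ, attaching will fail; in that case, create a snapshot of the volume and create a new volume from that snapshot in the instance's AZ.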
