EFS Service on AWS using Terraform
Hey guys, I have completed my Task 2 successfully, and today I'm going to show you how to use the EFS service with Terraform.
TASK-2:
Step 1 — Create a key pair and a security group that allows port 80.
Step 2 — Launch one EFS volume, then launch an EC2 instance.
Step 3 — In this EC2 instance, use the key and security group created in Step 1.
Step 4 — Mount that volume into /var/www/html.
Step 5 — The developer has uploaded the code into a GitHub repo; the repo also has some images. Copy the GitHub repo code into /var/www/html.
Step 6 — Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.
Step 7 — Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
Prerequisites:
- You must have an AWS account
- Configure the AWS CLI on your system
- Install Terraform on your system
So, let's get started…
Step 1 — Create a key pair and a security group that allows port 80.
- In Step 1 you have to create a key pair along with a security group that allows port 80.
Amazon EC2 key pairs - A key pair, consisting of a private key and a public key, is a set of security credentials that you use to prove your identity when connecting to an instance. Amazon EC2 stores the public key, and you store the private key.
A security group acts as a virtual firewall for your EC2 instances to control incoming and outgoing traffic. Inbound rules control the incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance.
- Create one folder. In my case I have created a tera/ folder.
- Create a new .tf file.
Inside the .tf file, specify the provider. The provider is used to interact with the many resources supported by AWS.
Next, we have to create the key pair as well as the security group. In my case, I created a new security group and used my existing key pair.
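A minimal sketch of this file might look like the following. The region, profile, group name, and open SSH port are my assumptions; adjust them to your setup.

```hcl
# Provider block — region and CLI profile are assumptions, change as needed.
provider "aws" {
  region  = "ap-south-1"
  profile = "default"
}

# Security group allowing HTTP (80) and SSH (22) from anywhere.
resource "aws_security_group" "allow_http" {
  name        = "allow_http"
  description = "Allow HTTP and SSH inbound traffic"

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Opening SSH as well is optional, but you will need it later if you configure the instance over a remote connection.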
step 2. Launch an EC2 instance.
step 3. In this EC2 instance, use the existing (or provided) key and the security group created in Step 1.
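A sketch of the instance resource, assuming the security group above is named allow_http and you have an existing key pair called mykey. The AMI ID is only an example; pick one valid in your region.

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0447a12f28fddb066" # example Amazon Linux 2 AMI — assumption
  instance_type = "t2.micro"
  key_name      = "mykey"                 # your existing key pair — assumption

  # Works by name in the default VPC; use vpc_security_group_ids with IDs otherwise.
  security_groups = [aws_security_group.allow_http.name]

  tags = {
    Name = "efs-web"
  }
}
```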
step 4. Launch one volume using the EFS service and attach it to your VPC, then mount that volume into /var/www/html.
EFS: Amazon EFS is a regional service that stores data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs, Regions, and VPCs, while on-premises servers can access it using AWS Direct Connect or AWS VPN.
Now, we have to launch EFS (Elastic File System), and for that we need a VPC and a subnet. In my case I have used the default subnet and VPC.
NOTE: You can create a new one, or use your default VPC and subnet.
Launch the EC2 instance and mention the security group, subnet, and VPC inside your EC2 instance configuration.
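The EFS file system and its mount target can be sketched like this. The subnet ID is a placeholder for your default subnet, and I reuse the allow_http security group name from Step 1 as an assumption.

```hcl
# The EFS file system itself.
resource "aws_efs_file_system" "myefs" {
  creation_token = "myefs"

  tags = {
    Name = "myefs"
  }
}

# Mount target in the subnet where the EC2 instance runs.
resource "aws_efs_mount_target" "mount" {
  file_system_id  = aws_efs_file_system.myefs.id
  subnet_id       = "subnet-xxxxxxxx" # your default subnet ID — placeholder
  security_groups = [aws_security_group.allow_http.id]
}
```

Note that the security group attached to the mount target must also allow inbound NFS traffic on port 2049, or the instance will not be able to mount the file system.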
step 5. The developer has uploaded the code into a GitHub repo; the repo also has some images.
step 6. Copy the GitHub repo code into /var/www/html.
Here, we have attached the data to EFS, and because of that the data is persistent rather than ephemeral.
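Mounting EFS into /var/www/html and cloning the repo can be done with a remote-exec provisioner, sketched below. The key file path is an assumption, and the repo URL is a placeholder — substitute your own.

```hcl
resource "null_resource" "configure_web" {
  # Wait until the mount target exists before mounting.
  depends_on = [aws_efs_mount_target.mount]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("mykey.pem") # path to your private key — assumption
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y httpd git amazon-efs-utils",
      "sudo systemctl enable --now httpd",
      # Mount the EFS volume on the web root, then pull the site code into it.
      "sudo mount -t efs ${aws_efs_file_system.myefs.id}:/ /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/<your-user>/<your-repo>.git /var/www/html/",
    ]
  }
}
```

Because the web root lives on EFS, terminating the instance does not lose the site data — any new instance that mounts the same file system sees the same files.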
step 7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.
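A sketch of the bucket and one image upload, assuming an older (v3-era) AWS provider where acl is still an argument on these resources; newer provider versions split ACLs into separate resources. The bucket name and image file name are placeholders — S3 bucket names must be globally unique.

```hcl
resource "aws_s3_bucket" "image_bucket" {
  bucket = "my-task2-image-bucket-1234" # must be globally unique — placeholder
  acl    = "public-read"
}

resource "aws_s3_bucket_object" "image" {
  bucket = aws_s3_bucket.image_bucket.bucket
  key    = "myimage.png"
  source = "myimage.png" # image file from the cloned repo, local path — assumption
  acl    = "public-read"
}
```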
step 8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
Here, I have created a CloudFront distribution backed by the S3 bucket.
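A minimal sketch of the distribution, pointing at the bucket defined above (the origin ID is an arbitrary label I chose):

```hcl
resource "aws_cloudfront_distribution" "cdn" {
  enabled = true

  origin {
    domain_name = aws_s3_bucket.image_bucket.bucket_regional_domain_name
    origin_id   = "s3-image-origin" # arbitrary label — assumption
  }

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "s3-image-origin"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

# The distribution's domain name — use this URL for the images in your code.
output "cloudfront_url" {
  value = aws_cloudfront_distribution.cdn.domain_name
}
```

Once the distribution is deployed, replace the image URLs in /var/www/html with the CloudFront domain name so the images are served from the CDN.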
Now your code is ready to run. So, run the following commands to launch your infrastructure on AWS:
terraform init
terraform validate
terraform apply -auto-approve
To delete everything, use the following command:
terraform destroy -auto-approve
Thanks for reading!
Hybrid Multi Cloud → Mentors: Vimal Daga sir and Preeti ma'am