Create and run a web app on ECS using AWS Fargate

Reading Time: 11 minutes

AWS Fargate is an Amazon ECS compute option that allows you to run containers without managing servers or clusters. With AWS Fargate, you no longer need to provision, configure, or scale clusters of virtual machines to run containers. This eliminates the need to choose server types, decide when to scale clusters, or optimize cluster packing.

You don’t have to engage with or worry about servers or clusters when you use AWS Fargate. Fargate allows you to concentrate on the design and development of your apps rather than the infrastructure that supports them.

In this blog, we will cover:

  • AWS Serverless Services
  • Types of Serverless Offerings by AWS
  • Who should go for a serverless approach vs. who shouldn’t, and why?
  • What are Containers?
  • Categories of Containers
  • What is AWS Fargate?
  • How does AWS Fargate work?
  • Features of AWS Fargate
  • Benefits of AWS Fargate
  • Use cases of AWS Fargate
  • Companies using AWS Fargate
  • Pricing of AWS Fargate
  • Hands-on
  • Conclusion

AWS Serverless Services

Serverless is a term that refers to the services, techniques, and strategies that allow you to create more agile applications and adapt to change more quickly.

Serverless architecture allows you to build and run applications and services without worrying about infrastructure. Your application still runs on servers, but AWS handles all of the server management. You no longer need to provision, scale, or maintain servers to run your applications, databases, and storage systems.

With serverless computing, AWS handles infrastructure management tasks such as capacity provisioning and patching, so you can focus on writing code that serves your customers, improving efficiency and productivity.

Types of Serverless Offerings by AWS

AWS has built serverless services for each of your stack’s three layers:

Compute

  • AWS Lambda: Run code without having to manage or provision servers, on a pay-as-you-go basis.
  • AWS Fargate: Run serverless containers with ECS or EKS.

Application integration

  • SNS: A pub/sub messaging service that also supports SMS, email, and push notifications.
  • Amazon EventBridge: Build event-driven architectures that integrate data from your own applications, SaaS products, and AWS services.
  • SQS: Decouple and scale microservices using message queues for sending and receiving data.
  • Step Functions: Combine multiple services into serverless workflows to build applications rapidly.

Data stores

  • S3: Store large amounts of data in a scalable, performant way.
  • DynamoDB: A key-value and document database with single-digit millisecond latency.
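To make the compute layer concrete, here is a minimal Lambda-style handler in Python. The event fields and the greeting are illustrative assumptions, not part of any AWS API; Lambda simply invokes the function you name as the handler, and you never provision the server it runs on.

```python
# A minimal, illustrative AWS Lambda handler. The "name" field in the event
# is an assumption for demonstration purposes.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local invocation with a sample event (Lambda passes a context object;
# None suffices for a local check)
print(handler({"name": "Fargate"}, None)["body"])  # → Hello, Fargate!
```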

Who should go for a serverless approach vs. who shouldn’t, and why?

Serverless computing may benefit developers who want to reduce their time to market and build lightweight, flexible apps that can be expanded or changed quickly.

Serverless designs save costs for applications with erratic usage patterns, such as peak periods followed by periods of little to no traffic. For such applications, purchasing a server or a block of servers that is always running and always available, even when not in use, may be a waste of resources. A serverless configuration responds immediately when needed and incurs no cost when idle.

Additionally, developers who want to push some or all of their application operations closer to end users for lower latency will need an at least largely serverless architecture, as this entails shifting certain activities away from the origin server.

There are times when using dedicated servers that are either self-managed or supplied as a service makes more sense, both financially and in terms of system design. Large applications with a reasonably steady, predictable workload, for example, may require a classical arrangement, which is likely less expensive in such instances.

Furthermore, migrating legacy applications to a new infrastructure with a completely different design may be prohibitively complex. In such circumstances, serverless is not the ideal alternative.

What are Containers? 


A container encapsulates a program along with all the components it needs to function correctly, such as system libraries, system settings, and other dependencies. Like a ‘just add water’ pancake mix, containers require only one thing to do their job: to be hosted and executed.

A container may execute any type of program. No matter where a containerized program is hosted, it will execute in the same manner. Containers, like real shipping containers, may be simply transported and deployed wherever required. Because they are a standard size, they can be carried anywhere using a number of modes of transportation (ships, trucks, trains, etc.) regardless of their contents.

Containers, in other words, are a technique of dividing a computer, or a server, into discrete user space environments, each of which runs just one program and is isolated from the other partitioned areas of the machine. Each container shares the machine’s kernel with other containers (the kernel is the operating system’s base and interacts with the computer’s hardware), but it operates as if it were the machine’s sole system.

Categories of Containers 


Container management tools can be broken down into three categories: 

  • Registry
  • Orchestration
  • Compute

AWS provides a secure place to store and manage your container images, orchestration to control when and where your containers run, and flexible compute engines to power them. AWS can help you manage your containers and their deployments so that you don’t have to worry about the underlying infrastructure. Whatever you’re building, AWS makes using containers straightforward and efficient.


Registry

Amazon Elastic Container Registry (ECR): A fully managed container registry that simplifies and speeds up storing, managing, and deploying container images.

Orchestration

Amazon Elastic Container Service (ECS): A fully managed container orchestration service for running containerized applications securely, reliably, and at scale.

Amazon Elastic Kubernetes Service (EKS): A fully managed Kubernetes service for running containerized applications on Kubernetes securely, reliably, and at scale.

Compute

AWS Fargate: A serverless compute engine for containers. Fargate lets you concentrate on developing your applications.

Amazon EC2: Run containers on virtual machines with full control over configuration and scaling.

What is AWS Fargate?

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate lets you concentrate on developing your applications: it removes the need to provision and maintain servers, lets you specify and pay for resources per application, and improves security through isolation by design.

Fargate automatically allocates the right amount of compute, removing the need to select instances and scale cluster capacity. You pay only for the resources required to run your containers, so there is no over-provisioning and no paying for extra servers. Fargate runs each task or pod in its own kernel, providing an isolated compute environment for tasks and pods. As a result, your application gets workload isolation and improved security by design.

Customers including Vanguard, Accenture, Foursquare, and Ancestry have selected Fargate to host their mission-critical applications.

How does AWS Fargate work?

AWS Fargate is built on Amazon Elastic Container Service (ECS). Originally, ECS still required servers: with the EC2 launch type, you manage a cluster of virtual machines that host your containers. The Fargate launch type removes those servers entirely; it requires nothing more than an AWS account.

AWS Fargate saves time by letting you manage your containers without creating a cluster of virtual machines. That doesn’t mean you lose control over how your workload runs: each task can use its own elastic network interface, which helps maximize efficiency and speed.

Fargate also balances load across containers, performs regular upgrades, and replaces failed containers. This is one of its biggest advantages over the EC2 launch type. Because Fargate takes on the responsibility of operating and managing the underlying infrastructure that supports the containers, you pay only for the resources each container requires.

Features of AWS Fargate

Resource-based pricing and per-second billing: You pay only for the time a task consumes resources, not for the size of the instance it runs on. CPU and memory are billed per second, with a one-minute minimum charge.

Flexible configuration options: Fargate offers 50 different CPU and memory combinations, allowing you to precisely match your application’s requirements. Depending on the configuration, you can use from 2 GB up to 8 GB of memory per vCPU, so general-purpose, compute-intensive, and memory-intensive workloads can all be well matched.
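As a sketch of how these pairings work, the snippet below encodes a subset of the valid vCPU/memory combinations. The authoritative, current list lives in the AWS documentation and has grown over time, so treat these values as illustrative.

```python
# An illustrative subset of valid Fargate task sizes: vCPU -> allowed memory (GB).
# Check the AWS documentation for the authoritative, current list.
VALID_TASK_SIZES = {
    0.25: [0.5, 1, 2],
    0.5: [1, 2, 3, 4],
    1: [2, 3, 4, 5, 6, 7, 8],
    2: list(range(4, 17)),  # 4 GB to 16 GB in 1 GB increments
    4: list(range(8, 31)),  # 8 GB to 30 GB in 1 GB increments
}

def is_valid_task_size(vcpu, memory_gb):
    """Return True if the vCPU/memory pair is a supported Fargate task size."""
    return memory_gb in VALID_TASK_SIZES.get(vcpu, [])

print(is_valid_task_size(1, 8))   # a 1 vCPU / 8 GB task is valid
print(is_valid_task_size(1, 16))  # 16 GB needs more than 1 vCPU
```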

Networking: All Fargate tasks run inside your own virtual private cloud (VPC). Fargate supports the awsvpc networking mode, so each task gets its own elastic network interface, visible in the subnet where the task runs. This separation of responsibilities lets you keep complete control over your applications’ networking using VPC capabilities such as security groups, routing rules, and network ACLs. Fargate also supports public IP addresses.

Load Balancing: Amazon ECS service load balancing is supported for both the Application Load Balancer and the Network Load Balancer. With the Fargate launch type, you specify the IP addresses of the Fargate tasks to register with the load balancers.

Permission tiers: Even with no instances to manage, you still divide tasks into logical clusters with Fargate. This lets you control who can run or view services in a cluster. The task IAM role can still be used. There is also a new Task Execution Role, which grants Amazon ECS permission to do things like publish logs to CloudWatch Logs and pull images from Amazon Elastic Container Registry (Amazon ECR).

Container Registry Support: Fargate supports Amazon ECR through the Task Execution Role, enabling seamless authentication when pulling images from Amazon ECR. Similarly, if you use Docker Hub as a public repository, you can keep doing so.
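For illustration, the trust policy that allows the ECS tasks service (ecs-tasks.amazonaws.com) to assume a Task Execution Role is a plain IAM policy document; the sketch below builds it in Python so it can be serialized and attached when creating the role. The role name you attach it to is your choice.

```python
import json

# Trust policy letting the ECS tasks service assume the Task Execution Role.
# This mirrors the standard IAM policy document format.
ecs_tasks_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ecs-tasks.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(ecs_tasks_trust_policy, indent=2))
```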

Amazon ECS CLI: The Amazon ECS CLI is a set of high-level commands that make it easier to create and manage Amazon ECS clusters, tasks, and services. The current version of the CLI supports running tasks and services with Fargate.

Compatibility between EC2 and Fargate Launch Types: All Amazon ECS clusters are heterogeneous, meaning you can run Fargate and EC2 launch-type tasks in the same cluster. This allows teams working on different apps to adopt Fargate at their own cadence, or to select whichever launch type matches their needs, without disturbing the existing setup. You can convert an existing ECS task definition to a Fargate service by making it compliant with the Fargate launch type, and vice versa; selecting a launch type is not a one-way street.

Logging and visibility: Fargate can send application logs to CloudWatch Logs, and CloudWatch metrics include service metrics such as CPU and memory utilization. Fargate tasks are also supported by AWS partners for visibility, monitoring, and application performance management, including Datadog, Aquasec, Splunk, Twistlock, and New Relic.

Benefits of AWS Fargate

Deploy and manage applications, not infrastructure: Whether you’re using ECS or EKS, Fargate allows you to focus on creating and executing your applications. You simply interact with and pay for your containers, so you don’t have to worry about scaling, patching, securing, or maintaining servers.

Fargate ensures that the infrastructure on which your containers operate is always patched and up to date.

Right-sized resources with flexible pricing options: Fargate launches and scales the compute to match your container’s resource requirements as closely as possible, eliminating over-provisioning and the cost of unused servers.

Design for secure isolation: Individual ECS tasks or EKS pods execute in their own dedicated kernel runtime environment, with no other tasks or pods sharing CPU, memory, storage, or network resources. For each job or pod, this guarantees workload segregation and increased security.

Rich application observability: Fargate provides out-of-the-box observability through built-in integrations with other AWS services, such as Amazon CloudWatch Container Insights. You can also collect metrics and logs for monitoring your applications through a broad range of third-party tools with open interfaces.

Use cases of AWS Fargate

Typical use cases include running web applications and APIs, microservices architectures, and batch processing workloads, anywhere you want to run containers without managing the underlying servers.

Companies using AWS Fargate

As noted above, customers including Vanguard, Accenture, Foursquare, and Ancestry have chosen Fargate to host their mission-critical applications.

Pricing of AWS Fargate

AWS Fargate pricing is computed from the vCPU and memory resources used from the moment you start downloading your container image until the Amazon ECS task or Amazon EKS pod terminates, rounded up to the nearest second.
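The billing model above can be sketched in a few lines of Python. The per-hour rates used here are assumptions (the us-east-1 Linux/x86 rates at the time of writing); rates vary by region, so always check the current price list before relying on these numbers.

```python
# Illustrative Fargate cost estimate. Rates below are assumptions; check the
# current regional price list.
VCPU_PER_HOUR = 0.04048   # USD per vCPU-hour (assumed us-east-1 rate)
GB_PER_HOUR = 0.004445    # USD per GB-hour (assumed us-east-1 rate)

def fargate_task_cost(vcpu, memory_gb, duration_seconds):
    """Estimate the cost of one task: per-second billing, one-minute minimum."""
    billed_seconds = max(duration_seconds, 60)  # one-minute minimum charge
    hours = billed_seconds / 3600
    return vcpu * VCPU_PER_HOUR * hours + memory_gb * GB_PER_HOUR * hours

# A 1 vCPU / 2 GB task running for 10 minutes
print(round(fargate_task_cost(1, 2, 600), 5))  # → 0.00823
```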

Additional Charges

If your containers utilize other AWS services or transfer data, you may be charged more. You’ll be charged for CloudWatch use if your containers use Amazon CloudWatch Logs for application logging.


Hands-on

Now, let’s do a hands-on exercise: we’ll set up ECS using AWS Fargate and run a sample web application on it.

The steps of this implementation are as follows:

Step 1: Create a Task Definition

Step 2: Configure the Service

Step 3: Configure the Cluster

Step 4: Review

Step 5: View your Service

Step 6: Clean Up

Step 1: Create a Task Definition

  1. Open the Amazon ECS console first-run wizard.
  2. Select your Region (here, Asia Pacific (Singapore)).
  3. Set the parameters for your container definition. The first-run wizard preloads the sample-app, nginx, and tomcat-webserver container definitions in the console. If necessary, click Edit to adjust the settings.
  4. For the task definition, the first-run wizard creates one to use with the preloaded container definitions. By selecting Edit, you can rename the task definition and change the resources it uses.
  5. Choose Next.
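To see the kind of object the wizard generates behind the scenes, here is a minimal Fargate-compatible task definition sketched as a Python dict, the sort of structure you would pass to boto3’s register_task_definition. The family, container name, and image are illustrative placeholders, not what the wizard actually names things.

```python
# A minimal Fargate-compatible task definition (illustrative names/image).
task_definition = {
    "family": "sample-app",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",  # Fargate tasks must use awsvpc networking
    "cpu": "256",             # 0.25 vCPU
    "memory": "512",          # 0.5 GB, a valid pairing for 0.25 vCPU
    "containerDefinitions": [
        {
            "name": "sample-app",
            "image": "httpd:2.4",
            "essential": True,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}

print(task_definition["networkMode"])  # → awsvpc
```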

Step 2: Configure the Service

  1. A service definition is preloaded in the first-run wizard, and you can see the sample-app-service service defined in the console. By selecting Update, you can rename the service or review and edit its details.
  2. Review your service settings and click Save, then Next.

Step 3: Configure the Cluster

You name your cluster in this section of the wizard, and Amazon ECS handles the networking and IAM configuration for you.

Choose a name for your cluster in the Cluster name field.

To continue, click Next.

Step 4: Review

To finish, review your task definition, task configuration, and cluster configuration, then click Create. You’re taken to a Launch State page, which displays the current status of your launch and explains each stage of the procedure (this can take a few minutes to complete while your cluster resources are created and populated).

Select View service when the launch is complete.

Step 5: View your Service

You can access your service’s containers via a web browser if it’s a web-based application, like the Amazon ECS example application.

Select the Tasks tab from the Service: service-name page.

Choose a task from your service’s list of tasks.

In the Network section, select the ENI Id for your task. This takes you to the Amazon EC2 console, where you can see the network interface details for your task, including the IPv4 Public IP address.

In your web browser, type the IPv4 Public IP address, and you should see a page with the Amazon ECS example application.

Step 6: Clean Up

Once you’ve stopped using an Amazon ECS cluster, you should clean up the resources associated with it to avoid incurring charges for resources you aren’t using.

Open the Amazon ECS console.

Select Clusters from the navigation pane.

On the Clusters page, select the cluster you want to remove.

Select Delete Cluster. At the confirmation prompt, enter delete me, then select Delete. Deleting the cluster cleans up its associated resources, such as Auto Scaling groups, VPCs, and load balancers.


In this blog, we looked at the various serverless offerings supplied by AWS, as well as their benefits and drawbacks. We also looked at what containers are and the many types provided by AWS. We also took a closer look at AWS Fargate, including its benefits, features, and how it works, as well as pricing and use cases. We’ll have a look at other container services provided by AWS in our next blogs. Stay tuned to keep getting all updates about our upcoming new blogs on AWS and relevant technologies. 

Meanwhile …

Keep Exploring -> Keep Learning -> Keep Mastering

This blog is part of our effort towards building a knowledgeable and kick-ass tech community. At Workfall, we strive to provide the best tech and pay opportunities to AWS-certified talents. If you’re looking to work with global clients and build kick-ass products while making big bucks doing so, give it a shot today.
