The modern world of web development has driven a dramatic increase in the use of cloud hosting, especially for large web projects. However, legacy resources like servers, virtual machines, IP addresses, and file storage are often left behind after the development stage or accumulate over a project's lifetime. In this article, we'll look at why tracking these legacy resources in cloud hosting matters and how to optimize them.
Effective management of legacy resources in cloud hosting of web projects is crucial for efficiency and cost savings. Regular audits, optimization, and the safe release of resources keep a project running optimally and help you avoid unnecessary financial costs.
Step-by-step analysis for identifying and managing obsolete resources
Let's start with the first and arguably most important step - identifying obsolete resources. Cloud infrastructure changes rapidly and frequently, so DevOps engineers face the challenge of managing resources on cloud hosting effectively. In a large web project deployed on cloud platforms, it is hard to identify and track resources that are no longer needed but remain active, consuming capacity and, with it, money.
Creating a resource inventory:
The first step is to develop an inventory covering all cloud resources used by the project. It should be built from automated collection and analysis of data about servers, virtual machines, databases, and other components. Tools such as Ansible, Terraform, or AWS CloudFormation let you create a declarative view of the infrastructure.
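For a quick start, the inventory can also be assembled programmatically. Here is a minimal sketch using boto3 that lists EC2 instances with a few attributes; the region and the Project tag are illustrative assumptions, not part of any specific setup.

import boto3

def build_inventory(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    inventory = []
    # Paginate so that accounts with many instances are fully covered
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                inventory.append({
                    "id": instance["InstanceId"],
                    "type": instance["InstanceType"],
                    "state": instance["State"]["Name"],
                    "project": tags.get("Project", "untagged"),  # assumed tagging convention
                })
    return inventory

if __name__ == "__main__":
    for item in build_inventory():
        print(item)

The same pattern extends to volumes, Elastic IPs, and databases with the corresponding describe calls.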
Using monitoring and analysis tools:
To automate this process, use specialized monitoring tools that can provide information about resource usage. Such tools analyze load, traffic, availability, and other parameters, helping to identify inactive or underutilized resources. At ITsyndicate, we use tools like Prometheus and Grafana to continuously monitor the load on cloud resources. Sometimes we take a custom approach, combining self-written scripts with ready-made solutions such as AWS Trusted Advisor. Defining clear inactivity criteria lets you build notifications and automatic responses for cases where obsolete resources are detected.
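As an illustration, here is a minimal sketch of an inactivity check against Prometheus; the endpoint URL, the node_exporter metric, and the 10% threshold are assumptions for this example, not a description of our production setup.

import requests

# Assumed Prometheus address; replace with your own endpoint
PROMETHEUS_URL = "http://prometheus.example.internal:9090"
# Average CPU usage per instance over the last 14 days, based on node_exporter data
QUERY = '100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[14d])))'

def find_idle_hosts(threshold=10.0):
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=30)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    # Keep only the instances whose average CPU usage is below the threshold
    return [r["metric"].get("instance") for r in results if float(r["value"][1]) < threshold]

if __name__ == "__main__":
    print(find_idle_hosts())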
Automated classification and optimization of resources:
A DevOps engineer at ITsyndicate uses scripts and tools to classify resources automatically into active and inactive. Well-defined automatic optimization processes can then release resources or change their configurations according to project requirements. A simplified sketch of this idea follows.
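The sketch below assumes boto3 and a precomputed map of average CPU usage per instance; the threshold, the instance IDs, and the stop action are illustrative assumptions showing the classification step and one possible optimization measure.

import boto3

def classify(cpu_by_instance, cpu_threshold=10.0):
    # Split a {'i-...': avg_cpu_percent} map into active and inactive instance IDs
    active = [i for i, cpu in cpu_by_instance.items() if cpu >= cpu_threshold]
    inactive = [i for i, cpu in cpu_by_instance.items() if cpu < cpu_threshold]
    return active, inactive

def stop_inactive(inactive_ids, region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    if inactive_ids:
        # Stopped instances no longer accrue compute charges (attached EBS volumes still do)
        ec2.stop_instances(InstanceIds=inactive_ids)

# Example usage with illustrative metrics and IDs
active, inactive = classify({"i-0abc": 42.0, "i-0def": 3.1})
stop_inactive(inactive)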
Regular reviews and planning:
Establish a regular schedule of systematic reviews that analyze the relevance and necessity of each resource. This includes regularly updating configurations and reviewing the project architecture to identify new optimization opportunities. A savvy DevOps engineer sets these reviews up to run automatically, as sketched below.
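As one concrete (and assumed, not prescriptive) way to run such reviews on a schedule, an EventBridge rule can trigger a review Lambda twice a month; the rule name and function ARN below are placeholders.

import boto3

events = boto3.client("events")

# cron(minutes hours day-of-month month day-of-week year): 09:00 UTC on the 1st and 15th
events.put_rule(
    Name="unused-resource-review",
    ScheduleExpression="cron(0 9 1,15 * ? *)",
)
events.put_targets(
    Rule="unused-resource-review",
    Targets=[{
        "Id": "review-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:unused-ec2-report",  # placeholder ARN
    }],
)
# Note: the Lambda function also needs a resource-based permission that
# allows events.amazonaws.com to invoke it (lambda add-permission).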
Here is a straightforward example of how we track unused EC2 instances in the AWS cloud with a self-written script. It is implemented as a Python script in an AWS Lambda function (deployed with Terragrunt) that monitors unused EC2 instances. It checks the Amazon Elastic Compute Cloud (Amazon EC2) instances that were running at any time during the last 14 days and alerts you if the daily CPU utilization was 10% or less and network I/O was 5 MB or less on four or more days. The Lambda function generates a list of such instances on the 1st and 15th of the month and sends a notification to Slack.
import boto3
from datetime import datetime, timedelta
import os

def lambda_handler(event, context):
    SNS_TOPIC_ARN = os.environ['SNS_TOPIC_ARN']
    cloudwatch = boto3.client('cloudwatch')
    sns = boto3.client('sns')
    threshold_days = 4
    max_cpu_utilization = 10.0
    max_network_io = 5 * 1024 * 1024  # 5 MB in bytes
    # Get the current time and the time 14 days ago
    end_time = datetime.utcnow()
    start_time = end_time - timedelta(days=14)
    # Get the list of instances and evaluate their metrics
    # You can find the full code implementation in our GitHub repository. Link below.
For the full code, please refer to our GitHub repository.
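To make the truncated part easier to picture, here is a hedged sketch of how the daily metric check could be implemented with CloudWatch. It is not the repository code: for brevity it checks only NetworkOut, while the check described above would combine NetworkIn and NetworkOut.

def count_idle_days(cloudwatch, instance_id, start_time, end_time,
                    max_cpu_utilization, max_network_io):
    dimensions = [{"Name": "InstanceId", "Value": instance_id}]
    cpu = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2", MetricName="CPUUtilization",
        Dimensions=dimensions, StartTime=start_time, EndTime=end_time,
        Period=86400, Statistics=["Average"],  # one datapoint per day
    )["Datapoints"]
    net = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2", MetricName="NetworkOut",
        Dimensions=dimensions, StartTime=start_time, EndTime=end_time,
        Period=86400, Statistics=["Sum"],
    )["Datapoints"]
    cpu_by_day = {p["Timestamp"].date(): p["Average"] for p in cpu}
    net_by_day = {p["Timestamp"].date(): p["Sum"] for p in net}
    # Count the days on which both inactivity thresholds were met
    return sum(
        1 for day, avg_cpu in cpu_by_day.items()
        if avg_cpu <= max_cpu_utilization and net_by_day.get(day, 0) <= max_network_io
    )

An instance would then be reported when count_idle_days(...) returns a value of threshold_days or more.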
So, coming back to the main thesis of this post - financial losses from obsolete resources are an essential concern when managing the infrastructure of a large web project or application hosted in the cloud. DevOps engineers act as strategic participants in cost analysis and optimization, directing their efforts toward maximizing resource utilization and reducing costs. Let's highlight several aspects of this work:
- It is important to understand that DevOps engineers not only perform purely technical tasks but also conduct a detailed analysis of financial losses, considering every aspect of maintaining obsolete resources. This includes resource rental costs, data storage, server maintenance, network costs, etc.
- Cost monitoring tools (such as AWS Cost Explorer or Google Cloud Cost Management tools) let DevOps engineers visualize and analyze financial data and respond to changes in expenses in time (see the sketch after this list). Just remember that to use such tools effectively, you need someone who can set them up correctly and, most importantly, safely.
- DevOps engineers establish regular cost review processes to identify obsolete resources and their impact on the budget. After analysis, they develop optimization strategies, including downsizing servers, migrating to less expensive types of infrastructure, and other practical measures.
- Implementing automated processes to optimize costs wherever possible is also essential to our work. This can include automatically shutting down inactive resources, automatically purchasing reserved capacity at reduced prices, and various scaling strategies.
- DevOps engineers consider the financial risks of obsolete resources and develop strategies to reduce them. This may include moving to pay-per-use pricing models or using reserved instances.
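As a minimal sketch of the cost-visibility idea mentioned above (assuming boto3 and Cost Explorer enabled on the account; the date range is illustrative), the following pulls last month's spend per service so unusual line items stand out:

import boto3

ce = boto3.client("ce")  # Cost Explorer API

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # illustrative period
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        # Print each service's monthly spend for review
        print(f"{service}: ${amount:.2f}")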
Conclusion: Cost optimization in cloud hosting and the role of the DevOps engineer
DevOps engineers become important strategists in managing legacy resources in the world of fast-growing web projects, where cloud technologies play a crucial role. Having looked at different aspects of this problem, we can identify the key elements that define this role.
Identifying obsolete resources through inventory creation and the use of monitoring tools becomes a fundamental step. Automation of processes and regular, systematic reviews allow efficient identification and classification of these resources.
The financial aspect is an integral part of the activity of a DevOps engineer. Analysis of the storage and maintenance costs of obsolete resources reveals a significant impact on the project budget. Regular monitoring of financial expenses allows you to identify and adjust optimization strategies in time.
Process automation and optimization, use of cost monitoring and analysis tools, as well as strategic planning become guiding principles for a DevOps engineer. By putting them into practice, engineers can effectively manage the infrastructure of large web projects, maximizing their performance and minimizing costs.
In conclusion, a correct approach to legacy resource management is critical to achieving infrastructural efficiency and financial stability of projects in the cloud. By combining technical expertise and strategic management, DevOps specialists play a crucial role in ensuring the success and viability of web projects in an ever-changing technology landscape.