An effective continuous integration (CI) and continuous delivery (CD) pipeline is essential for modern DevOps teams to cope with the rapidly evolving technology landscape. Combined with agile practices, a well-built CI/CD pipeline streamlines the software development life cycle, resulting in higher-quality software delivered faster.
In this article, I will discuss what a CI/CD pipeline is, why Jenkins is a popular choice for building one, and how to configure a simple pipeline in Jenkins.
The primary goal of a CI/CD pipeline is to automate the software development lifecycle (SDLC).
The pipeline covers many aspects of the software development process, from writing the code and running tests to delivery and deployment. Simply stated, a CI/CD pipeline integrates automation and continuous monitoring into the development lifecycle, encompassing all of its stages and connecting each stage to the next.
It reduces manual tasks for the development team, which in turn reduces human errors while delivering results faster. All of this contributes to the increased productivity of the delivery team.
(Learn more about stages in a CI/CD pipeline, deployment pipelines and the role of CI/CD.)
Jenkins is an open source server for continuous integration and continuous deployment (CI/CD) which automates the build, test, and deploy phases of software development. With numerous plugins you can easily integrate, along with choices of tools, programming languages, and cloud environments, Jenkins is highly flexible and makes it easier to efficiently develop reliable apps.
You might wonder why Jenkins is a good option for building your CI/CD pipeline. Here are some of the reasons it is popular:
What does a CI/CD pipeline built using Jenkins look like in action? Here is a simple web application development process.
Traditional CI/CD pipeline
The tools, processes, and complexity of a CI/CD pipeline will vary from this example. Much depends on your development requirements and the business needs of your organization. Typical options range from a straightforward four-stage pipeline to a multi-stage concurrent pipeline, including multiple builds, different test stages (smoke test, regression test, user acceptance testing), and multi-channel deployment (web, mobile).
In this section, I’ll show how to configure a simple CI/CD pipeline using Jenkins.
Before you start, make sure Jenkins is properly configured with the required dependencies. You’ll also want a basic understanding of Jenkins concepts. In this example, Jenkins is configured in a Windows environment.
Download Jenkins from the official website and install it, or run it in Docker using the following command:
docker run -d -p 8080:8080 jenkins/jenkins:lts
Configuring Jenkins is a matter of choosing the plugins you need. Git and Pipeline are commonly used tools you might want to add from the start.
Log in to Jenkins and click on “New Item.”
Select the “Pipeline” option from the menu, provide a name for the pipeline, and click “OK.”
We can configure the CI/CD pipeline in the pipeline configuration screen. There, we can set build triggers and other options for the pipeline. The most important section is the “Pipeline Definition” section, where we define the stages of the pipeline. Jenkins Pipeline supports both declarative and scripted syntax.
(Refer to the official Jenkins documentation for more detail.)
Let’s use the sample “Hello World” pipeline script:
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                echo 'Hello World'
            }
        }
    }
}
Click on Apply and Save. You have configured a simple pipeline!
Click on “Build Now” to execute the pipeline.
This executes the pipeline stages and displays the result in the “Stage View” section. We’ve only configured a single pipeline stage, as indicated here:
We can verify that the pipeline has been successfully executed by checking the console output for the build process.
Let’s expand the pipeline by adding two more stages. To do so, click on the “Configure” option and change the pipeline definition according to the following code block.
pipeline {
    agent any
    stages {
        stage('Stage #1') {
            steps {
                echo 'Hello World'
                sleep 10
                echo 'This is the First Stage'
            }
        }
        stage('Stage #2') {
            steps {
                echo 'This is the Second Stage'
            }
        }
        stage('Stage #3') {
            steps {
                echo 'This is the Third Stage'
            }
        }
    }
}
Save the changes and click on “Build Now” to execute the new pipeline. After successful execution, we can see each new stage in the Stage view.
The following console logs verify that the code was executed as expected:
We can use the “Pipeline timeline” plugin for better visualization of pipeline stages. Simply install the plugin, and inside the build stage, you will find an option called “Build timeline.”
Click on that option, and you will be presented with a timeline of the pipeline events, as shown below.
That’s it! You’ve successfully configured a CI/CD pipeline in Jenkins.
The next step is to expand the pipeline by integrating:
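As one example of such an expansion, here is a hedged sketch: the repository URL, Maven commands, and deploy script below are placeholders, not a prescription. The pipeline polls source control, builds and tests the code, deploys it, and reports failures.

pipeline {
    agent any
    triggers {
        // Poll the repository every five minutes and build when new commits appear.
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Checkout') {
            steps {
                git 'https://github.com/example/sample-app.git'
            }
        }
        stage('Build and test') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh staging'
            }
        }
    }
    post {
        failure {
            // Hook a notification (email, Slack, etc.) in here so the team hears about broken builds.
            echo 'Build failed'
        }
    }
}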
Good luck!
With the increased adoption of cloud technologies, the growing trend is to move the DevOps tasks to the cloud. Cloud service providers like Azure and AWS provide a full suite of services to manage all the required DevOps tasks using their respective platforms.
The following is a simple cloud-based DevOps CI/CD pipeline entirely based on Azure (Azure DevOps Services) tools.
Cloud-based CI/CD pipeline using Azure
In addition to the CI/CD pipeline, Azure also enables managing the SDLC using Azure Boards as an agile planning tool. Here, you’ll have two options:
A properly configured pipeline will increase the productivity of the delivery team by reducing the manual workload and eliminating most manual errors while increasing the overall product quality. This will ultimately lead to a faster and more agile development life cycle that benefits end-users, developers, and the business as a whole.
With the rapid growth of the technology sector, software development teams are under constant pressure to meet increased customer expectations for business applications. These expectations usually involve:
The traditional software development process has changed with the advent of cloud-based applications. The current paradigm is to consider developing the software as an ongoing service, rather than simply creating software for a specific requirement provided by a customer. Software development has changed from a monolithic structure to an agile structure, where developers consistently improve the software to meet the evolving customer requirements.
Software development companies have responded to this new approach by embracing modern Software Development Lifecycle (SDLC) methodologies such as Agile, Scrum, and Kanban to deliver product features, improvements, and bug fixes.
DevOps and automation are two key components that help organizations streamline the development process. This results in two significant changes:
The combination of DevOps with automation leads to a more efficient SDLC.
DevOps automation is the practice of automating repetitive and manual DevOps tasks to be carried out without any human interaction. Automation can be applied throughout the DevOps lifecycle, spanning:
The goal of DevOps automation is to streamline the DevOps lifecycle by reducing manual workload. This automation results in several key improvements:
Automation relies primarily on software tools and preset configurations to carry out the necessary processes and tasks.
We already know how a monolithic SDLC approach will be unable to provide the flexibility and responsiveness you need to tackle the requirements of:
Certainly, each constraint comes with its own technical and business dependencies.
To address these challenges, DevOps teams must adopt standardized workflows, processes, technologies, protocols, and metrics. All these tools support an environment that:
Using standardized practices also enhances the potential for automating other manual processes—moving from automation to orchestration. So, we can say that:
Standardization is a key component of successfully and accurately capturing the automation scope and implementing a proper automation strategy for DevOps.
While standardization is preferable in many cases, it should not stand in the way of adaptability, especially when it comes to tooling.
DevOps is consistently evolving, and every organization will have different workflows, strategies, and implementations. Standardizing tools without any adaptability will cause conflicts with evolving technologies and practices in the industry.
The concept of automation in DevOps—encompassing standardization—also applies to the governance models. Any standardization should be flexible enough to easily adapt to:
This can be done by providing a mechanism that facilitates the adoption of new technologies and streamlines the tooling used in the DevOps process.
For instance, a standardized library of tools requested by any team member for development, testing, deployment, or monitoring purposes must be created and vetted by the organization. When a new tool is required in the DevOps pipeline, a proper workflow should be in place to quickly vet the tool or technology and add it to the standard library.
Automation is not just about automating tasks and processes. In the wider DevOps landscape, automation helps to:
The benefits of automation are not limited to performance improvements. Here’s a look at additional benefits:
Automation is very helpful in identifying errors and behavioral issues in software applications.
In any highly automated process or task, the end result is always consistent and predictable. Thanks to its underlying static software configuration and the lack of human interaction, you’ve essentially eliminated user errors.
Automated processes are much easier to scale than manual processes. Scale automation processes simply by creating additional processes to meet the increased requirements.
In a manual environment, any scaling is severely constrained by the availability of team members.
However, in an automated environment, scaling is only constrained by the availability of underlying software and hardware, which is not an issue in cloud-based environments where resources are automatically scaled depending on the workload. A great example of this is Automatic Scale In/Out and Up/Down functions.
One of the most important factors in DevOps is speed: the ability to move through the lifecycle stages quickly has a significant effect on the deliverability of the project.
An automated process will be executed regardless of the time or availability of team members to manually trigger the task, allowing us to go through each process without any delays. Additionally, it’s almost always faster when a process is automated with a standard template compared to running it manually.
Automation allows us to be flexible in terms of both the scope and functionality of the automated process.
Most of the time, the only constraint of the functionality and scope is the configuration of the automation process, which can be changed easily to meet the requirements. It is more flexible than training a team member to adapt to the changes in the process.
The simple answer: almost everything that can be automated.
However, in practice, which processes to automate depends on external factors such as:
A good DevOps team will be able to choose the processes that should be automated in their DevOps lifecycle. Here are some common processes that are ideal for automation.
According to the core concepts and tools governing agile software development, CI/CD is the main component that needs to be automated in any organization. Automation can cover all aspects of this:
(See the many benefits of deployment automation.)
Managing infrastructure such as networks and servers requires a considerable investment of time, from initial setup and configuration through ongoing maintenance.
Automate infrastructure management by creating software-defined infrastructure that manages the underlying resources with minimal or no human interaction.
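As a hedged sketch of what this can look like in practice (assuming Terraform configuration files are kept in the application repository; the tool choice is an assumption, not a requirement), infrastructure provisioning can be folded into the same automated pipeline that delivers the application:

pipeline {
    agent any
    stages {
        stage('Provision infrastructure') {
            steps {
                // Initialize the working directory and apply the declared infrastructure.
                sh 'terraform init'
                sh 'terraform apply -auto-approve'
            }
        }
    }
}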
This is the best place to apply automation across the board. With test automation tools like Selenium and Puppeteer, it is easier than ever to automate tests of any kind. This can include:
(Learn more about testing automation.)
Rapid change makes it nearly impossible to manually keep track of components and their changes. Automation helps by enabling the DevOps team to create automated monitoring rules and generate alerts to keep track of:
Logs are paramount in identifying issues in an application. An application might generate a large number of logs.
Through automation and the aggregation and analysis of these logs using log management tools, you’ll be able to easily pinpoint issues in software.
Let’s go through a few real-world examples of automation in DevOps.
When it comes to automation, plenty of software options are available. Both open-source and licensed tools support end-to-end automation of a DevOps pipeline. Among them, CI/CD tools are the most common type of DevOps automation tools.
Puppet and Chef are solid cross-platform configuration management tools. These tools deal with infrastructure management, automating the configuration, deployment, and management of infrastructure.
Jenkins, TeamCity, and Bamboo are CI/CD tools that automate tasks from the development pipeline through deployment.
Beyond those, specialized software and tools focus on a single function that is a crucial part of the DevOps pipeline, for example:
You can combine all these tools to create a comprehensive automated DevOps lifecycle.
Another growing trend is migrating DevOps and automation tasks to cloud platforms, using cloud automation DevOps to leverage their power. The two market leaders AWS and Azure both offer a complete set of DevOps services that cover all the aspects of the DevOps lifecycle.
Automation is not all about replacing human interactions. Instead, think of automation as a tool to facilitate a more efficient workflow in the DevOps lifecycle.
Automation must be targeted towards tasks and processes that would gain a significant improvement in performance or efficiency. Otherwise, you’ll waste effort automating mundane tasks whose returns are diminished compared to the resources allocated to automate them.
On the other hand, automation combined with a good DevOps workflow will lead to higher-quality software with frequent releases without making any negative impact on the organization or the end-users.
The rise in popularity of DevOps practices and tools comes as no surprise to those who already utilize techniques centered around maximizing the efficiency of software enterprises. Similar to the way Agile quickly proved its capabilities, DevOps has taken its cue from Agile and built on it to create tools and techniques that help organizations adapt to the rapid pace of development today’s customers have come to expect.
As DevOps is an extension of Agile methodology, DevOps itself calls for extension beyond its basic form as well.
Collaboration between development and operations team members in an Agile work environment is a core DevOps concept, but there is an assortment of tools that fall under the purview of DevOps that empower your teams to:
DevOps is both a set of tools and practices as well as a mentality of collaboration and communication. Tools built for DevOps teams are tools meant to enhance communication capabilities and create improved information visibility throughout the organization.
DevOps specifically looks to increase the frequency of updates by reducing the scope of changes being made. Focusing on smaller tasks at a time allows for teams to dedicate their attention to truly fixing an issue or adding robust functionality without stretching themselves thin across multiple tasks.
This means DevOps practices provide faster updates that also tend to be much more successful. Not only does the increased rate of change please customers as they can consistently see the product getting better over time, but it also trains DevOps teams to get better at making, testing, and deploying those changes. Over time, as teams adapt to the new formula, the rate of change becomes:
In addition to new tools and techniques being created, older roles and systems are also finding themselves in need of revamping to fit into these new structures. Release management is one of those roles that has found the need to change in response to the new world DevOps has heralded.
Release management is the process of overseeing the planning, scheduling, and controlling of software builds throughout each stage of development and across various environments. Release management typically includes the testing and deployment of software releases as well.
Release management has had an important role in the software development lifecycle since before it was known as release management. Deciding when and how to release updates was its own unique problem even when software saw physical disc releases with updates occurring as seldom as every few years.
Now that most software has moved from hard and fast release dates to the software as a service (SaaS) business model, release management has become a constant process that works alongside development. This is especially true for businesses that have converted to utilizing continuous delivery pipelines that see new releases occurring at blistering rates. DevOps now plays a large role in many of the duties that were originally considered to be under the purview of release management roles; however, DevOps has not resulted in the obsolescence of release management.
With the transition to DevOps practices, deployment duties have shifted onto the shoulders of the DevOps teams. This doesn’t remove the need for release management; instead, it modifies the data points that matter most to the new role release management performs.
Release management acts as a method for filling the data gap in DevOps. The planning of implementation and rollback safety nets is part of the DevOps world, but release management still needs to keep tabs on each application, its components, and the promotion schedule as part of change orders. The key to managing software releases in a way that keeps pace with DevOps deployment schedules is automated management tools.
The modern business is under more pressure than ever to continuously deliver new features and boost their value to customers. Buyers have come to expect that their software evolves and continues to develop innovative ways to meet their needs. Businesses create an outside perspective to glean insights into their customer needs. However, IT has to have an inside perspective to develop these features.
Release management provides a critical bridge between these two gaps in perspective. It coordinates between IT work and business goals to maximize the success of each release. Release management balances customer desires with development work to deliver the greatest value to users.
(Learn more about IT/business alignment.)
Software products contain millions of interconnected parts that create an enormous risk of failure. Users are often affected differently by bugs depending on their other software, applications, and tools. Plus, faster deployments to production increase the overall risk that faulty code and bugs slip through the cracks.
Release management minimizes the risk of failure by employing various strategies. Testing and governance can catch critically faulty sections of code before they reach the customer. Deployment plans ensure there are enough team members and resources to address any potential issues before they affect users. All dependencies between the millions of interconnected parts are recognized and understood.
Release management is foundational to the discipline and skill of continuously producing enterprise-quality software. The rate of software delivery continues to accelerate and is unlikely to slow down anytime soon. The speed of changes makes release management more necessary than ever.
The move towards CI/CD and increases in automation ensure that the acceleration will only increase. However, it also means increased risk, unmet governance requirements, and potential disorder. Release management helps promote a culture of excellence to scale DevOps to an organizational level.
As DevOps adoption increases and changes accelerate, it is critical to have best practices in place to ensure that everything moves as quickly as possible. Well-refined processes enable DevOps teams to work more effectively and efficiently. Some best practices to improve your processes include:
Well-defined requirements in releases and testing will create more dependable releases. Everyone should clearly understand when things are actually ready to ship.
Well-defined means that the criteria cannot be subjective. Any subjective criteria will keep you from learning from mistakes and refining your release management process to identify what works best. It also needs to be defined for every team member. Release managers, quality supervisors, product vendors, and product owners must all have an agreed-upon set of criteria before starting a project.
DevOps is about creating an ideal customer experience. Likewise, the goal of release management is to minimize the amount of disruption that customers feel with updates.
Strive to consistently reduce customer impact and downtime with active monitoring, proactive testing, and real-time collaborative alerts that quickly notify you of issues during a release. A good release manager will be able to identify any problems before the customer does.
The team can resolve incidents quickly and experience a successful release when proactive efforts are combined with a collaborative response plan.
The staging environment requires constant upkeep. Maintaining an environment that is as close as possible to your production one ensures smoother and more successful releases. From QA to product owners, the whole team must maintain the staging environment by running tests and combing through staging to find potential issues with deployment. Identifying problems in staging before deploying to production is only possible with the right staging environment.
Maintaining a staging environment that is as close as possible to production will enable DevOps teams to confirm that all releases will meet acceptance criteria more quickly.
Whenever possible, aim to create new updates as opposed to modifying existing ones. Immutable programming drives teams to build entirely new configurations instead of changing existing structures. These new updates reduce the risk of bugs and errors that typically happen when modifying current configurations.
The inherently reliable releases will result in more satisfied customers and employees.
Good records management on any release/deployment artifacts is critical. From release notes to binaries to a compilation of known errors, records are vital for reproducing entire sets of assets that would otherwise depend on tacit knowledge.
Well-defined and implemented DevOps procedures will usually create a more effective release management structure. They enable best practices for testing and cooperation during the complete delivery lifecycle.
Although automation is a critical aspect of DevOps and release management, it aims to enhance team productivity. The more that release management and DevOps focus on decreasing human error and improving operational efficiency, the more they’ll start to quickly release dependable services.
Release managers working with continuous delivery pipeline systems can quickly become overwhelmed by the volume of work necessary to keep up with deployment schedules. This means enterprises are left with the options of either hiring more release management staff or employing automated release management tools. Not only is staff the more expensive option in most cases but adding more chefs in the kitchen is not always the greatest way to get dinner ready faster. More hands working in the process creates more opportunities for miscommunication and over-complication.
Automated release management tools provide end-to-end visibility for tracking application development, quality assurance, and production from a central hub. Release managers can monitor how everything within the system fits together which provides a deeper insight into the changes made and the reasons behind them. This empowers collaboration by providing everyone with detailed updates on the software’s position in the current lifecycle which allows for the constant improvement of processes. The strength of automated release management tools is in their visibility and usability—many of which can be accessed through web-based portals.
Powerful release management tools make use of smart automation that ensures continuous integration, which enhances the efficiency of continuous delivery pipelines. This allows for the steady deployment of stable and complex applications. Intuitive web-based interfaces provide enterprises with tools for centralized management and troubleshooting that help them plan and coordinate deployments across multiple teams and environments. The ability to create a single application package and deploy it across multiple environments from one location expedites the processes involved in continuous delivery pipelines and greatly simplifies their management.
DevOps organizations monitor their CI/CD pipeline across three groups of metrics:
With continuous delivery of high-quality software releases, organizations are able to respond to changing market needs faster than their competition and maintain improved end-user experiences. How can you achieve this goal?
Let’s discuss some of the critical aspects of a healthy CI/CD pipeline and highlight the key metrics that must be monitored and improved to optimize CI/CD performance.
But first, what is CI/CD and why is it important?
Continuous Integration (CI) refers to the process of merging software builds on a continuous basis. The development teams divide the large-scale project into small coding tasks and deliver the code updates iteratively, on an ongoing basis. The builds are pushed to a centralized repository where further automation, QA, and analysis take place.
Continuous Delivery (CD) takes the continuously integrated software builds and extends the process with automated release. All approved code changes and software builds are automatically released to production where the test results are further evaluated and the software is available for deployment in the real world.
Deployment often requires DevOps teams to follow a manual governance process. However, an automation solution may also be used to continuously approve software builds at the end of the software development lifecycle (SDLC) pipeline, making it a Continuous Deployment process.
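For example, in a declarative Jenkins pipeline the manual governance step can be expressed as an input gate in front of the production deployment; removing the gate turns the same pipeline into continuous deployment. This is a minimal sketch, and the deployment script name is a placeholder:

pipeline {
    agent any
    stages {
        stage('Deploy to production') {
            // Manual approval gate: someone must confirm before the stage runs.
            // Remove this block to make the deployment fully automatic (continuous deployment).
            input {
                message 'Deploy this build to production?'
            }
            steps {
                sh './deploy.sh production'
            }
        }
    }
}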
(Read more about CI/CD or set up your own CI/CD pipeline.)
Now, let’s turn to actual metrics that can help you determine how mature your DevOps pipeline is. We’ll look at three areas.
To deliver high-quality software that has performance and security infused into the code from the ground up, developers should be able to write code that is QA-ready.
DevOps organizations should introduce test procedures early during the SDLC lifecycle—a practice known as shifting left—and developers should respond with quality improvements well before the build reaches production environments.
DevOps organizations can measure and optimize the performance of their CI/CD pipeline by using the following key metrics:
Automation is the heart of DevOps and a critical component of a healthy CI/CD pipeline. However, DevOps is not solely about automation. In fact, DevOps thrives on automation adopted strategically—to replace repetitive and predictable tasks by automation solutions and scripts.
Considering the shortage of skilled workers and the scale of development tasks in a CI/CD pipeline, DevOps organizations should maximize the scope of their automation capabilities while also closely evaluating automation performance. They can do so by monitoring the following automation metrics:
DevOps organizations are expected to improve performance without disrupting the business. Considering the increased dependence on automation technologies and a cultural change focused on rapid and continuous delivery cycles, DevOps organizations need consistency of performance across the SDLC pipeline.
The dependability of the infrastructure underlying a high-performance CI/CD pipeline, one responsible for hundreds (at times, thousands) of delivery cycles on a daily basis, is therefore critical to the success of DevOps. How do you measure the dependability of your IT infrastructure?
Here are a few metrics to get you started:
With these metrics reliably in place, you’ll be ready to understand how close to optimal you really are.
Flexibility, speed, and quality are the core pillars of modern software development. Increased customer demand and the evolving technological landscape have made software development more complex than ever, making traditional software development lifecycle (SDLC) methods unable to cope with the rapidly changing nature of developments.
Practices like Agile and DevOps have gained popularity in facilitating these changing requirements by bringing flexibility and speed to the development process without sacrificing the overall quality of the end product.
Together, Continuous Integration (CI) and Continuous Delivery (CD) are a key aspect that helps in this regard. They allow users to build integrated development pipelines that span from development to production deployments across the software development process. So, what exactly are Continuous Integration and Continuous Delivery? Let’s take a look.
CI/CD refers to Continuous Integration and Continuous Delivery. In its simplest form, CI/CD introduces automation and monitoring to the complete SDLC.
Let’s deep dive into CI and CD in the following sections.
Modern software development is a team effort with multiple developers working on different areas, features, or bug fixes of a product. All these code changes need to be combined to release a single end product. However, manually integrating all these changes can be a near-impossible task, and there will inevitably be conflicting code changes with developers working on multiple changes.
Continuous Integration offers the ideal solution for this issue by allowing developers to continuously push their code to the version control system (VCS). These changes are validated, and new builds are created from the new code, which then undergoes automated testing.
This testing typically includes unit and integration tests to ensure that the changes do not cause any issues in the application. It also ensures that all code changes are properly validated and tested, and that immediate feedback is provided to the developer from the pipeline in the event of an issue, enabling them to fix it quickly.
This not only increases the quality of the code but also provides a platform to quickly identify code errors with a shorter automated feedback cycle. Another benefit of Continuous Integration is that it ensures all developers have the latest codebase to work on, as code changes are quickly merged, further mitigating merge conflicts.
The end goal of the continuous integration process is to create a deployable artifact.
Once a deployable artifact is created, the next stage of the software development process is to deploy this artifact to the production environment. Continuous delivery comes into play to address this need by automating the entire delivery process.
Continuous Delivery is responsible for application deployment as well as infrastructure and configuration changes, plus monitoring and maintaining the application. CD can extend its functionality to include operational responsibilities such as infrastructure management using automation tools such as:
Continuous Delivery also supports multi-stage deployments where artifacts are moved through different stages like staging and pre-production, and finally to production, with additional testing and verification at each stage. These additional tests and verifications further increase the reliability and robustness of the application.
CI/CD is the backbone of all modern software development, allowing organizations to develop and deploy software quickly and efficiently. It offers a unified platform to integrate all aspects of the SDLC, including separate tools and platforms ranging from source control and testing tools to infrastructure modification and monitoring tools.
A properly configured CI/CD pipeline allows organizations to adapt to changing consumer needs and technological innovations easily. In a traditional development strategy, fulfilling changes requested by clients or adopting new technology is a long-winded process. Moreover, the consumer need may have shifted by the time the organization adapts to the change. Approaches like DevOps with CI/CD solve this issue, as CI/CD pipelines are much more flexible.
For example, suppose there is a consumer requirement that is not currently addressed. With a DevOps approach, it can be quickly identified, analyzed, developed, and deployed to the software product in a relatively short amount of time without disrupting the normal development flow of the application.
Another aspect is that CI/CD enables quick deployment of even small changes to the end product, quickly addressing user needs. It not only resolves user needs but also provides visibility of the development process to the end-user. End-users can see that the product grows with frequent deployments related to bug fixes or new features.
This is in stark contrast with traditional approaches like the waterfall model, where the end-users only see the final product after the complete development is done.
CI/CD has come a long way since its inception, when it began only as a platform to support application delivery. Now it has evolved to support other aspects, such as:
Thus, users can integrate almost all aspects of the software delivery into Continuous Integration and Continuous Delivery. Furthermore, CI/CD can also extend itself to DevSecOps, where security testing such as vulnerability scans, configuration policy enforcements, network monitoring, etc., can be directly integrated into CI/CD pipelines.
A CI/CD pipeline is a software delivery process created through Continuous Integration and Continuous Delivery platforms. The complexity and the stages of the CI/CD pipeline vary depending on the development requirements.
Properly setting up a CI/CD pipeline is the key to benefiting from all the advantages offered by CI/CD. One pipeline might have a multi-stage deployment strategy that delivers software as containers to a multi-cloud Kubernetes cluster, and another may be a simple pipeline that builds, tests, and deploys the application as a serverless function.
A typical CI/CD pipeline can be broken down into the following stages:
All these stages are continuously monitored for errors, and the relevant parties are quickly notified of any failure.
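As a hedged illustration of how such stages can map onto a pipeline definition (the repository URL, build commands, and deployment scripts below are assumptions made for the example), a declarative Jenkins pipeline might look like this:

pipeline {
    agent any
    stages {
        stage('Source') {
            steps {
                git 'https://github.com/example/sample-app.git'
            }
        }
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Deploy to staging') {
            steps {
                sh './deploy.sh staging'
            }
        }
        stage('Deploy to production') {
            steps {
                sh './deploy.sh production'
            }
        }
    }
}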
CI/CD undoubtedly increases the speed and the efficiency of the software development process while providing a top-down view of all the tasks involved in the delivery process. On top of that, CI/CD provides the following benefits, reaching all aspects of the organization.
When it comes to CI/CD tools and platforms, there are many choices ranging from simple CI/CD platforms to specialized tools that support a specific architecture. There are even tools and services directly available through source control systems. Let’s look at some of the popular CI/CD tools and platforms.
Continuous Integration and Continuous Delivery have become an integral part of most software development lifecycles. With continuous development, testing, and deployment, CI/CD has enabled faster, more flexible development without increasing the workload of development, quality assurance, or the operations teams.
Today, CI/CD has evolved to support all aspects of the delivery pipelines, thus also facilitating new paradigms such as GitOps, Database DevOps, DevSecOps, etc.—and we can expect more to come.
Quality assurance (QA) is a major part of any software development. Software testing is the path to a bug-free, performance-oriented software application—one that also satisfies (or exceeds!) end-user requirements.
Of course, manual testing quickly becomes unscalable due to the rapid pace of development and ever-increasing requirements. Thus, a faster yet accurate testing solution was required, and automated testing became the ideal solution for this need. Automated testing does not mean replacing the entire manual testing process. Instead, automated testing means:
Introducing automated testing to a delivery pipeline can be a daunting process. Several factors—the programming language, user preferences, test cases, and the overall testing scope—directly decide what can and cannot be automated. However, if set up correctly, automated testing can be the backbone of the QA team to ensure a smooth and scalable testing experience.
Different types of automation frameworks came into prominence to aid in this endeavor. An automation framework allows users to easily set up an automated test environment that ultimately helps in providing a better ROI for both development and QA teams. In this article, we will have a look at different types of test automation frameworks available and their advantages and disadvantages.
Before diving into different types of test automation frameworks, we need to understand what an automation framework is. Test automation is the process of automating repetitive and predictable testing scenarios.
A test automation framework is a set of guidelines or rules that can be used to define test cases. These test cases can then be configured and implemented using test automation tools such as Selenium, Puppeteer, etc., to the delivery process via a CI/CD pipeline.
A test automation framework will consist of practices and tools that are designed to create efficient test cases. These practices include coding standards, test-data handling methods, object repository management, and managing access control to the test environment and external tools. However, testers have more freedom than this. Testers are:
Still, a framework provides standardization across the testing process, leading to a more efficient, secure, and compliant testing process.
There are some key advantages of adhering to the rules and guidelines offered by a test automation framework. These advantages include:
When it comes to test automation frameworks, there are six leading frameworks available these days. In this section, we will look at each of these six frameworks with regard to their architecture, advantages, and disadvantages:
The linear framework or the record and playback framework is best suited for basic, introductory level testing.
In a linear automation framework, users target a specific program functionality, create test scripts in sequential order, and run them individually. This process includes recording all the test steps, such as navigation and inputs, and playing them back repeatedly to conduct the test.
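As a hedged sketch (assuming the Selenium Java bindings and a ChromeDriver installation are available, and using made-up page URLs and element names), a linear test script is simply a hard-coded sequence of recorded steps:

import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import org.openqa.selenium.chrome.ChromeDriver

// Linear (record-and-playback style) test: every step is hard-coded, in order, for one scenario.
WebDriver driver = new ChromeDriver()
driver.get('https://example.com/login')                      // 1. open the login page
driver.findElement(By.name('username')).sendKeys('demo')     // 2. enter the user name
driver.findElement(By.name('password')).sendKeys('secret')   // 3. enter the password
driver.findElement(By.id('login-button')).click()            // 4. submit the form
assert driver.title.contains('Dashboard')                    // 5. verify the expected result
driver.quit()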
This framework takes a modular approach to testing, breaking tests down into separate units, functions, or modules that are tested in isolation. These separate test scripts can be combined to build larger tests covering the complete application or specific functionality.
(Learn about unit testing, function testing, and more.)
This framework is derived from the modular framework and aims to provide a greater level of modularity by breaking tests down by units, functions, and so on.
The library architecture framework identifies similar tasks within test scripts and groups them by function. These modular parts aren’t directly about function—they’re more focused on common objectives. Then these functions are stored in a library sorted by their objectives, and test scripts call upon this library to obtain different functionality when testing.
The main feature of the data-driven framework is that it decouples data from the script logic. It is the ideal framework when users need to test a function or scenario with different data sets but still use the same internal logic.
In data-driven frameworks, values such as inputs and outputs are passed as parameters to test scripts from external data sources such as variable files, databases, etc.
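Here is a minimal, hedged sketch of the idea in a Groovy script (the login function and the data rows are stand-ins; in a real framework the rows would come from a CSV file, spreadsheet, or database):

// Stand-in for the functionality under test; a real suite would drive the application itself.
boolean attemptLogin(String user, String pass) {
    return user == 'admin' && pass == 'secret'
}

// The test data lives apart from the test logic and can grow without touching the script.
def testData = [
    ['admin', 'secret', true],
    ['admin', 'wrong',  false],
    ['guest', 'secret', false],
]

// The same script logic runs once per data row.
testData.each { row ->
    def (user, pass, expected) = row
    assert attemptLogin(user, pass) == expected
}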
The keyword-driven framework takes the decoupling of data and the logic introduced in the data-driven framework a step further. In addition to the data being stored externally, specific keywords that are associated with different actions and used to test the GUI are also stored externally to be referenced at the test execution.
It makes keywords independent entities that reference specific functions or actions that are associated with specific objects. Users write code to prompt the necessary keyword-based action, and the appropriate script is executed within the test when the keyword is referenced.
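The following hedged Groovy sketch shows the mechanics (the keywords, actions, and test steps are invented for illustration; real frameworks typically read the step table from a spreadsheet or file):

// The keyword vocabulary: each keyword maps to an action implementation.
def keywords = [
    open_page : { url -> println "open ${url}" },
    type_text : { field, value -> println "type '${value}' into ${field}" },
    click     : { element -> println "click ${element}" },
]

// The test case itself is just data, edited without touching the code.
def testCase = [
    ['open_page', 'https://example.com/login'],
    ['type_text', 'username', 'demo'],
    ['click', 'login-button'],
]

// The driver looks up each keyword and executes its action with the given arguments.
testCase.each { step ->
    def keyword = step[0]
    def args = step.drop(1)
    keywords[keyword].call(*args)
}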
A hybrid testing framework is not a predefined framework with its architecture or rules but a combination of previously mentioned frameworks.
Depending on a single framework is not feasible given the ever-increasing need to cater to different test scenarios. Therefore, different types of frameworks are combined in most development environments to best suit the application testing needs while leveraging the strengths of each framework and mitigating the disadvantages.
With the popularity of DevOps and agile practices, more flexible frameworks are needed to cope with the changing environments. Therefore, a hybrid approach provides the best solution by allowing users to mix and match frameworks to obtain the best results for their specific testing requirements.
Selecting a test automation framework is the first step towards creating an automated testing environment. However, relying on a single framework has become a near-impossible task due to the ever-evolving nature of the technological landscape and rapid development cycles. That’s why the hybrid testing framework has gained popularity—for enabling users to combine different test automation frameworks to build an ideal automation framework for their needs.
Even if you are new to the automation world, you can start with a framework with many built-in solutions, build on top of it and customize it to create the ideal framework.
DevOps came to prominence to meet the ever-increasing market and consumer demand for tech applications. It aims to create a faster development environment without sacrificing the quality of software. DevOps also focuses on improving the overall quality of software in a rapid development lifecycle. It relies on a combination of multiple technologies, platforms, and tools to achieve all these goals.
Containerization is one technology that revolutionized how we develop, deploy, and manage applications. In this post, we will look at how containers fit into the DevOps world and the advantages or disadvantages offered by a container-based DevOps delivery pipeline.
Virtualization helped users to create virtual environments that share hardware resources. Containerization takes this abstraction a step further by sharing the operating system kernel.
This leads to lightweight and inherently portable objects (containers) that bundle the software code and all the required dependencies together. These containers can then be deployed on any supported infrastructure with minimal or no external configurations.
One of the most complex parts of a traditional deployment is configuring the deployment environment with all the dependencies and configurations. Containerized applications eliminate these configuration requirements as the container packages everything that the application requires within the container.
On top of that, containers will require fewer resources and can be easily managed compared to virtual machines. This way, containerization leads to greatly simplified deployment strategies that can be easily automated and integrated into DevOps delivery pipelines. When this is combined with an orchestration platform like Kubernetes or Rancher, users can:
DevOps relies on Continuous Delivery (CD) as the core process to manage software delivery. It enables software development teams to deploy software more frequently while maintaining the stability and reliability of systems.
Continuous Delivery utilizes a stack of tools such as CI/CD platforms, testing tools, etc., combined with automation to facilitate frequent software delivery. Automation plays a major role in these continuous delivery pipelines by automating all the possible tasks of the pipeline from tests, infrastructure provisioning, and even deployments.
In most cases, Continuous Delivery is combined with Continuous Integration to create more robust delivery pipelines called CI/CD pipelines. They enable organizations to integrate the complete software development process into a DevOps pipeline:
Both are crucial for a successful DevOps delivery pipeline.
(Learn how to set up a CI/CD pipeline.)
Now that we understand a containerized application and a delivery pipeline, let’s see how these two relate to each other to deliver software more efficiently.
First, let’s look at a more traditional DevOps pipeline. In general, a traditional delivery pipeline will consist of the following steps.
Most of the above tasks can be automated: infrastructure can be provisioned with IaC tools such as Terraform or CloudFormation, and deployment can be simplified using platforms such as AWS Elastic Beanstalk or Azure App Service. However, all these automated tasks still require careful configuration and management, and using provider-specific tools can lead to vendor lock-in.
Containerized application deployments allow us to simplify the delivery pipeline with less management overhead. A typical containerized pipeline can be summed up in the following steps.
Container-based DevOps delivery pipeline
As you can see in the above diagram, containerized application pipelines effectively eliminate most regular infrastructure and environment configuration requirements. However, the main thing to remember is that the container deployment environment must be configured beforehand. In most instances, this environment relates to either:
The main turning point of the delivery pipeline is the application build versus the container build. A normal delivery pipeline builds only the application, while a containerized pipeline builds a complete container that can be deployed in any supported environment.
The container includes all the application dependencies and configurations. It reduces errors relating to configuration issues and allows delivery teams to quickly move these containers between different environments such as staging and production. Besides, containerization greatly reduces the scope of troubleshooting, as developers only need to drill down into the application inside the container, with little to no effect from external configurations or services.
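As a hedged sketch of what the container-centric stages of such a pipeline might look like (the registry address, image name, and Kubernetes deployment name are placeholders, and registry authentication is omitted for brevity):

pipeline {
    agent any
    stages {
        stage('Build container image') {
            steps {
                sh 'docker build -t registry.example.com/sample-app:${BUILD_NUMBER} .'
            }
        }
        stage('Push image') {
            steps {
                sh 'docker push registry.example.com/sample-app:${BUILD_NUMBER}'
            }
        }
        stage('Deploy to cluster') {
            steps {
                // Roll the new image out to an existing Kubernetes deployment.
                sh 'kubectl set image deployment/sample-app sample-app=registry.example.com/sample-app:${BUILD_NUMBER}'
            }
        }
    }
}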
Modern application architectures such as microservices-based architectures are well suited for containerization as they decouple application functionality to different services. Containerization allows users to manage these services as separate individual entities without relying on any external configurations.
There will be infrastructure management requirements even with containers, though containers do indeed simplify these requirements. The most prominent infrastructure management requirement will be managing both the:
However, using a managed container orchestration platform like Amazon Elastic Kubernetes Service (EKS) or Azure Kubernetes Service (AKS) eliminates any need for managing infrastructure for the container orchestration platform. These platforms further simplify the delivery pipeline and allow Kubernetes users to use them without being vendor-locked as they are based on Kubernetes.
(Determine when to use ECS vs AKS vs EKS.)
Container Orchestration goes hand in hand with containerized applications as containerization is only one part of the overall container revolution. Container Orchestration is the process of managing the container throughout its lifecycle, from deploying the container to managing availability and scaling.
While there are many orchestration platforms, Kubernetes is one of the most popular options with industry-wide support. It can power virtually any environment, from single-node clusters to multi-cloud clusters. The ability of orchestration platforms to manage the container throughout its lifecycle while ensuring availability eliminates the need for manual intervention to manage containers.
As mentioned earlier, using a platform-agnostic orchestration platform prevents vendor-lock-in while allowing users to utilize managed solutions and power multi-cloud architectures with a single platform.
(Explore our multi-part Kubernetes Guide, including hands-on tutorials.)
The simple answer is yes. Containerization can benefit practically all application development, with the only exceptions being overly simple projects or legacy monolithic developments.
Containers can support any environment regardless of the programming language, framework, deployment strategy, etc., while providing more flexibility for delivery teams to customize their environments without affecting the delivery process.
The modern technology landscape is an uber-competitive, constantly evolving ecosystem. With technology integrated into all aspects of modern life, all companies must continuously evolve to:
Historically, IT departments acted as a single team, but they have been increasingly divided into specialized departments or teams with specific goals and responsibilities. This increased specialization is vital for quickly adapting to the evolving technological landscape. However, this division has also created some disconnect between teams when it comes to software development and deployment.
DevOps, ITOps, and NoOps are some concepts that help companies to become as agile and secure as possible. Understanding these concepts is the key to structuring the delivery pipeline at an organizational level. So, in this article, let’s take a look at the evolution of ITOps, DevOps, and NoOps.
ITOps (or TecOps) is shorthand for IT Operations. IT Operations is the most traditional concept of the three we’ll discuss, and it’s also the basis for these more modern practices.
Any IT task can come under the ITOps umbrella regardless of the business domain, as almost every business domain relies on IT for day-to-day operations. ITOps can apply to virtually any field.
(Understand how operations management can vary from service management.)
In its most fundamental form, ITOps is the process of delivering and maintaining all the services, applications, technologies, and infrastructure that are required to run an information technology-based product or service. Therefore, ITOps views software development and IT infrastructure management as a unified entity that is a part of the same process. The main difference of ITOps is how it handles delivery and maintenance.
ITOps typically covers all the following job roles:
The above roles represent the people who are responsible for delivering IT changes and providing long-term support for the overall IT services and infrastructure.
ITOps is geared more towards stability and long-term reliability, with limited support for agile and speedy workflows. Generally, agility and speed are not the primary concerns of ITOps at all. Thus, ITOps will seem inflexible, with rigid workflows. This approach is also geared towards managing physical infrastructure with release-based, highly tested software products where reliability and stability are key factors.
This inflexible nature is also the major downside of ITOps. However, it may be an excellent choice for monolithic and slow-moving software developments, such as in the financial services industry. Yet ITOps becomes obsolete in rapidly evolving software developments. As modern software developments come under this category, ITOps is not a suitable candidate for such environments.
DevOps provides a set of practices to bring software development and IT operations together to create rapid software development pipelines. These development pipelines feature greater agility without sacrificing the overall quality of the end product. We can understand DevOps as a major evolution of traditional ITOps that is an outcome of the Cloud era.
(Explore our comprehensive DevOps Guide.)
DevOps combines cultural philosophies, different practices, and tools with a greater emphasis on team collaboration. Moreover, DevOps will bring fundamental changes to how an organization handles its overall development strategy. As mentioned previously, a modern software delivery team consists of multiple specialized teams such as:
DevOps aims to bring all these teams together without impacting their specialty while fostering a more collaborative environment. This environment provides greater visibility of the roles and responsibilities of each team and team member.
Automation also plays a key role in DevOps to facilitate an agile and rapid SDLC. It enables offloading most manual and tedious tasks such as testing and infrastructure provisioning into automated workflows. Tools to facilitate this automation include:
The gist of adopting DevOps in your organization is that it can power previously disconnected tasks, such as infrastructure provisioning and application deployments, through a single unified delivery pipeline.
For example, in a more traditional development process, developers will need to inform the operations team separately if they need to provision or reconfigure infrastructure to meet the application changes. This process can lead to significant delays and bottlenecks in the overall delivery process.
However, DevOps streamlines this process by allowing separate teams to understand the requirements of each other. It enables them to foresee these requirements and address them promptly. This process can be automated in some situations, eliminating the need for manual interaction to manage the infrastructure.
DevOps is well suited for modern, cloud-based, or cloud-native application development and can be easily adapted to meet the ever-changing market and user requirements. There is a common misconception that DevOps is unsuitable for traditional developments, yet DevOps practices can be adapted to suit any type of development—including DevOps for service management.
NoOps is a further evolution of the DevOps method to eliminate the need for a separate operations team by fully automating the IT infrastructure. In this approach, all provisioning, maintenance, and similar tasks are automated to a level where no manual intervention is required.
NoOps and DevOps are similar in the sense that they both rely on automation to streamline software development and deployment. However, DevOps aims to foster a more collaborative environment while using automation to simplify the development process.
On the other hand, NoOps aims to remove any operational concerns from the development process. In a fully automated environment, developers can use these tools and processes directly even without knowing their underlying mechanisms.
NoOps is solely targeted at a cloud-based architecture where infrastructure can be less of a burden or the complete responsibility of the service provider.
Serverless architectures are a prime example of NoOps software delivery: developers only need to create their applications and deploy them to the serverless environment, eliminating most infrastructure and operational considerations.
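For illustration, a serverless function can be as small as the following sketch, which follows the handler convention used by AWS Lambda's Python runtime; the function body and the event shape are hypothetical. The developer writes and deploys only this code, while the provider handles provisioning, scaling, and patching.

# handler.py - a hypothetical serverless function; the platform invokes handler()
# for each request, so there is no server, VM, or container for the team to operate.
import json

def handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime metadata.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }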
NoOps may seem like the perfect operational strategy. Unfortunately, it has no process management or team management practices baked in. As a result, it can hinder overall collaboration within a delivery pipeline and place more of the burden of managing the application lifecycle on developers, without any operational assistance.
In most cases, NoOps works best as a complement to DevOps practices, introducing further automation to a delivery pipeline while preserving a collaborative, multi-team environment.
In the above sections, we discussed the impact of each of these methods on the software development lifecycle. But what is the ideal solution for your organizational environment? Let’s summarize the primary characteristics of each method to find out the answer to that question.
As you can see, ITOps and NoOps excel at their domains, whereas DevOps can be considered a more universal approach.
ITOps is slowly becoming obsolete due to its slow rate of adaptation to the current technological landscape. (In fact, AIOps is rapidly moving in.)
NoOps is an idealistic approach in which everything can be automated. However, it is still a way off, as critical aspects such as testing and advanced infrastructure and networking configuration continue to require manual intervention.
Finally, we come back to DevOps. DevOps has gained popularity thanks to its adaptability to almost any development environment while improving the agility, speed, and efficiency of the software delivery process. Approaches like NoOps can even be integrated into the overall DevOps process to enhance it further.
Database DevOps is an emerging area of the DevOps methodology. Let’s take a look at database management and what happens when you apply DevOps concepts.
(This article is part of our DevOps Guide. Use the right-hand menu to go deeper into individual practices and concepts.)
DevOps has become the preferred software development lifecycle (SDLC) framework for application development in recent years. Continuous operations and automation have emerged as key characteristics of the DevOps philosophy as practiced in successful DevOps environments.
Yet, the principles of continuity and automation have not entirely encompassed all aspects of software applications.
The ongoing management, change, update, development, and processing of database code has emerged as a bottleneck for many DevOps organizations. It has forced engineers to invest countless hours in database development rework to support the continuous release cycles expected of a streamlined DevOps SDLC pipeline.
Since database change and development are considered among the riskiest and slowest processes in the SDLC, applying the DevOps principles of continuous operations and automation specifically to database code development is seen as a potential solution to the database problem. According to a recent research report:
Before we discuss how database DevOps can make the DevOps SDLC pipeline efficient, let’s discuss the database-specific challenges facing DevOps organizations:
And then there’s a bigger, harder-to-tackle challenge: insufficient DevOps attention.
Many real-world DevOps implementations have failed to integrate database and application development processes into a unified, holistic SDLC framework. Database management has continued down the traditional route, and the increasing scale of database changes has made it difficult for engineers to standardize and coordinate database development efforts with the rest of application development.
(Watch these challenges grow as data storage increases.)
Now, let’s look at the main tasks involved in Database DevOps, which in fact make the process similar to adopting the DevOps framework for application code:
Use a centralized version control system where all of the database code is stored, merged, and modified. Storing static data, scripts, and configuration files within a unified source control system makes it easy to roll back changes and to synchronize database code changes with application code development following a CI/CD approach.
(Learn more about CI/CD.)
Automate build operations that run alongside application releases. This makes it easy to coordinate the application and database code deployment process. The database code is tested and updated at the same time a new software build integration takes place, according to the underlying database dependencies.
The CI/CD approach, with a centralized version control system for both database and application code, makes it easier to identify database problems every time new code is checked in to the repository and a build is compiled.
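As a sketch of how these two tasks fit together, the following example applies versioned migration scripts from a migrations/ directory in order and records which versions have already run, so the same step can execute automatically on every build. It uses Python's built-in sqlite3 purely for illustration; the migrations/ directory layout and the schema_migrations table are assumptions, not a prescribed standard.

# apply_migrations.py - illustrative database migration runner for a CI build step.
# Assumes migration scripts are committed alongside the application code as
# migrations/001_create_users.sql, migrations/002_add_index.sql, and so on.
import sqlite3
from pathlib import Path

def apply_migrations(db_path="app.db", migrations_dir="migrations"):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}
    for script in sorted(Path(migrations_dir).glob("*.sql")):
        version = script.stem
        if version in applied:
            continue  # already applied by an earlier build
        print(f"Applying {version}")
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_migrations (version) VALUES (?)", (version,))
        conn.commit()
    conn.close()

if __name__ == "__main__":
    # A CI job can run this right after the application build; an unhandled
    # exception fails the build, surfacing database problems early.
    apply_migrations()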
Additional best practices consistent with the DevOps framework:
Finally, note that every DevOps implementation is unique to the organization adopting the framework. Database DevOps can conceptually take several guidelines from the application code DevOps playbook, and integrate database code development and management along with the application code for similar SDLC performance and efficiency gains.
DevOps is a data-driven software development lifecycle (SDLC) framework. DevOps engineers analyze logs and metrics data generated across all software components and the underlying hardware infrastructure. This helps them understand a variety of areas:
Extensive application monitoring and telemetry are required before an application achieves the coveted Service Level Agreement (SLA) uptime of five 9's or more: available at least 99.999% of the time, or roughly no more than five minutes of downtime per year. But what exactly are monitoring and telemetry, and how do they fit into a DevOps environment? Let's discuss.
(This article is part of our DevOps Guide. Use the right-hand menu to go deeper into individual practices and concepts.)
Monitoring is a common IT practice. In the context of DevOps, monitoring is the process of collecting logs and metrics data to observe and assess performance and compliance at every stage of the SDLC pipeline. Monitoring involves tooling that can be programmed to
The goals of monitoring in DevOps include:
(Explore continuous delivery metrics, including monitoring.)
Telemetry is a subset of monitoring and refers to the mechanism of representing the measurement data provided by a monitoring tool. Telemetry can be seen as agents that can be programmed to extract specific monitoring data such as:
Consider the case of motor racing, where fans get to see metrics such as top speed, G-forces, lap times, race position, and other information displayed on TV screens. Those measurement displays are the telemetry.
In contrast, the entire process of installing sensors, extracting data, and presenting a selected set of metrics on those screens is called monitoring.
In the context of DevOps, the most commonly measured metrics relate to the health and performance of an application, and the corresponding metrics are kept visible on a dashboard.
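As a small illustration of how such health and performance metrics are typically exposed for a dashboard, the sketch below uses the Python prometheus_client library (an assumption; no specific tool is prescribed here) to publish a request counter and a latency histogram that a monitoring system can scrape.

# metrics_demo.py - illustrative telemetry endpoint; assumes `pip install prometheus-client`.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total number of requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request():
    # Simulate a unit of work and record its telemetry.
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.2))
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        handle_request()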
Before discussing the various DevOps use cases of telemetry, let’s discuss the most common monitoring challenges facing DevOps organizations:
In order to address these challenges, DevOps teams use a variety of monitoring tools to carefully identify and understand patterns that could predict future performance of an app, service, or the underlying infrastructure.
Common use cases of telemetry in DevOps include the following metrics and scenarios:
Analysis follows monitoring. Telemetry doesn't necessarily include analyzed and processed logs or metrics. Decision making based on telemetry requires extensive analysis of a variety of KPIs, and that analysis can be integrated with monitoring systems to trigger automated actions when necessary.
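A minimal sketch of that last step might look like the following, where an error-rate KPI computed from collected metrics triggers an automated action once it crosses a threshold; the metric names, the 5% threshold, and the trigger_rollback() hook are all hypothetical.

# kpi_check.py - illustrative threshold check that turns telemetry into an automated action.
ERROR_RATE_THRESHOLD = 0.05  # hypothetical KPI threshold: 5% failed requests

def error_rate(metrics):
    total = metrics.get("requests_total", 0)
    failed = metrics.get("requests_failed", 0)
    return failed / total if total else 0.0

def trigger_rollback():
    # Placeholder for the automated response: roll back a release, page on-call, etc.
    print("Error rate above threshold; triggering automated rollback.")

if __name__ == "__main__":
    # In practice these numbers would come from the monitoring system's API.
    current = {"requests_total": 1200, "requests_failed": 84}
    if error_rate(current) > ERROR_RATE_THRESHOLD:
        trigger_rollback()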