In this course, discover how to implement source code management, continuous code builds using Maven and MSBuild, and automated functional and load testing. Explore deployment strategies, implement continuous deployment (CD) with various open-source tools, apply continuous monitoring, and build infrastructure as code using Puppet.
Key concepts covered here include the steps involved in implementing a continuous integration (CI) workflow and the risks that can be mitigated with CI; how to version and control source code using Git; and how to implement continuous builds using Maven and MSBuild.
Next, learn how to implement automated testing from the perspective of functional and load testing; describe the process of implementing CD with a focus on deployment strategies like Blue/Green and Rolling Upgrade; and set up end-to-end continuous delivery pipelines using open-source DevOps tools.
Then explore implementing infrastructure as code using Puppet to automate infrastructure deployment and configuration management; review the steps involved in implementing a CI workflow; and survey the prominent frameworks and tools that can be used to implement infrastructure as code.
Table of Contents
- Course Overview
- Continuous Integration Workflow
- Source Code Versioning and Control
- Patterns of Continuous Integration
- Continuous Build Using Maven and MSBuild
- Continuous Testing Best Practices
- Automated Testing
- Deployment Strategies
- Continuous Delivery Using Open Source DevOps Tools
- Continuous Monitoring and Benefits
- Frameworks and Tools for Infrastructure as Code
- Infrastructure as Code Implementation
- CI Workflow and Infrastructure as Code
Course Overview
Agile development brings many important practices, goals, and terminology that enable the transformation to a collaborative DevOps culture. This is evidenced through design practices such as modular design and microservices, continuous integration, continuous testing, continuous delivery and deployment, and continuous monitoring. In this course, I will examine how to implement source code management, continuous code builds using Maven and MSBuild, and automated functional and load testing. I will also explore how to adopt deployment strategies and implement continuous delivery and continuous deployment using various open-source tools, and how to apply continuous monitoring and build infrastructure as code using Puppet.
Continuous Integration Workflow
The objective of this exercise is to list the steps involved in implementing a continuous integration workflow and the risks that we can mitigate using continuous integration. In order to adopt DevOps practices, one of the essential implementations is continuous integration. Continuous integration is all about developers checking in code, the CI server initiating the build, tests running after the build, and the developer being notified post-test so they can act further. So, let's try to understand how this flow actually works.
You'll have developers who use IDEs, that is, integrated development environments, along with a source code management tool, to check out code from the source control system. Once a developer checks out the code and commits changes, the build process starts. After the build, developers are notified whether it succeeded or failed, and they refine the program further to ensure that unit as well as integration testing is successful.
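As a minimal sketch of the developer side of this flow, assuming a Git-based source control system and a CI server configured to build on every push (the repository URL, branch name, and commit message below are placeholders for illustration):

```
# Developer checks out code, commits a change, and pushes; the push is what kicks off the CI build.
git clone https://git.example.com/team/app.git   # placeholder repository URL
cd app
git checkout -b feature/login                    # work on a short-lived feature branch
# ...edit code in the IDE...
git add .
git commit -m "Add login feature"
git push origin feature/login                    # CI server detects the push, builds, runs tests, and notifies the developer
```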
Now, what are the risks associated with continuous integration? It all depends on the practices we have adopted. The first set of risks that can undermine the benefits of continuous integration relates to testing. The main testing risk is overreliance on manual testing, whereas DevOps is all about automated tests. Apart from that, you may not have the right balance of functional tests; sometimes you have too many functional tests to conduct, which again raises risk. There is also the risk of insufficient integration testing: after unit testing is completed, we need to integrate.
And we need to see the outcome, or the behavior of the code, when it is integrated with the rest of the codebase. If you don't do this properly and in an optimized way, you carry testing risk and may not get the desired outcome. The second area of risk is tool selection. If you select the wrong tool, it will create issues across the DevOps lifecycle; if you configure a tool improperly, it raises maintenance risk; and if you incorrectly integrate the tools you have selected, that may also cause failures. The third risk is associated with versioning of the code. If your code is not versioned properly, it may lead to poor maintenance of the code.
Apart from that, you also need to ensure that developers commit code to repositories that are not part of the delivery pipeline; if they are part of the delivery pipeline, you may not have isolation of the code. Another risk is associated with organizational confusion. If you don't have proper team collaboration in place, teams will be confused about their tasks and how those tasks impact other teams. To avoid that, you need to build proper collaboration protocols along with practices. Another risk that arises from organizational confusion is a confusing integration pipeline; in other words, it will not be straightforward, but rather complex and difficult to manage.
Now, how can we actually mitigate these risks? To mitigate risk, you need to adopt a process, and that process starts with risk identification. Identification of risk relates to the events, and the relationships defined on the object, where the risk arises. The second step is proper risk assessment; in other words, you have to calculate the probability and consequences of the risk associated with the task. Consequences may include factors like cost, schedule, technical performance, overall impact, and the impact on the capability of functional units.
Once you have done proper risk assessment, you have to do risk analysis. Risk analysis is all about identifying the right risks, building a decision matrix, ranking those risks in a particular order, and then identifying how you can mitigate them. And that is what you do in the final step: risk planning, identifying the implementation that may take care of the risk, and putting it under continuous monitoring to assess the recurrence of factors that may recreate the risk.
Now, let's try to mitigate the risks we have identified, starting with testing. To mitigate testing risk, you need to plan and create a comprehensive range of automated tests; in other words, you have to replace manual testing with automated testing. The second task is to conduct detailed integration tests. Identifying the adapter and adaptee among the systems and conducting the right integration tests with the right parameters is what mitigates the negative impact that poor integration testing may have.
And finally, include a balanced set of functional tests instead of having too many. From a tooling perspective, you need to do extensive research before you select a tool. It is also recommended that you do small POCs and then identify the right fit. Identifying the right tool in the early stages, that is, while planning, is essential so that you can derive the right delivery pipeline and operational flow. To take care of the risk raised by unversioned code, you need to ensure that developers commit code to a single distributed version control system instead of having their own individual version control systems.
You also need to apply certain practices and protocols and enforce those version control mechanisms. Finally, to mitigate the risk raised by organizational confusion, you need to enforce the right policies and practices. Apart from that, you also need to train all stakeholders and team members to adopt and participate in Agile and DevOps practices.
Source Code Versioning and Control

The objective of this exercise is to demonstrate how to version and control source code using Git. Version control is a type of system that allows us to keep track of changes made to code, files, or other resources. We can utilize Git to manage versions and to maintain different levels of source code in different branches, depending on the coding practice adopted.
Now, let's start with some of the basic steps you have to take to implement versioning and maintain the history of changes. First, you need to ensure that you have Git installed. Once Git is installed, you can start working with Git commands to manage the history of commits made to the code. Our first step will be to initialize a Git repository, and to do that, let's create a directory first.
To create a directory, we'll use mkdir followed by the name, testrepo3 in this case, and press Enter. Then move into the directory you just created.
Then you have to initialize the repository by writing git init and pressing Enter.
Once you press Enter, you'll find that a .git folder has been created in the directory. Now let's see whether any modification was made or whether something is not yet committed. To do that, we will write git status.
Once you write git status, it tells you what is happening on which branch. As of now, no commit has been made. Now let's create a small file in testrepo3. To do that, we'll use Notepad to create the file. For example, assume we are creating Test.java.
You have to specify the code here; for example, we'll write class Test and add a small piece of code. We need to ensure that we save the file.
So now we have created the file and we are saving it in the directory that is our Git repository. We'll click on Save, move back to the Command Prompt, and again write git status.
Now once you write git status, it clearly tells you which files are untracked. It also indicates nothing added to commit, but untracked files are present. To make this file tracked by Git, you have to use the git add command and specify the name of the file. For example, we'll specify Test.java and press Enter.
After pressing Enter, let's check git status again.
Now when you see git status, you'll find that it clearly indicates that you have a file which is now tracked, but it is not committed yet. Before we move further, let's clear the screen.
Now we will commit the code. To do that, we'll write git commit -m followed by the message we want. We'll say "first commit" and press Enter.
Once you press Enter, you'll find that it indicates that one file has changed and there are eight insertions. Now let's visualize the log of all the commits made. To do that, we'll write git log.
Once you specify git log, you'll find that it tells you what commit was made, when it was made, and who the author is. Now let's clear the screen once again by writing cls.
Now let's check the changes made in this specific commit. For that we need the commit ID. To get the commit ID, we will write git show.
Once you write git show, it will display the commit, and just beside the word commit, it gives you a hash which starts with d. We'll copy the commit ID. After copying the commit ID, we want to see the changes made in that particular commit. To do that, we'll write git show, specify the commit ID, and press Enter.
Once you press Enter, it will display the details about the commit that was made. Now let's make some changes to the existing file which is already committed. We'll be adding a line here.
We have added another System.out.println, and we have written a message there saying hello changing. We'll save the file again.
After saving the file, we'll come back to the Command Prompt and clear the screen. After clearing the screen, we'll write git show again.
Once you write git show, it shows you the details of the commit that was already made. We need to check git status to know what happened to the resources that are part of the repository.
Now we can clearly see that after we made the change, git status indicates that something was modified in the file. Let's go ahead and commit the recent change as well. To do that, we'll write git commit -m followed by the message "this is second commit" and press Enter.
Once you press Enter, you'll find that it displays that no changes were added. So we need to add the changed file again by using git add and specifying the name of the file.
After adding the file, we'll execute git commit again.
Once you execute git commit, you'll find that one more commit is made. Now let's clear the screen once again and write git show.
Once you write git show, you'll find that it displays the current commit, where the added line is indicated with a plus sign in green.
Once you write git log, it will show you all the commits made to the current code. Now suppose you want to revert to the previous commit, or to whichever commit you want if the number of commits is large.
To do that, you will write git reset with the --hard option, and you have to specify the commit ID. Let's copy the commit ID of the first commit that was made and pass that commit ID. This ensures that the repository goes back to the specified commit: all changes since that commit will be lost, and HEAD will now point to the specified commit. We'll press Enter.
Once you press Enter, you will clearly find that HEAD is now pointing to the first commit. We'll write git show.
Once you write git show, you'll find that the recent change is gone and you have reverted to the previous version. By following this approach, you will be able to manage code as well as versions.
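As a quick recap, here are the Git commands used in this walkthrough, in the order they were run; testrepo3, Test.java, and the commit messages are the example names from above, and <commit-id> stands for the hash printed by git log or git show:

```
mkdir testrepo3 && cd testrepo3
git init                          # creates the .git folder
git status                        # shows untracked or modified files
git add Test.java                 # starts tracking the file
git commit -m "first commit"      # records the first commit
git log                           # lists commits with author and date
git show <commit-id>              # displays the changes in a specific commit
git reset --hard <commit-id>      # moves HEAD back to that commit and discards later changes
```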
Patterns of Continuous Integration

The objective of this exercise is to specify the best practices and patterns that we can adopt to implement continuous integration. Patterns always evolve, and they are documented and shared for future use. Before we talk about patterns, let's talk about the best practices that we need to adopt in order to implement continuous integration properly.
Implementing time-boxed iterations is one of the best practices; it indicates that organizations should always have multiple fixed-length, short-duration iterations for each release. An iteration can span from one week to four weeks, depending on the complexity of the project. The second important best practice is to ensure that we perform short tasks with frequent commits.
This is the responsibility of the development team. The development team needs to hold daily standups, and after their work is completed, ensure that whatever code they have written for a particular feature set is committed before they move on to another task. The third important best practice is to ensure that features are prioritized. In other words, you need to be clear about which feature has to be released early, after identifying its dependencies; this responsibility lies with the product owner.
You also need to ensure that continuous integration has a daily build, and this daily build should be comprehensive. It is performed to ensure that the target development is merged at the end of the day and provides a clean baseline for the next day's development tasks. And finally, a best practice is to facilitate iterations and retrospective walkthroughs. The meaning of a retrospective is to identify what went wrong and how it can be taken care of in further iterations.
Now let's categorize patterns of continuous integration into three broad categories. First is artifact management. Artifact management deals with handling and sharing artifacts, which can be done using a centralized repository. Source code modularity is another category of patterns.
The objective of source code modularity is to ensure that developers have adopted an interface-driven development mechanism, which helps them keep the code reusable. The third important category is all about build and execution. Build and execution is essential to ensure that code is built and, before it is deployed, executed to test whether there are integration bugs or broken builds.
Now let's talk about the patterns that can be used for artifact management. Artifact management patterns can be classified broadly into three categories. First is the single shared library pattern. You will have a central repository, and all the development teams will commit their code to that single centralized repository.
It provides a concise way to ensure that artifacts are available to all developers, and you can apply granular security to make artifacts visible to the right team. The second pattern is the installer pattern. In the first iteration of the Agile process, you would create an installer.
The objective of the installer is to be able to install the program, and it can be managed by the development team. You need to ensure that your installation program is cross-platform. The third artifact management pattern is the patch management pattern, which is all about generating patches for an existing installer or an existing feature. Now let's talk about source code modularity patterns.
Interface-driven source is what is advocated most of the time. It helps to define programs, and interfaces help you decouple your implementation from the specification. The second pattern is the platform-independent module; in other words, you have to focus on developing modules that do not depend on a particular platform. If your module depends on a particular platform, it will have a high level of coupling. The third pattern is the native module pattern.
The native module pattern classifies implementation code into two different parts in order to get the right level of modularity. The first part is independent, which does not depend on any platform; the second part is dependent, also called native. You have to ensure that you place all the native code in the native module. Now let's talk about the third pattern category, which is build and execution.
Build and execution patterns guide how your build and execution process should be controlled. We'll talk about the first pattern, which is the local and remote build pattern. When we talk about the local and remote build pattern, the objective is to perform a build locally. Now, there are two different kinds of local environments.
The first is used by developers, where they develop the code. The second is present with the CI system, which is also called the local build CI. We can use a single CI system to deploy the build for a native module to remote platforms. The remote platform takes responsibility for combining all the target platforms which exist in the current lifecycle. Now let's talk about the second important build and execution pattern, called the integration workflow pattern. The objective is to design integration workflows that control build jobs. There are two essential types of integration workflows.
First, the intra-project workflow, and second, the inter-project workflow. An intra-project workflow stays within a single project, whereas an inter-project workflow spans multiple projects. And finally, we can adopt the single responsibility pattern, where we assign one person in the development team to get notified when a build is broken. That single responsible person has the responsibility and full authority to make sure that a broken build is fixed as soon as possible.
Continuous Build Using Maven and MSBuild

The objective of this exercise is to demonstrate how to implement continuous builds using Maven and MSBuild. To incorporate the build capabilities of Maven and MSBuild, we are using Jenkins.
To create a project that helps you configure builds using Maven and MSBuild, first we need to launch Jenkins. After launching Jenkins, you will log into it, and once you log in, you will get a dashboard. On the dashboard, your first objective should be to ensure that you have the tools installed. To do that, you will click on Manage Jenkins.
Once you click on Manage Jenkins, you will get various options for managing the Jenkins configuration. We'll click on Manage Plugins.
Once you click on Manage Plugins, it will show you all the plugins which are available for update, which are available to install, and which are already installed. We'll click on Installed and look for MSBuild to see if it is installed. We'll find that the MSBuild Plugin is already installed. We will also look for the Maven Integration plugin, so that Jenkins can automatically run Maven tasks, and that task here will be the build.
We have both installed. If a plugin is not installed, you install it by clicking on Available, selecting the right plugin, and clicking on Install. After the installation is confirmed, you'll click on Back to Dashboard and then click on New Item.
After clicking on New Item, you'll give the item a name, say test67, and select Freestyle project. After selecting Freestyle project, we'll click on OK. Once you click on OK, it will show you the various configurations that you have to do.
Your first configuration is to decide how the general properties will be managed, and this is where you control the build process. You can select This build requires lockable resources if you want to run the build only after resources are locked. Depending on the requirement, you can also allow concurrent builds; for now, we'll select Execute concurrent builds if necessary.
Then you have to specify Git under Source Code Management, pointing to where your source code is available; it will be checked out when you execute the build process of the project. The Git repository can be remote as well as local. Let's specify a local repository.
Once you specify the local repository where the code is, your objective will be to plan how the build will be triggered. We can select Poll SCM, which will poll as per the Schedule that you specify.
To get help with the Schedule, you can click on the question mark; it lists various example Schedules.
If you want to poll every 15 minutes, you can copy that example (for instance, H/15 * * * *) and paste it into the Schedule of Poll SCM.
After doing that, click on the question mark again to close the help. You'll find that the help is closed, the Schedule is ready, and it clearly tells you when the build will be triggered. Now you have to decide the Build Environment settings. Here you can choose whether to delete the workspace before the build starts so that the build runs clean; if you want to do that, select Delete workspace before build starts.
Now your next objective is to set up the build steps that you want. For example, we'll click Add build step and select Invoke top-level Maven targets, where we'll specify the goal we prefer, such as test or install, which will be executed when the build is fired.
We will add another build step, and that one will be for a project that needs to be built from a Visual Studio solution.
We'll select it and then specify the Build File. We selected the solution from one of the projects located in our Visual Studio workspace. Apart from specifying the Build File, you can also specify any Command Line Arguments that you want.
Next, we can select a post-build action, and that post-build action can be an E-mail Notification that your build is successful.
Finally, you will click on Save so that your project is ready, and the build will be fired according to the configuration we have given, as scheduled.
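For reference, the two build steps configured above are roughly equivalent to running the following commands on the build agent; this is a hedged sketch, and MyApp.sln and the Release configuration are placeholder values:

```
mvn test                                      # "Invoke top-level Maven targets" with the test goal
mvn install                                   # or the install goal, if that is preferred
msbuild MyApp.sln /p:Configuration=Release    # MSBuild step with a Build File and Command Line Arguments
```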
Continuous Testing Best Practices

The objective of this exercise is to recall the best practices that we can adopt to implement continuous testing along with the importance of continuous testing in DevOps.
Testing early and continuously is important; when we do that, we are able to detect bugs early and resolve them early as well. Let's talk from the perspective of the testing flow. We first need to write the test script, keeping the objective in mind. After writing the test script, we can publish it to Git, which is our repository, and then trigger the build.
Once we trigger the build, the test script will be compiled, and after compilation it will be executed. Once it is executed, you can see the test execution outcome either in the form of graphs or in the form of metrics. You'll keep submitting test scripts until you are satisfied with the outcome of the tests.
Now, what are the best practices that we need to adopt for testing? The first best practice is to test early and often. Second, try to automate most of the tests. The third best practice is to focus on providing value to the business.
In other words, when you test, your objective should be to understand what value the candidate under test needs to provide to the business. You also need to adopt a lean testing approach to ensure that the cost of testing is not huge.
And finally, you can implement appropriate test techniques, which can be mock testing, smoke testing, clone testing, or various other testing standards that can be adopted. Now, from the perspective of DevOps, how does continuous testing help, and what tasks are played by the roles involved in DevOps? DevOps involves three critical team players.
First is the developer. Developers begin continuous testing by testing functionality; in other words, they will mostly be doing functional or performance tests. Operations performs continuous integration testing to ensure that the changes developers deliver behave in the expected manner. And finally, QA analysts need to ensure that they run all the tests in parallel to keep the processes moving fast. In other words, we have different roles, each role is critical to continuous delivery, and all of them are required for the entire value stream.
Automated Testing

The objective of this exercise is to demonstrate how to implement automated testing from the perspective of functional and load testing. To automate both functional testing (which includes unit testing) and load testing, we need to utilize a continuous integration tool, which is Jenkins.
After installing Jenkins, we have to log in. To log in, we specify the username and password and click on Sign in.
After clicking on Sign in, you have to configure Jenkins so that functional and load testing can be done. To do that, we'll go to the left panel and click on Manage Jenkins.
After we click on Manage Jenkins, we have to install the required plugins.
We'll click on Manage Plugins and look for the plugins required to do functional and load testing.
For functional testing, the plugin depends highly on the language that you are using. For example, if you are using Microsoft .NET, you can use MSTestRunner; we'll select it for installation and click on Install without restart. We'll find that it starts the installation of MSTestRunner.
We'll go back to the top page, and now we have to install a plugin for performance or load testing.
To do that, we'll again click on Manage Jenkins and then Manage Plugins. After we click on Manage Plugins, we'll go to Available.
In Available, we'll search for Performance, and you'll get a list of performance testing plugins. We'll select Performance depending on what performance testing you want to do and what tool you want to configure. For example, assume that your administrator has installed Apache JMeter and wants to utilize it. So we'll select Performance and click on Install without restart.
Next, in the job configuration, we have to specify the Repository URL. You can get the Repository URL from the code base or the IDE where you have written the code.
To find the location, we can right-click, click on Properties, and note the Location of the Git repository that contains your code along with the functional tests. Then you specify that as the Repository URL.
Now we have added the Repository URL where all the code base is present. You can also tune the branch. Our next objective is to decide how testing will be done. For testing, we'll go to Build Environment and select Show tests in progress, and then we will Add build step. When you add a build step, you will get all the different types of build steps which are available.
For example, if our code is Microsoft-based, we can select Run unit tests with MSTest. Else, depending on the performance test that we want, we can select Run Performance Test. So depending on the kind of test that you want, you select accordingly. Since our tests are integrated with Maven, first we will select Invoke top-level Maven targets and specify the test goal. It will automatically look for all the test cases that you have written and execute them.
Apart from the functional tests, we also intend to do a performance test. For that, we will select Run Performance Test and specify the tool parameters that we want. For example, we'll enter a tool parameter called run, and we'll click on Save.
Now you have your project ready, and once you invoke the build, you'll find that it runs the unit tests, which are the functional tests, as well as the load testing. But there is a dependency: you need to ensure that the load testing environment is also installed.
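For reference, the two test-related build steps correspond roughly to the following commands; this is a sketch, and loadtest.jmx and results.jtl are placeholder file names:

```
mvn test                                    # runs the functional/unit tests found by the Maven test goal
jmeter -n -t loadtest.jmx -l results.jtl    # non-GUI JMeter run; the results file feeds the performance report
```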
Deployment Strategies

The objective of this exercise is to describe the process of implementing continuous deployment with a focus on various deployment strategies. Continuous deployment is an integral part of the DevOps process. It starts with developers contributing code. Once code is contributed, it is tested and integrated; after integration, there will be story tests and a build for UAT.
Post-UAT, the build has to be deployed to production. The primary difference between continuous delivery and continuous deployment is the approach to deployment: continuous delivery requires manual intervention, whereas continuous deployment does not. Now let's talk about the deployment strategies that can be adopted.
They are recreate, canary, rolling update, A/B testing, blue-green, and shadow. Let's talk about these deployment strategies in detail. When we talk about recreate, it is a scenario where version one is terminated and then version two is rolled out. The benefit of using recreate is the simplicity of setting it up, and the application state is entirely renewed.
In other words, it will not carry the backlog of the previous application. The drawback is that it may have a high impact on user experience. You may have certain users who had created sessions in the previous version; when you roll out the new version, those sessions will probably be lost and the users may have a poor experience. Now let's talk about rolling update.
A rolling update is all about incremental deployment. In other words, version two is slowly rolled out and replaces version one. The benefits associated with rolling update are a simplified setup and a slow version release across multiple instances. Apart from the slow version release, it also works well when you have a stateful application, in other words, where state is persisted.
Rolling updates adopt the principle of rebalancing the sessions. When we talk about drawbacks, a rolling update may take a considerable amount of time, it is difficult to support multiple different APIs (the previous version's API as well as the new version's API), and traffic management is complex because it has to manage state.
The third important strategy is blue-green, which is widely adopted. Blue-green means that version two is released alongside version one, and then traffic is switched from version one to version two. The benefit of blue-green is its instant rollout or rollback, which avoids any confusion raised by versioning issues. Another benefit is that it changes the complete application state in a single go without impacting user experience.
The drawback is that blue-green deployment may be expensive because it requires an equal amount of resources for each version. Apart from that, it also requires proper testing before you do the production release. And finally, it is difficult to handle stateful applications if there are certain states which are still persisted and being used by users.
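As a minimal sketch of a blue-green cutover, assuming two identical environments sitting behind an nginx reverse proxy; the helper script, hostnames, and config paths below are illustrative placeholders, not a definitive implementation:

```
./deploy_app.sh green                                # hypothetical helper that deploys version two to the idle (green) environment
curl -f http://green.internal.example.com/health     # verify green is healthy before switching any traffic
ln -sfn /etc/nginx/upstreams/green.conf /etc/nginx/upstreams/active.conf
nginx -s reload                                      # traffic now flows to green; rollback is re-pointing the symlink to blue
```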
Next is canary, where we follow the principle of releasing the new version to a subset of users and then proceeding with the full release to make it available to all users. The benefit is that it releases the version to a subset of users, gains experience from there, and improves on it. Another benefit is that it is convenient for error rate and performance monitoring.
In other words, you will have a better monitoring outcome if you have adopted canary as your deployment strategy. Another benefit is fast rollback: you can revert to the previous state quickly. The drawback is a slow rollout, because you have to capture the experience of the subset of users before you move forward and make the release fully available to all users. Then comes A/B testing.
In the case of A/B testing, the new version is released to a subset of users under certain identified specific conditions. The benefit is that you will have different versions running in parallel, and you will have full control over traffic distribution. The drawbacks are that it requires a load balancer and, apart from the need for a load balancer, whenever there is an error, tracing and eliminating the error is difficult.
Finally, you have shadow. Shadow means that version two receives real-world traffic alongside version one, but it does not impact the response given back to the users. The benefit is that it enables performance testing and provides predictions on production traffic.
It never impacts user experience, and no rollout is required before achieving stability; in other words, you roll out the new version only when your system is stable. The drawbacks are that it is expensive, because it again needs twice the resources; it does not test real user responses, so it may give misleading outcomes; it is complex to set up; and it requires a lot of mocked services for various use cases.
Continuous Delivery Using Open Source DevOps Tools

The objective of this exercise is to demonstrate how to set up end-to-end continuous delivery pipelines and implementations using open-source DevOps tools. There are various open-source DevOps tools on the market; for each stage of the DevOps lifecycle you will probably need a tool. So, let's start understanding how all these tools collaborate, and to do that we are using the open-source tool called Jenkins.
Then we will see how it can be configured with other DevOps tools that can help you implement end-to-end continuous delivery pipelines. The first task is to ensure that Jenkins itself is installed; the installation process heavily depends on the operating system. We have installed Jenkins on Windows, which is a simple install: it comes with an MSI file that can be executed, and you follow the wizard to install it. Once it is installed and ready, you log in using the user ID and password and click on Sign in.
Once you click on Sign in, your objective should be to configure all the tools required for the continuous delivery pipeline, and to do that, first we need to install all the required plugins. To install the plugins, we'll go to the extreme left and click on Manage Jenkins.
Once you click on Manage Jenkins, it gives you various options for controlling the way Jenkins should behave. We'll click on Manage Plugins.
After clicking on Manage Plugins, we need to ensure that some of the plugins are installed. For example, for code commits, or source code management, you have to ensure that Git is installed. All the installed plugins are displayed in the Installed tab, so you can check there for Git. If Git is installed, it means that source code management is ready.
Once you select the Available tab, you will get a list of all the available plugins that may participate in the continuous delivery pipeline. Now we are going to select where our application will be delivered, and for that we need the right connectors, which are provided by various agent launchers and controllers. Assume that we are planning to deliver our code not only to Amazon but also to Azure. We'll select Azure VM Agents as well as Amazon Elastic Container. Our next objective is to find out whether a containerization plugin is there or not. For that, we will go to the search field and write docker.
Once you write docker, you will get docker-build-step. So if you want to build jobs that execute on Docker, or you want to execute Docker commands to push or pull artifacts, you need docker-build-step as well.
After selecting all the necessary continuous delivery pipeline connectors, you click on Install without restart. We have now selected docker-build-step, which comes with Variant and provides virtualization capability, and cloud connectors that can connect to Amazon, Azure, and CloudBees to provide continuous delivery pipeline capabilities. We'll wait for the installation to finish. Once all the required plugins are installed, you'll click on Go back to the top page.
When you later go to Post-build Actions in a job, you'll find that whatever plugins you have installed appear there, and you'll be able to deliver the code to Azure, AWS, as well as Docker.
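As an illustration of the kind of delivery step the docker-build-step plugin can drive, a job might package and publish the artifact as an image; the registry address and image tag below are placeholders:

```
docker build -t registry.example.com/myapp:1.0 .    # build the image from the workspace Dockerfile
docker push registry.example.com/myapp:1.0          # publish it so downstream environments can pull and deploy it
```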
Continuous Monitoring and Benefits

The objective of this exercise is to recognize the benefits of implementing continuous monitoring in DevOps pipelines. Continuous monitoring is an essential task. It is used to identify the right metrics and observe those metrics in order to derive the performance of the application as well as the operational components.
To understand continuous monitoring and its place in the pipeline, let's define the pipeline. The pipeline starts with development and automated builds. You'll adopt certain build tools to automate the build process, so that we can start the continuous integration process. The continuous integration process is all about compilation, unit tests, and static code analysis.
This ensures that whatever artifacts are created or generated are up to standard and error free. Then you move to continuous delivery, which deploys your artifacts into multiple environments, where an approval workflow may be required. Then it moves to continuous testing, where you automate different types of tests, including functional tests, load tests, and security tests.
Once testing is done, you move to continuous deployment, where you adopt production deployment, with or without an approval workflow, for proper governance of deployed artifacts. Finally, whatever you have deployed must have event hooks so that the behavior of the different components and applications is monitored. You can also set up alerts so that whenever there are troubles or warnings, your continuous monitoring system takes responsibility for notifying you. Now let's talk about continuous monitoring in the value stream.
The value stream is all about identifying the right set of steps to complete the operations you have adopted to provide the solution. It has three different stages. The first stage is all about applying controls, to achieve the regulations and controls that can be applied in order to reduce cost. And that is only possible if you improve the operations.
Operations are improved by identifying the right automation techniques, along with the various components that are required to participate in the improvement of operations. And finally, you need to ensure that you apply technologies to optimize the processes to get the best outcome out of your existing system.
Now let's try to understand the benefits that we get out of continuous monitoring. If you monitor your system continuously, you'll be able to identify progress and build instant feedback. Apart from that, monitoring has the objective of indicating problems, and early identification of problems may help you align your business properly with the underlying processes.
Monitoring also helps you identify risk. If you are running short of resources, the monitoring system will take responsibility for triggering an alert with complete information about what resource needs to be scaled up. There are also situations when you may have control violations.
Continuous monitoring helps in identifying control violations, which can be rectified by adopting the right strategy. And finally, continuous monitoring also provides detailed information about exceptions and how those exceptions can be remediated by applying appropriate analytics to the monitored artifacts.
Frameworks and Tools for Infrastructure as Code

The objective of this exercise is to name the essential frameworks and tools that we can use to implement infrastructure as code. Let's try to understand infrastructure as code first, before we start discussing tools and frameworks. There will be scenarios when you would like to replicate a similar kind of infrastructure, perhaps in a different data center or on a different cloud provider.
Recreating all the components and configuring them becomes tedious. To simplify the task, we can write a script that can provision all the resources required to set up the infrastructure.
This is the reason we use the term infrastructure as code. The fundamental objective of infrastructure as code is to manage and provision data centers by using definition files.
Apart from that, we also need to understand that infrastructure as code should use the same versioning as used by the rest of the DevOps team. For example, if the DevOps team is using version 1.1, your infrastructure as code will also be versioned as 1.1, indicating the relationship between the application artifact and the infrastructure artifact. Infrastructure as code also provides the ability to evolve, in order to resolve issues such as environment drift.
Infrastructure as code frameworks can be classified into three types. The first type is the declarative framework. Then we have the imperative framework. And finally, we have environment-aware frameworks. Declarative frameworks use declarations, typically written in YAML or other functional languages. Imperative frameworks, on the other hand, use a full programming language containing statements, and those statements may use various expressions to define a resource and configure it.
Finally, we have environment-aware frameworks. These are intelligent systems: they determine the correct desired state before the system executes, and depending on the state of the system, they reconfigure or auto-configure themselves in order to give the desired outcome.
Finally, let's talk about certain tools used for infrastructure as code. You have Chef, SaltStack, Ansible, and Puppet. You can use any one of these, but the choice depends on the skills available, the cost involved, and the learning curve required to train your team to utilize the tool to automate infrastructure using code. Let's talk about them in detail, starting with Chef. Chef adopts a declarative as well as an imperative approach.
In other words, it gives you the opportunity to make declarations as well as to write procedures, using a language the tool understands, in order to orchestrate infrastructure as code. Chef is written in Ruby. The benefit of using Chef is that it provides scalable automation. Apart from scalable automation, it also provides software and infrastructure change management; in other words, it understands a change and how that change will impact the existing infrastructure.
And finally, you can run analytics on the components of the infrastructure. Now, let's talk about another popular tool called Puppet. Puppet uses a declarative framework, and it is written in Ruby. The benefit of using Puppet is that it enables you to build dynamic policies and to work with real-time data instead of static data.
Similar to Chef, Puppet also provides scalable automation. The third popular tool is SaltStack. SaltStack again adopts declarative and imperative approaches. Unlike Chef and Puppet, it is written in Python. It provides the capability of controlling end-to-end configuration management and also orchestrating various components. Apart from that, using SaltStack you can provision new servers and IT infrastructure from scratch. Another benefit of SaltStack is its availability: it is available on all popular cloud providers, including AWS, Azure, and Google Cloud.
Finally, we'll talk about Ansible. Ansible simplifies the job of automating infrastructure using code. It has certain features which make it a leader in the field of infrastructure as code. It again adopts declarative and imperative approaches, and it is written in Python. A benefit of using Ansible is its support for asynchronous actions and polling.
Apart from that, it supports delegation, rolling updates, and local action patterns in order to manage infrastructure. Finally, Ansible can be utilized to do end-to-end IT configuration, deployment, and orchestration of the various components that make up your infrastructure.
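To make the declarative style concrete, here is a minimal sketch of infrastructure as code with Puppet, assuming the Puppet tooling is installed on the node; the nginx package and service are illustrative choices:

```
# Write a small manifest describing the desired state; Puppet converges the node to match it.
cat > webserver.pp <<'EOF'
package { 'nginx':
  ensure => installed,
}
service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],
}
EOF
puppet apply webserver.pp    # apply the manifest to the local node
```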
Infrastructure as Code Implementation

In order to provide an environment that takes care of infrastructure as code, you can install Puppet either in the cloud or on-premises. We are going to install it in the cloud, and the cloud that we have selected is AWS. Our first task is to sign in to the console, and for that, we will click on the Sign in to the Console button, which is on the extreme right.
Once you click on Sign in to the Console, you will come to the AWS Management Console. In the AWS Management Console, under Find Services, there is a text field where you can write puppet. Once you write puppet, it will offer OpsWorks, which is a service provided by AWS to help you install the tool for orchestration of infrastructure. We'll click on Puppet.
Once you click on Puppet, it takes you to a welcome page where you can Create Puppet Enterprise server and provide access to the server to the developers who are writing code to automate infrastructure. We'll click on the Create Puppet Enterprise server button.
Once you start creating it, it will ask you for a server name. We'll name it automateinfra, and we'll select the region where we want this server to be. We'll keep the defaults for now and click on Next.
Once you click on Next, it will ask whether you want to connect to the Puppet server using SSH. We'll say that we are not connecting by SSH.
It is recommended that you never provide access directly to the Puppet Enterprise server; rather, you should use the Puppet Enterprise client tools to execute the code that you have written. Now let's keep everything default and click on Next. Once you click on Next, it brings you to the configuration page, or advanced settings, where we will not make any changes for now.
It will generate a security group and automatically allocate a role and an instance profile. We can change the instance profile if we want, or generate a new one.
For now, we'll go with the defaults. Finally, we can click on Next, which brings you to the Review page and asks you to verify all the settings you have specified. We will click on the Launch button. Once you click on Launch, it starts deploying the Puppet Enterprise server that you will use to manage infrastructure and implement infrastructure as code using Puppet. Once your Puppet Enterprise server is ready and launched, you can log in to the web console, where you can see all the infrastructure components.
It will also let you run the various Puppet tasks deployed by the Puppet developers who have written the infrastructure as code. Ensure that you download the credentials and retain them for later use. For now, we can click on Show sign-in credentials, which we will use once we log in to the web console. To log in to the web console, we'll click on Puppet Enterprise console.
You may get a certificate warning because you don't have a trusted certificate yet. So, to be able to log in initially, you'll click on Advanced and then click on Accept the Risk and Continue.
Now you'll come to the login page, where you can use the credentials provided by the enterprise server and click on Log in.
Now you have the dashboard, where you can clearly see how many of the nodes are running in enforcement state, how many nodes are running in no-op state, and how many nodes have no report or are unresponsive.
You can utilize the web console we see here to run various types of jobs; for example, you can run a Puppet job or run a Task.
You can give the command that is displayed here to administrators, so that they can make their existing systems part of the Puppet Enterprise server and have them managed from there. After adding all the nodes, developers will deploy their code to the enterprise server, and it will be invoked whenever we need to automate the infrastructure.
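The console shows the exact command to use; a typical Puppet Enterprise agent install and first check-in look roughly like this, where <puppet-master> is a placeholder for your server's hostname:

```
curl -k https://<puppet-master>:8140/packages/current/install.bash | sudo bash   # install the agent from the server
sudo puppet agent -t                                                             # trigger a run so the node reports in and is managed
```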
CI Workflow and Infrastructure as Code

After completing this exercise, you will be able to recall the steps involved in implementing a continuous integration workflow, list the prominent frameworks and tools that can be used to implement infrastructure as code, and implement infrastructure as code using Puppet.
Now it's time to test your knowledge and understanding of what we have learned in this course. In this exercise you will list the prominent categories of patterns that can be used to implement continuous integration, list the tools that can be used to implement infrastructure as code, and implement infrastructure as code using Puppet. I want you to pause this video, attempt all of the exercises, and then come back to view the solutions, so you can see how well you did.
The essential pattern categories for continuous integration are artifact management, source code modularity, and build and execution. Some of the popular tools that can be used for infrastructure as code are Chef, Puppet, SaltStack, and Ansible. To utilize Puppet for infrastructure as code, you first have to install a Puppet Enterprise server. We'll illustrate the essential steps required to install a Puppet Enterprise server to which developers can commit code and deploy it, so that the code can be executed from the web console to automate the infrastructure.
To do that, you go to your AWS account and select Puppet Enterprise servers. Under Puppet Enterprise servers, you'll find Create Puppet Enterprise server. Click on it and specify a name. Select the instance type that you want and click on Next. Once you click on Next, it will ask you for various configuration parameters; we will keep everything default and click on Next. Next, it will ask us to configure network and security; again we will keep everything default and click on Next. Finally, it will go to the Review page and give you the opportunity to click on Launch to start the Puppet Enterprise server.
Now, since all of the Elastic IP addresses in this account are exhausted, the server will not actually get created here. But once it is created, you will find that it displays something similar to automateinfra. You'll click on the Puppet server that you want to use, click on Show sign-in credentials to get the credentials, and click on Puppet Enterprise console. Once you click on Puppet Enterprise console, it will give you a warning related to the SSL certificate; click on Advanced to ignore it. Once you click on Advanced, it provides a button called Accept the Risk and Continue; click on it. It will then take you to the login page, where you can specify the USERNAME and PASSWORD and click on Log in. Once you click on Log in, it launches the dashboard, from where you can control the execution of the infrastructure as code that developers submit and deploy.
Not only that, from here you can manage Nodes, Packages, Jobs which are running, tasks which are executing, task plans, and schedules. In order to run code that is deployed by developers, you have to go to Run.
Under Run you will get Puppet, where you can decide which mode you want to run in, the environment where you want to execute, and the inventory type that you want to select, whether it is a Node list, Node group, or PQL query.
Once you have made the desired selections and developers have deployed the code, on execution of that particular Puppet task you'll find that it builds up the infrastructure, provisioning it with the script present in the code. So this is how you automate infrastructure, driven by code developed by the Puppet developers.