CompTIA Cybersecurity Analyst+ Vulnerability

Vulnerability Assessments

Vulnerability scanning identifies host and network vulnerabilities and must be an ongoing task. Penetration testing is an active security method by which there is an attempt to exploit discovered vulnerabilities. In this course, you will discover how to plan for, schedule, and execute vulnerability assessments, identify common vulnerability scanning tools, and conduct an nmap scan.

Next, you will use Nessus and Zenmap to execute security scans and test web app security using the OWASP Zed Attack Proxy (ZAP) tool. Then you will explore penetration testing and the Metasploit framework and use the Burp Suite tool as an HTTP intermediary proxy.

Finally, you will learn how to manage Azure policy, investigate potential indicators of compromise, and examine how IT security relates to industrial control systems. This course can be used to prepare for the CS0-003: CompTIA Cybersecurity Analyst+ exam.

Table of Contents

  •     Course Overview
  •     Vulnerability Assessments
  •     Common Vulnerability Assessment Tools
  •     Using nmap to conduct Port Scanning
  •     Conducting a Network Vulnerability Assessment
  •     Using Zenmap for Network Scanning
  •     Testing Web Application Security
  •     Penetration Testing
  •     Navigating the Metasploit Framework
  •     Using Burp Suite for HTTP Sniffing
  •     Viewing Cloud Resource Security Compliance
  •     Threat Hunting
  •     Critical Infrastructure

Vulnerability scanning identifies host and network vulnerabilities and must be an ongoing task. Penetration testing is an active security method by which there is an attempt to exploit discovered vulnerabilities. In this course, I will start by discussing how to plan for, how to schedule, and how to execute vulnerability assessments, followed by identifying common vulnerability scanning tools and then conducting an Nmap scan.

Next, I will use Nessus and Zenmap to execute security scans. I will then use the OWASP ZAP tool to test web app security. Moving on, I will discuss penetration testing, explore the Metasploit framework, and use the Burp Suite tool as an HTTP intermediary proxy. Lastly, I will manage Azure policy, discuss potential indicators of compromise, and examine how IT security relates to industrial control systems. This course can be used to prepare for the CS0-003: CompTIA Cybersecurity Analyst+ exam.

Vulnerability Assessments

Vulnerability Assessments. This is an ongoing task, and it’s sometimes required for compliance with various security standards or regulations. The purpose of a vulnerability assessment, of course, is to identify weaknesses before they get exploited so that we as cybersecurity analysts can do something about them. This means detecting any potential weaknesses at the network level, perhaps with network perimeter devices like Wi-Fi routers, regular routers, or firewall appliances. But we can also use vulnerability assessments to gauge the security posture of individual hosts, whether it’s a server, a client desktop or laptop, or a mobile device like a phone or a tablet.

All of these things need to be periodically scanned because vulnerability assessments are the best tool that we have to truly assess risk, so it feeds directly into risk management. We can also run vulnerability assessments against specific applications, like a particular web app, which ideally should be protected with a web application firewall to guard against common web attacks like injection attacks of various types, cross-site scripting attacks, directory traversal attacks, and so on. We can also run vulnerability assessments against internal business processes, such as whether the payment of invoices is handled by more than one person internally; in other words, internal security controls that might not relate solely to IT systems. So, we need to run these vulnerability assessments periodically. You might ask, is this not like a security audit?

You can call it whatever you wish, and you might be required to do it quarterly or monthly, but there is no bad outcome from running frequent vulnerability assessments. Yes, it takes time. We have to allocate time for technicians or hire outside contractors to do it, and that, of course, translates to a cost. But imagine the cost of non-compliance with regulations: fines, penalties, and loss of customer confidence due to sensitive data breaches. It’s well worth having vulnerability assessments at the forefront of our minds. There are two main categories of vulnerability scanning. One is active. This means that we not only identify weaknesses or vulnerabilities, but we also have tools with which we will attempt to exploit them in a controlled manner so that we can determine how vulnerable that system or network really is.

And really, what we’re defining there is penetration testing, otherwise called pen testing. And you can’t just conduct pen tests randomly whenever you want. There has to be an approval process in place because it could disrupt systems. If you plan on running a pen test against your cloud deployment in the public cloud, you’d better check your cloud provider’s rules of engagement related to pen testing in the cloud. The next category is a passive type of vulnerability scan. This means we are identifying but not attempting to exploit discovered weaknesses. Instead, the idea is to identify those weaknesses and report on them, and then focus our energy on securing those discovered problems. So, where active is more of a penetration test, passive vulnerability scanning is basically a vulnerability assessment.

When you plan vulnerability scans, there are a number of things that you need to consider because we know scanning should be frequent, so we definitely have to plan this out. And the details of planning a vulnerability scan might vary within an organization from one region to another, if it’s an international firm, or even from one network to another, depending on what’s being hosted on those networks. Either way, we need to begin with establishing a baseline. What is normal? Let’s say, on network A, we plan on scheduling vulnerability scans every month. What’s normal on network A for inbound and outbound traffic? Which devices normally appear on network A? Are they all patched? Are they all configured with a baseline security configuration that we’ve established?

This needs to be done because, as we’ve been saying over and over, how could you possibly detect suspicious activity or anomalies, at least some of the hidden ones, if you don’t know what’s normal? Now, we also need to make sure that we determine which scan targets we will test security against. Maybe it’s not every host on a network, maybe it’s only web application servers. It could be either one. What type of scan? Are we just scanning for the most common types of vulnerabilities? Or do we plan on doing an in-depth, detailed scan of each and every discovered device on the network, which could take a long time? Maybe you would do that quarterly but run a quicker scan monthly.

You might also consider a credentialed scan. You need to make sure, of course, that there is approval at the IT level and the management level to run vulnerability scans because this might set off alarms for intrusion detection systems. If you’re going to do a very deep, intensive scan of every host, you might consider credentialed scans, where you put credentials into your vulnerability scanning tool, such as a general admin account used on stations. That allows the tool to perform not only an external scan of those hosts but also an internal scan, which can shed light on a lot of vulnerabilities. But there’s no right or wrong, it just depends on what your requirements are. In a perfect world, you would run frequent scans of everything fully in-depth. The vulnerability scanning process begins with step one: making sure your scanning tool is up to date.

Not only the engine that actually runs the scan, but vulnerability scanning tools usually have a vulnerabilities database with the latest vulnerabilities that they gauge against scanned hosts. Naturally, because threats change constantly, you want to make sure you keep this up to date. A good vulnerability scanning tool will allow you to schedule these updates so it’s automated and you don’t have to think about it. Once the tool is in place and we’re ready to go, in step two, we run the scan. You can run this on demand, but you most certainly should have this scheduled automatically. The frequency will depend on what your requirements are, which could be influenced by regulations or security standards like PCI DSS. So, we could scan the entire network and when you do that, you have to wonder, well, do we scan it from outside of the network or from inside the network?

In a perfect world, both. Same with hosts. In a perfect world, we would scan them with no credentials: an attacker that might just be scanning the network, what would they see, and would there be any vulnerabilities? But a vulnerability scan run while you’re signed into the host might also point out vulnerabilities with components in use on a web server that are out of date, which you would not know about from an external, uncredentialed scan of that host. So, we see then the benefit of performing multiple types of scans. And of course, you might choose to scan individual applications on hosts, so there could be a hybrid of the targets. Then prioritize the scan results.

Well, based on what? This is easy. Which assets, for example, which web servers, are considered more critical than others? Either because they have a very high dollar value to the organization, they generate revenue, they are used in research that results in valuable research data, or they process very sensitive customer financial data. Whatever the case is, we need to prioritize the scan results to determine where we’re going to focus our energy, and then implement or modify security controls to protect high-value assets. But be very careful with that. For example, if you’ve got vulnerabilities on a revenue-generating web server and you’ve got vulnerabilities discovered on a user desktop, don’t just assume that the desktop vulnerabilities are not important, because if that desktop is compromised, it could lead to your revenue-generating server being compromised.

You have to be very careful about how you prioritize these results. So, the vulnerability scan will have a number of outcomes. It’ll define any missing patches that should be applied. A vulnerability scan might also include testing user awareness of things like phishing scams by sending out fake email messages so that we can gauge user responses. The scan might also point out any security misconfigurations that we have, like the use of weak passwords or default settings not being changed, known flawed components being used such as on a web server, or unnecessary public-facing services that really should only be visible privately on an internal network.

Running vulnerability scans can also result in defined actions that we will take to address those reported weaknesses. So, there are a lot of benefits, of course, in conducting vulnerability assessments, such as achieving legal, regulatory, or security standards compliance, preventing security incidents from happening or reducing their impact, maintaining client and shareholder confidence in our business processes and supporting IT systems, and avoiding fines and penalties such as those due to data privacy breaches. And, of course, enhancing business continuity. If there is a downside to vulnerability assessments, even if you can think of one, it will pale in comparison to the benefits.

Common Vulnerability Assessment Tools

We now know the importance of periodic vulnerability assessments, so let’s take a few minutes and focus on vulnerability assessment tools and their features. The first thing to consider is that we’ve got vulnerability assessment tools that we can use to test the security of on-premises networks and hosts as well as cloud-based networks and hosts. And if we want to be even more granular, there are specific tools that will allow us to scan applications for vulnerabilities. Regardless of the specific vulnerability scanning tool we’re using, there are some common features we should look out for that those tools really should have, such as the ability to schedule the scans. Remember, for compliance with some regulations or security standards such as PCI DSS, you’re going to have to be conducting security scans perhaps more than just once per year.

Things change a lot on the threat landscape within a one-year period. Performing vulnerability scans monthly would be a much better solution. You might want to make sure that you have the option of running credentialed scans, where you feed credentials into your scanner tool, and when it reaches out to hosts on the network that it discovers, it can use those credentials to sign in and do a detailed, thorough scan of that machine. Besides scheduling the scans themselves, we also want a way to schedule the update of the vulnerability database that the tool uses. Now, we also have network scanning and mapping tools as well. They’re a little bit different because they aren’t designed to focus on vulnerabilities, but rather on discovering what is actually up and running on the network.

These days a lot of vulnerability assessment tools have this built in, of course, but on their own, network scanning and mapping tools discover networks, active hosts that are on those networks and responding (unless they’re firewalled to the hilt, in which case they might not show up), and in some cases which services are running on those hosts, like an HTTP web server or an SMTP mail server. Network scanning and mapping is great from the security technician’s standpoint to determine what’s on the network and what might be running, but it’s also used by malicious actors as part of reconnaissance. As a result, it’s very easy to discover network and host scans being conducted on the network using standard intrusion detection system or IDS tools. So, if we’re doing this legitimately on an internal network, we need to be aware of IDS configuration so we don’t unnecessarily set off alarm bells.

So, what are some common network scanning tools? Well, we know that they can be used by security technicians or attackers during the reconnaissance phase. Tools would include Nmap, the Angry IP Scanner, and Maltego, which is a tool that integrates with many other security-based information sources and tools like Shodan, VirusTotal, and WHOIS. Then there are cloud scanners like Scout Suite, Prowler, and Pacu. Then there are vulnerability scanners and debuggers. Now, vulnerability scanners are more than just network scanners because they are designed to compare the vulnerabilities in their vulnerabilities database to what might be discovered on hosts. Tools would include Nessus and OpenVAS, to name just a few. Then there are debuggers, which would be used by software developers to learn about the behavior of malware, in some cases even to begin the reverse engineering process. Debuggers would include things like the GNU Debugger and the Immunity Debugger, again to name just a few.

Using nmap to conduct Port Scanning

In this demonstration, we’re going to take a look at how to use the free Nmap tool to perform a host scan. So, Nmap is a host as well as a network scanner. You can scan a network or an individual host to see which machines respond and also to get a list of which services are running on them, such as HTTP, SMTP, DNS, and so on. Some Linux distributions might already include Nmap; otherwise, you can install it on a multitude of platforms beyond Linux. Here I’ve navigated to nmap.org and I’m looking at the Download page where we can get the latest Nmap version for Windows, macOS, Linux, and other OSs, for which you get the source code that you would then have to compile.

We’ll be using Nmap in this demo, but in another demo, we’ll take a look at the Zenmap GUI. Here I happen to be using the Kali Linux distribution, which includes, among many other tools, Nmap. I’m going to go ahead and click on the Terminal icon in the upper left to open a Terminal window. The first thing I want to do is run sudo service ssh status to see if the SSH daemon is running.


OK, it looks like it’s inactive, so I’m going to go ahead and use the Up arrow key to bring up that previous command and I would change the word status to start.

So, now if I check the status, the SSH daemon is active and running. So, if I were to run a command like sudo netstat -an to see all ports numerically, and pipe that to grep because I’m interested in looking at :22, port 22, we do have port 22 in a listening state.

For tcp and tcp6, that covers IPv4 as well as IPv6. Now, the reason we’re looking at this is we want to have a few services running here because we can run Nmap against the host to see what services are there. So, I’m going to run sudo nmap and I’m going to run this against the localhost. You don’t have to do that; we’ll do a network scan after, but for now 127.0.0.1 with -sU, meaning I want to perform a UDP scan, and with uppercase T (-sT), a TCP scan, and I’ll press Enter. So, what it’s telling us then is what we already expect, in the sense that the ssh port is in an open state. So, the SSH daemon is running on this machine and accepting connections. There are no other services listed.
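
Put together, that warm-up sequence looks roughly like this; it's a minimal sketch assuming a Kali or other Debian-based host with the OpenSSH service installed:

    sudo service ssh status        # check whether the SSH daemon is running
    sudo service ssh start         # start it if it is inactive
    sudo netstat -an | grep :22    # confirm port 22 is in a LISTEN state
    sudo nmap 127.0.0.1 -sU -sT    # UDP plus TCP scan of the local host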

If I clear the screen and do an ip a, notice that we are on the 192.168.2 network; that’s the network, and I know that because we have a /24 mask, so the first 24 bits or the first 3 bytes. So, let’s do a network scan: sudo nmap 192.168.2.0/24, and we’ll add -v for verbose.

OK, notice here it’s either displaying that the host is down or that we’ve got open ports, and the port numbers are shown here, like port 8008, port 80, and 443. We also have the IP of the host on which that service is listening. Now this, as is usually the case with any type of powerful tool, can be used for good or bad reasons. For legitimate purposes, security technicians can run periodic scans on the network so they know what is on their network and whether there are unnecessary ports open, like port 80 or port 443, which would indicate that we might have a web server. We could determine whether or not that should actually be running.

It’s all about reducing the attack surface and only having what you need on your network, or perhaps also using network scans to discover new hosts on the network that shouldn’t be there, which could be suspicious. OK, so once the scan has completed, we could go back up through the output to get a listing of the devices; it might be able to discover a name or model of the device, showing things like the hardware vendor or the MAC address as well as any open ports. So, some may not have many open ports, other hosts may have plenty of them. Now, of course, you could use standard output redirection in Linux, such as with a greater-than symbol, to write that to a file. But you can also use -o for output, let’s say X for XML. And maybe I’ll call this file nmap_results.txt. Once that’s completed, if I do an ls, there’s my nmap_results.txt.

Of course, I might want to rename that so it has an XML extension. Doesn’t really matter in this case. I’m just going to run sudo cat against that file so we can see what’s inside it. And indeed, it is formatted as an XML file.
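
Pieced together, the subnet scan and XML export from this demo might look like the following sketch; the subnet and file name are just this demo's values:

    ip a                                              # confirm the local subnet, here 192.168.2.0/24
    sudo nmap -v 192.168.2.0/24                       # verbose scan of the entire subnet
    sudo nmap -v 192.168.2.0/24 -oX nmap_results.txt  # same scan, with results written as XML
    cat nmap_results.txt                              # view the XML-formatted results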

You can also perform OS fingerprinting, for example, nmap -O. So, we can try to determine what kind of an OS is running on a given host. We can then specify an individual host or a range. Let’s say I specify an individual host.

Now, here it says we need root privileges. That’s simply me forgetting to add the sudo prefix at the beginning of this command to run this as an elevated command.

Now, OS fingerprinting isn’t always successful, but it will attempt to determine what kind of an OS it is.
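
With the sudo prefix included this time, a minimal sketch of the fingerprinting command against a single host on this demo network would be:

    sudo nmap -O 192.168.2.1    # attempt OS fingerprinting; requires root privileges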

So, the OS guesses that are returned here are Linux variants, and in fact, this happens to be a wireless router device running an embedded firmware Linux OS. There are plenty of other command line options, we can’t possibly cover them all. However, be aware that you can look at the manual page, the man page, sudo man nmap.

So, of course, Nmap is a Network Mapper. It’s not quite correct to call it a network vulnerability scanner because it doesn’t actually probe each individual device looking inside the OS or at apps that might be configured in an insecure manner or that might be missing patches. Some people will say it’s kind of a vulnerability scanner, in that it shows you what’s up and running on the network and what listening ports are available.

But generally, most people call Nmap a Network Scanner or a Network Mapper. At any rate, here in the man page, we have all kinds of examples of using command line switches like -A, and many other examples. And as we get further down, when it comes to host discovery, we have a lot of variations on command line parameters and various scanning techniques. One of the ones that we used here, you might recall, was performing a UDP Scan. So, the man page is telling us that that is done with -s and U together. Or you can specify only certain ports that you’re interested in scanning for with -p. So, I’ll press Q for quit.

As an example, we can run sudo nmap -p 80 192.168.2.0/24, with -p 80 for port 80. Basically, I want to see all of the web servers that are listening on port 80 on my subnet.

So, if you see the state of open for port 80, it’s a standard port 80 HTTP web server. Or if it’s firewalled in some way, it might show up as being filtered. So, as a cybersecurity analyst, you must have a sense of how the Nmap tool is used and what the results are. Some of the questions on the Cybersecurity Analyst+ exam might test your knowledge of what you should do, such as how to test which web servers are available on the network at the current time, or how to read output from a command where you have to determine which command resulted in that output.

Conducting a Network Vulnerability Assessment

In this demonstration, I will be using Nessus to execute a vulnerability scan. Nessus is a tool that has the ability, as Nmap does, to scan the network to see which hosts are there and which services are running. But unlike Nmap, Nessus is actually a vulnerability scanner, meaning it has a list of common vulnerabilities that it checks against hosts that it discovers on the network. So, it’s doing a bit more of a deep dive than just simply scanning the network and showing which ports are open on hosts. It can determine if there are insecure configurations at the OS level, or if there are missing patches, and so on. So, here in my web browser, I’ve navigated to the tenable.com/downloads page where I can choose to download the specific Version and Platform for Nessus.

Now I’ve already gone and downloaded and installed this to save time. On the host where you’ve installed it, if it’s running Windows, when you go into your Services tool, under the Ts you will expect to see Tenable Nessus. It’s set as an Automatic start service, which is good because, of course, you can schedule recurring scans.

When Nessus is installed, it installs a web application that listens by default on the localhost where you’ve installed it on port 8834, and it should automatically open a web browser to allow you to log in.

So, for the credentials, I’m going to use admin as the username and admin as the password. If for some reason you forget the credentials for your Nessus installation, you can go to the Command Prompt.

Here I’ve navigated to Program Files\Tenable\Nessus where you can run nessuscli chpasswd, change password, and then give it the user for whom you would like to change the password, such as admin.
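
As a quick sketch, assuming the default Windows install path used in this demo:

    cd "C:\Program Files\Tenable\Nessus"
    nessuscli chpasswd admin    # change the password for the admin user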

It’ll prompt you for the password, you’ll confirm it, and then it’s done. So, you can then use those credentials to sign in on port 8834. If I go to All Scans, New Scan, one of the things you can specify is Credentials.

In this case, for Malware Scanning, I can specify SSH credentials for Linux-based hosts or Windows credentials. Now a credentialed scan means you’re putting in the credentials, so that Nessus has the ability to go into that operating system and poke around, and that will give you a much more detailed perspective on things like malware.

If I don’t specify any credentials, then the scan will only see what an outsider might see if they were trying to scan the network looking for vulnerabilities. With credentials, the scan sees more of what malware that has infected a machine where a user is logged on would see. So, when you configure a scan, one of the things to consider as well is the target. You could specify a range of IPs or an entire subnet, for example, 192.168.2.0/24.

Or I could specify a range by using a dash between two IP addresses. You can also click Schedule to enable scheduling so you can have a recurring scan. Now if I go back to All scans and choose New scan, notice that there are plenty of options available.

Basic Network Scan, Advanced Scan, Malware Scan, Mobile Device Scan which would require a product upgrade, Web Application scanning, Ransomware, specifically the WannaCry Ransomware, a Microsoft Active Directory scanner, and if I go down under COMPLIANCE, even an Internal PCI Network Scan for PCI DSS compliance, where one of the control objectives for PCI is to have a vulnerability management program with periodic scans. Let me go to All Scans so I can see what the results are thus far. If I click on Advanced Scan, here it’s discovered a number of hosts on the network that I specified as the target range.

And based on our Vulnerabilities legend, it looks like we might have just a handful of Critical and High and Medium Vulnerabilities on given hosts shown here with their IP address. Let’s say I click on the second one found here. So, it looks like we’ve got an SSL type of high severity issue and if I click on that, it gives me the details about the fact that we might have SSL configured on the host using only Medium Strength Ciphers or encryption functions.

The great thing about vulnerability scanners like this is they also provide references for further reading. I can use the links at the top to go back through my scans. Looks like I’ve got a lot of SSL-related issues for certificates and cipher suites. Always bear in mind that while we explore these tools and we’re looking at these things manually, a lot of threat-hunting and SIEM and SOAR tools will do this automatically. You would configure perhaps alert thresholds, but a lot of this stuff is automated. And on the Cybersecurity Analyst+ exam, you might get a scenario-based question that asks what you should do to enhance the security posture of an organization, and part of that might include scheduling credentialed vulnerability scans.

Using Zenmap for Network Scanning

We’ve briefly discussed already how to use the Nmap network scanning tool and the fact that you can download it for free from nmap.org. Now we can also work with the Zenmap GUI.

So, this is basically a graphical user interface front end that uses Nmap in the background with various command line options depending on the type of scan that you tell it you want to run. I’ve already downloaded and installed Nmap and Zenmap. So, from my machine’s Start menu, I’m going to launch Zenmap. So, the Zenmap GUI shows up. I’ll click to launch it. And here’s the interface for it.

Now, because we’ve installed Nmap and it uses Nmap in the background, if we were to pop out into a command line environment, we could also run nmap from here if we so chose.

However, I’m going to exit this and go back into the Zenmap GUI.

The first thing we have to determine here is what the Target is. What is it that we want to scan? It could be an individual host like 192.168.2.1, or it could be a range of IPs that I could specify using the dash notation, or I could scan an entire subnet by using CIDR notation. Here I want to scan 192.168.2.0/24. Now, for the profile, you can tell it you want to do an Intense scan, which will take the longest time, certainly longer than just doing a Quick scan.

It’ll be able to probe more in-depth into the services running on the machine to see if there are any vulnerabilities. Or you can do the Slow comprehensive scan. So, for example, let’s say we do an Intense scan. Now, what it’s doing as I’m switching these different profile types is changing the command line syntax for the underlying nmap command.
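
For instance, the Intense scan profile maps to an nmap command along these lines; the exact switches shown in Zenmap's Command field may vary slightly by version:

    nmap -T4 -A -v 192.168.2.0/24    # Intense scan: aggressive timing, OS and version detection, verbose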

OK, we could run that at the command line if we so chose. When I’ve got the parameters set correctly, I could then click the Scan button to begin the nmap scan. You can also go to the Scan menu and Open Scans that you’ve saved in the past. I’ve got one here called Scan1 and it saves it in XML format, so I’m going to go ahead and Open that past scan. This is the result you get when you run an nmap scan.

For example, over on the left, I could simply view either the Hosts that were discovered, so it’s shown here by IP address and also sometimes by MAC address or by host name, such as a Microsoft XBOXONE device, an HP network printing device. You could also click on Services to organize it by the discovered services like http.

When I select http on the left, on the right it exposes the hosts discovered in the scan, whatever the scan range was, that are running some form of an HTTP server. For example, it’s telling me we’ve got an HP Officejet Pro 8610 network printer. That’s a lot of returned detail. We’ve also got other hosts reporting that they are running, for example, the nginx HTTP web server stack or a Microsoft HTTP web server. If it’s easy for us to do this with a non-credentialed type of network scan, then it’s just as easy for an attacker performing reconnaissance to get the same information, assuming they can get on the network somehow. Now we’re looking at Ports/Hosts over on the right.

We can also go to Host Details to get the details about the selected host, like the number of Open ports, the Last boot time (so it’s time-stamped), its configured addressing information, the version of the operating system, which is valuable information that we don’t really want to have advertised, and the Ports that are in use on that host, like port 135 and port 32815.

Again, if we view it from the host’s perspective by clicking Hosts in the left-hand navigator, we can zoom directly into a particular device to get its individual details. What you’re trying to achieve by running these types of network scans will determine whether it’s a good thing that you can probe and get this detail. This way you kind of have an inventory; you know exactly what’s on your network.

But that can also be achieved using host inventory tools that have a software agent running locally that’s authenticated. Ideally, we don’t want any information disclosed when someone performs an uncredentialed scan, such as was done here. Now, this is still just a network scanner. It’s not going in-depth into each host looking for vulnerabilities at the OS level, at the installed app level, or for missing patches. It’s not doing that as other tools would do.

Also, in closing, we can go to the Tools menu and Compare scan results by selecting what they’re referring to as the A Scan and then the B Scan. So, these are two scans from different points in time, ideally of the same network, of course, so you can pinpoint things that have been added or removed from the network.

Testing Web Application Security

So far, we’ve discussed the importance of scanning networks and then scanning hosts on those networks to see which ports are open, and also how important it can be to run vulnerability scans to do an in-depth analysis of a given host, including all the software installed on it, making sure configurations are secure, that outdated components aren’t being used, and that patches are applied. Here we’re going to be focusing on testing web application security, and we’re going to do that using the OWASP ZAP tool, the Zed Attack Proxy. First things first, I’ve got the Metasploitable 2 virtual machine downloaded, which includes a collection of different web pages that can be set to low security for testing security like we’re going to do.

Now, one thing that we can do here is if I go to DVWA and if I sign in with the credentials that I’ve configured this with, the default credentials are Username admin and a Password of password. What I can do is go to DVWA Security on the left and set it down to low and Submit.

By setting the Security level to low, all of these web pages, with all these ways to test things like SQL Injection attacks and Brute Force attacks, use an intentionally vulnerable version of the page, and that’s what I want here for our web app security scan.

There are plenty of tools that you can use to scan the security of a given web application. We’re going to be using the free Zed Attack Proxy, the OWASP ZAP tool, which can be downloaded from zaproxy.org/download.

In some cases, you might have a distribution such as Kali Linux, which is designed for pen testers and already includes the OWASP ZAP tool. So, here in Kali Linux, if I open up my menu in the upper left and go down to item 03, Web Application Analysis, over on the right, among the list of tools, I have ZAP. I’m going to tell it No, I do not want to persist this session at this moment in time. Every time I start the ZAP tool, I want to have a fresh new session. I’ll click the Start button.

OK, that puts me in the interface for ZAP where I’m going to click Automated Scan, and in the URL to attack field, I’m going to pop in the IP address of the web app server, 192.168.2, in our case, .34.

Down below, I’m going to tell it to also use an AJAX spider with a web browser to test the security. I’ll tell it to use Firefox. Now, to start this scan for web vulnerabilities, I would click Attack. You want to make sure that you have express written consent to check for these vulnerabilities. Even though we’re only checking, it can impact the performance of the web application. So, I’m going to go ahead and click the Attack button and we already have some information here. Basically, it’s scouring all of the pages and going through them all on that web application server.

So, notice it’s going through different pages, different graphics. It’s testing HTTP GET and HTTP POSTS and it’s doing this using the Firefox browser. Now, depending on the web application, how many pages there are, how complex it is will determine how long this takes. One of the things that we want to focus on here in the ZAP tool is the Alerts output down below, I’m going to click on Alerts. We already have some issues that we need to pay attention to and notice the counters. The numbers are rolling here as we’re looking at it and it discovers new related items. For example, Application Error Disclosure.

So, when we read the description, this is where you get the meat and potatoes. It’s going to tell you what the problem is like specific pages. The page in question here you’ll be able to gather from the URL at the top.

This is a Medium security issue. It might be providing sensitive information, like the location of a file, if there’s some kind of an error. The great thing is that it’s not all doom and gloom; it offers a solution. We’ve got another problem here, Vulnerable JavaScript or JS Library. And it says over here on the right, for the Solution, Please upgrade to the latest version of jquery. That would be the responsibility of the web server technician. It even lists a couple of Common Vulnerabilities and Exposures or CVE-related documents for those specific problems. If I select Cookie No HTTPOnly Flag, it looks like there’s some kind of PHP session ID where we’ve got the cookie not using the HttpOnly flag, which means it might be vulnerable to being accessed by local scripts.

Now, notice it’s still active and if I go back to the Spider tab, we get a sense of how far along we are, in this case, 59%. We also have a Report menu where we can Generate a variety of different types of reports in different formats, such as an HTML report of the findings of our web application vulnerability scanner, XML, or JSON.

You also have the option of comparing it with Another Session so that if vulnerabilities were detected, let’s say, the last time we did a scan of that web app a month ago, we can determine whether it’s been addressed or not by running the scan again now. So, needless to say, this is a crucial tool for helping to ensure that web applications are kept safe, in addition to other techniques such as using a web application firewall.
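
If you'd rather script this kind of check instead of using the GUI, the ZAP project also publishes packaged scan scripts with its Docker images; the following is only a sketch, and the image tag, target URL, and report name are assumptions for illustration:

    docker run -v $(pwd):/zap/wrk/:rw -t ghcr.io/zaproxy/zaproxy:stable \
        zap-baseline.py -t http://192.168.2.34 -r zap_report.html    # passive baseline scan with an HTML report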

Penetration Testing

Penetration testing is often just called pen testing. This is an active form of security scanning. We’ve discussed vulnerability scans, where we scan networks and hosts seeking vulnerabilities but not trying to exploit them. So, vulnerability scanning is passive by nature. But this isn’t vulnerability scanning; it’s penetration testing, which can include vulnerability scanning first, after which discovered weaknesses are exploited. We use tools that are designed to exploit these vulnerabilities to test security. Now, just because a vulnerability is detected, it doesn’t mean that when you run pen tests you will immediately succeed in breaking into a system or crashing a server, whatever the case might be.

But we need to recognize how penetration testing provides value to the organization’s security program. Of course, it might be required for compliance with contractual obligations or regulations related to things like data privacy. And if you’ve ever been involved in a security audit of an organization’s IT systems, then penetration testing may or may not have been a part of that. Organizations can use in-house cybersecurity technicians to perform these types of tests, or they might contract it out to a third party. But penetration testing is not to be taken lightly. There are always rules of engagement to consider, such as the scope. What is it that we’re pen testing?

Is it an entire network? And we have to think about when we are going to conduct pen testing. Certainly, we must be in constant and clear communication with the owners of the systems that we are pen testing. Maybe part of the agreement is that they don’t know exactly when pen testing will occur, in an attempt to simulate real-world attacks, or there might be a very specific time frame, agreed upon by the system owners and the pen test team, for when pen tests will occur. Now, the reason that there are these rules of engagement, and that we have to be very careful, is because pen testing, as you know, involves exploiting vulnerabilities that get discovered.

Exploiting those vulnerabilities could result in the disclosure of very sensitive information, which is why sometimes we’ll have to have pen testing teams sign off on a non-disclosure agreement or NDA in case they might see sensitive data, which could be customer data, employee data, medical records, financial records, company trade secrets, whatever it is. But there’s also the potential for service disruption, which might actually make services unavailable for legitimate use, such as line-of-business servers used internally by employees. That’s a serious thing. And so, that’s why we have to be very clear about what the rules of engagement are when we run pen tests or when we work with teams that will conduct pen tests. And then there’s the cloud. Here we have a screenshot for the Microsoft Azure cloud. The web page is labeled Penetration Testing Rules of Engagement.

Of course, we must abide by these rules if we plan on running either vulnerability scans and/or penetration tests against our deployed resources in the Microsoft Azure cloud. Just because we pay for a monthly subscription and pay for our use of cloud services, it doesn’t mean we have carte blanche to do whatever we want when it comes to pen testing. And each cloud provider might have a different set of rules. So, it’s very important for cybersecurity analysts to be aware of those rules before pen tests are conducted. There are a few categories of pen testing, the first of which is called a known environment, or what some people will call white box testing. This means that the details of the environment being tested are known to the pen testing team. We might wonder, what good is that? How does that test security?

It’s very handy and very important because it can simulate insider attacks, where the configuration and the environment are partially or fully known. Now, an unknown environment is also sometimes called black box testing. This means that from the penetration team’s perspective, there are no details known about what will be tested. There are no network diagrams, no server names, no IP address ranges, nothing. So really, this allows us to simulate what external attackers would be able to do. Now, they might run reconnaissance scans to learn about IP address ranges, host names, and so on. But from the beginning, nothing is known. And then somewhere in the middle, you’ve also got partially known environments. This is often called gray box testing, so some details might be known, such as the fact that there is a web server used by an organization to host an app and it’s running Apache.

Regardless of the type of pen test being conducted, the goal is to uncover flaws, especially those that are easily exploitable, and mitigate those problems immediately. The pen test red team is otherwise called an offensive team. This is the team that executes the penetration testing, whether it’s against a network, specific devices, applications, or databases. Pen tests can also be non-technical in the sense that they might be social engineering through telephone calls trying to trick people into divulging sensitive information. The idea is that we want to simulate real-world attacks as much as possible. That’s the red team. In penetration testing, the blue team is the defensive team. Now, this could be in-house security technicians that secure and monitor IT systems, or it could be contracted to an outside party, whatever the case might be.

But the blue team monitors for security events, and coincidentally, that is what being a cybersecurity analyst is about. It’s about monitoring, detecting, and responding. Ideally, in a perfect world, the blue team will be able to detect, prevent, and even stop red team attacks while they’re occurring. Just imagine the wealth of information learned from conducting pen tests. It can also enhance security awareness. It can be used in training materials that are used to educate staff about security, such as giving a high-level overview of pen test results and what was vulnerable, doing that in a meaningful way that’s engaging and interesting to the audience it’s being reported to. We can also learn about new mitigation strategies and techniques. So, there’s nothing bad about penetration testing when you look at the results, analyze them, and make improvements based on those testing results.

Navigating the Metasploit Framework

It’s important for cybersecurity analysts to have a general understanding of some of the commonly used tools to exploit vulnerabilities. And one of those is the Metasploit Framework, which is a collection of tools with a variety of different types of exploits that can be deployed against targets. Now, you can install the Metasploit Framework manually, but it’s included automatically in this case with the Kali Linux distribution. So, I’m going to go into a Terminal window to a Command Prompt environment where the first thing I’ll do is change directory into /usr/share/metasploit-framework.

Now, under the modules directory, notice we’ve got exploits, payloads, and we also have a post directory. As you might guess, an exploit is code that executes on a target machine to take advantage of a vulnerability. Normally, if this were an attacker doing all of this, they would first perform some kind of reconnaissance to determine what device is on the network, or maybe infect it with malware through an email phishing campaign. Whatever the method might be, the attacker discovers that there’s a given vulnerability on a host, and that’s how they would know which exploit to use against that target. So, if I change directory into the exploits folder and if we do an ls, here we’ve got a number of folders based on the platform such as aix, unix, or android, apple_ios, bsd, firefox, linux, windows.

Or smtp, vnc, winrm for Windows Remote Management, mssql for Microsoft SQL, iis for the IIS web server. And if I change directory into iis, here we have a number of exploit code files written in Ruby, that’s why we have the rb file extension. So, if we were to use the cat command here to view the contents of one of these Ruby files, I’ll just pipe that to more. So, these are actually exploit files. Here we’ve got the code that is used to run whatever this particular exploit happens to be. I’ll press Q to get out of there. So, those are exploits. If we go back into the modules directory level, you might recall that we also had a payloads directory.
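
A quick sketch of that exploration on a Kali box; the paths assume the distribution's default Metasploit location, and the specific module file is just one example:

    cd /usr/share/metasploit-framework/modules
    ls                                 # exploits, payloads, post, auxiliary, and other module types
    cd exploits/windows/iis
    ls                                 # Ruby (.rb) exploit modules targeting the IIS web server
    cat ms01_023_printer.rb | more     # page through one module's source (example file name)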

Now, payloads run on a machine after the vulnerability has been exploited. An attacker might customize some payloads and use obfuscation techniques to bypass detection by things like virus scanners, so that a payload might set up a back door the attacker can use to get back in in the future, or maybe enable a reverse shell from the attacked machine, whatever the case might be.

And if I go back one level and go into the post directory, this is where an attacker, or a pen tester, could work with post-exploitation modules. So after you’ve successfully exploited a vulnerability on a device, what do you want to do? Maybe enumerate the file system, or get a list of user accounts on the device? Or in terrible cases, start encrypting files, as with variants of ransomware. Now, to actually work with the Metasploit Framework, we launch the msfconsole command.

That puts us into a different Command Prompt environment where, if I type commands like ls, it looks like we are in the exploits location. For example, if I change directory to windows and do an ls, then we might want to go into a gather directory, gather meaning some kind of reconnaissance, like getting the version of a running web server or whatever the case might be. We’ve got all kinds of those Ruby scripts there. You can also use the search keyword to search for something specific, like iis.

When you choose one with the use command and press Enter, your Command Prompt will change to reflect that you are now using that particular exploit. I can clear the screen and type show info to learn more about what I need to do to use this exploit, or rather, to execute it.

So, we’ve got some options here such as RHOSTS; that’s pretty standard, remote hosts would be the target or targets that you want to apply this exploit against. The remote port is 80, SSL is set to false, and so on. And down below, it talks about how this module will exploit a specific bug on the IIS web server stack. So, it looks like the required variables here are all configured with values except RHOSTS. So, we could type set RHOSTS and give it a value.

That value has now been set for RHOSTS. Now, once you’ve set the appropriate items, be sure you really want to run this. And, of course, you want to be very careful. While it’s very easy to use the Metasploit Framework to exploit discovered vulnerabilities, that doesn’t mean you should just do it. It can certainly be unethical if you don’t have express written consent from the system owner, and it could also be illegal. So, be careful and do this only against your own systems for testing purposes.

Or in a pen testing situation, make sure the rules of engagement are known and signed off on. At any rate, at this point, what you would do to actually run that is simply type in exploit, so it would try to connect to that given host that you’ve specified to see if it can successfully exploit that vulnerability.
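
Pulled together, a typical msfconsole session follows this pattern; the module path and target address below are placeholders rather than the exact values from this demo, and again, only do this against systems you have written consent to test:

    msfconsole                                   # launch the Metasploit console
    search iis                                   # find modules related to IIS
    use exploit/windows/iis/ms01_023_printer     # select a module (example path)
    show info                                    # review the description and required options
    set RHOSTS 192.168.2.50                      # point the module at the target (placeholder IP)
    exploit                                      # attempt to exploit the vulnerability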

Using Burp Suite for HTTP Sniffing

Man-in-the-middle or MITM attacks are also called on-path attacks. What are they? Well, essentially we have an attacker positioning themselves between a client and a server session. At least in a simple sense, that’s what it is. Let’s take a look at how that would work. Let’s do some HTTP traffic sniffing using Burp Suite.

You can navigate to portswigger.net/burp in a web browser to get to the point where you can download the Burp Suite Community Edition tool. In our case, we’re going to be using Burp Suite which is already installed in Kali Linux.

Now before we do that, let’s just open up a web browser and let’s navigate to the site we’re going to be using in this example. So it’s the metasploitable2 free downloadable virtual machine. But at the end of the day, what really matters here is we want to be able to gather credentials from web form fields such as Usernames and Passwords. 

We need some way to inject ourselves, from the attacker’s perspective, between the client and the server. We want to act as a middleman. Or really, what we’re going to be doing is acting as an HTTP proxy: client HTTP traffic will come to us first, from us it will then go to the server, and thus we can examine everything. So, how would you really do that? One way is if you can somehow trick a user into clicking a script that might configure their local web browser to proxy through your attacker station.

So, here in the Kali Linux menu I’ll spawn from the upper left, I’m going to go to 03 Web Application Analysis and I’m going to choose Burp Suite from there. I’m being asked if I would like to create a project.

Now you can create a project and save it so that you can open it up in the future and your settings are retained. In this case, it’s going to be a one-time thing so I’ll leave it on Temporary project and I’ll click Next.

We can load specific configurations from config files. There are quite a few settings that can get a little complex depending on how deeply you get into the Burp Suite, but here I’m just going to use the Burp defaults and I’ll click Start Burp.

The first thing I want to do here is go to the Proxy menu towards the upper left and I want to make sure that Intercept is on.

If it’s not, I can turn it on, but here it’s already set to on. If I go to the Options tab here, notice that our local IPv4 loopback address, 127.0.0.1, is set as a Proxy Listener on port 8080.

You can have whichever IP address configurations you want here if you have a multi-interface device.

But this is the proxy interface that we want to make sure the client web browser HTTP traffic is sent to, so we can examine it. Let’s go back to Intercept here, because at this point, to test this out as a security analyst, we could do one of two things. We could click Open Browser, which is just going to open up a Chromium-based browser here for Burp Suite that will automatically be proxying through Burp Suite on this host.

Or we could manually go configure any web browser we want on any station to proxy through the proxy port on this Burp Suite host. When malicious actors are using this thing, they need to find a way to get clients to proxy their HTTP traffic through them. But here we’re going to do it manually as we are testing it. So, I’ll click the Open Browser button. OK, so here’s a little Chromium-based browser.

At this point, this is what we would do, or what would happen if we had tricked a user into having a configuration that proxied through us, when they go to visit the website.

So, let’s go visit our intentionally vulnerable website. Let’s flip back to the browser. So, notice, looks like it’s loading the page and that’s because it’s paused. That request has been paused by Burp Suite, the proxy we are going through.

So, if we go back to Burp Suite, we’re still under the proxy Intercept page where we have some GET HTTP traffic. That’s the user in our little example navigating to that website.

So to continue on, we would click Forward, we go back to our web browser that’s proxying through us and here we go. There’s the web page. So the user, for example, would then click DVWA. Of course, Burp Suite here is stepping through it, so we would forward through.

So it looks like, Cookie: security=high, we’re already able to see some cookie session information, including a PHP session ID. If we go back to the browser, forward again, and then go back to the browser, this is where the user would sign in and specify credentials. It could be any type of web form with any type of form fields. And of course, a user would then proceed by clicking Login. And just like that, we are able to determine that the username field was given a value of admin and that they typed the value password into the password field. The trick here, from the attacker’s perspective, is how they would position themselves between the client and the server session.

Well, all they have to do is run a tool like this that’s configured as a proxy. The key here is to get the client device configured to go through the proxy. Now, one of the mitigations, of course, with this type of thing is to make sure that client devices are secured and that users are aware of phishing campaigns, so they won’t be tricked into allowing these types of things to happen. As cybersecurity analysts, we need to be aware of how these types of proxying or man-in-the-middle HTTP attacks take place.
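
To reproduce the on-path view in your own lab without configuring a browser at all, you could push a single request through the Burp listener from the command line; a minimal sketch, where the target address is this demo's intentionally vulnerable VM:

    curl -x http://127.0.0.1:8080 http://192.168.2.34/
    # the request is held on Burp Suite's Intercept tab until you click Forward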

Viewing Cloud Resource Security Compliance

In Microsoft Azure, we can use Azure Policy to verify whether our deployments in the cloud are compliant or not compliant with various standards. And that’s the focus of this demo: viewing cloud resource security compliance.

Now, there are a number of resources that have already been deployed in this Microsoft Azure subscription. Things like Public IP addresses, Storage accounts, virtual disks, and of course, if I were to navigate to the Virtual machines view on the left, plenty of virtual machines, so lots of stuff has been deployed in other words.

What you can do in Azure, at a global level, is determine whether you want to check for compliance with certain settings. For example, let’s go into Azure Policy. So, I’m going to search for the word policy in the search field up at the top here on this page. That puts me in the Policy, the Microsoft Azure Policy, navigation panel.

Now, if I click on the Policy type filter which currently says All policy types, notice that we can also filter out Built-in policies versus Custom policies, and so on. And if I click the Definition type filter which currently states All definition types, we can also view Initiatives. Initiatives are collections of related policies lumped together under one name. So for example, let’s say I Search for the word encrypt. 

So here we now have a filtered list of Azure policies that we can assign related to encryption. For example, Enforce SSL connection should be enabled for MySQL database servers so we could check our environment to see if all of our MySQL deployments have that setting enabled.

Or things like SQL servers should use customer-managed keys to encrypt data at rest instead of using keys managed by Microsoft. What we can do is click on a given policy to open it up, not because we want to look at the JSON representation of it, but because we can then click the Assign button up at the top to assign it to part of the Microsoft Azure hierarchy. That’s where the Scope setting at the top comes in. 

If I click the Scope selector button over to the right of the scope field, we can assign this Azure policy, where SQL servers should use customer-managed keys for encryption, at the Management Group level. You might recall that in Microsoft Azure, larger organizations might have multiple subscriptions, perhaps one for each region, one for production, one for testing.

[Video description begins] A panel appears on the right with the heading: Scope. It contains two drop-down field options: Subscription and Resource Group. [Video description ends] 

Whatever the case is, you can organize multiple subscriptions under a Management Group. So when you assign stuff to the Management Group, like a policy, it flows down and applies to all of those subscriptions. In this case, we only have one Subscription, so it wouldn’t make sense necessarily to use a Management Group. I can select the Subscription that I want this policy to apply to, and I can even be more granular and select a specific Resource Group. A Resource Group, of course, is just a way to group related cloud resources, such as all the things you might need to set up to support a web app in the cloud. So I would click Select when I’ve made the appropriate selection and then I could click Review + create and then I could Create that assignment.
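
The same assignment can also be scripted with the Azure CLI; this is only a sketch, and the assignment name, policy definition, subscription ID, and resource group are placeholders standing in for this demo's values:

    az policy assignment create \
        --name "sql-cmk-encryption" \
        --policy "<policy-definition-name-or-id>" \
        --scope "/subscriptions/<subscription-id>/resourceGroups/East"

    az policy state summarize --resource-group East    # summarize compliance results for that scope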

Now, back to Azure Policy. If I go to the Assignments view on the left and click the Refresh button, there’s our SQL servers should use customer-managed keys for encryption assignment, applied to the East resource group in my Pay-As-You-Go subscription. And it’s a policy, not an initiative, which would be a grouping of related policies.

Now, how long will it take to evaluate the East resource group, assuming we’ve got SQL servers there? It should happen fairly quickly, but it also depends on how busy the Azure cloud is in general. We can go to the Compliance view on the left to check on compliance. Here, I’ve got other policies and initiatives that were applied in the past, as well as our current one for SQL Server customer-managed keys, which is currently showing as Compliant.
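
You can also check compliance from the command line instead of the portal. A small sketch, again assuming the East resource group; results only appear once the policy engine has finished an evaluation cycle.

    # Optionally trigger an on-demand compliance evaluation rather than waiting
    az policy state trigger-scan --resource-group East

    # Summarize the latest compliance results for the resource group
    az policy state summarize --resource-group East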

But PCI DSS v4, for the protection of cardholder data, which was applied in the past to the entire Pay-As-You-Go subscription, shows as Non-compliant. If I click on it to open it, I can determine which cloud resources are not compliant.

[Video description begins] The Compliance page is now active. It contains 3 buttons: Assign policy, Assign initiative, and Refresh. Further, it contains a table with the column headings: Name, Scope, Compliance, and so on. A webpage appears titled: PCI DSS v4. It contains various buttons: View assignment, Create remediation task, Create exemption, and Activity Logs. Below, it contains three tabs: Controls, Policies, and Non-compliant resources. The Controls tab is active. It contains a table with the column headings: Name, Compliance state, Domain, Responsibility, and more. [Video description ends]

So, which settings have I not adhered to that I would need to in order to comply with PCI DSS? I’ve got a number of non-compliant PCI DSS control objectives, like Network connections between trusted and untrusted networks are controlled. Apparently, we are Non-compliant there. If I click on that to open it up, and then click Resource Compliance, I do have some virtual machines that are Compliant, but some other resources that are not.

All it takes is one item to be Non-compliant for the overall compliance state of this control to be Non-compliant.

[Video description begins] A new page displays: Network connections between trusted and untrusted networks are controlled. It contains three tabs: Overview, Policies, and Resource Compliance. The Resource Compliance tab is active. It contains a table with the column headings: Name, Parent Resource, Compliance state, Resource Type, Location, Scope, and so on. [Video description ends]

So, at least I now know where to focus my energy, such as on storage accounts that are not compliant with the network connectivity control; in this case, they allow access from any network. The great thing about doing this in Azure is that once a policy assignment is made, it remains in effect until you change that configuration. So, as you add new cloud resources and reconfigure them, these compliance reports are automatically kept up to date.
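
If you want to pull the same non-compliant resource list outside the portal, say for a scheduled report, an Azure CLI query like the following sketch could work; the OData filter and the output fields shown are illustrative assumptions.

    # List resources currently evaluated as non-compliant in the subscription
    az policy state list \
      --filter "complianceState eq 'NonCompliant'" \
      --query "[].{Resource:resourceId, Policy:policyDefinitionName}" \
      --output table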

Threat Hunting

Upon completion of this video, you will be able to describe the importance of detecting anomalies and potential indicators of compromise.

The notion of threat hunting is very important to cybersecurity analysts. The idea is that through threat hunting, we can detect potential threats or indicators of compromise. This is a key activity of a security operations center, or SOC. As the name implies, a security operations center is a dedicated team for the monitoring, detection, and response to security incidents. Now, here’s an interesting statistic: according to IBM, the average time to detect a compromise is 280 days.

What is this telling us? It’s telling us we have an enormous amount of work to do to reduce that average. So, threat hunting is designed to identify indicators of compromise. The goal is to minimize attack damage and also to learn how to improve security controls against these threats in the future. Threat hunting can be manual, automated, or some combination of the two. In some cases, threats might not be detected by automated systems at all, such as a social engineering attempt made through a phone call to an employee. In IT environments, threat hunting will succeed only if we have endpoint detection on every device, whether it’s a smartphone, a mission-critical server, a router, a switch, or a Wi-Fi router.

We need to have logging enabled and have those logs forwarded to a centralized tool that can analyze the data, looking for potential security threats. That centralized environment would be a security information and event management, or SIEM, tool. Remember, the typical IT environment these days contains many different types of data sources whose logs you will feed into the SIEM system. Think about the fact that so many people now work remotely, such as from home. So, we need automation for this kind of big data analysis, because searching for anomalies can be complicated, especially if we’re not sure exactly what we’re looking for.
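
As a concrete example of getting endpoint logs to that central tool, a Linux host can forward its syslog output to a collector that the SIEM ingests. A minimal sketch, assuming rsyslog is in use and a hypothetical collector hostname of siem.example.internal listening on TCP port 514:

    # /etc/rsyslog.d/60-forward-to-siem.conf
    # Forward all facilities and severities to the central collector (@@ = TCP, @ = UDP)
    *.* @@siem.example.internal:514

After restarting rsyslog (for example, with systemctl restart rsyslog), this host’s events reach the collector, where the SIEM can correlate them with data from other devices.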

Talk about looking for a needle in a haystack. There are some common threat hunting models used both by automation tools and by cybersecurity analysts. The first is hypothesis-based. This is proactive: we have an idea of what threats are out there, we have threat definitions, and we can configure tools or take actions to detect and minimize the impact of those threats. Then we have an intel-based threat hunting model. This is more reactive. It’s based on indicators of compromise, or IoCs, as well as open-source intelligence, or OSINT, such as learning that a specific type of security vulnerability is being exploited on the network.

Intel-based systems, of course, will generate alerts once these things are detected. The third threat hunting model is custom. It means that we tweak the configuration of the tools we’re using, like a centralized SIEM tool, and perhaps customize how logging works on endpoint devices, so that we have a combination of both proactive and reactive threat hunting. Threat hunting response is enacted through incident response plans, plural, because you will have many of them. IRPs can take many different types of actions when a security incident or an indicator of compromise is detected.

Some of those actions might include blocking certain IP addresses or domain names from connecting in the future, removing detected malware, reimaging devices that might have been infected, or restoring corrupted or locked files. This is just a small sample of possible incident response plan actions; it could really be anything. Then there’s the SIEM tool itself, a centralized log ingestion service that analyzes data looking for threats. One example is Splunk, but there are many cloud-based solutions and many other tools out there as well.

So, there are many potential data sources: network packet captures, device logs, application logs, malware activity from endpoint devices, user logon activity. And these could be on-premises, in the cloud, or both, depending on what your enterprise uses. SIEM tools are designed for large-scale, big data analysis. They use machine learning algorithms over time to detect potential security incidents and to correlate events that might be related, including across different devices. And of course, they provide alert notifications so technicians know that there was a problem or that some kind of remediation has automatically taken place.

Threats need to be prioritized, so we have to think about the assets that have the highest value to the organization, which could be collected customer spending habits, client lists, trade secrets, intellectual property, copyrights, anything along those lines. We need to prioritize these because this is where we will focus our threat hunting. It also allows you to configure and tweak a SIEM solution so that it’s customized and, so to speak, “trained” through machine learning over time for focused indicator of compromise detection. Now, when you’re using threat hunting tools, you will be able to run queries.

It’s not just about being alerted when things are detected. The query language will vary; it really depends on which solution you are using, and a cybersecurity analyst must spend the time with that particular tool to learn its details. Queries, of course, are questions, and they can become quite complex depending on what you’re asking, so you can save them for future use.

That’s pretty much standard in any threat-hunting tool, whether you’re looking for simple things like “login failed” or “connection attempt”. Let’s not forget that the Cybersecurity Analyst+ exam will present scenarios where there is a way to improve the situation from a security perspective, and that could include certain types of threat-hunting queries to run in the future to prevent past incidents from recurring.
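
To give a feel for what a simple hunt looks like outside a SIEM, here’s a quick command-line sketch against a Linux authentication log. It assumes the typical OpenSSH “Failed password … from <IP> port …” message format; a SIEM would express the same idea in its own query language.

    # Count failed SSH logins per source IP, highest counts first
    grep "Failed password" /var/log/auth.log \
      | awk '{print $(NF-3)}' \
      | sort | uniq -c | sort -rn | head

A sudden spike from a single address, or repeated attempts against accounts that don’t exist, are exactly the kinds of anomalies worth saving as a reusable query.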

Critical Infrastructure

Cybersecurity analysts need to be aware of critical infrastructure and the threats to it. What is critical infrastructure exactly? It refers to the physical and virtual infrastructure required by society: the physical systems and the supporting IT systems that underpin the health and safety of societies all around the world, as well as nation-state economies.

So, securing critical infrastructure includes the use of what is referred to as operational technology, or OT. This is simply the hardware and software used to control industrial processes. Industrial control systems are referred to as ICS. This is where we have automation controlling things at the industrial level: equipment, robotics on a manufacturing line, pressure and valve sensors for water systems, and so on. But the frightening aspect is that critical infrastructure has been a target of attackers in recent years.

We’re talking at the cybersecurity level: things like the disruption of fuel pipelines, ransomware infecting devices in hospitals, attacks that stall cellular services, attacks on power grids. Think of other critical infrastructure, such as water supply systems for drinking water and dams to hold back water. Securing this type of thing is absolutely serious business. In an industrial environment, we have the notion of supervisory control and data acquisition, otherwise called SCADA. Collectively, this is a general umbrella term used to describe the equipment that controls industrial processes.

This means the hardware and the software working together. It includes things like control stations, such as on a factory floor, intermediary equipment that allows communication between the control stations, and industrial automation devices. Imagine, in an automobile plant, the robotics on the assembly line that assemble all the parts that result in a finished product, a vehicle. SCADA components include industrial devices such as those on factory assembly lines, pumps, valves, sensors, and electricity substations. In an industrial control environment, there is widespread use of programmable logic controllers, or PLCs. These are physical firmware devices that execute SCADA instructions on industrial devices. They also transmit telemetry and statistical data back to SCADA systems so that the work being done by the PLCs can be monitored.

All of this is ultimately controlled by technicians using SCADA software. The networks made up of components like PLCs are often referred to as controller area networks, or CANs. Collectively, all of these components working together are referred to as a distributed control system, or DCS. Pictured on the screen, we have a diagram of a SCADA network. On the right, we’ve got the PLCs, the programmable logic controllers, which would be connected to sensors, valves, or robotics to control some kind of industrial process. PLCs are physical devices with firmware that perform automation tasks, and as such they need to be secured in this type of environment; they are a target for attackers.

Now, ideally, our industrial monitoring stations, which could be running a combination of Linux, Windows, or Unix variants, are on the network and are ultimately used to monitor and control the PLCs. The network would also include database transaction servers to retrieve data such as productivity statistics about the PLCs. This type of network might be well served by not having any external connection to other networks such as the Internet, in which case we would call it an air-gapped network. On the security side of a SCADA environment, we’re talking about securing PLCs and the stations that manage them, as well as remote terminal units, which run some kind of operating system that needs to be patched.

Now, some of the more specialized operating systems running on this type of hardware include OS-9 and VxWorks. The management stations might run a variant of Windows or Linux, while the devices themselves run various specialized operating systems. Then we have to think about the device supply chain for this hardware. Who made the components? Are we bringing in a lot of this equipment from nations we consider hostile, with whom we don’t have great relationships? On an air-gapped industrial control network, you might also consider banning the use of USB thumb drives, since they might introduce malware or result in some kind of security breach.

Remember that the less attackers know, the better. So, if we have an air-gapped network, we also want to make sure we do not disclose the specific brands and versions of hardware and software being used on industrial control networks. And there are hardening options for the specific operating systems running on these specialized devices. Remember, we mentioned OS-9 and VxWorks above as specialized operating systems for things like PLCs.

If you’re using an OS like VxWorks, you should be blocking UDP port 17185. This service is normally enabled automatically and is used for remote debugging over the network, but unless you actually need that capability, it’s one of those services we should turn off, or block, in the interest of hardening the environment.
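
Two quick checks follow from that advice. First, from another host, you can verify whether the debug service appears exposed; second, if the device itself can’t be reconfigured, you can drop that traffic at an upstream Linux-based firewall. A sketch, assuming 192.0.2.50 is a hypothetical VxWorks device and that iptables is used on the perimeter device:

    # From a scanning host: check whether UDP 17185 (VxWorks WDB remote debug) appears open
    nmap -sU -p 17185 192.0.2.50

    # On the upstream Linux firewall: drop forwarded traffic destined for that port
    iptables -A FORWARD -p udp --dport 17185 -j DROP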

Course Summary

So, in this course, we’ve examined how to plan for and run vulnerability and penetration tests. We did this by exploring vulnerability assessments and assessment tools, port scanning with Nmap, and conducting network vulnerability assessment scans. We then tested web application security using the OWASP ZAP tool, discussed penetration testing, and navigated the Metasploit framework.

Finally, we used Burp Suite for HTTP sniffing, viewed cloud resource security compliance, and discussed threat hunting and critical infrastructure. In our next course, we’ll move on to explore how to apply security to the software development life cycle, or SDLC, as well as digital forensics.