CompTIA CASP+ CAS-004 – Chapter 06 – Utilizing Security Assessments and Incident Response Part 3
- Exploit Kits
Exploitation tools, sometimes just referred to as exploit kits, are groups of tools that are used to exploit security holes, and they're created for a large number of applications. These tools attack an application essentially the same way a hacker would, so they can be used for good or evil. Some of them are free; others are quite expensive.
That kind of depends on which one you choose. But an exploit framework does help to provide a consistent environment to create and run exploit code against a target. The three most widely used are Metasploit, which we mentioned earlier, an open source framework that ships with hundreds of exploits and payloads as well as a number of auxiliary modules; Canvas, which is sold on a subscription basis and ships with more than 400 exploits; and then Core Impact, another commercially available tool that uses agent technology to help the attacker gather information on a particular target.
- Host Tools
In some cases you’re concerned with assessing the security of a single host rather than the network in general. And so we’ve got a number of different tools that are appropriate for assessing host security. Let’s start with password crackers. These are just programs that do exactly what the name implies. They attempt to identify passwords and they can be used to mount several different types of password attacks, including dictionarybased attacks and brute force. In a dictionary attack the attacker is using a dictionary of common words to discover passwords. So you have an automated program that uses not the words themselves but the hash of the dictionary word and then it’s comparing that hash value to entries in the system password file. While the program comes with a dictionary, you can also use extra dictionaries that can be found on the internet.
It's actually pretty easy to defend against a dictionary-based attack, because you really just need to implement security rules: a password policy that says a password can't be a word found in a dictionary should combat it. So if you use complex, alphanumeric passwords, you should be good. Brute force password attacks are more difficult to perform because they work through all the possible combinations of numbers and characters. Having said that, they are extremely time consuming, so you do have that going for you. The best countermeasure against these password threats is just a complex password policy: we want to force people to change their passwords on a regular basis, we want to put in account lockout policies, and we want to make sure that passwords are always stored in an encrypted fashion.
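To give a rough sense of why brute force is so time consuming, here's a quick back-of-the-envelope sketch; the keyspace is the character set size raised to the password length, and the guess rate is just an assumed figure.

```python
charset = 26 + 26 + 10 + 32          # lowercase, uppercase, digits, common symbols
guesses_per_second = 1_000_000_000   # assumed attacker speed

for length in (6, 8, 10, 12):
    keyspace = charset ** length
    years = keyspace / guesses_per_second / (3600 * 24 * 365)
    print(f"{length}-character password: ~{years:,.2f} years to exhaust the keyspace")
```

Under these assumptions, moving from a short password to a long, complex one pushes the exhaustive search time from minutes to millennia, which is exactly why complex password policies are the best countermeasure.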
And then we can utilize these password cracking tools to discover weak passwords. There are a number of them. Cain and Abel is probably one of the more common password cracking tools; essentially it just hits a password file and then can produce usernames and passwords for you. Another example is a program called John the Ripper that works with Unix, Linux, and Macintosh systems. So you can easily find a number of password crackers out there to use to determine whether the passwords on your systems are secure. Like network vulnerability scanners, we have host scanners as well. They can scan for vulnerabilities, but only on the particular machine where you have the tool installed, although some scanners can do both. The most common one for Microsoft is the Microsoft Baseline Security Analyzer (MBSA), which has been around for quite some time.
You can scan one or multiple hosts and it will return a list of all the vulnerabilities and prioritize them, looking for default accounts that are still enabled, weak passwords, missing operating system or application patches, et cetera. And then a lot of your local command line tools can be used for these purposes as well. These are available in Windows as well as Linux and Unix. They're not as user friendly as a lot of the automated tools, but they're preferred by many who have a bit more experience in the field because they have a lot more flexibility. They do require more knowledge and more background, but they're capable of doing more. Netstat is the network status command, and you can use it to see what ports are listening on a TCP/IP-based system.
If you do netstat -a, it shows you all the ports. There are a number of different options. If you don't use any switches and just type netstat and hit Enter, it displays the current connections; -b shows you the executable that is responsible for a particular connection; and we can take a look at the status of each connection. Is it in a listening state? Is it in the midst of a TCP handshake? Is it an established open connection? All of that can be gleaned right from the netstat program, as shown in the sketch below.
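As a small illustration, this sketch shells out to netstat on a Windows host and filters for listening ports; it assumes netstat is on the PATH (-a shows all connections, -n keeps addresses numeric, -o adds the owning process ID).

```python
import subprocess

# Run "netstat -ano" and print only the sockets that are in the LISTENING state.
result = subprocess.run(["netstat", "-ano"], capture_output=True, text=True)
for line in result.stdout.splitlines():
    if "LISTENING" in line:
        print(line.strip())
```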
Ping makes use of the ICMP protocol to test connectivity between two devices. It's probably one of the more used commands in the TCP/IP suite: it sends four ICMP echo requests to a system and expects echo replies. It's mostly used as a troubleshooting utility, because it can indicate whether a host can be reached and how long it took to get a response from that system. You can do ping -a to resolve names. Some of the possible outputs are destination unreachable, which usually means there's some sort of routing problem, and request timed out, which means you didn't get a response in time or a firewall might be blocking it. Tracert and traceroute are used to trace the path of a packet through the network. Typically this is done from a troubleshooting perspective to see where packets are being dropped, as well as the length of time it takes a packet to cross each router. But these commands can also be used by other programs like Nmap to record the path and present it graphically, which is often easier to understand. I think we mentioned before that, by default, traceroute will try to display DNS names as opposed to just IP addresses, so it can be used as a form of reconnaissance.
ipconfig, and its counterpart ifconfig on Linux and Unix, is used to view the IP configuration of a device: DHCP servers, DNS servers, MAC addresses, depending on the switches that you add. You can also release a DHCP-assigned address and renew it, and you can eliminate corrupted DNS cache entries by using ipconfig /flushdns, and so on. And then, like I said, ifconfig is the Linux/Unix counterpart. You have nslookup to resolve DNS names, the dig command for Linux and Unix, and then you have Sysinternals.
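The kind of lookup nslookup and dig perform can be sketched with nothing more than the Python standard library; the hostname here is just an example.

```python
import socket

host = "www.example.com"                   # example hostname
ip = socket.gethostbyname(host)            # forward lookup: name -> IP
print(f"{host} resolves to {ip}")

try:
    name, _, _ = socket.gethostbyaddr(ip)  # reverse lookup: IP -> name
    print(f"{ip} reverses to {name}")
except socket.herror:
    print(f"No reverse (PTR) record found for {ip}")
```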
Sysinternals is a collection of more than 70 Windows tools that can be used for troubleshooting and security-related issues. They used to be third party, but Microsoft now makes them available on the TechNet website. All of the tools collectively are, I think, a whopping four megabytes or so, so you can download the entire suite, and there's documentation for each one. We've got tools that are security related, like enumerating who has access to certain directories, files, or registry keys, or targeting a particular user and displaying the access that user has; another one, called Autoruns, will display the programs that are set to start up automatically; and you have commands that are used to identify who's currently connected, those kinds of things. And again, that's just some of them. There are a whole lot of performance-related tools as well, so you want to keep these in mind because they can be very useful in testing the vulnerabilities and configuration of a particular host machine.
- Additional Host Tools
There are some additional host tools that get us kind of out of the command line (well, the first one doesn't). File integrity monitoring is important because sometimes malicious software and malicious individuals will make unauthorized changes to files. In a lot of cases these are data files, but in other cases they might be system files. With data files, unauthorized alterations are undesirable, but changes to system files can compromise the entire system. So file integrity software will generate a hash value of each system file and then verify that hash value at regular intervals. The entire process is automated, and sometimes you can even automatically replace the corrupted system file. The basic hash-and-verify idea is sketched below.
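Here's a minimal file integrity monitoring sketch: build a baseline of SHA-256 hashes, then re-check later and report any file whose hash has changed. The paths and baseline filename are illustrative, not taken from any real product.

```python
import hashlib
import json
import os

def hash_file(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths, baseline_file="fim_baseline.json"):
    """Record the known-good hash of each monitored file."""
    baseline = {p: hash_file(p) for p in paths if os.path.isfile(p)}
    with open(baseline_file, "w") as f:
        json.dump(baseline, f, indent=2)

def verify(baseline_file="fim_baseline.json"):
    """Re-hash every monitored file and flag anything that changed."""
    with open(baseline_file) as f:
        baseline = json.load(f)
    for path, known_good in baseline.items():
        current = hash_file(path) if os.path.isfile(path) else None
        if current != known_good:
            print(f"INTEGRITY ALERT: {path} has changed or is missing")
```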
There are third-party tools out there; one popular one is Tripwire, but Windows offers the System File Checker (SFC), a command-line utility that will do the same thing. sfc /scannow is the typical command, which just tells it to scan all protected system files. You can scan at boot, you can scan specific files, and you can use switches if you just want to verify; if you don't, then by default it attempts to repair. Log analysis tools are going to be important because logs record just about anything that is happening on the system. How you go about viewing those logs is going to be up to you. One graphical utility in Windows is Event Viewer. Typically we would be going in and looking at the security logs to see if accounts were compromised, if an account has been locked out, if somebody's attempting a dictionary-based attack, or if somebody's logging in from a different location or at an odd time; Event Viewer can be used to do that. And there are a number of other tools as well. Windows PowerShell is built into the operating system, but that's just a CLI instead of a GUI.
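From the command line, recent Security log entries can also be pulled with the built-in wevtutil tool, as in this sketch (reading the Security log generally requires administrative rights; /c is the number of events, /rd:true returns newest first, /f:text gives readable output).

```python
import subprocess

# Query the ten most recent Security log events in plain-text form.
result = subprocess.run(
    ["wevtutil", "qe", "Security", "/c:10", "/rd:true", "/f:text"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```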
Loggly is a tool that has both free and monthly paid plans. This one, as well as some of these other third-party tools, just makes it a little bit easier to weed out all the noise, along with providing the option of doing full-text searches in the logs. Logentries is a cloud-based system, again with a free version and a paid version, and you can filter logs in real time. GoAccess is a terminal-based tool that's open source and free to use; it generates reports in either CSV or HTML format, and it also has free and monthly paid plans. And Graylog is another open source tool that has a lot of large customers, like Cisco, and makes it very easy to parse all the log data from a particular source.
And then you have antivirus. Obviously antivirus tools are important. You have cloud antivirus products that don't run on the local computer but in the cloud; I think we mentioned those before. They have certain advantages and disadvantages. The advantages are the smaller performance footprint and the lack of any requirement to keep updating the local antivirus software. The disadvantages are the dependency on the Internet connection, and that they typically don't scan the whole computer, just core Windows files. So you need to decide whether you want to maintain local antivirus software or utilize cloud-based software.
- Physical Security Tools
As we mentioned earlier, if you don't have physical security, other forms of security are quite useless. And so we're going to end this topic by looking at some physical security tools. Lock picks are just tools used to test the ability of your physical locks: can they withstand somebody picking them? These are the same tools that are used by locksmiths to open a lock when you hire them to do so. That's actually one of the reasons a lot of organizations have moved away from physical locks; these tools in the hands of somebody who knows what they're doing are incredibly successful in many cases. So if you use physical locks, it's often a good idea to check them to see if they're susceptible to this. You might even want to hire a locksmith to see if they can open those doors.
That will tell you whether or not a professional is able to do it. As for lock types, door locks can be mechanical or electronic. Electronic locks, or cipher locks, use a keypad that requires the correct code to open the lock. These are programmable, and organizations should use them because they're better than physical key locks, but they should also change the codes frequently. Another option is, of course, the proximity authentication device, where you have a swipe card. Those devices contain electronic access control components: an electromagnetic lock, a credential reader, and a closed-door sensor. As for some other types, a warded lock is a type of lock that has a spring-loaded bolt with a notch in it; it has wards, or metal projections, inside the lock which the key must match in order to open it.
A tumbler lock is a type of lock that has more moving parts than the warded lock; the key essentially raises each small metal piece to the correct height. Both of these are physical mechanical locks that are susceptible to lock picking tools. And then you have combination locks, where you have to rotate the lock in a pattern that lines the tumblers up and opens the lock; that's of course a little bit different and not as susceptible. Malicious individuals can use radio frequency identification, or RFID, tools to try to steal proximity badge information: if somebody's not really paying attention, they walk near a concealed device, for instance. One example is the Tastic RFID Thief by Bishop Fox, which targets specific low-frequency badge systems used for physical security.
And so you really have to be careful. If these types of systems are used, then penetration testing should include testing the vulnerability of those systems and the attempted capture of RFID credentials, because if they are captured, that can lead to some serious issues. And then finally, an infrared camera is just a camera that forms an image using infrared radiation and can capture images in the dark. They can also detect motion in the area, so they are a great choice for performing physical security assessments. And as we said, physical security assessments are going to be very important if we want a complete look at organizational security.
- Topic C: Incident Response and Recovery
In this topic, we're going to look at incident response and recovery. This is very important because, in order to determine whether an incident has occurred, an organization has to first document the normal actions and performance of the system. That's called a baseline, and it's the baseline to which all other activity is compared. We have to make sure that we're capturing information during both high-activity and low-activity times so that we can always effectively identify the abnormal. But then we also need to establish procedures that document exactly how we should respond to events. I think I said before that we can't completely eliminate security events, but we can ensure that we respond to them in the appropriate way. And after that initial response, after the incident has been stopped, then we need to understand how to go through the process of documenting and analyzing, as well as recovering systems back into their operational state.
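As a toy illustration of comparing current activity against a documented baseline, this sketch flags any metric that deviates from the baseline mean by more than three standard deviations; the sample numbers are made up.

```python
from statistics import mean, stdev

baseline = {  # samples captured during normal high- and low-activity periods
    "logons_per_hour": [40, 55, 38, 60, 45, 12, 9, 15],
    "outbound_mb_per_hour": [200, 350, 180, 400, 220, 60, 40, 75],
}
current = {"logons_per_hour": 240, "outbound_mb_per_hour": 310}

for metric, samples in baseline.items():
    m, s = mean(samples), stdev(samples)
    if abs(current[metric] - m) > 3 * s:
        print(f"ANOMALY: {metric}={current[metric]} (baseline {m:.0f} +/- {s:.0f})")
```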
- E-discovery
The term e-discovery is used when evidence is recovered from electronic devices, and a lot of this information is very volatile in nature. So it's important that, as security pros, we know how to do this and receive the appropriate training to make sure that evidence is collected and preserved in the proper manner. This is going to involve the collection of all data regarding an incident, whether it's written or digital. In a larger enterprise, when e-discovery occurs, security professionals probably need to focus on obtaining all the evidence quickly, usually within about 90 days. In addition to the time factor, larger enterprises are going to have larger quantities of data residing in multiple physical locations. It may be fairly simple to provide an investigator with all the data, but it can be difficult to search through that data to find the specific information that's needed for the investigation. So another thing we would want to do in larger organizations is implement some sort of indexing technology that's going to help any searches happen more quickly. Some methods that assist e-discovery are electronic inventory and asset control. An asset is any item of value to the organization.
So that does include physical devices and digital information, among other things. But we need to be able to recognize when an asset is stolen, and if you don't have an item count, you don't have an inventory system, or you have an inventory system that's not kept up to date, then it's going to be nearly impossible to do that. So all of your equipment should be inventoried, and all the relevant information about a device needs to be maintained and kept up to date: serial numbers, model numbers, OS versions, responsible personnel, et cetera.
This doesn't just include computer systems. It also includes security devices like firewalls, NAT devices, and intrusion detection systems; in fact, those should probably receive the most attention because they relate to physical and logical security. And beyond that, for other devices that might be easily stolen, like your mobile devices, it would be very useful to use electronic inventory and asset control. Data retention policies are procedures that are put in place for the retention and destruction of data. Now, we do need to follow local, state, and federal regulations and laws, but beyond that, we should have proper procedures documented to make sure that information is maintained for the required amount of time to prevent any sort of fines or regulatory issues, and that at the end of that time, we go through all of the appropriate steps to dispose of that data.
And in order for these data retention policies to be effective, you really need to have the data categorized properly, because each category of data may have different retention and destruction policies. In a lot of organizations, data is one of the most critical assets. When you're recovering from a disaster, an operations team is going to have to determine what data is backed up, how often it's backed up, and how it's backed up, meaning the method that we're going to use. But we also need to determine exactly how it's stored, including the data that's in use and the data that's backed up. Data owners are responsible for determining access rules, lifecycle, and data usage, but security professionals are often responsible for the recovery and storage of backups. The main responsibility of the data or information owner is to determine the classification level of the information he or she owns and then to protect that particular data. So data ownership is important because owners define those levels. Now, they don't usually handle the implementation of data access controls, but they do define them. Usually this role is filled by somebody who understands the data because they belong to a particular business unit, and then the data custodian, not the data owner, actually implements the controls after they are determined by the data owner. The appropriate policies need to be put in place for data handling. When data is stored on servers and is actively being used, it's usually controlled by using ACLs and implementing group policies and other data security measures like data loss prevention, for instance.
Once it's archived to backup media, data handling procedures are just as critical. And then you have data archiving, which is storing data so that it remains historically accessible. We also want to talk about legal holds. An organization needs to have policies regarding any legal holds that may be in place. Legal holds often require an organization to maintain archived data for longer periods of time. So anything that's actually placed on legal hold needs to be properly identified, and the appropriate security controls should be put in place to make sure that the data can't be deleted or tampered with. Again, all of this is very relevant to e-discovery, because it controls who owns the data, who can access the data, where the data is stored, and for how long it is stored.
- Data Breach
A data breach is any incident where information considered to be private or confidential is released to unauthorized parties, and an organization has to have a plan in place to detect and respond to that type of incident in the correct manner. Just having an incident response plan is not going to be enough. The organization also has to have trained personnel who are familiar with the incident response plan and who actually have the skills to respond to the event if it occurs. And it's important that we follow certain procedures; these are the procedures to know for the CASP+ exam. Number one is detect the incident.
Two, respond to it. Three, report the incident to the appropriate personnel. Four, recover from the incident. Five, remediate all components affected by the incident, making sure that all traces of the incident are removed. And six, review the incident itself and document all the findings. If something goes undetected or unreported, you can't take steps to stop it while it's occurring, and you can't prevent it in the future. So the actual investigation of the incident is, of course, going to require step one, detection, but then we need to move on from there as quickly as possible, following policies and procedures.
- Incident Response Process
So let's take a brief look at these different processes. The first process is detection and collection. We have to identify the incident, lock down or secure the attacked systems, and identify the evidence. Identifying the evidence is typically done by reviewing audit logs, analyzing a user complaint, and analyzing detection mechanisms and the status of the system. Now, initially we may be unsure about what evidence is important, and preserving evidence that you don't need is always going to be better than wishing you had evidence that you did not retain. Identifying the attacked systems is part of the process as well, and in some cases we can identify that the attack originated from a particular place or targeted a particular place. Now, I may not be able to fully capture the attacker's system, but I can capture whatever data is possible: IP addresses, usernames, other identifiers. And we always want to preserve evidence, collect evidence, make system images, implement a chain of custody, document the evidence, record timestamps, et cetera; a small sketch of that evidence-preservation step follows below.
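Here's a minimal sketch of that evidence-preservation step: hash a captured disk image and record who collected it and when, so integrity and chain of custody can be demonstrated later. The filenames and the custodian name are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path):
    """Return the SHA-256 digest of an evidence file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

evidence_item = {
    "item": "workstation-042 disk image",
    "file": "ws042.img",                           # hypothetical image file
    "sha256": sha256_of("ws042.img"),
    "collected_by": "J. Analyst",                  # hypothetical custodian
    "collected_at": datetime.now(timezone.utc).isoformat(),
}

# Append the record to a simple chain-of-custody log, one JSON entry per line.
with open("chain_of_custody.json", "a") as log:
    log.write(json.dumps(evidence_item) + "\n")
```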
Data analytics is the second process. Any data that's collected as part of incident response needs to be analyzed properly by a forensic investigator or somebody who's trained to do that in a similar fashion. In addition to that, if you've got somebody trained in big data analytics, they may need to be engaged to help with the analysis; it's going to depend on the amount of data that needs to be analyzed. But after we've preserved and collected that evidence, the investigator then needs to examine and analyze the evidence, looking for characteristics like timestamps and identification properties. The next step would be mitigation: the immediate countermeasures that are performed to stop the data breach in its tracks. So we've detected the incident, we've collected the evidence, and now I need to take the appropriate actions to mitigate the effect of the incident and to isolate the affected systems; that's what we have under mitigation. Then comes the minimize step.
I'm taking steps to try to minimize the effect of that event. In most cases, this includes being open and responsive to the data breach immediately after it occurs. We're talking about lost data here, so in some cases we're trying to minimize the damage to the organization's reputation, in other cases we're trying to minimize the damage to physical assets, and we might be trying to do both. And then we're also going to isolate the affected systems, because that's a big part of the incident response to a data breach. Depending on the level of breach that has occurred and how many assets are affected, you might even have to suspend some services. Then comes recovery, or reconstitution.
So once we've stopped it, it's time for the organization to recover data, and we're trying to return operations to a state that is as normal as possible. Obviously, the ultimate goal is to fully recover the system, but in some cases it's not possible to recover all the data, just due to the nature of how things are backed up and recovered, where they're stored, and how available that data is. Most organizations are going to have some form of SLA for what the IT or security department is going to provide for the rest of the organization, and so you need to make sure that you can restore backups within that particular time frame.
We also need to understand the recovery procedures for this. Then we have the response: once the event has been analyzed and investigated, we're going to respond to it. And then finally, disclosure. Once you've fully understood the data breach, you need to record all of your findings and disclose them to the organization, and the organization will then disclose that to the general public or to the customer base, depending on the particular scenario. So those are the basics of incident response processes and what we would go through in order to implement this and effectively respond to security issues within the organization.
- Chapter 06 Review
In this chapter, we looked at performing security assessments and incident response and recovery. We discussed different types of assessments that are important to your organization because they allow us to scan for and identify vulnerabilities and then, in some cases, even try to exploit those vulnerabilities through pen testing. We looked at a number of different security assessment tools, from port scanners to packet sniffers to vulnerability analyzers, et cetera, and talked about how, as security professionals, we would use those tools to identify and fix problems before an attacker is able to exploit them. We then finally looked at incident response and recovery as an overview of how we would go through the process of responding to the inevitable security events in a way that helps us recover from those events quickly and learn as an organization.
- Course Closure
Well, this concludes the video course for the CompTIA Advanced Security Practitioner (CASP+) exam. In this course, we started with understanding risk assessment and the important role that it plays in the life of the security professional: how to take vulnerabilities and threats and use those to calculate the risk to a particular entity or a particular technology within your network, and then how to prioritize those risks and how that assists us in determining how we're going to handle them, whether we're going to avoid, transfer, or mitigate them, et cetera. Then we discussed implementing network and security components and architecture, in other words, the nuts and bolts of security: how we're going to go about actually securing our network through the use of firewalls, unified threat management, ACLs on routers, network access control, et cetera. We discussed implementing advanced authentication techniques and cryptography. Authentication is required, and in many cases it's something where we need to implement multiple controls.
So we initiate multifactor authentication to ensure that only authorized users are accessing our network, and in some cases, in order to achieve the confidentiality and integrity of data, we have to implement cryptography: encryption algorithms and hashing algorithms. We then discussed implementing security for systems, applications, storage, and mobile devices. We talked about hardening those systems through the use of anti-malware and patch management systems, and the use of various components within application code, as well as mobile devices, to ensure the security of those devices. We talked about the use of virtualization and cloud computing and how they bring some unique things to the table in terms of security for host systems and the virtualization platform, as well as security of data in a public cloud scenario, data that's stored outside of your network.
And then we finally discussed security assessments and incident response: how to go through and audit the network, how to do vulnerability testing and penetration testing to identify security holes that may still exist and need to be plugged, and then finally, what to do when incidents happen. They are going to occur, and how we respond to them says a lot about the overall security of the network. As I said, that concludes the course, and it's my hope that the information in this course has been beneficial to you, that it will help you in the real world as a security professional, and also help you pass that CASP+ exam. Again, my name is Patrick Loner. It's been my pleasure to be your instructor on this course, and we'll see you next time.