All security standards and corporate-governance compliance policies, such as PCI DSS, GCSx CoCo, SOX (Sarbanes-Oxley), NERC CIP, HIPAA, HITECH, GLBA, ISO 27000 and FISMA, require devices such as PCs, Windows servers, Unix servers, firewalls, Intrusion Protection Systems (IPS) and routers to be secured so that sensitive data is protected.

There are a number of buzzwords used in this area: what do 'security vulnerabilities' and 'device hardening' actually mean? Device 'hardening' requires known security 'vulnerabilities' to be eliminated or mitigated. A vulnerability is any weakness or flaw in the design, implementation or administration of a system that provides a mechanism for a threat to exploit a weakness in a system or process. There are two main areas to address in order to eliminate security vulnerabilities: configuration settings, and software flaws in program and operating-system files. Eliminating vulnerabilities requires either 'remediation' – typically a software update or patch for program or operating-system files – or 'mitigation' – a configuration-settings change. Hardening is required equally for servers, workstations and network devices such as firewalls, switches and routers.

How do I identify vulnerabilities? A vulnerability scan or external penetration test will report on all vulnerabilities applicable to your systems and applications. You can buy in third-party pen-testing and scanning services: pen tests, by their very nature, are conducted externally via the public internet, since this is where any threat would be exploited from. Vulnerability scanning services, by contrast, need to be delivered on site. This can be performed either by a third-party consultant with scanning hardware, or by purchasing a 'black box' solution whereby a scanning appliance sits permanently on your network and scans are provisioned remotely. Of course, the results of any scan are accurate only at the time of the scan, which is why solutions that continuously track configuration changes are the only real way to guarantee that the security of your IT estate is maintained.
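As an illustration only, the most basic building block of a vulnerability scan is probing a host for unexpectedly open service ports. The sketch below is a minimal, hypothetical Python example; real scanning products go much further, fingerprinting service versions and checking them against vulnerability databases:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    An unexpectedly open port is the simplest kind of finding a
    vulnerability scan will report.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Example: check a handful of well-known service ports on a host
findings = scan_ports("127.0.0.1", [22, 80, 443, 3389])
print(findings)
```

Anything reported that is not on the approved-services list for that host would then be a candidate for remediation or mitigation.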

What is the difference between 'remediation' and 'mitigation'? 'Remediation' of a vulnerability results in the flaw being removed or fixed permanently, so the term generally applies to any software update or patch. Patch management is increasingly automated by the operating system and the product developer: as long as you implement patches when they are released, in-built vulnerabilities will be remediated. As an example, the recently reported Operation Aurora, classified as an Advanced Persistent Threat (APT), succeeded in infiltrating Google and Adobe. A vulnerability within Internet Explorer was used to plant malware on targeted users' PCs that allowed access to sensitive data. Remediation of this vulnerability means 'fixing' Internet Explorer with the patches released by Microsoft. Vulnerability 'mitigation' via configuration ensures that vulnerabilities are disabled. Configuration-based vulnerabilities are no more or less potentially damaging than those needing to be remediated via a patch, although a securely configured device may well mitigate a program- or operating-system-based threat. The biggest issue with configuration-based vulnerabilities is that they can be reintroduced or re-enabled at any time: just a few clicks are needed to change most configuration settings.

How often are new vulnerabilities discovered? Unfortunately, all the time! Worse still, often the only way the global community discovers a vulnerability is after a hacker has discovered and exploited it. Only once the damage has been done and the hack traced back to its source can a preventative course of action be formulated, be it a patch or a configuration change. There are various centralized repositories of threats and vulnerabilities on the web, such as the MITRE CCE lists, and many security product vendors compile live threat reports or 'storm center' websites.

So all I need to do is work through the checklist and then I am secure? In theory, but there are literally hundreds of known vulnerabilities for each platform, and even in a small IT estate, verifying the hardened status of each and every device is an almost impossible task to perform manually.

Even if you automate vulnerability scanning with a scanning tool to identify how hardened your devices are before you start, you will still have work to do to mitigate and remediate vulnerabilities. But that is only the first step: consider a typical configuration vulnerability, for example, that a Windows server should have the Guest account disabled. If you run a scan, identify where this vulnerability exists on your devices, and then disable the Guest account on each of them, you will have hardened those devices. However, if another user with administrator privileges then accesses those same servers and re-enables the Guest account for any reason, the servers will be left exposed. Of course, you will not know that a server has been rendered vulnerable until the next scan, which may not be for another three months, or even twelve. There is one further factor not yet covered, namely how systems are protected against an insider threat – more on this later.
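The drift problem described above can be illustrated with a minimal sketch: record a baseline of hardened settings, then periodically compare the live configuration against it. The setting names below are hypothetical placeholders; a real tool would read the actual values from the Windows registry, WMI or a device's running configuration:

```python
# Hypothetical hardened baseline: setting names are illustrative only.
baseline = {
    "guest_account_enabled": False,   # hardening policy: Guest account disabled
    "smb1_enabled": False,            # legacy protocol switched off
    "password_min_length": 12,
}

def audit_settings(baseline, current):
    """Return each setting that has drifted, mapped to (expected, actual)."""
    return {
        name: (expected, current.get(name))
        for name, expected in baseline.items()
        if current.get(name) != expected
    }

# Simulate an administrator quietly re-enabling the Guest account
current = dict(baseline, guest_account_enabled=True)
drift = audit_settings(baseline, current)
print(drift)  # → {'guest_account_enabled': (False, True)}
```

Run continuously rather than quarterly, this kind of comparison is what closes the gap between scans.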

So is strict change management essential to ensure we remain secure? Indeed, Section 6.4 of the PCI DSS describes the requirements for a formally managed change-management process for this very reason. Any change to a server or network device may have an impact on the device's 'hardened' state, and it is therefore imperative that this is considered when making changes. If you are using a continuous configuration-change-tracking solution, an audit trail will be available, giving you 'closed-loop' change management: the detail of the approved change is documented alongside the exact details of the change that was actually implemented. Furthermore, changed devices are re-assessed for vulnerabilities and their compliance status confirmed automatically.

What about insider threats? Cybercrime has joined the ranks of organized crime, which means it is no longer just about stopping malicious hackers out to prove their skills as a fun hobby. Firewalls, intrusion protection systems, antivirus software and fully implemented device hardening will still not stop, or even detect, a rogue employee working as an 'inside man'. This kind of threat could see an employee with administrator rights introducing malware into otherwise secure systems, or even having back doors programmed into core business applications. Similarly, the advent of Advanced Persistent Threats (APTs), such as the much-publicized 'Aurora' hacks, has seen social engineering used to dupe employees into introducing 'zero-day' malware. 'Zero-day' threats exploit previously unknown vulnerabilities: a hacker discovers a new vulnerability and formulates an attack process to exploit it. The job is then to understand how the attack happened and, more importantly, how to remediate or mitigate future occurrences of the threat. By their very nature, antivirus measures are often powerless against 'zero-day' threats. In fact, the only way to detect these kinds of threat is to use file-integrity monitoring technology. "All the firewalls, intrusion protection systems, antivirus and process-whitelisting technology in the world will not save you from a well-orchestrated insider attack in which the perpetrator has admin rights to key servers or legitimate access to application code – file-integrity monitoring used in conjunction with tight change control is the only way to properly govern sensitive payment-card systems." Phil Snell, CTO, NNT

See our other whitepaper, 'File Integrity Monitoring: The PCI DSS Last Line of Defense', for more background on this area, but in brief, any change can be significant in compromising the security of a host. The most basic means of detecting such changes is to monitor file sizes and attributes.
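A minimal sketch of this most basic level of monitoring, tracking only file size and modification time (a real FIM product would also cover permissions, ownership and, as discussed below, hash values):

```python
import os
import tempfile

def snapshot(paths):
    """Record (size, modification time) for each monitored file."""
    return {p: (os.path.getsize(p), os.path.getmtime(p)) for p in paths}

def changed(before, after):
    """Return the files whose size or timestamp differs between snapshots."""
    return [p for p in before if before[p] != after.get(p)]

# Demo: tamper with a temporary file and show that the change is detected
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("original")
    path = f.name
before = snapshot([path])
with open(path, "a") as f:
    f.write(" tampered")          # file grows, so the snapshot differs
after = snapshot([path])
print(changed(before, after))
os.remove(path)
```

As the next section explains, size and attributes alone are not enough, since an attacker can pad a file back to its original size.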

However, since we are seeking to prevent some of the most sophisticated types of hack, we need to introduce a completely infallible means of guaranteeing file integrity. This calls for each file to be given a 'DNA fingerprint', typically generated using a secure hash algorithm. A secure hash algorithm, such as SHA1 or MD5, produces a unique hash value based on the contents of the file, ensuring that even a single character changing in a file will be detected. This means that even if a program is modified to expose payment-card details, but the file is then 'padded' to make it the same size as the original and all its other attributes are edited to make the file look and feel the same, the modifications will still be exposed. This is why the PCI DSS makes file-integrity monitoring a mandatory requirement, and why it is increasingly viewed as a component as vital to system security as firewall and antivirus defenses.

Conclusion Device hardening is an essential discipline for any organization serious about security. Moreover, if your organization is subject to any corporate-governance or formal security standard, such as PCI DSS, SOX, HIPAA, NERC CIP, ISO 27K or GCSx CoCo, then device hardening will be a mandatory requirement.
– All servers, workstations and network devices should be hardened via a combination of configuration settings and software patching
– Any change to a device may adversely affect its hardened state and render your organization exposed to security threats
– File-integrity monitoring must also be employed to mitigate 'zero-day' threats and the threat from the 'inside man'
– Vulnerability checklists will change regularly as new threats are identified
