Author: Site Editor | Publish Time: 2022-05-20
Today, governments face more cybersecurity threats than ever before. From hacktivists to organized crime groups, cyber attackers work around the clock to compromise governments' technology, infrastructure and populations. Terrorist groups are actively seeking to exploit software vulnerabilities, as evidenced by recent NSA investigations into ISIS-linked activity and the BlueKeep vulnerability, the latter of which targeted legacy Windows-based systems. Even amateur hackers pose a growing threat, building kits that leak all kinds of data into the public domain.
Not surprisingly, cybersecurity is a top concern for chief information officers. Research firm Gartner Inc. projects that global cybersecurity spending, $90 billion in 2017, will climb to $1 trillion by 2022. Meanwhile, the U.S. Council of Economic Advisers reports that malicious cyber activity could cost the economy as much as $109 billion a year, and IBM estimates the average loss per attack at $3.86 million.
With so much at stake, no government organization can afford to have a vulnerable infrastructure. However, combating cyber threats against a backdrop of tight budgets and resources is a serious challenge for government agency staff.
One of the management challenges facing government agencies is that data center environments have become increasingly complex over the past decade. Workloads are running locally, in public and private clouds, and at the edge. This diversity creates greater security risks.
Understandably, many CIOs are concerned about where to place critical workloads and want end-to-end security across environments. But the reality is that security risks exist at every layer of the data center stack. Hackers have recognized this: having long targeted the application layer, they are now escalating their attacks on hypervisors, boot drivers, firmware and even hardware.
While government agencies have been working to reduce security risks to personal computers, they are beginning to realize the need to shift their attention to infrastructure. Traditional data center protections (such as detection and isolation software) or perimeter controls (such as firewalls) are no longer sufficient. And by the time a problem is discovered, damage is likely to have already been done.
To protect data at rest, in transit and in use, IT administrators must start at the processor foundation, fully understand the organization's risks and establish controls accordingly. Security must be designed into the data center architecture from the outset, not bolted on afterward through an assortment of point products.
Data center attacks at the application layer are comparatively easy to identify; the more serious threats are harder to detect and remediate. Traditional detection solutions are poorly suited to spotting malware that penetrates down to the hardware components at the foundation of the stack, and some components expose the stack to additional vulnerabilities. For example, hypervisors are designed to share memory and cores efficiently among virtual machines, but that very resource sharing widens the attack surface of the hypervisor and the stack above it.
A chain of trust is the key to building enhanced security from the very first boot, and it begins with the Trusted Platform Module (TPM). The TPM is a dedicated chip on the machine rather than software, and it stores cryptographic keys bound to the device itself. Establishing this root of trust means that each layer of the stack (boot, virtualization, libraries, services and applications) is measured and checked, proving the validity of every layer.
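The measurement step above can be sketched as a hash chain in the style of a TPM PCR "extend" operation, where each layer's measurement is folded into a running register so that any tampered layer changes the final value. This is a minimal illustration, not real TPM code; the layer names are hypothetical.

```python
import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    """Mimic a TPM PCR extend: new value = SHA-256(old value || measurement)."""
    return hashlib.sha256(register + measurement).digest()

def measure_boot(layers: list[bytes]) -> bytes:
    """Fold each layer's hash into the register, in boot order."""
    register = b"\x00" * 32  # PCRs start zeroed at power-on
    for layer in layers:
        register = extend(register, hashlib.sha256(layer).digest())
    return register

# Hypothetical stack, measured bottom-up: boot, virtualization, libraries, services, app.
good_stack = [b"bootloader-v1", b"hypervisor-v1", b"libs-v1", b"services-v1", b"app-v1"]
golden = measure_boot(good_stack)  # recorded once from a known-good boot

# A later boot with a tampered hypervisor produces a different final value,
# even though every layer above it is unchanged.
tampered_stack = [b"bootloader-v1", b"hypervisor-EVIL", b"libs-v1", b"services-v1", b"app-v1"]
check = measure_boot(tampered_stack)

print(check == golden)  # False: the chain of trust exposes the change
```

Because each value depends on everything measured before it, a compromise anywhere in the chain invalidates every subsequent measurement, which is what lets a verifier attest the whole stack from a single final value.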
Until now, protecting infrastructure and application stacks in this way has been hard to achieve because of performance, complexity and cost. The technology, and the conviction to deploy it, now exist.