Zero Trust Explained: A Comprehensive Guide
One concept is gradually gaining traction across the web, government, and the corporate world: “Zero Trust.”
However, unlike other buzzwords, this one has the potential to be one of the most transformative paradigms in cybersecurity—so much so that U.S. federal agencies and enterprises are moving to rapidly adopt Zero Trust.
But why?
Because it encourages hypervigilance in safeguarding their most critical assets. It isn't exclusive to governments and state secrets, however: Zero Trust can benefit organizations of any size.
To put it to work, you need to know the basics. Discover the inner workings of Zero Trust and what it can do.
In This Article
- What Is Zero Trust?
- A Simple Explanation of Zero Trust
- Establishing the “Trust” in Zero Trust
- How Does Zero Trust Protect Resources and Data?
- What Is ZTNA?
- What Is Zero Trust Architecture?
- How Do I Create an Application Perimeter for Zero Trust?
- What Is Micro-Segmentation?
- What Is an Application Perimeter?
- What's the Best Way to Create an Application Perimeter for Zero Trust?
- How Can I Use Zero Trust in the Cloud?
- How Does Zero Trust Apply to OT and Critical Infrastructure?
- Are Governments Paying Attention to Zero Trust?
- Simplify Your Zero Trust Implementation
What Is Zero Trust?
Zero Trust is a strategy for securing critical resources, such as databases, application servers, or devices and machines—and the data stored on them—by positively and continuously validating the identity of all users and applications that access them.
Before Zero Trust, organizations poured effort into maintaining strong perimeter defenses, investing in network firewalls, intrusion detection, network access control, and other tools to try to keep hackers out of their networks.
After two decades, the verdict on legacy network security is in: It hasn't delivered results. We know this because of the sheer volume of news reports about organizations falling victim to cyberattacks. Between phishing, social engineering, drive-by downloads, exposed endpoints, and zero-day vulnerabilities, motivated hackers have their pick of ways to access a modern enterprise.
This trend was highlighted by Okta's The State of Zero Trust Security 2023 report, which found that 61 percent of organizations have already implemented a Zero Trust initiative, with a further 35 percent planning to start one in the near future.
"Organizations around the world are taking tangible steps toward Zero Trust. Identity is critical for keeping these complex, global workforces collaborating securely and productively." — David Bradbury, Chief Security Officer, Okta
Zero Trust can help keep resources protected, even when hackers have free rein in your network.
A Simple Explanation of Zero Trust
Zero Trust means never implicitly trusting a user or device just because it's on the network. You must instead build trust through verification.
All devices and users trying to access a protected resource must be authenticated and authorized:
- Are they who they claim to be?
- Are they allowed to access the resource?
With Zero Trust in place, hackers trying to access a protected resource will either fail the authentication or the authorization check.
Simple concept, right? There’s a bit more to it.
Establishing the "Trust" in Zero Trust
For Zero Trust to work, you need appropriate tests to verify authentication and authorization.
You can verify authentication using a range of attributes about users, devices, and client software, chosen to differentiate authorized from unauthorized access. These attributes are checked at sign-on and continuously re-verified throughout the user's session.
- Example: Assess the user’s location. An executive logging in with the right password but from an unexpected country requires greater scrutiny.
From here, define access policies that use those attributes to determine who can access what, from where, and how. Authorization may vary based on the attributes.
- Example: A normal user might be restricted to accessing the web interface of an internal server, whereas an IT admin may be able to log in to that server via Secure Shell (SSH) protocol.
It’s important to note that access policies don’t only apply to users. All accesses to the protected resource, including any server-to-server traffic, should be covered by a policy.
Zero Trust provides the flexibility to choose suitable attributes and policy definitions, blocking hackers from accessing protected resources while maintaining existing application communications and giving authorized users a streamlined experience.
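To make this concrete, here is a minimal sketch of an attribute-based access decision. It is illustrative only; the attributes, roles, services, and policy values are hypothetical, and a real Zero Trust platform would evaluate far richer signals and re-check them continuously during the session.

```python
from dataclasses import dataclass

# Hypothetical attributes gathered at sign-on and re-checked during the session.
@dataclass
class AccessRequest:
    user: str
    role: str             # e.g., "employee" or "it_admin"
    device_managed: bool  # is the device enrolled and healthy?
    country: str          # where the request originates
    service: str          # e.g., "https" or "ssh"

# Illustrative policy: who may reach which service on the protected server,
# and under what conditions. Real policies would be far richer.
POLICY = {
    "https": {"roles": {"employee", "it_admin"}, "countries": {"US"}},
    "ssh":   {"roles": {"it_admin"},             "countries": {"US"}},
}

def authorize(req: AccessRequest) -> bool:
    """Return True only if every attribute satisfies the policy (default deny)."""
    rule = POLICY.get(req.service)
    if rule is None or not req.device_managed:
        return False
    return req.role in rule["roles"] and req.country in rule["countries"]

# A normal employee can reach the web interface...
print(authorize(AccessRequest("alice", "employee", True, "US", "https")))  # True
# ...but not SSH, and not from an unexpected country.
print(authorize(AccessRequest("alice", "employee", True, "US", "ssh")))    # False
print(authorize(AccessRequest("bob", "it_admin", True, "FR", "ssh")))      # False
```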
The Relationship Between Zero Trust and Identity
Identity is at the heart of Zero Trust, but building multi-factor authentication into the "front door" of an application (often a web portal) isn’t the same thing. As described in our blog post about Zero Trust and identity, there are many other open services on an application server. If you “assume breach” and want to protect against hackers who may already be in your network, every single network packet must go through authentication and authorization.
- Example: IT admins need to be able to log in to manage the server. Stealing an administrator credential gives attackers a way to reach the resource while bypassing the application's front door entirely.
- Example: Without credentials, attackers can gain access by sending a carefully crafted packet to a vulnerable service.
Identity is necessary, but insufficient, to achieve Zero Trust.
How Does Zero Trust Protect Resources and Data?
A resource protected by Zero Trust is defended in several ways. For one, all unauthorized packets are blocked, effectively hiding the protected resource on the internal network. This resource cloaking has many benefits, including:
- Preventing hackers from discovering and probing the protected resource.
- Making it difficult to uncover corporate practices and technology stacks.
- Blocking accesses from unauthorized clients—including malware, hacker toolkits, and ransomware.
Zero Trust policies aren't limited to access to a resource. They can also work to control access from a protected resource—for example, to prevent software on protected resources from making unauthorized lateral access to other machines in the network or on the internet. This secures the data on the resource by:
- Guarding against privilege abuse by authorized users.
- Blocking data leaks.
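The sketch below illustrates this default-deny idea in both directions: inbound rules that cloak the resource from anything not explicitly authorized, and outbound rules that block lateral movement and exfiltration. The client identities, hosts, and ports are made-up examples, not a real policy format.

```python
# Hypothetical connection records and rule sets; all names are illustrative only.
ALLOWED_INBOUND = {            # (client identity, destination port)
    ("vdi-pool", 3389),        # known-good VDI clients over RDP
    ("web-frontend", 5432),    # the application tier reaching its database
}
ALLOWED_OUTBOUND = {           # (destination host, port) the resource may reach
    ("patch-server.internal", 443),
}

def filter_inbound(client_id: str, dst_port: int) -> str:
    # Default deny: anything not explicitly authorized is dropped, so an
    # unauthorized scanner never even sees the resource ("cloaking").
    return "allow" if (client_id, dst_port) in ALLOWED_INBOUND else "drop"

def filter_outbound(dst_host: str, dst_port: int) -> str:
    # The same default-deny stance applies to traffic *from* the resource,
    # blocking lateral movement and direct exfiltration to the internet.
    return "allow" if (dst_host, dst_port) in ALLOWED_OUTBOUND else "drop"

print(filter_inbound("vdi-pool", 3389))                  # allow
print(filter_inbound("unknown-scanner", 22))             # drop
print(filter_outbound("patch-server.internal", 443))     # allow
print(filter_outbound("attacker-staging.example", 443))  # drop
```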
Zentera customers often restrict access to a resource to known-good virtual desktop infrastructure (VDI) clients, preventing VNC- and RDP-based attacks. They enforce copy/paste controls to block exfiltration through the VDI session, and use Zero Trust policies to prevent direct exfiltration to the internet or to lateral staging servers.
Another way Zero Trust can secure resources is by ensuring all traffic to and from the resource travels in an encrypted tunnel. The AppLink capability of Zentera's CoIP® Platform can encrypt local area network (LAN) traffic to prevent packet sniffing and spoofing. Although LAN encryption is not technically part of Zero Trust specifications such as NIST SP 800-207, OMB M-22-09 requires it for the U.S. Government.
What Is ZTNA?
Despite the name, Zero Trust Network Access (ZTNA) does not grant network access. In fact, ZTNA is most often used to provide users with access limited to specific resources.
Think of ZTNA as a replacement for a user virtual private network (VPN), but without granting access to the network; instead, it applies much more granular filters based on the identity of the user, device, and client software. ZTNA solutions may deploy as a gateway in front of protected resources or as an agent installed on the protected resource; Zentera offers both models.
Most ZTNA solutions only support remote access ("north-south") use cases, but this means the security controls are only effective when the user is remote. Because Zero Trust is about protecting assets even when the networks are compromised, it’s critical that the same levels of authentication be used inside the enterprise ("east-west"). Without east-west coverage, you can’t filter threats already inside the network and fulfill Zero Trust. Zentera's ZTNA solutions work across the compass, providing users with a consistent access method both north-south and east-west to ensure protection whether users are at home or in the office.
Zero Trust Requires a New Perimeter
One of the easiest mistakes to make when architecting Zero Trust is to focus only on "front-door" access. Policies you can't enforce are worthless.
The diagram below shows a ZTNA gateway that enforces access policies to the protected resource.
ZTNA secures remote access, but resources that are open in the network are still exposed to threats in the network.
In addition to the web "front door," there may be many other open paths:
- SSH and PowerShell for IT administrators
- Application performance monitoring
- Logging tools
If threats in the network can bypass the gateway and directly reach the application server or the databases it depends on, the gateway isn't very useful. To protect the resource, you need a new perimeter around the resource and its dependencies, so that all access to them goes through policy checks. Only then can you safely say you've achieved the goals of Zero Trust.
What Is Zero Trust Architecture?
The combination of access policy control with ZTNA and a new perimeter follows the guidance in NIST SP 800-207 to the letter.
Zero Trust protects resources or assets; everything behind the PEP is implicitly trusted.
This new perimeter is an "implicit trust zone." Servers and devices within the zone are trusted and can access each other without policy checks. Any access that crosses the boundary of the implicit trust zone is subject to policy, enforced at a policy enforcement point (PEP).
The new perimeter must force all accesses to go through the PEP. Think of this as an application perimeter, containing an application (e.g., web servers) and its dependencies (e.g., backend databases).
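As a rough illustration of how a PEP mediates access to an implicit trust zone, consider the sketch below. The zone membership and decision logic are simplified assumptions; an actual PEP would consume live authentication and authorization results from a policy engine.

```python
# Hypothetical sketch of a policy enforcement point (PEP) guarding an
# application perimeter; the zone members and decision logic are made up.
IMPLICIT_TRUST_ZONE = {"web-server-1", "db-server-1"}  # the app and its dependencies

def pep_allows(src: str, dst: str, authenticated: bool, authorized: bool) -> bool:
    """Decide whether a connection may proceed."""
    if src in IMPLICIT_TRUST_ZONE and dst in IMPLICIT_TRUST_ZONE:
        # Traffic wholly inside the implicit trust zone bypasses policy checks.
        return True
    if dst in IMPLICIT_TRUST_ZONE:
        # Anything crossing the perimeter must pass authentication and authorization.
        return authenticated and authorized
    return False  # this PEP only mediates access to its own zone

print(pep_allows("web-server-1", "db-server-1", False, False))    # True (inside zone)
print(pep_allows("it-admin-laptop", "web-server-1", True, True))  # True (via the PEP)
print(pep_allows("infected-host", "db-server-1", False, False))   # False
```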
The NIST spec provides a helpful lens through which to evaluate what is and what is not Zero Trust. It’s easy to see that the core components of Secure Access Service Edge (SASE)—such as secure web gateway (SWG)—are actually solving other problems, such as protecting users against malicious web content.
For more on NIST SP 800-207, check out our NIST SP800-207 explainer article.
How Do I Create an Application Perimeter for Zero Trust?
You can create an application perimeter using legacy technology: put the protected resource and its dependencies in a separate VLAN, or change the network topology to place them behind a firewall. But as NIST SP 800-207 points out, the management overhead and disruption to existing applications make these options less than ideal.
Two other approaches can help you create an application perimeter: micro-segmentation and the software-defined perimeter.
What Is Micro-Segmentation?
Micro-segmentation minimizes the attack surface in a data center by filtering unused ports between pairs of machines, leaving only "authorized" communications.
Most micro-segmentation solutions focus on data-center-scale challenges—deploy everywhere, make traffic between servers visible, and then filter unused ports to prevent “surprises.” The filtering can be implemented by:
- Programming the network (switch/router ACLs)
- Host firewalls (host-based micro-segmentation)
Programming the network depends on the data center’s topology, whereas the host-based model applies to north-south and east-west traffic and is more flexible.
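Conceptually, host-based micro-segmentation boils down to a per-host allowlist of approved flows with a default-deny fallback, as in the simplified sketch below. The addresses and flows are invented for illustration and do not reflect any particular vendor's rule format.

```python
# Hypothetical host-based micro-segmentation: each server keeps an allowlist of
# peer/port pairs observed and approved during the visibility phase; everything
# else is filtered. Addresses and flows are illustrative only.
ALLOWED_FLOWS = {
    ("10.0.1.10", "10.0.2.20", 5432),  # app server -> database
    ("10.0.3.30", "10.0.1.10", 443),   # monitoring -> app server
}

def host_firewall_verdict(src_ip: str, dst_ip: str, dst_port: int) -> str:
    # Unused ports between machine pairs are simply dropped, shrinking the
    # attack surface down to only the "authorized" communications.
    return "accept" if (src_ip, dst_ip, dst_port) in ALLOWED_FLOWS else "drop"

print(host_firewall_verdict("10.0.1.10", "10.0.2.20", 5432))  # accept
print(host_firewall_verdict("10.0.9.99", "10.0.2.20", 5432))  # drop
```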
Micro-segmentation tools can be used to create a Zero Trust perimeter, but they are not optimized for creating a network zone. As a result, many customers stop at the visibility stage and never turn on filtering. And because micro-segmentation targets data-center scale, vendors may struggle to scale down to individual resources or applications, from both operational and business perspectives. The difficulty of creating and maintaining a perimeter can therefore vary widely from vendor to vendor.
What Is an Application Perimeter?
An application perimeter is defined in software and can be reconfigured in software to control which services and users can access an application. Compared to micro-segmentation deployed across the data center, an application perimeter can be deployed to protect a single application server.
Zentera's CoIP Platform implements this concept with the Application Chamber, a software-defined perimeter that enables administrators to create perimeters around groups of servers, partition them into smaller groups, or merge them as needed. This makes it possible to create an application perimeter around a resource and its dependencies without disrupting the application.
What's the Best Way to Create an Application Perimeter for Zero Trust?
Even decades-old infrastructure supports the VLAN method of limiting the NIST SP 800-207 implicit trust zone, whereas micro-segmentation and software-defined perimeter approaches can run on any IP network. But not all approaches are equal, and you must consider the operational costs of VLANs, micro-segmentation, and application perimeters:
VLAN
Enterprises have mixed technology stacks, complete with different vendors and operational teams responsible for each site. Coordinating VLAN deployment and maintenance across the entire enterprise infrastructure can be challenging. Furthermore, cloud infrastructure introduces new concepts, such as virtual private clouds (VPCs) and virtual networks (VNETs). Building an application perimeter using existing infrastructure is possible, but can end up costing far more in planning, execution, and maintenance.
Micro-Segmentation
Depending on the implementation, micro-segmentation can also suffer from dependencies on the underlying infrastructure. Some vendors use ACLs to program rules into the infrastructure, and these tools may introduce dependencies on the network infrastructure stack or the API versions used in the environment.
Application Perimeter
Application perimeters are free of infrastructure dependencies, providing a consistent configuration and deployment model across the entire estate. Zentera's CoIP Platform achieves this in two ways:
- zLink software agent, which supports virtualized and bare metal servers anywhere.
- Micro-Segmentation Gatekeeper (MSG), which deploys in line with protected resources to create an application perimeter called a Zero Trust demilitarized zone (DMZ).
How Can I Use Zero Trust in the Cloud?
As cloud computing continues to grow in importance, it's critical to extend the Zero Trust paradigm to the cloud. Cloud flexibility and the need for end users and DevOps to access cloud applications from anywhere make it difficult to define a traditional perimeter. And because many cloud infrastructures connect back to on-premises environments in a hybrid deployment, misconfigurations can open paths for attacks to flow back into the enterprise.
In truth, cloud infrastructure is not that different from legacy network infrastructure. Cloud providers offer VPN services, but users can only connect to one VPN at a time, forcing administrators to make difficult routing choices.
VPCs are fast to deploy and easy to configure, but difficult to reconfigure. Partitioning a VPC usually requires server migration. An application perimeter can be continually modified to improve security, whereas a VPC is static.
An overlay Zero Trust architecture like the CoIP Platform can help solve these challenges. The CoIP Access Platform AppLink creates an Application Network that enables any-to-any cross-domain ZTNA. It allows for a single set of identity-based policies to define connectivity, eliminating the need to change them when a workload is migrated from on-prem to the cloud.
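The advantage of identity-based policy is easiest to see in a small sketch: because the rule names workload identities rather than IP addresses, it remains valid when a workload moves and only the identity-to-address mapping changes. The identities, addresses, and policy format below are hypothetical.

```python
# Hypothetical identity-based policy: rules reference workload identities, not IPs.
POLICY = {("order-frontend", "order-db", 5432)}  # (client identity, server identity, port)

# The only thing that changes when the database migrates to the cloud is the
# identity-to-address mapping; the policy itself is untouched.
INVENTORY_BEFORE = {"order-frontend": "10.0.1.10", "order-db": "10.0.2.20"}
INVENTORY_AFTER  = {"order-frontend": "10.0.1.10", "order-db": "172.16.5.7"}

def allowed(src_ip: str, dst_ip: str, port: int, inventory: dict) -> bool:
    """Resolve addresses back to identities, then evaluate the identity-based rule."""
    by_addr = {addr: name for name, addr in inventory.items()}
    return (by_addr.get(src_ip), by_addr.get(dst_ip), port) in POLICY

print(allowed("10.0.1.10", "10.0.2.20", 5432, INVENTORY_BEFORE))  # True, on-prem
print(allowed("10.0.1.10", "172.16.5.7", 5432, INVENTORY_AFTER))  # True, after migration
```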
How Does Zero Trust Apply to OT and Critical Infrastructure?
Although the underlying Zero Trust principles are no different for operational technology (OT) and critical infrastructure compared to cloud and data centers, the implementation can be significantly different. OT workloads cannot accept the software agent needed to create an application perimeter. Instead, you need to define a Zero Trust DMZ to enforce access policies.
Manufacturing and critical infrastructure environments are highly sensitive to downtime, so it's important to be able to insert the Zero Trust DMZ while minimizing the impact on existing applications. Zentera helps customers implement a Zero Trust DMZ with the Micro-Segmentation Gatekeeper—an appliance that deploys inline to protect resources and enforce Zero Trust access policies.
Are Governments Paying Attention to Zero Trust?
Governments worldwide have wholeheartedly embraced Zero Trust, as evidenced by policies enacted in recent years.
In the U.S., the Biden administration's Executive Order 14028 set the federal move to Zero Trust in motion, and OMB M-22-09 mandates that all federal agencies adopt Zero Trust, following the architecture described in NIST SP 800-207, by the end of fiscal year 2024. It also lays the groundwork for the government to push Zero Trust requirements into private industry, starting with its contractors.
Does SASE Play a Role in Zero Trust?
SASE, or Secure Access Service Edge, proposes a new, cloud-delivered deployment model for security services that were previously deployed as on-prem components, along with some that are new for the cloud. It bundles a basket of capabilities, including SWG, cloud access security broker (CASB), and software-defined WAN (SD-WAN).
SASE often includes ZTNA as well. SASE-delivered ZTNA may play a role in remote access for a Zero Trust project, but it cannot deliver on-prem ZTNA, so on-prem users and remote users will have different experiences. It also does not have a concept of an application perimeter, which would be needed for a NIST SP 800-207 Zero Trust Architecture.
For more information on SASE, check out our explainer article.
Simplify Your Zero Trust Implementation
Zero Trust can be daunting, especially if you conceptualize it as a switch to throw. The variables, dependencies, and unknowns make it impossible to cut over to Zero Trust all at once.
Zentera customers find the most success by viewing Zero Trust as a journey. With this mindset, it’s easier to progress at your own pace.
Transitioning to Zero Trust one resource and use case at a time yields the best results.
Breaking the transition into small steps also lets you reflect after each migration: stop and think about how to tune the process for your organization's needs, and build momentum for the transformation from smaller successes. It also gives you time to train your team and users on how to get the most out of Zero Trust.
What could your Zero Trust transformation look like? Consider the most important factors by asking the following questions:
- What assets will I need to protect (VM, bare metal, cloud, OT) now and in the future?
- What kinds of network connections will I need? Web apps only, or full support for TCP/UDP/ICMP?
- What policies will I need to implement?
- Can my needs be met by a single platform, or will I need a suite?
- How can I staff the build and run phases of the transformation?