Zero Trust is a network security model developed by John Kindervag at Forrester Research in 2010. It represents a radical shift away from traditional access models that assume everything inside the perimeter can be trusted.

A zero trust model provides secure connections to internal applications and data, verifying identity continuously and assessing permissions based on context. This approach prevents lateral movement by attackers and limits the damage of any breaches that do occur.

The Evolution of Network Security

The threat landscape for data and online services is constantly changing, but the methods used to secure those assets are maturing as well. As the value of data and online services increases, strong defenses are needed between those resources and anyone trying to steal, infiltrate, or otherwise damage them. That’s why network security has become an essential part of every modern business.

As time goes on, attackers are becoming more sophisticated and more determined to find ways to compromise the networks they target. The good news is that security professionals are working just as hard to find and implement new technologies to counter those threats.

One such security innovation that began emerging in the late 80s was anti-virus software. These programs helped stop malicious code from infecting computers and stealing, corrupting, or otherwise damaging data. The bad news is that these tools are only effective against known code, and as attacks grow more sophisticated, there will always be new types of viruses to combat.

Zero Trust was developed in response to the growing need for better, more comprehensive network security solutions. It’s based on the idea that every request for data and applications should be verified as if it were coming from an untrusted Internet connection. The goal is to prevent lateral movement by removing an attacker’s ability to reach other parts of a network from the point of initial entry. The zero trust model uses microsegmentation, firewall-as-a-service, SD-WAN, and secure web gateways to verify users and their devices as they enter the network.
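To make the idea concrete, here is a minimal sketch of what a deny-by-default zero trust access decision might look like. The attribute names, roles, and risk threshold are hypothetical; in practice these checks are delegated to an identity provider, an endpoint management platform, and a policy engine rather than application code.

```python
# A minimal, illustrative sketch of a deny-by-default zero trust access decision.
# The attribute names, roles, and 0.7 risk threshold are hypothetical examples.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_passed: bool          # result of strong authentication
    device_compliant: bool    # endpoint posture: patched, encrypted, managed
    resource: str             # application or data set being requested
    risk_score: float         # contextual signal (location, time, behaviour), 0.0-1.0

# Hypothetical least-privilege mapping of roles to the resources they may reach.
ROLE_PERMISSIONS = {
    "finance-analyst": {"erp-reports"},
    "developer": {"source-repo", "ci-logs"},
}

def evaluate(request: AccessRequest, roles: set) -> bool:
    """Verify every request as if it arrived from an untrusted network."""
    if not (request.mfa_passed and request.device_compliant):
        return False                      # identity and device must both check out
    if request.risk_score > 0.7:
        return False                      # contextual signals can veto access
    allowed = set()
    for role in roles:
        allowed |= ROLE_PERMISSIONS.get(role, set())
    return request.resource in allowed    # default deny: only explicit grants pass

# Example: a developer on a compliant device requesting the source repository.
request = AccessRequest("alice", True, True, "source-repo", 0.1)
print(evaluate(request, {"developer"}))   # True; any single failed check denies access
```

The point of the sketch is the default-deny shape of the decision: access is granted only when identity, device posture, context, and an explicit permission all line up, regardless of where the request originates.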

The Early Years of Network Security

As network technologies advanced, cybersecurity methods had to evolve quickly. The 1970s saw the creation of ARPANET, which would eventually become part of the Internet and open a world of connectivity to universities, military installations, and commercial businesses. But even these early networks were vulnerable to cyber-attacks. In one of the first known attacks, a student created a self-replicating program that could connect to computers on a network, exploit vulnerabilities to copy itself, and then move from computer to computer, a precursor to today’s ransomware and other automated threats.

Firewalls started looking deeper into the data that flowed through them, detecting anomalies and stopping threats before they could cause harm. However, these systems were not always enough to keep up with attackers who had plenty of money and time to develop their exploits.

The traditional security model was based on castle-and-moat security, with layers of protection around internal systems and assets to stop attackers before they gained access. The problem is that these defenses focused on the perimeter: an attacker who was clever or fast enough to get past them faced little resistance inside. Zero-trust network access and the principle of least privilege instead require every user, device, and application to be fully authenticated, authorized, and continuously verified. Rich telemetry and analytics are used to identify and address anomalies, combined with microsegmentation to restrict lateral movement, as sketched below.
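One way to picture microsegmentation is as a deny-by-default rule set between workload segments. The segment names, ports, and rules in this sketch are hypothetical; real segmentation is enforced by the network fabric or host-based agents rather than by application code.

```python
# A minimal sketch of microsegmentation: traffic between workload segments is
# denied unless an explicit rule allows it, which limits lateral movement.
# Segment names, ports, and rules below are hypothetical examples.

ALLOWED_FLOWS = {
    ("web-tier", "app-tier"): {443},    # web servers may call the app tier over TLS
    ("app-tier", "db-tier"): {5432},    # only the app tier may reach the database
}

def flow_permitted(src_segment: str, dst_segment: str, port: int) -> bool:
    """Deny traffic between segments unless a rule explicitly allows it."""
    return port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

# A compromised web server cannot pivot straight to the database tier:
print(flow_permitted("web-tier", "db-tier", 5432))   # False: lateral movement blocked
print(flow_permitted("app-tier", "db-tier", 5432))   # True: explicitly allowed
```

Because every segment-to-segment path must be granted explicitly, a breach of one workload does not automatically expose the rest of the network.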

The Late 80s and Early 90s

During the late 80s, network use expanded rapidly as universities, militaries, and governments connected to the Internet. This expansion created the need for basic network security protections. Unfortunately, attackers quickly realized they could exploit the new vulnerabilities and attack these networks. In one well-known case, a German hacker broke into 400 US military computers and attempted to sell their secrets to the KGB. This threat of cyber espionage and state-sponsored attacks prompted the US government to create new resources for responding to such incidents.

It also became clear that tracking attackers was often impractical, because they could move around the network using various methods (proxies, temporary anonymous dial-up accounts, wireless connections, and more). They often used multiple devices and might be located in another jurisdiction. They could also delete logs to cover their tracks. This made identifying, investigating, and prosecuting such attacks difficult or impossible.

In response to these threats, network security experts developed various tools and systems to protect networks, including firewalls and anti-virus programs.

The Future

For decades, the goal of network security has been to protect a safe, trusted internal network from dangerous, unknown external actors. That strategy is flawed for many reasons, but it is the overarching goal that gave rise to the cybersecurity industry and the myriad solutions available today.

Organizations didn’t take long to recognize the need for basic protections as they connected their existing networks to the outside world – a world that neither knew nor trusted them. They developed architectures that created layers of protection, such as walls, ramparts, bulwarks, parapets, trenches, and moats, to keep attackers from reaching the most sensitive parts of their internal networks.

As the Internet expanded and hackers became ever more sophisticated in their attacks and the tools used to execute them, these old approaches failed. Attackers could exploit firewall flaws, steal credentials and breach systems through phishing and malware, and use lateral movement to reach the most critical parts of a company’s network.

New technologies such as Zero Trust and microsegmentation are now required to stop these threats. Zero Trust is a model in which every user, system, and endpoint is considered untrusted until it can be authenticated, authorized, and continuously validated for security posture and configuration. Unlike a traditional perimeter, it is a much more holistic approach to network protection, and it should be supplemented with ML-powered Network Detection and Response (NDR) solutions.
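A rough sketch of that continuous validation idea might look like the following, assuming hypothetical posture attributes and an arbitrary anomaly threshold; real deployments pull these signals from endpoint management and NDR platforms rather than hard-coding them.

```python
# A sketch of continuous validation: a session stays trusted only while device
# posture and behavioural signals remain healthy. The posture attributes and
# the 0.8 anomaly threshold are illustrative assumptions, not product values.

def posture_ok(device: dict) -> bool:
    # Placeholder for a real endpoint posture query (patch level, EDR status, etc.).
    return device.get("patched", False) and device.get("edr_running", False)

def session_still_trusted(device: dict, anomaly_score: float) -> bool:
    """Re-run the trust decision instead of trusting the session indefinitely."""
    return posture_ok(device) and anomaly_score < 0.8

# Example: the session is trusted at first, then revoked once the endpoint
# stops reporting a healthy security posture.
device = {"patched": True, "edr_running": True}
print(session_still_trusted(device, 0.05))   # True
device["edr_running"] = False
print(session_still_trusted(device, 0.05))   # False, so force re-authentication
```

The key difference from perimeter thinking is that trust is never granted once and kept: the same decision is re-evaluated throughout the session, and access is revoked as soon as the posture or detection signals degrade.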