The downfall of firewalls
Leveraging Crowd Power to recreate trust in the Internet’s Zero Trust Architecture
Castle, bastion, and walls: the CTO’s sleeping pills
During the ’80s, most IT assets were sheltered in-house, in basements or micro-data centers, which provided a form of safety and allowed access to be filtered by a central, all-seeing machine. The “Bastion architecture”, with all assets in the same place and a big, solid perimeter around them, seemed built to last: the dawn of the Firewall age had come, and it would endure for three decades.
Firewalls became a must-have, protecting both the internal subnets and the DMZ from external aggression. Those were golden days for CTOs: the machines were expensive, but they provided a warm, and quite legitimate, feeling of safety. Firewalls quickly became CTOs’ cuddle bears, and as long as they had one, nobody could really punish them in case of a security breach.
And … Clouds brought rain
Alas, data center outsourcing, and then the Cloud revolution, took place. All assets had to be moved from the comfort of the basement to these ruthless, remote, insecure places. Of course, the trade-off was better connectivity, physical safety, stability, and scalability. With local servers sustaining LAN activities and online servers providing a decent customer-facing service, CTOs had to deal with a two-headed monster that was about to eat their cuddle bears alive. The bastion strategy became less than optimal, and at the same time the dangers no longer came so much from exposed services being knocked out, as in the previous decade, but more and more from the web, mail, and other application layers. Increasingly, the exposed surfaces and the internal ones had to be interconnected, inviting external breaches into the LAN. The cuddle bears were long gone already, retired to safe (air-gapped) places, leaving the CTOs to their nightmares.
One line of defense to rule them all… or not anymore
The age of BYOD, IoT, containers, microservices, SaaS, VPNs, Wi-Fi, smartphone apps, and Cloud drives finished the burial ceremony of the “Bastion strategy”. To be agile and adapt quickly to an ever-changing world, one had to embrace all those changes while still securing company assets. Quite a headache, if you ask me.
Since there was no longer a Gandalf blocking the only bridge in the Mines of Moria, it became quite obvious that each and every IT asset had to defend itself and could no longer rely on some form of centralized defense system.
Right, but how do you defend so many different assets, spread across so many different places, from different generations, and with very different computing capabilities? What would be a universal way to give a line of defense to your connected weighing machine, cameras, AS/400, smartphones, web applications (compiled or not), browsers, servers, and every other IT asset?
Well, the cure was in the poison… HTTP.
Intelligence may be King but reputation is Queen and ladies rule the house
HTTP is the one commonality across all IT assets, whether they are 30 years old or bleeding edge. It is the only common protocol those assets can leverage, and the unexpected bonus is that it requires close to no resources to use. Making an API call (which is an HTTP call in the end) is possible for any device, program, language, or platform in the world, at an extremely marginal CPU/RAM cost.
But… how do you secure a connection between peers with just one HTTP call?
The solution has a name: Reputation.
What if any of your IT assets could safely determine if the incoming peer is trustworthy or not, with just one universal, very light, HTTP request? This is exactly what IP reputation is about.
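As a minimal sketch of that idea, here is what such a one-request check could look like. The endpoint, header, and response format below are hypothetical illustrations, not CrowdSec’s actual API, and the fail-open policy is one possible design choice:

```python
import json
import urllib.request

# Hypothetical reputation endpoint; ".invalid" guarantees it never resolves.
REPUTATION_API = "https://reputation.invalid/v1/check"

def is_trustworthy(ip: str, api_key: str, timeout: float = 0.5) -> bool:
    """Ask the reputation service about one peer with a single HTTP GET.

    Fails open: if the service is unreachable or slow, the peer is
    treated as trustworthy rather than blocking legitimate traffic.
    """
    req = urllib.request.Request(
        f"{REPUTATION_API}?ip={ip}",
        headers={"X-Api-Key": api_key},  # assumed auth scheme
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            verdict = json.load(resp)
    except OSError:
        return True  # network error or timeout: fail open
    # Assumed response shape: {"score": 0-100}, higher meaning worse.
    return verdict.get("score", 0) < 50
```

The point is the cost profile: one short HTTP round trip and a JSON parse, cheap enough for a camera or a 30-year-old AS/400 front end to afford on every new connection.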
Creating the largest and most accurate ever IP reputation system, one watcher at a time…
Previous attempts at IP reputation were born decades ago. Their collection networks were smaller, privately held, and ran at a single company’s expense. They worked decently well for email, as Spamhaus demonstrated, but not for much more than that.
For such an IP reputation system to be accurate, three dimensions need to be accounted for: Scale, Time, and Compatibility.
Scale is the most obvious one. The larger the network, the more accurate it becomes. At the same time, it also becomes more real-time, because more collection points mean fresher data. For the sake of comparison, running 10,000 hosts as a honeypot network would cost you around $4M per year: count $30 per month for each host with a public IP, times 12, times 10,000, then add a few hundred thousand dollars for DevOps, and you have a global estimate of the costs.
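The back-of-the-envelope math above can be checked directly; the DevOps line is an assumption standing in for “a few hundred thousand dollars”:

```python
# Rough yearly cost of a 10,000-host honeypot network, per the figures above.
hosts = 10_000
cost_per_host_per_month = 30           # USD, VPS with a public IP
infra = hosts * cost_per_host_per_month * 12   # $3.6M/year of hosting
devops = 400_000                       # assumption: "a few hundred thousand"
total = infra + devops

print(f"infra: ${infra:,}, total: ${total:,}")  # infra: $3,600,000, total: $4,000,000
```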
On top of that, such honeypot systems would be limited, because not every cloud or hosting environment can be automated. Only open-source software can be run by a large crowd, in varied environments: over IPv4 or IPv6 addresses, on DSL, fiber, or 5G connections, in every country in the world. 10,000 hosts would be far more varied in this context than in an industrial DevOps one. Now think about 50K hosts. Or 500K. Who could beat such variety, precision, and scale?
The second dimension is Time. An IP that behaved aggressively yesterday was probably a server recently hacked by someone. Before that, the IP was fine, and once the legitimate owner cleans the machine, it will be clean again. The time frame during which this IP is dangerous is very precise, and blocking it for too long would result in unwanted bans, potentially preventing legitimate people from accessing your service. The larger your reputation network grows, the less likely this is to happen, because your IP reputation database will hold an IP for only a matter of hours, discarding it if no sign of aggressive behavior is visible anymore, or updating its entry otherwise. No IP reputation system should keep an IP without new sightings for more than 72 hours.
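The expire-or-refresh behavior described above can be sketched in a few lines. This is an illustrative toy (the class and 72-hour constant are mine, modeling the rule stated in the text), not CrowdSec’s actual storage logic:

```python
from datetime import datetime, timedelta

TTL = timedelta(hours=72)  # no fresh sighting within 72h => drop the IP

class ReputationDB:
    """Toy reputation store: entries live only while sightings stay fresh."""

    def __init__(self) -> None:
        self.last_seen: dict[str, datetime] = {}

    def report(self, ip: str, when: datetime) -> None:
        # A new sighting refreshes the entry rather than extending a permanent ban.
        self.last_seen[ip] = when

    def is_flagged(self, ip: str, now: datetime) -> bool:
        seen = self.last_seen.get(ip)
        if seen is None:
            return False
        if now - seen > TTL:
            # Stale: the legitimate owner has likely cleaned the machine.
            del self.last_seen[ip]
            return False
        return True
```

The design choice to encode: danger is attached to a time window, not to the IP itself, so forgetting is as important as remembering.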
The third dimension is compatibility.
A system could be dealing with tens or hundreds of thousands of queries about new IPs per day. Burdening machines with a huge list of IPs to block is not an efficient option. Instead, only the offending IPs that match the protected technology stack should be distributed. If your system is Linux, Apache, and MySQL based, you don’t need to be fed IPs targeting IoT devices or Windows-based ones. Matching the threat profile to the IT asset’s own profile is a good way of sending only the IPs that are relevant to the protected system.
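The matching idea reduces to a simple set intersection between what an IP attacks and what an asset runs. The entries and stack labels below are made-up examples of the principle, not a real feed format:

```python
# Each blocklist entry carries the profile it targets (illustrative data).
BLOCKLIST = [
    {"ip": "198.51.100.1", "targets": {"linux", "apache", "mysql"}},
    {"ip": "198.51.100.2", "targets": {"iot"}},
    {"ip": "198.51.100.3", "targets": {"windows", "iis"}},
]

def relevant_ips(asset_stack: set[str]) -> list[str]:
    """Ship only offending IPs whose threat profile overlaps the asset's stack."""
    return [e["ip"] for e in BLOCKLIST if e["targets"] & asset_stack]

print(relevant_ips({"linux", "apache", "mysql"}))  # ['198.51.100.1']
```

A LAMP server thus downloads one entry instead of three, and the saving compounds as the global blocklist grows.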
CrowdSec: A free, open-source solution doing just that
This is exactly what CrowdSec is about. Creating a global IP reputation network of an unprecedented magnitude, which could be queried by any IT asset, at any point in time.
Creating a global network requires a few ingredients. 1/ An open-source, hence trustable, product. 2/ A free offer, to set no barrier to adoption. 3/ Value beyond just benefiting from the IP reputation system, to motivate people to use it.
CrowdSec is just that: a free, open-source tool you can use at work or at home, in a small or large business, that instantly provides better security. CrowdSec achieves this through two engines: Behavior and Reputation. It brings a very powerful, versatile, and flexible behavior analysis engine. If someone fails their mail authentication for the fifth time, maybe they just don’t have the password and are trying to guess it. If an IP is knocking on every closed port of your system, it could be scanning you. If you get tons of 40x and 50x error codes, maybe your visitor is fingerprinting your website to find a vulnerability, and so on. And when your machine identifies a threatening IP, it reports it to the global database for curation. This is how the Reputation system grows. By sharing your sightings, you also get the benefit of the global IP reputation system. All of it for free.
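The three scenarios above boil down to counting events per IP and flagging on a threshold. Here is a toy version of that pattern; the class name and the two thresholds are my own illustrative choices, not CrowdSec’s actual scenarios:

```python
from collections import Counter

# Illustrative thresholds mirroring the scenarios in the text.
AUTH_FAILURES_LIMIT = 5    # fifth failed login => likely guessing
HTTP_ERRORS_LIMIT = 10     # burst of 40x/50x => likely fingerprinting

class BehaviorEngine:
    """Toy behavior engine: per-IP counters that flag on a threshold."""

    def __init__(self) -> None:
        self.auth_failures: Counter[str] = Counter()
        self.http_errors: Counter[str] = Counter()
        self.flagged: set[str] = set()

    def on_auth_failure(self, ip: str) -> None:
        self.auth_failures[ip] += 1
        if self.auth_failures[ip] >= AUTH_FAILURES_LIMIT:
            self.flagged.add(ip)  # candidate for reporting upstream

    def on_http_status(self, ip: str, status: int) -> None:
        if 400 <= status < 600:
            self.http_errors[ip] += 1
            if self.http_errors[ip] >= HTTP_ERRORS_LIMIT:
                self.flagged.add(ip)
```

Every IP that lands in `flagged` is exactly what the article describes being reported to the global database for curation.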
Other IT assets that only query the IP reputation database can do so for a very small fee, since they are not participating in its maintenance. But worry not: with a dollar, you can make thousands of queries; we want this system to be accessible to anyone.
Cuddle bears are going to throw quite a hell of a party.