In computer technology, we talk about security breaches and how to prevent them, but honestly, we have different kinds of breaches and different reasons to want to prevent them. Sure, we hear stats like "60 percent of small companies that suffer a cyber-attack are out of business within six months," but what is it about those attacks that cripples and destroys companies? And how can we create better cloud security policies, and implement them, so we don't suffer attacks?
I personally see three main reasons we want to manage our traffic: protecting infrastructure, legal compliance, and avoiding public embarrassment. Let's run through these and what we need to be able to do in each case.
Security to Protect Infrastructure and Performance
Like graffiti or vandalism, many breaches have no goal other than to break stuff. These can come in the form of denial-of-service attacks (DoS, or DDoS if distributed), SQL injection (see xkcd's "Little Bobby Tables": Robert'); DROP TABLE Students;--), port scanning, request flooding, and all sorts of other ways people try to slow your network, delete your data, or shut down your systems.
For the most part, these are anonymous attacks. I don't mean the hacker organization Anonymous; I mean that the requests have no identity associated with them — random scripts looking for random vulnerabilities to see if they can wreak random havoc.
The cloud security tools we use to deflect these kinds of attacks are woven throughout the ecosystem. Some basics include:
- Locking down your client-side code to limit browser script inspection and back-door discovery
- Using a CDN to distribute requests and limit the impact of a DDoS attack
- Using a WAF to stop cross-site scripting or SQL injection before it makes it to your application
- Locking down your ports and making every request to your service a signed request
- Logging everything and monitoring those logs for attacks
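To make the "signed request" idea concrete, here is a minimal sketch in Python of HMAC-based request signing. Everything here is illustrative: the header names, the `SECRET_KEY`, and the five-minute replay window are assumptions, not a reference to any particular service's scheme (in practice the secret would come from a secrets manager, not a constant).

```python
import hashlib
import hmac
import time

# Hypothetical shared secret; in a real system this comes from a secrets manager.
SECRET_KEY = b"example-secret"

def sign_request(method: str, path: str, body: bytes, secret: bytes = SECRET_KEY) -> dict:
    """Build headers for a signed request: a timestamp plus an HMAC over the payload."""
    timestamp = str(int(time.time()))
    message = "\n".join([method, path, timestamp]).encode() + b"\n" + body
    signature = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

def verify_request(method: str, path: str, body: bytes, headers: dict,
                   secret: bytes = SECRET_KEY, max_age: int = 300) -> bool:
    """Reject requests that are stale or whose signature doesn't match."""
    try:
        timestamp = headers["X-Timestamp"]
        if abs(time.time() - int(timestamp)) > max_age:
            return False  # replay protection: timestamp too far from now
        message = "\n".join([method, path, timestamp]).encode() + b"\n" + body
        expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
        # Constant-time comparison avoids leaking signature bytes via timing
        return hmac.compare_digest(expected, headers["X-Signature"])
    except (KeyError, ValueError):
        return False
```

Because the signature covers the method, path, timestamp, and body, an anonymous script that tampers with any of them — or replays an old request — gets rejected before it reaches your application logic.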
Security for Legal Compliance and Intellectual Property Protection
Attacks used to worry only the DevOps team, whose concern was keeping systems running. But with a growing body of privacy and security laws like HIPAA and GDPR – where breaches carry fines and disclosure requirements – the legal team has gotten involved.
Now we get into a situation where someone may be impersonating someone else and stealing credentials or hijacking a session. Or you may have an exposure of personal data, like files stored on a public server that may – or may not – have been accessed by an outside party. The potential for a breach is now something we have to disclose to our user base, even if we don’t know whether or not someone got into systems.
So our security processes start to become more policy-driven and less technically responsive. We aren’t just trying to keep out bad actors, we’re also trying to see non-technical breaches – such as someone putting personally identifiable information in a non-secure data store – and we’re trying to identify who might have been affected in order to limit the scope of the notification.
Expanding on the things we’ve already done to protect our infrastructure, we add a few more guidelines to our security plan:
- Writing clear security policies linked to specific legal rules and responses
- Setting clear internal security policies for who gets to post what and where they can post it
- Creating clear "chain of evidence" audit trails, rather than just logging
- Securing every microservice and data endpoint with identity to enrich that chain of evidence
- Using single-use tokens whenever possible to avoid hijacked sessions
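As a sketch of the single-use-token idea from the last item: issue a random token, then invalidate it the moment it is redeemed, so a hijacked or replayed token is useless. The class name, in-memory store, and 60-second TTL below are all illustrative assumptions; a real deployment would back this with a shared store such as Redis.

```python
import secrets
import time

class SingleUseTokenStore:
    """Minimal sketch: a token can be redeemed exactly once, within a TTL.

    An in-memory dict stands in for whatever shared store a real
    deployment would use.
    """

    def __init__(self, ttl_seconds: int = 60):
        self.ttl = ttl_seconds
        self._tokens = {}  # token -> (user_id, expiry time)

    def issue(self, user_id: str) -> str:
        token = secrets.token_urlsafe(32)  # cryptographically random token
        self._tokens[token] = (user_id, time.time() + self.ttl)
        return token

    def redeem(self, token: str):
        """Return the user_id if the token is valid, else None. Either way,
        the token is gone afterward."""
        record = self._tokens.pop(token, None)  # pop => single use
        if record is None:
            return None
        user_id, expiry = record
        if time.time() > expiry:
            return None  # expired tokens are rejected
        return user_id
```

The key design choice is that redemption removes the token unconditionally: even a failed or expired redemption burns it, so an attacker who captures a token in flight can't use it after the legitimate request lands — and every redemption leaves a single, unambiguous entry for the audit trail.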
Security from Public Embarrassment
When Facebook got called out for allowing Cambridge Analytica to harvest tens of millions of Facebook users' data, Facebook made it very clear that this was not a breach:
The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked. (March 16, 2018)
To which most people say, "I don't care, it was still wrong." There is a balance between ease of use for your consumers and protecting your company's reputation. If you make it too easy for people to collect personal information – even if you can justify that ease of disclosure within your terms of service – you set your consumers up to be targeted, and the eventual backlash pits you against your customers over an environment you created.
So while this isn’t an infrastructure issue, and it’s not a legal issue, it’s a big business credibility issue. Believe it or not, internet culture is still evolving at a breakneck pace. You need to be prepared to change your privacy, security, and even identity policies as the world changes around you. To do that, you need to consider some of the same tools used for infrastructure and legal purposes — only now you really need to look at your analytics.
- Examine user patterns, including delegated identity such as IoT devices
- Examine device and application usage for abnormal or excessive access
- Allow users to easily revoke permissions for applications and devices
- Allow users to audit their own usage patterns
- Make sure your security is declarative, not programmatic, meaning you can easily modify your policies as business needs require
As long as we are looking at security policy management holistically, and, well, actually looking at both the policies and the reasons for the policies, we shouldn't find ourselves in trouble. But understanding the reasons behind our DevOps security policies is a huge step that many organizations have yet to take.