Security has changed a lot over the last few decades. Here, we look at how practices have evolved, where current practice is heading, and how to get the best from modern security services. This part focuses on the technology side of security operations, and a forthcoming blog will cover the business and cultural implications.
Once upon a time, life was simple. Businesses had an inside, initially all stand-alone machines and then maybe a simple network to allow file-shares and printing; and everything else was outside. Transfers between the two zones were by floppy disc, possibly via an anti-virus 'sheep-dip' machine for the particularly security conscious. Those times did not last.
Over the last thirty years or so, business boundaries have become more and more blurred. The Jericho Forum used the term 'deperimeterisation' in the early 2000s, pushing a model based on encryption and mutual authentication instead of a defined boundary, but even today, the perimeter plays an important role, even though it has rather more gateways through it. More about these later.
Security should be seen as a business enabler: by putting controls in place to lower risk, actions can be taken that would previously have been considered too risky. Hence, as the internet developed, businesses allowed employees to reach out, through firewalls and content filters, and allowed customers to reach in, through reverse proxies, to on-line stores hosted in mostly isolated, segregated network zones. Employees, initially mostly travelling sales staff, could connect back to the corporate network through VPN links, and early smartphones could provide limited visibility of email and appointment calendars. Then came deeper penetration into the core networks, such as giving managed service teams access to the devices they looked after on your behalf, and the move of core systems outwards, either into 'the cloud' as an extension of the virtualised datacentre or into somebody else's systems as a SaaS offering.

Device-level security, such as having a corporate-signed certificate on each endpoint, provides additional access control over the basic username and password. For sensitive system access, MFA provides yet more defence in depth, although the style of the additional factor keeps changing. Hardware tokens, SMS codes, and authentication apps have all been fashionable in their time. Biometrics always seems to hover in the background without ever becoming widely used, possibly because of its one overriding flaw: if a biometric identifier is compromised, there is no way to update it. This kind of authentication is also commonly seen as an invasion of privacy.
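Of the second factors mentioned above, authenticator apps are worth a look inside: most implement the TOTP standard (RFC 6238), deriving a short-lived code from a shared secret and the current time. The sketch below uses only the Python standard library and is illustrative rather than production-grade.

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian counter, then dynamic truncation (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    # Time-based variant (RFC 6238): the counter is the number of
    # 30-second steps since the Unix epoch
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)
```

Because both sides derive the code independently from the shared secret and the clock, no code ever travels over the network ahead of the login itself; the RFC 6238 test vectors (e.g. secret `12345678901234567890`, time 59, 8 digits gives `94287082`) can be used to check an implementation.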
Combining many SaaS platforms together - Azure AD for authentication, Office365 for document storage and handling, ServiceNow for incident handling, multiple on-line meeting providers, and so on - means that a modern employee no longer even needs to start up the corporate VPN link to be fully productive 99% of the time. The perimeter is now completely transparent, maybe even irrelevant, to them. Security that is effective but transparent to the legitimate user is good security, as they will not be tempted to circumvent the obstacles they perceive it brings. The last 18 months of working remotely have proved the value of these services and, by some estimates, have accelerated a working-from-home culture that might otherwise have taken another five years to arrive.
Although business boundaries are still largely in place, albeit now with many more gateways through them in both directions, the model of extending the distrust of the outside into the business side of the network, and only trusting what you explicitly verify, is seeing a resurgence. These days, it is known as the 'Zero Trust' model. However, on its own, pulling defences back to just the destination systems, as deperimeterisation proposed, ignores the fundamental security principle of defence in depth. It is far safer to block simple attacks early on, for example with a firewall at the datacentre perimeter or an access control list on a cloud-based load balancer, than to rely on having perfect last-line defences on every internal machine. Zero trust provides a strong final layer of protection, but it should still be one layer of many.
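The layering argument can be made concrete with a small sketch: a cheap network-level allow-list (standing in for a perimeter firewall rule or load-balancer ACL) rejects obvious junk before the per-request identity check - the zero-trust layer - ever runs. The addresses, token store and function names here are all illustrative.

```python
from ipaddress import ip_address, ip_network

# Hypothetical perimeter allow-list and identity store
PERIMETER_ALLOW = [ip_network("10.0.0.0/8"), ip_network("192.168.1.0/24")]
VALID_TOKENS = {"token-abc"}  # stand-in for real per-request credential checks


def perimeter_filter(request: dict) -> bool:
    # First layer: cheap network-level filtering, as a firewall or
    # cloud load-balancer ACL would do at the datacentre edge
    return any(ip_address(request["src_ip"]) in net for net in PERIMETER_ALLOW)


def zero_trust_check(request: dict) -> bool:
    # Final layer: verify the caller's identity on every request,
    # regardless of where on the network it originated
    return request.get("token") in VALID_TOKENS


def handle(request: dict) -> str:
    # Each layer can reject early, so later layers see pre-filtered traffic
    for layer in (perimeter_filter, zero_trust_check):
        if not layer(request):
            return "denied"
    return "allowed"
```

A request from outside the allow-list is dropped cheaply at the first layer, while an insider with a stolen machine but no valid credential still fails the last one - which is the point of keeping both.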
So, if security is now effective and unintrusive for the end user, what is going to happen next? The goal of security is to protect assets, and an organisation’s most valuable asset is its information. Protecting IT systems is just a way of protecting the data on those systems. Hence, asking where security is going next is the same as asking where the data is going next. Digital Rights Management tools have been around for a long time, but the management side was historically difficult. With the improvement in DRM systems, tying a layer of protection directly around sensitive documents is an increasing trend. Again, it can be transparent to legitimate users, or require only a minor tweak to their working practices, while providing protection should the document go astray. Such services now keep businesses in control by stopping accidental or malicious extraction of files and by reporting on who has accessed or edited documents.
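As a rough illustration of that model, the sketch below mediates every attempt to open a protected document through a central policy and records an audit trail, which is the essence of what commercial DRM services do on top of encryption. The policy store, user names and function are all hypothetical.

```python
# Hypothetical policy store: policy label -> users allowed to open documents
POLICIES = {"confidential": {"alice", "bob"}}
AUDIT_LOG = []  # every access attempt is recorded, allowed or not


def open_protected(user: str, doc: dict) -> str:
    # Record who tried to open what, before any decision is made
    AUDIT_LOG.append(f"{user} requested {doc['name']}")
    allowed = POLICIES.get(doc["policy"], set())
    if user not in allowed:
        return "access denied"
    # A real DRM service would decrypt the payload here; the protection
    # travels with the file, so a copy that goes astray stays unreadable
    return doc["content"]
```

Because the check happens at open time rather than at the network boundary, the document remains protected wherever it ends up, and the audit trail answers the "who accessed or edited this?" question directly.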
Operational data is unlikely ever to be recorded in a DRM-protected document, but it is also important and valuable, and the gradual evolution of SCADA and other Operational Technology systems from proprietary serial links and isolated controllers into ‘Industrial IoT’, using standardised TCP/IP protocols and integrated into corporate networks, has greatly increased the risk these systems face. The increasing volume of data, and the need for quick responses, is pushing processing out to the network edge rather than bringing all the raw data back to central systems. These new edge systems are likely to sit outside any existing security mechanisms and need their own protection, appropriate to their criticality.
Behind the scenes, the security operations centre must look after all of the above, and two areas of improvement help here. It used to be that ‘best-of-breed’ was a good policy for security controls, but when every control type has its own reporting and management dashboard, security analysts can easily miss one warning while they check something else, and integrating multiple events into related incidents was a difficult, manual process. These days, a single-vendor suite with an integrated dashboard helps provide a common and correlated viewpoint – a single pane instead of multiple pains. Controls at different points in the network can complement each other to provide greater protection than the sum of the individual elements while still being easy to manage; Microsoft’s integration of multiple controls into the Azure Security Centre is a particularly good example. The other area is automation of both defensive and offensive systems: moving a virus-infected endpoint onto a quarantined network segment until it is sanitised, detecting and blocking unusual behaviour on a system, or running continuous penetration-test-style activities against key systems – the next level up from vulnerability scanning. Network attackers have their own automated scanning and exploitation tools, so any defensive system now needs to respond in real time.
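A minimal sketch of the kind of automated playbook described above: a malware detection triggers immediate network isolation, while anything the playbook does not recognise is escalated to a human analyst. The VLAN number, alert shape and function name are hypothetical, standing in for a real NAC or switch-management API.

```python
QUARANTINE_VLAN = 999  # hypothetical isolated network segment


def respond(alert: dict) -> str:
    # Automated playbook: known-bad events get an immediate, machine-speed
    # containment action; unknown events go to a human for triage
    if alert.get("type") == "malware_detected":
        # A real deployment would call the NAC or switch API here to
        # move the endpoint's port onto the quarantine segment
        return f"moved {alert['endpoint']} to VLAN {QUARANTINE_VLAN}"
    return "escalate to analyst"
```

The value is in the response time: containment happens in milliseconds rather than waiting for an analyst to notice a dashboard alert, which matters when the attacker's side is equally automated.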
In summary, traditional solutions comprising individual components driven by manual processes are no longer seen as best, or even good, practice. Security controls should now be linked under integrated management, underpinned by user and network behavioural analysis, informed by threat intelligence, continuously and automatically scanned for exploitable weaknesses, and backed by automated response to recognised threat scenarios.
Here at Net Reply, we have specialist teams providing expertise in security and future network evolution. If you have concerns or questions about security architecture, security operations, operational security, edge networks or security service automation, please contact Reply for a professional consultation or check us out on LinkedIn.