Article

AI Assistants in Security Operations Centres

LEVERAGING AI TO IMPROVE SECURITY GOVERNANCE AND OPERATIONS

The integration of AI-driven assistants into Security Operations Centres (SOCs) and broader security functions offers a pathway to enhance efficiency, bridge resource gaps, and build more resilient and proactive defence mechanisms.

Introducing AI Assistants to get tangible results in SOCs

Reply’s experience shows tangible results in real-world deployments of AI Assistants in security operations. In one case study focusing on strengthening a SOC Tier 1 team’s capabilities, the implementation of an AI assistant yielded remarkable improvements: the time required for initial incident triage was reduced by as much as 50%, with a similar 50% reduction in overall response time. This acceleration did not come at the expense of quality; on the contrary, the quality of analysis was significantly improved due to superior event correlation and data enrichment provided by the assistant.

Strengthen existing capabilities

The journey towards adopting an AI assistant in a security context begins with a strategic choice between two principal paths: strengthening existing capabilities or covering procedural gaps. The first strategy involves reinforcing the processes already in place. For instance, a company with an established incident response workflow can leverage AI to augment and improve it. This is achieved by introducing sophisticated automation and data enrichment playbooks designed for specific use cases, which enhances the quality and efficiency of security management. A key objective is to improve operational efficiency by focusing on areas where teams are overburdened, thereby reducing their manual workload. This approach promotes a cycle of continuous improvement, where existing workflows are consistently strengthened with augmented intelligence.

Covering procedural gaps

The second path is chosen when an organisation identifies a significant gap in its security processes, perhaps following a formal gap analysis. Instead of optimising something that already functions, the focus shifts to creating entirely new workflows to manage previously unaddressed security scenarios. This is particularly relevant when faced with new regulatory requirements, such as those imposed by NIS2 or DORA, which may mandate entirely new processes like rapid post-incident notification. AI assistants can be instrumental in bridging these gaps, especially in areas suffering from a lack of resources or specialised expertise. This strategy is not merely about filling holes but about enhancing adaptability and scalability by introducing new capabilities for proactive defence.

How to choose?

The decision between these strategies is influenced by a range of critical factors unique to each organisation. These drivers include the specifics of data handling, such as whether a SIEM is hosted on-premise or in the cloud, and the overarching compliance and regulatory landscape governing data use. Within the SOC context, considerations extend to the desired approach for incident analysis—whether to apply a uniform methodology to all incidents or to focus intently on a select group of high-stakes cases. The existing level of automation, for example through the implementation of a Security Orchestration, Automation and Response (SOAR) platform, is another crucial element. Finally, the company’s overall sentiment towards artificial intelligence and its integration into critical security processes plays a defining role in shaping the strategic direction.

Out-of-the-Box solutions

Once a strategy is defined, the next consideration is the technology itself. Here, organisations face another choice: implementing a ready-made, out-of-the-box solution or embarking on the development of a custom-made assistant. Off-the-shelf tools are ready to use and offer different levels of automation and capability, from basic assistant functions to the more advanced role of a virtual analyst. The primary distinction lies in their operational paradigm. Some function as passive assistants, providing on-demand support for an investigation when prompted by a human analyst—for example, by supplying more information about a specific user or IP address. Others operate as more autonomous virtual analysts, taking an active role by conducting a preliminary, first-level analysis of an incident, thereby preparing a well-contextualised case for a Tier 2 analyst to act upon swiftly and precisely.
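The distinction between the two paradigms can be made concrete with a minimal sketch. Everything below is illustrative: the `Incident` structure, the `lookup_indicator` stand-in, and both class names are assumptions, not the API of any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    incident_id: str
    indicators: list            # e.g. IPs or usernames seen in the alert
    context: dict = field(default_factory=dict)

def lookup_indicator(indicator: str) -> dict:
    # Stand-in for a real threat-intel or directory lookup (hypothetical).
    return {"indicator": indicator, "prior_sightings": 0, "risk": "unknown"}

class PassiveAssistant:
    """Answers only when a human analyst asks a question."""
    def ask(self, indicator: str) -> dict:
        return lookup_indicator(indicator)

class VirtualAnalyst:
    """Proactively runs a first-level analysis before a human sees the case."""
    def triage(self, incident: Incident) -> Incident:
        incident.context = {i: lookup_indicator(i) for i in incident.indicators}
        incident.context["summary"] = (
            f"{len(incident.indicators)} indicators enriched; ready for Tier 2 review"
        )
        return incident
```

The passive variant waits to be queried; the virtual analyst enriches the case up front, which is what allows a Tier 2 analyst to act on a pre-contextualised incident.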

“Build your own Assistant”

Conversely, an organisation may opt to pursue a custom-made approach: this path offers unparalleled flexibility, as the architecture can be tailored to meet the exact requirements of specific, often unique, company use cases. This bespoke nature allows for a higher degree of control and precision. However, this flexibility comes at the cost of higher initial setup investment. The process involves integrating a suitable AI model with various in-house tools, such as SOAR platforms and other automation technologies, to construct a proprietary security assistant. An effective architecture for such a system involves creating a comprehensive knowledge base fed by a continuous stream of data from security tools like SIEMs and XDRs, as well as ticketing platforms and analyst input. This knowledge base, which understands the specific context of the organisation’s reality, is then leveraged by the AI model through orchestrators to provide active and impactful support to security operations.
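The architecture described above—a knowledge base continuously fed by security tools, queried by the model through an orchestrator—can be sketched as follows. This is a deliberately simplified illustration under stated assumptions: the class names, the `toy_model` stand-in, and the event shapes are all hypothetical.

```python
from collections import defaultdict

class KnowledgeBase:
    """Aggregates events from security tools into per-entity context."""
    def __init__(self):
        self._events = defaultdict(list)

    def ingest(self, source: str, entity: str, event: dict):
        # Sources might be a SIEM, an XDR, or a ticketing platform.
        self._events[entity].append({"source": source, **event})

    def context_for(self, entity: str) -> list:
        return self._events[entity]

class Orchestrator:
    """Routes an analyst question through the knowledge base to the model."""
    def __init__(self, kb: KnowledgeBase, model):
        self.kb = kb
        self.model = model

    def answer(self, entity: str, question: str) -> str:
        context = self.kb.context_for(entity)
        return self.model(question, context)

# Trivial stand-in; a real deployment would call an LLM with the context.
def toy_model(question: str, context: list) -> str:
    return f"{question} -> {len(context)} related events found"
```

The design choice worth noting is the separation of concerns: tools only write into the knowledge base, and the model only reads from it via the orchestrator, which keeps the organisation-specific context in one governable place.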

For the SOC Team

AI is invaluable for reducing the burden of repetitive, low-value activities. A classic application is the use of chatbots to provide instant enrichment and contextualisation during an investigation. An analyst can simply ask the assistant for more information about an indicator of compromise, such as whether a particular user has engaged in similar activities before, saving precious time. Furthermore, AI can automate the entire first phase of an investigation, running pre-configured plays to gather preliminary data for certain types of incidents. This provides the human analyst with a rich, contextualised alarm from the outset, enabling a faster response and a higher quality of analysis. Reporting and ticket generation, often tedious but necessary tasks, can also be delegated to an AI assistant that understands the security context well enough to draft specific and accurate reports.
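Automating the first phase of an investigation with pre-configured plays might look like the sketch below. The play functions, the alert fields, and the `PLAYS` registry are invented for illustration; a real SOAR playbook would call actual enrichment services.

```python
def geo_lookup(alert: dict) -> dict:
    # Hypothetical enrichment step: resolve the source IP's location.
    return {"geo": "lookup of " + alert.get("src_ip", "n/a")}

def user_history(alert: dict) -> dict:
    # Hypothetical enrichment step: check for prior activity by this user.
    return {"user_history": f"no prior incidents for {alert.get('user', 'n/a')}"}

# Pre-configured plays per incident type.
PLAYS = {
    "suspicious_login": [geo_lookup, user_history],
    "malware": [user_history],
}

def run_first_phase(alert: dict) -> dict:
    """Run the configured plays for the alert type and attach the results."""
    enriched = dict(alert)
    for play in PLAYS.get(alert.get("type"), []):
        enriched.update(play(alert))
    # Draft the ticket so the analyst starts from a contextualised alarm.
    enriched["ticket_draft"] = (
        f"[{alert.get('type')}] auto-triaged with "
        f"{len(PLAYS.get(alert.get('type'), []))} enrichment plays"
    )
    return enriched
```

The human analyst then receives `enriched` rather than the raw alert, which is the mechanism behind the faster response and higher-quality analysis the text describes.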

For security managers and CISOs

For those operating at a more strategic level, AI assistants serve as a powerful tool for governance and oversight. By integrating with ticketing systems, SIEMs, and other security platforms, the AI can develop a holistic view of the security posture, providing managers with on-demand access to correlated data without the need to consult multiple teams. This centralised intelligence is crucial for compliance activities. For example, an AI assistant can be queried to assess an organisation’s adherence to the requirements of the NIS2 directive, identifying key areas for improvement. The assistant can parse the complex regulatory text, extract key requirements, and cross-reference them against the company’s internal policies and documentation to flag conflicts or gaps.
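The cross-referencing step can be reduced to a simple illustration. The keyword-overlap matching, the requirement identifiers, and the policy topics below are assumptions made for the sketch; an actual assistant would use an LLM to extract and compare requirements semantically.

```python
def find_policy_gaps(requirements: dict, policies: dict) -> list:
    """Flag requirements with no matching internal policy coverage.

    requirements: {req_id: set of topic keywords extracted from the directive}
    policies:     {policy_name: set of topics that policy covers}
    """
    covered = set().union(*policies.values()) if policies else set()
    # A requirement is a gap if none of its topics appear in any policy.
    return [req_id for req_id, topics in requirements.items()
            if not topics & covered]

# Illustrative inputs (hypothetical article numbers and topics).
reqs = {
    "NIS2-Art23": {"incident", "notification"},
    "NIS2-Art21": {"risk", "management"},
}
pols = {"IR-Policy": {"incident", "response"}}
```

Here the incident-notification requirement overlaps the incident-response policy, while the risk-management requirement matches nothing and would be flagged as a gap for the CISO to address.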

For System and IT Administrators

In configuration management, an AI assistant that understands the organisation’s technology stack can support administrators by identifying misconfigurations that may have led to a security incident. One of the most compelling use cases is in vulnerability and patch management. When a new CVE is announced, the AI can parse the advisory, identify all affected assets within the corporate environment, assess their exposure and business criticality to determine priority, identify the system owner, and even initiate contact to prompt remediation. This transforms a complex, multi-step manual process into a highly automated and efficient workflow. Similarly, for policy management, the AI can automatically identify violations and streamline communication with the relevant stakeholders.
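The CVE workflow described above—parse the advisory, match affected assets, prioritise by exposure and criticality, identify the owner—can be sketched as below. The `Asset` fields and the prioritisation heuristic (internet-facing first, then by business criticality) are assumptions for illustration, not a standard scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    software: str
    version: str
    internet_facing: bool
    criticality: int    # 1 (low) .. 5 (business critical)
    owner: str

def triage_cve(advisory: dict, inventory: list) -> list:
    """Match an advisory against the asset inventory and rank remediation.

    advisory: {"cve": ..., "software": ..., "affected_versions": [...]}
    Returns an ordered remediation plan naming the owner to contact.
    """
    affected = [a for a in inventory
                if a.software == advisory["software"]
                and a.version in advisory["affected_versions"]]
    # Exposed assets first, then by descending business criticality.
    affected.sort(key=lambda a: (not a.internet_facing, -a.criticality))
    return [{"asset": a.name, "notify": a.owner, "cve": advisory["cve"]}
            for a in affected]
```

The output of `triage_cve` is the point at which the assistant could initiate contact with each owner, turning the multi-step manual process into a single automated pass over the inventory.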

Leveraging Reply’s experience with AI Assistants in SOCs

Reply’s extensive experience shows that successful adoption of AI assistants hinges on selecting a strategy that is carefully aligned with the organisation’s specific needs and maturity level. As these intelligent systems become increasingly central to enterprise defences, it is important to remember that the AI systems themselves must be robustly protected. Reply’s security experts recommend maintaining a human-in-the-loop approach, combining manual validation for critical actions with technical guardrails that strictly limit the assistant’s field of action.

Cyber Security Operation Center

Communication Valley

Communication Valley Reply is the Reply Group company that specialises in the provision of managed security services. Through its Cyber Security Operation Center, which is ISO27001 certified and operates on a 24/7 basis, 365 days a year, the company provides business continuity and fraud prevention to both mid-sized and large organisations. The services offered by Communication Valley Reply include system and security monitoring, remote SIEM management and optimisation, log management, security device management and network device management. In addition, Communication Valley Reply supplies highly specialised banking fraud detection services that allow online fraud incidents to be identified and the necessary countermeasures to be taken, as well as IT operations packages that allow entire systems to be outsourced on a 24/7 basis. Communication Valley Reply works closely with major research bodies, international universities and the main technology partners in the sector, with the aim of setting the European benchmark for managed security services.