Splunk is designed to increase visibility across an enterprise by bringing all logs into a single pane of glass, accessible to anyone with a need to know.
We have seen cases where a company invests significant funds into building a Splunk solution to increase visibility. The project goes well, produces significant results, and leaves stakeholders ecstatic. As time goes on, however, they start missing security events that should have been detected.
Environments are never static; they are constantly changing, and blind spots develop wherever monitoring falls behind that change.
In this blog series, we will dive into how finding blind spots in your data and security can help you with time, coverage, scheduling, and failure detection. In this first post, we will uncover why time is valuable to maintaining visibility.
Any enterprise network is only as secure as its weakest link, so your cybersecurity visibility must extend to all types of assets and all sorts of security issues. Maintaining that visibility requires attention to three time-related aspects:
1. Track assets through their entire lifecycle. This starts with logging purchases and ends with logging final disposition.
2. Detect violations. This requires a deep understanding of the technology involved: knowing what is expected, such as when something stops reporting or when undocumented traffic appears. It also means making sure detections are working as designed (for example, that scheduled searches are running and not being skipped); see the example searches after this list.
3. Design for failure. This often means examining why searches fail or succeed, watching for ingestion failures and event delays, and building failure detection mechanisms into the design.
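As a minimal sketch of those checks (the time ranges and thresholds here are illustrative assumptions, not values from this post), the first search surfaces scheduled searches that the Splunk scheduler skipped over the last 24 hours:

```
index=_internal sourcetype=scheduler status=skipped earliest=-24h
| stats count AS skipped_count BY app, savedsearch_name, reason
| sort - skipped_count
```

A companion search can flag hosts that have gone quiet, a common sign that something stopped reporting:

```
| tstats latest(_time) AS last_seen WHERE index=* BY host
| eval hours_silent = round((now() - last_seen) / 3600, 1)
| where hours_silent > 24
| sort - hours_silent
```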
This is where event delays come into play. An event's delay is the gap between when it occurred (_time) and when it was indexed (_indextime). By alerting and dashboarding on delays as small as 3-5 seconds, one can catch ingestion failures and lag before they become blind spots.
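A hedged sketch of such a delay check (the 15-minute window and 5-second threshold are illustrative):

```
index=* earliest=-15m
| eval delay_seconds = _indextime - _time
| stats avg(delay_seconds) AS avg_delay max(delay_seconds) AS max_delay BY index, sourcetype
| where max_delay > 5
```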
Example Dashboard
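The original screenshot is not reproduced here, but a Simple XML dashboard wrapping the delay search above might look like the following sketch (the label, panel title, and layout are assumptions):

```
<dashboard>
  <label>Event Delay Overview</label>
  <row>
    <panel>
      <title>Indexing delay by index and sourcetype (last 15 minutes)</title>
      <table>
        <search>
          <!-- Same illustrative delay search as above -->
          <query>index=* | eval delay_seconds = _indextime - _time | stats avg(delay_seconds) AS avg_delay max(delay_seconds) AS max_delay BY index, sourcetype | sort - max_delay</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```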
Example alert creation
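The alert can be built in Splunk Web (Save As > Alert) or defined directly in savedsearches.conf. The stanza below is a hedged sketch; the stanza name, schedule, threshold, and recipient are all illustrative:

```
# Illustrative stanza; adjust the search, schedule, and threshold to your environment
[Event Delay Alert (illustrative)]
search = index=* | eval delay_seconds = _indextime - _time | stats max(delay_seconds) AS max_delay BY index, sourcetype | where max_delay > 5
dispatch.earliest_time = -15m
dispatch.latest_time = now
cron_schedule = */15 * * * *
enableSched = 1
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0
actions = email
action.email.to = soc@example.com
```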
Here’s a good starting place if issues are identified:
https://docs.splunk.com/Documentation/Splunk/latest/Data/Resolvedataqualityissues