There is a persistent myth in cybersecurity that meaningful breaches are the work of exceptional adversaries. Somewhere out there is a mysterious cadre of highly skilled operators with custom tools or the perfect zero-day exploit. Something cinematic.  It is a comforting belief because it suggests that failure, when it happens, is the result of something rare and fundamentally unstoppable.  But in practice, most incidents aren’t like that. They’re stubbornly ordinary. 

At TMG, we’ve had the (unfortunate) opportunity to reconstruct plenty of these events after the fact. And most of the time, the striking thing is not how clever the attacker was, but how familiar the attack was and how many opportunities to stop it were missed.

How Attackers Get In: Credentials, Phishing, and Unpatched Systems

Initial access is rarely dramatic.  It’s a reused password tied to an account no one has reviewed in years. Or it’s a vendor credential that was meant to be temporary but never revoked. It’s a user who receives a well-timed email at the end of a long day and makes a poor decision.

Phishing still works, not because users are careless, but because they are human. Context, timing, fatigue… Attackers understand these and design around them. Plus, the campaigns have gotten better. A well-crafted phishing email in 2025 does not look like the Nigerian prince letters people joke about. It looks like a DocuSign notification or a Teams message from IT. The design goal is to catch someone in the three seconds where they weren't paying full attention.

In other cases, the pathway is even less interesting. Consider the known vulnerability that has been sitting unpatched, not because no one knows about it, but because patching it would require downtime on a production system and no one has been willing to schedule it. A remote access service exposed to the internet with default or weak credentials. A system that was deployed quickly to solve a problem and never fully integrated into the organization's security model, because by the time anyone thought to ask, it was already "in production" and therefore untouchable.

None of this requires much ingenuity on the part of any attacker. It requires patience. And a target list, which — thanks to automated scanning tools — is trivially easy to build.
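The defensive flip side is that you can build the same target list before an attacker does. The sketch below is a minimal illustration, not a scanning tool: it checks whether a short list of placeholder hosts answers on a few common remote-access ports, using nothing but Python's standard library. Anything it flags is a starting point for the question nobody asked at deployment time: why is this reachable at all?

    # Minimal sketch: see your own exposure the way an automated scanner would.
    # The host list and port set are placeholders; substitute your own inventory.
    import socket

    HOSTS = ["10.0.0.5", "10.0.0.12"]            # hypothetical examples
    PORTS = {22: "ssh", 3389: "rdp", 5900: "vnc"}

    def is_open(host, port, timeout=1.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host in HOSTS:
        exposed = [name for port, name in PORTS.items() if is_open(host, port)]
        if exposed:
            print(f"{host}: answering on {', '.join(exposed)}")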

Alert Fatigue and the Gap Between Security Tools and Security Operations

These days, most organizations are not without defenses. They have endpoint protection. They have logging. They have monitoring platforms that, in theory, should surface unusual activity. That’s not the problem. The problem is what happens to that signal once it appears.

Anyone who has worked an alert queue in a real environment knows the math. A midsize organization can generate thousands of alerts per day. Some are clearly benign. Others are ambiguous. A few are genuinely concerning, but they arrive alongside dozens of others that look similar, often when the on-call analyst is also triaging a ticket backlog. Over time, patterns emerge, and teams learn what can be safely ignored.
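To put rough, purely illustrative numbers on it: 3,000 alerts a day works out to roughly one every 30 seconds around the clock. Even if only 5% deserve a human look, that is 150 investigations, and at 15 minutes apiece it is more than 37 analyst-hours of triage, every single day, before anyone touches the ticket backlog.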

This is how important signals can become background noise. It’s not really a question of negligence but rather of volume.

There is also a structural gap between having a tool and operationalizing it. A SIEM that is not tuned to the environment can generate volume without clarity. Every failed login, every port scan, every automated process triggering the same rule, all land in the same queue with the same severity. An EDR platform that is deployed but not actively managed produces alerts without context. In both cases, the technology is present. The investment has been made. But the process around it (the tuning, the escalation paths, the staffing to actually investigate what surfaces) is thin and poorly coordinated.
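To make "tuning" less abstract, here is a deliberately small sketch of the idea in Python. The field names and the known-benign list are hypothetical, and a real SIEM expresses this as rule logic rather than a script, but the underlying decisions are the same: which repeats can be collapsed, and which sources have been reviewed and documented as benign.

    # Minimal sketch of tuning: collapse repeats of the same (rule, source) pair
    # within a window, and drop sources that have been reviewed as known benign.
    # Field names are hypothetical simplifications of real alert data.
    from collections import defaultdict
    from datetime import datetime, timedelta

    SUPPRESS_WINDOW = timedelta(minutes=15)
    KNOWN_BENIGN = {("port_scan", "10.0.0.9")}   # e.g. the internal vulnerability scanner

    def triage(alerts):
        """Yield only the alerts worth a human look."""
        last_seen = defaultdict(lambda: datetime.min)
        for alert in sorted(alerts, key=lambda a: a["time"]):
            key = (alert["rule"], alert["source"])
            if key in KNOWN_BENIGN:
                continue                          # documented, known-benign source
            if alert["time"] - last_seen[key] < SUPPRESS_WINDOW:
                continue                          # duplicate within the window
            last_seen[key] = alert
            yield alert

The code is trivial. The hard part is that someone in the organization has to own those two decisions and revisit them as the environment changes.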

This is one of the more uncomfortable truths in cybersecurity: the gap between “deployed” and “effective” is where most risk lives. Organizations that have spent significantly on security tooling are not necessarily more secure than those that have spent less. They are more secure only if someone is watching, interpreting, and acting on what those tools produce.

Lateral Movement, Dwell Time, and Why Flat Networks Are Still the Norm

Once access is established, the progression is methodical.

The attacker begins by understanding the environment. What systems are reachable? What identities exist? Where are the boundaries, and how rigid are they?

In many organizations, those boundaries are softer than expected. Networks are flatter than they appear on architecture diagrams — because the segmentation project that was scoped two years ago got deprioritized when something more urgent came along, and then again the following quarter, and now it sits on a roadmap that no one references. Permissions have accumulated over time, granting broader access than originally intended. Service accounts exist with elevated privileges that no one has revisited since the system they were built for was last configured.

Dwell time is a hugely important variable, and frankly it doesn’t get nearly enough attention. Industry reports consistently put median dwell time (meaning, the gap between initial compromise and detection) at weeks or months, not hours. And it’s not because attackers are invisible. It is because their activity, when viewed step by step, looks normal. Maybe it’s a query against Active Directory that resembles routine administrative activity. Or it’s just another login from a valid account, followed by access to a file share that the account technically has permission to use. 
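Shortening dwell time usually comes down to noticing "first evers" rather than outright violations. As a hedged illustration, the sketch below assumes a simplified access log of (account, share, timestamp) tuples, builds a baseline of which accounts normally touch which shares, and flags combinations that have never appeared before. Real audit data is messier, but the shape of the check is the point.

    # Minimal sketch: flag first-ever account-to-share combinations.
    # The (account, share, timestamp) format is a hypothetical simplification
    # of whatever your file access auditing actually emits.
    from collections import defaultdict

    def build_baseline(history):
        """Map each account to the set of shares it has touched before."""
        seen = defaultdict(set)
        for account, share, _ts in history:
            seen[account].add(share)
        return seen

    def novel_access(baseline, recent):
        """Yield events where an account touches a share for the first time."""
        for account, share, ts in recent:
            if share not in baseline.get(account, set()):
                yield account, share, ts

    # Made-up example: a backup service account touching the finance share for
    # the first time is exactly the "technically permitted" step worth a look.
    baseline = build_baseline([
        ("jsmith", "\\\\fs01\\eng", "2025-01-02"),
        ("svc-backup", "\\\\fs01\\backups", "2025-01-03"),
    ])
    for hit in novel_access(baseline, [("svc-backup", "\\\\fs01\\finance", "2025-01-20")]):
        print("first-time access:", hit)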

An attacker does not need full administrative control of every system. They need enough access to move from one point to another, gradually expanding their reach. Shared drives with open permissions, poorly segmented networks, and over-permissioned service accounts all make this easier.  And all three are endemic in organizations that have grown faster than their infrastructure governance.

From a purely technical perspective, nothing is “broken.” Every access event is authenticated, authorized, and logged.  Which is precisely the problem.

What Incident Response Exposes About Your Security Posture

By the time an incident becomes visible, whether through disruption, data loss, or external notification, it often feels sudden and shocking, as if something has just happened.

But in reality, something has been happening, often for a long time.

These breaches are the result of accumulated conditions. What organizations discover in the aftermath is rarely a single point of failure. It is a pattern.

They discover systems they did not know existed, or at least did not think about: the test server someone spun up three years ago, the SaaS integration that IT never approved but the sales team has been using since Q2. They find accounts that have more access than anyone realized, because access was granted incrementally and never audited as a whole. They uncover processes that work in theory but not in practice: an incident response plan that names roles no actual person currently holds, or a communication tree that has not been updated since the last reorg.

Perhaps most uncomfortable: they realize that their environment behaved exactly as it was configured to behave. Nothing failed in a dramatic sense. The controls that were in place operated as designed. The sum total of every configuration, every policy, and every exception granted under pressure simply did not add up to the security posture anyone thought they had.

There is also a human layer that becomes impossible to ignore during response. Questions of ownership and authority surface quickly, and they surface under pressure. Who can make the call to shut down a revenue-generating system? Who communicates with customers, and what are they authorized to say? Who speaks for the organization to regulators, to the press, to the board?

If those answers are unclear, and they frequently are, the response slows down at exactly the moment when clarity matters most. Decisions that should take minutes take hours. Hours become days. And the window for containment continues to narrow.

Breach Prevention Starts With Operational Discipline

What makes these breaches frustrating is their familiarity.

The contributing factors are well understood. The controls required to address them are not exotic. Credential hygiene. Network segmentation. Alert tuning. Permission audits. Incident response planning that gets tested, not just documented. None of this is new. None of it is glamorous. It does not lend itself to conference keynotes or breathless vendor pitches.
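To show how unglamorous this work really is, here is a minimal sketch of a recurring permission audit. It assumes a hypothetical CSV export of directory accounts with columns name, is_service_account, groups, and last_logon_days, plus placeholder group names; the specifics will differ in any real environment. It flags two of the conditions described above: stale accounts and over-privileged service accounts.

    # Minimal sketch of a recurring permission audit over an account export.
    # Column names and group names are hypothetical placeholders.
    import csv

    STALE_DAYS = 90
    PRIVILEGED_GROUPS = {"Domain Admins", "Server Admins"}   # placeholder names

    def audit(path):
        """Return findings for stale accounts and privileged service accounts."""
        findings = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                groups = set(row["groups"].split(";"))
                if int(row["last_logon_days"]) > STALE_DAYS:
                    findings.append(f"stale account: {row['name']}")
                if row["is_service_account"] == "true" and groups & PRIVILEGED_GROUPS:
                    findings.append(f"privileged service account: {row['name']}")
        return findings

Running something like this once is easy; running it on a schedule and acting on what it finds is the actual work.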

It requires discipline, consistency, and a willingness to align stated policies with actual behavior,  which means confronting the distance between the security posture an organization describes and the one it actually operates.

So the question worth asking is not “could this happen to us?” You already know the answer: yes. It could happen to anyone.

The better question is: if someone got into your environment tomorrow, how far could they get before anyone noticed? And when your team sat down to respond, would they know who is supposed to do what?

If you are confident in those answers, you are ahead of most. If you are not, the work that matters is not a product purchase. It is the unglamorous operational work that, in hindsight, would have stopped most breaches. That's where the outcomes are decided.
