Deception Technology 101 – from a Deception Engineer

Table of Contents
  1. Preface
  2. What Cyber Deception IS and IS NOT
    1. Cyber Deception IS
    2. Cyber Deception is NOT (Honeypots)
  3. Benefits of Cyber Deception
    1. Deception can be incredibly lightweight
    2. Deception is focused on heuristics not all tooling can catch
    3. Deception is adaptable
  4. Is Deception right for your organization?

Preface

If you have a keen eye, you've probably noticed that there's been a considerable amount of time between posts here – a lot of time. This wasn't by design; life happens. Between posts, however, I've been able to immerse myself in a fairly new discipline within the security space. If you haven't guessed by now, it's the deception technology space, and I hope this is the first of many posts on the topic.

Merriam-Webster defines the word deception as:

  • the act of causing someone to accept as true or valid what is false or invalid; the act of deceiving

Focusing on “the act of causing someone to accept as true or valid” is incredibly important here, and we’ll get into that later, because I believe there are some serious misunderstandings when it comes to information security deception, its purposes, and its applications.

There are already some great resources out there regarding deception, but, since you’ve managed to stumble onto my site here, and this is my site with my rules, let me clue you into how I go about things.

What Cyber Deception IS and IS NOT

Let’s again consider the definition of deception as “the act of causing someone to accept as true or valid”. To deceive someone, or something, is to purposely make them accept a truth that you determine. If we were to tie that understanding back to information security, there could be two sides of this deceptive coin:

  1. Attackers trying to deceive someone into doing something
    • example: creating a convincing looking website to phish credentials
  2. Defenders proactively attempting to deceive attackers to detect threats

So, with that brief introduction in mind, my opinion is that true cyber deception lies in the latter.

Cyber Deception IS

As engineers in the cyber deception space, our goal is not only to deceive would-be attackers but also to detect threats as quickly as possible. This means we are not only engineering technical solutions to serve as decoys, canaries, and the like – we are also engineering environments and, most importantly, we are storytellers.

Technology is rather easy these days. Deploying almost anything has a fairly low barrier to entry, whether it's a cloud compute resource or a configuration on an endpoint. The same could be said for picking up a paintbrush or learning to type words on a computer. The individual actions on their own are simple; it takes time, however, to tell an entire story or paint an entire portrait – and that's where true cyber deception comes into play.

It may also help to understand what deception is not.

Cyber Deception is NOT (Honeypots)

I personally believe that the definition of honeypots tends to be very ambiguous. CrowdStrike describes them as “a cybersecurity mechanism that uses a manufactured attack target to lure cybercriminals away from legitimate targets”. Kaspersky describes them as a “baiting trap” for attackers. Honeypots have also proven to be invaluable when it comes to discovering what attackers are up to – there’s no refuting that!

The problem comes into focus when there is a miscategorization of the applications of honeypots. Can honeypots be used to detect attackers? Absolutely, but in the same way a simple HTTP request to the right endpoint can trigger an alert. They’re a singular piece of technology in a greater puzzle of information security.
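To make that concrete, here is a minimal sketch of the "simple HTTP request triggers an alert" idea. Everything in it – the decoy hostname, port, and alert fields – is invented for illustration; the point is only that any request to a decoy endpoint is high-signal, because no legitimate client should ever talk to it:

```python
# Minimal honeypot-style HTTP listener sketch. Any request to the
# decoy is treated as suspicious, since nothing legitimate should
# ever reach it. All names below are illustrative, not prescriptive.
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime, timezone

def build_alert(client_ip: str, method: str, path: str) -> dict:
    """Turn a single observed request into an alert record."""
    return {
        "source": "decoy-web-01",   # hypothetical decoy hostname
        "time": datetime.now(timezone.utc).isoformat(),
        "client_ip": client_ip,
        "request": f"{method} {path}",
        "severity": "high",         # any touch of a decoy is high signal
    }

class DecoyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        alert = build_alert(self.client_address[0], "GET", self.path)
        print(alert)                # in practice: forward to your SIEM
        self.send_response(200)     # respond blandly so nothing looks odd
        self.end_headers()
        self.wfile.write(b"OK")

# To actually run the decoy (blocks forever):
#   HTTPServer(("0.0.0.0", 8080), DecoyHandler).serve_forever()
```

That is the whole trick: the detection logic is a one-liner, and the engineering effort goes into where the decoy lives and how believable it is.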

This also means that we, as deception engineers, are not in the business of threat analysis. Do we care exactly how a particular piece of malware can bypass AV engines? We might be curious, but ultimately we want to capture malicious intent and work to remove the threat as quickly as possible.

Benefits of Cyber Deception

Deception can be incredibly lightweight

Simple pieces of deception are often items like canary credentials or canarytokens. They're just realistic-looking artifacts that, when properly placed, can be used to detect malicious intent. In practice, they simply live on the filesystem somewhere and that's all they do. They occupy no significant storage space, consume no CPU or memory, and, best of all, do not interfere with legitimate end users. While some EDR solutions can be configured to completely lock down a machine – not ideal for legitimate end users attempting to do their work – canarytokens simply persist and often go unnoticed.
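A minimal sketch of how lightweight this is in practice: drop a decoy credentials file where an intruder would look for one. The directory layout below follows the standard AWS CLI convention, and the key pair is Amazon's published documentation placeholder – in a real deployment the key would be minted by a canarytoken service that alerts the moment it is used.

```python
# Sketch: place a decoy AWS credentials file on disk. The key pair is
# AWS's documented example placeholder, not a live canarytoken -- a real
# deployment would substitute a key that phones home when used.
from pathlib import Path

DECOY_PROFILE = """[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
"""

def place_canary(home_dir: str) -> Path:
    """Write the decoy credentials where an attacker would expect them."""
    target = Path(home_dir) / ".aws" / "credentials"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(DECOY_PROFILE)
    return target
```

Once written, the file does nothing at all until someone tries to use the key – which is exactly the point.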

This can also directly impact the financial investment of a security program. Static resources can be incredibly cheap to host since they just persist in an existing environment. Even cloud-based decoy resources can be cost-effective since, in theory, they should only incur cost when used – and deception resources should ideally never be used, which keeps costs down.

Deception is focused on heuristics not all tooling can catch

EDR solutions do a fantastic job these days of catching all the bad things ™️ using all sorts of methods – heuristics, signatures, domain reputation checking, AI; the list could go on! However, what happens when legitimate tooling and protocols are used for bad things? How do you detect that?

One example of this gap is enumeration of Active Directory. SharpHound and other *Hound-based tooling can often be picked up by AV or EDR solutions, but what happens when they aren't? What happens when the tooling is proxied through a legitimate endpoint? There's nothing unusual about querying the directory or using existing configurations in the domain – even if they're misconfigurations! Deception can be used to delineate the legitimate use cases from the behavior of a bad actor.
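One way to sketch that delineation – the decoy account names below are entirely invented for illustration – is to seed the directory with principals no legitimate workflow ever touches, then flag any query that references one:

```python
# Sketch: decoy principals seeded into the directory. No legitimate
# process or user should ever reference them, so any query that does
# is worth an alert. All account names here are fabricated examples.
DECOY_PRINCIPALS = {"svc-backup-dr", "adm-legacy", "sql-svc-old"}

def classify_query(queried_object: str) -> str:
    """Return 'alert' when a query touches a decoy, else 'baseline'."""
    if queried_object.lower() in DECOY_PRINCIPALS:
        return "alert"      # enumeration touched bait it had no reason to
    return "baseline"       # ordinary directory traffic
```

An enumeration tool like SharpHound sweeps up everything it can see, bait included, while day-to-day traffic never brushes against the decoys – that asymmetry is what makes the signal so clean.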

Deception is adaptable

The unique proposition of deception, as mentioned earlier, is that it relies on us crafting our own story. As a result, deception can be tailored to any situation or environment as long as you understand your situation. Simply understanding what you are trying to protect and how it can be compromised can be enough to start conceptualizing a deception strategy.

Simple deployment strategies can also be adjusted on the fly. Say a compromise does happen and your approach needs to change: because deception is independent of major workloads, you can add, remove, or modify as quickly as you can write your response playbooks.

Is Deception right for your organization?

I may be biased, but it’s a strong yes. You can get started literally for free thanks to Thinkst and their canarytokens.org offering. In under 10 minutes you can generate a few canarytokens and place them in a few spots to wait for interaction.

At a larger scale, canarytokens can be deployed across endpoints and networks within a few months to establish a baseline program. Once they're deployed, all you have to do is sit and wait for interactions. With a dedicated deception team? The sky really is the limit.