DERBI: Diagnosis, Explanation and
Recovery from Break-Ins
Mabry Tyson
Douglas B. Moran
Pauline Berry
David Blei
Jim Carpenter
Ruth Lang
Artificial Intelligence Center
SRI International
333 Ravenswood Avenue
Menlo Park, CA 94025
http://www.ai.sri.com/~derbi/
Contrast: Traditional Intrusion Detection Systems
- Traditional IDSs monitor events as they occur and
produce assessments in real time
- DERBI is invoked by "significant" events
- May be a report from another site days
after the intrusion occurred
- Must look backward for evidence of an intrusion that
occurred before the event was noticed
Adaptive Reaction to Threat
- Controlled reaction to perceived threat (see the sketch after this list)
- Level of vulnerability
- Level of resources
- Level of attack
- Multiple triggering events
- Follow threads from each piece of evidence
- Detects pieces of an attack, even if its novelty precludes
correlating them
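The slides do not spell out how the reaction is scaled, so the following is only a minimal Python sketch, assuming coarse 0-3 ratings whose names and thresholds are hypothetical:

# Hedged sketch only: DERBI's actual reaction policy is not shown in these slides.
# Assumes coarse 0-3 ratings for vulnerability, available resources, and attack level.

def choose_reaction(vulnerability: int, resources: int, attack: int) -> str:
    """Pick a scan depth from coarse ratings; the thresholds are illustrative."""
    concern = vulnerability + attack            # how serious the threat looks
    if concern >= 4 and resources >= 2:
        return "full sweep"                     # e.g., checksum every system binary
    if concern >= 2:
        return "targeted checks"                # e.g., logs and key binaries only
    return "baseline monitoring"

# Example: a vulnerable host under active attack, with cycles to spare.
print(choose_reaction(vulnerability=2, resources=2, attack=2))    # -> full sweep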
Project Rationale
- Issue: limited expertise at most sites
- Large commonalities in many intrusions
- Reusing tools, techniques, tactics
- Possible variations on shared recipes
- Goals:
- Allow a non-expert sysadmin to understand the nature
and extent of a break-in and how to recover from it
- Faster recovery for typical system
- Non-intrusive in normal state
DERBI Architecture
[Architecture diagram]
Diagnosis: Reasoning from a Model of Intrusion
- Model the structure of intrusion event
- Follow intrusion at abstract level
- Relate concrete actions to the abstract stages
- Model the relationship of evidence to actions
- Indirect evidence provides clues to
prior and subsequent steps
- Explanation of the intrusion to the sysadmin is based
on these models (see the sketch below)
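A minimal sketch of the evidence-to-action relationship described above; the entries and field names are illustrative assumptions, not DERBI's internal representation:

# Illustrative only: ties one concrete finding to an abstract stage and to the
# prior and subsequent steps that the model says are worth checking next.

EVIDENCE_MODEL = {
    "wtmp/lastlog inconsistency": {
        "stage": "camouflage of entry",
        "check_before": ["remote logins from unusual hosts"],
        "check_after": ["replaced system commands", "new setuid files"],
    },
    "replaced ps command": {
        "stage": "camouflage of activity",
        "check_before": ["how root access was obtained"],
        "check_after": ["hidden processes", "sniffer output files"],
    },
}

def clues_from(observation: str) -> dict:
    """Given a piece of indirect evidence, return the steps to examine next."""
    return EVIDENCE_MODEL.get(observation, {})

print(clues_from("replaced ps command")["check_after"])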
Benefits of Model
- Multiple levels of models provide extensibility
- Models reusable for other platforms
- Novel attack scenarios will give some evidence
at different levels
- Models can evolve with exploits
- Multiple chains of reasoning promote robustness
General Model of Intrusion:
Components
- Point of Entry
- Acquire additional privileges
(optional)
- Main Purpose:
Theft, sabotage, publicity, ...
- Camouflage/Concealment
- Subsequent activity:
Re-entry, data collection (these components are sketched as a data structure below)
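The same components written out as a simple ordered structure; the representation and field names are assumptions for illustration only:

# Illustrative encoding of the general model; only privilege acquisition is
# marked optional, matching the slide.

GENERAL_MODEL = [
    {"stage": "point of entry",                 "optional": False},
    {"stage": "acquire additional privileges",  "optional": True},
    {"stage": "main purpose",                   "optional": False,
     "examples": ["theft", "sabotage", "publicity"]},
    {"stage": "camouflage/concealment",         "optional": False},
    {"stage": "subsequent activity",            "optional": False,
     "examples": ["re-entry", "data collection"]},
]

def expected_stages() -> list:
    """Stages the model expects to find evidence for in a complete intrusion."""
    return [s["stage"] for s in GENERAL_MODEL if not s["optional"]]

print(expected_stages())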
Camouflage as
Indirect Evidence
- Hide login by cleaning up the wtmp log (a consistency check is sketched below)
- lastlog inconsistency ==> root was compromised
- For a user: wtmp/lastlog inconsistency ==> which user account was compromised
- For that user: last-access dates on files ==> when the compromise may have occurred
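A rough version of that consistency check, assuming Linux-style last/lastlog utilities (DERBI's own checks and target platforms may have differed) and deliberately simplistic output parsing:

# Rough sketch: if lastlog says a user has logged in but wtmp has no record,
# the wtmp entries were probably scrubbed -- classic login-hiding camouflage.
# Assumes Linux-style `last`/`lastlog`; the string matching here is simplistic.

import subprocess

def wtmp_lastlog_mismatch(user: str) -> bool:
    lastlog_out = subprocess.run(["lastlog", "-u", user],
                                 capture_output=True, text=True).stdout
    wtmp_out = subprocess.run(["last", user],
                              capture_output=True, text=True).stdout
    logged_in = "Never logged in" not in lastlog_out
    no_wtmp_record = not any(line.startswith(user) for line in wtmp_out.splitlines())
    return logged_in and no_wtmp_record

if wtmp_lastlog_mismatch("root"):
    print("wtmp/lastlog inconsistency for root: possible scrubbed login")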
Camouflage:
Examples of Evidence
- Replaced system commands
- Detection by direct comparison (see the checksum sketch below)
- Checksum: cheaper, but spoofable
- Binary comparison: more expensive; the original may be unavailable
- Detection by features
- Built from different code base
- Hypothesize what it hides and test
- Suspicious date (file or directory)
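A sketch of the cheap checksum comparison, with illustrative paths and an assumed offline baseline file:

# Hash monitored binaries and compare against a saved baseline. Paths and the
# baseline location are assumptions; a plain hash is spoofable if the baseline
# itself is stored on the compromised host.

import hashlib, json

MONITORED = ["/bin/ls", "/bin/ps", "/usr/bin/passwd"]   # illustrative choices
BASELINE_FILE = "/mnt/readonly/baseline.json"           # ideally kept offline

def file_digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_binaries() -> list:
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)                         # {path: expected digest}
    return [p for p in MONITORED if file_digest(p) != baseline.get(p)]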
Camouflage:
Examples of Evidence (cont.)
- Does the replaced command accept an argument to turn off its camouflage?
- So the intruder can see his own files and processes
- Some common "conventions": test whether they are present
- Suspicious files and processes
- Found by other means
- Confirm that the modified system command hides the suspicious file or process (cross-check sketched below)
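One way to confirm that kind of camouflage, assuming the direct directory read is still trustworthy; the directory and the naive parsing are illustrative only:

# Cross-check: compare what the (possibly trojaned) `ls` reports with a direct
# directory read. Names missing from `ls -a` suggest the command hides them.
# The directory is an arbitrary example; naive parsing breaks on names with spaces.

import os, subprocess

def names_hidden_by_ls(directory: str) -> set:
    direct = set(os.listdir(directory))
    ls_out = subprocess.run(["ls", "-a", directory],
                            capture_output=True, text=True).stdout.split()
    return direct - (set(ls_out) - {".", ".."})

hidden = names_hidden_by_ls("/tmp")
if hidden:
    print("ls appears to hide:", sorted(hidden))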
Reasoning from the
Available Evidence
- Replaced system commands
- Primary function suggests the most likely purpose: ps ==> hidden process (cross-check sketched below)
- Alternative function (e.g., Trojan horse)
- Replaced system library is an indirect form of
replaced system command (harder to identify)
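A cross-check in that spirit, assuming a Linux-style /proc (not necessarily DERBI's platform); short-lived processes can produce false positives, so any hit needs confirmation:

# PIDs visible in /proc but absent from `ps` output hint at a modified ps.
# Processes that exit between the two samples can show up as false positives.

import os, subprocess

def pids_hidden_from_ps() -> set:
    proc_pids = {int(n) for n in os.listdir("/proc") if n.isdigit()}
    ps_out = subprocess.run(["ps", "-e", "-o", "pid="],
                            capture_output=True, text=True).stdout
    return proc_pids - {int(tok) for tok in ps_out.split()}

missing = pids_hidden_from_ps()
if missing:
    print("processes not reported by ps:", sorted(missing))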
Chain of Reasoning
- Modified ps command ==> hidden process (the full chain is sketched as rules below)
- Hidden process ==> network sniffer or ...
- Network sniffer ==> info storage or ...
- Info storage ==> retrieval method
- Retrieval method ==> re-entry or transmit
- Re-entry ==> backdoor or ...
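The same chain written as simple forward-chaining rules; the rule table and traversal are illustrative, not DERBI's reasoning engine:

# Illustrative rule table for the chain above; alternatives elided on the slide
# ("or ...") are left out rather than invented.

RULES = {
    "modified ps command": ["hidden process"],
    "hidden process":      ["network sniffer"],
    "network sniffer":     ["info storage"],
    "info storage":        ["retrieval method"],
    "retrieval method":    ["re-entry", "transmit"],
    "re-entry":            ["backdoor"],
}

def follow_chain(finding: str, seen=None) -> set:
    """Everything a single finding suggests looking for, transitively."""
    seen = set() if seen is None else seen
    for implied in RULES.get(finding, []):
        if implied not in seen:
            seen.add(implied)
            follow_chain(implied, seen)
    return seen

print(sorted(follow_chain("modified ps command")))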
DERBI
- Reactive tool for analyzing intrusions
- Models promote extensibility and robustness
- Combine models of intrusion and evidence to guide the search for additional information
- Model-driven explanation