RBA Migration #3204

Merged
180 commits, merged Jan 21, 2025

Commits
a0a94f3
Application detections
ljstella Nov 13, 2024
4daecb5
Merge branch 'develop' into rba_migration
ljstella Nov 13, 2024
25523a7
Branch was auto-updated.
patel-bhavin Nov 14, 2024
c88e375
Branch was auto-updated.
patel-bhavin Nov 14, 2024
b2ac6bd
Branch was auto-updated.
patel-bhavin Nov 14, 2024
92cc97a
endpoint first pass
ljstella Nov 14, 2024
318311f
cloud detections initial translation
ljstella Nov 14, 2024
b901a04
Merge branch 'develop' into rba_migration
ljstella Nov 14, 2024
6089391
Deprecated Detections initial translation
ljstella Nov 14, 2024
f6587bd
network detections initial translation
ljstella Nov 15, 2024
c218c03
web detections initial translation
ljstella Nov 15, 2024
f88eb16
endpoint detection score fix
ljstella Nov 15, 2024
3b43f75
cloud detection score fix
ljstella Nov 15, 2024
33574d7
deprecated detection score fix
ljstella Nov 15, 2024
6792bcc
network detections score fix
ljstella Nov 15, 2024
c4eb58c
web detections score fix
ljstella Nov 15, 2024
c6739f2
application detections score fix
ljstella Nov 15, 2024
d4a93b0
Application detection score field rename
ljstella Nov 15, 2024
5141230
endpoint detection score field rename
ljstella Nov 15, 2024
a33df66
web detection score field rename
ljstella Nov 15, 2024
77e5b9b
cloud detection score field rename
ljstella Nov 15, 2024
5c299c7
deprecated detection score field rename
ljstella Nov 15, 2024
17b942b
network detection score field rename
ljstella Nov 15, 2024
e88b279
missing types
ljstella Nov 15, 2024
c9186e0
endpoint: lowercase rba types
ljstella Nov 15, 2024
6384bea
web: lowercase rba types
ljstella Nov 15, 2024
a27edb3
cloud: lowercase rba types
ljstella Nov 15, 2024
ddbaa75
network: lowercase rba types
ljstella Nov 15, 2024
b554c8d
deprecated: lowercase rba types
ljstella Nov 15, 2024
92ab1d5
application: lowercase rba types
ljstella Nov 15, 2024
ce696ff
web: ip address typefix
ljstella Nov 15, 2024
19a96c0
endpoint: ip address typefix
ljstella Nov 15, 2024
e21906f
deprecated: ip address typefix
ljstella Nov 15, 2024
bf5d4c1
cloud: ip address typefix
ljstella Nov 15, 2024
e902ee2
application: ip address typefix
ljstella Nov 15, 2024
8b73d42
network: ip address typefix
ljstella Nov 15, 2024
18a44d5
application: more typefixes
ljstella Nov 15, 2024
bc14854
endpoint: more typefixes
ljstella Nov 15, 2024
d8616d2
cloud: more typefixes
ljstella Nov 15, 2024
4b5f5a9
deprecated: more typefixes
ljstella Nov 15, 2024
f15bde0
network: more typefixes
ljstella Nov 15, 2024
393c038
web: more typefixes
ljstella Nov 15, 2024
4369125
artifact of cleanup
ljstella Nov 15, 2024
9225593
application: more typefixes
ljstella Nov 15, 2024
e3fb34c
cloud: more typefixes
ljstella Nov 15, 2024
911bfd0
deprecated: more typefixes
ljstella Nov 15, 2024
8f18a20
endpoint: more typefixes
ljstella Nov 15, 2024
1051b93
network: more typefixes
ljstella Nov 15, 2024
4a3bc66
web: more typefixes
ljstella Nov 15, 2024
8d7aabc
web: remove rba from hunting
ljstella Nov 15, 2024
189cc59
endpoint: remove rba from hunting
ljstella Nov 15, 2024
c3f94b6
application: remove rba from hunting
ljstella Nov 15, 2024
91e76c6
cloud: remove rba from hunting
ljstella Nov 15, 2024
9811512
deprecated: remove rba from hunting
ljstella Nov 15, 2024
2f98865
network: remove rba from hunting
ljstella Nov 15, 2024
1af44a3
Cleanup rba
ljstella Nov 15, 2024
8b6e0cf
cloud: more cleanup
ljstella Nov 15, 2024
5e9af4f
endpoint: more cleanup
ljstella Nov 15, 2024
dfd645f
web: more cleanup
ljstella Nov 15, 2024
26e4c85
application: remove rba config from correlations
ljstella Nov 15, 2024
9398d80
cloud: remove rba config from correlations
ljstella Nov 15, 2024
a5850dd
deprecated: remove rba config from correlations
ljstella Nov 15, 2024
8b2b6bf
endpoint: remove rba config from correlations
ljstella Nov 15, 2024
b600be3
web: remove rba config from correlations
ljstella Nov 15, 2024
67dec1e
application: threat object type cleanup
ljstella Nov 15, 2024
6805f6d
cloud: threat object type cleanup
ljstella Nov 15, 2024
5d8524d
deprecated: threat object type cleanup
ljstella Nov 15, 2024
eb9dfba
endpoint: threat object type cleanup
ljstella Nov 15, 2024
4722d10
network: threat object type cleanup
ljstella Nov 15, 2024
e0718d1
web: threat object type cleanup
ljstella Nov 15, 2024
801b8c3
stragglers cleanup
ljstella Nov 15, 2024
88fd263
Merge branch 'develop' into rba_migration
ljstella Nov 22, 2024
8014132
Updates with develop
ljstella Nov 25, 2024
9f14118
application: cleanup tbd messages
ljstella Nov 27, 2024
33d6fd9
Cloud: cleanup tbd messages
ljstella Nov 27, 2024
85628e2
Endpoint: cleanup TBD messages
ljstella Nov 27, 2024
67ad9f5
network: cleanup tbd messages
ljstella Nov 27, 2024
9ab5e6a
web: cleanup TBD messages
ljstella Nov 27, 2024
d2edfa4
deprecated: cleanup TBD messages
ljstella Nov 27, 2024
606f42a
cloud: cleanup of risk object type other
ljstella Nov 27, 2024
78f39ca
endpoint: cleanup of risk object type other
ljstella Nov 27, 2024
99f9c99
cloud: missed one
ljstella Nov 27, 2024
46f4e1e
Branch was auto-updated.
patel-bhavin Dec 2, 2024
c7bac21
Branch was auto-updated.
patel-bhavin Dec 2, 2024
6357f18
Merge branch 'develop' into rba_migration
ljstella Dec 10, 2024
6bc9c2a
Merge branch 'develop' into rba_migration
ljstella Dec 10, 2024
3a965a7
rba conversion
ljstella Dec 10, 2024
14edbda
Branch was auto-updated.
patel-bhavin Dec 10, 2024
d30e04f
small grammar fix
ljstella Dec 10, 2024
8fefa90
Merge branch 'develop' into rba_migration
ljstella Dec 11, 2024
ebd4437
Branch was auto-updated.
patel-bhavin Dec 16, 2024
357eb89
Rename to A&I compatible user field
ljstella Dec 16, 2024
5adad84
Merge branch 'develop' into rba_migration
ljstella Dec 16, 2024
51d034c
Merge branch 'develop' into rba_migration
ljstella Dec 16, 2024
d5328df
Merge branch 'develop' into rba_migration
ljstella Dec 16, 2024
0e2e844
Merge branch 'develop' into rba_migration
ljstella Dec 19, 2024
e1dde45
Merge branch 'develop' into rba_migration
ljstella Dec 23, 2024
8b1b3d5
first batch of updates to
pyth0n1c Dec 23, 2024
27c9cb5
Second round of lookup file cleanup
pyth0n1c Dec 23, 2024
32da5d2
another round of lookup file updates
pyth0n1c Dec 23, 2024
32fb17c
More cleanup of lookups
pyth0n1c Dec 24, 2024
29b2332
more cleanup of lookups to follow new format.
pyth0n1c Jan 3, 2025
f4b6770
finish lookup cleanup
pyth0n1c Jan 3, 2025
79f11c3
removes types from stories
pyth0n1c Jan 3, 2025
c1b0b64
remove extra fields from baselines and investigations
pyth0n1c Jan 3, 2025
9c8a019
Branch was auto-updated.
patel-bhavin Jan 3, 2025
8b4ca42
remove collection for all kvstores. this will just come from the nam…
pyth0n1c Jan 3, 2025
0706bfd
Merge branch 'rba_migration' into strict_yml_from_rba
pyth0n1c Jan 3, 2025
acbe468
Branch was auto-updated.
patel-bhavin Jan 3, 2025
d1442c2
remove update_timestamps, confidence, impact,
pyth0n1c Jan 3, 2025
fdaa038
Finish removing extra fields, or renaming
pyth0n1c Jan 3, 2025
3b8e769
progress on input/output lookup
pyth0n1c Jan 4, 2025
9fb7b90
Stragglers migration
ljstella Jan 7, 2025
f49899a
Merge branch 'develop' into rba_migration
ljstella Jan 7, 2025
0e6251e
Branch was auto-updated.
patel-bhavin Jan 8, 2025
6f3c999
Branch was auto-updated.
patel-bhavin Jan 9, 2025
3756602
Merge branch 'rba_migration' into strict_yml_from_rba
pyth0n1c Jan 9, 2025
ad772ee
More stragglers
ljstella Jan 9, 2025
b9b7a16
Branch was auto-updated.
patel-bhavin Jan 9, 2025
1c4c7bd
remove confidence impact message
pyth0n1c Jan 9, 2025
5aafb26
Merge branch 'rba_migration' into strict_yml_from_rba
pyth0n1c Jan 9, 2025
e7d4e0b
Merge branch 'strict_yml_from_rba' of https://github.com/splunk/secur…
pyth0n1c Jan 9, 2025
83ecc74
Branch was auto-updated.
patel-bhavin Jan 9, 2025
0625624
Branch was auto-updated.
patel-bhavin Jan 9, 2025
db1e1bb
Branch was auto-updated.
patel-bhavin Jan 10, 2025
963f94f
Branch was auto-updated.
patel-bhavin Jan 10, 2025
6f56a46
New detection conversion
ljstella Jan 10, 2025
5efd265
Branch was auto-updated.
patel-bhavin Jan 10, 2025
a642239
Merge branch 'rba_migration' into strict_yml_from_rba
pyth0n1c Jan 11, 2025
926ca49
remove extra fields
pyth0n1c Jan 14, 2025
05f7a6e
Merge branch 'strict_yml_from_rba' of https://github.com/splunk/secur…
pyth0n1c Jan 14, 2025
a0a7dc4
Uploading new detections
dluxtron Jan 6, 2025
4761a49
Uploading new detections
dluxtron Jan 6, 2025
5ba40b3
Adding intune detections & updating dataset links
dluxtron Jan 7, 2025
0282aff
fixing syntax & updating macro
dluxtron Jan 7, 2025
71e6dd7
fixing up yamls for testing
patel-bhavin Jan 7, 2025
dc1672d
updating risk
patel-bhavin Jan 7, 2025
dbc1780
updating macro
patel-bhavin Jan 7, 2025
e06b07f
udpating SPL
patel-bhavin Jan 7, 2025
37f2760
reverting macro change
dluxtron Jan 8, 2025
e013fd8
Update o365_service_principal_privilege_escalation.yml
patel-bhavin Jan 13, 2025
437368d
updating data sources
patel-bhavin Jan 13, 2025
56074f2
Improved ASL AWS detections
Dec 12, 2024
f727062
bug fixes
Dec 12, 2024
22c3c2d
bug fixes
Dec 12, 2024
ebca301
new asl aws detections
Dec 12, 2024
8921fd9
bug fix
Dec 12, 2024
a639044
bug fix
Dec 12, 2024
5ae468d
bug fix
Dec 12, 2024
a304f41
bug fix
Dec 12, 2024
39f28ef
bug fix
Dec 12, 2024
7d061fa
bug fix improvements
Dec 16, 2024
d3e1be0
updates
Dec 16, 2024
d3b40ac
bug fix
Dec 16, 2024
b3a7ff3
bug fix
Dec 16, 2024
009ca3b
bug fix
Dec 16, 2024
d7b526d
new detection
Dec 16, 2024
bde1c9d
new detection
Dec 17, 2024
253c463
bug fix
Dec 17, 2024
2f0c371
new detection
Dec 17, 2024
3786451
new detection
Jan 8, 2025
e937232
bug fix
Jan 8, 2025
22e401a
new detection
Jan 8, 2025
8e0b643
new detections
Jan 9, 2025
37ec540
change
Jan 9, 2025
633655d
improvements
Jan 9, 2025
2b63765
minor udpates to yaml
patel-bhavin Jan 10, 2025
a7d061c
Add ASL AWS CloudTrail data source
Jan 14, 2025
9c5b3aa
Merge branch 'develop' into rba_migration
ljstella Jan 15, 2025
1d086b4
Merge branch 'rba_migration' into strict_yml_from_rba
ljstella Jan 15, 2025
70861d2
More migrations: AWS, Azure, O365
ljstella Jan 15, 2025
9ee2e1d
Branch was auto-updated.
patel-bhavin Jan 15, 2025
ef2ac2a
Removal of fields from new detections
ljstella Jan 15, 2025
8894f2d
Merge branch 'develop' into rba_migration
ljstella Jan 16, 2025
5b6f8ea
Fix more errors with missing lookups, baselines, and detections
pyth0n1c Jan 16, 2025
e526990
Merge pull request #3269 from splunk/strict_yml_from_rba
ljstella Jan 16, 2025
b49a76d
fix baseline which referenced lookups
pyth0n1c Jan 16, 2025
fc2b5d4
empty commit
patel-bhavin Jan 18, 2025
d2726d6
specify ctl version
patel-bhavin Jan 18, 2025
8bf0425
bump to alpha2
patel-bhavin Jan 21, 2025
2 changes: 1 addition & 1 deletion .github/workflows/appinspect.yml
@@ -18,7 +18,7 @@ jobs:

- name: Install Python Dependencies and ContentCTL and Atomic Red Team
run: |
pip install contentctl>=4.0.0
pip install contentctl==v5.0.0-alpha.2
git clone --depth=1 --single-branch --branch=master https://github.com/redcanaryco/atomic-red-team.git external_repos/atomic-red-team
git clone --depth=1 --single-branch --branch=master https://github.com/mitre/cti external_repos/cti

2 changes: 1 addition & 1 deletion .github/workflows/build.yml
@@ -19,7 +19,7 @@ jobs:

- name: Install Python Dependencies and ContentCTL and Atomic Red Team
run: |
pip install contentctl>=4.0.0
pip install contentctl==v5.0.0-alpha.2
git clone --depth=1 --single-branch --branch=master https://github.com/redcanaryco/atomic-red-team.git external_repos/atomic-red-team
git clone --depth=1 --single-branch --branch=master https://github.com/mitre/cti external_repos/cti

2 changes: 1 addition & 1 deletion .github/workflows/unit-testing.yml
@@ -23,7 +23,7 @@ jobs:
- name: Install Python Dependencies and ContentCTL
run: |
python -m pip install --upgrade pip
pip install contentctl>=4.0.0
pip install contentctl==v5.0.0-alpha.2

# Running contentctl test with a few arguments, before running the command make sure you checkout into the current branch of the pull request. This step only performs unit testing on all the changes against the target-branch. In most cases this target branch will be develop
# Make sure we check out the PR, even if it actually lives in a fork
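All three workflow diffs above replace the open-ended `pip install contentctl>=4.0.0` with an exact pre-release pin, `contentctl==v5.0.0-alpha.2`. As a hedged illustration (not part of this PR), the sketch below shows roughly how a `v`-prefixed, `-alpha.2`-style version spelling is normalized under PEP 440-style rules before pip compares it against an `==` pin; the real normalization lives in pip/packaging and covers far more cases than this.

```python
import re

def normalize(version: str) -> str:
    # Minimal sketch of PEP 440-style normalization; the authoritative
    # rules are implemented in pip/packaging, not here.
    v = version.lower().lstrip("v")        # 'v5.0.0-alpha.2' -> '5.0.0-alpha.2'
    v = re.sub(r"[-_.]?alpha\.?", "a", v)  # alpha pre-release segment -> 'a'
    return v

# An '==' pin matches when the normalized forms are equal
print(normalize("v5.0.0-alpha.2"))
```

Under this reading, `v5.0.0-alpha.2` and `5.0.0a2` name the same release, so the exact pin locks CI to that one alpha build instead of floating to whatever `>=4.0.0` resolves.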
6 changes: 0 additions & 6 deletions baselines/baseline_of_blocked_outbound_traffic_from_aws.yml
@@ -4,7 +4,6 @@ version: 1
date: '2018-05-07'
author: Bhavin Patel, Splunk
type: Baseline
datamodel: []
description: This search establishes, on a per-hour basis, the average and the standard
deviation of the number of outbound connections blocked in your VPC flow logs by
each source IP address (IP address of your EC2 instances). Also recorded is the
@@ -34,9 +33,4 @@ tags:
- Splunk Enterprise
- Splunk Enterprise Security
- Splunk Cloud
required_fields:
- _time
- action
- src_ip
- dest_ip
security_domain: network
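The description above says this baseline records, per hour, the average and standard deviation of blocked outbound connections for each source IP. As an illustrative sketch only (the sample events and field names here are invented, not taken from the PR), the same grouping and statistics in Python look like:

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical events: (hour_of_day, src_ip, blocked_count)
events = [
    (9, "10.0.0.5", 3), (9, "10.0.0.5", 7),
    (9, "10.0.0.9", 1), (14, "10.0.0.5", 40),
]

# Group blocked-connection counts per (hour, src_ip), then record the
# average and standard deviation -- the shape of the lookup a baseline
# like this writes out.
groups = defaultdict(list)
for hour, src_ip, count in events:
    groups[(hour, src_ip)].append(count)

baseline = {
    key: {"avg": mean(counts), "stdev": pstdev(counts)}
    for key, counts in groups.items()
}
print(baseline[(9, "10.0.0.5")])  # avg 5, stdev 2
```

A detection can then flag hours where the observed count exceeds the stored average by some multiple of the standard deviation.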
@@ -4,8 +4,6 @@ version: 1
date: '2020-09-07'
author: David Dorsey, Splunk
type: Baseline
datamodel:
- Change
description: This search is used to build a Machine Learning Toolkit (MLTK) model
for how many API calls are performed by each user. By default, the search uses the
last 90 days of data to build the model and the model is rebuilt weekly. The model
@@ -40,14 +38,10 @@ tags:
- Splunk Enterprise
- Splunk Enterprise Security
- Splunk Cloud
required_fields:
- _time
- All_Changes.user
- All_Changes.status
security_domain: network
deployment:
scheduling:
cron_schedule: 0 2 * * 0
earliest_time: -90d@d
latest_time: -1d@d
schedule_window: auto
schedule_window: auto
16 changes: 4 additions & 12 deletions baselines/baseline_of_cloud_instances_destroyed.yml
@@ -4,8 +4,6 @@ version: 1
date: '2020-08-25'
author: David Dorsey, Splunk
type: Baseline
datamodel:
- Change
description: This search is used to build a Machine Learning Toolkit (MLTK) model
for how many instances are destroyed in the environment. By default, the search
uses the last 90 days of data to build the model and the model is rebuilt weekly.
@@ -20,17 +18,16 @@ search: '| tstats count as instances_destroyed from datamodel=Change where All_C
<= 5, 0, 1) | table _time instances_destroyed, HourOfDay, isWeekend | fit DensityFunction
instances_destroyed by "HourOfDay,isWeekend" into cloud_excessive_instances_destroyed_v1
dist=expon show_density=true'
how_to_implement: 'You must have Enterprise Security 6.0 or later, if not you will
how_to_implement: "You must have Enterprise Security 6.0 or later, if not you will
need to verify that the Machine Learning Toolkit (MLTK) version 4.2 or later is
installed, along with any required dependencies. Depending on the number of users
in your environment, you may also need to adjust the value for max_inputs in the
MLTK settings for the DensityFunction algorithm, then ensure that the search completes
in a reasonable timeframe. By default, the search builds the model using the past
30 days of data. You can modify the search window to build the model over a longer
period of time, which may give you better results. You may also want to periodically
re-run this search to rebuild the model with the latest data.

More information on the algorithm used in the search can be found at `https://docs.splunk.com/Documentation/MLApp/4.2.0/User/Algorithms#DensityFunction`.'
re-run this search to rebuild the model with the latest data.\nMore information
on the algorithm used in the search can be found at `https://docs.splunk.com/Documentation/MLApp/4.2.0/User/Algorithms#DensityFunction`."
known_false_positives: none
references: []
tags:
@@ -43,15 +40,10 @@ tags:
- Splunk Enterprise
- Splunk Enterprise Security
- Splunk Cloud
required_fields:
- _time
- All_Changes.action
- All_Changes.status
- All_Changes.object_category
security_domain: network
deployment:
scheduling:
cron_schedule: 0 2 * * 0
earliest_time: -90d@d
latest_time: -1d@d
schedule_window: auto
schedule_window: auto
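The instance-count baselines above derive an `HourOfDay` and an `isWeekend` flag from each event's timestamp before running `fit DensityFunction ... by "HourOfDay,isWeekend"`. As a hedged sketch of just that feature step (MLTK's DensityFunction itself runs inside Splunk and has no stand-in here):

```python
from datetime import datetime

def features(ts):
    """HourOfDay / isWeekend features like those the SPL derives before
    fitting the density model per (hour, weekend) bucket."""
    hour_of_day = ts.hour
    # Python's weekday() numbers Monday=0 .. Sunday=6; Splunk's own
    # weekday handling differs, so this cutoff is an adaptation of the
    # SPL, not a literal translation.
    is_weekend = 1 if ts.weekday() >= 5 else 0
    return hour_of_day, is_weekend

print(features(datetime(2025, 1, 18, 2, 0)))  # Saturday 02:00 -> (2, 1)
```

Bucketing by these two features lets the model learn, say, that many instances destroyed at 02:00 on a Sunday is unusual even if the same count at 14:00 on a Tuesday is routine.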
16 changes: 4 additions & 12 deletions baselines/baseline_of_cloud_instances_launched.yml
@@ -4,8 +4,6 @@ version: 1
date: '2020-08-14'
author: David Dorsey, Splunk
type: Baseline
datamodel:
- Change
description: This search is used to build a Machine Learning Toolkit (MLTK) model
for how many instances are created in the environment. By default, the search uses
the last 90 days of data to build the model and the model is rebuilt weekly. The
@@ -20,17 +18,16 @@ search: '| tstats count as instances_launched from datamodel=Change where (All_C
<= 5, 0, 1) | table _time instances_launched, HourOfDay, isWeekend | fit DensityFunction
instances_launched by "HourOfDay,isWeekend" into cloud_excessive_instances_created_v1
dist=expon show_density=true'
how_to_implement: 'You must have Enterprise Security 6.0 or later, if not you will
how_to_implement: "You must have Enterprise Security 6.0 or later, if not you will
need to verify that the Machine Learning Toolkit (MLTK) version 4.2 or later is
installed, along with any required dependencies. Depending on the number of users
in your environment, you may also need to adjust the value for max_inputs in the
MLTK settings for the DensityFunction algorithm, then ensure that the search completes
in a reasonable timeframe. By default, the search builds the model using the past
90 days of data. You can modify the search window to build the model over a longer
period of time, which may give you better results. You may also want to periodically
re-run this search to rebuild the model with the latest data.

More information on the algorithm used in the search can be found at `https://docs.splunk.com/Documentation/MLApp/4.2.0/User/Algorithms#DensityFunction`.'
re-run this search to rebuild the model with the latest data.\nMore information
on the algorithm used in the search can be found at `https://docs.splunk.com/Documentation/MLApp/4.2.0/User/Algorithms#DensityFunction`."
known_false_positives: none
references: []
tags:
@@ -43,15 +40,10 @@ tags:
- Splunk Enterprise
- Splunk Enterprise Security
- Splunk Cloud
required_fields:
- _time
- All_Changes.action
- All_Changes.status
- All_Changes.object_category
security_domain: network
deployment:
scheduling:
cron_schedule: 0 2 * * 0
earliest_time: -90d@d
latest_time: -1d@d
schedule_window: auto
schedule_window: auto
@@ -4,8 +4,6 @@ version: 1
date: '2020-09-07'
author: David Dorsey, Splunk
type: Baseline
datamodel:
- Change
description: This search is used to build a Machine Learning Toolkit (MLTK) model
for how many API calls for security groups are performed by each user. By default,
the search uses the last 90 days of data to build the model and the model is rebuilt
@@ -39,15 +37,10 @@ tags:
- Splunk Enterprise
- Splunk Enterprise Security
- Splunk Cloud
required_fields:
- _time
- All_Changes.user
- All_Changes.status
- All_Changes.object_category
security_domain: network
deployment:
scheduling:
cron_schedule: 0 2 * * 0
earliest_time: -90d@d
latest_time: -1d@d
schedule_window: auto
schedule_window: auto
10 changes: 2 additions & 8 deletions baselines/baseline_of_command_line_length___mltk.yml
@@ -4,7 +4,6 @@ version: 1
date: '2019-05-08'
author: Rico Valdez, Splunk
type: Baseline
datamodel: []
description: This search is used to build a Machine Learning Toolkit (MLTK) model
to characterize the length of the command lines observed for each user in the environment.
By default, the search uses the last 30 days of data to build the model. The model
@@ -24,7 +23,8 @@ how_to_implement: You must be ingesting endpoint data and populating the Endpoin
the past 30 days of data. You can modify the search window to build the model over
a longer period of time, which may give you better results. You may also want to
periodically re-run this search to rebuild the model with the latest data. More
information on the algorithm used in the search can be found at `https://docs.splunk.com/Documentation/MLApp/4.2.0/User/Algorithms#DensityFunction`.
information on the algorithm used in the search can be found at
`https://docs.splunk.com/Documentation/MLApp/4.2.0/User/Algorithms#DensityFunction`.
known_false_positives: none
references: []
tags:
@@ -41,12 +41,6 @@ tags:
- Splunk Enterprise
- Splunk Enterprise Security
- Splunk Cloud
required_fields:
- _time
- Processes.user
- Processes.dest
- Processes.process_name
- Processes.process
security_domain: endpoint
deployment:
scheduling:
9 changes: 2 additions & 7 deletions baselines/baseline_of_dns_query_length___mltk.yml
@@ -4,8 +4,6 @@ version: 1
date: '2019-05-08'
author: Rico Valdez, Splunk
type: Baseline
datamodel:
- Network_Resolution
description: This search is used to build a Machine Learning Toolkit (MLTK) model
to characterize the length of the DNS queries for each DNS record type observed
in the environment. By default, the search uses the last 30 days of data to build
@@ -22,7 +20,8 @@ how_to_implement: To successfully implement this search, you will need to ensure
days of data. You can modify the search window to build the model over a longer
period of time, which may give you better results. You may also want to periodically
re-run this search to rebuild the model with the latest data. More information on
the algorithm used in the search can be found at `https://docs.splunk.com/Documentation/MLApp/4.2.0/User/Algorithms#DensityFunction`.
the algorithm used in the search can be found at
`https://docs.splunk.com/Documentation/MLApp/4.2.0/User/Algorithms#DensityFunction`.
known_false_positives: none
references: []
tags:
@@ -36,10 +35,6 @@ tags:
- Splunk Enterprise
- Splunk Enterprise Security
- Splunk Cloud
required_fields:
- _time
- DNS.query
- DNS.record_type
security_domain: network
deployment:
scheduling:
61 changes: 32 additions & 29 deletions baselines/baseline_of_kubernetes_container_network_io.yml
@@ -4,29 +4,37 @@ version: 4
date: '2024-09-24'
author: Matthew Moore, Splunk
type: Baseline
datamodel: []
description: This baseline rule calculates the average and standard deviation of inbound and outbound network IO for each Kubernetes container.
It uses metrics from the Kubernetes API and the Splunk Infrastructure Monitoring Add-on. The rule generates a lookup table with the average and
standard deviation of the network IO for each container. This baseline can be used to detect anomalies in network communication behavior,
which may indicate security threats such as data exfiltration, command and control communication, or compromised container behavior.
search: '| mstats avg(k8s.pod.network.io) as io where `kubernetes_metrics` by k8s.cluster.name k8s.pod.name k8s.node.name direction span=10s
| eval service = replace(''k8s.pod.name'', "-\w{5}$|-[abcdef0-9]{8,10}-\w{5}$", "")
| eval key = ''k8s.cluster.name'' + ":" + ''service''
| stats avg(eval(if(direction="transmit", io,null()))) as avg_outbound_network_io avg(eval(if(direction="receive", io,null()))) as avg_inbound_network_io
stdev(eval(if(direction="transmit", io,null()))) as stdev_outbound_network_io stdev(eval(if(direction="receive", io,null()))) as stdev_inbound_network_io
count latest(_time) as last_seen by key
| outputlookup k8s_container_network_io_baseline'
how_to_implement: 'To implement this detection, follow these steps:
1. Deploy the OpenTelemetry Collector (OTEL) to your Kubernetes cluster.
2. Enable the hostmetrics/process receiver in the OTEL configuration.
3. Ensure that the process metrics, specifically Process.cpu.utilization and process.memory.utilization, are enabled.
4. Install the Splunk Infrastructure Monitoring (SIM) add-on (ref: https://splunkbase.splunk.com/app/5247)
5. Configure the SIM add-on with your Observability Cloud Organization ID and Access Token.
6. Set up the SIM modular input to ingest Process Metrics. Name this input "sim_process_metrics_to_metrics_index".
7. In the SIM configuration, set the Organization ID to your Observability Cloud Organization ID.
8. Set the Signal Flow Program to the following: data(''process.threads'').publish(label=''A''); data(''process.cpu.utilization'').publish(label=''B''); data(''process.cpu.time'').publish(label=''C''); data(''process.disk.io'').publish(label=''D''); data(''process.memory.usage'').publish(label=''E''); data(''process.memory.virtual'').publish(label=''F''); data(''process.memory.utilization'').publish(label=''G''); data(''process.cpu.utilization'').publish(label=''H''); data(''process.disk.operations'').publish(label=''I''); data(''process.handles'').publish(label=''J''); data(''process.threads'').publish(label=''K'')
9. Set the Metric Resolution to 10000.
10. Leave all other settings at their default values.'
description: This baseline rule calculates the average and standard deviation of inbound
and outbound network IO for each Kubernetes container. It uses metrics from the
Kubernetes API and the Splunk Infrastructure Monitoring Add-on. The rule generates
a lookup table with the average and standard deviation of the network IO for each
container. This baseline can be used to detect anomalies in network communication
behavior, which may indicate security threats such as data exfiltration, command
and control communication, or compromised container behavior.
search: "| mstats avg(k8s.pod.network.io) as io where `kubernetes_metrics` by k8s.cluster.name
k8s.pod.name k8s.node.name direction span=10s | eval service = replace('k8s.pod.name',
\"-\\w{5}$|-[abcdef0-9]{8,10}-\\w{5}$\", \"\") | eval key = 'k8s.cluster.name' +
\":\" + 'service' | stats avg(eval(if(direction=\"transmit\", io,null()))) as avg_outbound_network_io
avg(eval(if(direction=\"receive\", io,null()))) as avg_inbound_network_io stdev(eval(if(direction=\"\
transmit\", io,null()))) as stdev_outbound_network_io stdev(eval(if(direction=\"\
receive\", io,null()))) as stdev_inbound_network_io count latest(_time) as last_seen
by key | outputlookup k8s_container_network_io_baseline"
how_to_implement: "To implement this detection, follow these steps: 1. Deploy the
OpenTelemetry Collector (OTEL) to your Kubernetes cluster. 2. Enable the hostmetrics/process
receiver in the OTEL configuration. 3. Ensure that the process metrics, specifically
Process.cpu.utilization and process.memory.utilization, are enabled. 4. Install
the Splunk Infrastructure Monitoring (SIM) add-on (ref: https://splunkbase.splunk.com/app/5247)
5. Configure the SIM add-on with your Observability Cloud Organization ID and Access
Token. 6. Set up the SIM modular input to ingest Process Metrics. Name this input
\"sim_process_metrics_to_metrics_index\". 7. In the SIM configuration, set the Organization
ID to your Observability Cloud Organization ID. 8. Set the Signal Flow Program to
the following: data('process.threads').publish(label='A'); data('process.cpu.utilization').publish(label='B');
data('process.cpu.time').publish(label='C'); data('process.disk.io').publish(label='D');
data('process.memory.usage').publish(label='E'); data('process.memory.virtual').publish(label='F');
data('process.memory.utilization').publish(label='G'); data('process.cpu.utilization').publish(label='H');
data('process.disk.operations').publish(label='I'); data('process.handles').publish(label='J');
data('process.threads').publish(label='K') 9. Set the Metric Resolution to 10000.
10. Leave all other settings at their default values."
known_false_positives: none
references: []
tags:
@@ -38,15 +46,10 @@ tags:
- Splunk Enterprise
- Splunk Enterprise Security
- Splunk Cloud
required_fields:
- k8s.pod.network.io
- k8s.cluster.name
- k8s.node.name
- k8s.pod.name
security_domain: network
deployment:
scheduling:
cron_schedule: 0 2 * * 0
earliest_time: -30d@d
latest_time: -1d@d
schedule_window: auto
schedule_window: auto
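The `eval service = replace('k8s.pod.name', "-\w{5}$|-[abcdef0-9]{8,10}-\w{5}$", "")` step in the search above strips the ReplicaSet-hash and pod-name suffixes so that every pod of a workload aggregates under one `cluster:service` baseline key. A small Python sketch of that keying, using the same regex (the cluster and pod names below are hypothetical):

```python
import re

# Suffix pattern copied from the baseline search: a bare 5-char pod
# suffix, or a ReplicaSet hash followed by a 5-char pod suffix.
SUFFIX = r"-\w{5}$|-[abcdef0-9]{8,10}-\w{5}$"

def service_key(cluster, pod):
    service = re.sub(SUFFIX, "", pod)
    return f"{cluster}:{service}"

# Deployment pod 'web-7c9f8b4d5-abcde' collapses to service 'web'
print(service_key("prod", "web-7c9f8b4d5-abcde"))  # prod:web
```

Keying on the workload rather than the individual pod is what lets the avg/stdev columns in `k8s_container_network_io_baseline` survive pod restarts and rescheduling.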