Splunk Cheat Sheet

Latest revision as of 16:52, 19 May 2024


=Administration=
==Paths==
*All config specs:
ls /opt/splunk/etc/system/README
*Default conf (never edit these files)
ls /opt/splunk/etc/system/default
*Local conf
ls /opt/splunk/etc/system/local
*Merged and running config
var/run/merged/server.conf
==Splunk Configuration==
===server.conf===
*Allow remote login when using the free license
[general]
allowRemoteLogin = always
*Do not show the update information
[applicationsManagement]
allowInternetAccess = false
===inputs.conf===
*Set the sourcetype on the forwarder machines, '''this is for the universal forwarder'''
[monitor://path\log\file.txt*]
sourcetype = FileXYflightapi
disabled = 0
===indexes.conf===
*/opt/splunk/etc/apps/search/local
[security]
coldPath = $SPLUNK_DB/security/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/security/db
maxTotalDataSizeMB = 1024
thawedPath = $SPLUNK_DB/security/thaweddb
===inputs.conf===
*Server
[splunktcp://9997]
queueSize = 2MB
disabled = 0
==btool==
*Check Syntax
./splunk btool check
*List server.conf / general
./splunk btool server list general
*List server.conf / sslConfig
./splunk btool server list sslConfig
*See where changes come from
./splunk btool server list general --debug
*Show the script stanza from inputs.conf
./splunk btool inputs list script
*and see where the change comes from
./splunk btool inputs list script --debug
*Or monitor
./splunk btool inputs list monitor
==DBInspect==
./splunk dispatch "| dbinspect index=myindex" -uri https://127.0.0.1:8089
==Curl Search ==
curl -u admin:changeit -k https://localhost:8089/services/search/jobs/export -d output_mode=csv  -d search="search index=_internal |head 10"
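The same export call can be sketched in Python using only the standard library. Host, port and the admin:changeit credentials are the placeholders from the curl line above; the request is only constructed here, not sent:

```python
import urllib.parse
import urllib.request

# Build the same export-search request the curl command sends.
base = "https://localhost:8089/services/search/jobs/export"
params = {
    "output_mode": "csv",
    "search": "search index=_internal | head 10",
}
body = urllib.parse.urlencode(params).encode("ascii")
req = urllib.request.Request(base, data=body, method="POST")
# A real call would also need basic auth and, like curl -k, an SSL
# context that skips certificate verification, e.g.
#   urllib.request.urlopen(req, context=ssl._create_unverified_context())
```
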
==Server Commands==
*Show running server.conf
./splunk show config server
or inputs.conf
./splunk show config inputs
*Set/Show the server name
./splunk set servername splunk##
./splunk show servername
*Set/Show the default host name
./splunk set default-hostname <name>
./splunk show default-hostname
*Add a test index to the search app
./splunk add index test -app search
*Add a receiving port to the search app
./splunk enable listen 9997  -app search
*Force reload
https://domain.com:8000/debug/refresh
==Config Tracker (Splunk9+)==
index = _configtracker
index=_configtracker server.conf serverName
==Diag==
*Diag selections
    These switches select which categories of information should be
    collected.  The current components available are: index_files,
    index_listing, dispatch, etc, log, searchpeers, consensus,
    conf_replication_summary, suppression_listing, rest, kvstore,
    file_validate, profiler
*Sample
./splunk diag --collect=index_files,etc
==btool==
*List all configurations incl. the location
btool check --debug
*List all input stanzas
splunk btool inputs list
/opt/splunk/bin/splunk btool outputs list --debug
/opt/splunk/bin/splunk btool inputs list --debug
/opt/splunk/bin/splunk btool server list --debug
/opt/splunk/bin/splunk btool props list --debug
/opt/splunk/bin/splunk btool indexes list --debug
*Database Dir: /opt/splunk/var/lib/splunk/
==Cluster==
*Cluster Status, only available from the cluster master
./bin/splunk show cluster-status
./bin/splunk show cluster-status -auth admin:$(</mnt/splunk-secrets/password)
* Including indexes
./bin/splunk show cluster-status --verbose
./bin/splunk list cluster-config
./bin/splunk show cluster-bundle-status
* Maintenance
bin/splunk show maintenance-mode -auth admin:$(</mnt/splunk-secrets/password)
==SH-Cluster==
*SH Cluster Status
bin/splunk show shcluster-status -auth admin:$(</mnt/splunk-secrets/password)
splunk show shcluster-status --verbose
bin/splunk list shcluster-member-info
*Restart the search head cluster
splunk rolling-restart shcluster-members
*Force App Update
splunk apply shcluster-bundle -target <URI>:<management_port> -auth <username>:<password>
Note: To obtain a URI you may use '''splunk show shcluster-status'''
===KV Store===
*Status
bin/splunk show kvstore-status -auth admin:$(</mnt/splunk-secrets/password)
*Clean
bin/splunk stop && bin/splunk clean kvstore --local -f ; bin/splunk start
*Resync
splunk resync kvstore
===SH Bundles===
*Put Bundles into the deployer, in etc/shcluster
*'''Sample'''
[splunk@dep1 splunk]$ mkdir -p etc/shcluster/apps/base-app-demo1/default
[splunk@dep1 splunk]$ echo "#test" > etc/shcluster/apps/base-app-demo1/default/server.conf
*Apply in stages
[splunk@dep1 splunk]$ bin/splunk apply shcluster-bundle -target  https://sh1:8089 -action stage
[splunk@dep1 splunk]$ bin/splunk apply shcluster-bundle -target  https://sh1:8089 -action send
*Status
[splunk@dep1 splunk]$ bin/splunk list shcluster-bundle -member_uri https://sh3:8089
[splunk@dep1 splunk]$ bin/splunk show bundle-replication-status
[splunk@sh1 splunk]$ bin/splunk show bundle-replication-status -auth admin:$(</mnt/splunk-secrets/password)
==Smart Store==
*Check Filesystem:
bin/splunk cmd splunkd rfs -- ls --starts-with volume:remote_store
*Check Logs
cat var/log/splunk/splunkd.log | grep S3
==Rolling restart==
splunk rolling-restart cluster-peers
splunk rolling-restart shcluster-peers
==HEC==
*Sending a test message
curl "https://localhost:8088/services/collector" \
    -H "Authorization: Splunk <Auth Token>" \
    -d '{"event": "Hello, world!", "sourcetype": "manual"}' -k
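The envelope the curl command posts can be sketched in Python; the token and event values are the placeholders from the example above, and the expected reply is the small JSON status document HEC returns on success:

```python
import json

# Shape of the event envelope posted to /services/collector.
headers = {"Authorization": "Splunk <Auth Token>"}  # placeholder token
payload = {"event": "Hello, world!", "sourcetype": "manual"}
body = json.dumps(payload)

# On success HEC answers with a small JSON status document:
expected_ok = {"text": "Success", "code": 0}
```
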
==Debugging Searches==
===General===
*Search Splunkd Log
index=_internal sourcetype=splunkd
*Status / LogLevel
index=_internal sourcetype=splunkd status_code=*
index=_internal sourcetype=splunkd log_level=ERROR
===Apps===
*ConfDeployment
index=_internal component=ConfDeployment data.task=*Apps
index=_internal sourcetype=splunkd_conf | top data.task
*Checking SHC Bundle Deployment Status
index=_internal component=ConfDeployment data.task=*Apps
| table host data.source data.source data.target_label data.task data.status
*Filter for SRC
index=_internal sourcetype=splunkd_conf data.task=createDeployableApps | rex "\"src\":\"(?<src>[^\"]+)\"" | top _time,src
*Find missing baseline
index=_internal sourcetype=splunkd_conf
STOP_ON_MISSING_LOCAL_BASELINE | timechart count by host
*Overall configuration behaviour
index=_internal sourcetype=splunkd_conf pullFrom
data.to_repo!=*skipping* | timechart count by data.to_repo
*Evidence of captain switching
index=_internal sourcetype=splunkd_conf pullFrom
data.from_repo!=*skipping* | timechart count by data.from_repo
*Find the destructive resync events
index=_internal sourcetype=splunkd_conf installSnapshot
| timechart count by host
*App Creation
index=_internal sourcetype=splunkd "Detected app creation"
===Mongod startup===
index="_internal" MongoDB starting | top host
index="_internal" "MongoDB starting" source="/opt/splunk/var/log/splunk/mongod.log"
===S3===
*SmartStore
index=_internal sourcetype=splunkd S3Client
=Searching=
==Timechart==
 M=CB PCC=SYDXXX | timechart max(DTM) as CRSMessages span=30s




==Sparkline==
 M=CB  | stats sparkline max(DTM) as Messages by PCC


==Lookups==
*List lookups via REST
| rest /services/data/lookup-table-files
Lookups are used to normalize data; currently there are lookups defined for:
 | inputlookup airports
 | inputlookup airlines
 | inputlookup errors
 | inputlookup pcc


*Sample Lookup Query, show the top bookings and the carrier name
 M=BOI earliest=-1d  latest=now   | stats count(AIR) as Amount by AIR | sort Amount desc, limit=20 |  lookup airlines Code as AIR OUTPUT Hint 
 | rename Hint as Carrier | fields Carrier, Amount
*Sample Lookup Query, show the top PCCs and the customer name
 M=FAPI CMD=GetFares | top PCC showperc=f |  lookup pcc PCC as PCC OUTPUT Owner,CRSName | rename Owner as Customer | fields Customer, count
==Extract Json==
*Sample
info  2023-05-12 01:14:01: MQTT publish: topic 'zigbee2mqtt/0xa4c138acf7922221', payload '{"battery":100,"illuminance":11,"keep_time":"10","linkquality":14,"occupancy":false,"sensitivity":"high"}'
*Search
sourcetype=zigbee2mqtt zigbee2mqtt/0xa4c1381e3ed015b7
| rex field=_raw "(?<json_field>\{.*\})"
| spath input=json_field
| table _time occupancy battery tamper
*Sort by time and output the most recent record
sourcetype=zigbee2mqtt zigbee2mqtt/0xa4c1381e3ed015b7
| rex field=_raw "(?<json_field>\{.*\})"
| spath input=json_field
| sort - _time
| head 1
| table _time occupancy battery tamper
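The rex + spath steps map to a plain regex capture plus json.loads; the log line below is the sample from this section:

```python
import json
import re

# Pull the JSON payload out of the raw MQTT log line, then parse it.
raw = ("info  2023-05-12 01:14:01: MQTT publish: topic "
       "'zigbee2mqtt/0xa4c138acf7922221', payload "
       "'{\"battery\":100,\"illuminance\":11,\"keep_time\":\"10\","
       "\"linkquality\":14,\"occupancy\":false,\"sensitivity\":\"high\"}'")

m = re.search(r"(\{.*\})", raw)   # same capture as rex "(?<json_field>\{.*\})"
fields = json.loads(m.group(1))   # spath equivalent
row = {k: fields.get(k) for k in ("occupancy", "battery")}
```
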


==Advanced Search Samples==
===Regex Samples===


String to search:
 Feb 13 14:07:02 10.0.3.30 Feb 13 14:07:02 mail mimedefang.pl[10780]: MDLOG,s1DD71da017590,mail_in,,,<support@domain.com>,<support@domain.com>,Warning Message


Regex to extract the message id:
 explorer mimedefang.pl | rex field=_raw "MDLOG\,(?<MSGID>.*),mail*" | top 100 MSGID,_time | fields _time, MSGID
String to search:
 Feb 13 13:59:57 10.0.3.6 Feb 13 13:59:57 neptun vsftpd[8973]: [keytravel] FTP response: Client "194.74.154.185", "226 Transfer complete."
Regex to extract the login:
 host="10.0.3.6" ": [*]" FTP | rex field=_raw "(?<Login>\s{1}\[.*\])" | top Login
String to search:
 Mar 5 15:07:10 10.0.3.30 Mar 5 15:07:10 mail sm-mta[15042]: s25E727n015042: Milter add: header: X-Spam-Status: Yes, score=21.8 required=5.0 tests=BAYES_99,GEO_MAIL_SEARCH,\n \tHELO_DYNAMIC_IPADDR,HTML_MESSAGE,MIME_HTML_ONLY,RCVD_IN_BL_SPAMCOP_NET,\n\tRCVD_IN_BRBL_LASTEXT,RCVD_IN_PBL,RCVD_IN_PSBL,RCVD_IN_RP_RNBL, \n\tRCVD_IN_SORBS_DUL,RCVD_IN_XBL,RDNS_DYNAMIC,SPF_NEUTRAL,URIBL_DBL_SPAM,\n\tURIBL_WS_SURBL
Regex to extract the message id:
 host="10.0.3.30"  "X-Spam-Status: Yes" |  rex field=_raw "]: (?<MSGID>.*): Milter" | top MSGID
String to search:
 M=FEEDEDF OAD=142 TOTFLIGHTFILES=71 TOTALOMAFILES=71 TOTNBRFLIGHTS=4406 TOTNBRALOMAS=6066 TOTKEYS=10614 SIZETOT=8839080 DURATION=13 TTL=432000 INFO=0 Host=VM-XC01 Job=hhh_edf_NL_2018-11-19-1349-1-90-RT.csv   Code=HHH-FR-01
Regex to extract the date range (1-90):
 M=FEEDEDF | rex field=_raw "Job=hhh_edf_\w+-\d+-\d+-\d+-(?<STR>.*\d*-\d*)-RT"  |  top STR
Regex to expand a date into day, month and year, sample:
 DATE=2020-01-01 ....
Regex:
 rex field=DATE "(?<Year>[^\-]+)\-(?<Month>[^\-]+)\-(?<Day>[^\-]+)"
Then aggregate by:
 stats sum(...) as Something  by Month Year
Sample:
 Oct 31 12:14:39 192.168.100.1 %ASA-4-106023: Deny tcp src outside:185.176.27.178/46086 dst inside:192.168.100.237/12834 by access-group "static_outside" [0x0, 0x0]
Regex:
 host="192.168.100.1" | rex field=_raw "Deny tcp src outside:(?<SRC>[^\/]+).*dst inside:(?<DST>[^\/]+)\/(?<PORT>[^\s+]+)" 
 |  top SRC,DST,PORT
Sample:
 Jun  3 15:29:32 192.168.100.1 %ASA-6-302013: Built inbound TCP connection 2154199512 for outside:212.19.51.190/64499 (212.19.51.190/64499) to inside:192.168.100.240/443 (146.0.228.21/443)
Regex to get a table of SRC,DST and Port:
 host="192.168.100.1" Built inbound TCP connection *  | rex field=_raw "for outside:(?<SRC>[^\/]+)" | rex field=_raw "to inside:(?<DST>[^\/]+)\/(?<PORT>[^\s+]+)" | top 500 SRC,DST,PORT
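The two rex captures can be checked offline in Python; the ASA line below is a representative sample of the "Built inbound TCP connection" events this search matches:

```python
import re

# Apply the same two captures as the rex commands above.
line = ("Jun  3 15:29:32 192.168.100.1 %ASA-6-302013: Built inbound TCP "
        "connection 2154199512 for outside:212.19.51.190/64499 "
        "(212.19.51.190/64499) to inside:192.168.100.240/443 "
        "(146.0.228.21/443)")

src = re.search(r"for outside:([^/]+)", line).group(1)
m = re.search(r"to inside:([^/]+)/([^\s]+)", line)
dst, port = m.group(1), m.group(2)
```
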
Regex to get a date, sample:
 2023-12-17 18:00:00
| rex "^(?<Year>[^\-]+)\-(?<Month>[^\-]+)\-(?<Day>[^ ]+)\s+(?<Hour>[^\:]+):(?<Minute>[^\:]+):(?<Second>[^ ]+)"
| eval dateref=Year + "-" + Month + "-" + Day + " " + Hour + ":" + Minute + ":" + Second
| sort - dateref
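The rex groups and the dateref eval correspond to this Python sketch (the second timestamp is a made-up extra event so the sort has something to order):

```python
import re

# Same field extraction as the rex above, then the eval that rebuilds
# a sortable "dateref" string, then sort - dateref (descending).
pattern = re.compile(
    r"^(?P<Year>[^-]+)-(?P<Month>[^-]+)-(?P<Day>[^ ]+)\s+"
    r"(?P<Hour>[^:]+):(?P<Minute>[^:]+):(?P<Second>[^ ]+)")

events = ["2023-12-17 18:00:00", "2023-12-18 06:30:15"]
refs = []
for ev in events:
    g = pattern.match(ev).groupdict()
    refs.append("{Year}-{Month}-{Day} {Hour}:{Minute}:{Second}".format(**g))
refs.sort(reverse=True)   # sort - dateref
```
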


===Lookahead Sample===
Record(s) to look ahead and group:
 Mar 5 15:34:20 10.0.3.30 Mar 5 15:34:20 spamd child[6707]: GSCORE=0 COU=ES ASN=AS12357 IP=77.230.132.146 MFROM=ibe@elegancejewelrydesigns.com MTO=ibe@hitchhiker.com  MSGID=s25EYHtn016074 HELO=static-146-132-230-77.ipcom.comunitel.net IPN=1306952850 LAT=40.0000 LON=-4.0000 CTY=0
 Mar 5 15:34:24 10.0.3.30 Mar 5 15:34:24 mail sm-mta[16074]: s25EYHtn016074: Milter add: header: X-Spam-Status: Yes, score=23.4 required=3.0 tests=BAYES_99,CK_HELO_GENERIC,\n\tGEO_MAIL_SEARCH,HELO_DYNAMIC_IPADDR,HTML_MESSAGE,MIME_HTML_ONLY,\n\tRAZOR2_CF_RANGE_51_100,RAZOR2_CF_RANGE_E8_51_100,RAZOR2_CHECK,\n\tRCVD_IN_BL_SPAMCOP_NET,RCVD_IN_BRBL_LASTEXT,RCVD_IN_PSBL,RCVD_IN_SORBS_WEB,\n\tRCVD_IN_XBL,SPF_NEUTRAL,URIBL_BLACK,URIBL_DBL_SPAM,URIBL_JP_SURBL,\n\tURIBL_WS_SURBL autolearn=spam version=3.3.1
Query:
 earliest=-1d  latest=now host="10.0.3.30"  "X-Spam-Status: Yes" OR GSCORE |  rex field=_raw "]: (?<MSGID>.*): Milter" |  '''transaction MSGID''' | search "X-Spam-Status: Yes" | top MFROM
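transaction merges events that share a MSGID into one record, so the later search sees fields from both log lines. A rough Python model of that grouping, with shortened stand-in log lines (not real mail events):

```python
from collections import Counter, defaultdict
import re

# Stand-in events: spamd lines carry MFROM + MSGID, sm-mta lines carry
# the spam verdict keyed by the same message id.
events = [
    "spamd: GSCORE=0 MFROM=a@example.com MSGID=s25EYHtn016074",
    "sm-mta[16074]: s25EYHtn016074: Milter add: header: X-Spam-Status: Yes",
    "spamd: GSCORE=0 MFROM=b@example.com MSGID=s25FFFtn016099",
]

# "transaction MSGID": collect events under their shared message id.
groups = defaultdict(list)
for ev in events:
    m = re.search(r"MSGID=(\S+)", ev) or re.search(r"\]: (\S+): Milter", ev)
    if m:
        groups[m.group(1)].append(ev)

# 'search "X-Spam-Status: Yes" | top MFROM' on the merged transactions.
top = Counter()
for msgid, evs in groups.items():
    text = " ".join(evs)
    if "X-Spam-Status: Yes" in text:
        for sender in re.findall(r"MFROM=(\S+)", text):
            top[sender] += 1
```
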
===Look to Book Chart===
Show the Look to Book ratio:
*Count PCC, GetFares, BookFare
*Calculate the ratio
*Append a trailing identifier, :1, or :0 if no bookings were made
*Lookup PCC to owner name
*Select output fields
*Leave the total as the last row
M=FAPI (CMD=GetFares OR (CMD=BookFare AND STAT>=0)) STAT>=0
| chart count by PCC,CMD
| sort BookFare,GetFares desc
| eval L2B=round(GetFares/BookFare)
| eval STATBOOK=if(BookFare>0,"1","0")
| eval STATGF=if(L2B>0,L2B,GetFares) 
| eval LookToBook=STATGF . ":" . STATBOOK 
| lookup pcc PCC as PCC OUTPUT Owner 
| fields Owner,PCC, GetFares, BookFare, LookToBook
| addtotals col=t row=f labelfield=PCC
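The eval chain reduces to this arithmetic; the counts are made-up illustration values, and the zero-bookings branch mirrors how the SPL falls back to GetFares when the division yields nothing:

```python
# Model of the L2B / STATBOOK / STATGF / LookToBook evals above.
def look_to_book(get_fares: int, book_fare: int) -> str:
    l2b = round(get_fares / book_fare) if book_fare else 0
    statbook = "1" if book_fare > 0 else "0"   # any bookings at all?
    statgf = l2b if l2b > 0 else get_fares     # fall back to raw lookups
    return f"{statgf}:{statbook}"
```
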
===Last 50 Bookings===
Show the recent bookings:
*Use top (no counting)
*Append OK or Error, if error then lookup the error code
*Lookup pcc, owner, carrier, city codes
*Rename and format fields
M=BOI earliest=-1d  latest=now 
| top 50 _time,PCC,CRS,AIR,DEP,ARR,STAT,PAS,SEG, NET, TAX, CUR, DIST
| lookup errors Code as STAT OUTPUT Description 
| eval STATX=if(STAT>=0,"OK", Description) 
| eval field-description=STAT. " = " . STATX 
| lookup airlines Code as AIR OUTPUT Hint 
| lookup pcc PCC as PCC OUTPUT Owner,CRSName 
| lookup airports IATA as DEP OUTPUT CityName as From 
| lookup airports IATA as ARR OUTPUT CityName as To 
| rename Hint AS Carrier 
| rename Owner AS Customer 
| rename Carrier as CarrierName 
| rename field-description as STATUS 
| fields _time, CRS,Customer, AIR, CarrierName, From, To, PAS, SEG, NET, TAX, CUR, DIST, STATUS
===Revenue===
Show the revenue
*Use stats for counting
*Use the new fields EURNET and EURTAX as the single currency source (available since FEB2014)
M=BOI STAT>=0 TST=0 
| stats count(_time) as Bookings, sum(EURNET) as TotalFareEuro,sum(EURTAX) as TotalTAXEuro,sum(PAS) as Passenger by CRS, PCC 
| eval AverageFarePerPassenger=round(TotalFareEuro/Passenger) 
| eval AveragePassengerPerBooking=round(Passenger/Bookings) 
| lookup pcc PCC as PCC OUTPUT Owner,CRSName 
| fields CRSName, Owner, Bookings, TotalFareEuro,TotalTAXEuro,Passenger,AverageFarePerPassenger,AveragePassengerPerBooking 
| addtotals col=t row=f labelfield=CRSName
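The two averages come out of plain division and rounding; toy numbers for illustration (note SPL's round() may treat halves differently than Python's banker's rounding):

```python
# The two eval lines above as arithmetic, with made-up totals.
bookings, passengers = 40, 90
total_fare_euro = 11700.0

avg_fare_per_passenger = round(total_fare_euro / passengers)
avg_passengers_per_booking = round(passengers / bookings)
```
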


===Transaction===
Use transaction and table to map FAPI/WFE data. Transaction sample:
 TIME>20  | transaction TID | rename TIME as ResponseTime | table _time,TID,host,ResponseTime,DEP,ARR,M,SC,SS,PCC | search PCC=XXX
===WebFare Searches===
 
Search for the slowest carrier and list the errors (if any)
 
M=WFE SS<0 | top SC,SS
 
Search for W6 if the Agent Plugin is used:
 
M=WFE WAGTD="*W6*"
 
List all errors for U2 with Agent Plugin used.
 
M=WFE WAGTD="*U2*" SC=U2 SS<0 | top SS
 
List all errors for U2 with '''NO''' Agent Plugin used.
 
M=WFE NOT WAGTD="*U2*" SC=U2 SS<0 | top SS
 
List all U2 traffic with Agent Plugin used.
 
M=WFE WAGTD="*U2*"
 
 




===Append two searches===
*Use appendcols
 M=FAPI FT=1 STAT=0 USR=USR.PROD | stats count(CMD) as XXX | '''appendcols''' [search M=FAPI FT=1 STAT=0 USR NOT XXXX.PROD | stats count(CMD) as OTHER]
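appendcols pastes the subsearch result next to the main result column-wise, not as extra rows. In miniature, with made-up counts:

```python
# Main search and subsearch each yield one row with one aggregate.
main_result = [{"XXX": 1200}]   # stats count(CMD) as XXX
sub_result = [{"OTHER": 340}]   # stats count(CMD) as OTHER

# appendcols: merge row i of the subsearch into row i of the main result.
combined = [{**m, **s} for m, s in zip(main_result, sub_result)]
```
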
 
===Event count===
just a draft:
M=WFE  | stats list(PSTAT) as PSTAT count(AIR) as total by AIR | where mvindex(PSTAT,1)="0" or mvindex(PSTAT,1)="1"
M=WFE  | stats list(PSTAT) as PSTAT count(AIR) as total by AIR
M=WFE 0Y | stats values(PSTAT) as PSTAT count(AIR) as total by AIR
 
 


=HEC=
[splunk@splunk splunk]$ curl -k http://127.0.0.1:8088/services/collector/event -H 'Authorization: Splunk <token>' \
-d"{\"event\": \"test message\", \"index\":\"main\", \"host\":\"$HOSTNAME\"}"
=Mail=
| makeresults 1 | eval Message="This is a test"  | sendemail to="mail@domain.com" sendresults=true inline=true subject="search result test"


=Bucket Export/Import=
Demo Export of an index called sh_azure
*Help
[splunk@splunk splunk]$ bin/splunk help cmd
*Roll index from hot to warm
[splunk@splunk splunk]$ bin/splunk _internal call /data/indexes/sh_azure/roll-hot-buckets
*Backup
[splunk@splunk splunk]$ bin/splunk cmd exporttool /opt/splunk/var/lib/splunk/sh_azure/db/db_1702096212_1701977673_0 export.csv  -et 1701977673 -lt 1702096212 -csv


=Stop Indexing=
bin/splunk set minfreemb 200000
*OR
**Disable the input
*OR
**Block the inputs port


=Stats=
==User Stats==
*Source
index="_internal" sourcetype=splunk_web_access
*Total number of users:
index="_internal" sourcetype=splunk_web_access | timechart span=1d count(user) as total_users
*Distinct number of users:
index="_internal" sourcetype=splunk_web_access | timechart span=1d dc(user) as distinct_users
*Count per user:
index="_internal" sourcetype=splunk_web_access | timechart span=1d count as count_user by user
==Message Stats==
*Events per index (last 24 hours)
index=* earliest=-24h@h latest=now | stats count by index
*Volume per index
index=* | eval size=len(_raw) | eval GB=(size/1024/1024/1024) | stats sum(GB) by index
*Volume per day
index=* | eval size=len(_raw) | eval GB=(size/1024/1024/1024) | timechart sum(GB) span=1d
==Searches==
*Amount of searches per day
index=_audit action="search" search="*" NOT user="splunk-system-user" savedsearch_name="" NOT search="\'|history*" NOT search="\'typeahead*" | timechart count


 
=Links=
*http://docs.splunk.com/Documentation/Splunk/6.0/SearchReference/CommonEvalFunctions
*KV Store: renew cert
*http://wiki.intern/index.php/Renew_internal_Splunk_License
*Distributed search
*https://infohub.delltechnologies.com/l/splunk-enterprise-on-dell-powerflex-rack-using-powerscale-1/splunk-distributed-clustered-deployment-1


[[Category:Statistic]]
