Building a Splunk SmartStore Deployment on your Laptop with MinIO and Docker

Brendan Dalpe - October 2020

I’ve been working on a project recently with a customer who was migrating to SmartStore. With that comes the need to test the configurations being deployed in the environment. So how do you simulate a SmartStore deployment on your local laptop? Let me introduce you to MinIO.

MinIO is an object storage platform compatible with the Amazon S3 API, which means it can serve as the backend for Splunk SmartStore. I use Docker for my Splunk lab environment, and conveniently, MinIO ships as a ready-made Docker image. I’ve put together a sample docker-compose.yml file that you can use to test.

version: '3'

services:
  splunk:
    container_name: splunk
    hostname: splunk
    image: splunk/splunk:8.0.2.1
    volumes:
      - ./apps:/opt/splunk/etc/apps
    environment:
      SPLUNK_START_ARGS: --accept-license
      SPLUNK_PASSWORD: Sup3rS3cret!
      SPLUNK_SECRET: _Dl8G2FYk0WJv...snip...ucaclaRD-qRhX6k8r9Q
    ports:
      - "8001:8000"
      - "8088:8088"
      - "8089:8089"
      - "9997:9997"

  minio:
    container_name: minio
    hostname: minio
    image: minio/minio:latest
    volumes:
      - ./minio:/data
    ports:
      - "9000:9000"
    command: server /data
    environment:
      MINIO_ACCESS_KEY: 03349469357e858e23cdf342a70846fc
      MINIO_SECRET_KEY: 851a33d4...snip...4f176438bb00c74267e343
      MINIO_OPTS: --compat

Obviously, don’t hardcode your secrets like this in a production deployment. For the purposes of my lab, though, pinning SPLUNK_SECRET lets Splunk decrypt its stored secrets if the Docker container stops and restarts.

You can generate some secrets using a few lines of Python. Here’s an example I used for the config above:

import secrets

print(secrets.token_urlsafe(128))  # SPLUNK_SECRET
print(secrets.token_hex(32))       # MINIO_ACCESS_KEY
print(secrets.token_hex(64))       # MINIO_SECRET_KEY
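
Run that in a Python REPL and paste the generated values into the compose file (and later into indexes.conf).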

Once the stack is up, navigate to http://localhost:9000 and enter your access key and secret key to log in to the MinIO web client. You’ll need to create a bucket (not very creative, but I’m calling mine splunk, matching the remote path in the config below) so Splunk has somewhere to read and write indexed buckets.
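
If you’d rather script the bucket creation than click through the web UI, something like this boto3 snippet should work. Treat it as an optional sketch (boto3 isn’t part of this setup); the endpoint and keys come from the compose file above:

import boto3

# Point an S3 client at the local MinIO container instead of AWS.
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id='03349469357e858e23cdf342a70846fc',
    aws_secret_access_key='851a33d4...snip...4f176438bb00c74267e343',  # substitute your full secret key
)
s3.create_bucket(Bucket='splunk')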

Now, to get Splunk talking to MinIO, we need to add an indexes.conf file with some of the values above. You can put this file in any app you like (mine lives in the ./apps directory mounted into the container), but make sure to copy the values exactly as defined or you’ll run into access issues.

# indexes.conf
[volume:s3]
storageType = remote
path = s3://splunk/
remote.s3.access_key = 03349469357e858e23cdf342a70846fc
remote.s3.secret_key = 851a33d4...snip...4f176438bb00c74267e343
remote.s3.supports_versioning = false
remote.s3.endpoint = http://minio:9000

[smartstore]
homePath = $SPLUNK_DB/$_index_name/db
coldPath = $SPLUNK_DB/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
remotePath = volume:s3/$_index_name
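
Before restarting, you can sanity-check that Splunk can reach the volume from inside the container with the bundled rfs utility, e.g. splunk cmd splunkd rfs -- ls --starts-with volume:s3 (exact syntax varies a bit between Splunk versions).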

To throw some data at our SmartStore backend, we can index a copy of Splunk’s own internal logs via inputs.conf:

# inputs.conf
[monitor:///opt/splunk/var/log/splunk]
index = smartstore

Restart Splunk, then check whether buckets appear in MinIO. It may take a few minutes for objects to show up, because buckets are only uploaded when they roll from hot to warm; you can force a roll through the CLI (or another restart) rather than waiting, as in the sketch below.
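
Here’s a rough sketch (using the requests and boto3 libraries, which aren’t part of this setup) of forcing the roll via Splunk’s roll-hot-buckets REST endpoint and then listing what landed in MinIO. The ports and credentials match the config above:

import boto3
import requests

# Ask Splunk to roll hot buckets for the 'smartstore' index (management port 8089).
requests.post(
    'https://localhost:8089/services/data/indexes/smartstore/roll-hot-buckets',
    auth=('admin', 'Sup3rS3cret!'),
    verify=False,  # the lab container uses Splunk's self-signed certificate
).raise_for_status()

# List the objects Splunk uploaded to the MinIO bucket.
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id='03349469357e858e23cdf342a70846fc',
    aws_secret_access_key='851a33d4...snip...4f176438bb00c74267e343',  # substitute your full secret key
)
for obj in s3.list_objects_v2(Bucket='splunk').get('Contents', []):
    print(obj['Key'], obj['Size'])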
