Slack hook #14

Open: wants to merge 8 commits into `master`.
2 changes: 1 addition & 1 deletion Dockerfile
```diff
@@ -2,7 +2,7 @@ FROM alpine:3.7@sha256:8c03bb07a531c53ad7d0f6e7041b64d81f99c6e493cb39abba56d956b

 MAINTAINER Leonardo Gatica <[email protected]>

-RUN apk add --no-cache mongodb-tools py2-pip && \
+RUN apk add --no-cache mongodb-tools curl py2-pip && \
     pip install pymongo awscli && \
     mkdir /backup
```
29 changes: 25 additions & 4 deletions README.md
```bash
docker run -d --name mongodump \
  ...
  lgatica/mongodump-s3
```

### Immediate backup

```bash
docker run -d --name mongodump \
  ...
lgatica/mongodump-s3
```

### Slack Hook

```bash
docker run -d --name mongodump \
  -e "MONGO_URI=mongodb://user:pass@host:port/dbname" \
  -e "AWS_ACCESS_KEY_ID=your_aws_access_key" \
  -e "AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key" \
  -e "AWS_DEFAULT_REGION=us-west-1" \
  -e "S3_BUCKET=your_aws_bucket" \
  -e "SLACK_URI=your_slack_uri" \
  lgatica/mongodump-s3
```
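Under the hood the notification is a plain Slack Incoming Webhook POST made with `curl` (see `backup.sh`), so you can verify a webhook by hand before wiring it into the container. The URL and message below are placeholders, not real values:

```shell
# Send a test message to a Slack Incoming Webhook (placeholder URL).
curl -X POST --data-urlencode \
  'payload={"text": "backup.gz has been backed up at s3://your_bucket/mongodb/backup.gz"}' \
  "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXX"
```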


## IAM Policy

You need to add a user with the following policies. Be sure to replace `your_bucket` with your bucket's name.

```xml
{
  ...
}
```
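The policy statements themselves are collapsed in the view above. As a rough sketch only (an assumption, not the repository's exact policy), a minimal IAM policy that permits the script's `aws s3 cp` upload would grant `s3:PutObject` on the bucket's objects:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::your_bucket/*"
    }
  ]
}
```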

## Extra environment

- `S3_PATH` - Default value is `mongodb`. Example: `s3://your_bucket/mongodb`.
- `MONGO_COMPLETE` - Not set by default. If set, a full backup of the whole MongoDB instance is made.
- `MAX_BACKUPS` - Not set by default. If set, only the last *n* backups are kept in `/backup`.
- `BACKUP_NAME` - Default is `$(date -u +%Y-%m-%d_%H-%M-%S)_UTC.gz`. If set, this is the name of the backup file; useful when using S3 versioning. (Remember to include the `.gz` extension in your filename.)
- `EXTRA_OPTIONS` - Not set by default. Extra command-line options passed to `mongodump`.
- `SLACK_URI` - Not set by default. If set, a notification is sent via `curl` to this Slack Incoming Webhook.
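For instance, retention and a fixed backup name can be combined in a single run; all the values below are placeholders:

```shell
docker run -d --name mongodump \
  -e "MONGO_URI=mongodb://user:pass@host:port/dbname" \
  -e "AWS_ACCESS_KEY_ID=your_aws_access_key" \
  -e "AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key" \
  -e "S3_BUCKET=your_aws_bucket" \
  -e "S3_PATH=mongodb/production" \
  -e "MAX_BACKUPS=7" \
  -e "BACKUP_NAME=nightly.gz" \
  lgatica/mongodump-s3
```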

## Troubleshoot

1. If you get a SASL authentication failure, add `--authenticationDatabase=admin` to `EXTRA_OPTIONS`.
2. If you get "Failed: error writing data for collection ... Unrecognized field 'snapshot'", add `--forceTableScan` to `EXTRA_OPTIONS`.
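Both workarounds go through the `EXTRA_OPTIONS` variable and can be combined; the connection string below is a placeholder:

```shell
docker run -d --name mongodump \
  -e "MONGO_URI=mongodb://user:pass@host:port/dbname" \
  -e "EXTRA_OPTIONS=--authenticationDatabase=admin --forceTableScan" \
  lgatica/mongodump-s3
```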

## License

50 changes: 41 additions & 9 deletions backup.sh
```sh
#!/usr/bin/env sh

OPTIONS=$(python /usr/local/bin/mongouri)
OPTIONS="$OPTIONS $EXTRA_OPTIONS"
DEFAULT_BACKUP_NAME="$(date -u +%Y-%m-%d_%H-%M-%S)_UTC.gz"
BACKUP_NAME=${BACKUP_NAME:-$DEFAULT_BACKUP_NAME}
LOCAL_BACKUP_ROOT_FOLDER="/backup"
LOCAL_DUMP_LOCATION="$LOCAL_BACKUP_ROOT_FOLDER/dump"

notify() {
  if [ -n "${SLACK_URI}" ]; then
    message="$BACKUP_NAME has been backed up at s3://${S3_BUCKET}/${S3_PATH}/${BACKUP_NAME}"
    if [ "${1}" != "0" ]; then
      message="Unable to back up $BACKUP_NAME at s3://${S3_BUCKET}/${S3_PATH}/${BACKUP_NAME}. See logs."
    fi
    curl -X POST --data-urlencode "payload={\"text\": \"$message\"}" "$SLACK_URI"
  fi
}

# Run backup
mongodump ${OPTIONS} -o "${LOCAL_DUMP_LOCATION}"
status=$?
if [ "${status}" -ne 0 ]; then
  echo "ERROR: mongodump failed."
  notify 1
  exit 1
fi

# Compress backup; -C keeps archive paths relative instead of embedding /backup
tar -C "${LOCAL_BACKUP_ROOT_FOLDER}" -cvzf "${LOCAL_BACKUP_ROOT_FOLDER}/${BACKUP_NAME}" dump

# Upload backup
aws s3 cp "${LOCAL_BACKUP_ROOT_FOLDER}/${BACKUP_NAME}" "s3://${S3_BUCKET}/${S3_PATH}/${BACKUP_NAME}"
status=$?
if [ "${status}" -ne 0 ]; then
  echo "ERROR: AWS upload failed."
  notify 1
  exit 1
fi

notify 0

# Delete temp files
rm -rf "${LOCAL_DUMP_LOCATION}"

# Keep only the last MAX_BACKUPS archives, or clear the folder if no retention is set
if [ -n "${MAX_BACKUPS}" ]; then
  while [ "$(ls -1 "${LOCAL_BACKUP_ROOT_FOLDER}" | wc -l)" -gt "${MAX_BACKUPS}" ]; do
    BACKUP_TO_BE_DELETED=$(ls -1 "${LOCAL_BACKUP_ROOT_FOLDER}" | sort | head -n 1)
    rm -rf "${LOCAL_BACKUP_ROOT_FOLDER}/${BACKUP_TO_BE_DELETED}"
  done
else
  rm -rf "${LOCAL_BACKUP_ROOT_FOLDER}"/*
fi
```
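The `MAX_BACKUPS` retention loop can be exercised locally without Docker or S3. This is a minimal sketch using a throwaway directory and made-up file names in place of `/backup`; timestamped names sort lexicographically, so the oldest backup is always pruned first:

```shell
# Simulate the MAX_BACKUPS pruning loop from backup.sh in a temp directory.
dir=$(mktemp -d)
touch "$dir/2024-01-01.gz" "$dir/2024-01-02.gz" "$dir/2024-01-03.gz"
MAX_BACKUPS=2
while [ "$(ls -1 "$dir" | wc -l)" -gt "$MAX_BACKUPS" ]; do
  # Oldest first: lexicographic sort of timestamped names
  oldest=$(ls -1 "$dir" | sort | head -n 1)
  rm -rf "$dir/${oldest}"
done
ls -1 "$dir"   # 2024-01-01.gz has been pruned; two newest remain
```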