## Running
```
docker-compose up -d
# Wait a minute (no, really -- Kafka takes a while to initialize properly;
# see the polling sketch below)
docker exec binoas_transformer_1 ./manage.py elasticsearch put_template
# The put_template step only needs to be run the first time
```
- See this tutorial for more on Kafka.
To run smoothly, binoas needs several cron jobs; the easiest way is to run them from the host.
Documents are stored in Elasticsearch so that digests can be generated; you control the digest periods yourself. To prevent your indices from filling up, run the cleanup job regularly: at least once within the maximum period you use for generating digests. For example, if the longest digest you build covers 24 hours, run cleanup at least once every 24 hours (running it more often is also fine). You can run cleanup as follows:
```
docker exec binoas_transformer_1 ./manage.py elasticsearch cleanup
```
You might want to run this from a separate container, but for that you should adjust the docker compose file.
You can make digests over time periods that you define yourself. Remember that if the frequency of a digest is 3 hours, you should run the digest creation process 8 times a day (i.e., in cron terms, `5 */3 * * *`). It is run as follows:
```
docker exec binoas_transformer_1 ./manage.py digest make --frequency=3h
```
You might want to run this from a separate container, but for that you should adjust the docker compose file.