Query optimisation #107
Comments
Hi Sulka Haro! I already have a caching layer in place, but I will check whether it is working as expected. What would be great is if cgm-remote-monitor supported the notion of an event stream: everything that has happened, retrievable with a single timestamp-based query.
Based on the request logs, it looks like the caching is not working as expected; at the least, I'm seeing the four basic queries that load a lot of data being repeated every few minutes. The V3 API has better support for delta loading; have you checked that?
Yes, I had a look. But you still have different endpoints there to collect all the events, right?
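As a hedged illustration of the delta loading being discussed, the sketch below shows what a per-collection incremental fetch against the v3 API could look like. The history-style endpoint path, BASE_URL, and TOKEN are assumptions for illustration, not code from cgm-remote-monitor; check the v3 API docs for the exact paths.

```ts
// Hedged sketch, not cgm-remote-monitor code: delta loading against the
// Nightscout v3 API, one call per collection.
const BASE_URL = 'https://my-nightscout.example.com'; // placeholder
const TOKEN = 'my-read-token'; // placeholder access token

type Collection = 'entries' | 'treatments' | 'devicestatus';

// Fetch every document in one collection modified after `since` (ms epoch).
// The /history/{timestamp} path is an assumption based on the v3 docs.
async function fetchDelta(collection: Collection, since: number) {
  const res = await fetch(
    `${BASE_URL}/api/v3/${collection}/history/${since}?token=${TOKEN}`,
    { headers: { Accept: 'application/json' } },
  );
  if (!res.ok) throw new Error(`v3 ${collection} history failed: ${res.status}`);
  return res.json();
}
```

The point of the sketch is that each collection still needs its own call, which is what the comment above is asking about.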
At least I can't reproduce this here. The query is only fired every 5 minutes. I have the maximum count set to 400, but I already limit the time range to the last delta. What could cause more data to be returned is the user clicking on the statistics tab: that can return 288 entries (one reading every 5 minutes over 24 hours) for each day the user has selected to appear in the statistics, so typically 5 queries returning 288 entries each. But normally the main tab is the one that is open all the time, so that shouldn't be the problem either...
Right, so it looks like you have a bug in the timer where that query is sometimes issued multiple times per second: in our log, around half the time I see the query sent twice instead of once. For the periodic poll that fetches the latest updates, you can reduce the load a lot by dropping the date parameters altogether and setting the count to a lower value. Asking for 288 values should guarantee that once the next release goes out, the query will not hit the database at all.
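To make the suggestion concrete, here is a minimal sketch in TypeScript of the date-less poll described above; it is not the app's actual code, and BASE_URL is a placeholder. A count of 288 corresponds to roughly 24 hours of 5-minute readings.

```ts
// Hedged sketch: poll only the newest entries, with no date filters,
// so the query can be answered from the planned in-memory cache
// instead of hitting the database.
const BASE_URL = 'https://my-nightscout.example.com'; // placeholder

async function pollLatestEntries(count = 288): Promise<unknown[]> {
  const res = await fetch(`${BASE_URL}/api/v1/entries.json?count=${count}`, {
    headers: { Accept: 'application/json' },
  });
  if (!res.ok) throw new Error(`Nightscout returned ${res.status}`);
  return res.json(); // newest readings first
}
```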
The timer has quite a fast retry, so if the request is too slow it can retry too early. I could improve that. Dropping the date parameters would be a redesign. If caching of the request is the main improvement, then a better timer should solve it, right? I'm running everything on a small Raspberry Pi together with other Docker containers, and it's very responsive, with the queries returning practically immediately, so I'm not sure I understood the main problem.
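As one possible shape for the timer fix being discussed, here is a hedged sketch of a poll loop that cannot issue overlapping requests. It reuses the hypothetical pollLatestEntries helper from the previous sketch, and the 5-minute interval is an assumption matching the CGM cadence mentioned in this thread.

```ts
// Hedged sketch, not the app's actual timer: each request must finish
// (or fail) before the next one is scheduled, so a slow response delays
// the next poll instead of triggering an early retry.
const POLL_INTERVAL_MS = 5 * 60 * 1000;

async function pollForever(): Promise<never> {
  for (;;) {
    const started = Date.now();
    try {
      await pollLatestEntries(); // hypothetical helper from the sketch above
    } catch (err) {
      console.error('Poll failed; retrying on the next tick', err);
    }
    // Sleep only for whatever is left of the interval.
    const remaining = Math.max(0, POLL_INTERVAL_MS - (Date.now() - started));
    await new Promise((resolve) => setTimeout(resolve, remaining));
  }
}
```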
Hi! Looks like the app is querying for CGM data using queries such as:
/api/v1/entries?find%5Bdate%5D%5B$gt%5D=1599277060032.0&find%5Bdate%5D%5B$lte%5D=1599339600000.0&count=400
and issuing them fairly frequently. I'm getting reports that Atlas users are hitting database query limits when using Nightscout apps that query the database frequently. If feasible, please consider refactoring the app so that instead of reloading historic data very frequently, most queries only poll for the latest readings. Querying /api/v1/entries.json?count=144 would give you 12 hours of data for most NS users and should be enough to keep the local data up to date. We're working on adding in-memory caching to NS for the next release; that query will then not hit the database at all, while queries that use date filters will continue to load the database.
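One way to act on this suggestion, sketched here with an assumed in-memory store rather than the app's real data structures, is to merge each count-only poll into locally held entries keyed by their date field, so already-known readings are simply overwritten instead of being re-requested by date range. BASE_URL is the same placeholder used in the earlier sketches.

```ts
// Hedged sketch: keep local data fresh from count-only polls.
// The Entry shape is reduced to the fields relevant here; real
// entries carry more fields.
interface Entry {
  date: number; // ms since epoch
  sgv?: number; // sensor glucose value
}

const localEntries = new Map<number, Entry>(); // keyed by date

async function refreshLocalEntries(): Promise<void> {
  // count=144 is roughly 12 hours of 5-minute readings for most users.
  const res = await fetch(`${BASE_URL}/api/v1/entries.json?count=144`, {
    headers: { Accept: 'application/json' },
  });
  if (!res.ok) throw new Error(`Nightscout returned ${res.status}`);
  const latest = (await res.json()) as Entry[];
  for (const entry of latest) {
    localEntries.set(entry.date, entry); // insert or overwrite, no duplicates
  }
}
```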