Replies: 9 comments 2 replies
-
Maybe consider feeding the streams into something like Redis?
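A minimal sketch of that idea, assuming redis-py and a local Redis server; the channel name `"trades"` and the handler `publish_to_redis` are placeholders, and the exact ubwa import path may differ by version:

```python
import json
import redis
from unicorn_binance_websocket_api import BinanceWebSocketApiManager  # import path may vary by version

r = redis.Redis()  # assumes Redis on localhost:6379

def publish_to_redis(stream_data):
    # Forward every received record to a Redis pub/sub channel so any number
    # of consumers can subscribe without touching the websocket connection.
    payload = stream_data if isinstance(stream_data, (str, bytes)) else json.dumps(stream_data)
    r.publish("trades", payload)

ubwa = BinanceWebSocketApiManager(exchange="binance.com", process_stream_data=publish_to_redis)
ubwa.create_stream(channels="trade", markets="btcusdt")
```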
-
Thank you for the suggestion, though I'd like to keep the code free of server dependencies. If there were a way to change the callback method for process_stream_data without having to create a new one, that might be the right approach, if that's possible?
-
I find the idea with Redis super for a problem I have :) But about your problem: as far as I understand, one of the two functions may always access all data and the second only if desired or needed. If the first is always allowed to access and process all data sequentially, then it can additionally check whether the switch is True or False, and if True, simply write the received value into a FIFO queue built with deque(), from which the second function can then read it. That would be about 5 lines of code.
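A minimal sketch of that deque() idea; the names `tee_enabled`, `tee_queue`, `handle_normally` and `do_calculations` are placeholders for illustration, not ubwa API:

```python
import time
from collections import deque

tee_enabled = False   # the "switch"
tee_queue = deque()   # FIFO the second function reads from

def process_stream_data(stream_data):
    # First function: always sees every record.
    handle_normally(stream_data)        # placeholder for the regular ingestion
    if tee_enabled:
        tee_queue.append(stream_data)   # additionally copy it for the side task

def drain_tee(max_events):
    # Second function: consumes the copies until it has seen max_events records.
    processed = 0
    while processed < max_events:
        if tee_queue:
            do_calculations(tee_queue.popleft())  # placeholder for the temporary calculations
            processed += 1
        else:
            time.sleep(0.01)  # nothing queued yet, avoid busy-waiting
```

Since deque.append() and popleft() are thread-safe in CPython, the second function could run in its own thread while the callback keeps feeding it.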
-
I never tried it and I prefer using stream_buffer instead of the callback function, but I think it should be possible to override the callback function by providing a new one to …
-
Hi all, thank you for the feedback. I may stick with just creating a new connection since there's no auth required. Just curious, Oliver, why do you prefer using stream_buffer instead of a callback? Doesn't that then require another step with the pop() method? Thanks again!
-
stream_buffer avoids crashes when too many messages are coming in, because receiving and processing the data get decoupled and can then work asynchronously. Also, try-except blocks in ubwa catch some errors that happen inside the callback function, so the error stack is not shown, and that makes debugging much harder.
-
The callback sits inside the asyncio loop that awaits new receives; it is basically such a loop, but it spawns a callback on every receive. If the callback needs a long time to finish its task, then receiving data can build up a backlog, because you are blocking the receive loop with your processing. With the stream_buffer, we receive the data asynchronously via websocket and just write it to the stream_buffer, which is only writing to one variable, and then it's done; receiving can continue with full resources. At peak times (5x more receives than average) this avoids a lot of crashes on my side. In a consumer loop like the sketch below, the delay should only kick in when the stream_buffer is empty; if it is not, the loop just keeps running.
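A minimal version of such a consumer loop, as a sketch: `process()` is a placeholder for your own handling, and the import path may differ by ubwa version.

```python
import time
from unicorn_binance_websocket_api import BinanceWebSocketApiManager  # import path may vary by version

ubwa = BinanceWebSocketApiManager(exchange="binance.com")
ubwa.create_stream(channels="trade", markets="btcusdt")

while True:
    data = ubwa.pop_stream_data_from_stream_buffer()
    if data is False:
        # Only sleep when the buffer is empty; a filled buffer is drained at full speed.
        time.sleep(0.01)
    else:
        process(data)  # placeholder for your processing / DB write / calculations
```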
-
Receiving the data and storing it in a database or doing intensive calculations get decoupled that way...
-
This could be interesting: https://medium.lucit.tech/passing-binance-market-data-to-apache-kafka-in-python-with-aiokafka-570541574655
-
Hello,
Once I have a trade stream open and things are working along, I need to temporarily "tee" or fork it to another stream-processing function to perform some calculations; after X number of events, that fork would close and normal stream ingestion would resume.
Right now, I'm creating a whole new unicorn_binance_websocket_api_manager instance, which I then close, and that feels inefficient.
Thanks!