The csv-mapper fails on the full dataset (that's why there is currently a filter set on notebooks).
We should use https://github.com/felixge/node-memory-leak-tutorial or some other technique to find the source of the issue.
Hopefully it's just a leak and not a general issue.
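A minimal sketch of the snapshot-diff approach, assuming a Node version that ships `v8.writeHeapSnapshot` (Node >= 11.13; older Nodes need a module like `heapdump`, or the `v8-profiler` setup from the linked tutorial). The workload in the middle is a placeholder standing in for the csv-mapper run:

```js
// snapshot-diff.js — write heap snapshots before and after the suspect
// workload, then load both files in Chrome DevTools (Memory tab) and use
// the "Comparison" view to see which object types grew between snapshots.
const v8 = require('v8'); // v8.writeHeapSnapshot ships with Node >= 11.13

v8.writeHeapSnapshot('before.heapsnapshot');

// Placeholder workload: replace with the csv-mapper run on the full dataset.
const retained = [];
for (let i = 0; i < 1e5; i++) retained.push('row-' + i);

if (global.gc) global.gc(); // collect garbage first when run with --expose-gc
v8.writeHeapSnapshot('after.heapsnapshot');
```

A real leak shows up as an object type whose count keeps growing across successive snapshots; a merely large (but stable) working set points at the duplicate-string situation discussed below.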
If there is a large amount of duplicate strings that is not a leak, I have had good experiences (in Java) with de-duplicating the strings in memory using weak sets or a corresponding pair of weak maps. You pay in CPU when loading the data, but in the long run the time saved in the garbage collector is greater, because garbage collection scales nonlinearly with the amount of memory in use. Later versions of Node already support WeakSet via ES6 behind an experimental flag (io.js enables it by default).
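A sketch of the de-duplication idea in Node. One caveat when porting it from Java: JavaScript's WeakSet/WeakMap only accept objects, not primitive strings, so a truly weak string pool isn't expressible with them; a plain Map-based interner (entries held strongly, cleared after loading) still captures the memory win:

```js
// string-pool.js — keep one canonical copy of each distinct string value.
// Freshly parsed duplicates become garbage as soon as the canonical copy
// is stored in their place, so the heap holds each distinct value once.
// Note: JS weak collections only hold objects, so unlike Java's
// WeakHashMap-based interner this pool holds its entries strongly;
// call pool.clear() after loading if the entries should not outlive it.
const pool = new Map();

function intern(s) {
  const canonical = pool.get(s);
  if (canonical !== undefined) return canonical; // reuse the first-seen copy
  pool.set(s, s);
  return s;
}

// Usage while loading, e.g. in the csv-mapper's row handler (hypothetical
// field names): record.notebook = intern(parsedField);
```

With highly repetitive CSV columns (categories, notebook names, and the like) this trades a Map lookup per field for keeping each distinct value on the heap only once.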