[via Mark Heyer]
When I try to combine the data with lmtoy_combine for a large number of obsnums (>= 16), I get the following message:
*** buffer overflow detected ***: terminated
I assume this means out of memory, but my desktop has 64 GB of RAM and the nc files are 153 MB each x 64 = 9.7 GB. It works fine for 10 obsnums (both HCN and CO) -- I haven't tried 11 through 15. If it is a memory issue, then I would ask whether we are reading in all of the data before combining, or reading it sequentially as needed. There should NOT be a limit on the number of files to combine unless the output cube is some ridiculous size.
For our 14m pipeline, we would read just the headers of all the files sequentially to determine the size of the output cube, and then sequentially read in the data to fold into the output cube. By doing it sequentially, we never run into a memory limit. This is also how I do it in Python with my own combine routine.
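To make the approach concrete, here is a rough sketch in C (not the LMTOY or 14m pipeline code, and not my Python routine; the Header fields, cell size, and the fold step are made up): pass 1 looks only at the headers to size the output cube, pass 2 folds in one file's worth of data at a time, so peak memory is the output cube plus a single input file no matter how many obsnums are combined.

```c
/*
 * Sketch of the sequential two-pass combine described above (NOT the LMTOY
 * or 14m pipeline code).  The Header fields, cell size, and the fold step
 * are placeholders for whatever the real NetCDF reader provides.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct {                 /* footprint taken from one file's header */
    double xmin, xmax, ymin, ymax;
    int nchan;
} Header;

#define NOBS 64                  /* number of obsnums to combine */
#define CELL 1.0                 /* output cell size, arbitrary units */

int main(void)
{
    Header hdr[NOBS];

    /* stand-in for reading just the headers of NOBS NetCDF files */
    for (int i = 0; i < NOBS; i++) {
        hdr[i].xmin = -10.0 - 0.1 * i;  hdr[i].xmax = 10.0 + 0.1 * i;
        hdr[i].ymin = -10.0;            hdr[i].ymax = 10.0;
        hdr[i].nchan = 256;
    }

    /* pass 1: the union of the footprints fixes the output cube dimensions */
    Header cube = hdr[0];
    for (int i = 1; i < NOBS; i++) {
        if (hdr[i].xmin < cube.xmin) cube.xmin = hdr[i].xmin;
        if (hdr[i].xmax > cube.xmax) cube.xmax = hdr[i].xmax;
        if (hdr[i].ymin < cube.ymin) cube.ymin = hdr[i].ymin;
        if (hdr[i].ymax > cube.ymax) cube.ymax = hdr[i].ymax;
        if (hdr[i].nchan > cube.nchan) cube.nchan = hdr[i].nchan;
    }
    int nx = (int)((cube.xmax - cube.xmin) / CELL) + 1;
    int ny = (int)((cube.ymax - cube.ymin) / CELL) + 1;

    /* only the output cube and a weight plane stay in memory */
    float *data   = calloc((size_t)nx * ny * cube.nchan, sizeof(float));
    float *weight = calloc((size_t)nx * ny, sizeof(float));
    if (!data || !weight) { perror("calloc"); return 1; }

    /* pass 2: read one file at a time and fold it into the cube;
     * here the "read" is skipped and only the weights are touched */
    for (int i = 0; i < NOBS; i++)
        for (size_t j = 0; j < (size_t)nx * ny; j++)
            weight[j] += 1.0f;

    printf("combined %d obsnums into a %d x %d x %d cube\n",
           NOBS, nx, ny, cube.nchan);
    free(data);
    free(weight);
    return 0;
}
```

With that structure, only the size of the output cube sets the memory footprint, never the number of input files.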
solved: the MAXHIST=512 buffer was overrun, despite the strncpy... need to check this code more closely to see why it allowed that.
In the meantime, MAXHIST=2048 now.
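For the record, here is a minimal illustration of how a fixed MAXHIST history buffer can be overrun even with strncpy/strncat in place. This is not the actual lmtoy code: add_history_bad/add_history_ok and the line format are made up, and the bad version uses strncat to trigger the fortified-glibc abort, but the same count-vs-remaining-space mistake applies to strncpy. The point is that the count argument bounds what one call copies or appends, not the total in the destination, so the abort only shows up once enough obsnum history lines have piled up -- consistent with 10 obsnums working and 16 failing.

```c
/*
 * Minimal illustration (NOT the lmtoy code) of overrunning a fixed-size
 * history buffer: the count passed to strncat/strncpy limits one call,
 * not the accumulated contents of the destination.
 */
#include <stdio.h>
#include <string.h>

#define MAXHIST 512                 /* the old limit; now 2048 */

static char history[MAXHIST];       /* accumulated HISTORY text */

/* Buggy pattern: the count is the buffer size, not the space remaining.
 * Built with -O2 -D_FORTIFY_SOURCE=2 (the default on many Linux distros),
 * glibc aborts with "*** buffer overflow detected ***" as soon as the
 * accumulated history would exceed MAXHIST. */
void add_history_bad(const char *line)
{
    strncat(history, line, MAXHIST);    /* count = max to APPEND, not total */
    strncat(history, "\n", MAXHIST);
}

/* Safer pattern: bound every append by the space actually left. */
void add_history_ok(const char *line)
{
    size_t used = strlen(history);
    if (used < MAXHIST - 1)
        snprintf(history + used, MAXHIST - used, "%s\n", line);
}

int main(void)
{
    char line[64];
    for (int i = 0; i < 64; i++) {      /* 64 obsnums, ~25 bytes of history each */
        snprintf(line, sizeof(line), "obsnum %d combined", 100000 + i);
        add_history_ok(line);           /* swap in add_history_bad() to see the abort */
    }
    printf("%zu bytes of history\n", strlen(history));
    return 0;
}
```

Raising MAXHIST to 2048 pushes the limit out; bounding each append by the space remaining (as in the hypothetical add_history_ok above) removes it.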