
timestamp for recorded patches (this automatically includes frames) #83

Open
Neumann-A opened this issue Aug 24, 2017 · 24 comments

@Neumann-A
Contributor

Due to the discussion in #72 we may also need timestamps between patches/frames if the hardware has to use different delays between the acquisition of new patches/frames.

Maybe a flag must be introduced to describe whether those timestamps are necessary for the given data.

The timestamp of a new frame would always be the timestamp of the first patch within that frame.

Those timestamps can then be used to calculate a timestamp for a reconstructed image.

@hofmannmartin
Member

Maybe a flag must be introduced to describe whether those timestamps are necessary for the given data.

I think we won't need a flag. We would make the field optional and assume that the data is acquired contiguously if the field is not present.

The timestamp of a new frame would always be the timestamp of the first patch within that frame.

Would one use a global timestamp such as the one describing the creation of the MDF file, or would one derive time differences from the receiver clock to easily capture the phase relations between measurements at different patch positions?

@Neumann-A
Contributor Author

Would one use a global timestamp such as the one describing the creation of the MDF file, or would one derive time differences from the receiver clock to easily capture the phase relations between measurements at different patch positions?

I would vote for the time difference between patches/frames. It seems to be the option with the most precision. Calculating the absolute timing from that would be at most of order O(N*J). The reason I prefer dt over absolute timings: rounding errors if t gets large, and one may want to choose another frame as t = 0.
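
For illustration, a minimal Python sketch (not part of the spec) of how absolute patch start times could be recovered from such per-patch time differences; the function and variable names are made up:

```python
import numpy as np

def patch_start_times(dt, t0=0.0):
    """dt[k] = start(patch k+1) - start(patch k); returns the absolute start
    times of all patches of one frame relative to t0."""
    dt = np.asarray(dt, dtype=np.float64)
    # Prepend t0 and accumulate the differences (at most N*J additions overall).
    return t0 + np.concatenate(([0.0], np.cumsum(dt)))

# Example: three patches, 10 ms between consecutive patch starts.
print(patch_start_times([10e-3, 10e-3]))  # -> [0., 0.01, 0.02]
```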

@hofmannmartin
Member

I would vote for the time difference between patches/frames. It seems to be the option with the most precision. Calculating the absolute timing from that would be at most of order O(N*J). The reason I prefer dt over absolute timings: rounding errors if t gets large, and one may want to choose another frame as t = 0.

I would modify your proposal.

Time differences should be derived from the sampling rate of the receiver, as this provides exact knowledge about the phases and is not prone to rounding errors.

@tknopp
Member

tknopp commented Aug 24, 2017

OK, let's do this. Let's first discuss the semantics and after that bikeshed the name.

Proposal: we introduce a new (optional) field /acquisition/pauseSamples that is of dimension J and type Int64. It describes the number of samples that are measured after each drive-field cycle but which are dropped after acquisition and therefore neglected in the stored data stream. Usually pauseSamples will be a multiple of numberOfSamplingPoints.
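
To make the proposal concrete, a hedged h5py sketch of how such an optional dataset could be written; the file name and values are placeholders, and the field is not (yet) part of the specification:

```python
import h5py
import numpy as np

J = 4  # number of patches per frame (placeholder value)
pause_samples = np.array([1632, 1632, 1632, 3264], dtype=np.int64)  # placeholder data

with h5py.File("measurement.mdf", "a") as f:
    acq = f.require_group("/acquisition")
    acq.create_dataset("pauseSamples", data=pause_samples)  # dimension J, Int64
```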

@hofmannmartin
Member

What if we put it into /acquisition/receiver/pauseSamplingPeriods, to indicate that it is derived from the sampling rate of the receive channel?

@tknopp
Member

tknopp commented Aug 25, 2017

I don't care about naming; your proposal sounds good. @NeumannIMT: since you opened this issue, are you fine with the solution? Given your approval we can update the specification.

@Neumann-A
Contributor Author

Proposal: we introduce a new (optional) field /acquisition/pauseSamples that is of dimension J and type Int64. It describes the number of samples that are measured after each drive-field cycle but which are dropped after acquisition and therefore neglected in the stored data stream. Usually pauseSamples will be a multiple of numberOfSamplingPoints.

Although I like the precision of the proposal, I think not all scanner implementations can fulfill it, because it assumes that the acquisition is not stopped and that the scanner can actually keep track of how many samples have been dropped (which must be implemented in the scanner's acquisition logic). Usually scanners don't work that way; the acquisition logic just waits until a trigger condition for acquisition is received. We as users might be able to listen in on how many triggers have been dropped and thus derive some value for pauseSamples, but I don't think this is always the case. In addition, pauseSamples needs knowledge of the sampling rate of the receivers (which may not be the same for all receivers).
So having a fallback to a less precise timestamp seems favorable.
(Question: Do we need sample-accurate timing for patch timestamps?)

Maybe /acquisition/pauseAfterPatch and allow it to be given either in sampling cycles (UInt64) or in seconds (Float64).
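
A sketch of how a reader could normalize such a dual-typed field to seconds (assuming the field name and semantics from the proposal above; none of this is in the spec):

```python
import h5py
import numpy as np

def pause_after_patch_seconds(f: h5py.File, receiver_sampling_rate: float):
    """Return the pause per patch in seconds, or None if the optional field is absent."""
    if "/acquisition/pauseAfterPatch" not in f:
        return None  # optional field missing: assume contiguous acquisition
    data = np.asarray(f["/acquisition/pauseAfterPatch"][()])
    if np.issubdtype(data.dtype, np.integer):
        return data / receiver_sampling_rate  # UInt64 sampling cycles -> seconds
    return data  # already Float64 seconds
```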

@tknopp
Member

tknopp commented Aug 28, 2017

We should check whether we can convert between seconds and samples in a lossless fashion. But I think we are basically fine here: it's a 52-bit mantissa vs. 64 bits, so I think we can just use seconds.
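
A quick numerical check of the lossless-conversion question (Python sketch, example numbers only):

```python
def roundtrip_ok(samples: int, sampling_rate: float) -> bool:
    """True if a sample count survives the round trip samples -> seconds -> samples."""
    seconds = samples / sampling_rate
    return round(seconds * sampling_rate) == samples

# Integer sample counts up to 2**53 are exactly representable in a Float64
# (52-bit mantissa plus the implicit leading bit); what can introduce error is
# the division by the sampling rate, so a writer could run a check like this.
assert roundtrip_ok(10_000_000, 2.5e6)  # 4 s at 2.5 MS/s round-trips exactly
```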

(Question: Do we need sample-accurate timing for patch timestamps?)

It depends. There is a scenario where one uses just a fraction of a DF period for movement. Then one would need that (plus shifting the DF phase accordingly).

@hofmannmartin
Member

We could solve the precision issue by using a pauseAfterPatchUnit field. Either the unit is seconds, or the unit is something like "number of sampling cycles".

@tknopp
Member

tknopp commented Aug 29, 2017

Yes, but we should do this only if it is really necessary. After thinking more about it, my precision argument is not really valid: a Float64 can handle our use cases just fine. You only get rounding errors in a scenario with a very long pause time. So the only thing we would be optimizing by using an Int64 is gaining 12 bits of longer pause times without losing precision.

@Neumann-A
Contributor Author

After thinking about it, having a timestamp of the acquisition start would be more favorable than having knowledge of the pause between the end of a patch and the start of a new patch.

So currently the discussion about the delta t is:
last patch end <-dt-> next patch start
My (original) idea was to have:
last patch start <-dt-> next patch start

The latter seems more usable for imaging because one doesn't need to know when a patch ends.
So the name of the dataset would be dtBetweenPatches or something similar.

@hofmannmartin
Member

After thinking about it, having a timestamp of the acquisition start would be more favorable than having knowledge of the pause between the end of a patch and the start of a new patch.

/acquisition/startTime does just that.

The latter seems more usable for imaging because one doesn't need to know when a patch ends.
So the name of the dataset would be dtBetweenPatches or something similar.

The problem I see is that no delay does not mean <-dt-> = 0 in this case, so you would have to set <-dt-> = T in every case.

In any case, both approaches differ only by the time T. For the phase delay that is completely irrelevant.

@tknopp
Member

tknopp commented Aug 29, 2017

@NeumannIMT: This does not work, since you lose precision over time because you encode a large dynamic range within the Float64.
We do not need to care about usability when an equivalent value is very easy to calculate.

@Neumann-A
Contributor Author

/acquisition/startTime does just that.

No, it only describes the start of the measurement, not the time relation between the patches.

The problem I see is that no delay does not mean <-dt-> = 0 in this case, so you would have to set <-dt-> = T in every case.

Wasn't there the idea to have a flag indicating whether continuous acquisition is possible or not, making the timestamp optional? (Maybe the existence of the dataset is flag enough for that.)

This does not work, since you lose precision over time because you encode a large dynamic range within the Float64.

How long do you assume the acquisition of one patch will take? I'm currently assuming on the order of tenths of a second, and that is already quite long.

Float64 epsilon is roughly 2E-16 for a value of 1.0. Assuming that any sensible step for a time delay is in the 1 ns [0.1 ns] range, we could easily handle time delays on the order of 1E6 seconds (11 days) just for the delay between the starts of two patches.
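
The same estimate as a one-liner (illustration of the argument above, example value only):

```python
import numpy as np

t = 1e6  # seconds, roughly 11 days
print(np.spacing(t))  # ~1.2e-10 s: the Float64 resolution at t is still below 1 ns
```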

@tknopp
Member

tknopp commented Aug 29, 2017

Float64 epsilon is roughly 2E-16 for a value of 1.0. Assuming that any sensible step for a time delay is in the 1 ns [0.1 ns] range, we could easily handle time delays on the order of 1E6 seconds (11 days) just for the delay between the starts of two patches.

Yes, I think you are correct about that.

@tknopp
Member

tknopp commented Aug 29, 2017

We could also use Float128...

@tknopp
Member

tknopp commented Aug 29, 2017

How would you name your parameter?

@Neumann-A
Contributor Author

dtBetweenPatches, dtPatches, or deltatPatches: the description would be the time delay between the starting points of two consecutive patches.

In my opinion dtPatches is the catchiest.

Special cases:
dataset missing -> allows the assumption that the data has been acquired continuously (or the delay is unknown)
dataset scalar -> fixed delay between patches, given as a scalar value
dataset 1D of dim N -> continuous acquisition of patches; the delay between frames is given
dataset 1D of dim J -> delay between patches given; delay between frames unknown (may need a flag to distinguish this from the case above when J = N, or we need an additional dataset dtFrames)
dataset 2D of dim N x J -> all other cases

After writing this down, we may need both dtPatches and dtFrames to cover all cases in a good way (see the sketch below).
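
To make the case analysis concrete, a sketch of how a reader might classify the shape of a hypothetical dtPatches dataset (names and return strings are made up; this mirrors the proposal above, not the specification):

```python
import numpy as np

def classify_dt_patches(dt, N, J):
    """Map the shape of a (hypothetical) dtPatches dataset onto the cases above."""
    if dt is None:
        return "missing: assume continuous acquisition (or delay unknown)"
    dt = np.asarray(dt)
    if dt.ndim == 0:
        return "scalar: fixed delay between patches"
    if dt.shape == (N, J):
        return "N x J: individual delay for every patch of every frame"
    if dt.shape == (J,) and J != N:
        return "J: delay between patches given, delay between frames unknown"
    if dt.shape == (N,) and N != J:
        return "N: patches acquired continuously, delay between frames given"
    if dt.ndim == 1 and dt.shape[0] == N == J:
        return "ambiguous (N == J): needs a flag or a separate dtFrames dataset"
    raise ValueError(f"unexpected dtPatches shape {dt.shape} for N={N}, J={J}")
```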

We could also use Float128...

Not a good idea: not all languages have native support for extended-precision floating point. You would also have to define an HDF5 type for it.

@tknopp
Member

tknopp commented Aug 29, 2017

So you do not want this to be the delta from the start point, but rather the delta between two patch start points? (I did not get that before.)

@Neumann-A
Contributor Author

See comment after tagging:

I would vote for the time difference between patches/frames. It seems to be the option with the most precision. Calculating the absolute timing from that would be at most of order O(N*J). The reason I prefer dt over absolute timings: rounding errors if t gets large, and one may want to choose another frame as t = 0.

@tknopp
Member

tknopp commented Aug 29, 2017

dataset missing -> allows the assumption that the data has been acquired continuously (or the delay is unknown)
dataset scalar -> fixed delay between patches, given as a scalar value
dataset 1D of dim N -> continuous acquisition of patches; the delay between frames is given
dataset 1D of dim J -> delay between patches given; delay between frames unknown (may need a flag to distinguish this from the case above when J = N, or we need an additional dataset dtFrames)
dataset 2D of dim N x J -> all other cases

In v2 we have tried not to make the dimensions different for different setups. I would therefore simply require dimension J.

@tknopp
Member

tknopp commented Aug 29, 2017

Please have a look at the changelog, where we discuss this.

@Neumann-A
Contributor Author

With that

After writing this down, we may need both dtPatches and dtFrames to cover all cases in a good way.

this is not an issue.

Also, the dimensions would be J-1 and N-1.

Maybe add another group /acquisition/timing and put all timing-related stuff in there.

@tknopp
Member

tknopp commented Aug 29, 2017

Both are separable. Each frame has exactly the same timings.
