Lecture: Calibration & Imaging #4
I have not yet started with the RIME lecture; the fundamentals are taking me a bit longer than I thought.
I need some help/guidance in developing my lectures; I am still reading up and having trouble figuring out how to plan and organize them. This is the plan for the lectures from the email @o-smirnov sent (maybe we should also start thinking about the order and exact times for the lectures?):
My lectures will be for 2x90 minutes, with practicals. Does that mean 3 hours of lectures, with demos from the Python notebooks interspersed? At last Thursday's meeting, Oleg said the lectures have to be about general principles, with particular emphasis on RATT-interest areas. I am planning to base my lectures on NRAO's summer school slides and the published lectures from the Synthesis Imaging in Radio Astronomy book. I have identified the following relevant presentations:
And these are the relevant lectures from the Synthesis Imaging in Radio Astronomy book:
Not all of these are going to be covered in equal depth, but I am feeling rather overwhelmed by the amount of material. I have finished reading only part of these materials, and have some questions/comments:
That's all I have for now. For RATT-specific issues, I also have to add high dynamic range imaging, wide-field imaging/W-projection, wide-band imaging/multi-frequency synthesis, antenna primary beams/A-projection, AW-projection, etc.
Check also my RIME courses here: You raise good questions -- let's go over them tomorrow. Cheers,
Hi Modhurita, you do raise good points here. I would like to make the following comments, which may assist.
Let's discuss further tomorrow.
I have uploaded the completed calibration lecture here. It is probably too elementary (much simpler than the NRAO summer school lecture slides and @o-smirnov's 3GC3 RIME talk), but that is by design - I tried to keep it as simple as possible, with as little math as possible, and focused on RATT-interest aspects of calibration.
Good work! Nice and clear and pitched at the right level. Some comments on @modhurita's lecture:
I have incorporated @o-smirnov's suggestions and uploaded a new version of the talk, with some additional minor changes/typos fixed.
I changed it to "Amplitude and direction of electric vector remain unchanged during propagation" - I think that's accurate, since the direction remains unchanged while the sense reverses as the electric vector oscillates?
This is really nice work, the slides are very clear. Slide 31 though: maybe the accompanying narrative will explain this, but shouldn't that be the effect of correcting for the primary beam on the noise across the field of view? The beam response itself doesn't affect the thermal noise, which is just jitter in the visibility measurement in the complex plane, and so is direction independent.
I have been thinking about @IanHeywood's comment about the figure in slide 31 - what exactly does that image represent? The noise in the visibility is additive and direction-independent, thus the noise in the image is constant over the field of view. In the image on that slide, is the noise boosted by the primary beam to illustrate that different sources were detected at different SNRs in the source finding step performed on the apparent image? That is, the noise map scaled by the beam has no physical meaning, but reflects the SNR at which the sources were detected in the apparent image? Is "Effect of correcting for the primary beam on the noise across the field of view", as per Ian's comment above, a better heading for the slide?
I have divided my presentation into two lectures: one on the RIME, and one on calibration. There is some overlap between the two (some slides appear in both lectures), because I felt that was necessary to provide context for the calibration lecture.

@SpheMakh and I discussed the practicals. He says the students should know the calibration procedure (that it is a least-squares minimization process, and the concepts of residual visibilities, the residual image, etc.) before he starts the calibration exercises. I agree, but the calibration procedure is on slide 19, and I am reluctant to place it earlier, since it is the most complicated/dense slide of the lecture. It would either have to be moved to just after slide 2, to expand on the simple introduction to calibration there, or after slide 11, where the structure of the G-Jones matrix is introduced, because Sphe plans to start the calibration exercises with a G-Jones example. It therefore seems best if I finish my lecture before Sphe begins his tutorial session. He estimates it will probably take several hours, and plans to illustrate the calibration process for the following cases:

The MS will be simulated beforehand by Sphe and will be a small MS (probably a KAT-7 one). Does this plan sound OK, or is this all too much?
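On "calibration is a least squares minimization": the idea can be sketched in a few lines, fitting per-antenna complex gains g_p by minimising the residual visibilities V_pq - g_p M_pq conj(g_q) over all baselines. This is only an illustrative toy, not Sphe's actual exercise; the antenna count, gain values and use of `scipy.optimize.least_squares` are my own assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def gain_residuals(x, V, M):
    """Residual visibilities V_pq - g_p M_pq conj(g_q), stacked as reals."""
    nant = V.shape[0]
    g = x[:nant] + 1j * x[nant:]              # unpack real vector into complex gains
    R = V - np.outer(g, np.conj(g)) * M       # residual visibility matrix
    iu = np.triu_indices(nant, k=1)           # cross-correlations only
    return np.concatenate([R[iu].real, R[iu].imag])

# Toy simulation: point source at phase centre, so all model visibilities are 1
rng = np.random.default_rng(1)
nant = 7
g_true = 1.0 + 0.1 * (rng.standard_normal(nant) + 1j * rng.standard_normal(nant))
M = np.ones((nant, nant), dtype=complex)
V = np.outer(g_true, np.conj(g_true)) * M     # gain-corrupted "observed" data

x0 = np.concatenate([np.ones(nant), np.zeros(nant)])  # start from unit gains
sol = least_squares(gain_residuals, x0, args=(V, M))
```

Whatever gains the solver converges to (they are only defined up to an overall phase), the residual visibilities at the solution should be consistent with zero; imaging those residuals is what gives the residual image.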
@o-smirnov, I have uploaded the revised lectures here. Can you please check if these look okay - I moved the slides on the structure of Jones matrices to the RIME lecture, and also added slides on commutation, polarization bases, etc. to that lecture. |
@modhurita sorry for the delay in my response. The usual approach to primary beam correction is to divide the final image by the model of the primary beam. This brings the attenuated sources at the field edge up to their intrinsic flux densities, but boosts the background noise by the same factor. The SNR at which a source is detected is probably easier measured by taking the ratio of its apparent brightness to a measurement of the local noise, which, if everything is behaving itself, will be uniform across the map.

I guess the complication arises for things in sidelobes, as they become apparently variable and do not have a constant SNR over time. But I don't think your map captures this in any case, as it looks like an average map over the duration of the observation, and the azimuthal variation has been averaged out. I think the far-field noise pattern you have produced is interesting to see, but I'm not sure what the practical application of it is, although I'd love to hear about one if I'm just being blinkered.

In my opinion, the only reason to care about things in sidelobes is to mitigate them during calibration if they happen to be affecting your science goal (or extreme dynamic range goal). If you're doing a wide-area survey with numerous pointings and there is something of interest in a sidelobe, then it's better studied just by looking at it in the relevant neighbouring pointing. Similarly, if it's a targeted observation to study an object of interest, then it's not going to be in the sidelobe in the first place.
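The point above can be illustrated numerically: in this toy 1-D sketch (all numbers invented for illustration), primary-beam correction divides both a source and the noise by the same beam factor, so the detection SNR is unchanged, while the noise map of the corrected image rises towards the field edge:

```python
import numpy as np

# Toy 1-D "field of view": a Gaussian primary beam attenuates sources
# towards the field edge; dividing by the beam model restores intrinsic
# flux but boosts the noise by the same factor.
rng = np.random.default_rng(42)
x = np.linspace(-1.0, 1.0, 1001)               # field coordinate
beam = np.exp(-0.5 * (x / 0.4)**2)             # assumed primary beam model

intrinsic = np.zeros_like(x)
intrinsic[200] = 10.0                          # source near the field edge
intrinsic[500] = 10.0                          # source at the field centre

sigma = 0.1                                    # direction-independent image noise
apparent = intrinsic * beam + rng.normal(0.0, sigma, x.size)

pb_corrected = apparent / beam                 # primary-beam correction
noise_map = sigma / beam                       # noise rises as 1/beam off-axis
```

The SNR of the edge source is identical before and after correction (`apparent[200] / sigma` equals `pb_corrected[200] / noise_map[200]` by construction), which is Ian's point: the beam-scaled noise map describes sensitivity across the field, not a direction-dependent thermal noise.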
I have added a calibration notebook which explains how to perform calibration using LM and StEFCal. See the attached calibration notebook:
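For readers skimming the thread, here is a minimal single-channel sketch of the StEFCal iteration (the alternating per-antenna solve of Salvini & Wijnholds). It is not taken from the notebook itself; the function name, antenna count and toy sky model (a unit point source at phase centre, so all model visibilities are 1) are illustrative assumptions:

```python
import numpy as np

def stefcal(V, M, niter=50, tol=1e-6):
    """Solve V ≈ G M G^H for per-antenna complex gains g, where G = diag(g)."""
    nant = V.shape[0]
    g = np.ones(nant, dtype=complex)
    for it in range(niter):
        g_old = g.copy()
        for p in range(nant):
            z = g_old * M[:, p]                    # model column corrupted by current gains
            g[p] = np.vdot(V[:, p], z) / np.vdot(z, z)   # 1-D least-squares fit for g_p
        if it % 2 == 1:                            # averaging step stabilises convergence
            g = 0.5 * (g + g_old)
        if np.linalg.norm(g - g_old) < tol * np.linalg.norm(g):
            break
    return g

# Toy test: simulate gain-corrupted visibilities and recover the gains
rng = np.random.default_rng(0)
nant = 7
g_true = 1.0 + 0.2 * (rng.standard_normal(nant) + 1j * rng.standard_normal(nant))
M = np.ones((nant, nant), dtype=complex)           # point source at phase centre
V = np.outer(g_true, np.conj(g_true)) * M
g = stefcal(V, M)
```

The solution is only defined up to an overall phase (the usual gauge freedom, fixed in practice by a reference antenna), but the reconstruction g_p conj(g_q) M_pq should match V.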
That's a pretty awesome notebook!
Seconded, really nice work. |