
large fraction of focalplane with poor throughput fraction for tile 10105, exp 275643-5 #285

Closed
sybenzvi opened this issue Feb 9, 2025 · 7 comments
Assignees
Labels
dailyops For listing individual dailyops problems

Comments

@sybenzvi
Collaborator

sybenzvi commented Feb 9, 2025

This tile was observed at high declination (72°N) after a long slew, and exhibited low efftime over half the focal plane. This may be related to #247.

[attached image]

@sybenzvi sybenzvi added the dailyops For listing individual dailyops problems label Feb 9, 2025
@araichoor
Contributor

if useful, below are a couple of diagnostics: they seem to show that the issue worsens with each exposure (i.e., the last exposure is the worst), and that petals 5-9 are affected.

first: note that there were some clouds passing by during those exposures: https://data.desi.lbl.gov/desi/users/raichoor/main-status/spacewatch/spacewatch-20250125.mp4.

then: here are my "usual" plots for these long slews, where I look at the sky brightness fluctuations over the focal plane (each row is for one exposure, each column is for one camera; see e.g. the outer edge of petal=6, camera b):

[attached images: sky brightness fluctuation plots, one row per exposure, one column per camera]

lastly: here are the STARRMS and THRUFRAC values for those three exposures, from the PETALQA extension of the exposure-qa-EXPID.fits files:

from astropy.table import Table

for expid in ["00275643", "00275644", "00275645"]:
    d = Table.read("/global/cfs/cdirs/desi/spectro/redux/daily/exposures/20250125/{}/exposure-qa-{}.fits".format(expid, expid), "PETALQA")
    print(d["PETAL_LOC", "STARRMS", "BTHRUFRAC", "RTHRUFRAC", "ZTHRUFRAC"])

=>

PETAL_LOC   STARRMS   BTHRUFRAC  RTHRUFRAC  ZTHRUFRAC
--------- ----------- ---------- ---------- ---------
        0  0.06528422  1.0279399  1.0018204 0.9522014
        1 0.058695726  0.9486853 0.99457973 0.9569484
        2  0.05965884 0.95581686  1.0168191 1.0129249
        3  0.03868476   1.101833  1.1327407 1.1757886
        4 0.056645714  1.1140901  1.0626199 1.1233506
        5 0.105635785   1.020575  0.9829534 1.0565091
        6 0.090100706  1.0348362  0.9842399 1.0102203
        7  0.15399164 0.94522595  0.9088826  0.896034
        8  0.05130075  0.9338694  0.9464021 0.9082319
        9 0.047410812 0.91712946 0.96894217 0.9077906
PETAL_LOC  STARRMS   BTHRUFRAC  RTHRUFRAC  ZTHRUFRAC 
--------- ---------- ---------- ---------- ----------
        0 0.06385986  1.1568973  1.1401111  1.0500982
        1 0.06043744  1.1454666   1.197144  1.1230997
        2 0.11466648  1.1177976  1.1904198  1.1684271
        3 0.05367049  1.2606283  1.2741456  1.3696429
        4  0.0881328  1.1232555  1.0398011  1.1386355
        5 0.19829196 0.94069904 0.89513355 0.98811567
        6 0.14985628 0.86915284  0.8197386  0.8569934
        7 0.17111763 0.72246665 0.68383014  0.6795049
        8 0.17706026 0.79954594  0.8282185  0.7795927
        9 0.09460766  0.8640908  0.9314571  0.8458892
PETAL_LOC   STARRMS   BTHRUFRAC  RTHRUFRAC  ZTHRUFRAC 
--------- ----------- ---------- ---------- ----------
        0  0.09673065  1.2188675  1.1936506  1.0939132
        1  0.09755205  1.2309362  1.2777183  1.1739671
        2  0.22994801  1.1560496  1.2171675  1.1389366
        3 0.043600462  1.4446377  1.4468718  1.5283242
        4   0.0824544  1.2588137  1.1701864  1.2863156
        5  0.32887688 0.93784684    0.87913 0.99748033
        6  0.19805364 0.79008347  0.7557109  0.8220628
        7  0.19915052  0.5198402 0.49743578   0.508951
        8   0.3652376 0.67570245 0.71597993 0.68093646
        9  0.17024389 0.76722246  0.8461483  0.7691124
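For triage, tables like the PETALQA output above can also be filtered programmatically. Below is a minimal, dependency-free sketch; the 0.8 cut is purely illustrative (not an official pipeline threshold), and in practice the rows would come from the astropy Table read shown above rather than hand-entered dicts.

```python
# Hypothetical helper: flag petals whose throughput fraction in any camera
# falls below a cut. The cut value (0.8) is an illustrative assumption.
def flag_low_thrufrac(petalqa_rows, cut=0.8):
    """Return PETAL_LOC values where any of the B/R/Z THRUFRAC is below `cut`."""
    bad = []
    for row in petalqa_rows:
        if min(row["BTHRUFRAC"], row["RTHRUFRAC"], row["ZTHRUFRAC"]) < cut:
            bad.append(int(row["PETAL_LOC"]))
    return bad

# A few rows transcribed from the third exposure (00275645) above:
demo = [
    {"PETAL_LOC": 3, "BTHRUFRAC": 1.445, "RTHRUFRAC": 1.447, "ZTHRUFRAC": 1.528},
    {"PETAL_LOC": 6, "BTHRUFRAC": 0.790, "RTHRUFRAC": 0.756, "ZTHRUFRAC": 0.822},
    {"PETAL_LOC": 7, "BTHRUFRAC": 0.520, "RTHRUFRAC": 0.497, "ZTHRUFRAC": 0.509},
]
print(flag_low_thrufrac(demo))  # -> [6, 7]
```

With an astropy Table, the same function works unchanged since Table rows support the same key lookups.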

@geordie666
Collaborator

Thanks Anand. It's interesting that the exposures worsen with time rather than settling down!

Let's mark all of these exposures as bad and re-process.

@abhi0395: Could you please mark exposures 275643, 275644, 275645 on tile 10105 from night 20250125 as bad, re-process and then report back here? Thanks!

@geordie666 geordie666 assigned geordie666 and abhi0395 and unassigned geordie666 Feb 10, 2025
@schlafly
Contributor

My hypothesis is that we're somehow planning these off target (e.g., at the pre-slew initial telescope HA/Dec rather than the actual HA/Dec?). Then we're drifting further off target with time.

@geordie666
Collaborator

That's an interesting idea, @schlafly.

Do you think this particular tile (10105) gets worse with time specifically because of its high declination, then?

Or are you suggesting that many of our long-slew cases in #247 might be caused by pre-slew planning?

@schlafly
Contributor

More the latter --- I'm guessing that for some reason long slews trigger mispositioning. We discussed with Klaus taking a series of backup exposures with long slews between them (tile 1, slew, tile 2, repeat tile 2, slew back, tile 1, repeat tile 1, etc.). Then we could see, e.g., whether the positioning of the repeats looks different from that of the post-slew tiles.

I don't know why this should happen! But some of this feels that way.

@abhi0395
Member

@geordie666 I have marked these exposures as bad for tileid 10105 and re-processed. The nightqa has also been updated. Please feel free to close the ticket if things look okay.

@geordie666
Collaborator

Thanks @abhi0395. I've reset the QA for this tile and I'm closing the issue.
