Meeting agenda:

  1. News & Announcements.
  2. NIRSpec pipeline validation updates: non-linearity and dark current discussion (Ubeda, all)
  3. Non-linearity experiments on NIRISS updates (Roy).
  4. TSO JWebbinar.
  5. Closing remarks.

Meeting slides

Slides from today can be accessed through innerspace (external folks, send an e-mail to Nestor Espinoza if you would like to see them).

Discussion items

5 mins

1. News & announcements

  • How are proposal reviews going? All going quite well! For MIRI, all TSOs are affected by the change in read mode, so those reviews require a bit of extra work. 
  • Nestor Espinoza comments on one particular proposal feedback that might be useful for other review(er)s: when performing calculations with the ExoCTK Contamination Tool, care must be taken as the sources used to identify contamination in it come from 2MASS. However, 2MASS is not a complete survey (no survey is!), and Gaia DR3 seems to find more sources than 2MASS. These can be added manually in the tool. 

2. NIRSpec pipeline validation

  • Nestor Espinoza introduces that the objective of this work is to validate the pipeline - check physical units, do the algorithms make sense? Meeting every Tuesday at 9:30 am for a quick update on progress and ideas (if you want to join the meetings, just let Nestor Espinoza know!). Analysis being led by Leonardo Ubeda.

  • Right now Leonardo Ubeda has validated up to the linearity and dark current steps of the Detector1 stage. He is using CV3 lengthy time-series data (10k integrations) to investigate the linearity correction. 

  • Leonardo Ubeda performed analyses similar to the ones shown in the previous TSO WG meeting, but using only illuminated pixels. The mean ratio he gets for (2-1)/(3-2) is 0.99554 (a deviation of 0.45 +/- 0.072%), which is inconsistent with unity.

    Michael Regan believes this is a low illumination exposure, so not really a "stress test" on the non-linearity correction.
    Stephan Birkmann slightly disagrees; NIRSpec/Prism has a very narrow PSF, so a handful of pixels at the center of the profile do get large fluences; in particular, around 1-2 µm.
    Michael Regan then proposes that a better test would be to plot this (2-1)/(3-2) ratio as a function of counts. Leonardo Ubeda will get to this for the next meeting.
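The proposed diagnostic could be sketched roughly as follows. This is a toy illustration on simulated ramps — the cube shape, slopes, and noise level are all made up for the example, and a real analysis would load the CV3 up-the-ramp data instead:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for a ramp cube: (integrations, groups, pixels).
# Slopes span low to high fluence so the ratio can be inspected vs. counts.
n_int, n_grp, n_pix = 1000, 4, 50
slope = rng.uniform(50.0, 2000.0, size=n_pix)                      # counts/group
ramps = slope[None, None, :] * np.arange(1, n_grp + 1)[None, :, None]
ramps = ramps + rng.normal(0.0, 5.0, size=(n_int, n_grp, n_pix))   # read noise

# Successive group differences: (2-1) and (3-2).
d21 = ramps[:, 1, :] - ramps[:, 0, :]
d32 = ramps[:, 2, :] - ramps[:, 1, :]

# Per-pixel ratio of mean group differences, against mean accumulated counts.
ratio = d21.mean(axis=0) / d32.mean(axis=0)
mean_counts = ramps[:, -1, :].mean(axis=0)

# For perfectly linearized data the ratio should scatter around 1 at all
# fluences; a trend of `ratio` with `mean_counts` would point at residual
# non-linearity precisely where the correction is stressed the most.
```

Plotting `ratio` against `mean_counts` is the test Michael Regan suggests: a flat cloud around 1 passes, a count-dependent slope fails.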

  • Stephan Birkmann also makes the point that there might be other useful datasets to test this with a larger number of groups. Nestor Espinoza thinks that's great, but believes that getting a good handle on this particular dataset is important because it has been released to the community — and most likely real Cycle 1 datasets will be better suited to these kinds of analyses in the future anyway.

  • Nestor Espinoza mentions that for the next step Leonardo Ubeda tested, the dark current step, the reference file clearly has some residual 1/f noise in it, which gets imprinted on the actual data.

    Michael Regan argues the effect would mostly be cosmetic; noise-wise, the variance is zero for that component.
    Nestor Espinoza then asks what the point of the dark correction is for TSOs. Presumably the idea is to capture only dark current counts, and not ASIC-related phenomena like 1/f noise.
    Michael Regan argues that this is important because the dark current might be localized in the detector.
    Everett Schlawin asks whether it would make sense to perhaps smooth the reference dark image then, in order to remove the 1/f component and just leave an estimate of the dark current. Having these 1/f patterns added to the data could have second or third-order effects (e.g., mess up the reference pixel correction, jump step, etc.). 
    Stephan Birkmann mentions that the next suite of dark frames for NIRSpec will have a better handling/correction of the 1/f pattern. It's a difficult problem.
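Everett Schlawin's smoothing idea could be sketched like this. It is a minimal toy example — the frame, dark-current gradient, and stripe amplitude are all invented — that separates row-correlated stripes (standing in for the 1/f component) from a smooth dark-current map by fitting a low-order trend to the row medians; it is not the pipeline's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 256, 256

# Hypothetical dark reference frame: a smooth, spatially varying dark current
# plus row-correlated stripes standing in for residual 1/f noise.
yy = np.linspace(0.0, 1.0, ny)[:, None]
dark_current = 0.01 + 0.005 * yy                       # smooth gradient (e-/s)
stripes = rng.normal(0.0, 0.02, size=(ny, 1)) * np.ones((1, nx))
ref = dark_current + stripes

# Fit a low-order polynomial to the row medians: the fit keeps the slowly
# varying dark current, while the residual about the fit is the stripe
# pattern, which we subtract from the reference.
rows = np.arange(ny)
row_med = np.median(ref, axis=1)
smooth = np.polyval(np.polyfit(rows, row_med, deg=2), rows)
cleaned = ref - (row_med - smooth)[:, None]

# `cleaned` now estimates the dark current alone, so subtracting it from
# science data would no longer imprint the reference file's 1/f pattern.
```

Whether a degree-2 fit (or any row-wise model) is appropriate for real NIRSpec darks is exactly the kind of question the next suite of dark frames Stephan Birkmann mentions should settle.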

5 mins

3. Non-linearity experiments

  • Unknown User (aroy) kicks off the discussion with an introduction to the analyses she's been performing. Slides of her presentation here.

  • Main idea: the non-linearity correction is not perfect, and we have error estimates for it. We can use those errors to compute how much of a bias to expect on transit depths measured with different instruments, given that we almost certainly don't have the real, underlying non-linearity correction for the pixels but only an estimate of it.

  • Michael Regan makes the point to be careful with the language and refer to "fluence" whenever we speak about the number of counts.

  • Unknown User (aroy) performed Monte Carlo simulations assuming a 1% transit. She applied a non-linearity function, drawn from the posterior distribution published in a technical report (Morishita et al. 2021), to different fluences. Then she sampled a non-linearity function again in order to correct the data — this simulates our ignorance of the "true", underlying non-linearity correction. Finally, she tried to recover the transit depth, which should be 1% but in reality is not, due to the non-linearity sampling just mentioned. Preliminary results show that this might indeed be important to study (see presentation for numbers).

    Michael Regan raises concerns about the actual coefficients derived in the technical report; among other comments, "normal" polynomials were used, whereas using Legendre polynomials, for instance, would get rid of the covariance between the coefficients. Unknown User (aroy) mentions that this covariance is nonetheless accounted for in the sampling scheme mentioned above.

    Nestor Espinoza mentions that while there might be concerns about the details (or applicability) of the non-linearity correction, this is to date the only team that has delivered error bars on the non-linearity coefficients. It's the best we have so far; and indeed, Unknown User (aroy)'s work might spark more interest from the different instrument teams in delivering those as well. Right now, her work shows that this is indeed important, and something we should look into in detail.
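The Monte Carlo scheme described above can be sketched in toy form. The quadratic non-linearity model, coefficient value, and uncertainty below are invented for illustration only; the actual work draws full polynomial coefficient vectors, with their covariance, from the published posterior:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy non-linearity: measured = F + c2*F^2, with an uncertain coefficient c2.
# Both the value and the sigma are made up for this sketch.
c2_true, c2_sigma = -2.0e-7, 2.0e-8

def apply_nonlinearity(fluence, c2):
    return fluence + c2 * fluence**2

def correct_nonlinearity(measured, c2):
    # First-order inversion of the toy non-linearity.
    return measured - c2 * measured**2

depth_true = 0.01                   # injected 1% transit
f_out = 40_000.0                    # out-of-transit fluence (counts)
f_in = f_out * (1.0 - depth_true)   # in-transit fluence

depths = []
for _ in range(2000):
    c2_apply = rng.normal(c2_true, c2_sigma)     # the detector's "true" curve
    c2_correct = rng.normal(c2_true, c2_sigma)   # our imperfect estimate of it
    out = correct_nonlinearity(apply_nonlinearity(f_out, c2_apply), c2_correct)
    inn = correct_nonlinearity(apply_nonlinearity(f_in, c2_apply), c2_correct)
    depths.append(1.0 - inn / out)

depths = np.array(depths)
# Scatter on the recovered depth induced purely by imperfect knowledge of
# the non-linearity coefficients, in parts per million.
scatter_ppm = 1e6 * np.std(depths - depth_true)
```

Because applying and correcting use independent draws, the spread of `depths` around 1% isolates the transit-depth error budget coming from the non-linearity uncertainty alone.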

  • Unknown User (aroy) finishes with some extra questions and to-dos on her work:

    - Does the shape of the error envelope make sense?
    - Add a trace for, e.g., SOSS SUBSTRIP256, and do the calculations for pixels that will receive flux.
    - Add the cross-dispersion profile for the flux in the orders.
    - Consider adding wavelength-dependent behaviors.
5 mins

4. TSO JWebbinar
  • Nestor Espinoza introduces that we have been asked to see if we could support a TSO JWebbinar. These are 2-hour classroom sessions, repeated up to 3 times. Material required: 1-2 presentations and 2-3 worked notebooks. 

  • For a Webinar date of Sep 15th, the deadline for material would be Sep 5th. Feedback from the team:

    Everett Schlawin: there definitely is a need from the community for more support using the pipeline. This was the experience from the ERS hack week. 
    Sarah Kendrew: appreciates that there is a need and it would be really good to host a webinar for the community, but personally doesn't have the time to commit to it.
    Knicole Colon: this might be a bit redundant with the webinar run by the Transiting Exoplanet ERS team.
    Nestor Espinoza: partly agrees; but it is also true that there are other TSO use-cases that were not touched upon in those Webinars. For instance, there are proposals on black-hole accretion approved for Cycle 1 (see, e.g., 1586; 1666) which are most likely focused on getting the best possible cadence with the data at hand.

  • Given no further feedback, we'll discuss this internally — but if anyone is interested in participating in this, please let us know.
5 mins

5. Closing remarks
  • Brian Brooks asked what the next steps are for the JDox work. Sarah Kendrew has created JIRA tickets to create the new content and has the green light to proceed. She will familiarise herself with the author workflow and contact the other WG members with instructions and info to proceed. Soon!