This page archives Slack comments from the splinter session on outlier detection of the Improving JWST Data Products Workshop (IJDPW).



Peter Zeidler

Dear all,
thanks for providing input about your applications regarding outlier detection. I have prepared an initial (still incomplete) list of datasets for the different setups, which we can use to compare your results with each other (where applicable) and with the STScI pipeline. Given the large diversity in how your routines are applied, we will probably adjust the datasets during the workshop as needed.

  • NIRCam and MIRI coronagraphy: PID 1386, Obs 7, 8, 9, 30, and 31
  • NIRCam imaging: PID 1538, Obs 154 (F115W/F444W)
  • NIRSpec MOS: PID 1345, Obs 64 (G235M)
  • NIRSpec IFU: PID 2729, Obs 5 (G235H)
  • NIRISS imaging: PID 1475, Obs 1
  • MIRI MRS: PID 4489, Obs 2



Peter Zeidler

The outlier rejection breakout session will take place in CafeCon, the conference room next to the cafeteria (toward the patio at the back), from 1:30 pm to 4:30 pm.
Given the many different modes and the different pipeline stages at which people apply their outlier detection algorithms, I suggest having more targeted discussions and perhaps short demonstrations.


Peter Zeidler

A few people would also like to join the MSA slitstepping session, so we agreed to start with spectroscopic outlier detection, especially for the IFU.


Peter Zeidler

If people have issues finding the datasets on MAST, I have created a Box folder with the data.


Kevin Volk

It looks like the .sh script for the NIRISS imaging example (listed as program 1475, observation 1) downloads the NIRCam prime data, not the NIRISS parallel data.
Also, we want program 1475 observation 6 (or any of observations 3 to 6), as observation 1 had an issue.


Timothy Brandt

Can someone suggest a data set with a significant number of new warm pixels that are not in the bad pixel mask?

for my reference: MIRI MRS: PID 4489 Obs 2

David Law

Also for reference, Obs 1 should have no sources in the field, which makes the bad pixels much easier to find.

Timothy Brandt

How are the reads from MIRI combined to form a ramp?  There is a very strong odd-even pattern that is largely in the first read.  Is the first read discarded, or are there calibrations for this?

I think my answers are here: https://iopscience.iop.org/article/10.1088/1538-3873/acdea6


David Law

Here's a notebook that I use outside of the pipeline for finding bad pixels from dedicated background observations and applying those flags to the science data. It's not heavily commented, but hopefully clear enough.

D_Law_Flag_badpix.ipynb

By default it writes the DQ-corrected files to ratesub.fits files, which then need to be passed to the next pipeline stage, though it can also overwrite the rate files.
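The notebook itself is attached above, but the core idea of finding bad pixels from dedicated background observations can be sketched as follows. This is a minimal illustration, not the contents of the notebook; the function name and thresholds are assumptions, and a real implementation would fold the resulting mask into the DQ extension of the rate files (e.g. via the jwst datamodels) before the next pipeline stage.

```python
import numpy as np

def flag_badpix_from_background(bkg_stack, nsigma=7.0):
    """Flag pixels that deviate persistently from the smooth background
    in a stack of dedicated background rate images (n_exp, ny, nx)."""
    # Median-combine the background exposures to suppress cosmic rays.
    med = np.nanmedian(bkg_stack, axis=0)
    # Robust estimate of the background level and pixel-to-pixel scatter.
    center = np.nanmedian(med)
    mad = np.nanmedian(np.abs(med - center))
    sigma = 1.4826 * mad  # MAD -> Gaussian-equivalent sigma
    # Pixels far from the background level in the median image are bad.
    return np.abs(med - center) > nsigma * sigma
```

In practice the returned boolean mask would be OR'ed into the science files' DQ arrays (as DO_NOT_USE) and the corrected files written out, matching the ratesub.fits workflow described above.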


Timothy Brandt

Update to what I was saying about NIRSpec: telegraph pixels are a significant culprit in jump detection. I think I can do better here.



Peter Zeidler

Here are the summary points of what was discussed yesterday. Feel free to add information if I missed something:

  • Summary by David Law of the new IFU outlier detection: the new algorithm is more a bad-pixel masking than an outlier detection. One needs to find a good balance between rejecting true outliers and preserving actual sources.
  • For most instruments, updating warm and hot pixels happens too infrequently, so a significant number must be clipped in the later outlier rejection steps rather than in the ramp fitting.
  • For NIRSpec: remaining outliers in the cube originate from the unreliable flat-field DQ. This DQ was set when the flat had to be extrapolated due to insufficient data or S/N. Masking these pixels as DO_NOT_USE improves the cubes (Jane Morrison) at the cost of additional data loss. This flag might give the wrong idea, so a better explanation should be given or another flag should be introduced.
  • Michelle Hutchinson (GSFC) developed a promising outlier routine using layered sigma clipping. Details will be given in a paper due out in a few weeks.
  • It would be better to apply outlier rejection in earlier steps (stage 1), where resampling (blotting) is not needed => early flagging; resampling can introduce additional uncertainties. A method similar to the one used for HST could be introduced, where bad pixels are detected across (almost) all science observations in a given time interval by searching for common outliers. James Davis worked on this for a sparse-field dataset. This could be introduced for all instruments, although the uneven illumination of the NIRSpec detector might be challenging.
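The HST-style common-outlier idea from the last point can be sketched as follows. This is a minimal illustration under stated assumptions, not James Davis's actual code: the function name and thresholds are hypothetical, and it assumes dithered exposures so that real sources land on different pixels while detector defects stay put.

```python
import numpy as np

def common_outlier_badpix(rate_stack, nsigma=5.0, min_frac=0.8):
    """Flag detector pixels that are outliers in most exposures of a
    time-ordered stack of rate images (n_exp, ny, nx). Real sources move
    between dithers, so a pixel that is deviant in nearly every exposure
    is a detector defect (warm/hot pixel), not a source."""
    n_exp = rate_stack.shape[0]
    # Per-exposure robust background level and scatter.
    med = np.nanmedian(rate_stack, axis=(1, 2), keepdims=True)
    mad = np.nanmedian(np.abs(rate_stack - med), axis=(1, 2), keepdims=True)
    sigma = 1.4826 * mad  # MAD -> Gaussian-equivalent sigma
    # Count in how many exposures each pixel deviates strongly.
    outlier = np.abs(rate_stack - med) > nsigma * sigma
    frac = outlier.sum(axis=0) / n_exp
    return frac >= min_frac
```

A one-off cosmic ray or a source in a single exposure fails the min_frac test, so only persistent defects are flagged; for unevenly illuminated detectors such as NIRSpec, the flat global statistics used here would need to be replaced by a local background model.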