...

A1: Webb Office Hours: Type your question into the WebEx chat. We will asynchronously copy questions from the chat to this main page and work through them as a group. If you have images to share, please give WebEx permission to share your screen (you may need to log out and log back in again to enable this feature).

...

Q2.1: I am reducing NIRSpec and MIRI data with the latest version of the pipeline to improve the quality of our data. Previously, I fixed the warm pixels in my dataset with LA-Cosmic and NSClean; now I'm trying to use the STScI pipeline. The magenta result is from the STScI pipeline with the warm pixel correction I was using previously applied. I was told it may not be necessary to use those tools anymore because the pipeline has improved; however, when I don't apply them I still have many warm pixels. How do I get rid of them? When I use my script for deleting the warm pixels, I get features that aren't real.

A2.1: Have you tried the pixel_replace step in calwebb_spec2?
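A possible invocation (a minimal sketch, not the questioner's setup): pixel_replace is skipped by default in calwebb_spec2, so it has to be enabled explicitly. The input filename is a placeholder, and the algorithm shown is just the step's default.

```python
# Minimal sketch: run calwebb_spec2 with the pixel_replace step enabled
# (it is skipped by default). "my_rate.fits" is a placeholder filename.
from jwst.pipeline import Spec2Pipeline

result = Spec2Pipeline.call(
    "my_rate.fits",                      # countrate product from calwebb_detector1
    steps={
        "pixel_replace": {
            "skip": False,               # step is off unless explicitly enabled
            "algorithm": "fit_profile",  # the step's default interpolation scheme
        }
    },
    save_results=True,
)
```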

Q2.2: I have not tried the pixel_replace step; I used the outlier_detection step. I spoke with David Law, who said it's difficult because the mask should be updated more often, or created for a specific dataset, because it depends on the source brightness. I submitted a couple of help desk tickets, INC200233 and INC200235. They suggested changing parameters, but I'm still having issues with both MIRI and NIRSpec, and the two teams are suggesting different things. What should I do?

A2.2: There were responses and suggestions in the help desk ticket, but it seems the email notification from STARS was missed. The user will go back and review the help desk ticket.

Q2.3: Maybe the issue is that I'm applying LA-Cosmic in addition to the pipeline?

A2.3: We update the warm pixel mask roughly every six months; we haven't delivered the latest one yet, but the version from March should be good enough. We use background observations to flag warm pixels, since there shouldn't be any sources in them; that's how we build the bad pixel mask. Skipping the outlier_detection step would help if you're overcorrecting, but if that's not the case it may be another issue. The other reason that feature might appear is an extraction aperture that's too small, but if you're using the same extraction size as with previous versions, that's probably not the source of the issue either. Single-pixel extraction can cause problems, and it wouldn't be surprising to see these jagged line features, since the PSF is undersampled. The MIRI team will investigate further and comment in the help desk ticket. In the meantime, as a sanity check, try running without the extra bad pixel correction and with the outlier_detection step turned off.
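As a concrete version of that sanity check (a sketch, assuming a standard calwebb_spec3 run; the association filename is a placeholder):

```python
# Sanity-check sketch: rerun calwebb_spec3 with outlier_detection skipped,
# on data that has NOT had the extra bad-pixel correction applied.
# "my_asn.json" is a placeholder association file.
from jwst.pipeline import Spec3Pipeline

result = Spec3Pipeline.call(
    "my_asn.json",
    steps={"outlier_detection": {"skip": True}},
    save_results=True,
)
```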

...

Q3.1: When we deal with velocities and want to measure the errors, the resolution is different for different sub-channels. We'll have different errors for the velocities depending on the wavelength, right?

A3.1: Yes. The spectral resolution changes with wavelength, so the velocity errors will change with wavelength as well.

Q3.2: In this plot, I'm showing different lines and different velocities for a specific region. The error bars are from the fit. How can we trust any trends we see or not? Are there any wavelength calibration problems in this region? 

A3.2: Channel 4 has the largest wavelength uncertainty. On our JDox page we have the wavelength calibration status (MIRI MRS Calibration Status). You can see how it has improved with time and how accurate it is in the 2B range (-1 ± 6 km/s). We're much better now in channel 4 than we were before; it just depends on where in the spectrum we've had good lines to calibrate on. That table will give you a realistic error bar, versus just using the measurement error, and then you'll be able to see whether you have a significant change or not: 30 km/s is significant; 5 km/s could be within the errors.
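Folding the calibration-status number into the fit error is a simple quadrature sum. The sketch below uses the ±6 km/s systematic quoted above and an invented 3 km/s fit error as example inputs.

```python
# Sketch: combine the statistical line-fit velocity error with the
# wavelength-calibration systematic (from the MIRI MRS Calibration Status
# table) in quadrature. Both numbers here are example values.
import numpy as np

sigma_fit = 3.0    # km/s, from the line fit (example)
sigma_wcal = 6.0   # km/s, systematic from the calibration-status table (example)

sigma_total = np.hypot(sigma_fit, sigma_wcal)
print(f"Total velocity uncertainty: {sigma_total:.1f} km/s")
```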

...

Q4.1: I have on-source imaging for MIRI (bright cluster galaxy and circumgalactic medium tail). Here's the image as I've calibrated it using the JWebbinar notebook materials (with and without background subtraction). Here's our source (the BCG, a white blob). I'm assuming the black dots mean oversaturation. Is that correct? Is there anything I can do to fix this? The program ID for this observation is 3629. We also have a background image.

A4.1: You have 12 groups and 4 dithers, which is good. The question is whether it is partially or fully saturated. For infrared detectors, you read out the detector as you're exposing (the number of groups is the number of times the detector is read out during the exposure). It's possible to recover saturated sources unless the pixel saturates in the very first readout; if it saturates later in the ramp, the earlier reads can still be used. You have to dig into the data to see whether it's only partially saturated (i.e., it didn't saturate immediately). It also looks like your background image has some sources in it, which means it may not be the best background; some of the other black dots could be oversubtraction, if a source in your background got oversubtracted. If you file a help desk ticket, the team can walk you through recovering partially saturated pixels. As long as a pixel isn't fully saturated, you may be able to recover some of that data.
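One way to "dig into the data" is to look at the GROUPDQ flags in a ramp-level product and find the first group at which each pixel saturates (a sketch; the filename is a placeholder, and only the first integration is inspected):

```python
# Sketch: find the first saturated group per pixel from the GROUPDQ array
# of a ramp-level product (e.g., output of the saturation step).
# "my_ramp.fits" is a placeholder; GROUPDQ is (integrations, groups, y, x).
import numpy as np
from astropy.io import fits

SATURATED = 2  # JWST GROUPDQ bit flag for saturation

with fits.open("my_ramp.fits") as hdul:
    gdq = hdul["GROUPDQ"].data[0]      # first integration: (groups, y, x)

sat = (gdq & SATURATED) != 0           # True where a group is flagged saturated
ever_sat = sat.any(axis=0)
first_sat = np.where(ever_sat, sat.argmax(axis=0), -1)  # first flagged group, -1 if none

fully_sat = ever_sat & (first_sat == 0)   # saturated from the very first read
partial_sat = ever_sat & (first_sat > 0)  # earlier reads are still usable
print(f"fully saturated: {fully_sat.sum()}, partially saturated: {partial_sat.sum()}")
```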

A4.1 (continued): How many of the black spots are low-valued versus zero-valued? If they're just low-valued, it could be flat field normalization. With the current pipeline it isn't obvious offhand where an image has no data, but this will be improved in future versions: for instance, the region outside the image footprint, where there are no values, will be set to NaN, making it more obvious where the image contains no data. If a NaN-valued region appeared in the middle of the image, that would tell you there is no information there and the data are being clipped away (e.g., by outlier_detection).
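A quick way to take that census (a sketch; the filename and the 1st-percentile cutoff for "low-valued" are arbitrary choices):

```python
# Sketch: tally NaN, zero, and low-but-nonzero pixels in a level-3 image,
# to help separate "no data" regions from flat-field normalization dips.
# "my_i2d.fits" is a placeholder filename; the 1% cutoff is arbitrary.
import numpy as np
from astropy.io import fits

with fits.open("my_i2d.fits") as hdul:
    sci = hdul["SCI"].data

nan_px = np.isnan(sci)
zero_px = (sci == 0)
finite = sci[~nan_px & ~zero_px]
threshold = np.percentile(finite, 1)  # "low" cutoff: bottom 1% of valid pixels
low_px = (np.nan_to_num(sci, nan=np.inf) < threshold) & ~zero_px

print(f"NaN: {nan_px.sum()}, exactly zero: {zero_px.sum()}, low-valued: {low_px.sum()}")
```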

Q4.2: There seems to be a line separating the top and the bottom of the image. What is it from? I'm using the latest pipeline and reference files. The filter is F2100W.

A4.2: If it's a flat field issue, note that F2100W sees a lot of thermal emission and the flat fielding is sometimes difficult to handle there. We're also seeing issues with outlier detection at long wavelengths. Submit a help desk ticket and the team can investigate.

...

Q5: I reached out before about vertical stripes in the background of our IFU data after collapsing our cubes (channels 1 and 2). It was mentioned that it could be detector noise. We're looking at a BCG and a CGM tail that extends upwards. The emission is very diffuse, so we wanted to make sure we handle it properly. I found a notebook for MIRI MRS reduction with a post-Stage 3 function that estimates the background striping and subtracts it. I followed the exact same method, but ours is a mosaic with 6 total pointings, while the example used just 1 pointing. Is it reasonable to use this method for my data?

A5: That seems reasonable. You can do this kind of ad-hoc correction to measure and subtract the stripes, being careful not to subtract real science data (so it's very source- and program-specific). 
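For reference, the general shape of such a correction might look like the sketch below; `image` and `source_mask` are assumed inputs, and this is not the notebook's exact implementation.

```python
# Sketch of an ad-hoc vertical-stripe correction on a collapsed cube image:
# mask the source, estimate each column's background level, and subtract it.
import numpy as np

def destripe_columns(image, source_mask):
    """Subtract a per-column median estimated from source-free pixels."""
    bkg = np.where(source_mask, np.nan, image)  # hide real emission
    col_levels = np.nanmedian(bkg, axis=0)      # one background level per column
    col_levels = np.nan_to_num(col_levels)      # fully masked columns: no correction
    return image - col_levels[np.newaxis, :]

# Usage sketch (hypothetical arrays):
# clean = destripe_columns(collapsed_image, source_mask=bcg_and_tail_mask)
```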

...

Q6: Regarding the error bars on the MIRI 1D flux spectrum: you mentioned the errors are not reliable and recommended estimating them ourselves. I tried to do that. Is my method correct?

A6: David has a paper on IFU covariance, and we're going to take that into account in the pipeline (we're pushing those updates right now). The corrections aren't available in the pipeline yet, but they will arrive in a major update over the next couple of months; this will be a pipeline and CRDS update, not a Cubeviz update, and the initial version should be available in about a month. In the meantime, if your source has a smooth spectrum, you can do a spline fit to remove the astrophysical continuum and take the empirical RMS of the residuals as your error estimate. It won't work well if there are a lot of features in the spectrum, but it is one way to get that information.
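A sketch of that spline-based estimate (the `wave` and `flux` arrays and the smoothing choice are assumptions; strong spectral features should be masked before fitting):

```python
# Sketch: fit a smooth spline to a (mostly featureless) 1D spectrum and take
# the RMS of the residuals as an empirical per-spectrum error estimate.
import numpy as np
from scipy.interpolate import UnivariateSpline

def empirical_rms(wave, flux, smoothing=None):
    """RMS of residuals about a spline continuum fit; tune `smoothing` per spectrum."""
    spline = UnivariateSpline(wave, flux, s=smoothing)
    residuals = flux - spline(wave)
    return np.sqrt(np.mean(residuals**2))

# Usage sketch (hypothetical arrays, sorted by wavelength):
# sigma = empirical_rms(wave, flux)
```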
