Webb Office Hours Session 3:  March 14, 2024

Q&As: 

Q1: Where are the Webb Office Hours procedures and guidelines?

A1: Type your question into the WebEx chat. We will asynchronously copy questions from the chat to this main page and work through them as a group. If you have images to share, please give WebEx permission to share your screen (you may need to log out and log back in again to enable this feature).


Q2: During the 8 Feb 2024 WOH, the user asked (Q3): "What causes the black dots in the MIRI MRS channel 2 image below?" Experts suggested they were likely due to bad pixels; the bad pixel map had recently been updated, so the guidance was to rerun the pipeline using the updated files. That advice removed most of the weird spots, but some still appear in the data. In addition, when extracting spectra from regions with low SNR, the user finds spectra with negative flux (specifically in channel 1, which doesn't have many lines for their source; see attached image). Their science and background data are equally deep, so they used pixel-wise background subtraction, but even using master background subtraction they still see negative flux. Is this potentially caused by the way the mask was applied? 



A2: If you're not detecting continuum, it's possible you're just in the error levels; the region has low flux and is very diffuse. Pixel-wise background subtraction can work well when you have diffuse extended emission, few cosmic ray showers, and you're trying to eliminate systematic background errors (especially if the background is the same depth as your science data). The spectrum looks reasonable in that you're getting down into the detector noise. MIRI dark current varies with time, which complicates things: there is no dark reference file you could subtract that would give you an exact zero. One of the pipeline steps looks at a central region in the middle of the detector, computes the median effective residual dark, and subtracts that off, assuming the delta dark current is constant across the detector. Within some fraction of the pixel uncertainty, the result may not be exactly zero on average, and that could explain what you're seeing here. If it were multiple sigma below zero it would be more concerning, but this spectrum seems reasonable. 
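The point that negative fluxes can simply reflect the error levels can be illustrated with a toy simulation (synthetic numbers only, not MIRI data): subtracting an equally deep background adds the two noise terms in quadrature, so in a source-free region roughly half the subtracted pixels scatter below zero.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: science and background exposures of equal depth, both
# dominated by the same noise level, with no real source flux.
noise = 0.5                        # per-pixel 1-sigma noise (arbitrary units)
science = rng.normal(0.0, noise, size=10_000)
background = rng.normal(0.0, noise, size=10_000)

# Pixel-wise background subtraction: the noise adds in quadrature, so the
# subtracted data scatter around zero with sigma = noise * sqrt(2).
subtracted = science - background
print(subtracted.std())            # ~0.707
print((subtracted < 0).mean())     # ~0.5: about half the pixels are negative
```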


Q3: There are times when users want to do an aperture extraction over all channels, with the same aperture in every channel so that they extract from the same region on the sky. The data are for a bright cluster galaxy (BCG) with an extended CGM tail. The user selects some regions and plots them, giving a circular BCG region to use for spectral extraction with the Cubeviz extraction tool, and they want to use this region for all channels. However, in some channels (specifically channel 1) the BCG falls outside the mosaic – the BCG isn't centered in the same place, so the region contains NaN pixels. If the user wants to do a sum to extract the spectrum from this region with the Cubeviz tool, it won't work due to the presence of NaN pixels. Is it possible to eventually change the code to use, e.g., numpy.nansum? It would make the spectral extraction tool easier to use (rather than resorting to a manual method). 

A3: Getting a spectrum will be challenging because the source is falling out of the field of view. You would have to make an assumption about the source profile in the region of sky you don't see; you could make some reasonable assumptions (e.g., assume it is radially symmetric and paste in the values). We need to check with the Jdaviz experts whether numpy.sum or numpy.nansum is used. Regardless, the tool should be able to handle NaN pixels when summing, because there are different reasons why NaNs may appear in the data.
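The difference the user is describing can be seen in a minimal NumPy sketch (illustrative values, not Jdaviz internals): a single NaN spaxel inside the aperture poisons numpy.sum, while numpy.nansum simply ignores the NaNs.

```python
import numpy as np

# Synthetic spaxel values inside a circular aperture; two spaxels fall
# outside the mosaic footprint and are NaN (illustrative values only).
aperture_pixels = np.array([1.2, 0.9, np.nan, 1.1, np.nan, 1.0])

plain_sum = np.sum(aperture_pixels)         # NaN poisons the whole sum -> nan
nan_aware_sum = np.nansum(aperture_pixels)  # NaNs skipped -> sum of valid spaxels

print(plain_sum)      # nan
print(nan_aware_sum)  # ~4.2 (the four valid spaxels)
```

Note that nansum silently treats missing spaxels as zero flux, which is why an assumption about the unseen part of the source profile is still needed for a physically meaningful total.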


Q4: For arbitrary aperture shapes, when extracting a spectrum over some number of pixels chosen based on a given SNR and position on the sky, is there anything the user should consider? 

A4: The PSF will change considerably from short wavelengths to long wavelengths, and there's an aperture correction to make if you want to get back the light lost outside the beam. For channel 1, for example, ~10% of the total light is lost outside the MRS footprint. When you do a point source extraction using the pipeline, it includes corrections for that and multiplies by the appropriate value to scale it up. With a custom extraction box, an aperture correction won't be made, so you would have to manually account for that. You may get slightly different results if you use your own box around the source versus the pipeline's method. Note that for a custom extraction region, one of the things you can do to limit effects of not having an aperture correction is to make sure the region is bigger than the FWHM. 
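The manual correction described above amounts to dividing the aperture-summed flux by the fraction of the PSF the aperture captures. A hedged sketch (the function name and the 90% encircled-energy fraction are illustrative assumptions, not calibrated MRS values, which must come from a PSF model or calibration product):

```python
def aperture_corrected_flux(measured_flux, encircled_energy_fraction):
    """Scale an aperture-summed flux up to the total source flux.

    encircled_energy_fraction: assumed fraction of the PSF light captured
    by the custom aperture (hypothetical here; take it from a PSF model).
    """
    if not 0.0 < encircled_energy_fraction <= 1.0:
        raise ValueError("encircled_energy_fraction must be in (0, 1]")
    return measured_flux / encircled_energy_fraction

# Illustrative: an aperture capturing 90% of the light scales the
# measured flux up by 1/0.9.
total = aperture_corrected_flux(9.0, 0.90)   # ~10.0
```

Because the encircled-energy fraction is wavelength dependent, a per-channel (or per-wavelength) fraction would be needed in practice, which is one reason the pipeline's built-in point-source extraction is simpler to use when it applies.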


Q5.1: If a user wants to compute a moment zero map of a given line in Cubeviz, they use the plugin, which reports units of MJy/sr. The units of a moment zero map of integrated flux should be the cube unit times the spectral-axis unit, so the user expected the output unit to be MJy/sr*micron (micron being the spectral unit of the cube). It's not common in the literature to keep a moment zero map in MJy/sr; it would be more common to put it in terms of, e.g., erg/cm^2/s/arcsec^2.

A5.1: You could convert to cgs units if that is desired, but the standard by which all Webb products are provided is units of MJy/sr. You can use whatever conversions you like to go between the two, but we don't provide a standard translation to cgs in general. 
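If a cgs surface-brightness unit is wanted, the conversion can be sketched in plain Python as below. This assumes the line is narrow enough that a single reference wavelength suffices; since MJy is a per-frequency flux density, the conversion uses F_lambda = F_nu * c / lambda^2 (the function name and example numbers are illustrative):

```python
import math

C_UM_PER_S = 2.99792458e14                        # speed of light, micron/s
MJY_CGS = 1.0e-17                                 # 1 MJy in erg/s/cm^2/Hz
SR_TO_ARCSEC2 = (180.0 / math.pi * 3600.0) ** 2   # arcsec^2 per steradian

def mom0_to_cgs(mom0_mjy_sr_um, line_wavelength_um):
    """Convert a moment-0 value from MJy/sr*micron to erg/s/cm^2/arcsec^2,
    evaluated at an assumed single line wavelength."""
    f_nu = mom0_mjy_sr_um * MJY_CGS                    # erg/s/cm^2/Hz/sr * um
    f_lam = f_nu * C_UM_PER_S / line_wavelength_um**2  # erg/s/cm^2/sr
    return f_lam / SR_TO_ARCSEC2                       # erg/s/cm^2/arcsec^2

# Example: 1 MJy/sr*micron at 10 micron -> ~7.0e-16 erg/s/cm^2/arcsec^2
print(mom0_to_cgs(1.0, 10.0))
```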

Q5.2: Is the output unit of the moment zero map in Jdaviz MJy/sr, or is it MJy/sr*micron? 

A5.2: The maps provided in the actual data cubes are in MJy/sr. The Jdaviz documentation indicates that the units are the cube unit times the spectral-axis unit. This would be a good question to ask in the Jdaviz help desk.
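To see where the cube-unit-times-spectral-unit convention comes from, here is a minimal hand-rolled moment 0 over a synthetic cube (not Jdaviz code): the map is a sum of surface-brightness values multiplied by wavelength intervals, so its natural unit is MJy/sr * micron.

```python
import numpy as np

# Synthetic cube on a (wavelength, y, x) grid, in MJy/sr
wavelengths = np.linspace(5.0, 5.1, 11)    # micron
cube = np.ones((11, 4, 4))                 # flat spectrum, 1 MJy/sr everywhere

# Moment 0 = integral of S(lambda) d(lambda); a rectangle-rule sum here.
# Each term is (MJy/sr) * (micron), so the map carries MJy/sr * micron.
dlam = np.diff(wavelengths)                # micron
mom0 = np.sum(cube[:-1] * dlam[:, None, None], axis=0)
print(mom0[0, 0])   # ~0.1 for a flat 1 MJy/sr spectrum over a 0.1 micron window
```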


Q6: A user reported a bug in the outlier detection step to the pipeline help desk but hasn't heard any updates. The issue is that very large datasets can't be processed on a normal server: working with > 1000 exposures at once would require > 1 TB of RAM. The user worked around the problem with their own script. Is this bug still being worked on? 

A6: The pipeline team is aware of the Help Desk ticket and has been working to understand and address the issue. They will keep the user updated via the Help Desk ticket.
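One common workaround for this kind of memory blow-up (a generic sketch, not the pipeline team's fix or the user's actual script) is to keep the exposures on disk, e.g. as np.memmap arrays, and run detection over spatial slabs, so that only n_exposures * chunk_rows * n_cols pixels are resident at once. The MAD-based clipping below is a simplified stand-in for the pipeline's real outlier detection:

```python
import numpy as np

def flag_outliers(stack, nsigma=4.0):
    """Flag outliers in a (n_exposures, rows, cols) stack against the
    per-pixel median, using the MAD as a robust sigma estimate
    (a simplified stand-in for the pipeline's outlier detection)."""
    med = np.median(stack, axis=0)
    mad = 1.4826 * np.median(np.abs(stack - med), axis=0)
    return np.abs(stack - med) > nsigma * np.maximum(mad, 1e-10)

def detect_outliers_chunked(exposures, chunk_rows=128, nsigma=4.0):
    """Run detection on horizontal slabs so that only
    n_exposures * chunk_rows * n_cols pixels are in memory at once.
    `exposures` can be np.memmap arrays opened from disk."""
    n_rows, n_cols = exposures[0].shape
    flags = [np.zeros((n_rows, n_cols), dtype=bool) for _ in exposures]
    for r0 in range(0, n_rows, chunk_rows):
        r1 = min(r0 + chunk_rows, n_rows)
        # Only this slab of each exposure is read into memory.
        slab = np.stack([e[r0:r1, :] for e in exposures])
        bad = flag_outliers(slab, nsigma)
        for i, f in enumerate(flags):
            f[r0:r1, :] = bad[i]
    return flags
```

Because the per-pixel statistics only depend on the pixel's own stack of values, slab boundaries don't change the result; the peak memory is set by chunk_rows rather than the full image size.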