Here we describe the various scripts we are using for the NGC1333 and CLASSy project. These scripts can be organized into a pipeline. Most of them are based on the Miriad package, but we may also use some DS9, KARMA and NEMO. To simplify and organize our scripts, we have assembled a CVS module called MIS, in which all scripts (and the default parameter data to run them) are maintained. First we describe MIS, then the individual scripts. Related information can be found in the EGN pipeline, which is directly derived from the MIS pipeline described here, as well as AStute.
MIS (Miriad Interferometry Singledish) toolkit
This is our CVS based module to organize all our reduction scripts and store the parameters important for them as well. We store these in a script-independent way, so anybody can use csh, sh, python, fortran, C, C++, Java, ... There is also a manual available, although you can also generate a new one from your own MIS version.
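The parameter files themselves are plain keyword=value text, which is what makes them script-language independent. Here is a minimal sketch: the keyword names are taken from this page, but the exact layout of a real mis.def may differ.

```shell
#!/bin/sh
# Hypothetical mis.def fragment (keyword=value, one per line); the real
# files live under $MIS/def/.  Plain text means csh, sh, python, C, ...
# can all parse the same parameters.
cat > mis.def <<'EOF'
project=cx323.1D_89NGC133.1
rawdata=$MIS/rawdata/CLASSy
ary=D
EOF

# read one keyword back in plain sh:
ary=`grep '^ary=' mis.def | cut -d= -f2`
echo "ary=$ary"
```

Any other language only needs the same split-on-first-`=` logic to share these parameters.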
If you want to grab a copy of MIS, for now use our CVS repository version (or use this tar file which you will need to cvs update):
cvs -d :pserver:$USER@cvs.astro.umd.edu:/home/cvsroot checkout mis
where $USER is your given username in CVS (or use anonymous if you only need to grab a copy for read-only use).
If you have never used CVS before, get your introduction from our CARMA pages here. If you need an account, you will need to email Peter your preferred username, and the hashed output of the password generator (details in that link) via the command
perl -e 'print crypt("my_password","b6"), "\n";'
and he had better not get the response b60j72OwWnjlk from you!!!
Once you have obtained MIS, get it ready for installation:
cd mis
./configure
source mis_start.csh
and you are in principle ready to use the MIS scripts. There are some additional powerful options to construct automated pipelines, but that is for another chapter.
Occasionally there will be updates. If you are not a developer, and just want to update MIS from what others have done, this should work in most cases:

mis -u

Conversely,

mis -i

is simply a shortcut to committing all your MIS changes back to CVS for others to pick up. Use that with caution, as it could commit more than you may have bargained for!
Not all scripts have been properly MIS-pipelined yet! Keywords followed by = mean they can be input to this script; keywords preceded by = mean they are set by this script.
- getdata: project= rawdata= ary= track= =cvis
- map_inttime: source= =np
- uvcatSD: offname= fix=
- reduceSD.csh: example of single dish data reduction (using sinbad/sinpoly/varmaps) - MWP
- do_reduceSD: the same script, but now MIStified as well as cleaned up.
- do_mapSD: creates the SD cubes
- do_uvcat1: cut the interferometer data down to just the 4 USB windows, and apply flags
- do_inspect1: inspect your data (you can edit to suit your needs)
- do_cal1: calibrate
- do_mos1: example mosaicing of just the UV data
- lincomb.csh: template script for combination of singledish and interferometric data using mossdi ("Stanimirovic method").
There are many more scripts now; not all are described here.
Let's say you have just downloaded and installed MIS, and you have placed all cx323 and c0924 compressed miriad tar files in some data directory. Assuming MIRIAD and MIS have been loaded in your shell environment, you will need to make a symbolic link from $MIS/rawdata/CLASSy to the directory where all your tar files live, e.g.
cd $MIS/rawdata
ln -s /bigdisk/classy_data CLASSy
This is because we now use the convention in all mis.def files to use the reference rawdata=$MIS/rawdata/CLASSy, which will make them portable between users.
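A throwaway demo of that convention, with scratch directories standing in for your real $MIS tree and data disk, so you can see why the $MIS/rawdata/CLASSy reference stays portable:

```shell
#!/bin/sh
# Scratch-directory sketch of the rawdata symlink convention.
# 'mis_demo' and 'bigdisk' are stand-ins, not real MIS paths.
top=$PWD
MIS=$top/mis_demo
mkdir -p "$MIS/rawdata" "$top/bigdisk/classy_data"

cd "$MIS/rawdata"
ln -s "$top/bigdisk/classy_data" CLASSy   # each user points this at their own disk
ls -ld CLASSy
cd "$top"
```

Every user keeps their data wherever they like; only the symlink differs, so the mis.def files can be shared unchanged.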
Next, in a clean directory, we will want to reduce the data for a given cloud. This is always controlled by a number of tracks, which we call projects. These live in $MIS/cat, so for convenience we keep a symbolic link in the working directory:
ln -s $MIS/cat/n1333.lis
Now we are ready to set up a clean set of project directories, one for each observing track:
piperun -c n1333.lis pipesetup project=%s
you need the -c flag because the directories do not exist yet. If you want to start again from a fresh MIS, leave out the -c flag; since the project is already known, a simple
piperun n1333.lis pipesetup
would be sufficient. This will lose all your modifications in the MIS files (mis.def, mis.uvflag etc.).
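Conceptually, piperun substitutes each project name from the .lis file into the command (the %s) and runs it in that project's directory. A rough sh equivalent of 'piperun -c n1333.lis pipesetup project=%s', with echo standing in for the real pipesetup and a two-line stand-in for the real n1333.lis:

```shell
#!/bin/sh
# Sketch only: 'echo' replaces pipesetup, and this n1333.lis is a fake
# two-project list, not the real catalog from $MIS/cat.
cat > n1333.lis <<'EOF'
cx323.1D_89NGC133.1
cx323.1E_89NGC133.17
EOF

while read p; do
  mkdir -p "$p"                          # the -c flag: create missing project dirs
  ( cd "$p" && echo "pipesetup project=$p" )
done < n1333.lis
```

This is why the same .lis file can drive every later stage: only the command after it changes.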
Now you are ready for a pipeline reduction. First the data should be extracted, and summaries made. Useful for both SD and UV data:
piperun n1333.lis 'getdata; report'
where you can notice the use of single quotes and a semi-colon to issue two commands in the pipe. After this you will see the 23 projects having expanded a bit, from 13.8 GB to 16.2 GB.
Now we are ready to extract the single dish data (sddata):
piperun -v n1333.lis do_uvcatSD
followed by a rather verbose reduction of the spectra, so we keep those in a logfile:
piperun -v -o reduceSD.log n1333.lis do_reduceSD device=/null sleep=0
Managing MIS files using CVS
Assuming you are familiar with CVS: in each project (directory) a symbolic link def points to a project within the $MIS/def/ tree, which is managed via CVS. It is thus important that we all commit frequently to avoid nasty surprises.
If you work in pure CVS, you would normally do the following
cvs update file
edit file          (even if it had a conflict from the update just done)
cvs commit file
In MIS this is slightly more complicated, because we don't work directly in $MIS/def
mis -u        (this runs a 'cvs update' in all of MIS, so you also get new scripts etc.)
pipesetup     (this copies the def/ files back into your project)
do_your_work
pipesave      (this copies the files back to $MIS/def via the symlink)
cvs2def       (this commits the files back into CVS for others to pick up)
Example Session using MIS
A simple non-pipelined session could be as follows:
mis -q
mkdir data10
cd data10
pipepar -c rawdata=/home/teuben/carma/work/n1333/rawdata
getdata project=cx323.1E_89NGC133.17
getdata track=22 ary=E
report
map_inttime
A more advanced session making the inttime map would be to first construct a pipeline consisting of getdata and map_inttime for all projects, and then use the piperun command to execute this.
piperun -c n1333.lis 'pipeline 2 getdata map_inttime > Pipefile; pipepar -c project=%s; pipe all'
In each project directory you will then have an xyt.tab file for the next stage. For example, to get a listing of how many pointings each track had, use
piperun n1333.lis pipepar -v project -v np
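If you just want a quick sanity check without piperun, and ASSUMING xyt.tab holds one pointing per line (an assumption for illustration only; the real table may carry x, y and inttime columns), np is just a line count:

```shell
#!/bin/sh
# Fake three-pointing xyt.tab; the real file format is assumed, not verified.
printf 'p1\np2\np3\n' > xyt.tab

np=`wc -l < xyt.tab | tr -d ' '`   # tr strips wc's padding on some systems
echo "np=$np"
```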
and here's how to then create the inttime maps, in various formats (miriad, fits):
When new data come in
When new data come in and have been placed in the rawdata directory, the following MIS commands will get them into your local (and hopefully otherwise empty) directory:
pipepar -c
getdata trial=1 ary=D
report
where ary=D is actually the default and not needed. Also, this will make a symlink; if you really must have a copy, add the link=0 keyword.
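To illustrate the symlink-versus-copy difference (a dummy file stands in for a real miriad dataset; the filenames here are made up):

```shell
#!/bin/sh
# Sketch of getdata's default symlink behavior versus link=0 (real copy).
mkdir -p rawdemo
echo "visibilities" > rawdemo/track22.mir

ln -s rawdemo/track22.mir track22_link.mir   # default: symlink, no extra disk used
cp rawdemo/track22.mir track22_copy.mir      # link=0 style: independent full copy
```

The symlink saves disk space but breaks if the rawdata area is unmounted (see the laptop section below); the copy survives on its own.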
After this you can cut it down to its basic USB win4 (9,11,13,15) selection using

do_uvcat1

for interferometric data, and

do_uvcatSD

for single dish data. Go ahead and inspect them the way you like, and create a mis.uvflag file with entries such as
edge=2 tsys=500. select=time(23:09:00.0,23:10:00.0),source(3c84) select=ant(7),time(00:00:00.0,00:25:00.0) select=ant(7) tsys=20.,100. select=time(23:00:00.0,23:20:00.0),ant(16) select=time(00:00:00.0,00:15:00.0),ant(16) select=ant(17),win(2,3)
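As a hedged sketch (NOT the actual do_uvflag implementation), one plausible way such select= entries translate into MIRIAD uvflag calls, with echo showing each command instead of running it and a made-up dataset name vis4:

```shell
#!/bin/sh
# Sketch: map each select= line of mis.uvflag onto one uvflag command.
# 'vis4' is a hypothetical dataset name; 'echo' prevents real execution.
cat > mis.uvflag <<'EOF'
select=ant(7)
select=time(23:00:00.0,23:20:00.0),ant(16)
EOF

grep '^select=' mis.uvflag | while read sel; do
  echo uvflag vis=vis4 flagval=flag "$sel"
done
```

The real script presumably also honors the edge= and tsys= entries; only the select= handling is sketched here.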
You can formally flag this in pipeline fashion using

do_uvflag
Once you are happy with the flag file, save it:

pipesave
and if this is a new project, you will have to add it to CVS for others to benefit:
cd $MIS/def
cvs add cx323.1D_89NGC133.1
cvs add cx323.1D_89NGC133.1/mis.*
cvs commit
Anybody else can now retrieve these using the command

pipesetup
in your data project directory. Any changes to these persistent pipe files need to be done using CVS, or, if you only have those files to worry about (and not code), you can use our shortcut
Using CVS will give us a well known tool to make sure we're all agreeing on the parameters and the flag file(s), and whatever else we decide to save on a project-by-project (trial really) basis.
For final combined mapping we'll have to create another project, since only one mis.def file can exist per project.
Single Dish Data Reduction
When new data come in, here is the standard list of things to do to reduce the Single Dish data (regardless of whether the interferometer people have already flagged things or not):
mis -u
mkdir ExampleD1
cd ExampleD1
pipepar -c
getdata trial=1 ary=D
report
do_uvcatSD
do_uvflag vis=sddata
do_reduceSD
You can replace the last line by
do_reduceSD device=/null sleep=0
if you don't want do_reduceSD to prompt and display the data. The maps for each antenna are stored in SD/, and the pictures of all 23 antennas for each molecule are stored in ./ (e.g. HCN.allants.gif).
If you want to just blanket-remove an antenna from the maps and discard it (this would be on top of the standard flagging), add the badants= keyword, e.g.
but if you do, make sure this is saved in $MIS for others to pick up
pipesave
mis -i
Working from home/laptop
There's a potential pitfall if you want to work from home or on your laptop, since you may not have the rawdata directory mounted (although you can actually do this using fusefs). But getdata also has an scp option, with which you can fully fake being at work, except of course symlinks don't work and you get a full copy of the data. Depending on your internet speed, getting the data this way could be slow. Here's an example:
mkdir D1
cd D1
pipepar -c
getdata trial=1 ary=D firstname.lastname@example.org
This will ensure that your rawdata= is properly pointing to the motherland. If not, mis -i would save a bad value to CVS and could mess up your colleagues.
Another option is to use the wget_getdata script to create a local clone of all the compressed tar files. Re-running this script will then also keep your local repository in sync with the master version.
Nightly pipeline
This script is supposed to run regularly (nightly), after which all calibrated and processed visibilities and single dish cubes can be found for any subsequent mapping experiments. The script lives in $MIS/templates/n1333_nightly.csh and can easily be adapted to your needs. If you place a symlink from your $MIS/Nightly to the location where we keep this Nightly, others can reuse the processed data. This is currently /n/chara6/teuben/n1333/Nightly_latest. You can find a more complete description of the data products in Raw & Calibrated Data.
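A hedged skeleton of such a nightly driver (the real n1333_nightly.csh will differ in detail): it simply writes the command sequence used earlier on this page into a runnable script, rather than executing anything here.

```shell
#!/bin/sh
# Skeleton only: collects the pipeline commands from this page into
# nightly.sh for a cron job to run. Not the real n1333_nightly.csh.
cat > nightly.sh <<'EOF'
#!/bin/sh
mis -u
piperun n1333.lis 'getdata; report'
piperun -v n1333.lis do_uvcatSD
piperun -v -o reduceSD.log n1333.lis do_reduceSD device=/null sleep=0
EOF
chmod +x nightly.sh
```

Starting with mis -u matters: each night's run then picks up whatever script and parameter changes were committed during the day.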
Keeping your miriad and MIS up to date
mis -u

will keep MIS up to date. If you have confirmed you can update your MIRIAD using
mirboss
mir.subs fitsio
mir.prog fits
then a good way to update miriad would be
For MIS you could also check the MiriadMis update page, to see what's new.