cat r001a.dat >>r001.dat
To copy structured binary files between machines of different binary format,
use the tsf(1NEMO) and rsf(1NEMO) programs and, if available, the
compress(1) and uncompress(1) utilities:
    tsf in=r001.dat maxprec=t allline=t | compress >> r001.data.Z

and on the other machine:

    zcat r001.data.Z | rsf - r001.dat

On non-Unix supercomputers the ASCII "205" format (see e.g. atos(1NEMO))
is often used. This may also be saved in compressed form, and can be
processed by NEMO after:

    zcat r001.data.Z | atos - r001.dat

See also the tcppipe(1NEMO) program to read the data over a pipe from
another machine.
Some N-body programs, which are capable of handling a series of snapshots
and selecting them with the times= keyword, are not able to handle
subsequent snapshots which are larger than the first one. In fact,
unpredictable things may happen; usually the program dumps core because of
illegal memory access. There are two solutions. The program can be
recompiled with a -DREALLOC flag, or by adding the corresponding #define in
the source code. The second solution is to prepend the datafile with a
large enough 'dummy' file.
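The second workaround can be sketched as follows; the file names and the
particle count are hypothetical, and the dummy must be at least as large as
the biggest snapshot in the series:

```shell
# sketch: put a large dummy snapshot in front of the data, so the
# first snapshot the program reads is also the largest one
mkplummer dummy.dat 100000              # >= largest snapshot in r001.dat
cat dummy.dat r001.dat > r001.fix.dat   # dummy first, real data after
```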
To display a scatter diagram in the form of a contour map, convert the two
columns to a snapshot by treating them as the 'x' and 'y' coordinates. The
remaining phase space coordinates are unimportant. Set the masses to 1, and
use the atos(1NEMO) format. A program like awk(1) can write the file for
atos(1NEMO); then snapgrid(1NEMO) creates an image(5NEMO) file, which can
optionally be smoothed using ccdsmooth(1NEMO) and displayed with
ccdplot(1NEMO). In case your host has nicer contour plotting programs, use
ccdfits(1NEMO) to write a fits(5NEMO) format file. Check also the tabccd
shell script, if available (or perhaps someone has written it in C
already); it calls awk, atos and snapgrid.
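A sketch of this pipeline, with hypothetical file names and grid
parameters. It assumes the "205" ASCII layout is: the particle number,
the dimension and the time, each on its own line, followed by the masses,
positions and velocities; check atos(1NEMO) for the authoritative
description of the format:

```shell
# sketch: 2-column table -> ASCII "205" -> snapshot -> smoothed contour map
awk '{x[n]=$1; y[n]=$2; n++}
     END {print n; print 3; print 0.0;
          for (i=0; i<n; i++) print 1.0;            # masses set to 1
          for (i=0; i<n; i++) print x[i], y[i], 0.0; # positions
          for (i=0; i<n; i++) print 0.0, 0.0, 0.0    # velocities
     }' xy.tab > xy.205
atos xy.205 xy.snap
snapgrid xy.snap xy.ccd nx=64 ny=64
ccdsmooth xy.ccd xy.sm gauss=0.1
ccdplot xy.sm
```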
The ds9(1) program is one of the external programs which can be used to
display images. ds9 understands a variety of FITS compression standards.
Transform your image to a fits file using ccdfits(1NEMO), and use ds9 on
that fits file.
    mkdir /dev/shm/$USER
    cd /dev/shm/$USER
    mkspiral s000 1000000 nmodel=40

The example of mkspiral(1NEMO) is taken from the NEMO bench(5NEMO) suite,
but this example is actually not very I/O dominated. The variable
$XDG_RUNTIME_DIR can also be used, or $TMPDIR, depending on your system
configuration. Another option is using the mktemp(1) command, e.g.

    mktemp tmp.XXXXXXXX
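A sketch of picking a fast scratch directory, preferring the RAM-backed
/dev/shm and falling back to $TMPDIR; the directory names are system
dependent:

```shell
# sketch: choose a scratch area, then create a unique work directory in it
if [ -d /dev/shm ]; then
    base=/dev/shm                # RAM-backed, fast for I/O heavy runs
else
    base=${TMPDIR:-/tmp}         # fall back to the system temp directory
fi
work=$(mktemp -d "$base/$USER.XXXXXXXX")
cd "$work"
```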
Using tcppipe(1NEMO) one can read data produced on other machines.
zrun(1) can uncompress on the fly, by prepending it to the command:

    zrun fitsccd ngc6503.fits.gz - | tsf -
pee(1) is tee for pipes.

sponge(1) soaks up standard input and writes to a file.

vipe(1) edits a pipe using your editor, e.g.

    command1 | vipe | command2

pv(1) monitors the progress of data through a pipe, e.g.

    mkplummer - 10000 nmodel=10 | pv | snapscale - . mscale=1
    echo mkplummer . 10000 > run.txt
    echo mkplummer . 10000 >> run.txt
    parallel --jobs 2 < run.txt

which will run both jobs in parallel.
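For more than a handful of jobs the run file can be generated in a loop; a
sketch, with a hypothetical series of Plummer models:

```shell
# sketch: generate one mkplummer job per line, then run 4 at a time
for i in $(seq -w 1 8); do
    echo "mkplummer p$i.dat 10000"
done > run.txt
parallel --jobs 4 < run.txt
```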
One can also use the -j flag of make for running commands in parallel.
Similar to the run.txt file created here, a well crafted Runfile can be
created, and

    make -f Runfile -j 2

should achieve the same result.
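A minimal Runfile sketch, with hypothetical targets; each target must be an
independent job so make can run them concurrently:

```make
# hypothetical Runfile: two independent jobs, run with: make -f Runfile -j 2
all: job1 job2
job1:
	mkplummer p1.dat 10000
job2:
	mkplummer p2.dat 10000
```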
Although NEMO can be configured using --with-openmp
to take advantage of multi-core OpenMP computing, there are really no
programs in NEMO taking advantage of this yet. However, programs using
nemo_main() should be aware of the user interface implications of
controlling how many cores are used:
0. the number of cores, as per omp_get_num_procs(). We actually take a
short-cut and use omp_get_max_threads(), since it listens to
OMP_NUM_THREADS (see the next item) [the user has no control over this]

1. setting the environment variable OMP_NUM_THREADS to the (max) number of
cores it will use

2. using the np= system keyword, which will override any setting of
OMP_NUM_THREADS in the previous step.
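For example (the program name is hypothetical; np= is the NEMO system
keyword mentioned above):

```shell
# sketch: three ways the thread count can be determined
someprog in=p.dat                         # all cores (omp_get_max_threads)
OMP_NUM_THREADS=4 someprog in=p.dat       # capped at 4 by the environment
OMP_NUM_THREADS=4 someprog in=p.dat np=2  # np= overrides: 2 threads
```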
slurm is a popular package you will find on large computer clusters.
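A minimal sbatch script sketch for such a cluster; the job name, time limit
and the NEMO command are hypothetical:

```shell
#!/bin/bash
#SBATCH -J nemo-run          # job name
#SBATCH -n 1                 # one task
#SBATCH -t 00:30:00          # 30 minute time limit
mkplummer p.dat 1000000      # the actual NEMO command
```

which would be submitted with sbatch(1).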
18-Aug-88    Document created     PJT
5-mar-89     tabccd added         PJT
6-mar-89     ds added             PJT
9-oct-90     fixed some typos     PJT
jan-2020     added pipe/shm       PJT
may-2021     OpenMP               PJT