Many programs are capable of producing standard output in (ASCII) tabular format. The output can be gathered into a file using standard UNIX I/O redirection. In the example
6% radprof r001.dat tab=true > r001.tab
the file r001.tab will contain (among others) columns with surface density and radius derived from the snapshot r001.dat. These ASCII 'table' files can be used by various programs for further display and analysis. NEMO also has a few programs available for this purpose (e.g. tabhist for analysis and histogram plotting, tablsqfit for checking correlations between two columns, and tabmath for general table handling). The manual pages of the relevant NEMO programs should tell you how to get nice tabular output, but sometimes it is also necessary to write a shell/awk script or parser to do the job. Note: the tab= keyword hints at the existence of such features.
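When no NEMO table program quite fits, a short awk one-liner often does the job. The sketch below is purely illustrative (the file contents and the scale factor are made up): it doubles the second column of a small two-column table, standing in for a radius/surface-density table like r001.tab:

```shell
# Stand-in for a two-column (radius, surface density) table:
printf '1.0 10.0\n2.0 5.0\n3.0 2.5\n' > demo.tab
# Double the second column, keeping the first as-is:
awk '{ print $1, 2*$2 }' demo.tab > scaled.tab
cat scaled.tab
```

The same pattern (select, rescale, or combine columns) covers most ad-hoc table chores that fall outside what tabmath and friends provide.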
A useful (public domain) program redir(1NEMO) has been included in NEMO 3.6 to be able to split the two standard UNIX output channels stdout and stderr into separate files.
7% redir -e debug.out tsf r001.dat debug=2
would run the tsf command, but redirect the standard error output (stderr) to the file debug.out. There are ways in the C-shell to do the same thing, but they are clumsy and hard to remember. In the Bourne shell (/bin/sh) this is accomplished much more easily:
7$ tsf r001.dat debug=2 2>debug.out
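For completeness, the standard Bourne/POSIX sh redirection forms are shown below with an ordinary UNIX command (a deliberately failing ls, so that something actually appears on stderr; the filenames are arbitrary):

```shell
# stdout and stderr to separate files:
ls /nonexistent-file > out.txt 2> err.txt || true
# out.txt is empty; err.txt holds the "No such file or directory" message.

# Both streams merged into one file (order of the redirections matters):
ls /nonexistent-file > both.txt 2>&1 || true
```

Note that `2>&1` must come after the `>` redirection: it means "send stream 2 to wherever stream 1 currently points".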
One last word of caution regarding tables: tables can also be used very effectively in pipes. For example, pipe the output of snapprint directly into tabhist to get a quick look at the distribution of radii:
8% snapprint r001.dat r | tabhist -
If the snapshot contains more than 10,000 points, tabhist cannot read the remainder of the file, since the default maximum number of lines read from a pipe is set by the keyword nmax=10000. To read all lines properly, you have to know (or overestimate) the number of lines. When the input is a regular file instead, table programs are always able to find the correct amount to allocate for their internal buffers by scanning over the file once; for very large tables this does introduce a little extra overhead.
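One way around the pipe limit is to count the lines first and pass the exact count via the nmax= keyword. The sketch below demonstrates the counting pattern with a generated stand-in table; the commented line shows where the tabhist invocation (with nmax= as described above) would go:

```shell
# Stand-in for the output of "snapprint r001.dat r" with 25000 points:
seq 1 25000 > big.tab
# Determine the exact line count up front:
nlines=$(wc -l < big.tab)
# tabhist - nmax=$nlines < big.tab    # would now read all 25000 lines
echo $nlines
```

When the true count is unknown (e.g. in a live pipe that cannot be replayed), a generous overestimate for nmax= is the safe choice, at the cost of some extra memory.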