Re: Galaxy Interaction Simulations

Steinn Sigurdsson (steinly@topaz.ucsc.edu)
8 Mar 94 11:58:51

In article <2lh1k8INNj9c@watt.cs.unc.edu> leech@cs.unc.edu (Jon Leech) writes:

In article <2l54nt$rkp@darkstar.UCSC.EDU> hos@helios.UCSC.EDU (Chris Mihos) writes:
>However,
>EVEN IF you satisfy the O-P criterion through a dark halo, in N-body
>calculations, bar modes will grow -- albeit slowly -- due to the
>amplification of discreteness noise in the halo particle distribution
>(see the Sellwood 89 MNRAS article I referred to above). To defeat
>this (i.e. to ensure the power in the m=2 mode stays small), you need to
>reduce the discreteness noise in the halo potential ("root-N noise")
>by increasing N. Therefore, to keep a disk from becoming bar unstable
>over many rotational periods, you need MANY MANY particles (10^6).
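
[A quick toy check of the root-N point Chris makes -- a sketch of my
own in Python, not anything from his simulations: seed the m=2 mode
with N random azimuths and watch its mean amplitude fall off as
1/sqrt(N).]

    import numpy as np

    rng = np.random.default_rng(42)

    def m2_amplitude(n_particles, n_trials=50):
        # Mean |m=2| Fourier amplitude of n_particles random azimuths.
        amps = []
        for _ in range(n_trials):
            phi = rng.uniform(0.0, 2.0 * np.pi, n_particles)
            # Normalized m=2 coefficient of the azimuthal distribution.
            amps.append(abs(np.sum(np.exp(2j * phi))) / n_particles)
        return np.mean(amps)

    for n in (10**3, 10**4, 10**5, 10**6):
        print(f"N = {n:7d}   <|A_2|> ~ {m2_amplitude(n):.2e}")
    # Each factor of 100 in N knocks the seed amplitude down by ~10x,
    # which is why suppressing bar growth takes very large N.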

At the near certainty of demonstrating my ignorance, a suggestion: I've
been delving into molecular dynamics (MD) computations recently, and I
already had some familiarity with the computational aspects of
astrophysical N-body simulations. Superficially, the computations are
similar: a gravitational potential in one case versus electrostatic and
bonding potentials in MD, and the same efficiency hacks (tree codes,
Greengard-style multipole methods, etc.) are used in both.

What's different about MD computations is that most of the force model
is based not on the underlying QM, which (perhaps) truly describes the
...
developing predictive value; even though they *know* the models are not
physically correct, they're close enough to be useful. Is there a
potential analogy to the astrophysics models -- some terms that might
be added to tune them to specific types of simulation?

Yes.
There is currently a lot of interest in "hybrid" codes, where
different parts of the physics are handled by different schemes.
A lot of the N-body codes, and all the hydro codes, already
average over or approximate the microphysics; the question is
what is good enough for the different astrophysical systems of
interest. That is a question of current research that Chris,
for example, is actively involved in.
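
As a schematic of the hybrid idea (a toy sketch with made-up pieces,
not any production code): operator-split each timestep so gravity is
advanced by a kick-drift-kick N-body scheme while the unresolved
microphysics is advanced by an averaged stand-in model.

    import numpy as np

    def gravity_kick(vel, pos, masses, dt, eps=0.01):
        # Direct-sum softened gravity (G = 1); a real code would swap
        # in a tree or grid solver here.
        for i in range(len(pos)):
            dr = pos - pos[i]
            r2 = np.sum(dr * dr, axis=1) + eps**2
            r2[i] = np.inf                    # no self-force
            vel[i] += dt * np.sum((masses / r2**1.5)[:, None] * dr, axis=0)

    def cooling_step(u, dt, t_cool=1.0):
        # Averaged "microphysics": exponential decay of the internal
        # energy u on a fixed cooling time (a placeholder, not a real
        # cooling function).
        return u * np.exp(-dt / t_cool)

    def hybrid_step(pos, vel, u, masses, dt):
        gravity_kick(vel, pos, masses, 0.5 * dt)   # half kick
        pos += dt * vel                            # drift
        u = cooling_step(u, dt)                    # sub-grid physics
        gravity_kick(vel, pos, masses, 0.5 * dt)   # half kick
        return u

The point is only the structure: each piece of physics gets whatever
scheme suits it, and the microphysics module is where the averaging
and approximation live.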

>A treecode is an N-body code which sorts the particles into nested
>hierarchical structures (hence the "tree" label) and evaluates the
>force between each particle and either other particles or hierarchical
>groups of particles, depending on accuracy criteria. It has the advantage
>of the CPU time scaling as N log N rather than N^2, as direct N-body codes do.
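
[For concreteness, a minimal Barnes-Hut-style sketch in Python -- my
own 2D toy, not anyone's production treecode. The accuracy criterion
is the usual opening angle: a cell of size s at distance d is treated
as a single point mass whenever s/d < theta.]

    import numpy as np

    THETA = 0.5   # opening angle; smaller is more accurate and slower
    EPS   = 1e-3  # force softening

    class Node:
        def __init__(self, center, size):
            self.center, self.size = center, size
            self.mass = 0.0
            self.com = np.zeros(2)    # center of mass of the cell
            self.children = None      # four sub-cells once subdivided
            self.particle = None      # a leaf holds at most one particle

        def insert(self, p, m):
            if self.mass == 0.0 and self.children is None:
                self.particle, self.com, self.mass = p, p.copy(), m
                return
            if self.children is None:     # subdivide an occupied leaf
                old_p, old_m = self.particle, self.mass
                self.children = [Node(self.center + 0.25 * self.size *
                                      np.array([dx, dy]), 0.5 * self.size)
                                 for dx in (-1, 1) for dy in (-1, 1)]
                self.particle = None
                self._child_for(old_p).insert(old_p, old_m)
            self._child_for(p).insert(p, m)
            self.com = (self.com * self.mass + p * m) / (self.mass + m)
            self.mass += m

        def _child_for(self, p):
            return self.children[2 * (p[0] > self.center[0]) +
                                 (p[1] > self.center[1])]

        def accel(self, p):
            dr = self.com - p
            d = np.sqrt(dr @ dr) + EPS
            if self.children is None or self.size / d < THETA:
                return self.mass * dr / d**3   # whole cell as one point
            return sum(c.accel(p) for c in self.children if c.mass > 0.0)

    # Usage: build the tree once, then evaluate the force on each particle.
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1.0, 1.0, size=(1000, 2))
    root = Node(np.zeros(2), 4.0)
    for p in pos:
        root.insert(p, 1.0 / len(pos))
    acc = np.array([root.accel(p) for p in pos])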

I think it's more accurate to say it's bounded between O(N log N) and
O(N^2) (if not worse), since the asymptotic performance depends on the
accuracy criterion.

Worse. The order quoted only tells you how each force calculation
scales with N; actually doing the dynamics to some finite precision
typically forces the scaling to be a steeper function of N. Basically,
larger N permits stronger clustering and resolves smaller spatial
scales, which drives the integration timestep down as some power of N.
This is a very crude explanation -- the real problem is a bit more
subtle in general, and there are exceptions, but only for certain
physical systems.
See, for example, any of a number of review articles by Piet Hut.
Of course you can always do _worse_ than the nominal scaling
properties of your algorithm :-)
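
To put very rough numbers on that (purely illustrative assumptions: a
tree force evaluation per step, and a timestep shrinking as N^(-1/3)
with the resolved interparticle spacing):

    import numpy as np

    for N in (10**4, 10**5, 10**6):
        steps = N ** (1.0 / 3.0)        # dt ~ N^(-1/3), say
        per_step = N * np.log2(N)       # tree force evaluation
        print(f"N = {N:7d}  cost per crossing time ~ {steps * per_step:.2e}")
    # i.e. roughly N^(4/3) log N overall, steeper than the per-step
    # N log N would suggest.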

* Steinn Sigurdsson Lick Observatory *
* steinly@lick.ucsc.edu "standard disclaimer" *
* The laws of gravity are very,very strict *
* And you're just bending them for your own benefit - B.B. 1988*