Re: Galaxy Interaction Simulations

Jon Leech
8 Mar 1994 00:14:48 -0500

In article <2l54nt$rkp@darkstar.UCSC.EDU> hos@helios.UCSC.EDU (Chris Mihos) writes:
>EVEN IF you satisfy the O-P criterion through a dark halo, in N-body
>calculations, bar modes will grow -- albeit slowly -- due to the
>amplification of discreteness noise in the halo particle distribution
>(see the Sellwood 89 MNRAS article I referred to above). To defeat
>this (ie to ensure the power in the m=2 mode stays small), you need to
>reduce the discreteness noise in the halo potential ("root-N noise")
>by increasing N. Therefore, to keep a disk from becoming bar unstable
>over many rotational periods, you need MANY MANY particles (10^6).

At the near certainty of demonstrating my ignorance, a suggestion: I've
been delving into molecular dynamics computations recently. I already had
some familiarity with computational aspects of astrophysical N-body
simulations. Superficially, the computations are similar: a gravitational
potential in one case, electrostatic and bonding potentials in the other.
And the same efficiency hacks (tree codes, Greengard's fast multipole
method, etc.) are used in both.

What's different about MD computations is that most of the force model
is based not on the underlying QM which (perhaps) truly describes the
dynamics, but on nonphysical approximations: springs for bonds,
Lennard-Jones potentials, and so on. It's also difficult to compare the
results to experiment (positions from crystallographic structures are only
accurate to ~1 Angstrom). And the models are not stable over long timescales
(hundreds of picoseconds :-) As a result, the major MD packages have lots of
tuneable parameters in their force models, and end up with different
parameters even for something as simple as bond lengths of a water molecule.
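For concreteness, the two approximations mentioned above can be sketched in
a few lines of Python. The parameter values below are made up for the
example (loosely in the ballpark of common water models), not taken from
any real MD package:

```python
# Illustrative sketch of the nonphysical MD approximations mentioned
# above; parameter values are invented for the example, not from any
# real force field.

def harmonic_bond_energy(r, r0=0.9572, k=450.0):
    """Spring model of a covalent bond: E = k * (r - r0)^2.
    r0 is the equilibrium length (Angstroms), k the stiffness."""
    return k * (r - r0) ** 2

def lennard_jones_energy(r, epsilon=0.1521, sigma=3.1507):
    """12-6 Lennard-Jones potential for non-bonded pairs:
    E = 4 * eps * [(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The LJ well bottom sits at r = 2^(1/6) * sigma with depth -epsilon,
# which is exactly the kind of knob that gets tuned per molecule type.
r_min = 2.0 ** (1.0 / 6.0) * 3.1507
print(round(lennard_jones_energy(r_min), 6))  # -0.1521, i.e. -epsilon
```

Each MD package picks its own (r0, k, epsilon, sigma) per atom pair, which
is why even water comes out differently between them.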

I don't think this is *good*, in an abstract sense. Given fast enough
machines (say, in 10 or 15 years :-), presumably the MD guys would run
full-up quantum density functional theory and try to dispose of the
mechanics models. But what it does seem to do is provide a bunch of different
knobs to tune the models for particular *types* of MD - small inorganic
complexes, amino acid residues, protein backbone segments in solution, and
so on. And the tuning is getting good enough that the models seem to be
developing predictive value - even though they *know* the models are not
physically correct, they're close enough to be useful. Is there a potential
analogy to the astrophysics models? Some terms that might be added to tune
them to specific types of simulation?

>A treecode is an Nbody code which sorts the particles into nested
>hierarchical structures (hence the "tree" label) and evaluates the
>force between each particle and either other particles or hierarchical
>groups of particles, depending on accuracy criteria. It has the advantage
>of the CPU time scaling as NlogN rather than N^2 as direct nbody routines do.

I think it's more accurate to say it's bounded by O(N log N) below and
O(N^2) above (if not worse), since the asymptotic performance depends on
the accuracy criterion: tighten the cell-opening tolerance far enough and
the tree walk degenerates toward direct summation.
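That accuracy knob is the cell-opening criterion. A toy 1-D Barnes-Hut-style
sketch (purely illustrative; real codes are 3-D, with softening and higher
multipole moments, and this one assumes distinct particle positions) shows
where the tradeoff lives: the parameter theta below decides whether a cell's
monopole moment is good enough or its children must be visited.

```python
# Toy 1-D tree code: illustrative only, not a real N-body integrator.

class Node:
    """A cell holding total mass and center of mass of its particles.
    particles is a list of (position, mass); positions assumed distinct."""
    def __init__(self, particles, lo, hi):
        self.size = hi - lo
        self.mass = sum(m for _, m in particles)
        self.com = sum(x * m for x, m in particles) / self.mass
        self.left = self.right = None
        if len(particles) > 1:
            mid = 0.5 * (lo + hi)
            left = [(x, m) for x, m in particles if x < mid]
            right = [(x, m) for x, m in particles if x >= mid]
            if left:
                self.left = Node(left, lo, mid)
            if right:
                self.right = Node(right, mid, hi)

def accel(node, x, theta):
    """Acceleration at x. Use a cell's monopole when size/distance < theta;
    otherwise open the cell and recurse into its children."""
    if node is None:
        return 0.0
    is_leaf = node.left is None and node.right is None
    d = node.com - x
    if d == 0.0:
        # the particle itself (leaf), or x sits exactly at a cell's COM
        return 0.0 if is_leaf else \
            accel(node.left, x, theta) + accel(node.right, x, theta)
    if is_leaf or node.size / abs(d) < theta:
        # monopole approximation, G = 1: a = m * d / |d|^3
        return node.mass * d / abs(d) ** 3
    return accel(node.left, x, theta) + accel(node.right, x, theta)
```

With a small theta nearly every cell gets opened and the cost climbs back
toward the O(N^2) of direct summation; with a generous theta whole subtrees
collapse to a single center-of-mass term, which is where the N log N scaling
comes from.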