EDIT: With regards to my current batch of optimisations, it would be really handy to know the maximum possible size of a neighbour list (i.e. the largest number of particles it would be possible to fit into 9 grid cells, given the size of the cells, the size of the particles, and, if it's relevant, the numbers used to calculate the forces that keep the particles separate). Do you know if there's a way to calculate that, or would I just have to pick a suitably large number, load the code up with asserts, and hope for the best?
Unfortunately that number is unbounded in theory (the separation forces are soft, so nothing strictly limits how many particles can pile into one cell), so I think the thing to do is just look at the largest count you actually see in practice and allocate for twice that size.
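To make the "suitably large number plus asserts" approach a bit more systematic, you can also track a high-water mark while you run, so the assert tells you when your guess was too small and the recorded maximum tells you what to raise it to. A minimal sketch - `MAX_NEIGHBOURS` and the struct layout here are illustrative, not from your code:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative capacity: pick roughly twice the largest neighbour count
// observed in practice, per the advice above.
constexpr std::size_t MAX_NEIGHBOURS = 256;

struct NeighbourList {
    int indices[MAX_NEIGHBOURS];
    std::size_t count = 0;
    std::size_t highWater = 0;  // largest count ever seen, for tuning

    void add(int particleIndex) {
        // Fires in debug builds if the empirical bound was too optimistic.
        assert(count < MAX_NEIGHBOURS && "neighbour list overflow - raise MAX_NEIGHBOURS");
        indices[count++] = particleIndex;
        if (count > highWater) highWater = count;
    }

    // Reset for the next particle/frame; highWater survives on purpose.
    void clear() { count = 0; }
};
```

Logging `highWater` at shutdown gives you the real-world bound to size against.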
You may get some improvement in performance by pulling some of those list creations out of the inner loop - in Java it doesn't seem to make much of a difference, but in C++ you may see better performance from clearing a vector than from creating a new one. From what I remember, the STL isn't very careful about keeping memory and object allocations to a minimum, so it might do a lot of copying and whatnot.
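In C++ terms, the clear-and-reuse pattern looks something like the sketch below. `std::vector::clear()` destroys the elements but leaves the allocated capacity in place, so after the buffer has grown once, later iterations do no heap allocation at all. The function here is a made-up demonstration (it counts capacity changes as a proxy for reallocations), not anything from the actual simulation:

```cpp
#include <cstddef>
#include <vector>

// Hoist the list out of the loop and clear() it each iteration instead of
// constructing a fresh vector. Returns how many times the buffer grew, to
// show that only the first iteration pays for allocation.
std::size_t countReallocations(int iterations, int neighboursPerIteration) {
    std::vector<int> neighbours;   // created once, outside the loop
    std::size_t reallocations = 0;

    for (int i = 0; i < iterations; ++i) {
        neighbours.clear();        // keeps capacity; frees nothing
        for (int j = 0; j < neighboursPerIteration; ++j) {
            const std::size_t before = neighbours.capacity();
            neighbours.push_back(j);
            if (neighbours.capacity() != before) ++reallocations;
        }
    }
    return reallocations;
}
```

All the growth happens in iteration one; a hundred iterations reallocate no more than a single one does.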
If worst comes to worst, you may want to just write your own simple growable integer array class and use that instead - even in Java, that's what I should be doing, as the built-in ArrayList class is slow as crap (not least because it autoboxes int primitives into Integer objects...yuck). In all my "real" work that's how I handle growable primitive arrays in Java these days (by writing them myself so I control everything), but I'm not sure how well that translates to C++ - maybe the STL is already optimized to handle primitives fairly well? It sure doesn't sound like it, though: if Java's crappy container classes (crappy for primitives, mind you - for objects they're pretty good) perform better, that's some seriously inefficient C++ code...
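For reference, the kind of hand-rolled growable primitive array I mean is only a few lines. One caveat on the C++ side: `std::vector<int>` already stores the ints directly in one contiguous buffer with no boxing, so in C++ this class mostly buys you control over the growth policy rather than the big win it gives over Java's `ArrayList<Integer>`. A sketch (error handling on `realloc` omitted for brevity):

```cpp
#include <cstddef>
#include <cstdlib>

// Minimal growable int array with a doubling growth policy - roughly what
// std::vector<int> does internally, but with everything under your control.
class IntArray {
public:
    ~IntArray() { std::free(data_); }

    void push(int value) {
        if (size_ == capacity_) grow();
        data_[size_++] = value;
    }

    int operator[](std::size_t i) const { return data_[i]; }
    std::size_t size() const { return size_; }
    void clear() { size_ = 0; }  // keep the buffer for reuse

private:
    void grow() {
        capacity_ = capacity_ ? capacity_ * 2 : 16;
        // realloc(nullptr, n) behaves like malloc(n) on first growth.
        data_ = static_cast<int*>(std::realloc(data_, capacity_ * sizeof(int)));
    }

    int* data_ = nullptr;
    std::size_t size_ = 0;
    std::size_t capacity_ = 0;
};
```

The Java version is the same shape, just with `int[]` and `System.arraycopy` doing the growing.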
Also, are you able to shed any light on the reasons for the SPH not being scale invariant?
I spent a bit of time trying to figure this one out when I noticed it, but I don't think I ever cracked it. Every single one of the equations looks like it should be scale invariant (i.e. if you shrink all lengths by a factor of 10, the equations seem like they should all scale appropriately, with no leftover length factors or anything like that). But if you play around with the parameters, it just isn't so, and things go unstable if the size gets too small. Interestingly enough, until you hit that threshold the dynamics appear mostly scale invariant, but they eventually break down. It may be something I've done wrong - this isn't a true SPH implementation, and in fact I forget where I even saw this particular method. It may be worthwhile to try replacing the pressure/surface tension calculations with more standard SPH approaches; I think this one dumbs some of the steps down a little, and that may be responsible for some of the weirdness.
I think I ultimately decided it was a precision issue, though I never finished tracking it down, and I remember being at least a bit dissatisfied with that answer. There could be a factor I missed somewhere that makes this explicitly scale dependent (maybe I didn't adjust pressures properly or something? It's been a while since I looked at this...). If you happen to discover the problem, please do let me know!