Steamin’

[Screenshot: CPU_usage]

Yeah! It feels great when I get the workhorse computer really chugging along. 85% is unusual to see on a normal run, since this beast (a 2010 3.32 GHz Mac Pro quad core) has a lot of computational capacity that you don’t need for most tasks. I’m getting up to 85% because all four cores are crunching away independently on different parts of the same problem (written in MATLAB). Each point on the resulting graph will be the result of having computed 2000 points, and each of those 2000 points comes from solving a boundary value problem discretized into a million points. See an earlier post for more about that.
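
Just to give a flavour of the structure (this is not the actual research code; the functions odefun, bcfun, sample_parameter and extract_observable, and the numbers in the initial guess, are placeholders I’ve made up for illustration), the computation of a single graph point might be sketched in MATLAB roughly like this:

```matlab
% Sketch of how one graph point is assembled: 2000 runs, each solving a
% boundary value problem evaluated on a very fine grid.  odefun, bcfun,
% sample_parameter and extract_observable are hypothetical placeholders.
nruns = 2000;            % runs that go into a single graph point
ngrid = 1e6;             % million-point evaluation grid for each solution
vals  = zeros(nruns, 1);

for k = 1:nruns
    lambda  = sample_parameter(k);                 % placeholder parameter draw
    xmesh   = linspace(0, 1, ngrid);               % fine grid for the solution
    solinit = bvpinit(linspace(0, 1, 50), [0 0]);  % crude initial guess
    sol     = bvp4c(@(x, y) odefun(x, y, lambda), @bcfun, solinit);
    y       = deval(sol, xmesh);                   % solution on the fine grid
    vals(k) = extract_observable(xmesh, y);        % placeholder measurement
end

graph_point = mean(vals);   % one hard-won point for the final graph
```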

(Update: I’m now running even more tasks associated with this problem, and it’s up to 96% now:
[Screenshot: CPU_usage_2]
…with nothing idle at all. This is probably not as efficient any more, but it’s only for a few hours and then it’s back to 85%. In the meantime, it’s amusing how it makes me feel I’m doing more work somehow…)

I assigned different parts of the graph to different processes by hand, not using a fancy parallelization routine, since in this case I don’t need one (and don’t know how to use one anyway). It takes about 24 hours to produce a set of 2000 and hence one new point, but the great thing is that I get the points in batches of four, because of the parallel run. There’s something nice about logging on and seeing the little collection of precious, shiny new points that have been mined after all that work. This morning I did the data analysis on the small cluster of them I have so far, and they are already lined up along the curve I expected, so I’m quite pleased. This all relates to an earlier work I talked about, if you’re interested.
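
For what it’s worth, the by-hand splitting is nothing more sophisticated than launching four separate MATLAB instances, each told (by editing one number) which slice of the parameter range to work through. A rough sketch, with made-up names (compute_graph_point standing in for the 24-hour, 2000-run job above, and the parameter range purely illustrative):

```matlab
% Hypothetical driver: each of the four MATLAB instances is started by hand
% with a different chunk_id (1, 2, 3 or 4), so the four cores chew through
% different slices of the parameter range independently.
chunk_id   = 1;                                 % set by hand in each instance
nchunks    = 4;
all_params = linspace(0.1, 2.0, 40);            % full set of graph points (illustrative)
my_params  = all_params(chunk_id:nchunks:end);  % interleaved slice for this instance

for p = my_params
    pt    = compute_graph_point(p);             % the ~24-hour, 2000-run job
    fname = sprintf('point_chunk%d_%04.2f.mat', chunk_id, p);
    save(fname, 'p', 'pt');                     % each precious point saved as it lands
end
```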

More later…

-cvj
