I would like to put a change into the calls to Intel MKL to speed up simulation. Specifically, this is to reuse the symbolic factorization from the first Newton iteration for the remaining iterations.
The advantage is a significant speed increase, since the symbolic factorization is no longer redone for each nonlinear iteration. The regression tests still seem to run to completion.
However, there are slight differences in the results, as the symbolic factorization is based only on the simulation matrix from the first iteration. Note that the per-iteration matrix values are still used for the numerical factorization.
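To make the split concrete, here is a minimal sketch of the idea, not the MKL PARDISO call itself but an analogous illustration in Python with SciPy: the "symbolic" step is a fill-reducing ordering computed once from the first matrix's sparsity pattern, and the "numeric" step refactors the current iteration's values under that fixed ordering (the function names here are my own, for illustration only).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import splu

def symbolic_ordering(A):
    # "Symbolic" phase, done once: derive a fill-reducing ordering
    # from the sparsity pattern of the first Newton iteration's matrix.
    return reverse_cuthill_mckee(A.tocsr(), symmetric_mode=False)

def numeric_solve(A, b, perm):
    # "Numeric" phase, done every iteration: factor the current matrix
    # values under the fixed ordering. permc_spec='NATURAL' tells
    # SuperLU not to compute a new ordering of its own.
    Ap = A.tocsr()[perm, :][:, perm].tocsc()
    lu = splu(Ap, permc_spec='NATURAL')
    x = np.empty_like(b)
    x[perm] = lu.solve(b[perm])   # undo the symmetric permutation
    return x
```

The key point is that `symbolic_ordering` is computed from the structure only, so as long as the nonzero pattern does not change between Newton iterations, reusing it is safe; only the numeric values are refactored each time.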
Should this change be the default, or should it be enabled by the user on a per-simulation basis?
How big a change in results are we talking about? I'd always go for higher speed if the results are similar enough.
You can get an idea of the changes here:
It is mostly in the residual norms of the updates, but there are some slight effects on the actual results.
The speedup is significant. Running all of the tests single-threaded on Windows reduced the wall-clock time from about 90 sec to 80 sec.
For those still interested in this topic, the symbolic factorization change will be in version 2.6.0.
To get the old behavior, I have added an environment variable that forces a new symbolic factorization on each Newton iteration.
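The control flow behind such a toggle can be sketched as follows; this is only an illustration of the iteration logic, and the environment variable name below is hypothetical, not the one shipped with the release. `analyze` stands in for the symbolic phase and `factor_solve` for the per-iteration numeric phase.

```python
import os

def run_newton(analyze, factor_solve, matrices, rhs, env=os.environ):
    # Hypothetical variable name, for illustration only: when set to "1",
    # the symbolic phase is redone on every Newton iteration (old behavior);
    # otherwise the result from the first iteration is reused.
    redo_each_iter = env.get("SIM_REDO_SYMBOLIC", "0") == "1"
    symbolic = None
    solutions = []
    for A, b in zip(matrices, rhs):
        if symbolic is None or redo_each_iter:
            symbolic = analyze(A)              # symbolic factorization
        solutions.append(factor_solve(A, b, symbolic))  # numeric phase
    return solutions
```

With the variable unset, `analyze` runs once for the whole Newton loop; with it set, the code falls back to one symbolic factorization per iteration.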