I simulated a 20,000-node example with 1000 V applied, and the calculation takes about two days. How can I decrease the calculation time?
Could you give me a version that uses SuperLU on macOS?
I used gcc-11 to compile the project, but I get these errors:
g++-11: error: unrecognized command-line option '-weak-lblas'
g++-11: error: unrecognized command-line option '-weak-llapack'
That is a significant voltage ramp. Please check to make sure that the `default` option is being used instead of `log_damp` in the `variable_update` option of the `equation` command:
https://devsim.net/CommandReference.html#devsim.equation
The logarithmic damping option is best when the voltage ramp is on the order of 25 mV to 1 V. For 1000 V, you should not use this option.
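For reference, a minimal sketch of an `equation` call with damping disabled. The device and region names are hypothetical, and the equation name and model names (`PotentialEquation`, `PotentialIntrinsicCharge`, `PotentialEdgeFlux`) are taken from the DEVSIM simple_physics examples; substitute the names used in your own script:

```python
import devsim

# Assemble the potential equation with the standard Newton update.
# variable_update="default" avoids the logarithmic damping, which is
# intended for bias steps on the order of 25 mV to 1 V, not 1000 V.
devsim.equation(
    device="MyDevice",        # hypothetical device name
    region="MyRegion",        # hypothetical region name
    name="PotentialEquation",
    variable_name="Potential",
    node_model="PotentialIntrinsicCharge",
    edge_model="PotentialEdgeFlux",
    variable_update="default",
)
```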
Please see the `.travis.yml` file in the base directory to see how to call the `/scripts/build_macos.sh` script with the `gcc` option instead of `clang`. The `-weak-l` linker options are specific to the macOS linker.
You will then need to make the appropriate edits in `scripts/setup_osx_gcc.sh` and `cmake/osx-gcc.cmake` to use SuperLU instead of the Intel MKL. If you wish to use extended precision, you will need the `gfortran` compiler to compile `external/getrf`, or this can be disabled by setting `DEVSIM_EXTENDED_PRECISION=OFF`. Setting `MKL_PARDISO=OFF` implies using SuperLU for the direct solver.
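If you do keep extended precision enabled in the build, my understanding from the DEVSIM documentation is that it still has to be switched on at run time through parameters such as the ones below; please verify the parameter names against the documentation for your version:

```python
import devsim

# Assumed run-time switches for 128-bit extended precision.
# They only have an effect when DEVSIM was built with
# DEVSIM_EXTENDED_PRECISION=ON.
devsim.set_parameter(name="extended_solver", value=True)
devsim.set_parameter(name="extended_model", value=True)
devsim.set_parameter(name="extended_equation", value=True)
```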