Hi there,
I’m trying to simulate a device operating at a 150 V bias using the Python API. My strategy is to compute the field at voltages from 0 up to the final voltage in steps of 0.1 V.
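Roughly, the sweep loop looks like this (the device and bias parameter names below are placeholders for my actual setup):

import devsim

# ramp the contact bias from 0 V to 150 V in 0.1 V steps,
# solving the DC problem at each bias point
for i in range(1501):
    devsim.set_parameter(device="mydevice", name="top_bias", value=i * 0.1)
    devsim.solve(type="dc", absolute_error=1e10, relative_error=1e-10,
                 maximum_iterations=30)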
I’ve found that after every devsim.solve call completes, the memory used by the process never drops, and it soon reaches the limit of my machine’s RAM, which makes it impossible for me to finish the computation.
Is there any way to release the memory?
Many thanks.
Hi @Chenxi_Fu
Thanks for trying the software. It is possible that there is a memory leak somewhere in the software, including the solver.
If you want to try increasing the voltage in larger steps, and you are using some of the existing scripts, you may try switching from logarithmic to default damping after the simulation has been set up:

from devsim import *

# fetch the options the equation was created with, switch the damping
# mode, and reapply the equation
opts = get_equation_command(name="PotentialEquation", device=device, region=region)
opts['variable_update'] = "default"
equation(**opts)
The reset_devsim command can be used to clear the internal storage. However, this would require reloading your mesh and physics afterward. If that doesn’t help, then it is possible there is a leak within devsim itself, or even in the solver.
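For example (the file name here is a placeholder for a device you previously saved with write_devices):

import devsim

devsim.reset_devsim()                      # clear all devices, meshes, models, and equations
devsim.load_devices(file="my_device.dat")  # reload a previously saved device
# equations and boundary conditions must then be redefined before solving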
If you are using the Intel MKL, it may be possible to disable its memory manager by setting the appropriate environment variable before running your script:
https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-1/mkl-disable-fast-mm.html
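For example, on Linux or macOS:

MKL_DISABLE_FAST_MM=1 python your_script.py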
It is also possible to use UMFPACK as an alternative solver to help diagnose the issue, by running the script with:
python -mdevsim.umfpack.umfshim ssac_cap.py
where the last argument is the name of your script.
If you are able to provide a minimal reproducible example, please share it, preferably at:
https://github.com/devsim/devsim/issues
Regards,
Juan
Hi Juan,
Thank you for your kind answer. I’ve tried the reset_devsim method with load_devices, but found it very difficult to redefine all the equations and boundary values.
I wonder whether there is an effective solution using load_devices, that is, initializing the problem from the devsim file.
Many thanks,
Chenxi
By the way, the leak seems to happen only with Gmsh-meshed devices…
Please see these examples for restarting using the devsim written devices:
testing/mos_2d.py
testing/mos_2d_restart.py
testing/mos_2d_restart2.py
However, the easiest approach may be to save “Potential”, “Electrons”, and “Holes” for each region on the device, along with the contact biases. Then, starting from your drift-diffusion simulation, restore these values before continuing, as in the sketch below.
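A minimal sketch of that approach; the device, region, and bias parameter names are placeholders:

import devsim

device, region = "mydevice", "bulk"
names = ("Potential", "Electrons", "Holes")

# save the converged solution and contact bias at the current step
saved = {n: devsim.get_node_model_values(device=device, region=region, name=n)
         for n in names}
saved_bias = devsim.get_parameter(device=device, name="top_bias")

# ... reset_devsim, reload the mesh, and redefine the physics ...

# restore the solution values and the bias before continuing the sweep
for n, vals in saved.items():
    devsim.set_node_values(device=device, region=region, name=n, values=vals)
devsim.set_parameter(device=device, name="top_bias", value=saved_bias)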
Please also try setting the environment variable to disable the Intel MKL memory manager.
Thank you, these examples are very useful, and I’m going to write a periodic reset-and-restart process based on these files.
Changing the environment variable didn’t make a visible difference.
Sorry, but my repo is too complicated to extract a minimal reproducible example right away; I’m planning to make one in two weeks and raise an issue then. My mesh has about 70,000 nodes, and after each voltage step the memory usage increases by ~200 MB.
Thanks again for your help!
The program fails to converge a few volts after I reload the values, equations, and contacts. Weird.
I’ve tested the program with a long-running 2D simulation on macOS, Windows, and Linux, and I am not seeing a memory leak.
Please make sure you are running the latest version of the software. The latest version is 2.8.4. You can find your version using:
import devsim
print(devsim.get_parameter(name='info'))
To get the latest version:
pip install devsim --upgrade
I have upgraded from 2.6.3 to the latest version and found that add_db_entry() has been removed.
Our simulation includes multiple materials, and different regions of the same material share the same fundamental parameters. Is there a similar way for me to define parameters across all regions of the same material? Many thanks.
Hi @Chenxi_Fu,
I did not realize people were using the material db; it was removed in version 2.8.1.
You can call devsim.get_material to get the material name for your region, and then set the parameters on those regions based on a lookup in your database.
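A minimal sketch, using a plain Python dict in place of the removed database (the parameter names and values are only illustrative):

import devsim

# per-material parameter table standing in for the removed material db
material_db = {
    "Silicon": {"Permittivity": 11.7 * 8.854e-14},
    "Oxide":   {"Permittivity": 3.9 * 8.854e-14},
}

device = "mydevice"
for region in devsim.get_region_list(device=device):
    material = devsim.get_material(device=device, region=region)
    for name, value in material_db.get(material, {}).items():
        devsim.set_parameter(device=device, region=region, name=name, value=value)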
Looking in CHANGES.md, it is unclear whether anything since version 2.6.3 would affect memory usage, but I suggest the bug fixes are probably worth it.
If you do not have time to adapt your code to the database removal, you can get version 2.8.0 using:
pip install devsim==2.8.0
I have changed the parameter-setting calls to set_parameter and re-run the program, and found that not only is the leak not fixed, but an additional convergence failure has emerged… I have uploaded the reproducible example on GitHub.
For those interested in the topic: we were not able to reproduce a memory leak. However, in the GitHub issue linked above, it was found that using UMFPACK instead of Intel MKL Pardiso as the solver resulted in a lower memory footprint. This particular case also converged over a wider range of biases with UMFPACK.
Much lower: for me the difference is between 0.5 TB (if it’s even reachable) and 1 GB.