In this post, I describe a GitHub repository: spinor-gpe, a Python package that simulates the quasi-2D pseudospin-1/2 Gross-Pitaevskii equation with NVIDIA GPU acceleration.
Introduction:
spinor-gpe is a high-level, object-oriented Python package for numerically solving the quasi-2D, pseudospinor (two-component) Gross-Pitaevskii equation (GPE), both for ground-state solutions and for real-time dynamics. The project grew out of a desire to make high-performance simulations of the GPE more accessible to researchers entering the field.
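To give a flavor of the numerics involved, here is a minimal sketch of split-step Fourier imaginary-time propagation toward the ground state of a single-component 2D GPE in dimensionless units. The grid size, harmonic trap, and interaction strength are assumptions chosen for illustration; this is not spinor-gpe's API, and the package itself treats the full two-component pseudospinor problem.

```python
import numpy as np

# Assumed illustrative parameters (not from the package):
N = 64        # grid points per axis
L = 16.0      # box size
g = 1.0       # interaction strength
dt = 1e-3     # imaginary-time step

x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")

V = 0.5 * (X**2 + Y**2)                            # harmonic trap
psi = np.exp(-(X**2 + Y**2) / 2).astype(complex)   # Gaussian initial guess
dA = (L / N) ** 2                                  # grid-cell area

def normalize(psi):
    """Rescale so the total density integrates to 1."""
    return psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dA)

psi = normalize(psi)
# Half-step kinetic propagator exp(-(dt/2) * k^2/2), applied in k-space.
kin_half = np.exp(-0.25 * dt * (KX**2 + KY**2))

for _ in range(2000):
    psi = np.fft.ifft2(kin_half * np.fft.fft2(psi))        # kinetic half-step
    psi *= np.exp(-dt * (V + g * np.abs(psi) ** 2))        # potential + interaction
    psi = np.fft.ifft2(kin_half * np.fft.fft2(psi))        # kinetic half-step
    psi = normalize(psi)   # imaginary time decays the norm; restore it
```

After enough iterations, `psi` relaxes to the trap's interacting ground state; the same split-step structure, evaluated with complex time, gives real-time dynamics.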
While this package is primarily built on NumPy, the main computational heavy lifting is performed with PyTorch, a deep-learning library commonly used in machine-learning applications. PyTorch offers a NumPy-like interface, but its backend can run either on a conventional processor or on a CUDA-enabled NVIDIA® graphics card. Running on a CUDA device provides a significant hardware acceleration of the simulations.
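As a concrete illustration of that NumPy-like interface (the variable names here are my own, not the package's), the same tensor code runs on CPU or GPU simply by changing the device the tensors live on:

```python
import torch

# Pick a CUDA device if one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A complex-valued "wavefunction" array, as a GPE solver would use.
psi = (torch.rand(256, 256) + 1j * torch.rand(256, 256)).to(device)

# FFTs are the core operation of split-step GPE solvers; torch.fft mirrors
# NumPy's np.fft interface but executes on whichever device holds the tensor.
psi_k = torch.fft.fft2(psi)
psi_back = torch.fft.ifft2(psi_k)
```

On a CUDA device the FFTs dispatch to the GPU automatically, which is where the speed-up over plain NumPy comes from.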
This package has been tested on Windows 10.
Dependencies:
Primary packages:
- PyTorch >= 1.8.0
- cudatoolkit >= 11.1
- NumPy
Other packages:
- matplotlib (visualizing results)
- tqdm (progress messages)
- scikit-image (matrix signal processing)
- ffmpeg = 4.3.1 (animation generation)
Installing Dependencies
The dependencies for spinor-gpe
can be installed directly into the new conda
virtual environment spinor
using the environment.yml file included with the package:
conda env create --file environment.yml
Note, this installation may take a while.
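For reference, an environment.yml for this setup would look roughly like the following. The package list is inferred from the dependencies above (the Python version is my assumption); the file shipped in the repository is authoritative:

```yaml
name: spinor
channels:
  - pytorch
  - conda-forge
dependencies:
  - python>=3.7
  - pytorch>=1.8.0
  - cudatoolkit>=11.1
  - numpy
  - matplotlib
  - tqdm
  - scikit-image
  - ffmpeg=4.3.1
```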
The dependencies can also be installed manually with conda into a new virtual environment:
conda create --name <new_virt_env_name>
conda activate <new_virt_env_name>
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge
conda install numpy matplotlib tqdm scikit-image ffmpeg spyder
Note
For more information on installing PyTorch, see its installation instructions page.
To verify that PyTorch was installed correctly, you should be able to import it:
>>> import torch
>>> x = torch.rand(5, 3)
>>> print(x)
tensor([[0.2757, 0.3957, 0.9074],
        [0.6304, 0.1279, 0.7565],
        [0.0946, 0.7667, 0.2934],
        [0.9395, 0.4782, 0.9530],
        [0.2400, 0.0020, 0.9569]])
Also, if you have an NVIDIA GPU, you can test that it is available for GPU computing:
>>> torch.cuda.is_available()
True
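If that call returns False (no NVIDIA GPU, or a CPU-only PyTorch build), the simulations can still run on the CPU, just more slowly. A common defensive pattern is:

```python
import torch

# Report which device will be used; fall back to CPU when CUDA is absent.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("Using GPU:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No CUDA device found; running on CPU.")
```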
GitHub:
https://github.com/ultracoldYEG/spinor-gpe