examples
====Description====

This page provides examples on how to use the cluster. There are language-specific examples for **C/C++** and **Python**, which show how to compile and run applications written in those languages on the cluster. Additionally, there are examples which use **MPI** and the GPU.
----
====PyTorch====
Consider the following simple Python test script ("pytorch_test.py"):

<code python>
import torch

def test_pytorch():
    print("PyTorch version:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())

    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")

    # Simple tensor operation
    x = torch.tensor([1.0, 2.0, 3.0], device=device)
    y = torch.tensor([4.0, 5.0, 6.0], device=device)
    z = x + y
    print("x + y =", z)

test_pytorch()
</code>

To test it on the unite cluster you can use the following sbatch script to run it:
<code bash>
#!/bin/bash
#SBATCH --job-name=pytorch_test
#SBATCH --output=pytorch_test.out
#SBATCH --error=pytorch_test.err
#SBATCH --time=00:10:00
#SBATCH --partition=a40
#SBATCH --gres=gpu:1
#SBATCH --mem=4G
#SBATCH --cpus-per-task=2

# Load necessary modules (modify based on your system)
module load python/3.13

# Activate your virtual environment if needed
# source ~/venv/bin/activate

# Run the PyTorch script
python3.13 pytorch_test.py
</code>
----
====Pandas====
Consider the following simple Python test script ("pandas_test.py"):
<code python>
import pandas as pd
import numpy as np

# Create a simple DataFrame
data = {
    'A': [1, 2, 3],
    'B': [4, 5, 6],
    'C': [7, 8, 9]
}
df = pd.DataFrame(data)
print("Original DataFrame:")
print(df)

# Test basic operations
print("Column sums:")
print(df.sum())

print("Column means:")
print(df.mean())

# Adding a new column
df['D'] = df['A'] + df['B']
print("DataFrame with new column D:")
print(df)

# Filtering rows
filtered_df = df[df['A'] > 1]
print("Rows where A > 1:")
print(filtered_df)

# Check if NaN values exist
print("NaN values per column:")
print(df.isna().sum())
</code>

You can use the following sbatch script to run it:
<code bash>
#!/bin/bash
#SBATCH --job-name=pandas_test
#SBATCH --output=pandas_test.out
#SBATCH --error=pandas_test.err
#SBATCH --time=00:10:00
#SBATCH --partition=a40
#SBATCH --gres=gpu:1
#SBATCH --mem=4G
#SBATCH --cpus-per-task=2

# Load necessary modules (modify based on your system)
module load python/3.13

# Activate your virtual environment if needed
# source ~/venv/bin/activate

# Run the Pandas script
python3.13 pandas_test.py
</code>
----
====Simple C/C++ program====
The following is a simple **C/C++** program which performs element-wise addition of 2 vectors. It does **not** use any dependent libraries:
----
====C++ program which uses MPI====
The following is an example **C/C++** application which uses **MPI** to perform element-wise addition of two vectors. Each **MPI** task computes the addition of its local region and then sends it back to the leader. Using **MPI** with **Python** is similar, assuming that you know how to manage **Python** dependencies on the cluster, which is described in the previous section. What is important here is to understand how to manage the resources of the system.
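To illustrate the resource-management side, here is a sketch of an sbatch script for such an MPI job (the partition defaults, task counts, module name, and the binary name mpi_vector_add are assumptions; adjust them for your site):

```shell
#!/bin/bash
#SBATCH --job-name=mpi_vector_add
#SBATCH --output=mpi_vector_add.out
#SBATCH --error=mpi_vector_add.err
#SBATCH --time=00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1G

# Load an MPI implementation (modify based on your system)
module load openmpi

# srun launches one process per Slurm task (2 nodes x 4 tasks = 8 ranks here)
srun ./mpi_vector_add
```

The number of MPI ranks is controlled entirely by the `--nodes` and `--ntasks-per-node` directives; the program itself queries its rank and communicator size at runtime.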
====C++ program which uses GPU====
<code cpp>
    CUDA_CHECK(cudaMalloc(&d_c, size));

    CUDA_CHECK(cudaMemcpy(d_a, h_a, size, cudaMemcpyHostToDevice));
    CUDA_CHECK(cudaMemcpy(d_b, h_b, size, cudaMemcpyHostToDevice));
</code>

<code cpp>
    CUDA_CHECK(cudaFree(d_b));
    CUDA_CHECK(cudaFree(d_c));
    free(h_a);
    free(h_b);
</code>
examples.1767036655.txt.gz · Last modified: 2025/12/29 21:30 by dimitar