Week 5: Parallel Programming
April 7, 2023
Hi everyone! This week was a heavy reading week with not much coding. I read papers about parallel implementations of the Lattice Boltzmann Method to come up with a plan of how I was going to approach it.
~~~
Introduction to Parallel Computing:
Parallel computing means splitting a process into many computations that can be carried out simultaneously. The goal is to speed up a large process by breaking it into many smaller pieces that run at the same time. For LBM, when the lattice grid gets really big, the simulation can start to take forever, and the nature of the algorithm makes it well suited for parallel computing.
There are 3 main types of parallel processing: perfectly parallel, where subprocesses run independently without any communication; shared memory, where all subprocesses can see the same information; and message passing, where each subprocess keeps its own data and only the necessary information is passed between subprocesses.
Specific to LBM:
For LBM, I think the most suitable approach is message passing. The computation at any lattice node for one time step relies only on information from the neighboring nodes at the previous time step. Recalling previous blogs, LBM's algorithm has 2 steps: streaming and collision. The calculations performed in the collision step use the particles that were streamed in from the immediately neighboring nodes. This means the calculation for node (50,50) only needs information from the 8 nodes around it: (50,51), (50,49), (51,51), (51,50), (51,49), (49,49), (49,50), (49,51). Some more complicated LBM models require information from more than one layer of neighboring nodes.

Let's look at this example. The channel is divided into 3 green areas representing 3 subprocesses (let's just call them 1, 2, and 3 from left to right). The unshaded grid parts are where information is shared between the subprocesses. The image below shows information being transferred from one subprocess to another. While streaming, the information from the dark blue region in subprocess 1 is sent to subprocess 2. Every particle carries a unit velocity, but only those with a rightward component get sent from the dark blue region to the light blue region. In reverse, information from subprocess 2 is sent to subprocess 1, from the light blue region to the dark blue region; this time the particles that are sent over have a leftward velocity component.

Around a green box there are 4 strips of unshaded area, meaning for every time step there should be 4 message-sharing occurrences: send rightward-moving fluid/receive incoming leftward-moving fluid, send upward-moving/receive incoming downward-moving, send leftward-moving/receive incoming rightward-moving, and send downward-moving/receive incoming upward-moving. The exchange illustrated in blue is subprocess 1 sending rightward/receiving incoming leftward.
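To make that exchange a little more concrete, here is a minimal sketch of the left/right part of it, assuming the channel is cut into vertical strips (one strip per process) and using mpi4py with NumPy. The array shape, the D2Q9 direction numbering, and the exchange_halos helper are my own illustration for this post, not code from the papers below.

~~~
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

ny, nx_local = 64, 32                 # lattice nodes owned by this rank
q = 9                                 # D2Q9: 9 discrete velocities per node
f = np.zeros((q, ny, nx_local + 2))   # +2 ghost columns (left and right halos)

# Neighboring ranks; the ranks at the channel ends talk to MPI.PROC_NULL,
# which makes the send/receive a no-op, so their outer ghost columns are
# left for the boundary conditions to fill instead.
left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Directions with a rightward velocity component (numbering chosen for this
# sketch: 1=E, 5=NE, 8=SE) and with a leftward component (3=W, 6=NW, 7=SW).
right_dirs = [1, 5, 8]
left_dirs  = [3, 6, 7]

def exchange_halos(f):
    # Send my rightmost real column to the right neighbor (rightward-moving
    # particles) while receiving the matching data from the left neighbor
    # into my left ghost column, all in one call to avoid deadlock.
    for d in right_dirs:
        recv = np.empty(ny)
        comm.Sendrecv(np.ascontiguousarray(f[d, :, -2]), dest=right,
                      recvbuf=recv, source=left)
        f[d, :, 0] = recv
    # Same thing in the other direction for leftward-moving particles.
    for d in left_dirs:
        recv = np.empty(ny)
        comm.Sendrecv(np.ascontiguousarray(f[d, :, 1]), dest=left,
                      recvbuf=recv, source=right)
        f[d, :, -1] = recv

exchange_halos(f)
~~~

In a full simulation this exchange would happen once per time step, between the streaming and collision steps, and the up/down exchanges would look the same with rows instead of columns.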
Python:
"MPI for Python" (mpi4py) is a package that lets you use the Message Passing Interface (MPI) to compute in parallel from Python. MPI was originally designed for the scientific languages Fortran, C, and C++, but this package exposes the same functionality in Python. When we're passing array data, the Python code is almost as fast as C.
MPI can create as many processes as your computer has cores, and each process is identified by a "rank" (just like how we named the three green boxes in the image subprocesses 1, 2, and 3). Methods like .send and .recv are for sending and receiving data. Point-to-point communication is when one subprocess sends data to, or receives data from, another subprocess. This is likely what's going to happen with LBM (as described above with the images).
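Here is a tiny point-to-point sketch with mpi4py, assuming at least two processes; the tags, message contents, and filename are placeholders I made up for illustration.

~~~
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Lower-case send/recv pickles arbitrary Python objects (convenient, slower).
    comm.send({"step": 0, "note": "hello from rank 0"}, dest=1, tag=11)
    # Upper-case Send/Recv moves raw buffers such as NumPy arrays, which is
    # where the "almost as fast as C" claim comes from.
    data = np.arange(10, dtype=np.float64)
    comm.Send(data, dest=1, tag=22)
elif rank == 1:
    msg = comm.recv(source=0, tag=11)
    arr = np.empty(10, dtype=np.float64)
    comm.Recv(arr, source=0, tag=22)
    print(f"rank 1 got {msg} and an array summing to {arr.sum()}")
~~~

You would launch it with something like "mpiexec -n 2 python point_to_point.py", which starts one copy of the script per process and gives each its own rank.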
~~~
Now that I’m more familiar with MPI, I’ll get started with the actual programming next week. Next week is also an extra fun week for me because I’m visiting Boston for an admitted students event at MIT for a few days! Thank you for reading, and see you next week!
~~~
Sources:
- J.-C. Desplat, I. Pagonabarraga, P. Bladon, LUDWIG: A parallel Lattice-Boltzmann code for complex fluids, Computer Physics Communications 134 (2001) 273.
- M.D. Mazzeo, P.V. Coveney, HemeLB: A high performance parallel lattice-Boltzmann code for large scale fluid flow in complex geometries, Computer Physics Communications 178 (12) (2008) 894–914.
- H. Bing, W.-B. Feng, Z. Wu, Y.-M. Cheng, Parallel Simulation of Compressible Fluid Dynamics Using Lattice Boltzmann Method, in: The First International Symposium on Optimization and Systems Biology (OSB 2007), Beijing, China, 2007, pp. 451–458.
- V.A. Krukov, Working out of Parallel Programs for Computing Clusters and Networks (in Russian), The Information Technology and Computing Systems (1-2) (2003) 42–61.