MinGW coding under Windows (C, C++, OpenMP, MPI)

This tutorial helps you set up a coding environment on Windows with support for C/C++, Fortran, OpenMP, and MPI, as well as for compiling and running the TMAC package. If you use Cygwin, please use this tutorial instead.


  • MSYS2 is a Unix-like command-line environment for Windows. It comes with the pacman package manager, which helps you install packages for coding and other purposes.

  • MINGW32 and MINGW64 include GCC (the GNU Compiler Collection), which you can use to compile C/C++, Fortran, and other source code. As their names suggest, they are the 32-bit and 64-bit versions, respectively. Code they compile runs natively under Windows without runtime libraries from MINGW32, MINGW64, or MSYS2. The GCC compilers can be invoked from both MSYS2 and Windows' native CMD. I prefer MSYS2 because it sets up the environment, provides a package manager, and installs other coding tools (e.g., autoconf and automake). MINGW32 and MINGW64 can also be installed without MSYS2, but I suggest you install MSYS2 first and then use its package manager to install either MINGW32 (for 32-bit Windows) or MINGW64 (for 64-bit Windows).

  • OpenMP is a specification for shared-memory parallel programming with threads; support is built into most compilers (e.g., recent GCC). Threads of one process can share memory, but separate processes cannot. To share data, processes use message passing.

  • MPI (Message Passing Interface) is a specification for inter-process communication via message passing. MS-MPI is Microsoft's implementation of MPI. It provides the header and library files, as well as some executables, that you need to compile and run your code with MPI support. Besides MS-MPI, other MPI implementations are available for Windows.

  • BLAS is a collection of routines that your code can call to perform basic vector and matrix operations. They come with header and library files, and they are typically more efficient than your own implementations. Besides the original NetLib reference implementation, BLAS has other implementations such as ATLAS, Intel MKL, and OpenBLAS. Typically, you use your compiler's include and library switches to select the BLAS implementation that your code will use.

  • LAPACK is a library of matrix algorithms. It implements algorithms for common matrix factorizations and for solving linear systems, least-squares problems, and eigenvalue and singular-value problems. LAPACK is built on top of BLAS.

  • TMAC is a C++11 framework that lets you quickly develop your own code for solving a variety of optimization problems in parallel. In particular, you can add your own operators to TMAC and run algorithms based on operator splitting and coordinate-update methods. TMAC makes it easy to test single-threaded, synchronous-parallel, and asynchronous-parallel algorithms.

Install MSYS2 and MINGW32 / MINGW64

  • start with no MinGW or MSYS on your system (otherwise, please uninstall them first)

  • run the MSYS2 installer, or use the MSYS2 installer at SourceForge (choose i686 for 32-bit MSYS2 and x86_64 for 64-bit MSYS2)

  • open MSYS2 at C:\msys64\msys2_shell.bat

  • let pacman update MSYS2 and refresh its package list

> pacman -Syuu   # update the package database and upgrade installed packages
  • close and reopen MSYS2

  • install coding tools (from the package groups base-devel and toolchain) as follows (note that the two groups are large and contain more than you need; you can instead install just the modules you will need):

If you installed 64-bit MSYS2, then do

> pacman -S base-devel mingw-w64-x86_64-toolchain
  • in fact, 64-bit MSYS2 includes both MINGW64 and MINGW32, so you can create 32-bit code with MINGW32 on 64-bit Windows.

  • close MSYS2 and open MSYS2-MINGW64 by running C:\msys64\mingw64.exe

If you installed 32-bit MSYS2, then do

> pacman -S base-devel mingw-w64-i686-toolchain
  • close MSYS2 and open MSYS2-MINGW32 by running C:\msys64\mingw32.exe

Verify installation

> gcc -v    # test gcc
  • check that the gcc version is >= 4.9 and look for “Thread model: posix”

(For simplicity, the rest of this tutorial assumes that you installed MINGW64.)

Hello world and OpenMP

  • create a folder under home

> cd ~
> mkdir omp_hello
> cd omp_hello
  • next, download and compile the demo code

> wget https://computing.llnl.gov/tutorials/openMP/samples/C/omp_hello.c
> gcc -fopenmp omp_hello.c -o omp_hello.exe   # generate the executable file omp_hello.exe
  • try running it

> ./omp_hello.exe   # By default, OpenMP creates one thread per logical core. My PC has 2 cores (4 logical cores with hyperthreading)
                    # The order of the five output lines is random
Hello World from thread = 1
Hello World from thread = 2
Hello World from thread = 3
Hello World from thread = 0
Number of threads = 4
  • try it with more threads

> export OMP_NUM_THREADS=8  # explicitly set 8 threads via the OMP_NUM_THREADS environment variable
> ./omp_hello.exe
Hello World from thread = 3
Hello World from thread = 0
Number of threads = 8
Hello World from thread = 6
Hello World from thread = 5
Hello World from thread = 4
Hello World from thread = 1
Hello World from thread = 2
Hello World from thread = 7

Install Microsoft MPI (MS-MPI)

  • download MS-MPI v7, and install both msmpisdk.msi and MSMpiSetup.exe

  • execute C:\msys64\mingw64.exe and locate the environment variables WINDIR, MSMPI_INC, and MSMPI_LIB64 by running:

> printenv | grep "WIN\|MSMPI"

If you don't see them, then your Windows environment variables are not passed to MSYS2-MINGW64; you need to correct this before proceeding to the next step.

  • add/create the header and library files for MS-MPI for later use:

> mkdir ~/msmpi                     # create a temporary folder under your home directory
> cd ~/msmpi                        # enter the folder
> cp "$MSMPI_LIB64/msmpi.lib" .     # copy msmpi.lib to ~/msmpi/; the import library, a placeholder for dll
> cp "$WINDIR/system32/msmpi.dll" . # copy msmpi.dll to ~/msmpi/; the runtime library
> gendef msmpi.dll                  # generate msmpi.def
> dlltool -d msmpi.def -D msmpi.dll -l libmsmpi.a   # generate the import library libmsmpi.a
> cp libmsmpi.a /mingw64/lib        # copy the new library file to where g++ looks for them;
                                    # try "g++ --print-search-dirs"
> cp "$MSMPI_INC/mpi.h" .           # copy the header file mpi.h to ~/msmpi/
  • open mpi.h under ~/msmpi in an editor and look for “typedef __int64 MPI_Aint”. Just above it, add a new line containing “#include <stdint.h>” (without the quotes), which is needed for the definition of __int64.

> cp mpi.h /mingw64/include         # copy the header file to the default include folder
  • now you can delete the folder ~/msmpi

Hello world / MPI

  • create a folder under home

> cd ~
> mkdir mpi_hello
> cd mpi_hello
  • next, download and compile the demo code

> wget https://raw.githubusercontent.com/wesleykendall/mpitutorial/gh-pages/tutorials/mpi-hello-world/code/mpi_hello_world.c
> gcc mpi_hello_world.c -lmsmpi -o mpi_hello.exe
	# -lmsmpi: links with the msmpi library, the file libmsmpi.a that we generated above
  • try running it

> ./mpi_hello.exe                   # 1 process
> export PATH="$MSMPI_BIN":$PATH    # add MSMPI_BIN (where mpiexec.exe is) to PATH
> mpiexec -n 4 mpi_hello.exe        # 4 processes

Compile BLAS from source (the generated code usually runs slower than OpenBLAS below)

If you want to install the original BLAS, do

> wget http://www.netlib.org/blas/blas-3.6.0.tgz
> tar xf blas-3.6.0.tgz
> cd BLAS-3.6.0
> gfortran.exe -c *.f       # compile each .f file into a .o file
                            # (you can also add the optimization switch -O3)
> ar rv libblas.a *.o       # combine all the .o files into a library file
> cp libblas.a /mingw64/lib # copy the library file to where g++ looks for them;
                            # try "g++ --print-search-dirs"
  • you must add -lgfortran when compiling/linking (see below) because this BLAS requires the gfortran runtime library (there might be a way to avoid this, but I didn't make an attempt)

Install OpenBLAS (preferred)

> pacman -S mingw-w64-x86_64-openblas

Install BLAS and LAPACK together (this BLAS code is typically slower than OpenBLAS above)

Let us use the build scripts from the MINGW-packages GitHub repo:

> pacman -S  mingw-w64-x86_64-cmake                     # install CMake
> git clone https://github.com/msys2/MINGW-packages.git # clone the scripts
> cd MINGW-packages/mingw-w64-lapack                    # locate the LAPACK script

Open the file PKGBUILD in a text editor and replace “RESPOSE” with “RESPONSE” (to avoid the error message “ar.exe: Argument list too long”)

> makepkg-mingw                                   # build BLAS and LAPACK
> pacman -U mingw-w64-x86_64-lapack*.pkg.tar.xz   # install BLAS and LAPACK

Download lapack_test.cpp to test LAPACK:

> cd ~
> wget http://www.math.ucla.edu/~wotaoyin/software/lapack_test.cpp  # download
> g++ lapack_test.cpp -llapack -o lapack_test     # build
> ./lapack_test                                   # run


Compile and run TMAC

  • download the code from GitHub

> pacman -S git    # install git
> git clone https://github.com/uclaopt/tmac.git  # download TMAC from GitHub
  • open Makefile in the project root in an editor (e.g., Notepad, WinEdt)

  • if you installed BLAS, add -lgfortran (if not already there) to the end of the line starting “LIB := ” because BLAS requires the gfortran library; otherwise, make will run into an error

  • if you installed OpenBLAS, replace -lblas by -lopenblas in the line starting “LIB := ”; otherwise, make will run into an error

> make
> ./bin/tmac_prs_demo -problem_size 1500 -nthread 1    # run Peaceman-Rachford Splitting algorithm with 1 thread
> ./bin/tmac_prs_demo -problem_size 1500 -nthread 2    # run 2 threads
> ./bin/tmac_prs_demo -problem_size 1500 -nthread 4    # run 4 threads