A Basic Introduction to MPI and OpenMP
A Basic Introduction to MPI
MPI is a message-passing library specification proposed as a standard by a
committee of vendors, implementers, and users. It is designed to permit the
development of parallel software libraries.
What MPI is NOT:
- A compiler
- A specific product
The core concept of the Message Passing Interface (MPI) is message transfer:
processes communicate with other processes by sending and receiving messages.
MPI is used frequently on distributed systems; in our project it is the
foundational message-passing layer. I have been working with it for some time
now, but in practice I only ever call a small subset of MPI. The following is
a simple application example.
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, myn, i, N;
    double *vector = NULL, *myvec, mysum, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* In the root process, read the vector length, initialize
       the vector, and determine the sub-vector size */
    if (rank == 0) {
        printf("Enter the vector length : ");
        scanf("%d", &N);
        vector = (double *)malloc(sizeof(double) * N);
        for (i = 0; i < N; i++)
            vector[i] = 1.0;
        myn = N / size;   /* assumes N is evenly divisible by size */
    }

    /* Broadcast the local vector size */
    MPI_Bcast(&myn, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Allocate the local vector in each process */
    myvec = (double *)malloc(sizeof(double) * myn);

    /* Scatter the vector to all the processes */
    MPI_Scatter(vector, myn, MPI_DOUBLE, myvec, myn, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    /* Find the sum of all the elements of the local vector */
    for (i = 0, mysum = 0; i < myn; i++)
        mysum += myvec[i];

    /* Find the global sum of the vectors */
    MPI_Allreduce(&mysum, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* Multiply the local part of the vector by the global sum */
    for (i = 0; i < myn; i++)
        myvec[i] *= total;

    /* Gather the local vectors in the root process */
    MPI_Gather(myvec, myn, MPI_DOUBLE, vector, myn, MPI_DOUBLE,
               0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (i = 0; i < N; i++)
            printf("[%d] %f\n", rank, vector[i]);
        free(vector);
    }

    free(myvec);
    MPI_Finalize();
    return 0;
}
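A program like this is typically built and launched with the wrapper tools
shipped by the MPI implementation (the source and binary names below are just
illustrations; exact commands depend on your installation, e.g. Open MPI or
MPICH):

```
# Compile with the MPI compiler wrapper
mpicc -O2 -o vecsum vecsum.c
# Launch 4 processes; each rank handles N/4 elements of the vector
mpirun -np 4 ./vecsum
```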
What is OpenMP?
OpenMP is a specification for a set of compiler directives, library routines,
and environment variables that are used to specify shared-memory parallelism.
It supports Fortran (77, 90, and 95), C, and C++.
MP = Multi Processing
OpenMP stands for Open specifications for Multi Processing, developed through
collaborative work with interested parties from the hardware and software
industry, government, and academia.
What OpenMP isn't:
- A specific language or compiler
- Meant for distributed-memory parallel systems (without help)
- Implemented the same by every vendor
- Guaranteed to make the most efficient use of shared memory
PARALLEL Region Construct
Specifies a block of code that will be executed by multiple threads. This is
the fundamental OpenMP parallel construct:
#pragma omp parallel [clause ...] newline
    if (scalar_expression)
    private (list)
    shared (list)
    default (shared | none)
    firstprivate (list)
    reduction (operator: list)
    copyin (list)
{
    structured_block
}
Work-Sharing Constructs
The following directives distribute the execution of the enclosed code among
the members of the thread team that encounters them:
- FOR
- SECTIONS
- SINGLE
SECTIONS and SINGLE
SECTIONS splits the enclosed code into separate blocks, each of which is
executed once by one thread of the team.
SINGLE specifies that the enclosed code is to be executed by only one thread
of the team.
These directives can also be combined with the PARALLEL directive (e.g.
#pragma omp parallel for).
For examples: http://www.llnl.gov/computing/tutorials/openMP/
www.openmp.org
www.llnl.gov/computing/tutorials/openMP
www.openmp.org/presentations/sc99/sc99_tutorial_files/frame.htm