POJ1502 MPI Maelstrom
Time Limit: 1000MS | Memory Limit: 10000K
Total Submissions: 3378 | Accepted: 1990
Description
BIT has recently taken delivery of their new supercomputer, a 32 processor Apollo Odyssey distributed shared memory machine with a hierarchical communication subsystem. Valentine McKee's research advisor, Jack Swigert, has asked her to benchmark the new system.
``Since the Apollo is a distributed shared memory machine, memory access and communication times are not uniform,'' Valentine told Swigert. ``Communication is fast between processors that share the same memory subsystem, but it is slower between processors that are not on the same subsystem. Communication between the Apollo and machines in our lab is slower yet.''
``How is Apollo's port of the Message Passing Interface (MPI) working out?'' Swigert asked.
``Not so well,'' Valentine replied. ``To do a broadcast of a message from one processor to all the other n-1 processors, they just do a sequence of n-1 sends. That really serializes things and kills the performance.''
``Is there anything you can do to fix that?''
``Yes,'' smiled Valentine. ``There is. Once the first processor has sent the message to another, those two can then send messages to two other hosts at the same time. Then there will be four hosts that can send, and so on.''
``Ah, so you can do the broadcast as a binary tree!''
``Not really a binary tree -- there are some particular features of our network that we should exploit. The interface cards we have allow each processor to simultaneously send messages to any number of the other processors connected to it. However, the messages don't necessarily arrive at the destinations at the same time -- there is a communication cost involved. In general, we need to take into account the communication costs for each link in our network topologies and plan accordingly to minimize the total time required to do a broadcast.''
Input
The input will describe the topology of a network connecting n processors. The first line of the input will be n, the number of processors, such that 1 <= n <= 100.
The rest of the input defines an adjacency matrix, A. The adjacency matrix is square and of size n x n. Each of its entries will be either an integer or the character x. The value of A(i,j) indicates the expense of sending a message directly from node i to node j. A value of x for A(i,j) indicates that a message cannot be sent directly from node i to node j.
Note that for a node to send a message to itself does not require network communication, so A(i,i) = 0 for 1 <= i <= n. Also, you may assume that the network is undirected (messages can go in either direction with equal overhead), so that A(i,j) = A(j,i). Thus only the entries on the (strictly) lower triangular portion of A will be supplied.
The input to your program will be the lower triangular section of A. That is, the second line of input will contain one entry, A(2,1). The next line will contain two entries, A(3,1) and A(3,2), and so on.
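Because the triangle entries may be laid out across lines in various ways, it is safest to read them token by token rather than line by line. A minimal parsing sketch (the function name readMatrix, the FILE* parameter, and the 104-wide arrays are my own illustrative choices, not part of the problem statement):

```cpp
#include <cstdio>
#include <cstdlib>

const int INF = 99999999;

// Read n and the strictly lower triangular entries from `in` into a
// symmetric 1-indexed matrix; the token "x" means no direct link and
// is stored as INF. Diagonal entries are set to 0. Returns n, or 0 on
// a read failure.
int readMatrix(FILE* in, int adj[104][104]) {
    int n;
    if (fscanf(in, "%d", &n) != 1) return 0;
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= n; ++j)
            adj[i][j] = (i == j) ? 0 : INF;
    char tok[32];
    for (int i = 2; i <= n; ++i)
        for (int j = 1; j < i; ++j) {
            if (fscanf(in, "%31s", tok) != 1) return 0;
            if (tok[0] != 'x')
                adj[i][j] = adj[j][i] = atoi(tok);  // undirected edge
        }
    return n;
}
```

Token-based reading (`fscanf("%31s", …)`) works whether each row sits on its own line or the whole triangle is run together, since `%s` skips any whitespace, including newlines.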
Output
Your program should output the minimum communication time required to broadcast a message from the first processor to all the other processors.
Sample Input
5
50
30 5
100 20 50
10 x x 10
Sample Output
35
Source
Approach: the answer is the longest of the shortest distances from node 1 to every other node, i.e. a single-source shortest path problem. Many methods work.
#include <cstdio>
#include <cstdlib>
#include <cstring>

#define MAXINT 99999999

int data[100+4][100+4];
int vis[100+4];
int lowcost[100+4];

/* Dijkstra from node 1; the answer is the largest shortest-path
   distance from node 1 to any other node. */
int main()
{
    int n;
    scanf("%d", &n);

    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
            data[i][j] = (i == j) ? 0 : MAXINT;

    /* Read the strictly lower triangle token by token; 'x' marks a
       missing link. Reading with scanf("%s") avoids the unsafe gets()
       and any hand-rolled digit parsing. */
    char tok[32];
    for (int i = 2; i <= n; i++)
        for (int j = 1; j < i; j++) {
            scanf("%31s", tok);
            if (tok[0] != 'x')
                data[i][j] = data[j][i] = atoi(tok);
        }

    for (int i = 1; i <= n; i++) {
        lowcost[i] = data[1][i];
        vis[i] = 0;
    }
    vis[1] = 1;

    for (int i = 1; i < n; i++) {
        /* Pick the closest unvisited node. */
        int k = -1, mincost = MAXINT;
        for (int j = 1; j <= n; j++)
            if (!vis[j] && mincost > lowcost[j]) {
                k = j;
                mincost = lowcost[j];
            }
        if (k == -1) break;   /* remaining nodes are unreachable */
        vis[k] = 1;

        /* Relax edges out of k. */
        for (int j = 1; j <= n; j++)
            if (!vis[j] && lowcost[k] + data[k][j] < lowcost[j])
                lowcost[j] = lowcost[k] + data[k][j];
    }

    int maxcost = 0;
    for (int i = 2; i <= n; i++)
        if (maxcost < lowcost[i])
            maxcost = lowcost[i];
    printf("%d\n", maxcost);

    return EXIT_SUCCESS;
}
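Since n <= 100, an all-pairs approach also fits comfortably in the time limit. A sketch of the Floyd-Warshall alternative (broadcastTime and the vector-based matrix are illustrative names, not from the original solution; the matrix is 1-indexed with adj[i][i] == 0 and INF for missing links):

```cpp
#include <algorithm>
#include <vector>

const int INF = 99999999;

// Run Floyd-Warshall on the adjacency matrix, then return the largest
// shortest-path distance from node 1 to any other node, which is the
// minimum broadcast time.
int broadcastTime(int n, std::vector<std::vector<int>> adj) {
    for (int k = 1; k <= n; ++k)
        for (int i = 1; i <= n; ++i)
            for (int j = 1; j <= n; ++j)
                adj[i][j] = std::min(adj[i][j], adj[i][k] + adj[k][j]);
    int best = 0;
    for (int i = 2; i <= n; ++i)
        best = std::max(best, adj[1][i]);
    return best;
}
```

At n = 100 the triple loop is only 10^6 iterations, and with INF kept just under 10^8 the sum adj[i][k] + adj[k][j] cannot overflow a 32-bit int.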