MPI Error Code 139
Segmentation fault and MPI [closed]

(Source: http://stackoverflow.com/questions/21057634/segmentation-fault-and-mpi)
In my program I need to do some matrix multiplication using MPI. When I run my program, I get the following error (exit code 139 corresponds to 128 + 11, i.e. the process was killed by signal 11, SIGSEGV):

    =====================================================================================
    =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
    =   EXIT CODE: 139
    =   CLEANING UP REMAINING PROCESSES
    =   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
    =====================================================================================
    APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)

It executes printf("Sent a\n");. The error is in MPI_Send(&b, nColA*nColB, MPI_FLOAT, dest, mtype, MPI_COMM_WORLD); and it never executes printf("Sent b\n");. I don't know why. Can you help me?

    void multiplicaMatriz(int taskid, int numtasks, float **a, float **b, float **c,
                          long int nLinA, long int nColA, long int nLinB, long int nColB)
    {
        long int i, j, k, rc;      /* misc */
        int numworkers,            /* number of worker tasks */
            source,                /* task id of message source */
            dest,                  /* task id of message destination */
            mtype,                 /* message type */
            rows,                  /* rows of matrix A sent to each worker */
            averow, extra, offset; /* used to determine rows sent to each worker */
        MPI_Status status;

        numworkers = numtasks - 1;

        /**************************** master task ************************************/
        if (taskid == MASTER) {
            printf("mpi_mm has started with %d tasks.\n", numtasks);

            /* Send matrix data to the worker tasks */
            averow = nLinA / numworkers;
            extra  = nLinA % numworkers;
            offset = 0;
            mtype  = FROM_MASTER;
            for (dest = 1; dest <= numworkers; dest++) {
                rows = (dest <= extra) ? averow + 1 : averow;
                printf("Sending %d rows to task %d offset=%d\n", rows, dest, offset);
                MPI_Send(&offset, 1, MPI_INT, dest, mtype, MPI_COMM_WORLD);
                printf("Sent offset %d\n", offset);
                MPI_Send(&rows, 1, MPI_INT, dest, mtype, MPI_COMM_WORLD);
                /* ... the snippet breaks off here in the original question ... */
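A likely cause of this crash, given that b is the float ** shown in the signature: MPI_Send(&b, nColA*nColB, MPI_FLOAT, ...) passes the address of the row-pointer variable itself, so MPI tries to read nColA*nColB floats starting at that address and runs off the end of valid memory. Below is a minimal sketch of one common remedy, allocating the matrix as a single contiguous block so the whole payload is a valid send buffer; alloc_contiguous is a hypothetical helper, not part of the original code.

    #include <stdlib.h>
    #include <mpi.h>

    /* Allocate an nrows x ncols matrix whose elements live in one contiguous
       block, so the entire matrix is usable as a single MPI_Send buffer.
       (Hypothetical helper, not from the original question.) */
    float **alloc_contiguous(long nrows, long ncols)
    {
        float *data  = malloc(nrows * ncols * sizeof(float)); /* flat payload */
        float **rows = malloc(nrows * sizeof(float *));       /* row pointers */
        for (long i = 0; i < nrows; i++)
            rows[i] = data + i * ncols;  /* each row points into the flat block */
        return rows;
    }

With this layout the send becomes MPI_Send(&b[0][0], nColA*nColB, MPI_FLOAT, dest, mtype, MPI_COMM_WORLD);, i.e. it starts from the first element rather than from the pointer variable. If the rows were instead malloc'ed one by one, they would not be contiguous and a single send of the whole matrix would be invalid.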
Segmentation fault while using MPI_File_open

(Source: http://stackoverflow.com/questions/13736136/segmentation-fault-while-using-mpi-file-open)

I'm trying to read from a file for an MPI application. The cluster has 4 nodes with 12 cores in each node. I have tried running a basic program to compute rank, and that works. When I added MPI_File_open, it throws an exception at runtime:

    BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = EXIT CODE: 139

The cluster has MPICH2 installed and has a Network File System. I checked MPI_File_open with different parameters, like read-only mode and MPI_COMM_WORLD. Can I use MPI_File_open with a Network File System?

    int main(int argc, char *argv[])
    {
        int myrank = 0;
        int nprocs = 0;
        int i = 0;
        MPI_Comm icomm = MPI_COMM_WORLD;
        MPI_Status status;
        MPI_Info info;
        MPI_File *fh = NULL;
        int error = 0;

        MPI_Init(&argc, &argv);
        MPI_Barrier(MPI_COMM_WORLD);            // Wait for all processes to start
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs); // Get number of processes
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank); // Get own rank

        usleep(myrank * 100000);
        if (myrank == 1 || myrank == 0)
            printf("Hello from %d\r\n", myrank);

        if (myrank == 0) {
            error = MPI_File_open(MPI_COMM_SELF, "lw1.wei", MPI_MODE_UNIQUE_OPEN,
                                  MPI_INFO_NULL, fh);
            if (error) {
                printf("Error in opening file\r\n");
            } else {
                printf("File successfully opened\r\n");
            }
            MPI_File_close(fh);
        }

        MPI_Barrier(MPI_COMM_WORLD); //! Wait for all the processors to end
        MPI_Finalize();

        if (myrank == 0) {
            printf("Number of Processes %d\n\r", nprocs);
        }
        return 0;
    }

Accepted answer: You forgot to allocate an MPI_File object before opening the file. You may either change this line:

    MPI_File *fh = NULL;

into:

    MPI_File fh;

and open the file by passing fh's address to MPI_File_open(..., &fh). Or you may simply allocate memory from the heap using malloc():

    MPI_File *fh = malloc(sizeof(MPI_File));
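A minimal corrected sketch of the question's rank-0 file open, applying the accepted answer's first suggestion (a stack-allocated MPI_File whose address is passed to MPI_File_open). Adding MPI_MODE_RDONLY is an assumption on top of the answer: the original code passed MPI_MODE_UNIQUE_OPEN alone, but MPI_File_open also requires exactly one access mode (read-only, write-only, or read-write).

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_File fh;  /* a real MPI_File object, not an uninitialized pointer */
        int error;

        MPI_Init(&argc, &argv);

        /* Pass &fh so MPI_File_open can fill in the handle. MPI_MODE_RDONLY
           is assumed; the original snippet did not specify an access mode. */
        error = MPI_File_open(MPI_COMM_SELF, "lw1.wei",
                              MPI_MODE_RDONLY | MPI_MODE_UNIQUE_OPEN,
                              MPI_INFO_NULL, &fh);
        if (error != MPI_SUCCESS) {
            printf("Error in opening file\r\n");
        } else {
            printf("File successfully opened\r\n");
            MPI_File_close(&fh);  /* close only a successfully opened handle */
        }

        MPI_Finalize();
        return 0;
    }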
cannot stop all process randomly

(From the mpich-discuss mailing list: http://lists.mpich.org/pipermail/discuss/2013-July/001219.html)

Did you compile with Intel Fortran? That compiler puts static arrays on the stack, unlike GNU Fortran, which puts them on the heap. You can either use ulimit to make the stack huge, or look up the Intel compiler option that puts these arrays on the heap. Only ancient, poorly designed Fortran 77 codes have this problem, by the way.

Jeff

On Tue, Jul 9, 2013 at 8:49 PM, Zheng Li
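For reference, a sketch of the two workarounds the reply describes; -heap-arrays is the Intel Fortran option for placing automatic and temporary arrays on the heap, and the source file name is a placeholder:

    # Option 1: enlarge the stack limit for the current shell session
    ulimit -s unlimited

    # Option 2: compile so Intel Fortran puts these arrays on the heap
    ifort -heap-arrays -o myprog myprog.f90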