An error occurred in MPI_Init on communicator MPI_COMM_WORLD
From: Hu, Shaowen (JSC-SK)[USRA]
Date: Tue, 8 Apr 2008 15:19:22 -0500

Dear Dr. Case,

Thank you very much for your patch. I succeeded in compiling the parallel version of NAB on two platforms after applying this patch. However, I am not able to run a program yet. The test program is:

    molecule m;
    float x[2000], f[2000], v[2000];
    float dgrad, fret;
    float t1, t2;
    int ier, mytaskid, rank, size;
    file fp;

    mytaskid = mpiinit( argc, argv, rank, size );
    fp = fopen( "gbrna_long.traj", "w" );
    m = getpdb( "gbrna.pdb" );
    readparm( m, "gbrna.prmtop" );
    mm_options( "ntpr=100, ntwx=100, gb=1, kappa=0.10395, cut=99.0, diel=C, tempi=300., rattle=0" );
    mme_init( m, NULL, "::Z", x, fp );
    setxyz_from_mol( m, NULL, x );
    t1 = second();
    ier = md( 3*m.natoms, 1000, x, f, v, mme );
    t2 = second();
    printf( "md returns %d; elapsed time was %8.3f\n", ier, t2-t1 );
    putxv( "gbrna_long.x", "rattle md", m.natoms, 0.0, x, v );
    fclose( fp );
    mpifinalize();

I compiled it with nab the same way as the serial code. On the LAM/MPI machine, the output message is:

    MPI_Init: unclassified: MPI already initialized (rank 3, MPI_COMM_WORLD)
    Rank (1, MPI_COMM_WORLD): Call stack within LAM:
    Rank (1, MPI_COMM_WORLD):  - MPI_Init()
    Rank (1, MPI_COMM_WORLD):  - main()
    Rank (0, MPI_COMM_WORLD): Call stack within LAM:
    Rank (0, MPI_COMM_WORLD):  - MPI_Init()
    Rank (0, MPI_COMM_WORLD):  - main()
    Rank (2, MPI_COMM_WORLD): Call stack within LAM:
    Rank (2, MPI_COMM_WORLD):  - MPI_Init()
    Rank (2, MPI_COMM_WORLD):  - main()
    Rank (3, MPI_COMM_WORLD): Call stack within LAM:
    Rank (3, MPI_COMM_WORLD):  - MPI_Init()
    Rank (3, MPI_COMM_WORLD):  - main()

While on the Open MPI machine:

    warning: regcache incompatible with malloc
    warning: regcache incompatible with malloc
    [node060:20875] *** An error occurred in MPI_Init
    [node060:20875] *** on communicator MPI_COMM_WORLD
    [node060:20875] *** MPI_ERR_OTHER: known error not in list
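The LAM message "MPI already initialized" means MPI_Init ended up being called twice in the same process — here, presumably once by NAB's mpiinit() wrapper and once elsewhere in the patched build. In the C API, code that may or may not be the first to initialize MPI typically guards the call with MPI_Initialized. A minimal sketch of that guard (hypothetical helper name, not NAB's actual code; untested here, since it needs an MPI installation to build):

    #include <mpi.h>

    /* Hypothetical guard: call MPI_Init only if no one has already,
       avoiding the "MPI already initialized" abort seen above. */
    void init_mpi_once(int *argc, char ***argv)
    {
        int already;                  /* nonzero if MPI_Init was called */
        MPI_Initialized(&already);
        if (!already)
            MPI_Init(argc, argv);
    }

MPI_Initialized is one of the few MPI functions that is legal to call before MPI_Init, which is what makes this pattern work.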
MPI_Send/Recv communicator datatype error
(http://stackoverflow.com/questions/20975994/mpi-send-recv-communicator-datatype-error)

I have the following basic MPI program written in Fortran 90:

    program sendRecv
        include 'mpif.h'
        ! MPI variables
        integer ierr, numProcs, procID
        ! My variables
        integer dat, datRec

        ! Init MPI
        call MPI_INIT(ierr)
        ! Get number of processes/cores requested
        call MPI_COMM_SIZE(MPI_COMM_WORLD, numProcs, ierr)
        ! Get rank of process
        call MPI_COMM_RANK(MPI_COMM_WORLD, procID, ierr)

        if (procID .eq. 0) then
            dat = 4
            ! Send num to process 1
            call MPI_SEND(dat, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, ierr)
        else if (procID .eq. 1) then
            ! Receive num from process 0
            call MPI_RECV(datRec, 1, MPI_INT, 0, MPI_ANY_SOURCE, MPI_COMM_WORLD, MPI_STATUS_SIZE, ierr)
            ! Display info
            write(*,*) "Process 1 received ", datRec, " from proc 0"
        else
            write(*,*) "Into else"
        end if

        ! Finalise MPI
        call MPI_FINALIZE(ierr)
    end program sendRecv

The purpose is just to send an integer from process 0 and receive and display it in process 1, but whatever I seem to try, I cannot get it to work.
I am compiling and running this program with:

    mpif90 sendRecv.f90 -o tst
    mpirun -n 2 tst

and am getting this:

    [conor-Latitude-XT2:3053] *** An error occurred in MPI_Send
    [conor-Latitude-XT2:3053] *** on communicator MPI_COMM_WORLD
    [conor-Latitude-XT2:3053] *** MPI_ERR_TYPE: invalid datatype
    [conor-Latitude-XT2:3053] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
    --------------------------------------------------------------------------
    mpirun has exited due to process rank 1 with PID 3054 on node
    conor-Latitude-XT2 exiting without calling "finalize". This may
    have caused other processes in the application to be terminated
    by signals sent by mpirun (as reported here).
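The MPI_ERR_TYPE comes from using MPI_INT, which is the C binding's datatype; the Fortran binding's integer type is MPI_INTEGER (with implicit typing, the undefined MPI_INT is just an uninitialized integer, hence "invalid datatype"). The MPI_RECV call is also passing arguments out of order: the fifth argument is the message tag (here MPI_ANY_SOURCE was passed as the tag, and MPI_STATUS_SIZE where a status array belongs). A corrected sketch of the two calls, untested here (the declaration line belongs with the other declarations):

    ! status array required by the Fortran MPI_RECV binding
    integer status(MPI_STATUS_SIZE)

    ! MPI_INTEGER replaces MPI_INT; argument order for MPI_RECV is
    ! (buf, count, datatype, source, tag, comm, status, ierr)
    call MPI_SEND(dat, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr)
    call MPI_RECV(datRec, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, status, ierr)

The tag in the receive (0) matches the tag given in the send; MPI_ANY_TAG would also work if the receiver should accept any tag.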
MPI_Cart_create error
(http://stackoverflow.com/questions/28205679/mpi-cart-create-error)

I've been having trouble getting the basic MPI_Cart_create() function in Fortran working. The following code

    program main
        USE mpi
        implicit none
        integer :: old_comm, new_comm, ndims, ierr
        integer, DIMENSION(1) :: dim_size
        logical :: reorder
        logical, DIMENSION(1) :: periods

        call MPI_INIT(ierr)
        old_comm = MPI_COMM_WORLD
        ndims = 1
        dim_size(1) = 4
        periods(1) = .true.
        reorder = .true.

        call MPI_CART_CREATE(old_comm, ndims, dim_size, periods, reorder, new_comm, ierr)
        call MPI_Finalize(ierr)
    end program

compiled with

    mpif90 mpitest.f90

yields, during runtime:

    An error occurred in MPI_Cart_create
    on communicator MPI_COMM_WORLD
    MPI_ERR_OTHER: known error not in list
    MPI_ERRORS_ARE_FATAL: your MPI job will now abort

This seems simple, but does anyone recognize the issue?

EDIT: I updated the code to correct the problems noted below (I was a bit hasty in cutting the code down before; thanks for pointing these out).
I think I probably messed up the MPI installation though, since the code will run when compiled and launched with (when using `use mpi`)

    mpif90 mpitest3.f90
    mpirun -np 4 ./a.out

OR (when using `include "mpif.h"`)

    mpifort mpitest.f90
    orterun -np 4 ./a.out

If I try to compile with mpifort with the `use mpi` statement, I get

    MPI_CART_CREATE(old_comm, ndims, dim_size, periods, reorder, new_comm, ierr)
    Error: There is no specific subroutine for the generic 'mpi_cart_create' at (1)

And if I mix the compiler and run call (e.g. compile with mpif90 and run with orterun), I get

    Fatal error in PMPI_Cart_create: Invalid argument, error stack:
    PMPI_Cart_create(315).....: MPI_Cart_create(MPI_COMM_WORLD, ndims=1, dims=0x7fff26671130, periods=0x1c6e300, reorder=1, comm_cart=0x7fff26671124) failed
    MPIR_Cart_create_impl(191):
    MPIR_Cart_create(55)......: Size of the communicator (1) is smaller than the size of the Cartesian topology (4)
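These symptoms are consistent with two different MPI implementations being installed at once: compiling with one implementation's wrapper and launching with the other's mpirun starts four independent single-process jobs, so each process sees a communicator of size 1, which is smaller than the requested 4-node Cartesian grid. A quick way to see which installation each tool resolves to (tool names as used above; the paths printed will of course vary by machine, and missing tools are reported rather than erroring out):

```shell
# List where each MPI tool on the PATH resolves; if the compiler wrapper
# and the launcher live under different prefixes, they likely belong to
# different MPI installations and must not be mixed.
for tool in mpif90 mpifort mpirun orterun; do
    command -v "$tool" || echo "$tool: not found"
done
```

With Open MPI and MPICH wrappers, `mpif90 -show` additionally prints the underlying compiler command and include/library paths, which makes the mismatch explicit.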
My laptop has a dual processor. I compiled my Fortran program as follows:

    mpif90 add.f90 -o add_n

I was, however, forced to copy the "mpif.h" header into the working directory where I run my program, and I also inserted an additional line into the file "/etc/openmpi/openmpi-mca-params.conf":

    btl = ^openib

I then ran the program as:

    mpirun -np 2 ./add_n

(here I use 2 processes as my dual laptop has two processors). What I got is the following error message:

    [geosl063:13781] *** An error occurred in MPI_comm_size
    [geosl063:13780] *** An error occurred in MPI_comm_size
    [geosl063:13780] *** on communicator MPI_COMM_WORLD
    [geosl063:13780] *** MPI_ERR_COMM: invalid communicator
    [geosl063:13780] *** MPI_ERRORS_ARE_FATAL (goodbye)
    [geosl063:13781] *** on communicator MPI_COMM_WORLD
    [geosl063:13781] *** MPI_ERR_COMM: invalid communicator

I used MPI commands to program my Fortran code. The program has been running on a Linux cluster. The point here is to develop my parallel program on my Linux laptop before I go and run it on the Linux cluster. I appreciate any comments. Thank you so much.

Yacob

users mailing list
users@xxxxxxxxxxxx
http://www.open-mpi.org/mailman/listinfo.cgi/users

Thread at a glance — Previous Message by Date: Re: [OMPI users] Fwd: Problem with sending vectors

Hi Albert,

On 10:13 Mon 07 Apr, Albert Babinskas wrote:
> Some code for the error that I get: [snip]
> class Box has two int arrays inside it, like
>     int a[3];
>     int b[3];

Sorry, but "has two int arrays inside it, like" isn't very precise. Do you mean:

    class Box {
        int a[3];
        int b[3];
    };

or might you also mean:

    class Box {
        int a[3];
        Foo aLotOf;
        Bar otherStuff;
        int b[3];
    };

?
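One plausible culprit in Yacob's setup (an assumption, not confirmed in the thread) is the hand-copied mpif.h: if that copy came from a different MPI implementation than the one mpirun launches against, its value of MPI_COMM_WORLD will not match the linked library's, which produces exactly "MPI_ERR_COMM: invalid communicator" on the first MPI_Comm_size call. The fix is to let the wrapper supply its own matching header; a sketch, assuming the Open MPI wrappers are on the PATH:

```shell
# Hypothetical cleanup: remove the stale copied header so mpif90's own
# -I path provides the mpif.h that matches the library being linked.
rm -f ./mpif.h
mpif90 add.f90 -o add_n
mpirun -np 2 ./add_n
```

If mpif90 genuinely cannot find mpif.h without a local copy, that usually means the wrapper itself is broken or belongs to a different installation than intended.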
Anyway, your MPI_Datatype construction doesn't seem right:

> MPI_Type_contiguous(9, MPI_INT, &MPI_box);

First of all, you should be using MPI_Type_create_struct for C++ classes, in order to ensure that each member is addressed at its correct offset (because of compiler memory layout/alignment). And second, you said that your class/Box or whatever is built from two int[3], but in the code above you register 9 ints.

Again, we can't help you if you don't provide us with self-sufficient code; small excerpts mixed with comments won't cut it in most cases.

Cheers,
-Andreas