Qsub multiple jobs on Linux Cluster

Hi everyone,

I'm trying to run multiple jobs on my PC cluster, which runs a Linux shell (bash). It works fine when I submit one job at a time, but I'm struggling to submit multiple jobs with a single script. Usually, when I want to run a single job, I submit it via my script file named "single", by typing at the prompt:

qsub single

I can see the job running.
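For context, "single" is an ordinary Torque/PBS job script. A minimal sketch of that kind of file (simplified; the job name and input file here are placeholders, and my real script sets more options):

```shell
#!/bin/bash
#PBS -N single_job
#PBS -l nodes=compute-0-2:ppn=8
#PBS -l walltime=50:00:00
cd $PBS_O_WORKDIR
# set the environment variables needed to make OpenMPI work with Torque
. /opt/torque/etc/openmpi-setup.sh
# run the solver on 8 cores, forwarding the license variables
/usr/lib64/openmpi/bin/mpirun -x LSTC_LICENSE_SERVER -x LSTC_LICENSE -np 8 mpp971s_r712 i=model.k
```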

I tried modifying my script a bit to make it work for multiple submissions. I have different job files stored in different nested folders. I successfully created the directories and copied the files into each folder over the SSH connection, but the jobs still don't run. These are the messages I get in the error files:

[compute-0-2.local:15366] Warning: could not find environment variable "LSTC_LICENSE_SERVER"
[compute-0-2.local:15366] Warning: could not find environment variable "LSTC_LICENSE"
/var/spool/torque/mom_priv/jobs/16308.archimede.pcmgroup.dmti.unifi.it.SC: line 35: l2a_r712: command not found

This seems strange, because I don't get any of these errors when I run single jobs, but maybe I'm missing something, as I'm not an expert Linux user.

Any help would be much appreciated. I attach my submission script here so you can check where I'm going wrong.


NUMBERS=$(seq 3 3)

for NUM in ${NUMBERS}; do
	NAME="job_${NUM}"   # for example, derive the job name from the loop index
	# WORKDIR, RESULTS and MODEL must also be set here for each job
	echo "Submitting: ${NAME}"

	# Build the per-job PBS script and pipe it straight into qsub.
	# Unescaped variables expand now, at submission time; escaped ones (\$...) expand when the job runs.
	qsub <<EOF
#PBS -N ${NAME}
#PBS -l nodes=compute-0-2:ppn=8
#PBS -l walltime=50:00:00
#PBS -r n
cd \$PBS_O_WORKDIR
# set the environment variables needed to make OpenMPI work with Torque
. /opt/torque/etc/openmpi-setup.sh
NODI=\$(awk '!x[\$0]++' \$PBS_NODEFILE)       # read the compute nodes and remove duplicates
rocks run host \$NODI "mkdir -p ${WORKDIR}"   # create on the cluster the directories I have in \$HOME

# stage in: copy the files in ${RESULTS} into ${WORKDIR}
cp ${RESULTS}/* ${WORKDIR}

# run
/usr/lib64/openmpi/bin/mpirun -x LSTC_LICENSE_SERVER -x LSTC_LICENSE -np 8 mpp971s_r712 MEMORY=800000000 MEMORY2=80000000 i=${MODEL}.k
l2a_r712 binout*

# stage out
cp -p ${WORKDIR}/runrsf* ${RESULTS}
cp -p ${WORKDIR}/d3dump* ${RESULTS}
cp -p ${WORKDIR}/nodout ${RESULTS}
cp -p ${WORKDIR}/glstat ${RESULTS}
cp -p ${WORKDIR}/sbtout ${RESULTS}
cp -p ${WORKDIR}/matsum ${RESULTS}
cp -p ${WORKDIR}/deforc ${RESULTS}
cp -p ${WORKDIR}/abstat ${RESULTS}
cp -p ${WORKDIR}/elout ${RESULTS}
cp -p ${WORKDIR}/jntforc ${RESULTS}
cp -p ${WORKDIR}/rwforc ${RESULTS}
cp -p ${WORKDIR}/rcforc ${RESULTS}
cp -p ${WORKDIR}/secforc ${RESULTS}
cp -p ${WORKDIR}/spc* ${RESULTS}
cp -p ${WORKDIR}/sleout ${RESULTS}
cp -p ${WORKDIR}/d3p* ${RESULTS}
cp -p ${WORKDIR}/d3hsp ${RESULTS}
rocks run host \$NODI "cp -p ${WORKDIR}/binout* ${RESULTS}"
rocks run host \$NODI "cp -p ${WORKDIR}/mes* ${RESULTS}"
EOF
done


With `NUMBERS=$(seq 3 3)` the sequence is just `3`, so the loop runs once and submits a single job. To submit three jobs, use:

`NUMBERS=$(seq 1 3)`
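You can check what the loop will iterate over by running `seq` on its own:

```shell
# seq FIRST LAST prints every integer from FIRST to LAST, one per line
seq 3 3   # prints only "3", so the for loop runs once
seq 1 3   # prints 1, 2 and 3, so the loop runs three times
```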

Linux sysadmin blog - Linux/Unix Howtos and Tutorials - Linux bash shell scripting wiki