parallel.pod: Added doc for --sshdelay.

Ole Tange 2013-01-21 23:09:47 +01:00
parent 05e1cf9ef4
commit 7dc6927898
2 changed files with 31 additions and 11 deletions

parallel.pod

@@ -1220,6 +1220,13 @@ Do not use the first line of input (used by GNU B<parallel> itself
 when called with B<--shebang>).
 
+=item B<--sshdelay> I<secs> (alpha testing)
+
+Delay starting next ssh by I<secs> seconds. GNU B<parallel> will pause
+I<secs> seconds after starting each ssh. I<secs> can be less than 1
+second.
+
 =item B<-S> I<[ncpu/]sshlogin[,[ncpu/]sshlogin[,...]]>
 
 =item B<--sshlogin> I<[ncpu/]sshlogin[,[ncpu/]sshlogin[,...]]>
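As a reading aid (not part of the commit itself): the pacing that B<--sshdelay> introduces can be mimicked in plain POSIX sh. The delay value and host names below are invented for illustration, and `echo` stands in for the real ssh invocation so the sketch runs anywhere.

```shell
#!/bin/sh
# Toy model of --sshdelay: pause a fixed interval after starting each ssh.
DELAY=0.2                       # corresponds to --sshdelay 0.2
for host in server1 server2 server3; do
  echo "starting ssh to $host"  # stand-in for the real ssh invocation
  sleep "$DELAY"                # GNU parallel pauses here before the next ssh
done
```

With the real tool this would be spelled along the lines of `parallel --sshdelay 0.2 -S server1,server2 mycmd ::: args` (hypothetical host names and command).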

parallel.texi

@@ -1317,6 +1317,13 @@ composed commands for GNU @strong{parallel}.
 Do not use the first line of input (used by GNU @strong{parallel} itself
 when called with @strong{--shebang}).
 
+@item @strong{--sshdelay} @emph{secs} (alpha testing)
+@anchor{@strong{--sshdelay} @emph{secs} (alpha testing)}
+
+Delay starting next ssh by @emph{secs} seconds. GNU @strong{parallel} will pause
+@emph{secs} seconds after starting each ssh. @emph{secs} can be less than 1
+second.
+
 @item @strong{-S} @emph{[ncpu/]sshlogin[,[ncpu/]sshlogin[,...]]}
 @anchor{@strong{-S} @emph{[ncpu/]sshlogin[@comma{}[ncpu/]sshlogin[@comma{}...]]}}
@@ -2612,15 +2619,21 @@ There are two small issues when using GNU @strong{parallel} as queue
 system/batch manager:
 
 @itemize
-@item You will get a warning if you do not submit JobSlots jobs within the
-first second. E.g. if you have 8 cores and use @strong{-j+2} you have to submit
-10 jobs. These can be dummy jobs (e.g. @strong{echo foo}). You can also simply
-ignore the warning.
+@item You will get a warning if you do not submit JobSlots jobs within
+the first second. E.g. if you have 8 cores and use -j+2 you have to
+submit 10 jobs. These can be dummy jobs (e.g. echo foo). You can also
+simply ignore the warning. For parallel versions 20110322 and higher,
+the warnings will not appear.
 
-@item Jobs will be run immediately, but output from jobs will only be
-printed when JobSlots more jobs has been started. E.g. if you have 10
-jobslots then the output from the first completed job will only be
-printed when job 11 is started.
+@item You have to submit JobSlot number of jobs before they will start, and
+after that you can submit one at a time, and job will start
+immediately if free slots are available. Output from the running or
+completed jobs are held back and will only be printed when JobSlots
+more jobs has been started (unless you use --ungroup or -u, in which
+case the output from the jobs are printed immediately). E.g. if you
+have 10 jobslots then the output from the first completed job will
+only be printed when job 11 has started, and the output of second
+completed job will only be printed when job 12 has started.
 @end itemize
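Outside the diff itself, the release rule described in the second bullet can be sketched in plain POSIX sh. The slot count of 3 and the six jobs are invented for the example; this models the rule rather than invoking GNU parallel.

```shell
#!/bin/sh
# Toy model of the buffering rule: with SLOTS job slots, the output of
# completed job N is only printed once job N+SLOTS has started.
SLOTS=3
started=0
for job in 1 2 3 4 5 6; do
  started=$((started + 1))
  release=$((started - SLOTS))
  if [ "$release" -ge 1 ]; then
    echo "output of job $release released as job $started starts"
  fi
done
```

With 10 job slots, the same arithmetic gives the behaviour stated in the text: job 1's output appears when job 11 starts, job 2's when job 12 starts.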
@@ -2637,9 +2650,6 @@ called on other platforms file a bug report):
 This will run the command @strong{echo} on each file put into @strong{my_dir} or
 subdirs of @strong{my_dir}.
 
-The @strong{-u} is needed because of a small bug in GNU @strong{parallel}. If that
-proves to be a problem, file a bug report.
-
 You can of course use @strong{-S} to distribute the jobs to remote
 computers:
@@ -2650,6 +2660,9 @@ If the files to be processed are in a tar file then unpacking one file
 and processing it immediately may be faster than first unpacking all
 files. Set up the dir processor as above and unpack into the dir.
 
+Using GNU Parallel as dir processor has the same limitations as using
+GNU Parallel as queue system/batch manager.
+
 @chapter QUOTING
 @anchor{QUOTING}
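As an aside on the dir-processor hunks above: the idea of handling files as they land in a directory can be mocked up in plain sh. This one-shot loop over a throwaway directory is only a stand-in; a real setup would watch the directory continuously (e.g. via an inotify tool) and feed the file names to GNU parallel. The directory and file names here are invented.

```shell
#!/bin/sh
# Toy dir processor: handle each file found in a directory exactly once.
dir=$(mktemp -d)
touch "$dir/job1" "$dir/job2"   # files "arriving" in the watched dir
for f in "$dir"/*; do
  echo "processing $f"          # stand-in for the real per-file command
done
rm -rf "$dir"
```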