Added *.rst format of documentation.

This commit is contained in:
Ole Tange 2021-02-17 15:49:47 +01:00
parent 0b44a9ceb2
commit 0753d4348d
37 changed files with 917 additions and 328 deletions

README

@@ -96,7 +96,7 @@ system is old or Microsoft Windows):
If you are developing your script to run on a remote server, that does
not have GNU Parallel installed, but you have it installed on your
development machine, the use can use `parallel --embed`.
development machine, then you can use `parallel --embed`.
parallel --embed > newscript.sh
@@ -126,8 +126,8 @@ publication please cite:
Zenodo. https://doi.org/10.5281/zenodo.4454976
Copyright (C) 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015,
2016, 2017, 2018, 2019, 2020 Ole Tange, http://ole.tange.dk and Free
Software Foundation, Inc.
2016, 2017, 2018, 2019, 2020, 2021 Ole Tange, http://ole.tange.dk and
Free Software Foundation, Inc.
= New versions =


@@ -146,13 +146,13 @@ removed (and staying in main).
In other words: it is preferable to have fewer users who all know they
should cite, than many users who do not know they should cite.
If the goal had been to get more users, then the license would have
been public domain.
This is because a long-term survival with funding is more important
than short-term gains in popularity that can be achieved by being
distributed as part of a distribution.
If the goal had been to get more users, then the license would have
been public domain.
> Is there another way I can get rid of the citation notice?


@@ -1,5 +1,35 @@
Quote of the month:
Parallel is amazing!
-- fatboy93@reddit
parallel is my new kink
-- python_noob_001@reddit
GNU Parallel makes my life so much easier.
I'm glad I don't have to implement multi-threaded Python scripts on the regular.
-- Fredrick Brennan @fr_brennan@twitter
If you work with lots of files at once
Take a good look at GNU parallel
Change your life for the better
-- French @notareverser@twitter
@GnuParallel the best thing ever and it's not up for debate #EOchat
-- Nathan Thomas @DrNASApants@twitter
Using [GNU Parallel is] super easy if you use xargs, and it is magic for making things multi-process. Then adding in the ssh magic it can do it is an incredible tool that is completely underutilized.
-- Ancients @Ancients@twitter
GNU Parallel is one of the most helpful tools I've been using recently, and it's just something like: parallel -j4 'gzip {}' ::: folder/*.csv
-- Milton Pividori @miltondp@twitter
We use gnu parallel now - and happier for it.
-- Ben Davies @benjamindavies@twitter
This is a fantastic tool, and I wish I had upgraded from xargs years ago!
-- Stuart Anderson
GNU Parallel and ripgrep would be your friend here. Ripgrep is fast, really fast.
-- CENAPT @cenaptech@twitter


@@ -194,9 +194,9 @@ from:tange@gnu.org
to:parallel@gnu.org, bug-parallel@gnu.org
stable-bcc: Jesse Alama <jessealama@fastmail.fm>
Subject: GNU Parallel 20210122 ('Capitol Riots') released <<[stable]>>
Subject: GNU Parallel 20210222 ('AngSangSuKyi/Navalny/Håndbold/Larry King<<>>') released <<[stable]>>
GNU Parallel 20210122 ('') <<[stable]>> has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/
GNU Parallel 20210222 ('') <<[stable]>> has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/
<<No new functionality was introduced so this is a good candidate for a stable release.>>
@@ -206,32 +206,41 @@ It does not have to be as detailed as Juan's. It is perfectly fine if you just s
Quote of the month:
I think many people would be surprised to learn that GNU parallel is "just" a single Perl script.
-- Peter Menzel @ptr_menzel@twitter
<<>>
New in this release:
* --memsuspend suspends jobs when there is little memory free. This way you can run jobs in parallel whose combined memory usage could exceed the available RAM.
* --filter filters out jobs to run, based on a Perl expression that can use replacement strings.
* $PARALLEL_ARGHOSTGROUPS and the replacement string {agrp} will give the hostgroup given on the argument when using --hostgroup.
* --tmpl <<>>
* --plus implements the dynamic replacement strings {0%} {0..0%} (zero-padded jobslot) and {0#} {0..0#} (zero-padded sequence number).
* crnl
* New scheduling code, which makes -S alpha quality.
* Handy time functions for {= =}: yyyy_mm_dd_hh_mm_ss() yyyy_mm_dd_hh_mm() yyyy_mm_dd() yyyymmddhhmmss() yyyymmddhhmm() yyyymmdd() hash($str)
* Bug fixes and man page updates.
News about GNU Parallel:
* Software Engineering For Scientists https://canvas.stanford.edu/courses/133091
https://www.polarmicrobes.org/a-short-tutorial-on-gnu-parallel/
* Optional Individual Submission 4 Job Handling - GNU Parallel https://www.youtube.com/watch?v=-tX0bFAYAxM
https://opensource.com/article/21/2/linux-programming
* About GNU Parallel: parallelizing scripts on a cluster, where the scripts write files to the head node https://www.codenong.com/25172209/
https://medium.com/analytics-vidhya/simple-tutorial-to-install-use-gnu-parallel-79251120d618
* Tricks used at work that I keep forgetting https://qiita.com/hana_shin/items/53c3c78525c9c758ae7c
https://www.lido.tu-dortmund.de/cms/de/LiDO3/LiDO3_first_contact_handout.pdf
https://blog.jastix.biz/post/rill-stage-2-1-cli-data-analysis/
<<>>
https://blog.knoldus.com/introduction-to-gnu-parallel/
https://www.hahwul.com/cullinan/parallel/
* <<>>
Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html


@@ -13,13 +13,14 @@ all:
cd home\:tange/parallel/ && osc up
cd home\:tange/parallel/ && parallel osc add ::: *.spec *.dsc *.tar.gz *.tar.bz2 && echo Src added OK || true
cd home\:tange/parallel/ && osc ci -m "New release"
# wait for "building" state
cd home\:tange/parallel/ && yes building | parallel -j1 --delay 10 --halt now,success=1 'osc results|G -E {}'
# wait for "building" state to end
# Ignore RedHat_RHEL-6 and Univention_3.2 that are broken
cd home\:tange/parallel/ && yes building | parallel -j1 --delay 10 --halt now,fail=1 'osc results|G -v RedHat_RHEL-6 -v Univention_3.2 -E {}' || true
# wait for "finished" state of .deb
cd home\:tange/parallel/ && echo succeeded | parallel -j1 --retries 30 --delay 10 --halt now,success=1 'osc results|G -E "(Debian|Ubuntu).*{}"'
# wait for "finished" state of .rpm
cd home\:tange/parallel/ && echo succeeded | parallel -j1 --retries 30 --delay 10 --halt now,success=1 'osc results|G -E "(SUSE|SLE|Scientific|RHEL|Fedora|CentOS).*{}"'
### Wait for "building" state to complete
echo '### Wait for "building" state'
cd home\:tange/parallel/ && yes building | parallel -vj1 --delay 10 --halt now,success=1 'osc results|G -E {}'
echo '### Wait for "building" state to end'
echo '### Ignore RedHat_RHEL-6 and Univention_3.2 that are broken'
cd home\:tange/parallel/ && yes building | parallel -vj1 --delay 10 --halt now,fail=1 'osc results|G -v RedHat_RHEL-6 -v Univention_3.2 -E {}' || true
echo '### Wait for "finished" state of .deb'
cd home\:tange/parallel/ && echo succeeded | parallel -vj1 --retries 30 --delay 10 --halt now,success=1 'osc results|G -E "(Debian|Ubuntu).*{}"'
echo '### Wait for "finished" state of .rpm'
cd home\:tange/parallel/ && echo succeeded | parallel -vj1 --retries 30 --delay 10 --halt now,success=1 'osc results|G -E "(SUSE|SLE|Scientific|RHEL|Fedora|CentOS).*{}"'


@@ -1,6 +1,6 @@
<directory name="parallel" rev="295" vrev="1" srcmd5="3d5753d873ec7dd1c5e728f166982dd0">
<entry name="PKGBUILD" md5="84008e34fb54d73054e039551aa5a1b1" size="936" mtime="1611258951" />
<entry name="parallel-20210122.tar.bz2" md5="6799fa098597b5a32e87f82f8ef4d8ad" size="2129364" mtime="1611258952" />
<directory name="parallel" rev="297" vrev="3" srcmd5="d241350faf9509d6b0f6c2cd5e490318">
<entry name="PKGBUILD" md5="f078fe714342aa680f5babdf52815b63" size="936" mtime="1611260073" />
<entry name="parallel-20210122.tar.bz2" md5="86e78bbb2d820c2a23bcac06ec902204" size="2129334" mtime="1611260073" />
<entry name="parallel.spec" md5="73c304015393921bb524310ecc3f68da" size="4876" mtime="1611258952" />
<entry name="parallel_20210122.dsc" md5="327c5825cefbbf7674e1a668e57ef499" size="556" mtime="1611258953" />
<entry name="parallel_20210122.tar.gz" md5="f0b8b80919399ef704eb6d00d03f3a68" size="2319784" mtime="1611258953" />


@@ -20,6 +20,10 @@ doc_DATA = parallel.html env_parallel.html sem.html sql.html \
niceload.texi parallel_tutorial.texi parallel_book.texi \
parallel_design.texi parallel_alternatives.texi parcat.texi \
parset.texi parsort.texi \
parallel.rst env_parallel.rst sem.rst sql.rst \
niceload.rst parallel_tutorial.rst parallel_book.rst \
parallel_design.rst parallel_alternatives.rst parcat.rst \
parset.rst parsort.rst \
parallel.pdf env_parallel.pdf sem.pdf sql.pdf niceload.pdf \
parallel_tutorial.pdf parallel_book.pdf parallel_design.pdf \
parallel_alternatives.pdf parcat.pdf parset.pdf parsort.pdf \
@@ -231,6 +235,54 @@ parsort.texi: parsort
pod2texi --output="$(srcdir)"/parsort.texi "$(srcdir)"/parsort \
|| echo "Warning: pod2texi not found. Using old parsort.texi"
parallel.rst: parallel.pod
pod2rst --outfile "$(srcdir)"/parallel.rst --infile="$(srcdir)"/parallel.pod \
|| echo "Warning: pod2rst not found. Using old parallel.rst"
env_parallel.rst: env_parallel.pod
pod2rst --outfile "$(srcdir)"/env_parallel.rst --infile="$(srcdir)"/env_parallel.pod \
|| echo "Warning: pod2rst not found. Using old env_parallel.rst"
parallel_tutorial.rst: parallel_tutorial.pod
pod2rst --outfile "$(srcdir)"/parallel_tutorial.rst --infile="$(srcdir)"/parallel_tutorial.pod \
|| echo "Warning: pod2rst not found. Using old parallel_tutorial.rst"
parallel_book.rst: parallel_book.pod
pod2rst --outfile "$(srcdir)"/parallel_book.rst --infile="$(srcdir)"/parallel_book.pod \
|| echo "Warning: pod2rst not found. Using old parallel_book.rst"
parallel_design.rst: parallel_design.pod
pod2rst --outfile "$(srcdir)"/parallel_design.rst --infile="$(srcdir)"/parallel_design.pod \
|| echo "Warning: pod2rst not found. Using old parallel_design.rst"
parallel_alternatives.rst: parallel_alternatives.pod
pod2rst --outfile "$(srcdir)"/parallel_alternatives.rst --infile="$(srcdir)"/parallel_alternatives.pod \
|| echo "Warning: pod2rst not found. Using old parallel_alternatives.rst"
sem.rst: sem.pod
pod2rst --outfile "$(srcdir)"/sem.rst --infile="$(srcdir)"/sem.pod \
|| echo "Warning: pod2rst not found. Using old sem.rst"
sql.rst: sql
pod2rst --outfile "$(srcdir)"/sql.rst --infile="$(srcdir)"/sql \
|| echo "Warning: pod2rst not found. Using old sql.rst"
niceload.rst: niceload.pod
pod2rst --outfile "$(srcdir)"/niceload.rst --infile="$(srcdir)"/niceload.pod \
|| echo "Warning: pod2rst not found. Using old niceload.rst"
parcat.rst: parcat.pod
pod2rst --outfile "$(srcdir)"/parcat.rst --infile="$(srcdir)"/parcat.pod \
|| echo "Warning: pod2rst not found. Using old parcat.rst"
parset.rst: parset.pod
pod2rst --outfile "$(srcdir)"/parset.rst --infile="$(srcdir)"/parset.pod \
|| echo "Warning: pod2rst not found. Using old parset.rst"
parsort.rst: parsort
pod2rst --outfile "$(srcdir)"/parsort.rst --infile="$(srcdir)"/parsort \
|| echo "Warning: pod2rst not found. Using old parsort.rst"
parallel.pdf: parallel.pod
pod2pdf --output-file "$(srcdir)"/parallel.pdf "$(srcdir)"/parallel.pod --title "GNU Parallel" \
|| echo "Warning: pod2pdf not found. Using old parallel.pdf"


@@ -247,6 +247,10 @@ bin_SCRIPTS = parallel sql niceload parcat parset parsort \
@DOCUMENTATION_TRUE@ niceload.texi parallel_tutorial.texi parallel_book.texi \
@DOCUMENTATION_TRUE@ parallel_design.texi parallel_alternatives.texi parcat.texi \
@DOCUMENTATION_TRUE@ parset.texi parsort.texi \
@DOCUMENTATION_TRUE@ parallel.rst env_parallel.rst sem.rst sql.rst \
@DOCUMENTATION_TRUE@ niceload.rst parallel_tutorial.rst parallel_book.rst \
@DOCUMENTATION_TRUE@ parallel_design.rst parallel_alternatives.rst parcat.rst \
@DOCUMENTATION_TRUE@ parset.rst parsort.rst \
@DOCUMENTATION_TRUE@ parallel.pdf env_parallel.pdf sem.pdf sql.pdf niceload.pdf \
@DOCUMENTATION_TRUE@ parallel_tutorial.pdf parallel_book.pdf parallel_design.pdf \
@DOCUMENTATION_TRUE@ parallel_alternatives.pdf parcat.pdf parset.pdf parsort.pdf \
@@ -826,6 +830,54 @@ parsort.texi: parsort
pod2texi --output="$(srcdir)"/parsort.texi "$(srcdir)"/parsort \
|| echo "Warning: pod2texi not found. Using old parsort.texi"
parallel.rst: parallel.pod
pod2rst --outfile "$(srcdir)"/parallel.rst --infile="$(srcdir)"/parallel.pod \
|| echo "Warning: pod2rst not found. Using old parallel.rst"
env_parallel.rst: env_parallel.pod
pod2rst --outfile "$(srcdir)"/env_parallel.rst --infile="$(srcdir)"/env_parallel.pod \
|| echo "Warning: pod2rst not found. Using old env_parallel.rst"
parallel_tutorial.rst: parallel_tutorial.pod
pod2rst --outfile "$(srcdir)"/parallel_tutorial.rst --infile="$(srcdir)"/parallel_tutorial.pod \
|| echo "Warning: pod2rst not found. Using old parallel_tutorial.rst"
parallel_book.rst: parallel_book.pod
pod2rst --outfile "$(srcdir)"/parallel_book.rst --infile="$(srcdir)"/parallel_book.pod \
|| echo "Warning: pod2rst not found. Using old parallel_book.rst"
parallel_design.rst: parallel_design.pod
pod2rst --outfile "$(srcdir)"/parallel_design.rst --infile="$(srcdir)"/parallel_design.pod \
|| echo "Warning: pod2rst not found. Using old parallel_design.rst"
parallel_alternatives.rst: parallel_alternatives.pod
pod2rst --outfile "$(srcdir)"/parallel_alternatives.rst --infile="$(srcdir)"/parallel_alternatives.pod \
|| echo "Warning: pod2rst not found. Using old parallel_alternatives.rst"
sem.rst: sem.pod
pod2rst --outfile "$(srcdir)"/sem.rst --infile="$(srcdir)"/sem.pod \
|| echo "Warning: pod2rst not found. Using old sem.rst"
sql.rst: sql
pod2rst --outfile "$(srcdir)"/sql.rst --infile="$(srcdir)"/sql \
|| echo "Warning: pod2rst not found. Using old sql.rst"
niceload.rst: niceload.pod
pod2rst --outfile "$(srcdir)"/niceload.rst --infile="$(srcdir)"/niceload.pod \
|| echo "Warning: pod2rst not found. Using old niceload.rst"
parcat.rst: parcat.pod
pod2rst --outfile "$(srcdir)"/parcat.rst --infile="$(srcdir)"/parcat.pod \
|| echo "Warning: pod2rst not found. Using old parcat.rst"
parset.rst: parset.pod
pod2rst --outfile "$(srcdir)"/parset.rst --infile="$(srcdir)"/parset.pod \
|| echo "Warning: pod2rst not found. Using old parset.rst"
parsort.rst: parsort
pod2rst --outfile "$(srcdir)"/parsort.rst --infile="$(srcdir)"/parsort \
|| echo "Warning: pod2rst not found. Using old parsort.rst"
parallel.pdf: parallel.pod
pod2pdf --output-file "$(srcdir)"/parallel.pdf "$(srcdir)"/parallel.pod --title "GNU Parallel" \
|| echo "Warning: pod2pdf not found. Using old parallel.pdf"


@@ -382,7 +382,7 @@ _parset_main() {
return 255
fi
if [ "$_parset_NAME" = "--version" ] ; then
echo "parset 20210122 (GNU parallel `parallel --minversion 1`)"
echo "parset 20210123 (GNU parallel `parallel --minversion 1`)"
echo "Copyright (C) 2007-2021 Ole Tange, http://ole.tange.dk and Free Software"
echo "Foundation, Inc."
echo "License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>"


@@ -384,7 +384,7 @@ _parset_main() {
return 255
fi
if [ "$_parset_NAME" = "--version" ] ; then
echo "parset 20210122 (GNU parallel `parallel --minversion 1`)"
echo "parset 20210123 (GNU parallel `parallel --minversion 1`)"
echo "Copyright (C) 2007-2021 Ole Tange, http://ole.tange.dk and Free Software"
echo "Foundation, Inc."
echo "License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>"


@@ -382,7 +382,7 @@ _parset_main() {
return 255
fi
if [ "$_parset_NAME" = "--version" ] ; then
echo "parset 20210122 (GNU parallel `parallel --minversion 1`)"
echo "parset 20210123 (GNU parallel `parallel --minversion 1`)"
echo "Copyright (C) 2007-2021 Ole Tange, http://ole.tange.dk and Free Software"
echo "Foundation, Inc."
echo "License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>"


@@ -365,7 +365,7 @@ _parset_main() {
return 255
fi
if [ "$_parset_NAME" = "--version" ] ; then
echo "parset 20210122 (GNU parallel `parallel --minversion 1`)"
echo "parset 20210123 (GNU parallel `parallel --minversion 1`)"
echo "Copyright (C) 2007-2021 Ole Tange, http://ole.tange.dk and Free Software"
echo "Foundation, Inc."
echo "License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>"


@@ -368,7 +368,7 @@ _parset_main() {
return 255
fi
if [ "$_parset_NAME" = "--version" ] ; then
echo "parset 20210122 (GNU parallel `parallel --minversion 1`)"
echo "parset 20210123 (GNU parallel `parallel --minversion 1`)"
echo "Copyright (C) 2007-2021 Ole Tange, http://ole.tange.dk and Free Software"
echo "Foundation, Inc."
echo "License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>"


@@ -382,7 +382,7 @@ _parset_main() {
return 255
fi
if [ "$_parset_NAME" = "--version" ] ; then
echo "parset 20210122 (GNU parallel `parallel --minversion 1`)"
echo "parset 20210123 (GNU parallel `parallel --minversion 1`)"
echo "Copyright (C) 2007-2021 Ole Tange, http://ole.tange.dk and Free Software"
echo "Foundation, Inc."
echo "License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>"


@@ -359,7 +359,7 @@ _parset_main() {
return 255
fi
if [ "$_parset_NAME" = "--version" ] ; then
echo "parset 20210122 (GNU parallel `parallel --minversion 1`)"
echo "parset 20210123 (GNU parallel `parallel --minversion 1`)"
echo "Copyright (C) 2007-2021 Ole Tange, http://ole.tange.dk and Free Software"
echo "Foundation, Inc."
echo "License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>"


@@ -23,7 +23,7 @@
use strict;
use Getopt::Long;
$Global::progname="niceload";
$Global::version = 20210122;
$Global::version = 20210123;
Getopt::Long::Configure("bundling","require_order");
get_options_from_array(\@ARGV) || die_usage();
if($opt::version) {


@@ -729,7 +729,7 @@ Even quoted newlines are parsed correctly:
When used with B<--pipe> only pass full CSV-records.
=item B<--delay> I<mytime> (beta testing)
=item B<--delay> I<mytime>
Delay starting next job by I<mytime>. GNU B<parallel> will pause
I<mytime> after starting each job. I<mytime> is normally in seconds,
@@ -863,6 +863,16 @@ Implies B<--pipe> unless B<--pipepart> is used.
See also: B<--cat>.
=item B<--filter> I<filter> (alpha testing)
Only run jobs where I<filter> is true. I<filter> can contain
replacement strings and Perl code. Example:
parallel --filter '{1} < {2}' echo ::: {1..3} ::: {1..3}
Runs: 1,2 1,3 2,3
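The selection that filter makes can be checked in plain POSIX shell, without GNU parallel installed (a minimal sketch enumerating the same two input sources):

```shell
# Enumerate the cross product of {1..3} x {1..3} and keep only the
# pairs where the first value is less than the second -- the same
# pairs --filter '{1} < {2}' selects above.
out=""
for a in 1 2 3; do
  for b in 1 2 3; do
    if [ "$a" -lt "$b" ]; then out="$out $a,$b"; fi
  done
done
out="${out# }"   # drop the leading space
echo "$out"      # -> 1,2 1,3 2,3
```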
=item B<--filter-hosts>
Remove down hosts. For each remote host: check that login through ssh
@@ -901,7 +911,7 @@ B<--group> is the default. Can be reversed with B<-u>.
See also: B<--line-buffer> B<--ungroup>
=item B<--group-by> I<val> (beta testing)
=item B<--group-by> I<val>
Group input by value. Combined with B<--pipe>/B<--pipepart>
B<--group-by> groups lines with the same value into a record.
@@ -1091,9 +1101,9 @@ B<--header :> is an alias for B<--header '.*\n'>.
If I<regexp> is a number, it is a fixed number of lines.
=item B<--hostgroups> (alpha testing)
=item B<--hostgroups> (beta testing)
=item B<--hgrp> (alpha testing)
=item B<--hgrp> (beta testing)
Enable hostgroups on arguments. If an argument contains '@' the string
after '@' will be removed and treated as a list of hostgroups on which
@@ -1430,7 +1440,7 @@ should retry a given job.
See also: B<--memsuspend>
=item B<--memsuspend> I<size> (alpha testing)
=item B<--memsuspend> I<size> (beta testing)
Suspend jobs when there is less than 2 * I<size> memory free. The
I<size> can be postfixed with K, M, G, T, P, k, m, g, t, or p which
@@ -1540,9 +1550,9 @@ of each job is saved in a file and the filename is then printed.
See also: B<--results>
=item B<--pipe> (beta testing)
=item B<--pipe>
=item B<--spreadstdin> (beta testing)
=item B<--spreadstdin>
Spread input to jobs on stdin (standard input). Read a block of data
from stdin (standard input) and give one block of data as input to one
@@ -1615,6 +1625,12 @@ Activate additional replacement strings: {+/} {+.} {+..} {+...} {..}
B<{##}> is the total number of jobs to be run. It is incompatible with
B<-X>/B<-m>/B<--xargs>.
B<{0%}>, B<{00%}>, B<{000%}>, ... B<{0..0%}> prefix the jobslot
number with zeros. (alpha testing)
B<{0#}>, B<{00#}>, B<{000#}>, ... B<{0..0#}> prefix sequence numbers
with zeros. (alpha testing)
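The padding these strings produce can be illustrated with plain printf (a sketch; the actual strings are expanded by GNU parallel itself):

```shell
# {000%} pads the jobslot to 3 digits; {000#} does the same for the
# sequence number. printf '%03d' shows the padding being applied.
slot=7
seqno=42
padded_slot=$(printf '%03d' "$slot")
padded_seq=$(printf '%03d' "$seqno")
echo "$padded_slot $padded_seq"   # -> 007 042
```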
B<{choose_k}> is inspired by n choose k: Given a list of n elements,
choose k. k is the number of input sources and n is the number of
arguments in an input source. The content of the input sources must
@@ -1906,7 +1922,7 @@ B<-.csv>/B<-.tsv> are special: It will give the file on stdout
(standard output).
B<JSON file output> (beta testing)
B<JSON file output>
If I<name> ends in B<.json> the output will be a JSON-file
named I<name>.
@@ -1915,7 +1931,7 @@ B<-.json> is special: It will give the file on stdout (standard
output).
B<Replacement string output file> (beta testing)
B<Replacement string output file>
If I<name> contains a replacement string and the replaced result does
not end in /, then the standard output will be stored in a file named
@@ -2477,7 +2493,7 @@ be overridden with B<--ssh>. It can also be set on a per server
basis (see B<--sshlogin>).
=item B<--sshdelay> I<mytime> (beta testing)
=item B<--sshdelay> I<mytime>
Delay starting next ssh by I<mytime>. GNU B<parallel> will not start
another ssh for the next I<mytime>.
@@ -2614,6 +2630,15 @@ Silent. The job to be run will not be printed. This is the default.
Can be reversed with B<-v>.
=item B<--template> I<file>=I<repl> (alpha testing)
=item B<--tmpl> I<file>=I<repl> (alpha testing)
Copy I<file> to I<repl>. All replacement strings in the contents of
I<file> will be replaced. All replacement strings in the name I<repl>
will be replaced.
=item B<--tty>
Open terminal tty. If GNU B<parallel> is used for starting a program
@@ -2975,7 +3000,9 @@ most likely do what is needed.
=back
=head1 EXAMPLE: Working as xargs -n1. Argument appending
=head1 EXAMPLES
=head2 EXAMPLE: Working as xargs -n1. Argument appending
GNU B<parallel> can work similar to B<xargs -n1>.
@@ -2992,7 +3019,7 @@ FUBAR in all files in this dir and subdirs:
Note B<-q> is needed because of the space in 'FOO BAR'.
=head1 EXAMPLE: Simple network scanner
=head2 EXAMPLE: Simple network scanner
B<prips> can generate IP-addresses from CIDR notation. With GNU
B<parallel> you can build a simple network scanner to see which
@@ -3003,7 +3030,7 @@ addresses respond to B<ping>:
'ping -c 1 {} >/dev/null && echo {}' 2>/dev/null
=head1 EXAMPLE: Reading arguments from command line
=head2 EXAMPLE: Reading arguments from command line
GNU B<parallel> can take the arguments from command line instead of
stdin (standard input). To compress all html files in the current dir
@@ -3016,7 +3043,7 @@ To convert *.wav to *.mp3 using LAME running one process per CPU run:
parallel lame {} -o {.}.mp3 ::: *.wav
=head1 EXAMPLE: Inserting multiple arguments
=head2 EXAMPLE: Inserting multiple arguments
When moving a lot of files like this: B<mv *.log destdir> you will
sometimes get the error:
@@ -3037,7 +3064,7 @@ In many shells you can also use B<printf>:
printf '%s\0' *.log | parallel -0 -m mv {} destdir
=head1 EXAMPLE: Context replace
=head2 EXAMPLE: Context replace
To remove the files I<pict0000.jpg> .. I<pict9999.jpg> you could do:
@@ -3059,7 +3086,7 @@ This will also only run B<rm> as many times needed to keep the command
line length short enough.
=head1 EXAMPLE: Compute intensive jobs and substitution
=head2 EXAMPLE: Compute intensive jobs and substitution
If ImageMagick is installed this will generate a thumbnail of a jpg
file:
@@ -3088,7 +3115,7 @@ make files like ./foo/bar_thumb.jpg:
parallel convert -geometry 120 {} {.}_thumb.jpg
=head1 EXAMPLE: Substitution and redirection
=head2 EXAMPLE: Substitution and redirection
This will generate an uncompressed version of .gz-files next to the .gz-file:
@@ -3104,7 +3131,7 @@ to be put in quotes, as they may otherwise be interpreted by the shell
and not given to GNU B<parallel>.
=head1 EXAMPLE: Composed commands
=head2 EXAMPLE: Composed commands
A job can consist of several commands. This will print the number of
files in each directory:
@@ -3139,7 +3166,7 @@ Find the files in a list that do not exist
cat file_list | parallel 'if [ ! -e {} ] ; then echo {}; fi'
=head1 EXAMPLE: Composed command with perl replacement string
=head2 EXAMPLE: Composed command with perl replacement string
You have a bunch of files. You want them sorted into dirs. The dir of
each file should be named after the first letter of the file name.
@@ -3147,7 +3174,7 @@ each file should be named the first letter of the file name.
parallel 'mkdir -p {=s/(.).*/$1/=}; mv {} {=s/(.).*/$1/=}' ::: *
=head1 EXAMPLE: Composed command with multiple input sources
=head2 EXAMPLE: Composed command with multiple input sources
You have a dir with files named as 24 hours in 5 minute intervals:
00:00, 00:05, 00:10 .. 23:55. You want to find the files missing:
@@ -3156,7 +3183,7 @@ You have a dir with files named as 24 hours in 5 minute intervals:
::: {00..23} ::: {00..55..5}
=head1 EXAMPLE: Calling Bash functions
=head2 EXAMPLE: Calling Bash functions
If the composed command is longer than a line, it becomes hard to
read. In Bash you can use functions. Just remember to B<export -f> the
@@ -3189,7 +3216,7 @@ can copy the full environment without having to B<export -f>
anything. See B<env_parallel>.
=head1 EXAMPLE: Function tester
=head2 EXAMPLE: Function tester
To test a program with different parameters:
@@ -3208,7 +3235,7 @@ If B<my_program> fails a red FAIL will be printed followed by the failing
command; otherwise a green OK will be printed followed by the command.
=head1 EXAMPLE: Continuously show the latest line of output
=head2 EXAMPLE: Continuously show the latest line of output
It can be useful to monitor the output of running jobs.
@@ -3219,7 +3246,7 @@ which the output of the job is printed in full:
3> >(perl -ne '$|=1;chomp;printf"%.'$COLUMNS's\r",$_." "x100')
=head1 EXAMPLE: Log rotate
=head2 EXAMPLE: Log rotate
Log rotation renames a logfile to an extension with a higher number:
log.1 becomes log.2, log.2 becomes log.3, and so on. The oldest log is
@@ -3231,7 +3258,7 @@ the log:
mv log log.1
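The rotation described above can be sketched in plain shell without parallel (files created in a temp dir purely for illustration):

```shell
# Rotate: log -> log.1, log.1 -> log.2, log.2 -> log.3.
# Process the highest numbers first so no file is overwritten.
dir=$(mktemp -d)
touch "$dir/log" "$dir/log.1" "$dir/log.2"
for i in 2 1; do
  if [ -e "$dir/log.$i" ]; then mv "$dir/log.$i" "$dir/log.$((i+1))"; fi
done
mv "$dir/log" "$dir/log.1"
ls "$dir"   # lists log.1 log.2 log.3
```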
=head1 EXAMPLE: Removing file extension when processing files
=head2 EXAMPLE: Removing file extension when processing files
When processing files, removing the file extension using B<{.}> is
often useful.
@@ -3255,7 +3282,7 @@ Put all converted in the same directory:
parallel lame {} -o mydir/{/.}.mp3
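B<{.}> and B<{/.}> correspond to standard shell parameter expansions; a minimal sketch of the equivalent transformations (plain shell, no parallel required):

```shell
# {.}  removes the extension:           mydir/sound.wav -> mydir/sound
# {/.} removes the path and extension:  mydir/sound.wav -> sound
f="mydir/sound.wav"
no_ext="${f%.*}"            # like {.}
base="${f##*/}"             # basename: sound.wav
base_no_ext="${base%.*}"    # like {/.}
echo "$no_ext $base_no_ext"   # -> mydir/sound sound
```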
=head1 EXAMPLE: Removing strings from the argument
=head2 EXAMPLE: Removing strings from the argument
If you have a directory with tar.gz files and want these extracted in
the corresponding dir (e.g. foo.tar.gz will be extracted in the dir
@@ -3277,7 +3304,7 @@ To remove a string anywhere you can use regular expressions with
parallel --plus echo {/demo_/} ::: demo_mycode remove_demo_here
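A sketch of the same removals in plain shell, using parameter expansion for the suffix and sed for the anywhere-removal (assuming the --plus strings behave as documented above):

```shell
# Remove a fixed suffix (like {%.tar.gz}) and remove a substring
# anywhere (like the --plus {/demo_/} replacement string).
f="foo.tar.gz"
stem="${f%.tar.gz}"                                    # -> foo
x=$(printf '%s' "remove_demo_here" | sed 's/demo_//')  # -> remove_here
echo "$stem $x"
```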
=head1 EXAMPLE: Download 24 images for each of the past 30 days
=head2 EXAMPLE: Download 24 images for each of the past 30 days
Let us assume a website stores images like:
@@ -3299,7 +3326,7 @@ B<$(date -d "today -$1 days" +%Y%m%d)> will give the dates in
YYYYMMDD with B<$1> days subtracted.
=head1 EXAMPLE: Download world map from NASA
=head2 EXAMPLE: Download world map from NASA
NASA provides tiles to download on earthdata.nasa.gov. Download tiles
for Blue Marble world map and create a 10240x20480 map.
@@ -3317,7 +3344,7 @@ for Blue Marble world map and create a 10240x20480 map.
convert -append line{0..19}.jpg world.jpg
=head1 EXAMPLE: Download Apollo-11 images from NASA using jq
=head2 EXAMPLE: Download Apollo-11 images from NASA using jq
Search NASA using their API to get JSON for images related to 'apollo
11' and has 'moon landing' in the description.
@@ -3344,7 +3371,7 @@ B<parallel> finally uses B<wget> to fetch the images.
parallel wget
=head1 EXAMPLE: Download video playlist in parallel
=head2 EXAMPLE: Download video playlist in parallel
B<youtube-dl> is an excellent tool to download videos. It cannot,
however, download videos in parallel. This takes a playlist and
@@ -3357,7 +3384,7 @@ downloads 10 videos in parallel.
youtube-dl --playlist-start {#} --playlist-end {#} '"https://$url"'
=head1 EXAMPLE: Prepend last modified date (ISO8601) to file name
=head2 EXAMPLE: Prepend last modified date (ISO8601) to file name
parallel mv {} '{= $a=pQ($_); $b=$_;' \
'$_=qx{date -r "$a" +%FT%T}; chomp; $_="$_ $b" =}' ::: *
@@ -3365,7 +3392,7 @@ downloads 10 videos in parallel.
B<{=> and B<=}> mark a perl expression. B<pQ> perl-quotes the
string. B<date +%FT%T> is the date in ISO8601 with time.
=head1 EXAMPLE: Save output in ISO8601 dirs
=head2 EXAMPLE: Save output in ISO8601 dirs
Save output from B<ps aux> every second into dirs named
yyyy-mm-ddThh:mm:ss+zz:zz.
@@ -3374,7 +3401,7 @@ yyyy-mm-ddThh:mm:ss+zz:zz.
--results '{= $_=`date -Isec`; chomp=}/' ps aux
=head1 EXAMPLE: Digital clock with "blinking" :
=head2 EXAMPLE: Digital clock with "blinking" :
The : in a digital clock blinks. To make every other line have a ':'
and the rest a ' ' a perl expression is used to look at the 3rd input
@@ -3384,7 +3411,7 @@ source. If the value modulo 2 is 1: Use ":" otherwise use " ":
::: {0..12} ::: {0..5} ::: {0..9}
=head1 EXAMPLE: Aggregating content of files
=head2 EXAMPLE: Aggregating content of files
This:
@@ -3404,7 +3431,7 @@ So you end up with x1z1 .. x5z5 each containing the content of all
values of y.
=head1 EXAMPLE: Breadth first parallel web crawler/mirrorer
=head2 EXAMPLE: Breadth first parallel web crawler/mirrorer
This script below will crawl and mirror a URL in parallel. It
downloads first pages that are 1 click down, then 2 clicks down, then
@@ -3451,7 +3478,7 @@ URLs and the process is started over until no unseen links are found.
rm -f $URLLIST $URLLIST2 $SEEN
=head1 EXAMPLE: Process files from a tar file while unpacking
=head2 EXAMPLE: Process files from a tar file while unpacking
If the files to be processed are in a tar file then unpacking one file
and processing it immediately may be faster than first unpacking all
@@ -3464,7 +3491,7 @@ handing it to GNU B<parallel>.
handing it to GNU B<parallel>.
=head1 EXAMPLE: Rewriting a for-loop and a while-read-loop
=head2 EXAMPLE: Rewriting a for-loop and a while-read-loop
for-loops like this:
@@ -3529,7 +3556,7 @@ can both be rewritten as:
export -f doit
cat list | parallel doit
=head1 EXAMPLE: Rewriting nested for-loops
=head2 EXAMPLE: Rewriting nested for-loops
Nested for-loops like this:
@@ -3556,7 +3583,7 @@ can be written like this:
parallel echo {1} {2} ::: red green blue ::: S M L XL XXL | sort
=head1 EXAMPLE: Finding the lowest difference between files
=head2 EXAMPLE: Finding the lowest difference between files
B<diff> is good for finding differences in text files. B<diff | wc -l>
gives an indication of the size of the difference. To find the
@@ -3568,7 +3595,7 @@ This way it is possible to see if some files are closer to other
files.
=head1 EXAMPLE: for-loops with column names
=head2 EXAMPLE: for-loops with column names
When doing multiple nested for-loops it can be easier to keep track of
the loop variable if it is named instead of just having a number. Use
@@ -3584,7 +3611,7 @@ This also works if the input file is a file with columns:
parallel --colsep '\t' --header : echo {Name} {E-mail address}
=head1 EXAMPLE: All combinations in a list
=head2 EXAMPLE: All combinations in a list
GNU B<parallel> makes all combinations when given two lists.
@@ -3600,7 +3627,7 @@ B<{choose_k}> works for any number of input sources:
parallel --plus echo {choose_k} ::: A B C D ::: A B C D ::: A B C D
=head1 EXAMPLE: From a to b and b to c
=head2 EXAMPLE: From a to b and b to c
Assume you have input like:
@@ -3628,7 +3655,7 @@ If the input is in the array $a here are two solutions:
parallel echo {1} - {2} ::: "${a[@]::${#a[@]}-1}" :::+ "${a[@]:1}"
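The pairing of each element with its successor can be emulated in plain POSIX shell (a sketch using positional parameters instead of bash arrays):

```shell
# Pair A-B, B-C, C-D: walk the list keeping the previous element.
set -- A B C D
prev=$1; shift
out=""
for x in "$@"; do
  out="$out$prev - $x; "
  prev=$x
done
out="${out%; }"   # drop the trailing separator
echo "$out"       # -> A - B; B - C; C - D
```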
=head1 EXAMPLE: Count the differences between all files in a dir
=head2 EXAMPLE: Count the differences between all files in a dir
Using B<--results> the results are saved in /tmp/diffcount*.
@ -3639,7 +3666,7 @@ To see the difference between file A and file B look at the file
'/tmp/diffcount/1/A/2/B'.
=head1 EXAMPLE: Speeding up fast jobs
=head2 EXAMPLE: Speeding up fast jobs
Starting a job on the local machine takes around 10 ms. This can be a
big overhead if the job itself only takes a few ms to run. Often you can group
@ -3682,7 +3709,7 @@ The overhead is 100000 times smaller namely around 100 nanoseconds per
job.
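The grouping idea summarized in this hunk can be sketched as follows
(timings are illustrative; the exact overhead depends on the system):

```shell
# One process per argument pays the ~10 ms startup cost for every job:
#   seq 100000 | parallel echo
# Grouping lets one shell loop over a whole block of input, paying the
# startup cost once per block instead of once per line:
#   seq 100000 | parallel --pipe --block 10k 'while read x; do echo "$x"; done'
# The inner loop itself is plain shell:
seq 5 | while read x; do echo "$x"; done
```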
=head1 EXAMPLE: Using shell variables
=head2 EXAMPLE: Using shell variables
When using shell variables you need to quote them correctly as they
may otherwise be interpreted by the shell.
@ -3721,7 +3748,7 @@ If you use them in a function you just quote as you normally would do:
parallel myfunc ::: '!'
=head1 EXAMPLE: Group output lines
=head2 EXAMPLE: Group output lines
When running jobs that output data, you often do not want the output
of multiple jobs to run together. GNU B<parallel> defaults to grouping
@ -3742,7 +3769,7 @@ Compare the output of:
https://ftpmirror.gnu.org/parallel/parallel-20{}0822.tar.bz2 \
::: {12..16}
=head1 EXAMPLE: Tag output lines
=head2 EXAMPLE: Tag output lines
GNU B<parallel> groups the output lines, but it can be hard to see
where the different jobs begin. B<--tag> prepends the argument to make
@ -3763,7 +3790,7 @@ Check the uptime of the servers in I<~/.parallel/sshloginfile>:
parallel --tag -S .. --nonall uptime
=head1 EXAMPLE: Colorize output
=head2 EXAMPLE: Colorize output
Give each job a new color. Most terminals support ANSI colors with the
escape code "\033[30;3Xm" where 0 <= X <= 7:
@ -3778,7 +3805,7 @@ To get rid of the initial \t (which comes from B<--tagstring>):
... | perl -pe 's/\t//'
=head1 EXAMPLE: Keep order of output same as order of input
=head2 EXAMPLE: Keep order of output same as order of input
Normally the output of a job will be printed as soon as it
completes. Sometimes you want the order of the output to remain the
@ -3826,7 +3853,7 @@ combined in the correct order.
{}0000000-{}9999999 http://example.com/the/big/file > file
=head1 EXAMPLE: Parallel grep
=head2 EXAMPLE: Parallel grep
B<grep -r> greps recursively through directories. On multicore CPUs
GNU B<parallel> can often speed this up.
@ -3836,7 +3863,7 @@ GNU B<parallel> can often speed this up.
This will run 1.5 job per CPU, and give 1000 arguments to B<grep>.
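The command itself falls outside this hunk; the pattern it describes
(1.5 jobs per CPU, 1000 filenames per B<grep>) can be sketched like
this, with STRING standing in for the search pattern:

```shell
# -j150%     : run 1.5 jobs per CPU core
# -n 1000 -m : hand up to 1000 filenames to each grep invocation
# -k         : keep output in the order the jobs were started
find . -type f | parallel -k -j150% -n 1000 -m grep -H -n STRING {}
```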
=head1 EXAMPLE: Grepping n lines for m regular expressions.
=head2 EXAMPLE: Grepping n lines for m regular expressions.
The simplest solution to grep a big file for a lot of regexps is:
@ -3860,7 +3887,7 @@ on the disk system it may be faster or slower to parallelize. The only
way to know for certain is to test and measure.
=head2 Limiting factor: RAM
=head3 Limiting factor: RAM
The normal B<grep -f regexps.txt bigfile> works no matter the size of
bigfile, but if regexps.txt is so big it cannot fit into memory, then
@ -3911,7 +3938,7 @@ If you can live with duplicated lines and wrong order, it is faster to do:
parallel --pipepart -a regexps.txt --block $percpu --compress \
grep -F -f - bigfile
=head2 Limiting factor: CPU
=head3 Limiting factor: CPU
If the CPU is the limiting factor, parallelization should be done on
the regexps:
@ -3941,13 +3968,13 @@ combine the two using B<--cat>:
If a line matches multiple regexps, the line may be duplicated.
=head2 Bigger problem
=head3 Bigger problem
If the problem is too big to be solved by this, you are probably ready
for Lucene.
=head1 EXAMPLE: Using remote computers
=head2 EXAMPLE: Using remote computers
To run commands on a remote computer SSH needs to be set up and you
must be able to login without entering a password (The commands
@ -4021,7 +4048,7 @@ has 8 CPUs.
seq 10 | parallel --sshlogin 8/server.example.com echo
=head1 EXAMPLE: Transferring of files
=head2 EXAMPLE: Transferring of files
To recompress gzipped files with B<bzip2> using a remote computer run:
@ -4092,7 +4119,7 @@ the special short hand I<-S ..> can be used:
--trc {.}.bz2 "zcat {} | bzip2 -9 >{.}.bz2"
=head1 EXAMPLE: Distributing work to local and remote computers
=head2 EXAMPLE: Distributing work to local and remote computers
Convert *.mp3 to *.ogg running one process per CPU on local computer
and server2:
@ -4101,7 +4128,7 @@ and server2:
'mpg321 -w - {} | oggenc -q0 - -o {.}.ogg' ::: *.mp3
=head1 EXAMPLE: Running the same command on remote computers
=head2 EXAMPLE: Running the same command on remote computers
To run the command B<uptime> on remote computers you can do:
@ -4118,7 +4145,7 @@ output.
If you have a lot of hosts use '-j0' to access more hosts in parallel.
=head1 EXAMPLE: Running 'sudo' on remote computers
=head2 EXAMPLE: Running 'sudo' on remote computers
Put the password into passwordfile then run:
@ -4126,7 +4153,7 @@ Put the password into passwordfile then run:
-S user@server1,user@server2 sudo -S ls -l /root
=head1 EXAMPLE: Using remote computers behind NAT wall
=head2 EXAMPLE: Using remote computers behind NAT wall
If the workers are behind a NAT wall, you need some trickery to get to
them.
@ -4155,7 +4182,7 @@ can simply:
parallel -S host1,host2,host3 echo ::: This does work
=head2 No jumphost, but port forwards
=head3 No jumphost, but port forwards
If there is no jumphost but each server has port 22 forwarded from the
firewall (e.g. the firewall's port 22001 = port 22 on host1, 22002 = host2,
@ -4174,7 +4201,7 @@ And then use host{1..3}.v as normal hosts:
parallel -S host1.v,host2.v,host3.v echo ::: a b c
=head2 No jumphost, no port forwards
=head3 No jumphost, no port forwards
If ports cannot be forwarded, you need some sort of VPN to traverse
the NAT-wall. TOR is one option for that, as it is very easy to get
@ -4202,7 +4229,7 @@ If not all hosts are accessible through TOR:
See more B<ssh> tricks on https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Proxies_and_Jump_Hosts
=head1 EXAMPLE: Parallelizing rsync
=head2 EXAMPLE: Parallelizing rsync
B<rsync> is a great tool, but sometimes it will not fill up the
available bandwidth. Running multiple B<rsync> in parallel can fix
@ -4228,7 +4255,7 @@ are called digits.png (e.g. 000000.png) you might be able to do:
seq -w 0 99 | parallel rsync -Havessh fooserver:src/*{}.png destdir/
=head1 EXAMPLE: Use multiple inputs in one command
=head2 EXAMPLE: Use multiple inputs in one command
Copy files like foo.es.ext to foo.ext:
@ -4257,7 +4284,7 @@ Alternative version:
find . -type f | sort | parallel convert {} {#}.png
=head1 EXAMPLE: Use a table as input
=head2 EXAMPLE: Use a table as input
Content of table_file.tsv:
@ -4279,7 +4306,7 @@ the columns. To keep the spaces:
parallel -a table_file.tsv --trim n --colsep '\t' cmd -o {2} -i {1}
=head1 EXAMPLE: Output to database
=head2 EXAMPLE: Output to database
GNU B<parallel> can output to a database table and a CSV-file:
@ -4319,7 +4346,7 @@ Or MySQL:
s/\\([\\tn])/$s{$1}/g;' mytable.tsv
=head1 EXAMPLE: Output to CSV-file for R
=head2 EXAMPLE: Output to CSV-file for R
If you have no need for the advanced job distribution control that a
database provides, but you simply want output into a CSV file that you
@ -4332,7 +4359,7 @@ can read into R or LibreCalc, then you can use B<--results>:
> write(as.character(mydf[2,c("Stdout")]),'')
=head1 EXAMPLE: Use XML as input
=head2 EXAMPLE: Use XML as input
The show Aflyttet on Radio 24syv publishes an RSS feed with their audio
podcasts on: http://arkiv.radio24syv.dk/audiopodcast/channel/4466232
@ -4345,7 +4372,7 @@ using GNU B<parallel>:
parallel -u wget '{= s/ url="//; s/"//; =}'
=head1 EXAMPLE: Run the same command 10 times
=head2 EXAMPLE: Run the same command 10 times
If you want to run the same command with the same arguments 10 times
in parallel you can do:
@ -4353,7 +4380,7 @@ in parallel you can do:
seq 10 | parallel -n0 my_command my_args
=head1 EXAMPLE: Working as cat | sh. Resource inexpensive jobs and evaluation
=head2 EXAMPLE: Working as cat | sh. Resource inexpensive jobs and evaluation
GNU B<parallel> can work similarly to B<cat | sh>.
@ -4378,7 +4405,7 @@ To run 100 processes simultaneously do:
As there is no I<command>, the jobs will be evaluated by the shell.
=head1 EXAMPLE: Call program with FASTA sequence
=head2 EXAMPLE: Call program with FASTA sequence
FASTA files have the format:
@ -4397,7 +4424,7 @@ To call B<myprog> with the sequence as argument run:
'read a; echo Name: "$a"; myprog $(tr -d "\n")'
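The start of the command sits outside this hunk; the whole pattern
described can be sketched as follows (B<myprog> and I<file.fasta> are
placeholders):

```shell
# Split on records starting with '>', hand each record to a shell that
# reads the header line and passes the concatenated sequence to myprog:
cat file.fasta |
  parallel --pipe --recstart '>' --block 1k \
    'read a; echo Name: "$a"; myprog $(tr -d "\n")'
```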
=head1 EXAMPLE: Processing a big file using more CPUs
=head2 EXAMPLE: Processing a big file using more CPUs
To process a big file or some output you can use B<--pipe> to split up
the data into blocks and pipe the blocks into the processing program.
@ -4434,7 +4461,7 @@ the parts directly to the program:
parallel -Xj1 sort -m {} ';' rm {} >bigfile.sort
=head1 EXAMPLE: Grouping input lines
=head2 EXAMPLE: Grouping input lines
When processing with B<--pipe> you may have lines grouped by a
value. Here is I<my.csv>:
@ -4473,7 +4500,7 @@ If your program can process multiple customers replace B<-N1> with a
reasonable B<--blocksize>.
=head1 EXAMPLE: Running more than 250 jobs workaround
=head2 EXAMPLE: Running more than 250 jobs workaround
If you need to run a massive amount of jobs in parallel, then you will
likely hit the filehandle limit which is often around 250 jobs. If you
@ -4492,7 +4519,7 @@ RAM to do this, and you may need to increase /proc/sys/kernel/pid_max):
parallel --pipe -N 250 --roundrobin -j250 parallel -j250 your_prg
=head1 EXAMPLE: Working as mutex and counting semaphore
=head2 EXAMPLE: Working as mutex and counting semaphore
The command B<sem> is an alias for B<parallel --semaphore>.
@ -4525,7 +4552,7 @@ same time:
seq 3 | parallel sem --id mymutex sed -i -e '1i{}' myfile
=head1 EXAMPLE: Mutex for a script
=head2 EXAMPLE: Mutex for a script
Assume a script is called from cron or from a web service, but only
one instance can be run at a time. With B<sem> and B<--shebang-wrap>
@ -4556,7 +4583,7 @@ Here B<python>:
print "exclusively";
=head1 EXAMPLE: Start editor with filenames from stdin (standard input)
=head2 EXAMPLE: Start editor with filenames from stdin (standard input)
You can use GNU B<parallel> to start interactive programs like emacs or vi:
@ -4567,7 +4594,7 @@ If there are more files than will fit on a single command line, the
editor will be started again with the remaining files.
=head1 EXAMPLE: Running sudo
=head2 EXAMPLE: Running sudo
B<sudo> requires a password to run a command as root. It caches the
access, so you only need to enter the password again if you have not
@ -4590,7 +4617,7 @@ or:
This way you only have to enter the sudo password once.
=head1 EXAMPLE: GNU Parallel as queue system/batch manager
=head2 EXAMPLE: GNU Parallel as queue system/batch manager
GNU B<parallel> can work as a simple job queue system or batch manager.
The idea is to put the jobs into a file and have GNU B<parallel> read
@ -4650,7 +4677,7 @@ the output of second completed job will only be printed when job 12
has started.
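The queue commands themselves are outside this hunk; the basic pattern
is sketched below (the file name I<jobqueue> is illustrative):

```shell
# With tail -f the queue stays open and jobs run as they are appended:
#   true > jobqueue
#   tail -n+0 -f jobqueue | parallel &
#   echo 'echo my job' >> jobqueue
# A finite variant of the same pattern (runs whatever is queued, then exits):
printf 'echo job 1\necho job 2\n' > jobqueue
tail -n+0 jobqueue | parallel -k
rm -f jobqueue
```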
=head1 EXAMPLE: GNU Parallel as dir processor
=head2 EXAMPLE: GNU Parallel as dir processor
If you have a dir in which users drop files that need to be processed
you can do this on GNU/Linux (If you know what B<inotifywait> is
@ -4676,7 +4703,7 @@ Using GNU B<parallel> as dir processor has the same limitations as
using GNU B<parallel> as queue system/batch manager.
=head1 EXAMPLE: Locate the missing package
=head2 EXAMPLE: Locate the missing package
If you have downloaded source and tried compiling it, you may have seen:
@ -4902,7 +4929,7 @@ Dir where GNU B<parallel> stores config files, semaphores, and caches
information between invocations. Default: $HOME/.parallel.
=item $PARALLEL_ARGHOSTGROUPS
=item $PARALLEL_ARGHOSTGROUPS (beta testing)
When using B<--hostgroups> GNU B<parallel> sets this to the hostgroups
of the job.

View file

@ -2777,7 +2777,7 @@ https://gitlab.com/netikras/bthread (Last checked: 2021-01)
Summary (see legend above):
I1 - - - - - I7
M1 - - - - M6
- O2 O3 - - O6 - N/A N/A O10
- O2 O3 - - O6 - x x O10
E1 - - - - - -
- - - - - - - - -
- -
@ -2877,7 +2877,7 @@ with replacement strings. Such as:
that can be used like:
parallel --header : --tmpl my.tmpl {#}.t myprog {#}.t \
parallel --header : --tmpl my.tmpl={#}.t myprog {#}.t \
::: x 1 2 3 ::: y 1 2 3
Filtering may also be supported as:
@ -2891,23 +2891,62 @@ which will basically do:
https://github.com/eviatarbach/parasweep (Last checked: 2021-01)
=head2 Todo
=head2 DIFFERENCES BETWEEN parallel-bash AND GNU Parallel
Summary (see legend above):
I1 I2 - - - - -
- - M3 - - M6
- O2 O3 - O5 O6 - O8 x O10
E1 - - - - - -
- - - - - - - - -
- -
B<parallel-bash> is written in pure bash. It is really fast (overhead
of ~0.05 ms/job compared to GNU B<parallel>'s ~3 ms/job). So if your
jobs are extremely short lived, and you can live with its quite
limited command interface, this may be useful.
B<parallel-bash> will not start the first job until it has read all
the input. The input can be at most 20935 lines, and the lines cannot
all be empty.
Ctrl-C does not stop spawning new jobs. Ctrl-Z does not suspend
running jobs.
=head3 EXAMPLES FROM parallel-bash
1$ some_input | parallel-bash -p 5 -c echo
1$ some_input | parallel -j 5 echo
2$ parallel-bash -p 5 -c echo < some_file
2$ parallel -j 5 echo < some_file
3$ parallel-bash -p 5 -c echo <<< 'some string'
3$ parallel -j 5 echo <<< 'some string'
4$ something | parallel-bash -p 5 -c echo {} {}
4$ something | parallel -j 5 echo {} {}
https://reposhub.com/python/command-line-tools/Akianonymus-parallel-bash.html
(Last checked: 2021-02)
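A rough way to check the overhead figures quoted above (both tools are
assumed to be on PATH; absolute numbers will vary from system to
system):

```shell
# Time 1000 trivial jobs with each tool; the wall-clock difference
# divided by 1000 approximates the per-job overhead gap.
time (seq 1000 | parallel -j4 true)
time (seq 1000 | parallel-bash -p 4 -c true)
```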
=head2 Todo
https://github.com/Nukesor/pueue
PASH: Light-touch Data-Parallel Shell Processing
https://arxiv.org/pdf/2012.15443.pdf KumQuat
https://arxiv.org/pdf/2007.09436.pdf
https://arxiv.org/pdf/2007.09436.pdf PaSH: Light-touch Data-Parallel Shell Processing
https://github.com/JeiKeiLim/simple_distribute_job
https://github.com/Akianonymus/parallel-bash
https://github.com/reggi/pkgrun
https://github.com/reggi/pkgrun - not obvious how to use
https://github.com/benoror/better-npm-run - not obvious how to use
@ -2919,6 +2958,11 @@ https://github.com/flesler/parallel
https://github.com/Julian/Verge
http://manpages.ubuntu.com/manpages/xenial/man1/tsp.1.html
http://vicerveza.homeunix.net/~viric/soft/ts/
=head1 TESTING OTHER TOOLS

View file

@ -1153,7 +1153,7 @@ multiple B<:::>.
When B<:::> was chosen, B<::::> came as a fairly natural extension.
Linking input sources meant having to decide for some way to indicate
linking of B<:::> and B<::::>. B<:::+> and B<::::+> was chosen, so
linking of B<:::> and B<::::>. B<:::+> and B<::::+> were chosen, so
that they were similar to B<:::> and B<::::>.

View file

@ -118,7 +118,7 @@ GetOptions(
"help" => \$opt::dummy,
) || exit(255);
$Global::progname = ($0 =~ m:(^|/)([^/]+)$:)[1];
$Global::version = 20210122;
$Global::version = 20210123;
if($opt::version) { version(); exit 0; }
@Global::sortoptions =
shell_quote(@ARGV_before[0..($#ARGV_before-$#ARGV-1)]);

View file

@ -574,7 +574,7 @@ $Global::Initfile && unlink $Global::Initfile;
exit ($err);
sub parse_options {
$Global::version = 20210122;
$Global::version = 20210123;
$Global::progname = 'sql';
# This must be done first as this may exec myself

View file

@ -12,6 +12,53 @@ export -f stdsort
# Test amount of parallelization
# parallel --shuf --jl /tmp/myjl -j1 'export JOBS={1};'bash tests-to-run/parallel-local-0.3s.sh ::: {1..16} ::: {1..5}
par_crnl() {
echo '### Give a warning if input is DOS-ascii'
printf "b\r\nc\r\nd\r\ne\r\nf\r\n" | stdout parallel -k echo {}a
echo This should give no warning because -d is set
printf "b\r\nc\r\nd\r\ne\r\nf\r\n" | parallel -k -d '\r\n' echo {}a
echo This should give no warning because line2 has newline only
printf "b\r\nc\nd\r\ne\r\nf\r\n" | parallel -k echo {}a
}
par_tmpl() {
tmp1=$(mktemp)
tmp2=$(mktemp)
cat <<'EOF' > "$tmp1"
Template1
Xval: {x}
Yval: {y}
FixedValue: 9
Seq: {#}
Slot: {%}
# x with 2 decimals
DecimalX: {=x $_=sprintf("%.2f",$_) =}
TenX: {=x $_=$_*10 =}
RandomVal: {=1 $_=rand() =}
EOF
cat <<'EOF' > "$tmp2"
Template2
X,Y: {x},{y}
val1,val2: {1},{2}
EOF
myprog() {
echo "$@"
cat "$@"
}
export -f myprog
parallel -k --header : --tmpl "$tmp1"={#}.t1 \
--tmpl "$tmp2"=/tmp/tmpl-{x}-{y}.t2 \
myprog {#}.t1 /tmp/tmpl-{x}-{y}.t2 \
::: x 1.1 2.22 3.333 ::: y 111.111 222.222 333.333 |
perl -pe 's/0.\d{13,}/0.RANDOM_NUMBER/' |
perl -pe 's/Slot: \d/Slot: X/'
rm "$tmp1" "$tmp2"
}
par_resume_k() {
echo '### --resume -k'
tmp=$(tempfile)
@ -795,12 +842,6 @@ par_empty_input_on_stdin() {
true | stdout parallel --shuf echo
}
par_tee_too_many_args() {
echo '### Fail if there are more arguments than --jobs'
seq 11 | parallel -k --tag --pipe -j4 --tee grep {} ::: {1..4}
seq 11 | parallel -k --tag --pipe -j4 --tee grep {} ::: {1..5}
}
par_space_envvar() {
echo "### bug: --gnu was ignored if env var started with space: PARALLEL=' --gnu'"
export PARALLEL=" -v" && parallel echo ::: 'space in envvar OK'
@ -909,11 +950,6 @@ par_cr_newline_header() {
parallel --colsep , --header : echo {foo}
}
par_plus_slot_replacement() {
echo '### show {slot}'
parallel -k --plus echo '{slot}=$PARALLEL_JOBSLOT={%}' ::: A B C
}
par_PARALLEL_HOME_with_+() {
echo 'bug #59453: PARALLEL_HOME with plus sign causes error: config not readable'
tmp=$(mktemp -d)

View file

@ -539,7 +539,7 @@ par__pipepart_spawn() {
echo '### bug #46214: Using --pipepart doesnt spawn multiple jobs in version 20150922'
seq 1000000 > /tmp/num1000000
stdout parallel --pipepart --progress -a /tmp/num1000000 --block 10k -j0 true |
grep 1:local | perl -pe 's/\d\d\d/999/g; s/[2-9]/2+/g;'
grep 1:local | perl -pe 's/\d\d\d/999/g; s/\d\d+|[2-9]/2+/g;'
}
par__pipe_tee() {

View file

@ -4,16 +4,24 @@
# Each should be taking 1-3s and be possible to run in parallel
# I.e.: No race conditions, no logins
par_plus_slot_replacement() {
echo '### show {slot} {0%} {0#}'
parallel -k --plus echo '{slot}=$PARALLEL_JOBSLOT={%}' ::: A B C
parallel -j15 -k --plus 'echo Seq: {000#} {00#} {0#}' ::: {1..100} | sort
parallel -j15 -k --plus 'sleep 0.0{}; echo Slot: {000%} {00%} {0%}' ::: {1..100} |
sort -u
}
par_recend_recstart_hash() {
echo "### bug #59843: --regexp --recstart '#' fails"
(echo '#rec1'; echo 'bar'; echo '#rec2') |
parallel -k --regexp --pipe -N1 --recstart '#' wc
parallel -k --regexp --pipe -N1 --recstart '#' wc
(echo ' rec1'; echo 'bar'; echo ' rec2') |
parallel -k --regexp --pipe -N1 --recstart ' ' wc
parallel -k --regexp --pipe -N1 --recstart ' ' wc
(echo 'rec2'; echo 'bar#';echo 'rec2' ) |
parallel -k --regexp --pipe -N1 --recend '#' wc
parallel -k --regexp --pipe -N1 --recend '#' wc
(echo 'rec2'; echo 'bar ';echo 'rec2' ) |
parallel -k --regexp --pipe -N1 --recend ' ' wc
parallel -k --regexp --pipe -N1 --recend ' ' wc
}
par_sqlandworker_uninstalled_dbd() {
@ -222,7 +230,9 @@ par_test_job_number() {
par_seqreplace_long_line() {
echo '### Test --seqreplace and line too long'
seq 1 1000 | stdout parallel -j1 -s 210 -k --seqreplace I echo IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII \|wc | uniq -c
seq 1 1000 |
stdout parallel -j1 -s 210 -k --seqreplace I echo IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII \|wc |
uniq -c
}
par_bug37042() {
@ -238,11 +248,12 @@ par_bug37042() {
par_header() {
echo "### Test --header with -N"
(echo h1; echo h2; echo 1a;echo 1b; echo 2a;echo 2b; echo 3a)| parallel -j1 --pipe -N2 -k --header '.*\n.*\n' echo Start\;cat \; echo Stop
(echo h1; echo h2; echo 1a;echo 1b; echo 2a;echo 2b; echo 3a) |
parallel -j1 --pipe -N2 -k --header '.*\n.*\n' echo Start\;cat \; echo Stop
echo "### Test --header with --block 1k"
(echo h1; echo h2; perl -e '$a="x"x110;for(1..22){print $_,$a,"\n"}') |
parallel -j1 --pipe -k --block 1k --header '.*\n.*\n' echo Start\;cat \; echo Stop
parallel -j1 --pipe -k --block 1k --header '.*\n.*\n' echo Start\;cat \; echo Stop
echo "### Test --header with multiple :::"
parallel --header : echo {a} {b} {1} {2} ::: b b1 ::: a a2

View file

@ -251,9 +251,9 @@ par_memory_leak() {
}
export -f a_run
echo "### Test for memory leaks"
echo "Of 100 runs of 1 job at least one should be bigger than a 3000 job run"
echo "Of 300 runs of 1 job at least one should be bigger than a 3000 job run"
. `which env_parallel.bash`
parset small_max,big ::: 'seq 100 | parallel a_run 1 | jq -s max' 'a_run 3000'
parset small_max,big ::: 'seq 300 | parallel a_run 1 | jq -s max' 'a_run 3000'
if [ $small_max -lt $big ] ; then
echo "Bad: Memleak likely."
else

View file

@ -134,6 +134,12 @@ par_tee_with_premature_close() {
fi
}
par_tee_too_many_args() {
echo '### Fail if there are more arguments than --jobs'
seq 11 | stdout parallel -k --tag --pipe -j4 --tee grep {} ::: {1..4}
seq 11 | stdout parallel -k --tag --pipe -j0 --tee grep {} ::: {1..10000}
}
par_maxargs() {
echo '### Test -n and --max-args: Max number of args per line (only with -X and -m)'

View file

@ -69,7 +69,7 @@ par_export_functions_csh() {
par_progress_text_max_jobs_to_run() {
echo '### bug #49404: "Max jobs to run" does not equal the number of jobs specified when using GNU Parallel on remote server?'
echo should give 10 running jobs
stdout parallel -S 16/lo --progress true ::: {1..10} | grep /.10
stdout parallel -S 16/lo --progress true ::: {1..10} | grep /.16
}
par_hgrp_rpl() {

View file

@ -27,8 +27,8 @@ par_path_remote_bash() {
export A="`seq 1000`"
PATH=$PATH:/tmp
. /usr/local/bin/env_parallel.bash
# --filter to see if $PATH with parallel is transferred
env_parallel --filter --env A,PATH -Slo echo '$PATH' ::: OK
# --filter-hosts to see if $PATH with parallel is transferred
env_parallel --filter-hosts --env A,PATH -Slo echo '$PATH' ::: OK
_EOS
stdout ssh nopathbash@lo -T |
perl -ne '/StArT/..0 and print' |
@ -55,8 +55,8 @@ par_path_remote_csh() {
if ("`alias env_parallel`" == '') then
source `which env_parallel.csh`
endif
# --filter to see if $PATH with parallel is transferred
env_parallel --filter --env A,PATH -Slo echo '$PATH' ::: OK
# --filter-hosts to see if $PATH with parallel is transferred
env_parallel --filter-hosts --env A,PATH -Slo echo '$PATH' ::: OK
# Sleep needed to avoid stderr/stdout mixing
sleep 1
echo Done

View file

@ -6,9 +6,9 @@ SERVER3=parallel-server3
SSHUSER1=vagrant
SSHUSER2=vagrant
SSHUSER3=vagrant
SSHLOGIN1=$SSHUSER1@$SERVER1
SSHLOGIN2=$SSHUSER2@$SERVER2
SSHLOGIN3=$SSHUSER3@$SERVER3
export SSHLOGIN1=$SSHUSER1@$SERVER1
export SSHLOGIN2=$SSHUSER2@$SERVER2
export SSHLOGIN3=$SSHUSER3@$SERVER3
#SERVER1=parallel-server1
#SERVER2=lo
@ -16,31 +16,57 @@ SSHLOGIN3=$SSHUSER3@$SERVER3
#SSHLOGIN2=parallel@lo
#SSHLOGIN3=parallel@parallel-server2
echo '### Test use special ssh'
echo 'TODO test ssh with > 9 simultaneous'
echo 'ssh "$@"; echo "$@" >>/tmp/myssh1-run' >/tmp/myssh1
echo 'ssh "$@"; echo "$@" >>/tmp/myssh2-run' >/tmp/myssh2
chmod 755 /tmp/myssh1 /tmp/myssh2
seq 1 100 | parallel --sshdelay 0.03 --retries 10 --sshlogin "/tmp/myssh1 $SSHLOGIN1,/tmp/myssh2 $SSHLOGIN2" -k echo
par_special_ssh() {
echo '### Test use special ssh'
echo 'TODO test ssh with > 9 simultaneous'
echo 'ssh "$@"; echo "$@" >>/tmp/myssh1-run' >/tmp/myssh1
echo 'ssh "$@"; echo "$@" >>/tmp/myssh2-run' >/tmp/myssh2
chmod 755 /tmp/myssh1 /tmp/myssh2
seq 1 100 | parallel --sshdelay 0.03 --retries 10 --sshlogin "/tmp/myssh1 $SSHLOGIN1,/tmp/myssh2 $SSHLOGIN2" -k echo
}
par_filter_hosts_different_errors() {
echo '### --filter-hosts - OK, non-such-user, connection refused, wrong host'
stdout parallel --nonall --filter-hosts -S localhost,NoUser@localhost,154.54.72.206,"ssh 5.5.5.5" hostname |
grep -v 'parallel: Warning: Removed'
}
par_timeout_retries() {
echo '### test --timeout --retries'
parallel -j0 --timeout 5 --retries 3 -k ssh {} echo {} ::: 192.168.1.197 8.8.8.8 $SSHLOGIN1 $SSHLOGIN2 $SSHLOGIN3
}
par_filter_hosts_no_ssh_nxserver() {
echo '### test --filter-hosts with server w/o ssh, non-existing server'
stdout parallel -S 192.168.1.197,8.8.8.8,$SSHLOGIN1,$SSHLOGIN2,$SSHLOGIN3 --filter-hosts --nonall -k --tag echo |
grep -v 'parallel: Warning: Removed'
}
par_controlmaster_is_faster() {
echo '### bug #41964: --controlmaster not seems to reuse OpenSSH connections to the same host'
(parallel -S $SSHLOGIN1 true ::: {1..20};
echo No --controlmaster - finish last) &
(parallel -M -S $SSHLOGIN1 true ::: {1..20};
echo With --controlmaster - finish first) &
wait
}
par_workdir_in_HOME() {
echo '### test --workdir . in $HOME'
cd && mkdir -p parallel-test && cd parallel-test &&
echo OK > testfile && parallel --workdir . --transfer -S $SSHLOGIN1 cat {} ::: testfile
}
export -f $(compgen -A function | grep par_)
compgen -A function | grep par_ | LC_ALL=C sort |
parallel --timeout 1000% -j6 --tag -k --joblog /tmp/jl-`basename $0` '{} 2>&1' |
perl -pe 's:/usr/bin:/bin:g'
cat <<'EOF' | sed -e s/\$SERVER1/$SERVER1/\;s/\$SERVER2/$SERVER2/\;s/\$SSHLOGIN1/$SSHLOGIN1/\;s/\$SSHLOGIN2/$SSHLOGIN2/\;s/\$SSHLOGIN3/$SSHLOGIN3/ | parallel -vj3 -k -L1 -r
echo '### test --timeout --retries'
parallel -j0 --timeout 5 --retries 3 -k ssh {} echo {} ::: 192.168.1.197 8.8.8.8 $SSHLOGIN1 $SSHLOGIN2 $SSHLOGIN3
echo '### test --filter-hosts with server w/o ssh, non-existing server'
parallel -S 192.168.1.197,8.8.8.8,$SSHLOGIN1,$SSHLOGIN2,$SSHLOGIN3 --filter-hosts --nonall -k --tag echo
echo '### bug #41964: --controlmaster not seems to reuse OpenSSH connections to the same host'
(parallel -S $SSHLOGIN1 true ::: {1..20}; echo No --controlmaster - finish last) &
(parallel -M -S $SSHLOGIN1 true ::: {1..20}; echo With --controlmaster - finish first) &
wait
echo '### --filter-hosts - OK, non-such-user, connection refused, wrong host'
parallel --nonall --filter-hosts -S localhost,NoUser@localhost,154.54.72.206,"ssh 5.5.5.5" hostname
echo '### test --workdir . in $HOME'
cd && mkdir -p parallel-test && cd parallel-test &&
echo OK > testfile && parallel --workdir . --transfer -S $SSHLOGIN1 cat {} ::: testfile
echo '### TODO: test --filter-hosts proxied through the one host'

View file

@ -68,6 +68,25 @@ par_compress_stdout_stderr ### Test compress - stderr
par_compress_stdout_stderr ls: cannot access '/OK-if-missing-file': No such file or directory
par_cr_newline_header ### --header : should set named replacement string if input line ends in \r\n
par_cr_newline_header bar
par_crnl ### Give a warning if input is DOS-ascii
par_crnl parallel: Warning: The first three values end in CR-NL. Consider using -d "\r\n"
par_crnl b par_crnl a
par_crnl c par_crnl a
par_crnl d par_crnl a
par_crnl e par_crnl a
par_crnl f par_crnl a
par_crnl This should give no warning because -d is set
par_crnl ba
par_crnl ca
par_crnl da
par_crnl ea
par_crnl fa
par_crnl This should give no warning because line2 has newline only
par_crnl b par_crnl a
par_crnl ca
par_crnl d par_crnl a
par_crnl e par_crnl a
par_crnl f par_crnl a
par_csv col1"x3"-new
par_csv line col2-new2
par_csv line col3-col 4
@ -692,10 +711,6 @@ par_pipepart_recend_recstart 9
par_pipepart_recend_recstart 10
par_pipepart_roundrobin ### bug #45769: --round-robin --pipepart gives wrong results
par_pipepart_roundrobin 2
par_plus_slot_replacement ### show {slot}
par_plus_slot_replacement 1=1=1
par_plus_slot_replacement 2=2=2
par_plus_slot_replacement 3=3=3
par_profile ### Test -J profile, -J /dir/profile, -J ./profile
par_profile local local
par_profile abs abs
@ -936,14 +951,6 @@ par_tee 4 -l 122853
par_tee 4 -c 815290
par_tee 5 -l 122853
par_tee 5 -c 815290
par_tee_too_many_args ### Fail if there are more arguments than --jobs
par_tee_too_many_args 1 1
par_tee_too_many_args 1 10
par_tee_too_many_args 1 11
par_tee_too_many_args 2 2
par_tee_too_many_args 3 3
par_tee_too_many_args 4 4
par_tee_too_many_args parallel: Error: --tee requires --jobs to be higher. Try --jobs 0.
par_test_L_context_replace ### Test -N context replace
par_test_L_context_replace a1b a2b a3b a4b a5b a6b a7b a8b a9b a10b
par_test_L_context_replace a11b a12b a13b a14b a15b a16b a17b a18b a19b
@ -1014,6 +1021,150 @@ par_testquote yash "#&/
par_testquote yash ()*=?'
par_testquote zsh "#&/
par_testquote zsh ()*=?'
par_tmpl 1.t1 /tmp/tmpl-1.1-111.111.t2
par_tmpl Template1
par_tmpl Xval: 1.1
par_tmpl Yval: 111.111
par_tmpl FixedValue: 9
par_tmpl Seq: 1
par_tmpl Slot: X
par_tmpl # x with 2 decimals
par_tmpl DecimalX: 1.10
par_tmpl TenX: 11
par_tmpl RandomVal: 0.RANDOM_NUMBER
par_tmpl
par_tmpl Template2
par_tmpl X,Y: 1.1,111.111
par_tmpl val1,val2: 1.1,111.111
par_tmpl
par_tmpl 2.t1 /tmp/tmpl-1.1-222.222.t2
par_tmpl Template1
par_tmpl Xval: 1.1
par_tmpl Yval: 222.222
par_tmpl FixedValue: 9
par_tmpl Seq: 2
par_tmpl Slot: X
par_tmpl # x with 2 decimals
par_tmpl DecimalX: 1.10
par_tmpl TenX: 11
par_tmpl RandomVal: 0.RANDOM_NUMBER
par_tmpl
par_tmpl Template2
par_tmpl X,Y: 1.1,222.222
par_tmpl val1,val2: 1.1,222.222
par_tmpl
par_tmpl 3.t1 /tmp/tmpl-1.1-333.333.t2
par_tmpl Template1
par_tmpl Xval: 1.1
par_tmpl Yval: 333.333
par_tmpl FixedValue: 9
par_tmpl Seq: 3
par_tmpl Slot: X
par_tmpl # x with 2 decimals
par_tmpl DecimalX: 1.10
par_tmpl TenX: 11
par_tmpl RandomVal: 0.RANDOM_NUMBER
par_tmpl
par_tmpl Template2
par_tmpl X,Y: 1.1,333.333
par_tmpl val1,val2: 1.1,333.333
par_tmpl
par_tmpl 4.t1 /tmp/tmpl-2.22-111.111.t2
par_tmpl Template1
par_tmpl Xval: 2.22
par_tmpl Yval: 111.111
par_tmpl FixedValue: 9
par_tmpl Seq: 4
par_tmpl Slot: X
par_tmpl # x with 2 decimals
par_tmpl DecimalX: 2.22
par_tmpl TenX: 22.2
par_tmpl RandomVal: 0.RANDOM_NUMBER
par_tmpl
par_tmpl Template2
par_tmpl X,Y: 2.22,111.111
par_tmpl val1,val2: 2.22,111.111
par_tmpl
par_tmpl 5.t1 /tmp/tmpl-2.22-222.222.t2
par_tmpl Template1
par_tmpl Xval: 2.22
par_tmpl Yval: 222.222
par_tmpl FixedValue: 9
par_tmpl Seq: 5
par_tmpl Slot: X
par_tmpl # x with 2 decimals
par_tmpl DecimalX: 2.22
par_tmpl TenX: 22.2
par_tmpl RandomVal: 0.RANDOM_NUMBER
par_tmpl
par_tmpl Template2
par_tmpl X,Y: 2.22,222.222
par_tmpl val1,val2: 2.22,222.222
par_tmpl
par_tmpl 6.t1 /tmp/tmpl-2.22-333.333.t2
par_tmpl Template1
par_tmpl Xval: 2.22
par_tmpl Yval: 333.333
par_tmpl FixedValue: 9
par_tmpl Seq: 6
par_tmpl Slot: X
par_tmpl # x with 2 decimals
par_tmpl DecimalX: 2.22
par_tmpl TenX: 22.2
par_tmpl RandomVal: 0.RANDOM_NUMBER
par_tmpl
par_tmpl Template2
par_tmpl X,Y: 2.22,333.333
par_tmpl val1,val2: 2.22,333.333
par_tmpl
par_tmpl 7.t1 /tmp/tmpl-3.333-111.111.t2
par_tmpl Template1
par_tmpl Xval: 3.333
par_tmpl Yval: 111.111
par_tmpl FixedValue: 9
par_tmpl Seq: 7
par_tmpl Slot: X
par_tmpl # x with 2 decimals
par_tmpl DecimalX: 3.33
par_tmpl TenX: 33.33
par_tmpl RandomVal: 0.RANDOM_NUMBER
par_tmpl
par_tmpl Template2
par_tmpl X,Y: 3.333,111.111
par_tmpl val1,val2: 3.333,111.111
par_tmpl
par_tmpl 8.t1 /tmp/tmpl-3.333-222.222.t2
par_tmpl Template1
par_tmpl Xval: 3.333
par_tmpl Yval: 222.222
par_tmpl FixedValue: 9
par_tmpl Seq: 8
par_tmpl Slot: X
par_tmpl # x with 2 decimals
par_tmpl DecimalX: 3.33
par_tmpl TenX: 33.33
par_tmpl RandomVal: 0.RANDOM_NUMBER
par_tmpl
par_tmpl Template2
par_tmpl X,Y: 3.333,222.222
par_tmpl val1,val2: 3.333,222.222
par_tmpl
par_tmpl 9.t1 /tmp/tmpl-3.333-333.333.t2
par_tmpl Template1
par_tmpl Xval: 3.333
par_tmpl Yval: 333.333
par_tmpl FixedValue: 9
par_tmpl Seq: 9
par_tmpl Slot: X
par_tmpl # x with 2 decimals
par_tmpl DecimalX: 3.33
par_tmpl TenX: 33.33
par_tmpl RandomVal: 0.RANDOM_NUMBER
par_tmpl
par_tmpl Template2
par_tmpl X,Y: 3.333,333.333
par_tmpl val1,val2: 3.333,333.333
par_tmpl
par_tmux_command_not_found ### PARALLEL_TMUX not found
par_tmux_command_not_found parallel: Error: not-existing not found in $PATH.
par_total_from_joblog bug #47086: [PATCH] Initialize total_completed from joblog

View file

@ -20,7 +20,7 @@ par__pipe_tee bug #45479: --pipe/--pipepart --tee
par__pipe_tee --pipe --tee
par__pipe_tee 314572800
par__pipepart_spawn ### bug #46214: Using --pipepart doesnt spawn multiple jobs in version 20150922
par__pipepart_spawn 1:local / 2+ / 2+2+2+
par__pipepart_spawn 1:local / 2+ / 2+
par__pipepart_tee bug #45479: --pipe/--pipepart --tee
par__pipepart_tee --pipepart --tee
par__pipepart_tee 314572800
@ -300,6 +300,10 @@ par_k parallel: Warning: Try running 'parallel -jX -N X --pipe parallel -jX'
par_k parallel: Warning: or increasing 'ulimit -n' (try: ulimit -n `ulimit -Hn`)
par_k parallel: Warning: or increasing 'nofile' in /etc/security/limits.conf
par_k parallel: Warning: or increasing /proc/sys/fs/file-max
par_k parallel: Warning: Try running 'parallel -jX -N XXX --pipe parallel -jX'
par_k parallel: Warning: or increasing 'ulimit -n' (try: ulimit -n `ulimit -Hn`)
par_k parallel: Warning: or increasing 'nofile' in /etc/security/limits.conf
par_k parallel: Warning: or increasing /proc/sys/fs/file-max
par_k begin
par_k 1
par_k 2


@@ -270,6 +270,10 @@ par_open_files_blocks parallel: Warning: Try running 'parallel -j0 -N 2 --pipe p
par_open_files_blocks parallel: Warning: or increasing 'ulimit -n' (try: ulimit -n `ulimit -Hn`)
par_open_files_blocks parallel: Warning: or increasing 'nofile' in /etc/security/limits.conf
par_open_files_blocks parallel: Warning: or increasing /proc/sys/fs/file-max
par_open_files_blocks parallel: Warning: Try running 'parallel -j0 -N 100 --pipe parallel -j0'
par_open_files_blocks parallel: Warning: or increasing 'ulimit -n' (try: ulimit -n `ulimit -Hn`)
par_open_files_blocks parallel: Warning: or increasing 'nofile' in /etc/security/limits.conf
par_open_files_blocks parallel: Warning: or increasing /proc/sys/fs/file-max
par_open_files_blocks 1 of 21
par_open_files_blocks 2 of 21
par_open_files_blocks 3 of 21
@@ -375,6 +379,125 @@ par_pipepart_block 17-20
par_pipepart_block 18-20
par_pipepart_block 19-20
par_pipepart_block 20-20
par_plus_slot_replacement ### show {slot} {0%} {0#}
par_plus_slot_replacement 1=1=1
par_plus_slot_replacement 2=2=2
par_plus_slot_replacement 3=3=3
par_plus_slot_replacement Seq: 001 01 1
par_plus_slot_replacement Seq: 002 02 2
par_plus_slot_replacement Seq: 003 03 3
par_plus_slot_replacement Seq: 004 04 4
par_plus_slot_replacement Seq: 005 05 5
par_plus_slot_replacement Seq: 006 06 6
par_plus_slot_replacement Seq: 007 07 7
par_plus_slot_replacement Seq: 008 08 8
par_plus_slot_replacement Seq: 009 09 9
par_plus_slot_replacement Seq: 010 10 10
par_plus_slot_replacement Seq: 011 11 11
par_plus_slot_replacement Seq: 012 12 12
par_plus_slot_replacement Seq: 013 13 13
par_plus_slot_replacement Seq: 014 14 14
par_plus_slot_replacement Seq: 015 15 15
par_plus_slot_replacement Seq: 016 16 16
par_plus_slot_replacement Seq: 017 17 17
par_plus_slot_replacement Seq: 018 18 18
par_plus_slot_replacement Seq: 019 19 19
par_plus_slot_replacement Seq: 020 20 20
par_plus_slot_replacement Seq: 021 21 21
par_plus_slot_replacement Seq: 022 22 22
par_plus_slot_replacement Seq: 023 23 23
par_plus_slot_replacement Seq: 024 24 24
par_plus_slot_replacement Seq: 025 25 25
par_plus_slot_replacement Seq: 026 26 26
par_plus_slot_replacement Seq: 027 27 27
par_plus_slot_replacement Seq: 028 28 28
par_plus_slot_replacement Seq: 029 29 29
par_plus_slot_replacement Seq: 030 30 30
par_plus_slot_replacement Seq: 031 31 31
par_plus_slot_replacement Seq: 032 32 32
par_plus_slot_replacement Seq: 033 33 33
par_plus_slot_replacement Seq: 034 34 34
par_plus_slot_replacement Seq: 035 35 35
par_plus_slot_replacement Seq: 036 36 36
par_plus_slot_replacement Seq: 037 37 37
par_plus_slot_replacement Seq: 038 38 38
par_plus_slot_replacement Seq: 039 39 39
par_plus_slot_replacement Seq: 040 40 40
par_plus_slot_replacement Seq: 041 41 41
par_plus_slot_replacement Seq: 042 42 42
par_plus_slot_replacement Seq: 043 43 43
par_plus_slot_replacement Seq: 044 44 44
par_plus_slot_replacement Seq: 045 45 45
par_plus_slot_replacement Seq: 046 46 46
par_plus_slot_replacement Seq: 047 47 47
par_plus_slot_replacement Seq: 048 48 48
par_plus_slot_replacement Seq: 049 49 49
par_plus_slot_replacement Seq: 050 50 50
par_plus_slot_replacement Seq: 051 51 51
par_plus_slot_replacement Seq: 052 52 52
par_plus_slot_replacement Seq: 053 53 53
par_plus_slot_replacement Seq: 054 54 54
par_plus_slot_replacement Seq: 055 55 55
par_plus_slot_replacement Seq: 056 56 56
par_plus_slot_replacement Seq: 057 57 57
par_plus_slot_replacement Seq: 058 58 58
par_plus_slot_replacement Seq: 059 59 59
par_plus_slot_replacement Seq: 060 60 60
par_plus_slot_replacement Seq: 061 61 61
par_plus_slot_replacement Seq: 062 62 62
par_plus_slot_replacement Seq: 063 63 63
par_plus_slot_replacement Seq: 064 64 64
par_plus_slot_replacement Seq: 065 65 65
par_plus_slot_replacement Seq: 066 66 66
par_plus_slot_replacement Seq: 067 67 67
par_plus_slot_replacement Seq: 068 68 68
par_plus_slot_replacement Seq: 069 69 69
par_plus_slot_replacement Seq: 070 70 70
par_plus_slot_replacement Seq: 071 71 71
par_plus_slot_replacement Seq: 072 72 72
par_plus_slot_replacement Seq: 073 73 73
par_plus_slot_replacement Seq: 074 74 74
par_plus_slot_replacement Seq: 075 75 75
par_plus_slot_replacement Seq: 076 76 76
par_plus_slot_replacement Seq: 077 77 77
par_plus_slot_replacement Seq: 078 78 78
par_plus_slot_replacement Seq: 079 79 79
par_plus_slot_replacement Seq: 080 80 80
par_plus_slot_replacement Seq: 081 81 81
par_plus_slot_replacement Seq: 082 82 82
par_plus_slot_replacement Seq: 083 83 83
par_plus_slot_replacement Seq: 084 84 84
par_plus_slot_replacement Seq: 085 85 85
par_plus_slot_replacement Seq: 086 86 86
par_plus_slot_replacement Seq: 087 87 87
par_plus_slot_replacement Seq: 088 88 88
par_plus_slot_replacement Seq: 089 89 89
par_plus_slot_replacement Seq: 090 90 90
par_plus_slot_replacement Seq: 091 91 91
par_plus_slot_replacement Seq: 092 92 92
par_plus_slot_replacement Seq: 093 93 93
par_plus_slot_replacement Seq: 094 94 94
par_plus_slot_replacement Seq: 095 95 95
par_plus_slot_replacement Seq: 096 96 96
par_plus_slot_replacement Seq: 097 97 97
par_plus_slot_replacement Seq: 098 98 98
par_plus_slot_replacement Seq: 099 99 99
par_plus_slot_replacement Seq: 100 100 100
par_plus_slot_replacement Slot: 001 01 1
par_plus_slot_replacement Slot: 002 02 2
par_plus_slot_replacement Slot: 003 03 3
par_plus_slot_replacement Slot: 004 04 4
par_plus_slot_replacement Slot: 005 05 5
par_plus_slot_replacement Slot: 006 06 6
par_plus_slot_replacement Slot: 007 07 7
par_plus_slot_replacement Slot: 008 08 8
par_plus_slot_replacement Slot: 009 09 9
par_plus_slot_replacement Slot: 010 10 10
par_plus_slot_replacement Slot: 011 11 11
par_plus_slot_replacement Slot: 012 12 12
par_plus_slot_replacement Slot: 013 13 13
par_plus_slot_replacement Slot: 014 14 14
par_plus_slot_replacement Slot: 015 15 15
par_profiles_with_space ### bug #42902: profiles containing arguments with space
par_profiles_with_space /bin/bash=/bin/bash
par_profiles_with_space echo '/bin/bash=/bin/bash'


@@ -223,7 +223,8 @@ par_test_build_and_install rm "/tmp/parallel-install/bin"/sem || true
par_test_build_and_install ln -s parallel "/tmp/parallel-install/bin"/sem
par_test_build_and_install make[0]: Leaving directory '/tmp/parallel-00000000/src'
par_test_build_and_install /bin/mkdir -p '/tmp/parallel-install/share/doc/parallel'
par_test_build_and_install /bin/install -c -m 644 parallel.html env_parallel.html sem.html sql.html niceload.html parallel_tutorial.html parallel_book.html parallel_design.html parallel_alternatives.html parcat.html parset.html parsort.html parallel.texi env_parallel.texi sem.texi sql.texi niceload.texi parallel_tutorial.texi parallel_book.texi parallel_design.texi parallel_alternatives.texi parcat.texi parset.texi parsort.texi parallel.pdf env_parallel.pdf sem.pdf sql.pdf niceload.pdf parallel_tutorial.pdf parallel_book.pdf parallel_design.pdf parallel_alternatives.pdf parcat.pdf parset.pdf parsort.pdf parallel_cheat_bw.pdf '/tmp/parallel-install/share/doc/parallel'
par_test_build_and_install /bin/install -c -m 644 parallel.html env_parallel.html sem.html sql.html niceload.html parallel_tutorial.html parallel_book.html parallel_design.html parallel_alternatives.html parcat.html parset.html parsort.html parallel.texi env_parallel.texi sem.texi sql.texi niceload.texi parallel_tutorial.texi parallel_book.texi parallel_design.texi parallel_alternatives.texi parcat.texi parset.texi parsort.texi parallel.rst env_parallel.rst sem.rst sql.rst niceload.rst parallel_tutorial.rst parallel_book.rst parallel_design.rst parallel_alternatives.rst parcat.rst parset.rst parsort.rst parallel.pdf env_parallel.pdf sem.pdf sql.pdf '/tmp/parallel-install/share/doc/parallel'
par_test_build_and_install /bin/install -c -m 644 niceload.pdf parallel_tutorial.pdf parallel_book.pdf parallel_design.pdf parallel_alternatives.pdf parcat.pdf parset.pdf parsort.pdf parallel_cheat_bw.pdf '/tmp/parallel-install/share/doc/parallel'
par_test_build_and_install /bin/mkdir -p '/tmp/parallel-install/share/man/man1'
par_test_build_and_install /bin/install -c -m 644 parallel.1 env_parallel.1 sem.1 sql.1 niceload.1 parcat.1 parset.1 parsort.1 '/tmp/parallel-install/share/man/man1'
par_test_build_and_install /bin/mkdir -p '/tmp/parallel-install/share/man/man7'
@@ -637,7 +638,8 @@ par_test_build_and_install || echo "Warning: pod2pdf not found. Using old parset
par_test_build_and_install /bin/bash: pod2pdf: command not found
par_test_build_and_install Warning: pod2pdf not found. Using old parset.pdf
par_test_build_and_install /bin/mkdir -p '/tmp/parallel-install/share/doc/parallel'
par_test_build_and_install /bin/install -c -m 644 parallel.html env_parallel.html sem.html sql.html niceload.html parallel_tutorial.html parallel_book.html parallel_design.html parallel_alternatives.html parcat.html parset.html parsort.html parallel.texi env_parallel.texi sem.texi sql.texi niceload.texi parallel_tutorial.texi parallel_book.texi parallel_design.texi parallel_alternatives.texi parcat.texi parset.texi parsort.texi parallel.pdf env_parallel.pdf sem.pdf sql.pdf niceload.pdf parallel_tutorial.pdf parallel_book.pdf parallel_design.pdf parallel_alternatives.pdf parcat.pdf parset.pdf parsort.pdf parallel_cheat_bw.pdf '/tmp/parallel-install/share/doc/parallel'
par_test_build_and_install /bin/install -c -m 644 parallel.html env_parallel.html sem.html sql.html niceload.html parallel_tutorial.html parallel_book.html parallel_design.html parallel_alternatives.html parcat.html parset.html parsort.html parallel.texi env_parallel.texi sem.texi sql.texi niceload.texi parallel_tutorial.texi parallel_book.texi parallel_design.texi parallel_alternatives.texi parcat.texi parset.texi parsort.texi parallel.rst env_parallel.rst sem.rst sql.rst niceload.rst parallel_tutorial.rst parallel_book.rst parallel_design.rst parallel_alternatives.rst parcat.rst parset.rst parsort.rst parallel.pdf env_parallel.pdf sem.pdf sql.pdf '/tmp/parallel-install/share/doc/parallel'
par_test_build_and_install /bin/install -c -m 644 niceload.pdf parallel_tutorial.pdf parallel_book.pdf parallel_design.pdf parallel_alternatives.pdf parcat.pdf parset.pdf parsort.pdf parallel_cheat_bw.pdf '/tmp/parallel-install/share/doc/parallel'
par_test_build_and_install pod2man --release='00000000' --center='parallel' \
par_test_build_and_install --section=1 "."/parallel.pod > "."/parallel.1n \
par_test_build_and_install && mv "."/parallel.1n "."/parallel.1 \


@@ -1544,7 +1544,7 @@ par_memfree Free mem: 1k
par_memfree parallel: Warning: This job was killed because it timed out:
par_memfree parallel: Warning: parallel --memfree 1t echo Free mem: ::: 1t
par_memory_leak ### Test for memory leaks
par_memory_leak Of 100 runs of 1 job at least one should be bigger than a 3000 job run
par_memory_leak Of 300 runs of 1 job at least one should be bigger than a 3000 job run
par_memory_leak Good: No memleak detected.
par_no_newline_compress bug #41613: --compress --line-buffer - no newline
par_no_newline_compress tagstring=--tagstring {#} compress=--compress


@@ -119,7 +119,7 @@ par_eta 16
par_eta ### Test of --eta with no jobs
par_eta
par_eta Computers / CPU cores / Max jobs to run
par_eta 1:local / 8 / 1
par_eta 1:local / 8 / 8
par_eta par_eta ETA: 0s Left: 0 AVG: 0.00s 0
par_exitval_signal ### Test --joblog with exitval and Test --joblog with signal -- timing dependent
par_exitval_signal exitval=128+6 OK
@@ -213,7 +213,7 @@ par_progress 16
par_progress ### Test of --progress with no jobs
par_progress
par_progress Computers / CPU cores / Max jobs to run
par_progress 1:local / 8 / 1
par_progress 1:local / 8 / 8
par_progress par_progress 0
par_replacement_slashslash ### Test {//}
par_replacement_slashslash . a
@@ -291,6 +291,24 @@ par_sqlworker_hostname <hostname>
par_sqlworker_hostname <hostname>
par_sshdelay ### test --sshdelay
par_sshdelay OK
par_tee_too_many_args ### Fail if there are more arguments than --jobs
par_tee_too_many_args 1 1
par_tee_too_many_args 1 10
par_tee_too_many_args 1 11
par_tee_too_many_args 2 2
par_tee_too_many_args 3 3
par_tee_too_many_args 4 4
par_tee_too_many_args parallel: Warning: Only enough file handles to run 251 jobs in parallel.
par_tee_too_many_args parallel: Warning: Try running 'parallel -j0 -N 251 --pipe parallel -j0'
par_tee_too_many_args parallel: Warning: or increasing 'ulimit -n' (try: ulimit -n `ulimit -Hn`)
par_tee_too_many_args parallel: Warning: or increasing 'nofile' in /etc/security/limits.conf
par_tee_too_many_args parallel: Warning: or increasing /proc/sys/fs/file-max
par_tee_too_many_args parallel: Warning: No more file handles.
par_tee_too_many_args parallel: Warning: Try running 'parallel -j0 -N 100 --pipe parallel -j0'
par_tee_too_many_args parallel: Warning: or increasing 'ulimit -n' (try: ulimit -n `ulimit -Hn`)
par_tee_too_many_args parallel: Warning: or increasing 'nofile' in /etc/security/limits.conf
par_tee_too_many_args parallel: Warning: or increasing /proc/sys/fs/file-max
par_tee_too_many_args parallel: Error: --tee requires --jobs to be higher. Try --jobs 0.
par_tee_with_premature_close --tee --pipe should send all data to all commands
par_tee_with_premature_close even if a command closes stdin before reading everything
par_tee_with_premature_close tee with --output-error=warn-nopipe support


@@ -11,7 +11,7 @@ par_nonall_should_not_block ### bug #47608: parallel --nonall -S lo 'echo ::: '
par_nonall_should_not_block :::
par_progress_text_max_jobs_to_run ### bug #49404: "Max jobs to run" does not equal the number of jobs specified when using GNU Parallel on remote server?
par_progress_text_max_jobs_to_run should give 10 running jobs
par_progress_text_max_jobs_to_run 1:lo / 16 / 10
par_progress_text_max_jobs_to_run 1:lo / 16 / 16
par_quoting_for_onall ### bug #35427: quoting of {2} broken for --onall
par_quoting_for_onall /bin/ls
par_remote_function_nice ### functions and --nice


@@ -1,129 +1,126 @@
### Test use special ssh
TODO test ssh with > 9 simultaneous
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
echo '### test --timeout --retries'
### test --timeout --retries
parallel -j0 --timeout 5 --retries 3 -k ssh {} echo {} ::: 192.168.1.197 8.8.8.8 vagrant@parallel-server1 vagrant@parallel-server2 vagrant@parallel-server3
vagrant@parallel-server1
vagrant@parallel-server2
vagrant@parallel-server3
echo '### test --filter-hosts with server w/o ssh, non-existing server'
### test --filter-hosts with server w/o ssh, non-existing server
parallel -S 192.168.1.197,8.8.8.8,vagrant@parallel-server1,vagrant@parallel-server2,vagrant@parallel-server3 --filter-hosts --nonall -k --tag echo
vagrant@parallel-server1
vagrant@parallel-server2
vagrant@parallel-server3
echo '### bug #41964: --controlmaster not seems to reuse OpenSSH connections to the same host'
### bug #41964: --controlmaster not seems to reuse OpenSSH connections to the same host
(parallel -S vagrant@parallel-server1 true ::: {1..20}; echo No --controlmaster - finish last) & (parallel -M -S vagrant@parallel-server1 true ::: {1..20}; echo With --controlmaster - finish first) & wait
With --controlmaster - finish first
No --controlmaster - finish last
echo '### --filter-hosts - OK, non-such-user, connection refused, wrong host'
### --filter-hosts - OK, non-such-user, connection refused, wrong host
parallel --nonall --filter-hosts -S localhost,NoUser@localhost,154.54.72.206,"ssh 5.5.5.5" hostname
aspire
echo '### test --workdir . in $HOME'
### test --workdir . in $HOME
cd && mkdir -p parallel-test && cd parallel-test && echo OK > testfile && parallel --workdir . --transfer -S vagrant@parallel-server1 cat {} ::: testfile
OK
par_controlmaster_is_faster ### bug #41964: --controlmaster not seems to reuse OpenSSH connections to the same host
par_controlmaster_is_faster With --controlmaster - finish first
par_controlmaster_is_faster No --controlmaster - finish last
par_filter_hosts_different_errors ### --filter-hosts - OK, non-such-user, connection refused, wrong host
par_filter_hosts_different_errors aspire
par_filter_hosts_no_ssh_nxserver ### test --filter-hosts with server w/o ssh, non-existing server
par_filter_hosts_no_ssh_nxserver vagrant@parallel-server1
par_filter_hosts_no_ssh_nxserver vagrant@parallel-server2
par_filter_hosts_no_ssh_nxserver vagrant@parallel-server3
par_special_ssh ### Test use special ssh
par_special_ssh TODO test ssh with > 9 simultaneous
par_special_ssh 1
par_special_ssh 2
par_special_ssh 3
par_special_ssh 4
par_special_ssh 5
par_special_ssh 6
par_special_ssh 7
par_special_ssh 8
par_special_ssh 9
par_special_ssh 10
par_special_ssh 11
par_special_ssh 12
par_special_ssh 13
par_special_ssh 14
par_special_ssh 15
par_special_ssh 16
par_special_ssh 17
par_special_ssh 18
par_special_ssh 19
par_special_ssh 20
par_special_ssh 21
par_special_ssh 22
par_special_ssh 23
par_special_ssh 24
par_special_ssh 25
par_special_ssh 26
par_special_ssh 27
par_special_ssh 28
par_special_ssh 29
par_special_ssh 30
par_special_ssh 31
par_special_ssh 32
par_special_ssh 33
par_special_ssh 34
par_special_ssh 35
par_special_ssh 36
par_special_ssh 37
par_special_ssh 38
par_special_ssh 39
par_special_ssh 40
par_special_ssh 41
par_special_ssh 42
par_special_ssh 43
par_special_ssh 44
par_special_ssh 45
par_special_ssh 46
par_special_ssh 47
par_special_ssh 48
par_special_ssh 49
par_special_ssh 50
par_special_ssh 51
par_special_ssh 52
par_special_ssh 53
par_special_ssh 54
par_special_ssh 55
par_special_ssh 56
par_special_ssh 57
par_special_ssh 58
par_special_ssh 59
par_special_ssh 60
par_special_ssh 61
par_special_ssh 62
par_special_ssh 63
par_special_ssh 64
par_special_ssh 65
par_special_ssh 66
par_special_ssh 67
par_special_ssh 68
par_special_ssh 69
par_special_ssh 70
par_special_ssh 71
par_special_ssh 72
par_special_ssh 73
par_special_ssh 74
par_special_ssh 75
par_special_ssh 76
par_special_ssh 77
par_special_ssh 78
par_special_ssh 79
par_special_ssh 80
par_special_ssh 81
par_special_ssh 82
par_special_ssh 83
par_special_ssh 84
par_special_ssh 85
par_special_ssh 86
par_special_ssh 87
par_special_ssh 88
par_special_ssh 89
par_special_ssh 90
par_special_ssh 91
par_special_ssh 92
par_special_ssh 93
par_special_ssh 94
par_special_ssh 95
par_special_ssh 96
par_special_ssh 97
par_special_ssh 98
par_special_ssh 99
par_special_ssh 100
par_timeout_retries ### test --timeout --retries
par_timeout_retries ssh: connect to host 192.168.1.197 port 22: No route to host par_timeout_retries
par_timeout_retries parallel: Warning: This job was killed because it timed out:
par_timeout_retries parallel: Warning: ssh 8.8.8.8 echo 8.8.8.8
par_timeout_retries parallel: Warning: This job was killed because it timed out:
par_timeout_retries parallel: Warning: ssh 8.8.8.8 echo 8.8.8.8
par_timeout_retries parallel: Warning: This job was killed because it timed out:
par_timeout_retries parallel: Warning: ssh 8.8.8.8 echo 8.8.8.8
par_timeout_retries vagrant@parallel-server1
par_timeout_retries vagrant@parallel-server2
par_timeout_retries vagrant@parallel-server3
par_workdir_in_HOME ### test --workdir . in $HOME
par_workdir_in_HOME OK
echo '### TODO: test --filter-hosts proxied through the one host'
### TODO: test --filter-hosts proxied through the one host
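Several fixtures above record the warning GNU Parallel prints when it runs out of file handles, along with its suggested remedy (`ulimit -n`). As a side note, that remedy can be sketched as below; this is a hypothetical illustration for the reader, not part of the testsuite:

```shell
#!/bin/bash
# Inspect the soft and hard limits on open file descriptors, then raise
# the soft limit toward the hard limit for this shell only -- the same
# remedy the warnings above suggest: ulimit -n `ulimit -Hn`
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"
ulimit -n "$hard" 2>/dev/null || true   # may fail without privileges
echo "new soft=$(ulimit -Sn)"
```

Raising `nofile` in /etc/security/limits.conf or /proc/sys/fs/file-max, which the warnings also mention, requires root and persists beyond the current shell.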