Mirror of https://git.savannah.gnu.org/git/parallel.git (synced 2024-11-25 23:47:53 +00:00)
Released as 20140422 ('세월호').
This commit is contained in:
parent a57506d628
commit d36cb5a2dd

NEWS (71 lines changed)

@@ -1,3 +1,72 @@
+20140422
+
+* --pipepart is a highly efficient alternative to --pipe if the input
+  is a real file and not a pipe.
+
+* If using --cat or --fifo with --pipe the {} in the command will be
+  replaced with the name of a physical file and a fifo respectively
+  containing the block from --pipe. Useful for commands that cannot
+  read from standard input (stdin).
+
+* --controlmaster has gotten an overhaul and is no longer
+  experimental.
+
+* --env is now copied when determining CPUs on remote system. Useful
+  for copying $PATH if parallel is not in the normal path.
+
+* --results now chops the argument if the argument is longer than the
+  allowed path length.
+
+* Build now survives if pod2* are not installed.
+
+* The git repository now contains tags of releases.
+
+* GNU Parallel was cited in: Proactive System for Digital Forensic
+  Investigation
+  http://dspace.library.uvic.ca:8080/bitstream/handle/1828/5237/Alharbi_Soltan_PhD_2014.pdf
+
+* GNU Parallel was cited in: Beyond MAP estimation with the
+  track-oriented multiple hypothesis tracker
+  http://ieeexplore.ieee.org/xpl/abstractReferences.jsp?tp=&arnumber=6766651&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6766651
+
+* GNU Parallel was cited in: Prokka: rapid prokaryotic genome
+  annotation
+  http://bioinformatics.oxfordjournals.org/content/early/2014/03/18/bioinformatics.btu153.short
+
+* GNU Parallel was used (unfortunately with improper citation) in:
+  Perspectives in magnetic resonance: NMR in the post-FFT era
+  http://www.sciencedirect.com/science/article/pii/S1090780713003054
+
+* GNU Parallel is used in https://github.com/cc2qe/speedseq
+
+* Batch XML validation at the command line.
+  http://horothesia.blogspot.dk/2014/04/batch-xml-validation-at-command-line.html
+
+* freebayes-parallel uses GNU Parallel
+  https://github.com/ekg/freebayes/commit/31ee997984cebe8a196381c3de57e618e34a2281
+
+* Org-mode with Parallel Babel http://draketo.de/english/emacs/parallel-babel#sec-2
+
+* Speeding Up Grep Log Queries with GNU Parallel
+  http://www.cybersecurity.io/speeding-grep-queries-gnu-parallel/
+
+* How to run tbss_2_reg in parallel
+  http://tadpolebrainimaging.blogspot.dk/2014/03/how-to-run-tbss2reg-in-parallel.html
+
+* GNU parallel example: blastn https://asciinema.org/a/8775
+
+* Iterative DNS Brute Forcing
+  http://www.room362.com/blog/2014/02/19/iterative-dns-brute-forcing/
+
+* Ejecutando comandos en paralelo
+  http://jesusmercado.com/guias/ejecutando-comandos-en-paralelo/
+
+* Ejecutando en paralelo en bash (ejemplo con rsync)
+  http://eithel-inside.blogspot.dk/2014/04/ejecutando-en-paralelo-en-bash-ejemplo.html
+
+* Bug fixes and man page updates.
+
+
 20140322
 
 * Offical package for Alpine Linux now exists:
@@ -38,8 +107,6 @@
 
 20140222
 
-New in this release:
-
 * --tollef has been retired.
 
 * --compress has be redesigned due to bugs.
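
The --env change listed above matters when parallel is installed outside the remote host's default PATH. A minimal sketch, with server.example.com and the ~/bin install location assumed purely for illustration:

  # Put ~/bin on the local $PATH, then copy $PATH to the remote side so
  # GNU parallel can be found there when it detects the number of CPUs.
  PATH=$PATH:$HOME/bin
  seq 3 | parallel --env PATH -S server.example.com 'hostname; echo {}'
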

README (12 lines changed)

@@ -40,9 +40,9 @@ document.
 
 Full installation of GNU Parallel is as simple as:
 
-wget http://ftpmirror.gnu.org/parallel/parallel-20140323.tar.bz2
-bzip2 -dc parallel-20140323.tar.bz2 | tar xvf -
-cd parallel-20140323
+wget http://ftpmirror.gnu.org/parallel/parallel-20140422.tar.bz2
+bzip2 -dc parallel-20140422.tar.bz2 | tar xvf -
+cd parallel-20140422
 ./configure && make && make install
 
 
@@ -51,9 +51,9 @@ Full installation of GNU Parallel is as simple as:
 If you are not root you can add ~/bin to your path and install in
 ~/bin and ~/share:
 
-wget http://ftpmirror.gnu.org/parallel/parallel-20140323.tar.bz2
-bzip2 -dc parallel-20140323.tar.bz2 | tar xvf -
-cd parallel-20140323
+wget http://ftpmirror.gnu.org/parallel/parallel-20140422.tar.bz2
+bzip2 -dc parallel-20140422.tar.bz2 | tar xvf -
+cd parallel-20140422
 ./configure --prefix=$HOME && make && make install
 
 Or if your system lacks 'make' you can simply copy src/parallel

configure (vendored, 20 lines changed)

@@ -1,6 +1,6 @@
 #! /bin/sh
 # Guess values for system-dependent variables and create Makefiles.
-# Generated by GNU Autoconf 2.69 for parallel 20140323.
+# Generated by GNU Autoconf 2.69 for parallel 20140422.
 #
 # Report bugs to <bug-parallel@gnu.org>.
 #
@@ -579,8 +579,8 @@ MAKEFLAGS=
 # Identity of this package.
 PACKAGE_NAME='parallel'
 PACKAGE_TARNAME='parallel'
-PACKAGE_VERSION='20140323'
-PACKAGE_STRING='parallel 20140323'
+PACKAGE_VERSION='20140422'
+PACKAGE_STRING='parallel 20140422'
 PACKAGE_BUGREPORT='bug-parallel@gnu.org'
 PACKAGE_URL=''
 
@@ -1194,7 +1194,7 @@ if test "$ac_init_help" = "long"; then
 # Omit some internal or obsolete options to make the list less imposing.
 # This message is too long to be a string in the A/UX 3.1 sh.
 cat <<_ACEOF
-\`configure' configures parallel 20140323 to adapt to many kinds of systems.
+\`configure' configures parallel 20140422 to adapt to many kinds of systems.
 
 Usage: $0 [OPTION]... [VAR=VALUE]...
 
@@ -1260,7 +1260,7 @@ fi
 
 if test -n "$ac_init_help"; then
 case $ac_init_help in
-short | recursive ) echo "Configuration of parallel 20140323:";;
+short | recursive ) echo "Configuration of parallel 20140422:";;
 esac
 cat <<\_ACEOF
 
@@ -1327,7 +1327,7 @@ fi
 test -n "$ac_init_help" && exit $ac_status
 if $ac_init_version; then
 cat <<\_ACEOF
-parallel configure 20140323
+parallel configure 20140422
 generated by GNU Autoconf 2.69
 
 Copyright (C) 2012 Free Software Foundation, Inc.
@@ -1344,7 +1344,7 @@ cat >config.log <<_ACEOF
 This file contains any messages produced by compilers while
 running configure, to aid debugging if configure makes a mistake.
 
-It was created by parallel $as_me 20140323, which was
+It was created by parallel $as_me 20140422, which was
 generated by GNU Autoconf 2.69. Invocation command line was
 
 $ $0 $@
@@ -2159,7 +2159,7 @@ fi
 
 # Define the identity of the package.
 PACKAGE='parallel'
-VERSION='20140323'
+VERSION='20140422'
 
 
 cat >>confdefs.h <<_ACEOF
@@ -2710,7 +2710,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
 # report actual input values of CONFIG_FILES etc. instead of their
 # values after options handling.
 ac_log="
-This file was extended by parallel $as_me 20140323, which was
+This file was extended by parallel $as_me 20140422, which was
 generated by GNU Autoconf 2.69. Invocation command line was
 
 CONFIG_FILES = $CONFIG_FILES
@@ -2772,7 +2772,7 @@ _ACEOF
 cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
 ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
 ac_cs_version="\\
-parallel config.status 20140323
+parallel config.status 20140422
 configured by $0, generated by GNU Autoconf 2.69,
 with options \\"\$ac_cs_config\\"
 

@@ -1,4 +1,4 @@
-AC_INIT([parallel], [20140323], [bug-parallel@gnu.org])
+AC_INIT([parallel], [20140422], [bug-parallel@gnu.org])
 AM_INIT_AUTOMAKE([-Wall -Werror foreign])
 AC_CONFIG_HEADERS([config.h])
 AC_CONFIG_FILES([

@@ -213,13 +213,27 @@ cc:Tim Cuthbertson <tim3d.junk@gmail.com>,
 Ryoichiro Suzuki <ryoichiro.suzuki@gmail.com>,
 Jesse Alama <jesse.alama@gmail.com>
 
-Subject: GNU Parallel 20140422 ('') released
+Subject: GNU Parallel 20140422 ('세월호') released
 
-GNU Parallel 20140422 ('') has been released. It is available for download at: http://ftp.gnu.org/gnu/parallel/
+GNU Parallel 20140422 ('세월호') has been released. It is available for download at: http://ftp.gnu.org/gnu/parallel/
 
 
 New in this release:
 
+* --pipepart is a highly efficient alternative to --pipe if the input is a real file and not a pipe.
+
+* If using --cat or --fifo with --pipe the {} in the command will be replaced with the name of a physical file and a fifo respectively containing the block from --pipe. Useful for commands that cannot read from standard input (stdin).
+
+* --controlmaster has gotten an overhaul and is no longer experimental.
+
+* --env is now copied when determining CPUs on remote system. Useful for copying $PATH if parallel is not in the normal path.
+
+* --results now chops the argument if the argument is longer than the allowed path length.
+
+* Build now survives if pod2* are not installed.
+
+* The git repository now contains tags of releases.
+
 * GNU Parallel was cited in: Proactive System for Digital Forensic Investigation http://dspace.library.uvic.ca:8080/bitstream/handle/1828/5237/Alharbi_Soltan_PhD_2014.pdf
 
 * GNU Parallel was cited in: Beyond MAP estimation with the track-oriented multiple hypothesis tracker http://ieeexplore.ieee.org/xpl/abstractReferences.jsp?tp=&arnumber=6766651&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6766651
 
@@ -232,6 +246,8 @@ New in this release:
 
 * Batch XML validation at the command line. http://horothesia.blogspot.dk/2014/04/batch-xml-validation-at-command-line.html
 
+* freebayes-parallel uses GNU Parallel https://github.com/ekg/freebayes/commit/31ee997984cebe8a196381c3de57e618e34a2281
+
 * Org-mode with Parallel Babel http://draketo.de/english/emacs/parallel-babel#sec-2
 
 * Speeding Up Grep Log Queries with GNU Parallel http://www.cybersecurity.io/speeding-grep-queries-gnu-parallel/
@@ -244,6 +260,8 @@ New in this release:
 
 * Ejecutando comandos en paralelo http://jesusmercado.com/guias/ejecutando-comandos-en-paralelo/
 
+* Ejecutando en paralelo en bash (ejemplo con rsync) http://eithel-inside.blogspot.dk/2014/04/ejecutando-en-paralelo-en-bash-ejemplo.html
+
 * Bug fixes and man page updates.
 
 
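
A rough sketch of the no-longer-experimental --controlmaster mentioned above (the host name is illustrative and passwordless SSH is assumed): it reuses one SSH connection per remote host instead of opening a new connection for every job:

  seq 10 | parallel --controlmaster -S server.example.com echo
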

@@ -1,6 +1,6 @@
 Summary: Shell tool for executing jobs in parallel
 Name: parallel
-Version: 20140322
+Version: 20140422
 Release: 1
 License: GPL
 Group: Productivity/File utilities

@@ -24,7 +24,7 @@
 use strict;
 use Getopt::Long;
 $Global::progname="niceload";
-$Global::version = 20140323;
+$Global::version = 20140422;
 Getopt::Long::Configure("bundling","require_order");
 get_options_from_array(\@ARGV) || die_usage();
 if($opt::version) {

@@ -819,7 +819,7 @@ sub get_options_from_array {
 sub parse_options {
     # Returns: N/A
     # Defaults:
-    $Global::version = 20140409;
+    $Global::version = 20140422;
     $Global::progname = 'parallel';
     $Global::infinity = 2**31;
     $Global::debug = 0;

src/parallel.pdf (BIN): Binary file not shown.

@@ -439,7 +439,7 @@ I<regexp> is a Perl Regular Expression:
 http://perldoc.perl.org/perlre.html
 
 
-=item B<--compress> (beta testing)
+=item B<--compress>
 
 Compress temporary files. If the output is big and very compressible
 this will take up less disk space in $TMPDIR and possibly be faster due to less
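
As a hedged illustration of --compress now that it has left beta testing (the chunk*.gz input files are made up; any job with large, compressible output behaves the same way):

  # The temporary files holding each job's buffered output are kept
  # compressed in $TMPDIR until the output is printed.
  parallel --compress 'zcat {} | sort' ::: chunk*.gz
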
@@ -450,9 +450,9 @@ B<plzip>, B<bzip2>, B<lzma>, B<lzip>, B<xz> in that order, and use the
 first available.
 
 
-=item B<--compress-program> I<prg> (beta testing)
+=item B<--compress-program> I<prg>
 
-=item B<--decompress-program> I<prg> (beta testing)
+=item B<--decompress-program> I<prg>
 
 Use I<prg> for (de)compressing temporary files. It is assumed that I<prg
 -dc> will decompress stdin (standard input) to stdout (standard
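
A sketch of choosing the compressor explicitly rather than taking the first one found (pigz is only an assumed example; per the text above the program must decompress with 'prg -dc'):

  parallel --compress --compress-program pigz 'zcat {} | sort' ::: chunk*.gz
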
@@ -786,7 +786,7 @@ B<-l 0> is an alias for B<-l 1>.
 Implies B<-X> unless B<-m>, B<--xargs>, or B<--pipe> is set.
 
 
-=item B<--line-buffer> (beta testing)
+=item B<--line-buffer>
 
 Buffer output on line basis. B<--group> will keep the output together
 for a whole job. B<--ungroup> allows output to mixup with half a line
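
To make the three modes described above concrete (the hosts are the ones used in the manual's own traceroute example):

  parallel --group traceroute ::: debian.org freenetproject.org        # default: each job's output kept together
  parallel --line-buffer traceroute ::: debian.org freenetproject.org  # jobs interleave, but only on whole lines
  parallel --ungroup traceroute ::: debian.org freenetproject.org      # output may mix in the middle of a line
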
@@ -914,6 +914,26 @@ defaults to '\n'. To have no record separator use B<--recend "">.
 
 B<--files> is often used with B<--pipe>.
 
+See also: B<--recstart>, B<--recend>, B<--fifo>, B<--cat>, B<--pipepart>.
+
+
+=item B<--pipepart> (alpha testing)
+
+Pipe parts of a physical file. B<--pipepart> works similar to
+B<--pipe>, but is much faster. It has a few limitations:
+
+=over 3
+
+=item Z<>*
+
+The file must be a physical (seekable) file and must be given using B<-a> or B<::::>.
+
+=item Z<>*
+
+Record counting (B<-N>) and line counting (B<-L>) do not work.
+
+=back
+
 
 =item B<--plain>
 
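
A hedged sketch of the new --pipepart and the related --fifo behaviour (bigfile is an assumed example file; --pipepart is marked alpha above, so details may still change):

  # --pipepart needs a seekable file, given with -a (or ::::)
  parallel -a bigfile --pipepart wc -l
  # With --pipe, --fifo replaces {} with a fifo holding one block,
  # for commands that cannot read stdin
  cat bigfile | parallel --pipe --fifo wc -c {}
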
@ -8,13 +8,74 @@
|
||||||
@node Top
|
@node Top
|
||||||
@top parallel
|
@top parallel
|
||||||
|
|
||||||
|
@menu
|
||||||
|
* NAME::
|
||||||
|
* SYNOPSIS::
|
||||||
|
* DESCRIPTION::
|
||||||
|
* OPTIONS::
|
||||||
|
* EXAMPLE@asis{:} Working as xargs -n1. Argument appending::
|
||||||
|
* EXAMPLE@asis{:} Reading arguments from command line::
|
||||||
|
* EXAMPLE@asis{:} Inserting multiple arguments::
|
||||||
|
* EXAMPLE@asis{:} Context replace::
|
||||||
|
* EXAMPLE@asis{:} Compute intensive jobs and substitution::
|
||||||
|
* EXAMPLE@asis{:} Substitution and redirection::
|
||||||
|
* EXAMPLE@asis{:} Composed commands::
|
||||||
|
* EXAMPLE@asis{:} Calling Bash functions::
|
||||||
|
* EXAMPLE@asis{:} Removing file extension when processing files::
|
||||||
|
* EXAMPLE@asis{:} Removing two file extensions when processing files and calling GNU Parallel from itself::
|
||||||
|
* EXAMPLE@asis{:} Download 10 images for each of the past 30 days::
|
||||||
|
* EXAMPLE@asis{:} Breadth first parallel web crawler/mirrorer::
|
||||||
|
* EXAMPLE@asis{:} Process files from a tar file while unpacking::
|
||||||
|
* EXAMPLE@asis{:} Rewriting a for-loop and a while-read-loop::
|
||||||
|
* EXAMPLE@asis{:} Rewriting nested for-loops::
|
||||||
|
* EXAMPLE@asis{:} Finding the lowest difference between files::
|
||||||
|
* EXAMPLE@asis{:} for-loops with column names::
|
||||||
|
* EXAMPLE@asis{:} Count the differences between all files in a dir::
|
||||||
|
* EXAMPLE@asis{:} Speeding up fast jobs::
|
||||||
|
* EXAMPLE@asis{:} Using shell variables::
|
||||||
|
* EXAMPLE@asis{:} Group output lines::
|
||||||
|
* EXAMPLE@asis{:} Tag output lines::
|
||||||
|
* EXAMPLE@asis{:} Keep order of output same as order of input::
|
||||||
|
* EXAMPLE@asis{:} Parallel grep::
|
||||||
|
* EXAMPLE@asis{:} Using remote computers::
|
||||||
|
* EXAMPLE@asis{:} Transferring of files::
|
||||||
|
* EXAMPLE@asis{:} Distributing work to local and remote computers::
|
||||||
|
* EXAMPLE@asis{:} Running the same command on remote computers::
|
||||||
|
* EXAMPLE@asis{:} Parallelizing rsync::
|
||||||
|
* EXAMPLE@asis{:} Use multiple inputs in one command::
|
||||||
|
* EXAMPLE@asis{:} Use a table as input::
|
||||||
|
* EXAMPLE@asis{:} Run the same command 10 times::
|
||||||
|
* EXAMPLE@asis{:} Working as cat | sh. Resource inexpensive jobs and evaluation::
|
||||||
|
* EXAMPLE@asis{:} Processing a big file using more cores::
|
||||||
|
* EXAMPLE@asis{:} Running more than 500 jobs workaround::
|
||||||
|
* EXAMPLE@asis{:} Working as mutex and counting semaphore::
|
||||||
|
* EXAMPLE@asis{:} Start editor with filenames from stdin (standard input)::
|
||||||
|
* EXAMPLE@asis{:} Running sudo::
|
||||||
|
* EXAMPLE@asis{:} GNU Parallel as queue system/batch manager::
|
||||||
|
* EXAMPLE@asis{:} GNU Parallel as dir processor::
|
||||||
|
* QUOTING::
|
||||||
|
* LIST RUNNING JOBS::
|
||||||
|
* COMPLETE RUNNING JOBS BUT DO NOT START NEW JOBS::
|
||||||
|
* ENVIRONMENT VARIABLES::
|
||||||
|
* DEFAULT PROFILE (CONFIG FILE)::
|
||||||
|
* PROFILE FILES::
|
||||||
|
* EXIT STATUS::
|
||||||
|
* DIFFERENCES BETWEEN GNU Parallel AND ALTERNATIVES::
|
||||||
|
* BUGS::
|
||||||
|
* REPORTING BUGS::
|
||||||
|
* AUTHOR::
|
||||||
|
* LICENSE::
|
||||||
|
* DEPENDENCIES::
|
||||||
|
* SEE ALSO::
|
||||||
|
@end menu
|
||||||
|
|
||||||
|
@node NAME
|
||||||
@chapter NAME
|
@chapter NAME
|
||||||
@anchor{NAME}
|
|
||||||
|
|
||||||
parallel - build and execute shell command lines from standard input in parallel
|
parallel - build and execute shell command lines from standard input in parallel
|
||||||
|
|
||||||
|
@node SYNOPSIS
|
||||||
@chapter SYNOPSIS
|
@chapter SYNOPSIS
|
||||||
@anchor{SYNOPSIS}
|
|
||||||
|
|
||||||
@strong{parallel} [options] [@emph{command} [arguments]] < list_of_arguments
|
@strong{parallel} [options] [@emph{command} [arguments]] < list_of_arguments
|
||||||
|
|
||||||
|
@ -25,8 +86,8 @@ parallel - build and execute shell command lines from standard input in parallel
|
||||||
|
|
||||||
@strong{#!/usr/bin/parallel} --shebang [options] [@emph{command} [arguments]]
|
@strong{#!/usr/bin/parallel} --shebang [options] [@emph{command} [arguments]]
|
||||||
|
|
||||||
|
@node DESCRIPTION
|
||||||
@chapter DESCRIPTION
|
@chapter DESCRIPTION
|
||||||
@anchor{DESCRIPTION}
|
|
||||||
|
|
||||||
GNU @strong{parallel} is a shell tool for executing jobs in parallel using
|
GNU @strong{parallel} is a shell tool for executing jobs in parallel using
|
||||||
one or more computers. A job can be a single command or a small
|
one or more computers. A job can be a single command or a small
|
||||||
|
@ -51,8 +112,12 @@ the line as arguments. If no @emph{command} is given, the line of input is
|
||||||
executed. Several lines will be run in parallel. GNU @strong{parallel} can
|
executed. Several lines will be run in parallel. GNU @strong{parallel} can
|
||||||
often be used as a substitute for @strong{xargs} or @strong{cat | bash}.
|
often be used as a substitute for @strong{xargs} or @strong{cat | bash}.
|
||||||
|
|
||||||
|
@menu
|
||||||
|
* Reader's guide::
|
||||||
|
@end menu
|
||||||
|
|
||||||
|
@node Reader's guide
|
||||||
@section Reader's guide
|
@section Reader's guide
|
||||||
@anchor{Reader's guide}
|
|
||||||
|
|
||||||
Start by watching the intro videos for a quick introduction:
|
Start by watching the intro videos for a quick introduction:
|
||||||
http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
|
http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
|
||||||
|
@ -66,8 +131,8 @@ parallel_tutorial}). Your command line will love you for it.
|
||||||
Finally you may want to look at the rest of this manual if you have
|
Finally you may want to look at the rest of this manual if you have
|
||||||
special needs not already covered.
|
special needs not already covered.
|
||||||
|
|
||||||
|
@node OPTIONS
|
||||||
@chapter OPTIONS
|
@chapter OPTIONS
|
||||||
@anchor{OPTIONS}
|
|
||||||
|
|
||||||
@table @asis
|
@table @asis
|
||||||
@item @emph{command}
|
@item @emph{command}
|
||||||
|
@ -82,10 +147,29 @@ If @emph{command} is given, GNU @strong{parallel} solve the same tasks as
|
||||||
similar to @strong{cat | sh}.
|
similar to @strong{cat | sh}.
|
||||||
|
|
||||||
The @emph{command} must be an executable, a script, a composed command, or
|
The @emph{command} must be an executable, a script, a composed command, or
|
||||||
a function. If it is a Bash function you need to @strong{export -f} the
|
a function.
|
||||||
|
|
||||||
|
If it is a Bash function you need to @strong{export -f} the
|
||||||
function first. An alias will, however, not work (see why
|
function first. An alias will, however, not work (see why
|
||||||
http://www.perlmonks.org/index.pl?node_id=484296).
|
http://www.perlmonks.org/index.pl?node_id=484296).
|
||||||
|
|
||||||
|
If it is a zsh function you will need to use this helper function
|
||||||
|
@strong{exportf} to export and to set $SHELL to bash:
|
||||||
|
|
||||||
|
@verbatim
|
||||||
|
function exportf (){
|
||||||
|
export $(echo $1)="`whence -f $1 | sed -e "s/$1 //" `"
|
||||||
|
}
|
||||||
|
|
||||||
|
function my_func(){
|
||||||
|
echo $1;
|
||||||
|
echo "hello";
|
||||||
|
}
|
||||||
|
|
||||||
|
exportf my_func
|
||||||
|
SHELL=/bin/bash parallel "my_func {}" ::: 1 2
|
||||||
|
@end verbatim
|
||||||
|
|
||||||
@item @strong{@{@}}
|
@item @strong{@{@}}
|
||||||
@anchor{@strong{@{@}}}
|
@anchor{@strong{@{@}}}
|
||||||
|
|
||||||
|
@ -444,8 +528,8 @@ separating the columns. The n'th column can be access using
|
||||||
@emph{regexp} is a Perl Regular Expression:
|
@emph{regexp} is a Perl Regular Expression:
|
||||||
http://perldoc.perl.org/perlre.html
|
http://perldoc.perl.org/perlre.html
|
||||||
|
|
||||||
@item @strong{--compress} (beta testing)
|
@item @strong{--compress}
|
||||||
@anchor{@strong{--compress} (beta testing)}
|
@anchor{@strong{--compress}}
|
||||||
|
|
||||||
Compress temporary files. If the output is big and very compressible
|
Compress temporary files. If the output is big and very compressible
|
||||||
this will take up less disk space in $TMPDIR and possibly be faster due to less
|
this will take up less disk space in $TMPDIR and possibly be faster due to less
|
||||||
|
@ -455,11 +539,11 @@ GNU @strong{parallel} will try @strong{lzop}, @strong{pigz}, @strong{gzip}, @str
|
||||||
@strong{plzip}, @strong{bzip2}, @strong{lzma}, @strong{lzip}, @strong{xz} in that order, and use the
|
@strong{plzip}, @strong{bzip2}, @strong{lzma}, @strong{lzip}, @strong{xz} in that order, and use the
|
||||||
first available.
|
first available.
|
||||||
|
|
||||||
@item @strong{--compress-program} @emph{prg} (beta testing)
|
@item @strong{--compress-program} @emph{prg}
|
||||||
@anchor{@strong{--compress-program} @emph{prg} (beta testing)}
|
@anchor{@strong{--compress-program} @emph{prg}}
|
||||||
|
|
||||||
@item @strong{--decompress-program} @emph{prg} (beta testing)
|
@item @strong{--decompress-program} @emph{prg}
|
||||||
@anchor{@strong{--decompress-program} @emph{prg} (beta testing)}
|
@anchor{@strong{--decompress-program} @emph{prg}}
|
||||||
|
|
||||||
Use @emph{prg} for (de)compressing temporary files. It is assumed that @emph{prg
|
Use @emph{prg} for (de)compressing temporary files. It is assumed that @emph{prg
|
||||||
-dc} will decompress stdin (standard input) to stdout (standard
|
-dc} will decompress stdin (standard input) to stdout (standard
|
||||||
|
@ -538,7 +622,7 @@ environment that the command is run in. This is especially useful for
|
||||||
remote execution.
|
remote execution.
|
||||||
|
|
||||||
In Bash @emph{var} can also be a Bash function - just remember to @strong{export
|
In Bash @emph{var} can also be a Bash function - just remember to @strong{export
|
||||||
-f} the function.
|
-f} the function, see @strong{command}.
|
||||||
|
|
||||||
The variable '_' is special. It will copy all enviroment variables
|
The variable '_' is special. It will copy all enviroment variables
|
||||||
except for the ones mentioned in ~/.parallel/ignored_vars.
|
except for the ones mentioned in ~/.parallel/ignored_vars.
|
||||||
|
@ -587,7 +671,7 @@ $PARALLEL, /etc/parallel/config or similar. This is because GNU
|
||||||
infinite loop. This will likely be fixed in a later release.
|
infinite loop. This will likely be fixed in a later release.
|
||||||
|
|
||||||
@item @strong{--gnu}
|
@item @strong{--gnu}
|
||||||
@anchor{@strong{--gnu} }
|
@anchor{@strong{--gnu}}
|
||||||
|
|
||||||
Behave like GNU @strong{parallel}. If @strong{--tollef} and @strong{--gnu} are both set,
|
Behave like GNU @strong{parallel}. If @strong{--tollef} and @strong{--gnu} are both set,
|
||||||
@strong{--gnu} takes precedence. @strong{--tollef} is retired, but @strong{--gnu} is
|
@strong{--gnu} takes precedence. @strong{--tollef} is retired, but @strong{--gnu} is
|
||||||
|
@ -820,8 +904,8 @@ standard specifies @strong{-L} instead.
|
||||||
|
|
||||||
Implies @strong{-X} unless @strong{-m}, @strong{--xargs}, or @strong{--pipe} is set.
|
Implies @strong{-X} unless @strong{-m}, @strong{--xargs}, or @strong{--pipe} is set.
|
||||||
|
|
||||||
@item @strong{--line-buffer} (beta testing)
|
@item @strong{--line-buffer}
|
||||||
@anchor{@strong{--line-buffer} (beta testing)}
|
@anchor{@strong{--line-buffer}}
|
||||||
|
|
||||||
Buffer output on line basis. @strong{--group} will keep the output together
|
Buffer output on line basis. @strong{--group} will keep the output together
|
||||||
for a whole job. @strong{--ungroup} allows output to mixup with half a line
|
for a whole job. @strong{--ungroup} allows output to mixup with half a line
|
||||||
|
@ -953,6 +1037,27 @@ defaults to '\n'. To have no record separator use @strong{--recend ""}.
|
||||||
|
|
||||||
@strong{--files} is often used with @strong{--pipe}.
|
@strong{--files} is often used with @strong{--pipe}.
|
||||||
|
|
||||||
|
See also: @strong{--recstart}, @strong{--recend}, @strong{--fifo}, @strong{--cat}, @strong{--pipepart}.
|
||||||
|
|
||||||
|
@item @strong{--pipepart} (alpha testing)
|
||||||
|
@anchor{@strong{--pipepart} (alpha testing)}
|
||||||
|
|
||||||
|
Pipe parts of a physical file. @strong{--pipepart} works similar to
|
||||||
|
@strong{--pipe}, but is much faster. It has a few limitations:
|
||||||
|
|
||||||
|
@table @asis
|
||||||
|
@item *
|
||||||
|
@anchor{*}
|
||||||
|
|
||||||
|
The file must be a physical (seekable) file and must be given using @strong{-a} or @strong{::::}.
|
||||||
|
|
||||||
|
@item *
|
||||||
|
@anchor{* 1}
|
||||||
|
|
||||||
|
Record counting (@strong{-N}) and line counting (@strong{-L}) do not work.
|
||||||
|
|
||||||
|
@end table
|
||||||
|
|
||||||
@item @strong{--plain}
|
@item @strong{--plain}
|
||||||
@anchor{@strong{--plain}}
|
@anchor{@strong{--plain}}
|
||||||
|
|
||||||
|
@ -1174,12 +1279,12 @@ will generate the files:
|
||||||
|
|
||||||
@verbatim
|
@verbatim
|
||||||
foo/a/I/b/III/stderr
|
foo/a/I/b/III/stderr
|
||||||
foo/a/I/b/IIII/stderr
|
|
||||||
foo/a/II/b/III/stderr
|
|
||||||
foo/a/II/b/IIII/stderr
|
|
||||||
foo/a/I/b/III/stdout
|
foo/a/I/b/III/stdout
|
||||||
|
foo/a/I/b/IIII/stderr
|
||||||
foo/a/I/b/IIII/stdout
|
foo/a/I/b/IIII/stdout
|
||||||
|
foo/a/II/b/III/stderr
|
||||||
foo/a/II/b/III/stdout
|
foo/a/II/b/III/stdout
|
||||||
|
foo/a/II/b/IIII/stderr
|
||||||
foo/a/II/b/IIII/stdout
|
foo/a/II/b/IIII/stdout
|
||||||
@end verbatim
|
@end verbatim
|
||||||
|
|
||||||
|
@ -1193,12 +1298,12 @@ will generate the files:
|
||||||
|
|
||||||
@verbatim
|
@verbatim
|
||||||
foo/1/I/2/III/stderr
|
foo/1/I/2/III/stderr
|
||||||
foo/1/I/2/IIII/stderr
|
|
||||||
foo/1/II/2/III/stderr
|
|
||||||
foo/1/II/2/IIII/stderr
|
|
||||||
foo/1/I/2/III/stdout
|
foo/1/I/2/III/stdout
|
||||||
|
foo/1/I/2/IIII/stderr
|
||||||
foo/1/I/2/IIII/stdout
|
foo/1/I/2/IIII/stdout
|
||||||
|
foo/1/II/2/III/stderr
|
||||||
foo/1/II/2/III/stdout
|
foo/1/II/2/III/stdout
|
||||||
|
foo/1/II/2/IIII/stderr
|
||||||
foo/1/II/2/IIII/stdout
|
foo/1/II/2/IIII/stdout
|
||||||
@end verbatim
|
@end verbatim
|
||||||
|
|
||||||
|
@ -1825,8 +1930,8 @@ See also @strong{--header}.
|
||||||
|
|
||||||
@end table
|
@end table
|
||||||
|
|
||||||
|
@node EXAMPLE: Working as xargs -n1. Argument appending
|
||||||
@chapter EXAMPLE: Working as xargs -n1. Argument appending
|
@chapter EXAMPLE: Working as xargs -n1. Argument appending
|
||||||
@anchor{EXAMPLE: Working as xargs -n1. Argument appending}
|
|
||||||
|
|
||||||
GNU @strong{parallel} can work similar to @strong{xargs -n1}.
|
GNU @strong{parallel} can work similar to @strong{xargs -n1}.
|
||||||
|
|
||||||
|
@ -1841,8 +1946,8 @@ FUBAR in all files in this dir and subdirs:
|
||||||
|
|
||||||
Note @strong{-q} is needed because of the space in 'FOO BAR'.
|
Note @strong{-q} is needed because of the space in 'FOO BAR'.
|
||||||
|
|
||||||
|
@node EXAMPLE: Reading arguments from command line
|
||||||
@chapter EXAMPLE: Reading arguments from command line
|
@chapter EXAMPLE: Reading arguments from command line
|
||||||
@anchor{EXAMPLE: Reading arguments from command line}
|
|
||||||
|
|
||||||
GNU @strong{parallel} can take the arguments from command line instead of
|
GNU @strong{parallel} can take the arguments from command line instead of
|
||||||
stdin (standard input). To compress all html files in the current dir
|
stdin (standard input). To compress all html files in the current dir
|
||||||
|
@ -1855,8 +1960,8 @@ run:
|
||||||
|
|
||||||
@strong{parallel lame @{@} -o @{.@}.mp3 ::: *.wav}
|
@strong{parallel lame @{@} -o @{.@}.mp3 ::: *.wav}
|
||||||
|
|
||||||
|
@node EXAMPLE: Inserting multiple arguments
|
||||||
@chapter EXAMPLE: Inserting multiple arguments
|
@chapter EXAMPLE: Inserting multiple arguments
|
||||||
@anchor{EXAMPLE: Inserting multiple arguments}
|
|
||||||
|
|
||||||
When moving a lot of files like this: @strong{mv *.log destdir} you will
|
When moving a lot of files like this: @strong{mv *.log destdir} you will
|
||||||
sometimes get the error:
|
sometimes get the error:
|
||||||
|
@ -1872,8 +1977,8 @@ as many arguments that will fit on the line:
|
||||||
|
|
||||||
@strong{ls | grep -E '\.log$' | parallel -m mv @{@} destdir}
|
@strong{ls | grep -E '\.log$' | parallel -m mv @{@} destdir}
|
||||||
|
|
||||||
|
@node EXAMPLE: Context replace
|
||||||
@chapter EXAMPLE: Context replace
|
@chapter EXAMPLE: Context replace
|
||||||
@anchor{EXAMPLE: Context replace}
|
|
||||||
|
|
||||||
To remove the files @emph{pict0000.jpg} .. @emph{pict9999.jpg} you could do:
|
To remove the files @emph{pict0000.jpg} .. @emph{pict9999.jpg} you could do:
|
||||||
|
|
||||||
|
@ -1894,8 +1999,8 @@ You could also run:
|
||||||
This will also only run @strong{rm} as many times needed to keep the command
|
This will also only run @strong{rm} as many times needed to keep the command
|
||||||
line length short enough.
|
line length short enough.
|
||||||
|
|
||||||
|
@node EXAMPLE: Compute intensive jobs and substitution
|
||||||
@chapter EXAMPLE: Compute intensive jobs and substitution
|
@chapter EXAMPLE: Compute intensive jobs and substitution
|
||||||
@anchor{EXAMPLE: Compute intensive jobs and substitution}
|
|
||||||
|
|
||||||
If ImageMagick is installed this will generate a thumbnail of a jpg
|
If ImageMagick is installed this will generate a thumbnail of a jpg
|
||||||
file:
|
file:
|
||||||
|
@ -1921,8 +2026,8 @@ make files like ./foo/bar_thumb.jpg:
|
||||||
|
|
||||||
@strong{find . -name '*.jpg' | parallel convert -geometry 120 @{@} @{.@}_thumb.jpg}
|
@strong{find . -name '*.jpg' | parallel convert -geometry 120 @{@} @{.@}_thumb.jpg}
|
||||||
|
|
||||||
|
@node EXAMPLE: Substitution and redirection
|
||||||
@chapter EXAMPLE: Substitution and redirection
|
@chapter EXAMPLE: Substitution and redirection
|
||||||
@anchor{EXAMPLE: Substitution and redirection}
|
|
||||||
|
|
||||||
This will generate an uncompressed version of .gz-files next to the .gz-file:
|
This will generate an uncompressed version of .gz-files next to the .gz-file:
|
||||||
|
|
||||||
|
@ -1937,8 +2042,8 @@ Other special shell characters (such as * ; $ > < | >> <<) also need
|
||||||
to be put in quotes, as they may otherwise be interpreted by the shell
|
to be put in quotes, as they may otherwise be interpreted by the shell
|
||||||
and not given to GNU @strong{parallel}.
|
and not given to GNU @strong{parallel}.
|
||||||
|
|
||||||
|
@node EXAMPLE: Composed commands
|
||||||
@chapter EXAMPLE: Composed commands
|
@chapter EXAMPLE: Composed commands
|
||||||
@anchor{EXAMPLE: Composed commands}
|
|
||||||
|
|
||||||
A job can consist of several commands. This will print the number of
|
A job can consist of several commands. This will print the number of
|
||||||
files in each directory:
|
files in each directory:
|
||||||
|
@ -1969,8 +2074,8 @@ Find the files in a list that do not exist
|
||||||
|
|
||||||
@strong{cat file_list | parallel 'if [ ! -e @{@} ] ; then echo @{@}; fi'}
|
@strong{cat file_list | parallel 'if [ ! -e @{@} ] ; then echo @{@}; fi'}
|
||||||
|
|
||||||
|
@node EXAMPLE: Calling Bash functions
|
||||||
@chapter EXAMPLE: Calling Bash functions
|
@chapter EXAMPLE: Calling Bash functions
|
||||||
@anchor{EXAMPLE: Calling Bash functions}
|
|
||||||
|
|
||||||
If the composed command is longer than a line, it becomes hard to
|
If the composed command is longer than a line, it becomes hard to
|
||||||
read. In Bash you can use functions. Just remember to @strong{export -f} the
|
read. In Bash you can use functions. Just remember to @strong{export -f} the
|
||||||
|
@ -2002,8 +2107,8 @@ To do this on remote servers you need to transfer the function using
|
||||||
parallel --env doubleit -S server doubleit ::: 1 2 3 ::: a b
|
parallel --env doubleit -S server doubleit ::: 1 2 3 ::: a b
|
||||||
@end verbatim
|
@end verbatim
|
||||||
|
|
||||||
|
@node EXAMPLE: Removing file extension when processing files
|
||||||
@chapter EXAMPLE: Removing file extension when processing files
|
@chapter EXAMPLE: Removing file extension when processing files
|
||||||
@anchor{EXAMPLE: Removing file extension when processing files}
|
|
||||||
|
|
||||||
When processing files removing the file extension using @strong{@{.@}} is
|
When processing files removing the file extension using @strong{@{.@}} is
|
||||||
often useful.
|
often useful.
|
||||||
|
@ -2025,8 +2130,8 @@ Put all converted in the same directory:
|
||||||
|
|
||||||
@strong{find sounddir -type f -name '*.wav' | parallel lame @{@} -o mydir/@{/.@}.mp3}
|
@strong{find sounddir -type f -name '*.wav' | parallel lame @{@} -o mydir/@{/.@}.mp3}
|
||||||
|
|
||||||
|
@node EXAMPLE: Removing two file extensions when processing files and calling GNU Parallel from itself
|
||||||
@chapter EXAMPLE: Removing two file extensions when processing files and calling GNU Parallel from itself
|
@chapter EXAMPLE: Removing two file extensions when processing files and calling GNU Parallel from itself
|
||||||
@anchor{EXAMPLE: Removing two file extensions when processing files and calling GNU Parallel from itself}
|
|
||||||
|
|
||||||
If you have directory with tar.gz files and want these extracted in
|
If you have directory with tar.gz files and want these extracted in
|
||||||
the corresponding dir (e.g foo.tar.gz will be extracted in the dir
|
the corresponding dir (e.g foo.tar.gz will be extracted in the dir
|
||||||
|
@ -2034,8 +2139,8 @@ foo) you can do:
|
||||||
|
|
||||||
@strong{ls *.tar.gz| parallel --er @{tar@} 'echo @{tar@}|parallel "mkdir -p @{.@} ; tar -C @{.@} -xf @{.@}.tar.gz"'}
|
@strong{ls *.tar.gz| parallel --er @{tar@} 'echo @{tar@}|parallel "mkdir -p @{.@} ; tar -C @{.@} -xf @{.@}.tar.gz"'}
|
||||||
|
|
||||||
|
@node EXAMPLE: Download 10 images for each of the past 30 days
|
||||||
@chapter EXAMPLE: Download 10 images for each of the past 30 days
|
@chapter EXAMPLE: Download 10 images for each of the past 30 days
|
||||||
@anchor{EXAMPLE: Download 10 images for each of the past 30 days}
|
|
||||||
|
|
||||||
Let us assume a website stores images like:
|
Let us assume a website stores images like:
|
||||||
|
|
||||||
|
@ -2051,8 +2156,8 @@ download images for the past 30 days:
|
||||||
@strong{$(date -d "today -@{1@} days" +%Y%m%d)} will give the dates in
|
@strong{$(date -d "today -@{1@} days" +%Y%m%d)} will give the dates in
|
||||||
YYYYMMDD with @{1@} days subtracted.
|
YYYYMMDD with @{1@} days subtracted.
|
||||||
|
|
||||||
|
@node EXAMPLE: Breadth first parallel web crawler/mirrorer
|
||||||
@chapter EXAMPLE: Breadth first parallel web crawler/mirrorer
|
@chapter EXAMPLE: Breadth first parallel web crawler/mirrorer
|
||||||
@anchor{EXAMPLE: Breadth first parallel web crawler/mirrorer}
|
|
||||||
|
|
||||||
This script below will crawl and mirror a URL in parallel. It
|
This script below will crawl and mirror a URL in parallel. It
|
||||||
downloads first pages that are 1 click down, then 2 clicks down, then
|
downloads first pages that are 1 click down, then 2 clicks down, then
|
||||||
|
@ -2098,8 +2203,8 @@ URLs and the process is started over until no unseen links are found.
|
||||||
rm -f $URLLIST $URLLIST2 $SEEN
|
rm -f $URLLIST $URLLIST2 $SEEN
|
||||||
@end verbatim
|
@end verbatim
|
||||||
|
|
||||||
|
@node EXAMPLE: Process files from a tar file while unpacking
|
||||||
@chapter EXAMPLE: Process files from a tar file while unpacking
|
@chapter EXAMPLE: Process files from a tar file while unpacking
|
||||||
@anchor{EXAMPLE: Process files from a tar file while unpacking}
|
|
||||||
|
|
||||||
If the files to be processed are in a tar file then unpacking one file
|
If the files to be processed are in a tar file then unpacking one file
|
||||||
and processing it immediately may be faster than first unpacking all
|
and processing it immediately may be faster than first unpacking all
|
||||||
|
@ -2110,8 +2215,8 @@ parallel echo}
|
||||||
|
|
||||||
The Perl one-liner is needed to avoid race condition.
|
The Perl one-liner is needed to avoid race condition.
|
||||||
|
|
||||||
|
@node EXAMPLE: Rewriting a for-loop and a while-read-loop
|
||||||
@chapter EXAMPLE: Rewriting a for-loop and a while-read-loop
|
@chapter EXAMPLE: Rewriting a for-loop and a while-read-loop
|
||||||
@anchor{EXAMPLE: Rewriting a for-loop and a while-read-loop}
|
|
||||||
|
|
||||||
for-loops like this:
|
for-loops like this:
|
||||||
|
|
||||||
|
@ -2161,8 +2266,8 @@ can be written like this:
|
||||||
|
|
||||||
@strong{cat list | parallel "do_something @{@} scale @{.@}.jpg ; do_step2 <@{@} @{.@}" | process_output}
|
@strong{cat list | parallel "do_something @{@} scale @{.@}.jpg ; do_step2 <@{@} @{.@}" | process_output}
|
||||||
|
|
||||||
|
@node EXAMPLE: Rewriting nested for-loops
|
||||||
@chapter EXAMPLE: Rewriting nested for-loops
|
@chapter EXAMPLE: Rewriting nested for-loops
|
||||||
@anchor{EXAMPLE: Rewriting nested for-loops}
|
|
||||||
|
|
||||||
Nested for-loops like this:
|
Nested for-loops like this:
|
||||||
|
|
||||||
|
@ -2192,8 +2297,8 @@ can be written like this:
|
||||||
|
|
||||||
@strong{parallel echo @{1@} @{2@} ::: M F ::: S M L XL XXL | sort}
|
@strong{parallel echo @{1@} @{2@} ::: M F ::: S M L XL XXL | sort}
|
||||||
|
|
||||||
|
@node EXAMPLE: Finding the lowest difference between files
|
||||||
@chapter EXAMPLE: Finding the lowest difference between files
|
@chapter EXAMPLE: Finding the lowest difference between files
|
||||||
@anchor{EXAMPLE: Finding the lowest difference between files}
|
|
||||||
|
|
||||||
@strong{diff} is good for finding differences in text files. @strong{diff | wc -l}
|
@strong{diff} is good for finding differences in text files. @strong{diff | wc -l}
|
||||||
gives an indication of the size of the difference. To find the
|
gives an indication of the size of the difference. To find the
|
||||||
|
@ -2204,8 +2309,8 @@ differences between all files in the current dir do:
|
||||||
This way it is possible to see if some files are closer to other
|
This way it is possible to see if some files are closer to other
|
||||||
files.
|
files.
|
||||||
|
|
||||||
|
@node EXAMPLE: for-loops with column names
|
||||||
@chapter EXAMPLE: for-loops with column names
|
@chapter EXAMPLE: for-loops with column names
|
||||||
@anchor{EXAMPLE: for-loops with column names}
|
|
||||||
|
|
||||||
When doing multiple nested for-loops it can be easier to keep track of
|
When doing multiple nested for-loops it can be easier to keep track of
|
||||||
the loop variable if is is named instead of just having a number. Use
|
the loop variable if is is named instead of just having a number. Use
|
||||||
|
@ -2222,8 +2327,8 @@ This also works if the input file is a file with columns:
|
||||||
cat addressbook.tsv | parallel --colsep '\t' --header : echo {Name} {E-mail address}
|
cat addressbook.tsv | parallel --colsep '\t' --header : echo {Name} {E-mail address}
|
||||||
@end verbatim
|
@end verbatim
|
||||||
|
|
||||||
|
@node EXAMPLE: Count the differences between all files in a dir
|
||||||
@chapter EXAMPLE: Count the differences between all files in a dir
|
@chapter EXAMPLE: Count the differences between all files in a dir
|
||||||
@anchor{EXAMPLE: Count the differences between all files in a dir}
|
|
||||||
|
|
||||||
Using @strong{--results} the results are saved in /tmp/diffcount*.
|
Using @strong{--results} the results are saved in /tmp/diffcount*.
|
||||||
|
|
||||||
|
@ -2234,8 +2339,8 @@ Using @strong{--results} the results are saved in /tmp/diffcount*.
|
||||||
To see the difference between file A and file B look at the file
|
To see the difference between file A and file B look at the file
|
||||||
'/tmp/diffcount 1 A 2 B' where spaces are TABs (\t).
|
'/tmp/diffcount 1 A 2 B' where spaces are TABs (\t).
|
||||||
|
|
||||||
|
@node EXAMPLE: Speeding up fast jobs
|
||||||
@chapter EXAMPLE: Speeding up fast jobs
|
@chapter EXAMPLE: Speeding up fast jobs
|
||||||
@anchor{EXAMPLE: Speeding up fast jobs}
|
|
||||||
|
|
||||||
Starting a job on the local machine takes around 3 ms. This can be a
|
Starting a job on the local machine takes around 3 ms. This can be a
|
||||||
big overhead if the job takes very few ms to run. Often you can group
|
big overhead if the job takes very few ms to run. Often you can group
|
||||||
|
@ -2260,8 +2365,8 @@ If @strong{-j0} normally spawns 506 jobs, then the above will try to spawn
|
||||||
of processes and/or filehandles. Look at 'ulimit -n' and 'ulimit -u'
|
of processes and/or filehandles. Look at 'ulimit -n' and 'ulimit -u'
|
||||||
to raise these limits.
|
to raise these limits.
|
||||||
|
|
||||||
|
@node EXAMPLE: Using shell variables
|
||||||
@chapter EXAMPLE: Using shell variables
|
@chapter EXAMPLE: Using shell variables
|
||||||
@anchor{EXAMPLE: Using shell variables}
|
|
||||||
|
|
||||||
When using shell variables you need to quote them correctly as they
|
When using shell variables you need to quote them correctly as they
|
||||||
may otherwise be split on spaces.
|
may otherwise be split on spaces.
|
||||||
|
@ -2290,8 +2395,8 @@ characters (e.g. space) you can quote them using @strong{'"$VAR"'} or using
|
||||||
parallel -q echo "$V" ::: spaces
|
parallel -q echo "$V" ::: spaces
|
||||||
@end verbatim
|
@end verbatim
|
||||||
|
|
||||||
|
@node EXAMPLE: Group output lines
|
||||||
@chapter EXAMPLE: Group output lines
|
@chapter EXAMPLE: Group output lines
|
||||||
@anchor{EXAMPLE: Group output lines}
|
|
||||||
|
|
||||||
When running jobs that output data, you often do not want the output
|
When running jobs that output data, you often do not want the output
|
||||||
of multiple jobs to run together. GNU @strong{parallel} defaults to grouping the
|
of multiple jobs to run together. GNU @strong{parallel} defaults to grouping the
|
||||||
|
@ -2307,8 +2412,8 @@ to the output of:
|
||||||
|
|
||||||
@strong{parallel -u traceroute ::: foss.org.my debian.org freenetproject.org}
|
@strong{parallel -u traceroute ::: foss.org.my debian.org freenetproject.org}
|
||||||
|
|
||||||
|
@node EXAMPLE: Tag output lines
|
||||||
@chapter EXAMPLE: Tag output lines
|
@chapter EXAMPLE: Tag output lines
|
||||||
@anchor{EXAMPLE: Tag output lines}
|
|
||||||
|
|
||||||
GNU @strong{parallel} groups the output lines, but it can be hard to see
|
GNU @strong{parallel} groups the output lines, but it can be hard to see
|
||||||
where the different jobs begin. @strong{--tag} prepends the argument to make
|
where the different jobs begin. @strong{--tag} prepends the argument to make
|
||||||
|
@ -2320,8 +2425,8 @@ Check the uptime of the servers in @emph{~/.parallel/sshloginfile}:
|
||||||
|
|
||||||
@strong{parallel --tag -S .. --nonall uptime}
|
@strong{parallel --tag -S .. --nonall uptime}
|
||||||
|
|
||||||
|
@node EXAMPLE: Keep order of output same as order of input
|
||||||
@chapter EXAMPLE: Keep order of output same as order of input
|
@chapter EXAMPLE: Keep order of output same as order of input
|
||||||
@anchor{EXAMPLE: Keep order of output same as order of input}
|
|
||||||
|
|
||||||
Normally the output of a job will be printed as soon as it
|
Normally the output of a job will be printed as soon as it
|
||||||
completes. Sometimes you want the order of the output to remain the
|
completes. Sometimes you want the order of the output to remain the
|
||||||
|
@ -2368,8 +2473,8 @@ combined in the correct order.
|
||||||
@strong{seq 0 99 | parallel -k curl -r \
|
@strong{seq 0 99 | parallel -k curl -r \
|
||||||
@{@}0000000-@{@}9999999 http://example.com/the/big/file} > @strong{file}
|
@{@}0000000-@{@}9999999 http://example.com/the/big/file} > @strong{file}
|
||||||
|
|
||||||
|
@node EXAMPLE: Parallel grep
|
||||||
@chapter EXAMPLE: Parallel grep
|
@chapter EXAMPLE: Parallel grep
|
||||||
@anchor{EXAMPLE: Parallel grep}
|
|
||||||
|
|
||||||
@strong{grep -r} greps recursively through directories. On multicore CPUs
|
@strong{grep -r} greps recursively through directories. On multicore CPUs
|
||||||
GNU @strong{parallel} can often speed this up.
|
GNU @strong{parallel} can often speed this up.
|
||||||
|
@ -2378,8 +2483,8 @@ GNU @strong{parallel} can often speed this up.
|
||||||
|
|
||||||
This will run 1.5 job per core, and give 1000 arguments to @strong{grep}.
|
This will run 1.5 job per core, and give 1000 arguments to @strong{grep}.
|
||||||
|
|
||||||
|
@node EXAMPLE: Using remote computers
|
||||||
@chapter EXAMPLE: Using remote computers
|
@chapter EXAMPLE: Using remote computers
|
||||||
@anchor{EXAMPLE: Using remote computers}
|
|
||||||
|
|
||||||
To run commands on a remote computer SSH needs to be set up and you
|
To run commands on a remote computer SSH needs to be set up and you
|
||||||
must be able to login without entering a password (The commands
|
must be able to login without entering a password (The commands
|
||||||
|
@ -2466,8 +2571,8 @@ computer has 8 CPU cores.
|
||||||
seq 10 | parallel --sshlogin 8/server.example.com echo
|
seq 10 | parallel --sshlogin 8/server.example.com echo
|
||||||
@end verbatim
|
@end verbatim
|
||||||
|
|
||||||
|
@node EXAMPLE: Transferring of files
|
||||||
@chapter EXAMPLE: Transferring of files
|
@chapter EXAMPLE: Transferring of files
|
||||||
@anchor{EXAMPLE: Transferring of files}
|
|
||||||
|
|
||||||
To recompress gzipped files with @strong{bzip2} using a remote computer run:
|
To recompress gzipped files with @strong{bzip2} using a remote computer run:
|
||||||
|
|
||||||
|
@ -2553,8 +2658,8 @@ the special short hand @emph{-S ..} can be used:
|
||||||
--trc {.}.bz2 "zcat {} | bzip2 -9 >{.}.bz2"
|
--trc {.}.bz2 "zcat {} | bzip2 -9 >{.}.bz2"
|
||||||
@end verbatim
|
@end verbatim
|
||||||
|
|
||||||
|
@node EXAMPLE: Distributing work to local and remote computers
|
||||||
@chapter EXAMPLE: Distributing work to local and remote computers
|
@chapter EXAMPLE: Distributing work to local and remote computers
|
||||||
@anchor{EXAMPLE: Distributing work to local and remote computers}
|
|
||||||
|
|
||||||
Convert *.mp3 to *.ogg running one process per CPU core on local computer and server2:
|
Convert *.mp3 to *.ogg running one process per CPU core on local computer and server2:
|
||||||
|
|
||||||
|
@ -2563,8 +2668,8 @@ Convert *.mp3 to *.ogg running one process per CPU core on local computer and se
|
||||||
'mpg321 -w - {} | oggenc -q0 - -o {.}.ogg' ::: *.mp3
|
'mpg321 -w - {} | oggenc -q0 - -o {.}.ogg' ::: *.mp3
|
||||||
@end verbatim
|
@end verbatim
|
||||||
|
|
||||||
|
@node EXAMPLE: Running the same command on remote computers
|
||||||
@chapter EXAMPLE: Running the same command on remote computers
|
@chapter EXAMPLE: Running the same command on remote computers
|
||||||
@anchor{EXAMPLE: Running the same command on remote computers}
|
|
||||||
|
|
||||||
To run the command @strong{uptime} on remote computers you can do:
|
To run the command @strong{uptime} on remote computers you can do:
|
||||||
|
|
||||||
|
@ -2580,8 +2685,8 @@ output.
|
||||||
|
|
||||||
If you have a lot of hosts use '-j0' to access more hosts in parallel.
|
If you have a lot of hosts use '-j0' to access more hosts in parallel.
|
||||||
|
|
||||||
|
@node EXAMPLE: Parallelizing rsync
|
||||||
@chapter EXAMPLE: Parallelizing rsync
|
@chapter EXAMPLE: Parallelizing rsync
|
||||||
@anchor{EXAMPLE: Parallelizing rsync}
|
|
||||||
|
|
||||||
@strong{rsync} is a great tool, but sometimes it will not fill up the
|
@strong{rsync} is a great tool, but sometimes it will not fill up the
|
||||||
available bandwidth. This is often a problem when copying several big
|
available bandwidth. This is often a problem when copying several big
|
||||||
|
@ -2603,8 +2708,8 @@ are called digits.png (e.g. 000000.png) you might be able to do:
|
||||||
|
|
||||||
@strong{seq -w 0 99 | parallel rsync -Havessh fooserver:src-path/*@{@}.png destdir/}
|
@strong{seq -w 0 99 | parallel rsync -Havessh fooserver:src-path/*@{@}.png destdir/}
|
||||||
|
|
||||||
|
@node EXAMPLE: Use multiple inputs in one command
|
||||||
@chapter EXAMPLE: Use multiple inputs in one command
|
@chapter EXAMPLE: Use multiple inputs in one command
|
||||||
@anchor{EXAMPLE: Use multiple inputs in one command}
|
|
||||||
|
|
||||||
Copy files like foo.es.ext to foo.ext:
|
Copy files like foo.es.ext to foo.ext:
|
||||||
|
|
||||||
|
@ -2632,8 +2737,8 @@ Alternative version:
|
||||||
|
|
||||||
@strong{find . -type f | sort | parallel convert @{@} @{#@}.png}
|
@strong{find . -type f | sort | parallel convert @{@} @{#@}.png}
|
||||||
|
|
||||||
|
@node EXAMPLE: Use a table as input
|
||||||
@chapter EXAMPLE: Use a table as input
|
@chapter EXAMPLE: Use a table as input
|
||||||
@anchor{EXAMPLE: Use a table as input}
|
|
||||||
|
|
||||||
Content of table_file.tsv:
|
Content of table_file.tsv:
|
||||||
|
|
||||||
|
@ -2657,16 +2762,16 @@ Note: The default for GNU @strong{parallel} is to remove the spaces around the c
|
||||||
|
|
||||||
@strong{parallel -a table_file.tsv --trim n --colsep '\t' cmd -o @{2@} -i @{1@}}
|
@strong{parallel -a table_file.tsv --trim n --colsep '\t' cmd -o @{2@} -i @{1@}}
|
||||||
|
|
||||||
|
@node EXAMPLE: Run the same command 10 times
|
||||||
@chapter EXAMPLE: Run the same command 10 times
|
@chapter EXAMPLE: Run the same command 10 times
|
||||||
@anchor{EXAMPLE: Run the same command 10 times}
|
|
||||||
|
|
||||||
If you want to run the same command with the same arguments 10 times
|
If you want to run the same command with the same arguments 10 times
|
||||||
in parallel you can do:
|
in parallel you can do:
|
||||||
|
|
||||||
@strong{seq 10 | parallel -n0 my_command my_args}
|
@strong{seq 10 | parallel -n0 my_command my_args}
|
||||||
|
|
||||||
|
@node EXAMPLE: Working as cat | sh. Resource inexpensive jobs and evaluation
@chapter EXAMPLE: Working as cat | sh. Resource inexpensive jobs and evaluation
@anchor{EXAMPLE: Working as cat | sh. Resource inexpensive jobs and evaluation}

GNU @strong{parallel} can work similarly to @strong{cat | sh}.
@@ -2692,8 +2797,8 @@ To run 100 processes simultaneously do:

As there is no @emph{command} the jobs will be evaluated by the shell.
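
A minimal sketch of the idea: each input line is a complete job that the
shell evaluates (the commands shown are just placeholders):

@verbatim
  (echo 'echo first; sleep 1'; echo 'echo second') | parallel
@end verbatim
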
@node EXAMPLE: Processing a big file using more cores
@chapter EXAMPLE: Processing a big file using more cores
@anchor{EXAMPLE: Processing a big file using more cores}

To process a big file or some output you can use @strong{--pipe} to split up
the data into blocks and pipe the blocks into the processing program.
@@ -2720,8 +2825,8 @@ files are passed to the second @strong{parallel} that runs @strong{sort -m} on t
files before it removes the files. The output is saved to
@strong{bigfile.sort}.
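
A minimal @strong{--pipe} sketch (here @strong{grep} is just a stand-in for the
processing program, and the file names are placeholders):

@verbatim
  cat bigfile | parallel --pipe --block 10M grep foo > foo.hits
@end verbatim
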
@node EXAMPLE: Running more than 500 jobs workaround
@chapter EXAMPLE: Running more than 500 jobs workaround
@anchor{EXAMPLE: Running more than 500 jobs workaround}

If you need to run a massive number of jobs in parallel, then you will
likely hit the filehandle limit which is often around 500 jobs. If you
@@ -2736,8 +2841,8 @@ This will spawn up to 250000 jobs (use with caution - you need 250 GB RAM to do

@strong{cat myinput | parallel --pipe -N 500 --round-robin -j500 parallel -j500 your_prg}
@node EXAMPLE: Working as mutex and counting semaphore
@chapter EXAMPLE: Working as mutex and counting semaphore
@anchor{EXAMPLE: Working as mutex and counting semaphore}

The command @strong{sem} is an alias for @strong{parallel --semaphore}.
@@ -2775,8 +2880,8 @@ same time:

  seq 3 | parallel sem --id mymutex sed -i -e 'i{}' myfile
@end verbatim
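
To illustrate the counting semaphore side (a sketch; @strong{gzip} is just a
stand-in workload): at most 2 of the spawned jobs run at the same time,
and @strong{sem --wait} blocks until all of them have finished:

@verbatim
  for f in *.log ; do
    sem -j2 --id gzipsem gzip "$f"
  done
  sem --wait --id gzipsem
@end verbatim
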
@node EXAMPLE: Start editor with filenames from stdin (standard input)
@chapter EXAMPLE: Start editor with filenames from stdin (standard input)
@anchor{EXAMPLE: Start editor with filenames from stdin (standard input)}

You can use GNU @strong{parallel} to start interactive programs like emacs or vi:
@@ -2787,8 +2892,8 @@ You can use GNU @strong{parallel} to start interactive programs like emacs or vi

If there are more files than will fit on a single command line, the
editor will be started again with the remaining files.
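
A sketch of the pattern, assuming the filenames arrive one per line on
stdin (@strong{--tty} gives the editor the terminal, @strong{-X} passes as many
filenames as fit on one command line):

@verbatim
  ls *.txt | parallel --tty -X vi
@end verbatim
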
@node EXAMPLE: Running sudo
@chapter EXAMPLE: Running sudo
@anchor{EXAMPLE: Running sudo}

@strong{sudo} requires a password to run a command as root. It caches the
access, so you only need to enter the password again if you have not
@@ -2816,8 +2921,8 @@ or:

This way you only have to enter the sudo password once.
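
A sketch of the idea: prime the sudo credential cache once in the
foreground, then let the parallel jobs reuse it:

@verbatim
  sudo -v
  parallel sudo echo ::: this works
@end verbatim
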
@node EXAMPLE: GNU Parallel as queue system/batch manager
@chapter EXAMPLE: GNU Parallel as queue system/batch manager
@anchor{EXAMPLE: GNU Parallel as queue system/batch manager}

GNU @strong{parallel} can work as a simple job queue system or batch manager.
The idea is to put the jobs into a file and have GNU @strong{parallel} read
@@ -2846,8 +2951,8 @@ E.g. if you have 10 jobslots then the output from the first completed
job will only be printed when job 11 has started, and the output of the
second completed job will only be printed when job 12 has started.
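
A minimal sketch of the queue idea (file and job names are placeholders):

@verbatim
  true >jobqueue; tail -n+0 -f jobqueue | parallel
  # From another shell, append jobs as they come in:
  echo 'sleep 2; echo job 1 done' >> jobqueue
@end verbatim
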
@node EXAMPLE: GNU Parallel as dir processor
@chapter EXAMPLE: GNU Parallel as dir processor
@anchor{EXAMPLE: GNU Parallel as dir processor}

If you have a dir in which users drop files that need to be processed
you can do this on GNU/Linux (If you know what @strong{inotifywait} is
@@ -2872,8 +2977,8 @@ files. Set up the dir processor as above and unpack into the dir.

Using GNU Parallel as dir processor has the same limitations as using
GNU Parallel as queue system/batch manager.
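
A sketch of the pattern (@strong{echo} stands in for the real processing
command and my_dir is a placeholder):

@verbatim
  inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f my_dir |
    parallel -u echo Processing {}
@end verbatim
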
@node QUOTING
@chapter QUOTING
@anchor{QUOTING}

GNU @strong{parallel} is very liberal in quoting. You only need to quote
characters that have special meaning in the shell:
@@ -3008,8 +3113,8 @@ Or for substituting output:

easier just to write a small script or a function (remember to
@strong{export -f} the function) and have GNU @strong{parallel} call that.
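
A sketch of the function route (this assumes @strong{bash}, since
@strong{export -f} is a bash feature):

@verbatim
  doit() {
    echo "Doing it for $1"
  }
  export -f doit
  parallel doit ::: 1 2 3
@end verbatim
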
@node LIST RUNNING JOBS
@chapter LIST RUNNING JOBS
@anchor{LIST RUNNING JOBS}

If you want a list of the jobs currently running you can run:
@@ -3018,8 +3123,8 @@ If you want a list of the jobs currently running you can run:

GNU @strong{parallel} will then print the currently running jobs on stderr
(standard error).

@node COMPLETE RUNNING JOBS BUT DO NOT START NEW JOBS
@chapter COMPLETE RUNNING JOBS BUT DO NOT START NEW JOBS
@anchor{COMPLETE RUNNING JOBS BUT DO NOT START NEW JOBS}

If you regret starting a lot of jobs you can simply break GNU @strong{parallel},
but if you want to make sure you do not have half-completed jobs you
@@ -3030,8 +3135,8 @@ should send the signal @strong{SIGTERM} to GNU @strong{parallel}:

This will tell GNU @strong{parallel} to not start any new jobs, but wait until
the currently running jobs are finished before exiting.
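
One way to deliver the signal (a sketch that assumes only one GNU
@strong{parallel} instance is running):

@verbatim
  killall -TERM parallel
@end verbatim
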
@node ENVIRONMENT VARIABLES
@chapter ENVIRONMENT VARIABLES
@anchor{ENVIRONMENT VARIABLES}

@table @asis
@item $PARALLEL_PID
@@ -3090,8 +3195,8 @@ must be one argument.

@end table

@node DEFAULT PROFILE (CONFIG FILE)
@chapter DEFAULT PROFILE (CONFIG FILE)
@anchor{DEFAULT PROFILE (CONFIG FILE)}

The file ~/.parallel/config (formerly known as .parallelrc) will be
read if it exists. Lines starting with '#' will be ignored. It can be
@@ -3102,8 +3207,8 @@ Options on the command line takes precedence over the environment
variable $PARALLEL which takes precedence over the file
~/.parallel/config.
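
A sketch of what such a config file could contain (the options shown are
just examples of defaults one might want):

@verbatim
  # Always show ETA and use 4 jobslots unless overridden
  --eta
  --jobs 4
@end verbatim
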
@node PROFILE FILES
@chapter PROFILE FILES
@anchor{PROFILE FILES}

If @strong{--profile} is set, GNU @strong{parallel} will read the profile from that file instead of
~/.parallel/config. You can have multiple @strong{--profiles}.
@@ -3140,8 +3245,8 @@ remote computers:

  parallel -J dist --trc {.}.bz2 bzip2 -9 ::: *
@end verbatim
@node EXIT STATUS
@chapter EXIT STATUS
@anchor{EXIT STATUS}

If @strong{--halt-on-error} 0 or not specified:
@@ -3170,15 +3275,32 @@ Other error.

If @strong{--halt-on-error} 1 or 2: Exit status of the failing job.
@node DIFFERENCES BETWEEN GNU Parallel AND ALTERNATIVES
@chapter DIFFERENCES BETWEEN GNU Parallel AND ALTERNATIVES
@anchor{DIFFERENCES BETWEEN GNU Parallel AND ALTERNATIVES}

There are a lot of programs with some of the functionality of GNU
@strong{parallel}. GNU @strong{parallel} strives to include the best of the
functionality without sacrificing ease of use.

@menu
* SUMMARY TABLE::
* DIFFERENCES BETWEEN xargs AND GNU Parallel::
* DIFFERENCES BETWEEN find -exec AND GNU Parallel::
* DIFFERENCES BETWEEN make -j AND GNU Parallel::
* DIFFERENCES BETWEEN ppss AND GNU Parallel::
* DIFFERENCES BETWEEN pexec AND GNU Parallel::
* DIFFERENCES BETWEEN xjobs AND GNU Parallel::
* DIFFERENCES BETWEEN prll AND GNU Parallel::
* DIFFERENCES BETWEEN dxargs AND GNU Parallel::
* DIFFERENCES BETWEEN mdm/middleman AND GNU Parallel::
* DIFFERENCES BETWEEN xapply AND GNU Parallel::
* DIFFERENCES BETWEEN paexec AND GNU Parallel::
* DIFFERENCES BETWEEN map AND GNU Parallel::
* DIFFERENCES BETWEEN ClusterSSH AND GNU Parallel::
@end menu

@node SUMMARY TABLE
@section SUMMARY TABLE
@anchor{SUMMARY TABLE}

The following features are in some of the comparable tools:
@@ -3308,8 +3430,8 @@ supports (See REPORTING BUGS).

ClusterSSH: TODO - Please file a bug-report if you know what features ClusterSSH
supports (See REPORTING BUGS).
@node DIFFERENCES BETWEEN xargs AND GNU Parallel
@section DIFFERENCES BETWEEN xargs AND GNU Parallel
@anchor{DIFFERENCES BETWEEN xargs AND GNU Parallel}

@strong{xargs} offers some of the same possibilities as GNU @strong{parallel}.
@@ -3384,8 +3506,8 @@ becomes (assuming you have 8 cores)

@strong{ls | xargs -d "\n" -P8 -I @{@} bash -c "echo @{@}; ls @{@}|wc"}
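
For comparison, a sketch of the same composed command with GNU
@strong{parallel} (no wrapping in @strong{bash -c} is needed):

@strong{ls | parallel "echo @{@}; ls @{@}|wc"}
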
@node DIFFERENCES BETWEEN find -exec AND GNU Parallel
@section DIFFERENCES BETWEEN find -exec AND GNU Parallel
@anchor{DIFFERENCES BETWEEN find -exec AND GNU Parallel}

@strong{find -exec} offers some of the same possibilities as GNU @strong{parallel}.
@@ -3393,8 +3515,8 @@ becomes (assuming you have 8 cores)

hosts or URLs) will require creating these inputs as files. @strong{find
-exec} has no support for running commands in parallel.
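
A sketch of the contrast (gzip is just an example command; -print0/-0
keep unusual filenames safe):

@verbatim
  find . -type f -exec gzip -9 {} \;              # one file at a time
  find . -type f -print0 | parallel -0 gzip -9    # several in parallel
@end verbatim
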
@node DIFFERENCES BETWEEN make -j AND GNU Parallel
@section DIFFERENCES BETWEEN make -j AND GNU Parallel
@anchor{DIFFERENCES BETWEEN make -j AND GNU Parallel}

@strong{make -j} can run jobs in parallel, but requires a crafted Makefile
to do this. That results in extra quoting to get filenames containing
@@ -3409,8 +3531,8 @@ this.

(Very early versions of GNU @strong{parallel} were coincidentally implemented
using @strong{make -j}).
@node DIFFERENCES BETWEEN ppss AND GNU Parallel
@section DIFFERENCES BETWEEN ppss AND GNU Parallel
@anchor{DIFFERENCES BETWEEN ppss AND GNU Parallel}

@strong{ppss} is also a tool for running jobs in parallel.
@@ -3437,8 +3559,12 @@ postprocessing if written using @strong{ppss}.

For remote systems PPSS requires 3 steps: config, deploy, and
start. GNU @strong{parallel} only requires one step.

@menu
* EXAMPLES FROM ppss MANUAL::
@end menu

@node EXAMPLES FROM ppss MANUAL
@subsection EXAMPLES FROM ppss MANUAL
@anchor{EXAMPLES FROM ppss MANUAL}

Here are the examples from @strong{ppss}'s manual page with the equivalent
using GNU @strong{parallel}:
@@ -3484,8 +3610,8 @@ using GNU @strong{parallel}:

@strong{9} killall -SIGUSR2 parallel
@node DIFFERENCES BETWEEN pexec AND GNU Parallel
@section DIFFERENCES BETWEEN pexec AND GNU Parallel
@anchor{DIFFERENCES BETWEEN pexec AND GNU Parallel}

@strong{pexec} is also a tool for running jobs in parallel.
@@ -3542,8 +3668,8 @@ faster as only one process will be either reading or writing:

@strong{8} ls *jpg | parallel -j8 'sem --id diskio cat @{@} | jpegtopnm |' \
  'pnmscale 0.5 | pnmtojpeg | sem --id diskio cat > th_@{@}'

@node DIFFERENCES BETWEEN xjobs AND GNU Parallel
@section DIFFERENCES BETWEEN xjobs AND GNU Parallel
@anchor{DIFFERENCES BETWEEN xjobs AND GNU Parallel}

@strong{xjobs} is also a tool for running jobs in parallel. It only supports
running jobs on your local computer.
@@ -3584,8 +3710,8 @@ cat /var/run/my_named_pipe | parallel &

echo unzip 1.zip >> /var/run/my_named_pipe;
echo tar cf /backup/myhome.tar /home/me >> /var/run/my_named_pipe

@node DIFFERENCES BETWEEN prll AND GNU Parallel
@section DIFFERENCES BETWEEN prll AND GNU Parallel
@anchor{DIFFERENCES BETWEEN prll AND GNU Parallel}

@strong{prll} is also a tool for running jobs in parallel. It does not
support running jobs on remote computers.
@@ -3607,8 +3733,8 @@ prll -s 'mogrify -flip $1' *.jpg

parallel mogrify -flip ::: *.jpg
@node DIFFERENCES BETWEEN dxargs AND GNU Parallel
@section DIFFERENCES BETWEEN dxargs AND GNU Parallel
@anchor{DIFFERENCES BETWEEN dxargs AND GNU Parallel}

@strong{dxargs} is also a tool for running jobs in parallel.
@@ -3616,8 +3742,8 @@ parallel mogrify -flip ::: *.jpg

MaxStartups. @strong{dxargs} is only built for running jobs remotely, but does not
support transferring of files.
@node DIFFERENCES BETWEEN mdm/middleman AND GNU Parallel
@section DIFFERENCES BETWEEN mdm/middleman AND GNU Parallel
@anchor{DIFFERENCES BETWEEN mdm/middleman AND GNU Parallel}

middleman (mdm) is also a tool for running jobs in parallel.
@@ -3630,8 +3756,8 @@ to GNU @strong{parallel}:

@strong{find dir -execdir sem cmd @{@} \;}

@node DIFFERENCES BETWEEN xapply AND GNU Parallel
@section DIFFERENCES BETWEEN xapply AND GNU Parallel
@anchor{DIFFERENCES BETWEEN xapply AND GNU Parallel}

@strong{xapply} can run jobs in parallel on the local computer.
@@ -3686,8 +3812,8 @@ using GNU @strong{parallel}:

@strong{11} parallel '[ -f @{@} ] && echo @{@}' < List | ...
@node DIFFERENCES BETWEEN paexec AND GNU Parallel
@section DIFFERENCES BETWEEN paexec AND GNU Parallel
@anchor{DIFFERENCES BETWEEN paexec AND GNU Parallel}

@strong{paexec} can run jobs in parallel on both the local and remote computers.
@@ -3743,8 +3869,8 @@ using GNU @strong{parallel}:

@end table
@node DIFFERENCES BETWEEN map AND GNU Parallel
@section DIFFERENCES BETWEEN map AND GNU Parallel
@anchor{DIFFERENCES BETWEEN map AND GNU Parallel}

@strong{map} sees it as a feature to have fewer features and in doing so it
also handles corner cases incorrectly. A lot of GNU @strong{parallel}'s code
@@ -3837,8 +3963,8 @@ delimiter (only field delimiter), logging of jobs run with possibility
to resume, keeping the output in the same order as input, --pipe
processing, and dynamic timeouts.
@node DIFFERENCES BETWEEN ClusterSSH AND GNU Parallel
@section DIFFERENCES BETWEEN ClusterSSH AND GNU Parallel
@anchor{DIFFERENCES BETWEEN ClusterSSH AND GNU Parallel}

ClusterSSH solves a different problem than GNU @strong{parallel}.
@@ -3857,11 +3983,18 @@ GNU @strong{parallel} can be used as a poor-man's version of ClusterSSH:

@strong{parallel --nonall -S server-a,server-b do_stuff foo bar}
@node BUGS
@chapter BUGS
@anchor{BUGS}

@menu
* Quoting of newline::
* Speed::
* --nice limits command length::
* Aliases and functions do not work::
@end menu

@node Quoting of newline
@section Quoting of newline
@anchor{Quoting of newline}

Because of the way newline is quoted this will not work:
@@ -3875,18 +4008,25 @@ echo 1,2,3 | parallel -vkd, "echo 'a'@{@}'b'"

echo 1,2,3 | parallel -vkd, "echo 'a'"@{@}"'b'"
@node Speed
@section Speed
@anchor{Speed}

@menu
* Startup::
* Job startup::
* SSH::
* Disk access::
@end menu

@node Startup
@subsection Startup
@anchor{Startup}

GNU @strong{parallel} is slow at starting up - around 250 ms. Half of the
startup time is spent finding the maximal length of a command
line. Setting @strong{-s} will remove this part of the startup time.
@node Job startup
@subsection Job startup
@anchor{Job startup}

Starting a job on the local machine takes around 3 ms. This can be a
big overhead if the job takes very few ms to run. Often you can group
@@ -3895,8 +4035,8 @@ significant.

Using @strong{--ungroup} the 3 ms can be lowered to around 2 ms.
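
A sketch of grouping small jobs: with @strong{-X} many arguments are passed
to each invocation, so the per-job overhead is paid far fewer times:

@verbatim
  seq 100000 | parallel -X echo
@end verbatim
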
@node SSH
@subsection SSH
@anchor{SSH}

When using multiple computers GNU @strong{parallel} opens @strong{ssh} connections
to them to figure out how many connections can be used reliably
@@ -3908,8 +4048,8 @@ If your jobs are short you may see that there are fewer jobs running
on the remote systems than expected. This is due to time spent logging
in and out. @strong{-M} may help here.
@node Disk access
@subsection Disk access
@anchor{Disk access}

A single disk can normally read data faster if it reads one file at a
time instead of reading a lot of files in parallel, as this will avoid
@@ -3936,16 +4076,16 @@ If the jobs are of the form read-compute-read-compute, it may be
faster to run more jobs in parallel than the system has CPUs, as some
of the jobs will be stuck waiting for disk access.
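
A sketch of over-subscribing the CPUs for such read-compute jobs
(@strong{zcat} and @strong{wc} are just stand-ins; 200% means twice as many
jobs as CPU cores):

@verbatim
  parallel -j200% 'zcat {} | wc -l' ::: *.gz
@end verbatim
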
@node --nice limits command length
@section --nice limits command length
@anchor{--nice limits command length}

The current implementation of @strong{--nice} is too pessimistic in the max
allowed command length. It only uses a little more than half of what
it could. This affects @strong{-X} and @strong{-m}. If this becomes a real problem for
you, file a bug-report.
@node Aliases and functions do not work
@section Aliases and functions do not work
@anchor{Aliases and functions do not work}

If you get:
@@ -3961,8 +4101,8 @@ need to @strong{export -f} the function first. An alias will, however, not
work (see why http://www.perlmonks.org/index.pl?node_id=484296), so
change your alias to a script.
@node REPORTING BUGS
@chapter REPORTING BUGS
@anchor{REPORTING BUGS}

Report bugs to <bug-parallel@@gnu.org> or
https://savannah.gnu.org/bugs/?func=additem&group=parallel
@@ -4008,8 +4148,8 @@ will put more burden on you and it is extra important you give any
information that helps. In general the problem will be fixed faster and
with less work for you if you can reproduce the error in a VirtualBox
virtual machine.
@node AUTHOR
@chapter AUTHOR
@anchor{AUTHOR}

When using GNU @strong{parallel} for a publication please cite:
@@ -4026,8 +4166,8 @@ and Free Software Foundation, Inc.

Parts of the manual concerning @strong{xargs} compatibility are inspired by
the manual of @strong{xargs} from GNU findutils 4.4.2.
@node LICENSE
@chapter LICENSE
@anchor{LICENSE}

Copyright (C) 2007,2008,2009,2010,2011,2012,2013 Free Software Foundation,
Inc.
@@ -4045,8 +4185,13 @@ GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.

@menu
* Documentation license I::
* Documentation license II::
@end menu

@node Documentation license I
@section Documentation license I
@anchor{Documentation license I}

Permission is granted to copy, distribute and/or modify this documentation
under the terms of the GNU Free Documentation License, Version 1.3 or
@@ -4054,8 +4199,8 @@ any later version published by the Free Software Foundation; with no
Invariant Sections, with no Front-Cover Texts, and with no Back-Cover
Texts. A copy of the license is included in the file fdl.txt.

@node Documentation license II
@section Documentation license II
@anchor{Documentation license II}

You are free:
@@ -4135,15 +4280,15 @@ license terms of this work.

A copy of the full license is included in the file as cc-by-sa.txt.
@node DEPENDENCIES
@chapter DEPENDENCIES
@anchor{DEPENDENCIES}

GNU @strong{parallel} uses Perl, and the Perl modules Getopt::Long,
IPC::Open3, Symbol, IO::File, POSIX, and File::Temp. For remote usage
it also uses rsync with ssh.

@node SEE ALSO
@chapter SEE ALSO
@anchor{SEE ALSO}

@strong{ssh}(1), @strong{rsync}(1), @strong{find}(1), @strong{xargs}(1), @strong{dirname}(1),
@strong{make}(1), @strong{pexec}(1), @strong{ppss}(1), @strong{xjobs}(1), @strong{prll}(1),
@@ -124,7 +124,7 @@
.\" ========================================================================
.\"
.IX Title "PARALLEL_TUTORIAL 1"
-.TH PARALLEL_TUTORIAL 1 "2014-01-25" "20140323" "parallel"
+.TH PARALLEL_TUTORIAL 1 "2014-01-25" "20140422" "parallel"
.\" For nroff, turn off justification. Always turn off hyphenation; it makes
.\" way too many mistakes in technical documents.
.if n .ad l
2	src/sql
@@ -556,7 +556,7 @@ $Global::Initfile && unlink $Global::Initfile;
exit ($err);

sub parse_options {
-    $Global::version = 20140323;
+    $Global::version = 20140422;
    $Global::progname = 'sql';

    # This must be done first as this may exec myself
BIN	src/sql.pdf
Binary file not shown.
75	src/sql.texi
@@ -8,13 +8,27 @@
@node Top
@top sql

@menu
* NAME::
* SYNOPSIS::
* DESCRIPTION::
* DBURL::
* EXAMPLES::
* REPORTING BUGS::
* AUTHOR::
* LICENSE::
* DEPENDENCIES::
* FILES::
* SEE ALSO::
@end menu

@node NAME
@chapter NAME
@anchor{NAME}

sql - execute a command on a database determined by a dburl

@node SYNOPSIS
@chapter SYNOPSIS
@anchor{SYNOPSIS}

@strong{sql} [options] @emph{dburl} [@emph{commands}]
@@ -22,8 +36,8 @@ sql - execute a command on a database determined by a dburl

@strong{#!/usr/bin/sql} @strong{--shebang} [options] @emph{dburl}

@node DESCRIPTION
@chapter DESCRIPTION
@anchor{DESCRIPTION}

GNU @strong{sql} aims to give a simple, unified interface for accessing
databases through all the different databases' command line
@@ -202,8 +216,8 @@ For this to work @strong{--shebang} or @strong{-Y} must be set as the first opti

@end table
@node DBURL
@chapter DBURL
@anchor{DBURL}

A DBURL has the following syntax:
  [sql:]vendor://
@@ -249,11 +263,23 @@ Example of aliases:

  :query sqlite:////tmp/db.sqlite?SELECT * FROM foo;
@end verbatim
@node EXAMPLES
@chapter EXAMPLES
@anchor{EXAMPLES}

@menu
* Get an interactive prompt::
* Run a query::
* Copy a PostgreSQL database::
* Empty all tables in a MySQL database::
* Drop all tables in a PostgreSQL database::
* Run as a script::
* Use --colsep to process multiple columns::
* Retry if the connection fails::
* Get info about the running database system::
@end menu
@node Get an interactive prompt
@section Get an interactive prompt
@anchor{Get an interactive prompt}

The most basic use of GNU @strong{sql} is to get an interactive prompt:
@@ -263,8 +289,8 @@ If you have setup an alias you can do:

@strong{sql :myora}

@node Run a query
@section Run a query
@anchor{Run a query}

To run a query directly from the command line:
@@ -279,30 +305,30 @@ Or this:

@strong{sql :myora "SELECT * FROM foo;\nSELECT * FROM bar;"}

@node Copy a PostgreSQL database
@section Copy a PostgreSQL database
@anchor{Copy a PostgreSQL database}

To copy a PostgreSQL database use pg_dump to generate the dump and GNU
@strong{sql} to import it:

@strong{pg_dump pg_database | sql pg://scott:tiger@@pg.example.com/pgdb}
@node Empty all tables in a MySQL database
@section Empty all tables in a MySQL database
@anchor{Empty all tables in a MySQL database}

Using GNU @strong{parallel} it is easy to empty all tables without dropping them:

@strong{sql -n mysql:/// 'show tables' | parallel sql mysql:/// DELETE FROM @{@};}

@node Drop all tables in a PostgreSQL database
@section Drop all tables in a PostgreSQL database
@anchor{Drop all tables in a PostgreSQL database}

To drop all tables in a PostgreSQL database do:

@strong{sql -n pg:/// '\dt' | parallel --colsep '\|' -r sql pg:/// DROP TABLE @{2@};}
@node Run as a script
@section Run as a script
@anchor{Run as a script}

Instead of doing:
@@ -319,23 +345,23 @@ Then do:

@strong{chmod +x demosql; ./demosql}
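
A sketch of what the demosql script could look like (the dburl is a
placeholder; @strong{-Y} is the short form of @strong{--shebang}):

@verbatim
  #!/usr/bin/sql -Y mysql:///

  SELECT * FROM foo;
  SELECT * FROM bar;
@end verbatim
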
@node Use --colsep to process multiple columns
@section Use --colsep to process multiple columns
@anchor{Use --colsep to process multiple columns}

Use GNU @strong{parallel}'s @strong{--colsep} to separate columns:

@strong{sql -s '\t' :myalias 'SELECT * FROM foo;' | parallel --colsep '\t' do_stuff @{4@} @{1@}}

@node Retry if the connection fails
@section Retry if the connection fails
@anchor{Retry if the connection fails}

If the access to the database fails occasionally @strong{--retries} can help
make sure the query succeeds:

@strong{sql --retries 5 :myalias 'SELECT * FROM really_big_foo;'}
@node Get info about the running database system
@section Get info about the running database system
@anchor{Get info about the running database system}

Show how big the database is:
@@ -353,13 +379,13 @@ List the running processes:

@strong{sql --show-processlist :myalias}
@node REPORTING BUGS
@chapter REPORTING BUGS
@anchor{REPORTING BUGS}

GNU @strong{sql} is part of GNU @strong{parallel}. Report bugs to <bug-parallel@@gnu.org>.

@node AUTHOR
@chapter AUTHOR
@anchor{AUTHOR}

When using GNU @strong{sql} for a publication please cite:
@@ -371,8 +397,8 @@ Copyright (C) 2008,2009,2010 Ole Tange http://ole.tange.dk
Copyright (C) 2010,2011 Ole Tange, http://ole.tange.dk and Free
Software Foundation, Inc.
@node LICENSE
@chapter LICENSE
@anchor{LICENSE}

Copyright (C) 2007,2008,2009,2010,2011 Free Software Foundation, Inc.
@@ -389,8 +415,8 @@ GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.

@menu
* Documentation license I::
* Documentation license II::
@end menu

@node Documentation license I
@section Documentation license I
@anchor{Documentation license I}

Permission is granted to copy, distribute and/or modify this documentation
under the terms of the GNU Free Documentation License, Version 1.3 or
@@ -398,8 +429,8 @@ any later version published by the Free Software Foundation; with no
Invariant Sections, with no Front-Cover Texts, and with no Back-Cover
Texts. A copy of the license is included in the file fdl.txt.

@node Documentation license II
@section Documentation license II
@anchor{Documentation license II}

You are free:
@@ -476,8 +507,8 @@ license terms of this work.

A copy of the full license is included in the file as cc-by-sa.txt.
@node DEPENDENCIES
@chapter DEPENDENCIES
@anchor{DEPENDENCIES}

GNU @strong{sql} uses Perl. If @strong{mysql} is installed, MySQL dburls will
work. If @strong{psql} is installed, PostgreSQL dburls will work. If
@@ -486,15 +517,15 @@ installed, SQLite3 dburls will work. If @strong{sqlplus} is installed,
Oracle dburls will work. If @strong{rlwrap} is installed, GNU @strong{sql} will
have a command history for Oracle.

@node FILES
@chapter FILES
@anchor{FILES}

~/.sql/aliases - user's own aliases with DBURLs

/etc/sql/aliases - common aliases with DBURLs

@node SEE ALSO
@chapter SEE ALSO
@anchor{SEE ALSO}

@strong{mysql}(1), @strong{psql}(1), @strong{rlwrap}(1), @strong{sqlite}(1), @strong{sqlite3}(1), @strong{sqlplus}(1)
@@ -273,7 +273,7 @@ parallel -k -a <(printf 'def\tabc\njkl\tghi') --colsep '\t' echo {2} {1}

EOF

-echo '### Test of -j filename with file content changing';
+echo '### Test of -j filename with file content changing (missing -k is correct)';
echo 1 >/tmp/jobs_to_run2;
(sleep 3; echo 10 >/tmp/jobs_to_run2) &
parallel -j /tmp/jobs_to_run2 -v sleep {} ::: 3.3 1.5 1.5 1.5 1.5 1 1 1 1 1 1 1 1 1 1 1
@@ -4,6 +4,7 @@ SERVER1=parallel-server3
SERVER2=lo
SSHLOGIN1=parallel@parallel-server3
SSHLOGIN2=parallel@lo
+SSHLOGIN3=parallel@parallel-server2

echo '### Test use special ssh'
echo 'TODO test ssh with > 9 simultaneous'
@@ -12,7 +13,7 @@ echo 'ssh "$@"; echo "$@" >>/tmp/myssh2-run' >/tmp/myssh2
chmod 755 /tmp/myssh1 /tmp/myssh2
seq 1 100 | parallel --sshdelay 0.05 --sshlogin "/tmp/myssh1 $SSHLOGIN1,/tmp/myssh2 $SSHLOGIN2" -k echo

-cat <<'EOF' | sed -e s/\$SERVER1/$SERVER1/\;s/\$SERVER2/$SERVER2/\;s/\$SSHLOGIN1/$SSHLOGIN1/ | parallel -j2 -k -L1
+cat <<'EOF' | sed -e s/\$SERVER1/$SERVER1/\;s/\$SERVER2/$SERVER2/\;s/\$SSHLOGIN1/$SSHLOGIN1/\;s/\$SSHLOGIN2/$SSHLOGIN2/\;s/\$SSHLOGIN3/$SSHLOGIN3/ | parallel -j2 -k -L1
echo '### --filter-hosts - OK, non-such-user, connection refused, wrong host'
parallel --nonall --filter-hosts -S localhost,NoUser@localhost,154.54.72.206,"ssh 5.5.5.5" hostname
@@ -21,9 +22,11 @@ echo '### test --workdir . in $HOME'
echo OK > testfile && parallel --workdir . --transfer -S $SSHLOGIN1 cat {} ::: testfile

echo '### test --timeout --retries'
-parallel -j0 --timeout 5 --retries 3 -k ssh {} echo {} ::: 192.168.1.197 8.8.8.8 n m o c f w
+parallel -j0 --timeout 5 --retries 3 -k ssh {} echo {} ::: 192.168.1.197 8.8.8.8 $SSHLOGIN1 $SSHLOGIN2 $SSHLOGIN3

-echo '### test --filter-hosts with server w/o ssh, non-existing server, and 5 proxied through the same'
+echo '### test --filter-hosts with server w/o ssh, non-existing server'
-parallel -S 192.168.1.197,8.8.8.8,n,m,o,c,f,w --filter-hosts --nonall -k --tag echo
+parallel -S 192.168.1.197,8.8.8.8,$SSHLOGIN1,$SSHLOGIN2,$SSHLOGIN3 --filter-hosts --nonall -k --tag echo

+echo '### Missing: test --filter-hosts proxied through the one host'

EOF
@@ -354,7 +354,7 @@ abc def
ghi jkl
abc def
ghi jkl
-### Test of -j filename with file content changing
+### Test of -j filename with file content changing (missing -k is correct)
sleep 3.3
sleep 1
sleep 1
@@ -105,16 +105,11 @@ aspire
### test --workdir . in $HOME
OK
### test --timeout --retries
-n
-m
-o
-c
-f
-w
-### test --filter-hosts with server w/o ssh, non-existing server, and 5 proxied through the same
-c
-f
-m
-n
-o
-w
+parallel@parallel-server3
+parallel@lo
+parallel@parallel-server2
+### test --filter-hosts with server w/o ssh, non-existing server
+parallel@lo
+parallel@parallel-server2
+parallel@parallel-server3
+### Missing: test --filter-hosts proxied through the one host