Released as 20191022 ('DrivingIT')

This commit is contained in:
Ole Tange 2019-10-21 21:18:32 +02:00
parent 73f554ad8f
commit 1ba05d3bb7
21 changed files with 166 additions and 77 deletions

35
NEWS
View file

@@ -1,3 +1,38 @@
+20191022
+
+* --tee will use --output-error=warn-nopipe if supported by tee.
+
+* GNU Parallel will be presented at Driving IT 2019:
+  https://ida.dk/arrangementer-og-kurser/konferencer/driving-it/tools
+
+* UMN Duluth: Job parallelization with task arrays and GNU parallel
+  https://www.msi.umn.edu/tutorials/umn-duluth-job-parallelization-task-arrays-and-gnu-parallel
+
+* Genome updater uses GNU Parallel
+  https://github.com/pirovc/genome_updater
+
+* Using GNU-Parallel for bioinformatics
+  https://www.danielecook.com/using-gnu-parallel-for-bioinformatics/
+
+* Speeding up PostgreSQL ETL pipeline with the help of GODS
+  https://cfengine.com/company/blog-detail/speeding-up-postgresql-etl-pipeline-with-the-help-of-gods/
+
+* Runing linux commands in parallel
+  https://dev.to/voyeg3r/runing-linux-commands-in-parallel-4ff8
+
+* Research Computing University of Colorado Boulder contains an intro
+  to GNU Parallel
+  https://readthedocs.org/projects/curc/downloads/pdf/latest/
+
+* How to run commands in parallel in the shell using Parallel
+  https://www.myfreax.com/gnu-parallel/
+
+* How to test drive Amazon Elastic File System
+  https://aws.amazon.com/cn/blogs/china/how-to-test-drive-amazon-elastic-file-system/
+
+* Bug fixes and man page updates.
 20190922
 * --nice is now inherited by the nice level that GNU Parallel is
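The --tee entry above relies on a runtime probe rather than a version check. A standalone sketch of the same feature test (whether it succeeds depends on the coreutils version installed; the echoed messages are illustrative, not GNU Parallel's own output):

```shell
# Probe whether the local tee supports --output-error=warn-nopipe,
# mirroring the test GNU Parallel runs before using it with --tee.
if echo | tee --output-error=warn-nopipe /dev/null >/dev/null 2>/dev/null; then
    echo "tee supports --output-error=warn-nopipe"
else
    echo "tee lacks --output-error=warn-nopipe"
fi
```

With warn-nopipe, tee keeps writing to its remaining outputs when one consumer closes its pipe, instead of dying, which is what --tee needs when some jobs finish early.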

20
README
View file

@@ -54,11 +54,11 @@ document.
 Full installation of GNU Parallel is as simple as:
-wget https://ftpmirror.gnu.org/parallel/parallel-20190922.tar.bz2
-wget https://ftpmirror.gnu.org/parallel/parallel-20190922.tar.bz2.sig
-gpg parallel-20190922.tar.bz2.sig
-bzip2 -dc parallel-20190922.tar.bz2 | tar xvf -
-cd parallel-20190922
+wget https://ftpmirror.gnu.org/parallel/parallel-20191022.tar.bz2
+wget https://ftpmirror.gnu.org/parallel/parallel-20191022.tar.bz2.sig
+gpg parallel-20191022.tar.bz2.sig
+bzip2 -dc parallel-20191022.tar.bz2 | tar xvf -
+cd parallel-20191022
 ./configure && make && sudo make install
@@ -67,11 +67,11 @@ Full installation of GNU Parallel is as simple as:
 If you are not root you can add ~/bin to your path and install in
 ~/bin and ~/share:
-wget https://ftpmirror.gnu.org/parallel/parallel-20190922.tar.bz2
-wget https://ftpmirror.gnu.org/parallel/parallel-20190922.tar.bz2.sig
-gpg parallel-20190922.tar.bz2.sig
-bzip2 -dc parallel-20190922.tar.bz2 | tar xvf -
-cd parallel-20190922
+wget https://ftpmirror.gnu.org/parallel/parallel-20191022.tar.bz2
+wget https://ftpmirror.gnu.org/parallel/parallel-20191022.tar.bz2.sig
+gpg parallel-20191022.tar.bz2.sig
+bzip2 -dc parallel-20191022.tar.bz2 | tar xvf -
+cd parallel-20191022
 ./configure --prefix=$HOME && make && make install
 Or if your system lacks 'make' you can simply copy src/parallel

20
configure vendored
View file

@@ -1,6 +1,6 @@
 #! /bin/sh
 # Guess values for system-dependent variables and create Makefiles.
-# Generated by GNU Autoconf 2.69 for parallel 20190922.
+# Generated by GNU Autoconf 2.69 for parallel 20191022.
 #
 # Report bugs to <bug-parallel@gnu.org>.
 #
@@ -579,8 +579,8 @@ MAKEFLAGS=
 # Identity of this package.
 PACKAGE_NAME='parallel'
 PACKAGE_TARNAME='parallel'
-PACKAGE_VERSION='20190922'
-PACKAGE_STRING='parallel 20190922'
+PACKAGE_VERSION='20191022'
+PACKAGE_STRING='parallel 20191022'
 PACKAGE_BUGREPORT='bug-parallel@gnu.org'
 PACKAGE_URL=''
@@ -1214,7 +1214,7 @@ if test "$ac_init_help" = "long"; then
 # Omit some internal or obsolete options to make the list less imposing.
 # This message is too long to be a string in the A/UX 3.1 sh.
 cat <<_ACEOF
-\`configure' configures parallel 20190922 to adapt to many kinds of systems.
+\`configure' configures parallel 20191022 to adapt to many kinds of systems.
 Usage: $0 [OPTION]... [VAR=VALUE]...
@@ -1281,7 +1281,7 @@ fi
 if test -n "$ac_init_help"; then
 case $ac_init_help in
-short | recursive ) echo "Configuration of parallel 20190922:";;
+short | recursive ) echo "Configuration of parallel 20191022:";;
 esac
 cat <<\_ACEOF
@@ -1357,7 +1357,7 @@ fi
 test -n "$ac_init_help" && exit $ac_status
 if $ac_init_version; then
 cat <<\_ACEOF
-parallel configure 20190922
+parallel configure 20191022
 generated by GNU Autoconf 2.69
 Copyright (C) 2012 Free Software Foundation, Inc.
@@ -1374,7 +1374,7 @@ cat >config.log <<_ACEOF
 This file contains any messages produced by compilers while
 running configure, to aid debugging if configure makes a mistake.
-It was created by parallel $as_me 20190922, which was
+It was created by parallel $as_me 20191022, which was
 generated by GNU Autoconf 2.69. Invocation command line was
 $ $0 $@
@@ -2237,7 +2237,7 @@ fi
 # Define the identity of the package.
 PACKAGE='parallel'
-VERSION='20190922'
+VERSION='20191022'
 cat >>confdefs.h <<_ACEOF
@@ -2880,7 +2880,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
 # report actual input values of CONFIG_FILES etc. instead of their
 # values after options handling.
 ac_log="
-This file was extended by parallel $as_me 20190922, which was
+This file was extended by parallel $as_me 20191022, which was
 generated by GNU Autoconf 2.69. Invocation command line was
 CONFIG_FILES = $CONFIG_FILES
@@ -2942,7 +2942,7 @@ _ACEOF
 cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
 ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
 ac_cs_version="\\
-parallel config.status 20190922
+parallel config.status 20191022
 configured by $0, generated by GNU Autoconf 2.69,
 with options \\"\$ac_cs_config\\"

View file

@@ -1,4 +1,4 @@
-AC_INIT([parallel], [20190922], [bug-parallel@gnu.org])
+AC_INIT([parallel], [20191022], [bug-parallel@gnu.org])
 AM_INIT_AUTOMAKE([-Wall -Werror foreign])
 AC_CONFIG_HEADERS([config.h])
 AC_CONFIG_FILES([

View file

@@ -1,4 +1,4 @@
-== Citation notice FAQ ==
+== Citation FAQ ==
 > Why does GNU Parallel show a citation notice?
@@ -9,7 +9,7 @@ maintaining GNU Parallel. This is much easier to get if GNU Parallel
 is cited in scientific journals, and history has shown that
 researchers forget to do this if they are not reminded explicitly.
-It is therefore important for the long term survival of GNU Parallel
+It is therefore important for the long-term survival of GNU Parallel
 that it is cited. The citation notice makes users aware of this.
 See also: https://lists.gnu.org/archive/html/parallel/2013-11/msg00006.html
@@ -139,8 +139,8 @@ should cite, over having many users, who do not know they should cite.
 If the goal had been to get more users, then the license would have
 been public domain.
-This is because a long term survival with funding is more important
-than short term gains in popularity that can be achieved by being
+This is because a long-term survival with funding is more important
+than short-term gains in popularity that can be achieved by being
 distributed as part of a distribution.

View file

@@ -58,6 +58,9 @@ GNU parallel has helped me kill a Hadoop cluster before.
 === Used ===
+I've said it before: The command line program GNU Parallel is a godsend.
+-- Jo Chr. Oterhals @oterhals
 IMHO, SQLite and GNU Parallel are among the world's great software.
 -- singe@reddit

View file

@@ -221,21 +221,31 @@ See https://www.gnu.org/software/parallel/10-years-anniversary.html
 Quote of the month:
-<<>>
+I've said it before: The command line program GNU Parallel is a godsend.
+-- Jo Chr. Oterhals @oterhals@twitter
 New in this release:
-Uses GNU Parallel https://github.com/pirovc/genome_updater
-Using GNU-Parallel for bioinformatics https://www.danielecook.com/using-gnu-parallel-for-bioinformatics/
-Speeding up PostgreSQL ETL pipeline with the help of GODS https://cfengine.com/company/blog-detail/speeding-up-postgresql-etl-pipeline-with-the-help-of-gods/
-https://readthedocs.org/projects/curc/downloads/pdf/latest/
+* --tee will use --output-error=warn-nopipe if supported by tee.
+* GNU Parallel will be presented at Driving IT 2019: https://ida.dk/arrangementer-og-kurser/konferencer/driving-it/tools
+* UMN Duluth: Job parallelization with task arrays and GNU parallel https://www.msi.umn.edu/tutorials/umn-duluth-job-parallelization-task-arrays-and-gnu-parallel
+* Genome updater uses GNU Parallel https://github.com/pirovc/genome_updater
+* Using GNU-Parallel for bioinformatics https://www.danielecook.com/using-gnu-parallel-for-bioinformatics/
+* Speeding up PostgreSQL ETL pipeline with the help of GODS https://cfengine.com/company/blog-detail/speeding-up-postgresql-etl-pipeline-with-the-help-of-gods/
+* Runing linux commands in parallel https://dev.to/voyeg3r/runing-linux-commands-in-parallel-4ff8
+* Research Computing University of Colorado Boulder contains an intro to GNU Parallel https://readthedocs.org/projects/curc/downloads/pdf/latest/
 * How to run commands in parallel in the shell using Parallel https://www.myfreax.com/gnu-parallel/
+* How to test drive Amazon Elastic File System https://aws.amazon.com/cn/blogs/china/how-to-test-drive-amazon-elastic-file-system/
 * Bug fixes and man page updates.
 Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

View file

@@ -1,6 +1,6 @@
-<directory name="parallel" rev="243" srcmd5="ba6c3e7fc48a6e47a37036e6b9adfd07" vrev="1">
-<entry md5="7612849c980a824d4fa53fbcea70d38d" mtime="1569136034" name="parallel-20190922.tar.bz2" size="2075216" />
-<entry md5="af96bbd6ff8e24206dd266138a970622" mtime="1569136034" name="parallel.spec" size="4751" />
-<entry md5="0ea55cca2fd79fc1fb3be0d64468a921" mtime="1569136035" name="parallel_20190922.dsc" size="556" />
-<entry md5="0dad22203209974d6a2d390f5e519008" mtime="1569136035" name="parallel_20190922.tar.gz" size="2252973" />
+<directory name="parallel" rev="244" srcmd5="0bc3563ade8042405b36915c03015124" vrev="1">
+<entry md5="457471b4306f1a2b356691444247a717" mtime="1571685322" name="parallel-20191022.tar.bz2" size="2080451" />
+<entry md5="c8e6f66a12226dfda64fb14b8cdc7159" mtime="1571685322" name="parallel.spec" size="4751" />
+<entry md5="04f57f67c60bc39a33adbe69b11cb257" mtime="1571685322" name="parallel_20191022.dsc" size="556" />
+<entry md5="5a61e4a88a27b355ce12f58023d8a26a" mtime="1571685323" name="parallel_20191022.tar.gz" size="2257704" />
 </directory>

View file

@@ -1,7 +1,7 @@
 Summary: Shell tool for executing jobs in parallel
 Name: parallel
-Version: 20190922
+Version: 20191022
 Release: 1.3
 License: GPL-3.0-or-later
 Group: Productivity/File utilities

View file

@@ -1,7 +1,7 @@
 Summary: Shell tool for executing jobs in parallel
 Name: parallel
-Version: 20190922
+Version: 20191022
 Release: 1.3
 License: GPL-3.0-or-later
 Group: Productivity/File utilities

View file

@@ -23,7 +23,7 @@
 use strict;
 use Getopt::Long;
 $Global::progname="niceload";
-$Global::version = 20190922;
+$Global::version = 20191022;
 Getopt::Long::Configure("bundling","require_order");
 get_options_from_array(\@ARGV) || die_usage();
 if($opt::version) {

View file

@@ -161,7 +161,8 @@ sub pipepart_setup() {
 # Set --blocksize = size / no of proc / (- $blocksize)
 $Global::dummy_jobs = 1;
 $Global::blocksize = 1 +
-    int($size / max_jobs_running() / -$opt::blocksize);
+    int($size / max_jobs_running() /
+        -multiply_binary_prefix($opt::blocksize));
 }
 @Global::cat_prepends = map { pipe_part_files($_) } @opt::a;
 # Unget the empty arg as many times as there are parts
@@ -188,6 +189,7 @@ sub pipe_tee_setup() {
 # Test if tee supports --output-error=warn-nopipe
 `echo | tee --output-error=warn-nopipe /dev/null >/dev/null 2>/dev/null`;
 my $opt = $? ? "" : "--output-error=warn-nopipe";
+::debug("init","tee $opt");
 # Let tee inherit our stdin
 # and redirect stdout to null
 open STDOUT, ">","/dev/null";
@@ -502,7 +504,7 @@ sub pipe_shard_setup() {
 } else {
 $script = sharder_script();
 }
 # cat foo | sharder sep col fifo1 fifo2 fifo3 ... fifoN
 if($shardbin =~ /^[a-z_][a-z_0-9]*(\s|$)/i) {
@@ -635,7 +637,10 @@ sub find_split_positions($$$) {
 push(@pos,$pos);
 } else {
 # Seek to the block start
-seek($fh, $pos, 0) || die;
+if(not seek($fh, $pos, 0)) {
+    ::error("Cannot seek to $pos in $file");
+    exit(255);
+}
 while(read($fh,substr($buf,length $buf,0),$dd_block_size)) {
 if($opt::regexp) {
 # If match /$recend$recstart/ => Record position
@@ -2066,7 +2071,7 @@ sub check_invalid_option_combinations() {
 sub init_globals() {
 # Defaults:
-$Global::version = 20190922;
+$Global::version = 20191022;
 $Global::progname = 'parallel';
 $::name = "GNU Parallel";
 $Global::infinity = 2**31;
@@ -10383,7 +10388,7 @@ sub push($) {
 push @{$self->{'arg_list'}}, $record;
 # Make @arg available for {= =}
 *Arg::arg = $self->{'arg_list_flat_orig'};
 my $quote_arg = ($Global::quote_replace and not $Global::quoting);
 for my $perlexpr (keys %{$self->{'replacecount'}}) {
 if($perlexpr =~ /^(\d+) /) {

View file

@@ -105,7 +105,7 @@ B<Bash, Csh, or Tcsh aliases>: Use B<env_parallel>.
 B<Zsh, Fish, Ksh, and Pdksh functions and aliases>: Use B<env_parallel>.
-=item B<{}> (beta testing)
+=item B<{}>
 Input line. This replacement string will be replaced by a full line
 read from the input source. The input source is normally stdin
@@ -279,7 +279,6 @@ perl quote a string
 =item Z<> B<uq()> (or B<uq>)
-(beta testing)
 do not quote current replacement string
 =item Z<> B<total_jobs()>
@@ -1244,9 +1243,9 @@ Similar to B<--memfree>.
 =back
-=item B<--line-buffer> (beta testing)
+=item B<--line-buffer>
-=item B<--lb> (beta testing)
+=item B<--lb>
 Buffer output on line basis. B<--group> will keep the output together
 for a whole job. B<--ungroup> allows output to mixup with half a line
@@ -1574,20 +1573,20 @@ on remote computers).
 Print the number of physical CPU cores and exit.
-=item B<--number-of-cores> (beta testing)
+=item B<--number-of-cores>
 Print the number of physical CPU cores and exit (used by GNU B<parallel> itself
 to determine the number of physical CPU cores on remote computers).
-=item B<--number-of-sockets> (beta testing)
+=item B<--number-of-sockets>
 Print the number of filled CPU sockets and exit (used by GNU
 B<parallel> itself to determine the number of filled CPU sockets on
 remote computers).
-=item B<--number-of-threads> (beta testing)
+=item B<--number-of-threads>
 Print the number of hyperthreaded CPU cores and exit (used by GNU
 B<parallel> itself to determine the number of hyperthreaded CPU cores
@@ -1600,7 +1599,7 @@ Overrides an earlier B<--keep-order> (e.g. if set in
 B<~/.parallel/config>).
-=item B<--nice> I<niceness> (alpha testing)
+=item B<--nice> I<niceness> (beta testing)
 Run the command at this niceness.
@@ -1638,9 +1637,9 @@ B<,,>:
 See also: B<--rpl> B<{= perl expression =}>
-=item B<--profile> I<profilename> (beta testing)
+=item B<--profile> I<profilename>
-=item B<-J> I<profilename> (beta testing)
+=item B<-J> I<profilename>
 Use profile I<profilename> for options. This is useful if you want to
 have multiple profiles. You could have one profile for running jobs in
@@ -2192,7 +2191,7 @@ Only supported in B<Ash, Bash, Dash, Ksh, Sh, and Zsh>.
 See also B<--env>, B<--record-env>.
-=item B<--shard> I<shardexpr> (alpha testing)
+=item B<--shard> I<shardexpr> (beta testing)
 Use I<shardexpr> as shard key and shard input to the jobs.
@@ -2518,7 +2517,7 @@ to GNU B<parallel> giving each child its own process group, which is
 then killed. Process groups are dependent on the tty.
-=item B<--tag> (beta testing)
+=item B<--tag>
 Tag lines with arguments. Each output line will be prepended with the
 arguments and TAB (\t). When combined with B<--onall> or B<--nonall>
@@ -2527,7 +2526,7 @@ the lines will be prepended with the sshlogin instead.
 B<--tag> is ignored when using B<-u>.
-=item B<--tagstring> I<str> (beta testing)
+=item B<--tagstring> I<str>
 Tag lines with a string. Each output line will be prepended with
 I<str> and TAB (\t). I<str> can contain replacement strings such as
@@ -4781,7 +4780,7 @@ B<--ssh>. It can also be set on a per server basis (see
 B<--sshlogin>).
-=item $PARALLEL_SSHLOGIN (beta testing)
+=item $PARALLEL_SSHLOGIN
 The environment variable $PARALLEL_SSHLOGIN is set by GNU B<parallel>
 and is visible to the jobs started from GNU B<parallel>. The value is

View file

@@ -2307,6 +2307,29 @@ https://github.com/flesler/parallel
 https://github.com/Julian/Verge
+https://github.com/ExpectationMax/simple_gpu_scheduler
+simple_gpu_scheduler --gpus 0 1 2 < gpu_commands.txt
+parallel -j3 --shuf CUDA_VISIBLE_DEVICES='{=1 $_=slot()-1 =} {=uq;=}' < gpu_commands.txt
+simple_hypersearch "python3 train_dnn.py --lr {lr} --batch_size {bs}" -p lr 0.001 0.0005 0.0001 -p bs 32 64 128 | simple_gpu_scheduler --gpus 0,1,2
+parallel --header : --shuf -j3 -v CUDA_VISIBLE_DEVICES='{=1 $_=slot()-1 =}' python3 train_dnn.py --lr {lr} --batch_size {bs} ::: lr 0.001 0.0005 0.0001 ::: bs 32 64 128
+simple_hypersearch "python3 train_dnn.py --lr {lr} --batch_size {bs}" --n-samples 5 -p lr 0.001 0.0005 0.0001 -p bs 32 64 128 | simple_gpu_scheduler --gpus 0,1,2
+parallel --header : --shuf CUDA_VISIBLE_DEVICES='{=1 $_=slot()-1; seq() > 5 and skip() =}' python3 train_dnn.py --lr {lr} --batch_size {bs} ::: lr 0.001 0.0005 0.0001 ::: bs 32 64 128
+touch gpu.queue
+tail -f -n 0 gpu.queue | simple_gpu_scheduler --gpus 0,1,2 &
+echo "my_command_with | and stuff > logfile" >> gpu.queue
+touch gpu.queue
+tail -f -n 0 gpu.queue | parallel -j3 CUDA_VISIBLE_DEVICES='{=1 $_=slot()-1 =} {=uq;=}' &
+# Needed to fill job slots once
+seq 3 | parallel echo true >> gpu.queue
+# Add jobs
+echo "my_command_with | and stuff > logfile" >> gpu.queue
+# Needed to flush output from completed jobs
+seq 3 | parallel echo true >> gpu.queue
 =head1 TESTING OTHER TOOLS
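The simple_gpu_scheduler comparison hinges on the replacement string `{=1 $_=slot()-1 =}`: jobslots are numbered 1..N while GPU ids run 0..N-1, so each jobslot is pinned to one GPU. A plain-shell sketch of that mapping (the loop stands in for three concurrent jobslots; no GPUs or GNU parallel are needed to see the arithmetic):

```shell
# Emulate what CUDA_VISIBLE_DEVICES='{=1 $_=slot()-1 =}' computes per jobslot:
# slot() returns the jobslot number (1..N); GPU ids are 0..N-1.
for slot in 1 2 3; do
    echo "jobslot $slot -> CUDA_VISIBLE_DEVICES=$((slot - 1))"
done
```

Because slot() is stable for the lifetime of a jobslot, every job run in that slot inherits the same GPU, which is what makes the tail -f queue pattern above safe with -j3.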

View file

@@ -574,7 +574,7 @@ $Global::Initfile && unlink $Global::Initfile;
 exit ($err);
 sub parse_options {
-$Global::version = 20190922;
+$Global::version = 20191022;
 $Global::progname = 'sql';
 # This must be done first as this may exec myself

View file

@@ -2165,6 +2165,14 @@ par_null_resume() {
 rm "$log"
 }
+par_block_negative_prefix() {
+    tmp=`mktemp`
+    seq 100000 > $tmp
+    echo '### This should generate 10*2 jobs'
+    parallel -j2 -a $tmp --pipepart --block -0.01k -k md5sum | wc
+    rm $tmp
+}
 export -f $(compgen -A function | grep par_)
 compgen -A function | grep par_ | LC_ALL=C sort |
 parallel -j6 --tag -k --joblog /tmp/jl-`basename $0` '{} 2>&1'
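The --block -0.01k in par_block_negative_prefix exercises the negative-blocksize path from pipepart_setup: a negative --block means "split each jobslot's share into roughly that many blocks" after binary-prefix expansion (0.01k = 0.01 * 1024 = 10.24). A standalone sketch of the arithmetic; the file size is an assumption (bytes written by `seq 100000 > file`):

```shell
# blocksize = 1 + int(size / jobslots / multiply_binary_prefix(0.01k))
# i.e. ~10 blocks per jobslot, so -j2 yields ~10*2 jobs.
size=588895   # assumed: bytes in the file produced by 'seq 100000 > file'
jobs=2        # -j2
block=10.24   # 0.01k after binary-prefix expansion
awk -v s="$size" -v j="$jobs" -v b="$block" \
    'BEGIN { printf "blocksize=%d\n", 1 + int(s/j/b) }'
# -> blocksize=28755
```

With a ~28755-byte block over a 588895-byte file, each of the 2 jobslots processes about 10 blocks, matching the test's expected "10*2 jobs".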

View file

@@ -31,8 +31,8 @@ par_memory_leak() {
 export -f a_run
 echo "### Test for memory leaks"
 echo "Of 100 runs of 1 job none should be bigger than a 3000 job run"
-small_max=$(seq 100 | parallel a_run 1 | jq -s max)
-big=$(a_run 3000)
+. `which env_parallel.bash`
+parset small_max,big ::: 'seq 100 | parallel a_run 1 | jq -s max' 'a_run 3000'
 if [ $small_max -lt $big ] ; then
 echo "Bad: Memleak likely."
 else
@@ -181,6 +181,7 @@ par_linebuffer_files() {
 rm -rf "/tmp/par48658-$compress"
 }
 export -f doit
+# lrz complains 'Warning, unable to set nice value on thread'
 parallel -j1 --tag -k doit ::: zstd pzstd clzip lz4 lzop pigz pxz gzip plzip pbzip2 lzma xz lzip bzip2 lbzip2 lrz
 }

View file

@@ -181,11 +181,14 @@ par_no_route_to_host() {
 (
 # Cache a list of hosts that fail fast with 'No route'
-# Filter the list 4 times to make sure to get good hosts
-renice 10 -p $$ >/dev/null
-findhosts | filterhosts | filterhosts | filterhosts |
-filterhosts | filterhosts | head > /tmp/filtered.$$
-mv /tmp/filtered.$$ /tmp/filtered.hosts
+# Filter the list 5 times to make sure to get good hosts
+export -f findhosts
+export -f filterhosts
+nice bash -c '
+    findhosts | filterhosts | filterhosts | filterhosts |
+        filterhosts | filterhosts | head > /tmp/filtered.$$
+    mv /tmp/filtered.$$ /tmp/filtered.hosts
+'
 ) &
 (
 # We just need one of each to complete

View file

@ -1,3 +1,5 @@
@@ -1,3 +1,5 @@
+par_block_negative_prefix ### This should generate 10*2 jobs
+par_block_negative_prefix 20 40 720
 par_compress_prg_fails ### bug #44546: If --compress-program fails: fail
 par_compress_prg_fails 1
 par_compress_prg_fails parallel: Error: false failed.

View file

@@ -83,7 +83,7 @@ echo '### bug #42893: --block should not cause decimals in cat_partial'
 echo '### bug #42892: parallel -a nonexiting --pipepart'
 ### bug #42892: parallel -a nonexiting --pipepart
 parallel --pipepart -a nonexisting wc
-parallel: Error: nonexisting is neither a file nor a block device
+parallel: Error: File not found: nonexisting
 echo '### added transfersize/returnsize to local jobs'
 ### added transfersize/returnsize to local jobs
 echo '### normal'

View file

@@ -788,6 +788,17 @@ freebsd ~/.profile
 freebsd ~/.cshrc
 freebsd ~/.tcshrc
 freebsd install-OK
+hpux Installed env_parallel in:
+hpux ~/.bashrc
+hpux ~/.shrc
+hpux ~/.zshenv
+hpux ~/.config/fish/config.fish
+hpux ~/.kshrc
+hpux ~/.mkshrc
+hpux ~/.profile
+hpux ~/.cshrc
+hpux ~/.tcshrc
+hpux install-OK
 hpux-ia64 Installed env_parallel in:
 hpux-ia64 ~/.bashrc
 hpux-ia64 ~/.shrc
@@ -811,17 +822,6 @@ hurd ~/.profile
 hurd ~/.cshrc
 hurd ~/.tcshrc
 hurd install-OK
-hpux Installed env_parallel in:
-hpux ~/.bashrc
-hpux ~/.shrc
-hpux ~/.zshenv
-hpux ~/.config/fish/config.fish
-hpux ~/.kshrc
-hpux ~/.mkshrc
-hpux ~/.profile
-hpux ~/.cshrc
-hpux ~/.tcshrc
-hpux install-OK
 hpux-ia64 Installed env_parallel in:
 hpux-ia64 /home/t/tange/.bashrc
 hpux-ia64 /home/t/tange/.shrc