parallel: stdin_buffer changed to block.

Ole Tange 2015-11-22 11:12:47 +01:00
parent 5be0293a96
commit fc8e346bee
6 changed files with 124 additions and 107 deletions

NEWS
View file

@@ -1,8 +1,8 @@
 20151022

-* --plus makes it possible to use {##} as a short had for {=
-  $_=$Global::JobQueue->total_jobs() =} which gives the the number of
-  jobs to run.
+* --plus makes it possible to use {##} as a shorthand for
+  {= $_=$Global::JobQueue->total_jobs() =} which gives the the number
+  of jobs to run in total.

 * {= $_=$Global::JobQueue->total_jobs() =} is incompatible with -X,
   -m, and --xargs.
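The {##} entry above is easiest to see with a concrete command. A minimal usage sketch (the input values are made up): with --plus, {#} expands to the current job number and {##} to the total number of jobs, so

    parallel --plus echo job {#} of {##} ::: a b c d

prints "job 1 of 4" through "job 4 of 4", in whatever order the jobs finish.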
@@ -37,30 +37,48 @@
 20150922

-* GNU Parallel was cited in: Flexible Modeling of Epidemics with an Empirical Bayes Framework http://journals.plos.org/ploscompbiol/article?id=10.1371%2Fjournal.pcbi.1004382
+* GNU Parallel was cited in: Flexible Modeling of Epidemics with an
+  Empirical Bayes Framework
+  http://journals.plos.org/ploscompbiol/article?id=10.1371%2Fjournal.pcbi.1004382
-* GNU Parallel was cited in: BL1: 2D Potts Model with a Twist https://sucs.swan.ac.uk/~rjames93/Dissertation.pdf
+* GNU Parallel was cited in: BL1: 2D Potts Model with a Twist
+  https://sucs.swan.ac.uk/~rjames93/Dissertation.pdf
-* GNU Parallel was cited in: DockBench: An Integrated Informatic Platform Bridging the Gap between the Robust Validation of Docking Protocols and Virtual Screening Simulations http://www.mdpi.com/1420-3049/20/6/9977/pdf
+* GNU Parallel was cited in: DockBench: An Integrated Informatic
+  Platform Bridging the Gap between the Robust Validation of Docking
+  Protocols and Virtual Screening Simulations
+  http://www.mdpi.com/1420-3049/20/6/9977/pdf
-* GNU Parallel was cited in: A Scalable Parallel Implementation of Evolutionary Algorithms for
-  Multi-Objective Optimization on GPUs http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7257074
+* GNU Parallel was cited in: A Scalable Parallel Implementation of
+  Evolutionary Algorithms for Multi-Objective Optimization on GPUs
+  http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7257074
-* GNU Parallel was cited in: Tools and techniques for computational reproducibility http://biorxiv.org/content/biorxiv/early/2015/07/17/022707.full.pdf
+* GNU Parallel was cited in: Tools and techniques for computational
+  reproducibility
+  http://biorxiv.org/content/biorxiv/early/2015/07/17/022707.full.pdf
-* GNU Parallel was cited in: How Can We Measure the Similarity Between Résumés of Selected Candidates for a Job? http://www.researchgate.net/publication/275954089_How_can_we_measure_the_similarity_between_rsums_of_selected_candidates_for_a_job
+* GNU Parallel was cited in: How Can We Measure the Similarity Between
+  Résumés of Selected Candidates for a Job?
+  http://www.researchgate.net/publication/275954089_How_can_we_measure_the_similarity_between_rsums_of_selected_candidates_for_a_job
-* GNU Parallel was cited in: Interplay of cell dynamics and epithelial tension during morphogenesis of the Drosophila pupal wing http://www.researchgate.net/profile/Raphael_Etournay/publication/279061859_Interplay_of_cell_dynamics_and_epithelial_tension_during_morphogenesis_of_the_Drosophila_pupal_wing/links/558a95ad08aeae8413bcceea.pdf
+* GNU Parallel was cited in: Interplay of cell dynamics and epithelial
+  tension during morphogenesis of the Drosophila pupal wing
+  http://www.researchgate.net/profile/Raphael_Etournay/publication/279061859_Interplay_of_cell_dynamics_and_epithelial_tension_during_morphogenesis_of_the_Drosophila_pupal_wing/links/558a95ad08aeae8413bcceea.pdf
-* Third-party selling GNU Parallel T-shirts http://www.aliexpress.com/item/2015F-BSO-GNU-LOGO-GNU-PARALLEL-men-s-shirt-sleeve-visual-illusion-error/32464827966.html
+* Third-party selling GNU Parallel T-shirts
+  http://www.aliexpress.com/item/2015F-BSO-GNU-LOGO-GNU-PARALLEL-men-s-shirt-sleeve-visual-illusion-error/32464827966.html
-* Joys of gnu parallel http://scottolesen.com/index.php/2015/08/26/joys-of-gnu-parallel/
+* Joys of gnu parallel
+  http://scottolesen.com/index.php/2015/08/26/joys-of-gnu-parallel/
-* Crop and resize images with bash and ImageMagick https://www.simonholywell.com/post/2015/08/image-resize-crop-bash-imagemagick/
+* Crop and resize images with bash and ImageMagick
+  https://www.simonholywell.com/post/2015/08/image-resize-crop-bash-imagemagick/
-* Three Ways to Script Processes in Parallel http://www.codeword.xyz/2015/09/02/three-ways-to-script-processes-in-parallel/
+* Three Ways to Script Processes in Parallel
+  http://www.codeword.xyz/2015/09/02/three-ways-to-script-processes-in-parallel/
-* What It Looks Like to Process 3.5 Million Books http://blog.gdeltproject.org/what-it-looks-like-to-process-3-5-million-books/
+* What It Looks Like to Process 3.5 Million Books
+  http://blog.gdeltproject.org/what-it-looks-like-to-process-3-5-million-books/

 * Bug fixes and man page updates.

View file

@@ -4,7 +4,7 @@
 Check that documentation is updated (compare to web):
-Fixet for 20150922
+Fixet for 20151022
 git diff last-release-commit
 Unmodified beta since last version => production
 Unmodified alpha since last version => beta
@@ -212,17 +212,15 @@ cc:Tim Cuthbertson <tim3d.junk@gmail.com>,
 Ryoichiro Suzuki <ryoichiro.suzuki@gmail.com>,
 Jesse Alama <jesse.alama@gmail.com>

-Subject: GNU Parallel 20151022 ('Liquid Water / 9N314M') released <<[stable]>>
+Subject: GNU Parallel 20151122 ('Kronan/Sharm el-Sheik/Bataclan') released <<[stable]>>

-GNU Parallel 20151022 ('Liquid Water') <<[stable]>> has been released. It is available for download at: http://ftp.gnu.org/gnu/parallel/
+GNU Parallel 20151122 ('Kronan/Sharm el-Sheik/Bataclan') <<[stable]>> has been released. It is available for download at: http://ftp.gnu.org/gnu/parallel/

 <<No new functionality was introduced so this is a good candidate for a stable release.>>

 Haiku of the month:

-In parallel land
-everything is quick and fast
-Use GNU Parallel.
+<<>>

 -- Ole Tange

 New in this release:
@@ -236,25 +234,15 @@ New in this release:
 * <<kontaktet 2015-06-22 Afventer svar fra journal>> GNU Parallel was used (unfortunately with wrong citation) in: TADSim: Discrete Event-Based Performance Prediction for Temperature-Accelerated Dynamics http://vruehle.de/publications/2015c.pdf
 http://www.researchgate.net/profile/Christoph_Junghans/publication/276178326_TADSim_Discrete_Event-Based_Performance_Prediction_for_Temperature-Accelerated_Dynamics/links/55562b6708ae980ca60c8369.pdf
-* --plus makes it possible to use {##} as a short had for {= $_=$Global::JobQueue->total_jobs() =} which gives the the number of jobs to run.
-* {= $_=$Global::JobQueue->total_jobs() =} is incompatible with -X, -m, and --xargs.
-* GNU Parallel is now mostly compatible with lsh (http://www.lysator.liu.se/~nisse/lsh/) and somewhat compatible with autossh (http://www.harding.motd.ca/autossh/).
-* --workdir ... now also works when run locally.
-* GNU Parallel was cited in: There is no (75, 32, 10, 16) strongly regular graph http://arxiv.org/pdf/1509.05933.pdf
-* GNU Parallel was cited in: Roary: rapid large-scale prokaryote pan genome analysis http://bioinformatics.oxfordjournals.org/content/early/2015/08/05/bioinformatics.btv421.full.pdf+html
-* GNU Parallel is used in TraitAR: https://testpypi.python.org/pypi/traitar/0.1.4
-* GNU Parallel is used in youtube-dl-parallel: https://github.com/dlh/youtube-dl-parallel
-* A parallel and fast way to download multiple files http://onetipperday.blogspot.com/2015/10/a-parallel-and-fast-way-to-download.html
-* Usar GNU Parallel para aumentar el rendimiento de tus scripts http://adrianarroyocalle.github.io/blog/2015/10/20/usar-gnu-parallel/
+* << Update forventet juni Rachel har lige svaret >> GNU Parallel was used in: SISRS: Site Identification from Short Read Sequences https://github.com/rachelss/SISRS/
+* GNU Parallel packaged for CERN CentOS: http://linuxsoft.cern.ch/cern/centos/7/cern/x86_64/repoview/parallel.html
+* Automating large numbers of tasks https://rcc.uchicago.edu/docs/tutorials/kicp-tutorials/running-jobs.html
+* Max out your IOPs with GNU Parallel http://blog.bitratchet.com/2015/11/11/max-out-your-iops-with-gnu-parallel/
+* GNU Parallel was cited in: Evolution of movement strategies under competitive interactions http://digital.csic.es/bitstream/10261/115973/1/evolution_movement_strategies_Kiziridis.pdf
 * Bug fixes and man page updates.

View file

@@ -435,8 +435,8 @@ sub spreadstdin {
     my $something_written = 0;
     for my $pid (keys %incomplete_jobs) {
         my $job = $incomplete_jobs{$pid};
-        if($job->stdin_buffer_length()) {
-            $something_written += $job->non_block_write();
+        if($job->block_length()) {
+            $something_written += $job->non_blocking_write();
         } else {
             delete $incomplete_jobs{$pid}
         }
@@ -509,7 +509,7 @@ sub nindex {
     # %Global::running
     # Returns:
     # $something_written = amount of bytes written
-    my ($header_ref,$block_ref,$recstart,$recend,$endpos) = @_;
+    my ($header_ref,$buffer_ref,$recstart,$recend,$endpos) = @_;
     my $something_written = 0;
     my $block_passed = 0;
     my $sleep = 1;
@@ -522,13 +522,13 @@ sub nindex {
                            values %Global::running);
     }
     while(my $job = shift @robin_queue) {
-        if($job->stdin_buffer_length() > 0) {
-            $something_written += $job->non_block_write();
+        if($job->block_length() > 0) {
+            $something_written += $job->non_blocking_write();
         } else {
-            $job->set_stdin_buffer($header_ref,$block_ref,$endpos,$recstart,$recend);
+            $job->set_block($header_ref,$buffer_ref,$endpos,$recstart,$recend);
             $block_passed = 1;
             $job->set_virgin(0);
-            $something_written += $job->non_block_write();
+            $something_written += $job->non_blocking_write();
             last;
         }
     }
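The loop above services --pipe --roundrobin: roughly, each block read from stdin is handed to the first queued job whose previous block has already been written out, while jobs with leftover data are drained by non_blocking_write(). As a usage-level sketch of the code path this serves (the file name and sizes are made up):

    # Chop stdin into ~10 MB blocks and give each block to whichever of
    # the 4 workers is ready for it, rather than distributing in order.
    cat access.log | parallel --pipe --roundrobin --block 10M -j4 'wc -l'

Each worker prints the line count of whatever blocks it happened to receive.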
@@ -633,21 +633,21 @@ sub write_record_to_pipe {
     # Input:
     # $chunk_number = sequence number - to see if already run
     # $header_ref = reference to header string to prepend
-    # $record_ref = reference to record to write
+    # $buffer_ref = reference to record to write
     # $recstart = start string of record
     # $recend = end string of record
-    # $endpos = position in $record_ref where record ends
+    # $endpos = position in $buffer_ref where record ends
     # Uses:
     # $Global::job_already_run
     # $opt::roundrobin
     # @Global::virgin_jobs
     # Returns:
     # Number of chunks written (0 or 1)
-    my ($chunk_number,$header_ref,$record_ref,$recstart,$recend,$endpos) = @_;
+    my ($chunk_number,$header_ref,$buffer_ref,$recstart,$recend,$endpos) = @_;
     if($endpos == 0) { return 0; }
     if(vec($Global::job_already_run,$chunk_number,1)) { return 1; }
     if($opt::roundrobin) {
-        return round_robin_write($header_ref,$record_ref,$recstart,$recend,$endpos);
+        return round_robin_write($header_ref,$buffer_ref,$recstart,$recend,$endpos);
     }
     # If no virgin found, backoff
     my $sleep = 0.0001; # 0.01 ms - better performance on highend
@@ -655,28 +655,31 @@ sub write_record_to_pipe {
         ::debug("pipe", "No virgin jobs");
         $sleep = ::reap_usleep($sleep);
         # Jobs may not be started because of loadavg
-        # or too little time between each ssh login.
+        # or too little time between each ssh login
+        # or retrying failed jobs.
         start_more_jobs();
     }
     my $job = shift @Global::virgin_jobs;
     # Job is no longer virgin
     $job->set_virgin(0);
-    # We ignore the removed rec_sep which is technically wrong.
-    $job->add_transfersize($endpos + length $$header_ref);
-    if(fork()) {
-        # Skip
-    } else {
-        # Chop of at $endpos as we do not know how many rec_sep will
-        # be removed.
-        substr($$record_ref,$endpos,length $$record_ref) = "";
-        # Remove rec_sep
-        if($opt::remove_rec_sep) {
-            Job::remove_rec_sep($record_ref,$recstart,$recend);
-        }
-        $job->write($header_ref);
-        $job->write($record_ref);
-        close $job->fh(0,"w");
-        exit(0);
-    }
+    if(1) {
+        # We ignore the removed rec_sep which is technically wrong.
+        $job->add_transfersize($endpos + length $$header_ref);
+        if(fork()) {
+            # Skip
+        } else {
+            # Chop of at $endpos as we do not know how many rec_sep will
+            # be removed.
+            substr($$buffer_ref,$endpos,length $$buffer_ref) = "";
+            # Remove rec_sep
+            if($opt::remove_rec_sep) {
+                Job::remove_rec_sep($buffer_ref,$recstart,$recend);
+            }
+            $job->write($header_ref);
+            $job->write($buffer_ref);
+            close $job->fh(0,"w");
+            exit(0);
+        }
+    }
     close $job->fh(0,"w");
     return 1;
@@ -1100,7 +1103,7 @@ sub parse_options {
 sub init_globals {
     # Defaults:
-    $Global::version = 20151022;
+    $Global::version = 20151023;
     $Global::progname = 'parallel';
     $Global::infinity = 2**31;
     $Global::debug = 0;
@@ -6198,32 +6201,38 @@ sub write {
     }
 }

-sub set_stdin_buffer {
+sub set_block {
     # Copy stdin buffer from $block_ref up to $endpos
-    # Prepend with $header_ref
+    # Prepend with $header_ref if virgin (i.e. not --roundrobin)
     # Remove $recstart and $recend if needed
     # Input:
     # $header_ref = ref to $header to prepend
-    # $block_ref = ref to $block to pass on
+    # $buffer_ref = ref to $buffer containing the block
     # $endpos = length of $block to pass on
     # $recstart = --recstart regexp
     # $recend = --recend regexp
     # Returns:
     # N/A
     my $self = shift;
-    my ($header_ref,$block_ref,$endpos,$recstart,$recend) = @_;
-    $self->{'stdin_buffer'} = ($self->virgin() ? $$header_ref : "").substr($$block_ref,0,$endpos);
+    my ($header_ref,$buffer_ref,$endpos,$recstart,$recend) = @_;
+    $self->{'block'} = ($self->virgin() ? $$header_ref : "").substr($$buffer_ref,0,$endpos);
     if($opt::remove_rec_sep) {
-        remove_rec_sep(\$self->{'stdin_buffer'},$recstart,$recend);
+        remove_rec_sep(\$self->{'block'},$recstart,$recend);
     }
-    $self->{'stdin_buffer_length'} = length $self->{'stdin_buffer'};
-    $self->{'stdin_buffer_pos'} = 0;
-    $self->add_transfersize($self->{'stdin_buffer_length'});
+    $self->{'block_length'} = length $self->{'block'};
+    $self->{'block_pos'} = 0;
+    $self->add_transfersize($self->{'block_length'});
 }

-sub stdin_buffer_length {
+sub block_ref {
     my $self = shift;
-    return $self->{'stdin_buffer_length'};
+    return \$self->{'block'};
+}
+
+sub block_length {
+    my $self = shift;
+    return $self->{'block_length'};
 }

 sub remove_rec_sep {
@@ -6234,26 +6243,26 @@ sub remove_rec_sep {
     $$block_ref =~ s/$recend$//os;
 }

-sub non_block_write {
+sub non_blocking_write {
     my $self = shift;
     my $something_written = 0;
     use POSIX qw(:errno_h);
     # for loop used to avoid copying substr: $buf will be an alias for the substr
-    for my $buf (substr($self->{'stdin_buffer'},$self->{'stdin_buffer_pos'})) {
+    for my $buf (substr($self->{'block'},$self->{'block_pos'})) {
         my $in = $self->fh(0,"w");
         my $rv = syswrite($in, $buf);
         if (!defined($rv) && $! == EAGAIN) {
             # would block
             $something_written = 0;
-        } elsif ($self->{'stdin_buffer_pos'}+$rv != $self->{'stdin_buffer_length'}) {
+        } elsif ($self->{'block_pos'}+$rv != $self->{'block_length'}) {
             # incomplete write
             # Remove the written part
-            $self->{'stdin_buffer_pos'} += $rv;
+            $self->{'block_pos'} += $rv;
             $something_written = $rv;
         } else {
             # successfully wrote everything
             my $a = "";
-            $self->set_stdin_buffer(\$a,\$a,0,"","");
+            $self->set_block(\$a,\$a,0,"","");
             $something_written = $rv;
         }
     }
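The renamed non_blocking_write() above is the classic partial-write loop over a non-blocking pipe: syswrite() either writes some prefix of the block, or fails with EAGAIN when the pipe buffer is full, and the caller retries later from the recorded position. A standalone Perl sketch of the same pattern (not parallel's code; the child command and block size are made up for illustration):

    use strict;
    use warnings;
    use Fcntl;                 # F_GETFL, F_SETFL, O_NONBLOCK
    use POSIX qw(:errno_h);    # EAGAIN

    my $block = "x" x 1_000_000;   # pretend this is one --block worth of input
    my $pos   = 0;                 # how much of $block has been written so far

    open(my $out, "|-", "wc -c") or die "open: $!";
    # Make the pipe non-blocking so syswrite() returns EAGAIN instead of
    # stalling when the pipe buffer is full.
    my $flags = fcntl($out, F_GETFL, 0) or die "fcntl: $!";
    fcntl($out, F_SETFL, $flags | O_NONBLOCK) or die "fcntl: $!";

    while($pos < length $block) {
        my $rv = syswrite($out, substr($block, $pos));
        if(!defined($rv) && $! == EAGAIN) {
            # Pipe full: would block. A real scheduler would go service
            # other jobs here; this sketch just waits a moment.
            select(undef, undef, undef, 0.01);
        } elsif(!defined($rv)) {
            die "syswrite: $!";
        } else {
            $pos += $rv;    # partial or complete write
        }
    }
    close $out;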
@@ -8001,11 +8010,13 @@ sub push {
     if($perlexpr =~ /^(\d+) /) {
         # Positional
         defined($record->[$1-1]) or next;
-        $self->{'len'}{$perlexpr} += length $record->[$1-1]->replace($perlexpr,$quote_arg,$self);
+        $self->{'len'}{$perlexpr} +=
+            length $record->[$1-1]->replace($perlexpr,$quote_arg,$self);
     } else {
         for my $arg (@$record) {
             if(defined $arg) {
-                $self->{'len'}{$perlexpr} += length $arg->replace($perlexpr,$quote_arg,$self);
+                $self->{'len'}{$perlexpr} +=
+                    length $arg->replace($perlexpr,$quote_arg,$self);
             }
         }
     }
@@ -8604,7 +8615,7 @@ sub get {
     if($opt::pipe or $opt::pipepart) {
         if($cmd_line->replaced() eq "") {
             # Empty command - pipe requires a command
-            ::error("--pipe must have a command to pipe into (e.g. 'cat').");
+            ::error("--pipe/--pipepart must have a command to pipe into (e.g. 'cat').");
             ::wait_and_exit(255);
         }
     } else {

View file

@@ -426,7 +426,7 @@ string that is not in the command line.
 See also: B<:::>.

-=item B<--bar> (alpha testing)
+=item B<--bar> (beta testing)

 Show progress as a progress bar. In the bar is shown: % of jobs
 completed, estimated seconds left, and number of jobs started.
@@ -513,7 +513,7 @@ I<size> defaults to 1M.
 See B<--pipe> and B<--pipepart> for use of this.

-=item B<--cat> (alpha testing)
+=item B<--cat> (beta testing)

 Create a temporary file with content. Normally B<--pipe>/B<--pipepart>
 will give data to the program on stdin (standard input). With B<--cat>
@@ -613,7 +613,7 @@ I<secs> seconds after starting each job. I<secs> can be less than 1
 second.

-=item B<--dry-run> (alpha testing)
+=item B<--dry-run> (beta testing)

 Print the job to run on stdout (standard output), but do not run the
 job. Use B<-v -v> to include the wrapping that GNU Parallel generates
@@ -672,7 +672,7 @@ and functions) use env_parallel as described under the option I<command>.
 See also: B<--record-env>.

-=item B<--eta> (alpha testing)
+=item B<--eta> (beta testing)

 Show the estimated number of seconds before finishing. This forces GNU
 B<parallel> to read all jobs before starting to find the number of
@@ -693,7 +693,7 @@ Implies B<--semaphore>.
 See also B<--bg>, B<man sem>.

-=item B<--fifo> (alpha testing)
+=item B<--fifo> (beta testing)

 Create a temporary fifo with content. Normally B<--pipe> and
 B<--pipepart> will give data to the program on stdin (standard
@@ -725,7 +725,7 @@ over B<--tollef>. The B<--tollef> option is now retired, and therefore
 may not be used. B<--gnu> is kept for compatibility.

-=item B<--group> (alpha testing)
+=item B<--group> (beta testing)

 Group output. Output from each jobs is grouped together and is only
 printed when the command is finished. stderr (standard error) first
@@ -747,9 +747,9 @@ See also: B<--line-buffer> B<--ungroup>
 Print a summary of the options to GNU B<parallel> and exit.

-=item B<--halt-on-error> I<val> (alpha testing)
+=item B<--halt-on-error> I<val> (beta testing)

-=item B<--halt> I<val> (alpha testing)
+=item B<--halt> I<val> (beta testing)

 When should GNU B<parallel> terminate? In some situations it makes no
 sense to run all jobs. GNU B<parallel> should simply give up as soon
@@ -1172,7 +1172,7 @@ performance is important use B<--pipepart>.
 See also: B<--recstart>, B<--recend>, B<--fifo>, B<--cat>, B<--pipepart>.

-=item B<--pipepart> (alpha testing)
+=item B<--pipepart> (beta testing)

 Pipe parts of a physical file. B<--pipepart> works similar to
 B<--pipe>, but is much faster. It has a few limitations:
@@ -1198,7 +1198,7 @@ control on the command line (used by GNU B<parallel> internally when
 called with B<--sshlogin>).

-=item B<--plus> (alpha testing)
+=item B<--plus> (beta testing)

 Activate additional replacement strings: {+/} {+.} {+..} {+...} {..}
 {...} {/..} {/...} {##}. The idea being that '{+foo}' matches the opposite of
@@ -1288,7 +1288,7 @@ Overrides an earlier B<--keep-order> (e.g. if set in
 B<~/.parallel/config>).

-=item B<--nice> I<niceness> (alpha testing)
+=item B<--nice> I<niceness> (beta testing)

 Run the command at this niceness. For simple commands you can just add
 B<nice> in front of the command. But if the command consists of more
@@ -1737,7 +1737,7 @@ Does not run the command but quotes it. Useful for making quoted
 composed commands for GNU B<parallel>.

-=item B<--shuf> (alpha testing)
+=item B<--shuf> (beta testing)

 Shuffle jobs. When having multiple input sources it is hard to
 randomize jobs. --shuf will generate all jobs, and shuffle them before
@@ -1765,13 +1765,13 @@ I<secs> seconds after starting each ssh. I<secs> can be less than 1
 seconds.

-=item B<-S> I<[@hostgroups/][ncpu/]sshlogin[,[@hostgroups/][ncpu/]sshlogin[,...]]> (alpha testing)
+=item B<-S> I<[@hostgroups/][ncpu/]sshlogin[,[@hostgroups/][ncpu/]sshlogin[,...]]> (beta testing)

-=item B<-S> I<@hostgroup> (alpha testing)
+=item B<-S> I<@hostgroup> (beta testing)

-=item B<--sshlogin> I<[@hostgroups/][ncpu/]sshlogin[,[@hostgroups/][ncpu/]sshlogin[,...]]> (alpha testing)
+=item B<--sshlogin> I<[@hostgroups/][ncpu/]sshlogin[,[@hostgroups/][ncpu/]sshlogin[,...]]> (beta testing)

-=item B<--sshlogin> I<@hostgroup> (alpha testing)
+=item B<--sshlogin> I<@hostgroup> (beta testing)

 Distribute jobs to remote computers. The jobs will be run on a list of
 remote computers.
@@ -2088,9 +2088,9 @@ Use B<-v> B<-v> to print the wrapping ssh command when running remotely.
 Print the version GNU B<parallel> and exit.

-=item B<--workdir> I<mydir> (alpha testing)
+=item B<--workdir> I<mydir> (beta testing)

-=item B<--wd> I<mydir> (alpha testing)
+=item B<--wd> I<mydir> (beta testing)

 Files transferred using B<--transfer> and B<--return> will be relative
 to I<mydir> on remote computers, and the command will be executed in

View file

@@ -3,7 +3,7 @@
 # SSH only allowed to localhost/lo
 cat <<'EOF' | sed -e s/\$SERVER1/$SERVER1/\;s/\$SERVER2/$SERVER2/ | parallel -vj2 --retries 3 -k --joblog /tmp/jl-`basename $0` -L1
 echo '### --hostgroup force ncpu'
-parallel --delay 0.1 --hgrp -S @g1/1/parallel@lo -S @g2/3/lo whoami\;sleep 0.2{} ::: {1..8} | sort
+parallel --delay 0.1 --hgrp -S @g1/1/parallel@lo -S @g2/3/lo whoami\;sleep 0.3{} ::: {1..8} | sort
 echo '### --hostgroup two group arg'
 parallel -k --sshdelay 0.1 --hgrp -S @g1/1/parallel@lo -S @g2/3/lo whoami\;sleep 0.3{} ::: {1..8}@g1+g2 | sort
@@ -12,7 +12,7 @@ echo '### --hostgroup one group arg'
 parallel --delay 0.1 --hgrp -S @g1/1/parallel@lo -S @g2/3/lo whoami\;sleep 0.2{} ::: {1..8}@g2
 echo '### --hostgroup multiple group arg + unused group'
-parallel --delay 0.1 --hgrp -S @g1/1/parallel@lo -S @g1/3/lo -S @g3/100/tcsh@lo whoami\;sleep 0.2{} ::: {1..8}@g1+g2 | sort
+parallel --delay 0.1 --hgrp -S @g1/1/parallel@lo -S @g1/3/lo -S @g3/100/tcsh@lo whoami\;sleep 0.4{} ::: {1..8}@g1+g2 | sort
 echo '### --hostgroup two groups @'
 parallel -k --hgrp -S @g1/parallel@lo -S @g2/lo --tag whoami\;echo ::: parallel@g1 tange@g2

View file

@@ -1,6 +1,6 @@
 echo '### --hostgroup force ncpu'
 ### --hostgroup force ncpu
-parallel --delay 0.1 --hgrp -S @g1/1/parallel@lo -S @g2/3/lo whoami\;sleep 0.2{} ::: {1..8} | sort
+parallel --delay 0.1 --hgrp -S @g1/1/parallel@lo -S @g2/3/lo whoami\;sleep 0.3{} ::: {1..8} | sort
 parallel
 parallel
 tange
@@ -33,7 +33,7 @@ tange
 tange
 echo '### --hostgroup multiple group arg + unused group'
 ### --hostgroup multiple group arg + unused group
-parallel --delay 0.1 --hgrp -S @g1/1/parallel@lo -S @g1/3/lo -S @g3/100/tcsh@lo whoami\;sleep 0.2{} ::: {1..8}@g1+g2 | sort
+parallel --delay 0.1 --hgrp -S @g1/1/parallel@lo -S @g1/3/lo -S @g3/100/tcsh@lo whoami\;sleep 0.4{} ::: {1..8}@g1+g2 | sort
 parallel
 parallel
 tange