Fixed bug #38908: niceload: Ctrl-C should resume jobs if using -p

Ole Tange 2014-03-03 19:26:19 +01:00
parent 6fc02c1fc8
commit 583e6fb4b7
8 changed files with 241 additions and 49 deletions


@@ -69,8 +69,12 @@ if(@opt::prg) {
# Find all pids of prg
my $out = `pidof -x @opt::prg`;
$process->set_pid(split /\s+/,$out);
$::resume_process = $process;
$SIG{TERM} = $SIG{INT} = \&resume;
} elsif(@opt::pid) {
$process->set_pid(@opt::pid);
$::resume_process = $process;
$SIG{TERM} = $SIG{INT} = \&resume;
} elsif (@ARGV) {
# Wait until limit is below start_limit and run_limit
while($limit->over_start_limit()
@@ -97,6 +101,11 @@ while($process->is_alive()) {
exit($::exitstatus);
sub resume {
$::resume_process->resume();
exit(0);
}
sub uniq {
# Remove duplicates and return unique values
return keys %{{ map { $_ => 1 } @_ }};
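
The change above installs a SIGINT/SIGTERM handler whenever niceload is attached to an already running process (via --prg or --pid): the handler resumes whatever has been suspended before exiting, instead of leaving it stopped. A minimal standalone Perl sketch of that pattern (an illustration only; niceload itself goes through its process object's resume()):

  #!/usr/bin/perl
  # Sketch of the resume-on-signal pattern: suspend/resume a tracked PID
  # and, on INT/TERM, make sure it is running again before exiting.
  use strict;
  use warnings;

  my $pid = shift @ARGV or die "Usage: $0 PID\n";
  my $suspended = 0;

  sub resume {
      kill 'CONT', $pid if $suspended;   # undo a pending SIGSTOP
      exit 0;
  }
  $SIG{INT} = $SIG{TERM} = \&resume;

  while (kill 0, $pid) {                   # while the process is alive
      kill 'STOP', $pid; $suspended = 1;   # stand-in for "limit exceeded"
      sleep 1;
      kill 'CONT', $pid; $suspended = 0;   # stand-in for "limit OK again"
      sleep 1;
  }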
@@ -462,11 +471,12 @@ sub new {
my $startmem = $opt::start_mem ? ::multiply_binary_prefix($opt::start_mem) : 0;
my $runnoswap = $opt::run_noswap ? ::multiply_binary_prefix($opt::run_noswap) : 0;
my $startnoswap = $opt::start_noswap ? ::multiply_binary_prefix($opt::start_noswap) : 0;
my $recheck = $opt::recheck ? ::multiply_binary_prefix($opt::recheck) : 1; # Default
my $runtime = $opt::suspend ? ::multiply_binary_prefix($opt::suspend) : 1; # Default
return bless {
'hard' => $hard,
'recheck' => 1, # Default
'runtime' => 1, # Default
'recheck' => $recheck,
'runio' => $runio,
'startio' => $startio,
'runload' => $runload,
@@ -476,8 +486,8 @@ sub new {
'runnoswap' => $runnoswap,
'startnoswap' => $startnoswap,
'factor' => $opt::factor || 1,
'recheck' => $opt::recheck || 1,
'runtime' => $opt::recheck || 1,
'recheck' => $recheck,
'runtime' => $runtime,
'over_run_limit' => 1,
'over_start_limit' => 1,
'verbose' => $opt::verbose,
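
The constructor hunks fix a second problem: 'recheck' and 'runtime' were first hard-coded to 1 and then both set from $opt::recheck (later duplicate keys win in Perl), so --suspend never influenced 'runtime' and neither option had unit suffixes expanded. Both values are now computed once via ::multiply_binary_prefix(), defaulting to 1. A hedged sketch of that intent; the suffix table below is an assumption for illustration, the real ::multiply_binary_prefix() supports more prefixes:

  use strict;
  use warnings;

  # Expand a value such as "2", "2k" or "1.5M"; fall back to a default
  # when the option was not given.
  sub expand_or_default {
      my ($value, $default) = @_;
      return $default unless defined $value and length $value;
      my %mult = (k => 1e3, m => 1e6, g => 1e9);   # assumed subset
      return $1 * $mult{lc $2} if $value =~ /^([\d.]+)\s*([kmg])$/i;
      return $value;
  }

  # Stand-ins for niceload's $opt::recheck / $opt::suspend.
  my ($opt_recheck, $opt_suspend) = ('2', undef);
  my $recheck = expand_or_default($opt_recheck, 1);  # seconds between checks
  my $runtime = expand_or_default($opt_suspend, 1);  # seconds the job may run
  print "recheck=$recheck runtime=$runtime\n";       # recheck=2 runtime=1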

Binary file not shown.


@@ -1701,7 +1701,7 @@ GNU B<parallel> can work similar to B<xargs -n1>.
To compress all html files using B<gzip> run:
B<find . -name '*.html' | parallel gzip>
B<find . -name '*.html' | parallel gzip --best>
If the file names may contain a newline, use B<-0>. Substitute FOO BAR with
FUBAR in all files in this dir and subdirs:
@@ -1717,7 +1717,7 @@ GNU B<parallel> can take the arguments from command line instead of
stdin (standard input). To compress all html files in the current dir
using B<gzip> run:
B<parallel gzip ::: *.html>
B<parallel gzip --best ::: *.html>
To convert *.wav to *.mp3 using LAME, running one process per CPU core,
run:
@@ -3487,6 +3487,84 @@ using GNU B<parallel>:
=back
=head2 DIFFERENCES BETWEEN map AND GNU Parallel
B<map> sees it as a feature to have fewer features, and in doing so it
also handles corner cases incorrectly. A lot of GNU B<parallel>'s code
exists to handle corner cases correctly on every platform, so you will
not get a nasty surprise if a user, for example, saves a file called:
I<My brother's 12" records.txt>
B<map>'s example showing how to deal with special characters fails on
special characters:
echo "The Cure" > My\ brother\'s\ 12\"\ records
ls | map 'echo -n `gzip < "%" | wc -c`; echo -n '*100/'; wc -c < "%"' | bc
It works with GNU B<parallel>:
ls | parallel 'echo -n `gzip < {} | wc -c`; echo -n '*100/'; wc -c < {}' | bc
And you can even get the file name prepended:
ls | parallel --tag '(echo -n `gzip < {} | wc -c`'*100/'; wc -c < {}) | bc'
B<map> has no support for grouping. So this gives the wrong results
without any warnings:
parallel perl -e '\$a=\"1{}\"x10000000\;print\ \$a,\"\\n\"' '>' {} ::: a b c d e f
ls -l a b c d e f
parallel -kP4 -n1 grep 1 > out.par ::: a b c d e f
map -p 4 'grep 1' a b c d e f > out.map-unbuf
map -p 4 'grep --line-buffered 1' a b c d e f > out.map-linebuf
map -p 1 'grep --line-buffered 1' a b c d e f > out.map-serial
ls -l out*
md5sum out*
The documentation shows a workaround, but not only does that mix
stdout (standard output) with stderr (standard error), it also fails
completely for certain jobs (and may even be considered less readable):
parallel echo -n {} ::: 1 2 3
map -p 4 'echo -n % 2>&1 | sed -e "s/^/$$:/"' 1 2 3 | sort | cut -f2- -d:
B<map> cannot handle bundled options: B<map -vp 0 echo this fails>
B<map> does not have an argument separator on the command line, but
uses the first argument as the command. This makes quoting harder,
which in turn may affect readability. Compare:
map -p 2 perl\\\ -ne\\\ \\\'/^\\\\S+\\\\s+\\\\S+\\\$/\\\ and\\\ print\\\ \\\$ARGV,\\\"\\\\n\\\"\\\' *
parallel -q perl -ne '/^\S+\s+\S+$/ and print $ARGV,"\n"' ::: *
B<map> can do multiple arguments with context replace, but not without
context replace:
parallel --xargs echo 'BEGIN{'{}'}END' ::: 1 2 3
B<map> does not set the exit value according to whether one of the
jobs failed:
parallel false ::: 1 || echo Job failed
map false 1 || echo Never run
B<map> requires Perl v5.10.0, making it harder to use on old systems.
B<map> has no way of using a literal % in the command (GNU B<parallel>
has -I to specify a replacement string other than {}).
By design B<map> is option-incompatible with B<xargs>; it does not
have remote job execution, a structured way of saving results,
multiple input sources, a progress indicator, a configurable record
delimiter (only a field delimiter), logging of jobs run with the
possibility to resume, keeping the output in the same order as the
input, --pipe processing, or dynamic timeouts.
=head2 DIFFERENCES BETWEEN ClusterSSH AND GNU Parallel
ClusterSSH solves a different problem than GNU B<parallel>.


@@ -1822,7 +1822,7 @@ GNU @strong{parallel} can work similar to @strong{xargs -n1}.
To compress all html files using @strong{gzip} run:
@strong{find . -name '*.html' | parallel gzip}
@strong{find . -name '*.html' | parallel gzip --best}
If the file names may contain a newline, use @strong{-0}. Substitute FOO BAR with
FUBAR in all files in this dir and subdirs:
@@ -1838,7 +1838,7 @@ GNU @strong{parallel} can take the arguments from command line instead of
stdin (standard input). To compress all html files in the current dir
using @strong{gzip} run:
@strong{parallel gzip ::: *.html}
@strong{parallel gzip --best ::: *.html}
To convert *.wav to *.mp3 using LAME, running one process per CPU core,
run:
@@ -3733,6 +3733,100 @@ using GNU @strong{parallel}:
@end table
@section DIFFERENCES BETWEEN map AND GNU Parallel
@anchor{DIFFERENCES BETWEEN map AND GNU Parallel}
@strong{map} sees it as a feature to have fewer features, and in doing so it
also handles corner cases incorrectly. A lot of GNU @strong{parallel}'s code
exists to handle corner cases correctly on every platform, so you will
not get a nasty surprise if a user, for example, saves a file called:
@emph{My brother's 12" records.txt}
@strong{map}'s example showing how to deal with special characters fails on
special characters:
@verbatim
echo "The Cure" > My\ brother\'s\ 12\"\ records
ls | map 'echo -n `gzip < "%" | wc -c`; echo -n '*100/'; wc -c < "%"' | bc
@end verbatim
It works with GNU @strong{parallel}:
@verbatim
ls | parallel 'echo -n `gzip < {} | wc -c`; echo -n '*100/'; wc -c < {}' | bc
@end verbatim
And you can even get the file name prepended:
@verbatim
ls | parallel --tag '(echo -n `gzip < {} | wc -c`'*100/'; wc -c < {}) | bc'
@end verbatim
@strong{map} has no support for grouping. So this gives the wrong results
without any warnings:
@verbatim
parallel perl -e '\$a=\"1{}\"x10000000\;print\ \$a,\"\\n\"' '>' {} ::: a b c d e f
ls -l a b c d e f
parallel -kP4 -n1 grep 1 > out.par ::: a b c d e f
map -p 4 'grep 1' a b c d e f > out.map-unbuf
map -p 4 'grep --line-buffered 1' a b c d e f > out.map-linebuf
map -p 1 'grep --line-buffered 1' a b c d e f > out.map-serial
ls -l out*
md5sum out*
@end verbatim
The documentation shows a workaround, but not only does that mix
stdout (standard output) with stderr (standard error), it also fails
completely for certain jobs (and may even be considered less readable):
@verbatim
parallel echo -n {} ::: 1 2 3
map -p 4 'echo -n % 2>&1 | sed -e "s/^/$$:/"' 1 2 3 | sort | cut -f2- -d:
@end verbatim
@strong{map} cannot handle bundled options: @strong{map -vp 0 echo this fails}
@strong{map} does not have an argument separator on the command line, but
uses the first argument as the command. This makes quoting harder,
which in turn may affect readability. Compare:
@verbatim
map -p 2 perl\\\ -ne\\\ \\\'/^\\\\S+\\\\s+\\\\S+\\\$/\\\ and\\\ print\\\ \\\$ARGV,\\\"\\\\n\\\"\\\' *
parallel -q perl -ne '/^\S+\s+\S+$/ and print $ARGV,"\n"' ::: *
@end verbatim
@strong{map} can do multiple arguments with context replace, but not without
context replace:
@verbatim
parallel --xargs echo 'BEGIN{'{}'}END' ::: 1 2 3
@end verbatim
@strong{map} does not set the exit value according to whether one of the
jobs failed:
@verbatim
parallel false ::: 1 || echo Job failed
map false 1 || echo Never run
@end verbatim
@strong{map} requires Perl v5.10.0, making it harder to use on old systems.
@strong{map} has no way of using a literal % in the command (GNU
@strong{parallel} has -I to specify a replacement string other than @{@}).
By design @strong{map} is option-incompatible with @strong{xargs}; it does not
have remote job execution, a structured way of saving results,
multiple input sources, a progress indicator, a configurable record
delimiter (only a field delimiter), logging of jobs run with the
possibility to resume, keeping the output in the same order as the
input, --pipe processing, or dynamic timeouts.
@section DIFFERENCES BETWEEN ClusterSSH AND GNU Parallel
@anchor{DIFFERENCES BETWEEN ClusterSSH AND GNU Parallel}


@@ -3,13 +3,44 @@
# force load > 10
while uptime | grep -v age:.[1-9][0-9].[0-9][0-9] >/dev/null ; do (timeout 5 nice burnP6 2>/dev/null &) done
cat <<'EOF' | stdout parallel -j0 -L1
int() {
perl -pe 's/(\d+\.\d*)/int($1)/e'
}
export -f int
cat <<'EOF' | stdout parallel -kj0 -L1
# The seq 20000000 should take > 1 CPU sec to run.
echo '### --soft -f and test if child is actually suspended and thus takes longer'
niceload --soft -f 0.5 'seq 20000000 | wc;echo This should finish last'
(sleep 1; seq 20000000 | wc;echo This should finish first)
niceload --soft -f 0.5 'seq 20000000 | wc;echo This should finish last' &
(sleep 1; seq 20000000 | wc;echo This should finish first) &
wait
echo '### niceload with no arguments should give no output'
niceload
echo '### Test -t and -s'
niceload -v -t 1 -s 2 sleep 4.5
echo 'bug #38908: niceload: Ctrl-C/TERM should resume jobs if using -p'
# This should take 10 seconds to run + delay from niceload
# niceload killed after 1 sec => The delay from niceload should be no more than 1 second
stdout /usr/bin/time -f %e perl -e 'for(1..100) { select(undef, undef, undef, 0.1); } print "done\n"' | int &
niceload -vt 1 -s 10 -p $! &
export A=$!;
sleep 2;
kill -s TERM $A;
wait
echo 'bug #38908: niceload: Ctrl-C should resume jobs if using -p'
# This should take 10 seconds to run + delay from niceload
# niceload killed after 1 sec => The delay from niceload should be no more than 1 second
stdout /usr/bin/time -f %e perl -e 'for(1..100) { select(undef, undef, undef, 0.1); } print "done\n"' | int &
niceload -vt 1 -s 10 -p $! &
export A=$!;
sleep 2;
kill -s INT $A;
wait
EOF
# TODO test -f + -t
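
The int() helper added at the top truncates the first decimal number on each line, so the elapsed time printed by /usr/bin/time -f %e (e.g. 10.37) compares as a stable 10 in the wanted results. The expectation behind the two new bug #38908 cases: the child spends about 10 seconds in 100 selects of 0.1 s, niceload suspends it briefly (the test comments allow for at most one second of delay) before being killed, and the TERM/INT handler must resume it so the total stays under 11 seconds; if the child were left suspended, the test would hang instead. A small Perl sketch of that arithmetic, an illustration rather than part of the test suite:

  use strict;
  use warnings;
  use Time::HiRes qw(time);

  # Same workload as the test's child: 100 selects of 0.1 s (~10 s).
  my $start = time();
  select(undef, undef, undef, 0.1) for 1 .. 100;
  printf "done\n%d\n", int(time() - $start);  # prints 10 unless something
                                              # kept the process suspended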


@@ -17,7 +17,7 @@ ping -c 1 freebsd7.tange.dk >/dev/null 2>&1
echo "### bug #37589: Red Hat 9 (Shrike) perl v5.8.0 built for i386-linux-thread-multi error"
cp `which parallel` /tmp/parallel
stdout parallel -kj10 --argsep == --basefile /tmp/parallel --tag --nonall -S redhat9.tange.dk,centos3.tange.dk,centos5.tange.dk,freebsd7.tange.dk /tmp/parallel echo ::: OK_if_no_perl_warnings | sort
stdout parallel -kj10 --argsep == --basefile /tmp/parallel --tag --nonall -S redhat9.tange.dk,centos3.tange.dk,centos5.tange.dk,freebsd7.tange.dk /tmp/parallel --no-notice echo ::: OK_if_no_perl_warnings | sort
#VBoxManage controlvm CentOS3-root:centos3 savestate
VBoxManage controlvm CentOS5-root:centos5 savestate


@@ -1,6 +1,21 @@
### --soft -f and test if child is actually suspended and thus takes longer
### niceload with no arguments should give no output
20000000 20000000 168888897
This should finish first
20000000 20000000 168888897
This should finish last
### niceload with no arguments should give no output
### Test -t and -s
Sleeping 1s
Running 2s
Sleeping 1s
Running 2s
bug #38908: niceload: Ctrl-C/TERM should resume jobs if using -p
Sleeping 1s
Running 10s
done
10
bug #38908: niceload: Ctrl-C should resume jobs if using -p
Sleeping 1s
Running 10s
done
10


@@ -4,43 +4,7 @@ tange@centos3
tange@centos5
tange@freebsd7
### bug #37589: Red Hat 9 (Shrike) perl v5.8.0 built for i386-linux-thread-multi error
centos3.tange.dk
centos3.tange.dk
centos3.tange.dk
centos3.tange.dk
centos3.tange.dk ;login: The USENIX Magazine, February 2011:42-47.
centos3.tange.dk O. Tange (2011): GNU Parallel - The Command-Line Power Tool,
centos3.tange.dk OK_if_no_perl_warnings
centos3.tange.dk This helps funding further development; and it won't cost you a cent.
centos3.tange.dk To silence this citation notice run 'parallel --bibtex' once or use '--no-notice'.
centos3.tange.dk When using programs that use GNU Parallel to process data for publication please cite:
centos5.tange.dk
centos5.tange.dk
centos5.tange.dk
centos5.tange.dk
centos5.tange.dk ;login: The USENIX Magazine, February 2011:42-47.
centos5.tange.dk O. Tange (2011): GNU Parallel - The Command-Line Power Tool,
centos5.tange.dk OK_if_no_perl_warnings
centos5.tange.dk This helps funding further development; and it won't cost you a cent.
centos5.tange.dk To silence this citation notice run 'parallel --bibtex' once or use '--no-notice'.
centos5.tange.dk When using programs that use GNU Parallel to process data for publication please cite:
freebsd7.tange.dk
freebsd7.tange.dk
freebsd7.tange.dk
freebsd7.tange.dk
freebsd7.tange.dk ;login: The USENIX Magazine, February 2011:42-47.
freebsd7.tange.dk O. Tange (2011): GNU Parallel - The Command-Line Power Tool,
freebsd7.tange.dk OK_if_no_perl_warnings
freebsd7.tange.dk This helps funding further development; and it won't cost you a cent.
freebsd7.tange.dk To silence this citation notice run 'parallel --bibtex' once or use '--no-notice'.
freebsd7.tange.dk When using programs that use GNU Parallel to process data for publication please cite:
redhat9.tange.dk
redhat9.tange.dk
redhat9.tange.dk
redhat9.tange.dk
redhat9.tange.dk ;login: The USENIX Magazine, February 2011:42-47.
redhat9.tange.dk O. Tange (2011): GNU Parallel - The Command-Line Power Tool,
redhat9.tange.dk OK_if_no_perl_warnings
redhat9.tange.dk This helps funding further development; and it won't cost you a cent.
redhat9.tange.dk To silence this citation notice run 'parallel --bibtex' once or use '--no-notice'.
redhat9.tange.dk When using programs that use GNU Parallel to process data for publication please cite: