parallel/doc/FUTURE_IDEAS

=head1 IDEAS
One-char options not used: F G J K P Q Y
Test if -0 works on filenames ending in '\n'
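
One quick check, assuming GNU parallel is installed: feed -0 a
NUL-terminated name ending in a newline and see whether the newline
survives into the command (the brackets make it visible):

  printf 'name\n\0' | parallel -0 echo '[{}]'
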
A monitor to see which jobs are currently running:
http://code.google.com/p/ppss/
Accept signal INT instead of TERM to complete currently running jobs
but not start new ones. Print the number of jobs waiting to complete
on STDERR. Accept SIGINT again to kill immediately. This seems to be
hard, as all foreground processes get the INT from the shell.
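
One way around that might be to start each job in its own process
group, so ^C from the terminal reaches only parallel itself; setsid(1)
is one possibility (an assumption, not current behavior):

  setsid sleep 100 &   # the child no longer shares the terminal's
                       # process group, so ^C will not reach it
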
If there are no more jobs (STDIN is closed), then make sure to
distribute the arguments evenly when running -X.
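
For example, with 10 arguments and 4 job slots -X should produce
chunks of 3, 3, 2 and 2 arguments rather than filling the first
command lines maximally (hypothetical invocation):

  seq 10 | parallel -j4 -X echo
  # desired: four lines with 3, 3, 2 and 2 arguments
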
Reuse the ssh connection (-M and -S) if not using the user's own ssh
command.

  SEED=$RANDOM
  # Start a background master connection; the commands below reuse its
  # control socket instead of opening new connections.
  ssh -fNM -S /tmp/ssh-%r@%h:%p-$SEED elvis
  rsync --rsh="ssh -S /tmp/ssh-%r@%h:%p-$SEED" gitup elvis:/tmp/
  ssh -S /tmp/ssh-%r@%h:%p-$SEED elvis hostname
http://www.semicomplete.com/blog/geekery/distributed-xargs.html?source=rss20
Parallelize so this can be done:
mdm.screen find dir -execdir mdm-run cmd {} \;
Maybe:
find dir -execdir par$ --communication-file /tmp/comfile cmd {} \;
=head2 Comfile
This will put a lock on /tmp/comfile. The number of locks is the
number of running commands. If the number is smaller than -j, it will
start a process in the background (cmd &); otherwise it will wait.
par$ --wait /tmp/comfile will wait until there are no more locks on
the file.
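
A minimal sketch of these semantics using flock(1): each running
command holds a shared lock on the comfile, and --wait maps to taking
an exclusive lock. All names are illustrative, and the counting
against -j is left out:

  comfile=/tmp/comfile

  run_cmd() {    # start "$@" in the background, holding a shared lock
    ( flock -s 9; "$@"; ) 9>>"$comfile" &
  }

  wait_all() {   # an exclusive lock is only granted once no shared
                 # locks remain, i.e. once every command has finished
    flock -x "$comfile" true
  }

  run_cmd sleep 2
  run_cmd sleep 3
  wait_all       # returns after about 3 seconds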
=head2 mutex
  mutex -n -l lockid -m max_locks [command]
  mutex -u lockid

-l lockid   lock using the lockid
-n          nonblocking
-m          maximal number of locks (default 1)
-u          unlock

If a command is given, it works like:

  mutex -l lockid -m max_locks; command; mutex -u lockid
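
A rough shell sketch of these semantics, using atomic mkdir(2) as the
lock primitive with one directory per lock slot; the file layout and
helper names are assumptions, not a specification:

  lockdir() { echo "/tmp/mutex-$1.$2"; }    # lockid, slot -> path

  mutex_lock() {   # mutex_lock lockid max_locks [-n]; prints the slot
    id=$1; max=${2:-1}; nonblock=$3
    while :; do
      slot=1
      while [ "$slot" -le "$max" ]; do
        # mkdir is atomic: each slot can be created by only one caller.
        if mkdir "$(lockdir "$id" "$slot")" 2>/dev/null; then
          echo "$slot"; return 0
        fi
        slot=$((slot + 1))
      done
      [ "$nonblock" = -n ] && return 1   # -n: all slots taken, give up
      sleep 1
    done
  }

  mutex_unlock() { rmdir "$(lockdir "$1" "$2")"; }    # lockid, slot

  slot=$(mutex_lock mylock 2) && { sleep 5; mutex_unlock mylock "$slot"; }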