* Re: a command "spool", ie queueing commands?
[not found] ` <1122969644l.1939l.0l@hyttynen>
@ 2005-08-02 13:14 ` asterr
2005-08-04 0:57 ` Emiliano Castagnari
1 sibling, 0 replies; 6+ messages in thread
From: asterr @ 2005-08-02 13:14 UTC (permalink / raw)
To: urgrue; +Cc: Adrian C., linux-admin
See the batch command.
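For example (just a sketch, reusing the paths from your mail), each job can be
piped to batch as it comes up:

$ echo "mv /path/hugefiles /other/path/hugefiles" | batch
$ echo "updatedb" | batch
$ atq

batch feeds the jobs into the at queue and only starts them when the system
load average has dropped below a threshold, so it roughly approximates a
queue, although it doesn't strictly guarantee one-job-at-a-time execution.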
On Tue, 2 Aug 2005, urgrue wrote:
> > Could you please give us a real-life example?
>
> Well, for example imagine you're doing some maintenance on a server.
> You want to move some big files, run updatedb by hand, run a script
> that parses a bunch of files, and finally merge some video files.
>
> Because these are very disk-heavy operations, it's much better and
> quicker if they are not run at once, but are run sequentially instead.
>
> So normally you'd probably use "at" to run one now, another in an hour,
> etc. But this is dumb because you simply have to guess whether the
> job is done or not. You could also run them in sequence by putting them
> all in the same "at" job, but it's not possible to add or remove
> commands from an "at" job once it's created. Also, "at" is "user-aware",
> i.e. if two users set up two at jobs they will happily run at the same
> time. What I propose would add all commands to the same queue, even if
> a different user is the one launching the command (but of course the
> commands should be run AS the user who started it).
>
> So what I'm thinking is something like this (let's assume the command is called
> "cs" for "command spooler"):
> me:~> cs
> Usage: cs [add|remove|show] <command>
> me:~> cs show
> no entries
> me:~> cs add mv /path/hugefiles /other/path/hugefiles
> command added to queue
> me:~> cs show
> 1 [running] : mv /path/hugefiles /other/path/hugefiles
> me:~> cs add updatedb
> command added to queue
> me:~> cs add /path/my_file_parsing_script /path/lots/of/files
> command added to queue
> me:~> cs add avimerge -i clip1.avi clip2.avi clip3.avi -o allclips.avi
> command added to queue
> me:~> cs show
> 1 [running] : mv /path/hugefiles /other/path/hugefiles
> 2 [queued] : updatedb
> 3 [queued] : /path/my_file_parsing_script /path/lots/of/files
> 4 [queued] : avimerge -i clip1.avi clip2.avi clip3.avi -o allclips.avi
>
>
> As I do lots of disk-heavy operations, I would find this INCREDIBLY
> useful.
>
> urgrue
>
>
> > --Adrian.
> >
> > On 8/2/05, urgrue <urgrue@tumsan.fi> wrote:
> > > I realized it would be useful to be able to add commands into a
> > > command queue, from where they would get executed in sequence. For
> > > example, numerous large hard disk-intensive operations would be
> > > better executed in sequence rather than at once.
> > > In other words, exactly like a printer spool, but for commands. You
> > > could add commands to it, list the queue, and remove from the queue.
> > >
> > > does anyone know of something like this?
> > >
> > >
^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: a command "spool", ie queueing commands?
[not found] ` <1122969644l.1939l.0l@hyttynen>
2005-08-02 13:14 ` a command "spool", ie queueing commands? asterr
@ 2005-08-04 0:57 ` Emiliano Castagnari
2005-08-04 7:26 ` urgrue
1 sibling, 1 reply; 6+ messages in thread
From: Emiliano Castagnari @ 2005-08-04 0:57 UTC (permalink / raw)
To: urgrue; +Cc: Adrian C., linux-admin
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On [ Tue 02, Aug 05 - 08:00 ], urgrue wrote:
> > Could you please give us a real-life example?
>
> Well, for example imagine you're doing some maintenance on a server.
> You want to move some big files, run updatedb by hand, run a script
> that parses a bunch of files, and finally merge some video files.
>
> Because these are very disk-heavy operations, it's much better and
> quicker if they are not run at once, but are run sequentially instead.
>
> So normally you'd probably use "at" to run one now, another in an hour,
> etc. But this is dumb because you simply have to guess whether the
> job is done or not. You could also run them in sequence by putting them
> all in the same "at" job, but it's not possible to add or remove
> commands from an "at" job once it's created. Also, "at" is "user-aware",
> i.e. if two users set up two at jobs they will happily run at the same
> time. What I propose would add all commands to the same queue, even if
> a different user is the one launching the command (but of course the
> commands should be run AS the user who started it).
Hi !!
You could try something like:
$ command1 && command2 && command3
&& is a shell metacharacter: the next command is executed only if the
previous command exits successfully (i.e. its return code is zero).
It works in bash and sh; I don't know if it'll work on other shells.
The only drawback I can see is that if any of these commands finishes with an
error, that will be the end of your queue :(
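If you'd rather have each command run no matter whether the previous one
failed, you can separate them with ; instead of &&:

$ command1 ; command2 ; command3

They still run one after the other, just without the exit-status check.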
Another way is to simply develop it yourself ... it doesn't seem like a very
hard thing to code :)
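For instance, a very minimal sketch of such a spooler (the spool directory,
file naming and script name below are made up for illustration) could be a
worker loop that runs job files from a directory, oldest first:

#!/bin/sh
# cs-worker: run queued job files from a spool directory, one at a time,
# oldest first. Each job file holds one shell command line.
SPOOL=${SPOOL:-/var/spool/cs}
mkdir -p "$SPOOL"
while true; do
    # pick the oldest job file, if there is one
    job=$(ls -1tr "$SPOOL" 2>/dev/null | head -n 1)
    if [ -n "$job" ]; then
        sh "$SPOOL/$job"        # run the queued command
        rm -f "$SPOOL/$job"     # and drop it from the queue
    else
        sleep 5                 # queue empty, poll again shortly
    fi
done

Adding a job would then just mean writing a file into the spool, e.g.:

$ echo "updatedb" > /var/spool/cs/$(date +%s).job

and "cs show" / "cs remove" boil down to ls and rm on that directory. Running
each job as the user who submitted it, as urgrue wants, would of course take
more work.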
Cheers !!
- --
# Emiliano Castagnari
[ -========================================================================- ]
('> # Debian Sarge - GNU/Linux - lambda 2.6.10-l01 #
Gnu / (/) # JID: torian@lug.fi.uba.ar
'-- - } [ Free your mind - Free your code ] { -
[ -========================================================================- ]
[ GPGKey: http://gpg.lug.fi.uba.ar:11371/pks/lookup?op=get&search=0x0B8AF76F ]
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.5 (GNU/Linux)
iD8DBQFC8WfmbsFzaguK928RAuTOAKC3YJiRnOc1jU4aZC5VFyz0N38PgACgpxWT
Ac1nvXdKaiwMSAWhzSyWFmc=
=h9Lm
-----END PGP SIGNATURE-----
^ permalink raw reply [flat|nested] 6+ messages in thread