Best practice for variables shared between processes?

Mark Fowler mark at twoshortplanks.com
Mon Sep 20 18:45:46 BST 2010


On 20 Sep 2010, at 17:30, Roger Burton West <roger at firedrake.org> wrote:

I wish to have two processes, a "producer" (which will create files) and
a "consumer" (which will do something with them), running
simultaneously. Ideally the producer would push filenames to a list as
it finishes producing them, while the consumer would shift them off the
same list (or loop-wait, if the list is empty).


You could use threads instead of processes and use a shared variable. Of
course, this would prevent you from starting and stopping the producer and
consumer independently, and would complicate restarting them safely, as
you'd have to either wait for the queue to empty or serialise it (and woe
betide you if one thread or the other errors out). I guess it depends
entirely on how 'repeatable' your jobs are and whether you can re-run
them on failure.
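If you do go that way, the core Thread::Queue module does the
push/shift-with-blocking for you. Something like this, as a rough sketch
(the filenames and loop are made up for illustration):

use strict;
use warnings;
use threads;
use Thread::Queue;

my $queue = Thread::Queue->new;

# Producer: make files, push their names onto the shared queue.
my $producer = threads->create(sub {
    for my $n (1 .. 10) {
        my $filename = "file$n.dat";
        # ... actually produce the file here ...
        $queue->enqueue($filename);
    }
    $queue->enqueue('');    # empty string as an end-of-work marker
});

# Consumer: dequeue blocks while the queue is empty, so no busy-waiting.
my $consumer = threads->create(sub {
    while (1) {
        my $filename = $queue->dequeue;
        last if $filename eq '';    # producer says it's finished
        # ... do something with the file ...
        print "consumed $filename\n";
    }
});

$_->join for $producer, $consumer;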


What is the accepted best practice for achieving this effect in modern
idiomatic Perl?


All the cool kids are playing with redis these days (a NoSQL thingy that
does particularly well at queuing, amongst other things), if you can stomach
running an extra process.
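A redis-backed version looks roughly like this with the Redis module from
CPAN (the key name and server address are made up, and I'm going from memory
on the API, so check the docs):

use strict;
use warnings;
use Redis;

my $redis = Redis->new(server => '127.0.0.1:6379');

# Producer side: push each finished filename onto a list.
$redis->lpush('files_to_process', $filename);

# Consumer side: brpop blocks until something turns up (timeout 0 = forever).
while (1) {
    my (undef, $filename) = $redis->brpop('files_to_process', 0);
    # ... do something with the file ...
}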

The more tried and tested (if slower, and requiring a handy mysql server)
solution has already been mentioned: TheSchwartz. This is the king of the
Perl 'reliable' home-baked queue solutions: it happily survives server
restarts and jobs crashing out, and won't drop a job without retrying it
a configurable number of times.
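From memory, the shape of a TheSchwartz setup is roughly this (the DSN,
credentials and worker class name are all invented for the example, so
treat it as a sketch rather than gospel):

use strict;
use warnings;
use TheSchwartz;

# Submitting side: insert a job naming the worker class that will run it.
my $client = TheSchwartz->new(databases => [{
    dsn  => 'dbi:mysql:theschwartz',
    user => 'schwartz',
    pass => 'secret',
}]);
$client->insert('MyApp::Worker::ProcessFile', { filename => $filename });

# Worker side: a class that knows how to run one kind of job.
package MyApp::Worker::ProcessFile;
use base 'TheSchwartz::Worker';

sub work {
    my ($class, $job) = @_;
    my $filename = $job->arg->{filename};
    # ... do something with the file ...
    $job->completed;    # mark the job done so it isn't retried
}

package main;
$client->can_do('MyApp::Worker::ProcessFile');
$client->work;    # loop forever, grabbing and running jobs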

A more lightweight solution that tends towards realtime stuff would be
Gearman. It isn't nearly as reliable (i.e. it hands jobs off to workers
rather than keeping a centralised, persisted queue), but it might be better
suited to what you're doing.
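Again roughly, and from memory (the job server address and function name
are invented for the example):

use strict;
use warnings;
use Gearman::Client;
use Gearman::Worker;

# Submitting side: fire the filename at the job server and carry on.
my $client = Gearman::Client->new;
$client->job_servers('127.0.0.1:4730');
$client->dispatch_background('process_file', $filename);

# Worker side: register a handler and wait for work.
my $worker = Gearman::Worker->new;
$worker->job_servers('127.0.0.1:4730');
$worker->register_function(process_file => sub {
    my $job = shift;
    my $filename = $job->arg;    # the raw workload string we sent
    # ... do something with the file ...
    return 1;
});
$worker->work while 1;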

YMMV depending on the nature of what you're doing; there isn't a
one-size-fits-all solution.

