Best practice for variables shared between processes?

Peter Edwards peter at dragonstaff.co.uk
Mon Sep 20 19:09:45 BST 2010


And I spotted this tonight
http://octobot.taco.cat/
"Supports AMQP/RabbitMQ, Beanstalk, and Redis PubSub. Others easily addable"

On Sep 20, 2010 6:51 PM, "Mark Fowler" <mark at twoshortplanks.com> wrote:

On 20 Sep 2010, at 17:30, Roger Burton West <roger at firedrake.org> wrote:
> I wish to have two process...

You could use threads instead of processes and use a shared variable. Of
course, this would prevent you from starting and stopping the processes
independently, and it would complicate safely restarting the process(es), as
you'd have to somehow either wait for the queue to empty or serialise it
(and woe be unto you if one or other of the threads errors out). I guess it
depends entirely on how 'repeatable' your jobs are and whether you can
re-run them on failure.
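
If you do go that way, a minimal sketch of the shared-queue approach
(untested; assumes a threaded perl, and uses an undef sentinel to tell the
consumer to stop):

    use strict;
    use warnings;
    use threads;
    use Thread::Queue;

    my $queue = Thread::Queue->new;

    my $producer = threads->create(sub {
        $queue->enqueue($_) for 1 .. 10;
        $queue->enqueue(undef);    # sentinel: tells the consumer to stop
    });

    my $consumer = threads->create(sub {
        # dequeue blocks until something is available
        while (defined(my $job = $queue->dequeue)) {
            print "processing job $job\n";
        }
    });

    $_->join for $producer, $consumer;

Thread::Queue does the locking for you, which beats sprinkling lock()
around a shared array by hand.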

> What is the accepted best practice for achieving this effect in modern
> idiomatic Perl?

All the cool kids are playing with redis these days (a nosql thingy that
does particularly well with queuing amongst other things) if you can stomach
an extra process.
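
By way of illustration, the whole queue can amount to a Redis list
(untested; assumes a reasonably recent Redis CPAN client and a server on
127.0.0.1:6379, and the 'jobs' key name is made up):

    use strict;
    use warnings;
    use Redis;

    my $redis = Redis->new(server => '127.0.0.1:6379');

    # Producer: push jobs onto a list acting as a queue.
    $redis->lpush('jobs', "job $_") for 1 .. 10;

    # Consumer (normally a separate process): block until a job
    # arrives. brpop returns (key, value); timeout 0 = block forever.
    while (my ($key, $job) = $redis->brpop('jobs', 0)) {
        print "processing $job\n";
    }

Because the list lives in the Redis server, either end can be restarted
independently without losing queued jobs.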

The more tried and tested (if slower, and requiring a handy MySQL server)
solution has already been mentioned: TheSchwartz. This is the king of the
Perl 'reliable' home-baked queue solutions: it survives happily across a
server restart and/or jobs crashing out, and it won't lose things without
retrying them a configurable number of times.
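
The shape of it, roughly (untested; the DSN, credentials and worker name
below are placeholders, and you need the schema shipped with the
distribution loaded into MySQL first):

    use strict;
    use warnings;

    package My::Worker;
    use base 'TheSchwartz::Worker';

    # work() is called with each grabbed job; a job that dies or never
    # calls completed() gets retried later.
    sub work {
        my ($class, $job) = @_;
        print "processing item ", $job->arg->{item}, "\n";
        $job->completed;
    }

    package main;
    use TheSchwartz;

    my $client = TheSchwartz->new(
        databases => [
            { dsn => 'dbi:mysql:theschwartz', user => 'queue', pass => 'secret' },
        ],
    );

    # Producer: stash a job in the database...
    $client->insert('My::Worker', { item => 42 });

    # ...consumer (normally its own long-running process): grab and
    # run jobs until killed.
    $client->can_do('My::Worker');
    $client->work;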

A more lightweight solution that tends towards more realtime stuff would be
Gearman. This isn't nearly as reliable (i.e. it hands jobs off to workers
rather than keeping a centralised persisted queue) but might be more suited
to what you're doing.
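
Something like this (untested; assumes a gearmand on 127.0.0.1:4730 --
adjust to wherever yours listens -- and the function name is invented):

    # worker.pl -- register a function with gearmand and serve jobs
    use strict;
    use warnings;
    use Gearman::Worker;

    my $worker = Gearman::Worker->new;
    $worker->job_servers('127.0.0.1:4730');
    $worker->register_function(reverse_string => sub {
        my $job = shift;
        return scalar reverse $job->arg;
    });
    $worker->work while 1;

    # client.pl -- hand a job off and block for the result
    use strict;
    use warnings;
    use Gearman::Client;

    my $client = Gearman::Client->new;
    $client->job_servers('127.0.0.1:4730');
    my $result = $client->do_task(reverse_string => 'hello');
    print $$result, "\n";    # do_task returns a ref to the result

Note that if no worker is connected when the client submits, the job simply
doesn't happen; nothing persists it, which is the trade-off above.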

YMMV depending on the nature of what you're doing; there isn't a
one-size-fits-all solution.

