Thursday, August 21, 2008

Clojure is here

Clojure is a Lisp-family, dynamic, functional programming language targeting the Java Virtual Machine, with excellent support for concurrent programming. Rich Hickey, Clojure's author, has prepared a few impressive presentations about the language, which are definitely worth watching. They cover not only Clojure itself, but also give deep insight into computer language design and concurrent programming in general.

There have been some attempts to compare Clojure with Erlang, although the two languages address different problem classes. Erlang has a strong telecommunications background - starting with the language name itself. It features its own, highly efficient virtual machine, which allows spawning hundreds of thousands of processes, and offers soft real-time performance. Most of its syntax comes from Prolog, since Erlang initially emerged as a Prolog dialect - everyone who has programmed in Prolog before (like me) will feel at home with Erlang. Finally, Erlang supports a programming model based on transparent distribution, which means that you can spawn processes across one or several machines without making any changes to the software. Processes (also called actors) share no memory or data and communicate purely through message passing.

Clojure does not provide its own virtual machine with the ability to handle millions of soft real-time processes. Instead, it compiles directly to JVM bytecode, which immediately gives you access to all Java classes and thousands of libraries written for the Java platform. Clojure is based on Lisp, which is much more popular for programming real-life applications than Prolog. Therefore, despite its (Lots (of Irritating (and Superfluous (Parentheses)))) syntax, Clojure may be faster for regular software developers to learn and adopt than Erlang. As regards concurrency, Clojure does not support distributed programming. Instead, it uses a Software Transactional Memory model. Clojure agents, unlike Erlang actors, change their state through actions instead of message passing. Actions can be committed inside transactions, which allows you, for example, to read the state of all agents within one transaction and thus obtain a consistent snapshot of the whole system (pretty tough to achieve in a distributed environment).

Erlang has a 20-year success story in the telecommunications industry and is especially suitable for building highly scalable, fault-tolerant, distributed application clusters that are expected to work under very heavy load. It also works on multiprocessor machines out of the box. However, when multi-core CPUs finally became standard in home computers, Erlang did not sweep away other, more anachronistic programming languages, despite its obvious advantages. I think this is because most home applications don't need many of the features Erlang has to offer. Users don't need 99.98% uptime, as they usually turn off their machines overnight. They don't need soft real-time processing, since they can easily tolerate a few seconds' delay in data processing. Finally, they don't need high scalability, as very few of them are geeks who connect their machines in clusters (who needs that anyway, if you have more and more cores in one box?). As Rich Hickey said in his discussion on STM:

"Scalability is not a universal problem - some systems need to handle thousands of simultaneous connections - most don't, some need to leverage hundreds of CPUs in a single application - most don't."

Clojure seems to meet the needs of desktop programmers slightly better than Erlang. It provides easy concurrency and all the qualities of functional programming, allows you to take advantage of the vast set of Java libraries, and lets you run your software without modification on every platform for which the Java runtime exists. Erlang is unbeatable on the server platform, but leaves a lot of space for desktop counterparts. Clojure is just the first of them. I am sure there will be more.

Thursday, August 14, 2008

MapReduce in Erlang

MapReduce is a framework for parallel computations, designed and developed by Google. It is often referred to not only as a platform, but also, more generally, as an algorithm (or a "programming model", as Google itself calls it). The main idea behind MapReduce is to farm out computations over a set of data to many separate processes (map), and then gather their results (reduce).
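Stripped of distribution, the model is just two ordinary higher-order functions. A toy sequential sketch in Erlang (sum of squares is my own example, not Google's):

```erlang
Map    = fun(X) -> X * X end,          %% per-element computation
Reduce = fun(X, Acc) -> X + Acc end,   %% folds the partial results together
Mapped = lists:map(Map, [1, 2, 3, 4]),
30 = lists:foldl(Reduce, 0, Mapped).   %% 1 + 4 + 9 + 16
```

The parallel versions below keep exactly this shape: only the map step is spread over processes.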

Implementing MapReduce in Erlang is unbelievably trivial. Joe Armstrong, Erlang's inventor and main architect, published a sample pmap/2 function, which can be used as a parallel equivalent of the standard Erlang lists:map/2:
pmap(F, L) ->
    S = self(),
    Pids = lists:map(fun(I) ->
                         spawn(fun() -> do_f(S, F, I) end)
                     end, L),
    gather(Pids).

gather([H|T]) ->
    receive
        {H, Ret} -> [Ret|gather(T)]
    end;
gather([]) ->
    [].

do_f(Parent, F, I) ->
    Parent ! {self(), (catch F(I))}.
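One detail worth noting: gather/1 uses a selective receive keyed on each pid, so results come back in list order, just like with lists:map/2. It will, however, block forever if a worker is killed before replying (do_f/3 only catches exceptions, not killed processes or downed nodes). A hedged sketch of a more defensive variant - gather_t/1 is my own name, and the 5-second limit is an arbitrary choice:

```erlang
gather_t([H|T]) ->
    receive
        {H, Ret} -> [Ret | gather_t(T)]
    after 5000 ->
        %% give up on this worker and mark its slot instead of hanging
        [{error, timeout} | gather_t(T)]
    end;
gather_t([]) ->
    [].
```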
Joe didn't provide any particular examples of using pmap/2, so I wrote a test function that computes a sequence of Fibonacci numbers. First, let's start an Erlang shell with the -smp flag, which enables it to use multiple CPUs and CPU cores:
erl -smp
Now define a Fibonacci function:
fib(0) -> 0;
fib(1) -> 1;
fib(N) when N > 0 -> fib(N-1) + fib(N-2).
and generate a test list as a sequence of numbers (for example from 0 to 40):
L = lists:seq(0,40).
To have every element of the list L processed by the fib/1 function, we use lists:map/2:
lists:map(fun(X) -> fib(X) end, L).
To do a parallel computation we need to use pmap/2 instead of lists:map/2:
pmap(fun(X) -> fib(X) end, L).
On my laptop with a Core2 Duo T5450 1.66 GHz, running the parallel version of the fib/1 test reduced execution time from 53.405534 to 31.354125 seconds (as measured by timer:tc/3). On a dual Xeon E5430 2.66 GHz (8 cores total) the speedup was from 25.635218 seconds to 10.017358. It is less than Joe Armstrong managed to achieve (he probably used more sophisticated test functions and different hardware than a regular PC), but it clearly shows how easy it is to parallelize an Erlang program - just replace every instance of map/2 with pmap/2 and you're done.
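For reference, here is a hedged sketch of how such timings can be taken with timer:tc/3. The module name ptest is my own invention - substitute whatever module holds your fib/1 and pmap/2:

```erlang
%% ptest is a hypothetical module containing fib/1 and pmap/2.
L = lists:seq(0, 40),
{Tmap, R}  = timer:tc(lists, map,  [fun ptest:fib/1, L]),
{Tpmap, R} = timer:tc(ptest, pmap, [fun ptest:fib/1, L]),
%% binding both results to R doubles as a check that pmap/2 returns
%% exactly the same list as lists:map/2; times are in microseconds
io:format("map: ~p us, pmap: ~p us~n", [Tmap, Tpmap]).
```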

I prepared a modified version of pmap/2, which spawns processes not only on the local machine, but also on all nodes in a cluster:
pmap(F, L) ->
    S = self(),
    Nod = [node()|nodes()],
    {Pids, _} = lists:mapfoldl(
        fun(I, {N1, N2}) ->
            %% when the working list N1 is exhausted, start over
            %% from the full node list N2 (simple round-robin)
            [H|T] = case N1 of
                        [] -> N2;
                        _  -> N1
                    end,
            {spawn(H, fun() -> do_f(S, F, I) end), {T, N2}}
        end, {Nod, Nod}, L),
    gather(Pids).
Using the modified pmap/2, I got similar results with a cluster of two Erlang shells running on a single Core2 Duo machine as with Joe's function in a single shell started in SMP mode. This version of pmap/2 also lets you rebalance computations by starting as many Erlang shells as there are CPUs (or CPU cores) on multiple machines, and joining them into one cluster.

Monday, August 11, 2008

Scheme & Termite: Erlang alternative?

Gambit-C is a renowned Scheme interpreter and compiler which can generate fast and portable C code. It ships with Termite, a library written in Scheme that implements an Erlang-like concurrency model for distributed programming. Let's see how it works.

First, create two nodes on two separate machines. Suppose machine one's IP address is 192.168.1.101, while machine two's is 192.168.1.102. Start a Scheme REPL on the first machine through the tsi script shipped with Termite (it enhances the standard Gambit gsi shell with Termite functions) and init it as node1:
(node-init (make-node "192.168.1.101" 4321))
Now start tsi on the second machine and init it as node2:
(node-init (make-node "192.168.1.102" 4321))
Stay at node2 shell and define an expression to identify node1:
(define node1 (make-node "192.168.1.101" 4321))
Now you can remotely spawn a process on node1:
(define p (remote-spawn node1
  (lambda ()
    (let loop ()
      (let ((x (?)))
        (! (car x) ((cdr x)))
        (loop))))))
The code above remotely spawns a process p on node1, which waits for messages in an infinite loop. When it receives a message x as a pair of a sender and an expression to evaluate, it evaluates the expression, sends the result back to the sender, and loops. To see it in action, evaluate the following expression in the node2 REPL, where p was defined:
(! p (cons (self) host-name))
It causes the command host-name to be sent to process p on node1. Now you can display the received result with:
(?)
The Erlang equivalent would look like this:
-module(p).

-export([start/0,loop/0]).

start() ->
    register(p, spawn(node1, p, loop, [])).

loop() ->
    receive
        {Pid, F} ->
            Pid ! (catch F()),
            loop()
    end.
You would call it with:
{p, node1} ! {self(), fun() -> N = node(), N end}.
and read its result with:
receive X -> X end.
So far so good. Now let's halt the Termite shell on node1:
,q
And try to invoke procedure p from node2 again:
(! p (cons (self) host-name))
The shell will segfault. Oops...

This is not the only problem with Termite. Gambit-C comes with gsc, a Scheme-to-C compiler, which I used to compile the sample script above. First, I had to go to /usr/local/gambc/current/lib/termite and compile Termite itself:
gsc termite
Then I prepared a sample file remote.scm:
(##include "~~/lib/gambit#.scm")
(##include "~~/lib/termite/termite#.scm")

(node-init (make-node "192.168.1.102" 4321))
(define node1 (make-node "192.168.1.101" 4322))

(define p (remote-spawn node1
  (lambda ()
    (let loop ()
      (let ((x (?)))
        (print (cdr x))
        (! (car x) ((cdr x)))
        (loop))))))

(! p (cons (self) host-name))
(print (?))
(newline)
And compiled it to executable binary code:
gsc -c remote.scm

gsc -link /usr/local/gambc/current/lib/termite/termite.c remote.c

gcc -o remote /usr/local/gambc/current/lib/termite/termite.c remote.c remote_.c \
    -lgambc -lm -ldl -lutil -I/usr/local/gambc/current/include -L/usr/local/gambc/current/lib
Now I started node1 on 192.168.1.101 again:
(node-init (make-node "192.168.1.101" 4321))
and tried to run remote binary code from 192.168.1.102:
./remote
Kaboom! Another segfault... It seems the compiled binary cannot handle message passing correctly, even though the same code works in the tsi interpreter.

I tried to create a slightly more sophisticated Gambit-C/Termite application, but unfortunately I failed to find any more useful documentation on Termite than its initial paper. There are also very few examples of Termite code, and the most interesting ones I found come from Dominique Boucher's blog (my test script is based on those examples).

On the other hand, a great advantage of Gambit-C over Erlang is that it can translate Scheme applications to plain C, which can then be compiled to standalone executables using gcc. Thanks to this feature I managed to compile a few Gambit examples with OpenWRT-SDK and run them on my home Linksys WRT54GL router with OpenWRT firmware - the only thing I had to do was replace the gcc command with mipsel-linux-gcc while compiling the gsc output files.

To sum up, Termite is a very interesting and ambitious project, and its authors deserve applause for their work. Unfortunately, it seems that it is still at an experimental stage and is not ready for production use yet. However, when it is finally done, it could be a very useful alternative to Erlang (thanks to the Gambit-C compiler) for programming embedded devices and systems where the Erlang VM does not exist. I think this post is also a good comment on Erlang and Termite.

Thursday, August 7, 2008

Web services need balance

So we already have a nice and fast web service based on PHP and Yaws, or a not so fast, but still nice, service powered by Java and Tomcat. We also have a powerful Mnesia storage back-end we can communicate with through a dedicated API. Now we are ready to scale.

There are many ways to balance a web cluster. You can use one of many software load balancers, configure a front-end web server as a reverse proxy, or even use a dedicated hardware load balancer. But hey, we have Erlang, so why bother with complicated software or spend money on expensive hardware? Writing a simple load balancer in Erlang took me about an hour, including testing, and I am not a very experienced Erlang programmer. This should give you some picture of how functional programming can improve your performance as a developer.

What the balancer actually does is check the system load on all nodes in a cluster and return the name of the least loaded one. The software is GPL-licensed and can be downloaded from here. You should compile it with:
erlc balancer.erl
and deploy it to all machines you want to monitor. Now you can check the system load of the current node:
balancer:load().
all of the nodes you are connected to:
balancer:show(nodes()).
all of the nodes including current node:
balancer:show([node()|nodes()]).
pick the least loaded one with:
balancer:pick(nodes()).
or with:
balancer:pick([node()|nodes()]).
Due to Erlang's nature, dead nodes are instantly removed from the nodes() list, so they are not queried. Additionally, the balancer filters out all nodes that returned an error, so you always get valid results, with no timeouts, deadlocks, etc. However, the result only says that a machine is up, so if a web server dies for some reason while the machine itself did not crash, the node will still appear on the list (which is quite obvious).
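The actual balancer.erl is linked above; for readers who just want the idea, here is a hedged sketch of roughly how such a module can be built. The module name, the best/1 helper and the cpu_sup-based load metric are my own choices, not necessarily what balancer.erl does:

```erlang
-module(balancer_sketch).
-export([load/0, show/1, pick/1, best/1]).

%% cpu_sup lives in OTP's os_mon application and must be started first;
%% avg1/0 returns the 1-minute load average multiplied by 256.
load() ->
    cpu_sup:avg1().

%% Ask every node for its load; unreachable nodes yield {badrpc, Reason}.
show(Nodes) ->
    [{N, rpc:call(N, cpu_sup, avg1, [])} || N <- Nodes].

pick(Nodes) ->
    best(show(Nodes)).

%% Pure helper: drop error results, sort by load, take the least loaded.
best(Pairs) ->
    Valid = [{N, L} || {N, L} <- Pairs, is_integer(L)],
    hd(lists:keysort(2, Valid)).
```

Note that best/1 returns a {Node, Load} pair, which matches the way the result is pattern-matched in the index.yaws example below.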

Since the balancer is written in Erlang, it seems natural to deploy Yaws as the front-end web server and use it to redirect users to the servers that are least loaded at the moment. Suppose we have each web server running on a different sub-domain (s1.host.domain, s2.host.domain, etc.), and each of them runs a single Erlang node. In this case our index.yaws on http://host.domain/ can look like this:
<html>
<erl>
out(Arg) ->
    {Node, _} = balancer:pick(nodes()),
    [_|[Host|_]] = string:tokens(atom_to_list(Node), "@"),
    Url = string:concat("http://", Host),
    {redirect, Url}.
</erl>
</html>
Connecting to http://host.domain/ will now automatically redirect you to the fastest server. It is your choice whether to keep users bound to one back-end server or to refactor the links on your web site so they go through the redirector each time a user clicks one. In the latter case, if you serve dynamic content, you can use Mnesia to store user sessions so users do not get logged out if they suddenly switch from one back-end server to another.

Mnapi: Mnesia API for Java and PHP

Mnesia is a scalable database system designed to work as a storage system for large, distributed applications. In my previous post I showed you how to set up a Mnesia cluster using the Erlang shell. Unfortunately, Mnesia cannot be easily accessed from languages other than Erlang. If you are a web developer, there is even an MVC framework for you - Erlyweb. But what if you are a mainstream programmer who would like to use the power of Mnesia, but is not willing to learn another language and change your whole way of thinking in order to understand the functional programming philosophy?

This is why I created Mnapi - an API for Java and PHP which provides basic methods to create, fetch, update and delete records in Mnesia. It is licensed under the LGPL, which means that you can use it in any commercial or non-commercial application, and you are only obliged to publish the source code of Mnapi itself if you extend or modify it (the license does not affect an application which makes use of it). You can download Mnapi here.

Please bear in mind that Mnesia does not support creating multiple databases or schemas within one instance, joining tables, etc. - and the API reflects those restrictions. You can learn more about it by reading the documentation attached to the source code and running the example applications, so I will not describe it here in detail. To start using Mnapi, first compile the Erlang driver for Mnesia:
erlc mnapi.erl

then start an Erlang shell:
erl -sname erl -setcookie mysecretcookie

and run Mnapi server from the shell:
mnapi:start().

Now you can use Java or PHP to talk to Mnesia through Mnapi libraries.

Note that to use Mnapi from PHP you first need to install and configure the Erlang extension for PHP. Read the instructions provided with the extension to learn how to configure your PHP installation.

To compile the Java API you need Jinterface. If you cannot find Jinterface in the lib folder of your standard Erlang installation, you can either download the Erlang source code distribution or install it as a package from CEAN.

Have fun!

Erlang tips and tricks: Mnesia

Mnesia is one of Erlang's killer applications. It's a distributed, fault-tolerant, scalable, soft real-time database system which makes building data clusters really easy. It is not designed as a fully-blown relational database, and it should not be treated as a replacement for one. Although Mnesia supports transactions (in a much more powerful way than traditional databases, as any Erlang function can be a transaction), it does not support SQL or any subset of it. Mnesia is best suited for distributed, easily scalable data storage shared by a large cluster of servers. The best-known examples are ejabberd - a famous distributed jabber/xmpp server - and cacherl - a distributed data caching system compatible with the memcached interface.

Setting up a cluster of Mnesia nodes is easy. First you need to set up a cluster of Erlang nodes as I described in my post Erlang tips and tricks: nodes. Start Mnesia instances on all of them:
mnesia:start().
Now choose one of the nodes as the initial Mnesia server node. It can be any node, as Mnesia works in a peer-to-peer model and does not use any central management point. Go to the chosen Erlang shell and move the database schema to disc to make it persistent:
mnesia:change_table_copy_type(schema, node(), disc_copies).
You can now create a new table, for example:
mnesia:create_table(test, [{attributes, [key, value]}, {disc_copies, [node()]}]).
which creates table test with two columns: key and value. The disc_copies attribute means that the table will be held in RAM, but a copy of it will also be stored on disc. If you don't store any persistent data in your table, you can use the ram_copies attribute for better performance. On the other hand, if you want to spare some of your system memory, you can use the disc_only_copies attribute, but at the cost of reduced performance.
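With the table in place, reads and writes are just ordinary functions run through mnesia:transaction/1. A minimal example against the test table created above (the key and value are arbitrary):

```erlang
%% insert (or update) a record inside a transaction
{atomic, ok} = mnesia:transaction(fun() ->
    mnesia:write({test, mykey, "some value"})
end),
%% read it back; mnesia:read/1 returns a list of matching records
{atomic, [{test, mykey, "some value"}]} = mnesia:transaction(fun() ->
    mnesia:read({test, mykey})
end).
```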
Now let's add a second node to Mnesia:
mnesia:change_config(extra_db_nodes, [node2@host.domain]).
Of course replace node2@host.domain with the name of a node you want to add (or a list of nodes: [node2@host.domain, node3@host.domain, ...]).
Now switch to the Erlang shell on the node you have just added and move its schema to disc, so Mnesia remembers it is part of the cluster:
mnesia:change_table_copy_type(schema, node(), disc_copies).
You can now create a copy of table test on this node:
mnesia:add_table_copy(test, node(), disc_copies).
Now you can add a third node, a fourth node, etc. the same way you added the second one. When you have many nodes in the cluster, you can keep tables as disc_copies on only some of the nodes as backup, and use ram_copies on the other nodes to improve their performance. It generally makes sense to keep tables on disc on only one of the nodes per single machine.

To remove node@host.domain from the cluster stop Mnesia on that node:
mnesia:stop().
Now switch to any other node in the cluster and do:
mnesia:del_table_copy(schema, node@host.domain).
Mnesia is strongly fault-tolerant, which means that generally you don't need to worry when one of your nodes crashes. Just restart it and reconnect it to the cluster - the Mnesia node will synchronize itself and fix all broken and out-of-date tables. I like to imagine Mnesia as a database equivalent of the T-1000, which, even heavily damaged or broken into pieces, every time reassembles itself into its original form.

Unfortunately, Mnesia has its limits, of which the storage limit seems to be the most troublesome.

Tuesday, August 5, 2008

Erlang tips and tricks: interactive shell

In my previous post I described issues you can run into when configuring an Erlang cluster. Now let's move a step further.

One of the most impressive Erlang features is its unique ability to do hot code swapping. It means you can exchange the code of a running system without stopping it! Pretty neat, huh? Let's have a look at a very simple server that just loops and displays its status when asked:
-module(test).
-export([start/0, loop/0]).

start() ->
    register(test, spawn(test, loop, [])).

loop() ->
    receive
        status ->
            io:format("~n Status OK ~n"),
            loop();
        reload ->
            test:loop();
        quit ->
            ok;
        _ ->
            loop()
    end.
The main loop waits for signals from clients and according to the signal does one of the following:
1) On status it displays a text "Status OK" and continues to loop.
2) On reload it hot swaps the server code.
3) On quit it finishes with ok.
4) On any other signal it just loops.
After saving this code as test.erl and compiling with:
erlc test.erl
you can start a REPL shell and run the server with:
test:start().
The start() function spawns a new Erlang process and registers it on the currently running node, so you can call it by name instead of by its process ID, like this:
test ! status.
If you play around with Erlang you already know that. But how about calling test from another node? When you connect another node to the cluster and try to call it, you will get an error saying that test is not a registered name. The answer is simple, but unfortunately not so easy to find in the Erlang documentation: you need to address it with a tuple (pair) containing both the node name and the process name:
{test, test1@host.domain} ! status.
So now comes the time for hot swapping. You can edit the source file, change "Status OK" to any other text of your choice and recompile - of course without leaving the interactive shell, since our server is supposed to have 100% uptime :-)
Afterwards you switch back to REPL, do
test ! reload.
then
test ! status.
and you see... no change whatsoever. Why? Because the Erlang shell caches executable code. If you want to load the modified version of the code you just compiled into the REPL, you need to tell it to do so with:
l(test).
If you have more nodes in your cluster where the code exists and you want to refresh the Erlang cache on all of them, use:
nl(test).
You can also use the latter command to distribute your code across the cluster, without the need to send the compiled binary test.beam to all nodes through FTP or SCP. Sweet, isn't it?

Monday, August 4, 2008

Erlang tips and tricks: nodes

The title might be a bit exaggerated, but if you have just started your adventure with Erlang I would like to provide you with a couple of hints that can spare you a serious headache.

The basic tool for working with Erlang is its REPL shell, started in terminal mode with the erl command. The name REPL comes from the read, eval, print, loop cycle and is a commonly used term for an interactive shell among functional languages. Since Erlang has been developed with distributed programming in mind, you can start as many shells as you like and make them communicate with each other. The basic rule you have to remember is that you can interface only shells that share the same cookie. A cookie is simply a string that acts as a shared password for all nodes in a cluster, whether they run on the same computer or on different machines across the network. You can set it either from the command line when starting the REPL, using the -setcookie parameter, or from the Erlang shell itself:
erlang:set_cookie(node(),mysecretcookie).
You can also edit the .erlang.cookie file in your home directory so you don't have to set it up every time you start the shell. However, this method has some nasty side effects, which is why it is not recommended. The first is that all Erlang applications you start from your user account, including all REPLs, will share the same cookie, which is not always the desired behaviour. Secondly, it is very likely that you will forget to edit this file on another machine you move your Erlang application to (or will not have enough permissions to do it), and your application will not work in environments other than yours.

So now that you have started your shells and set up a shared cookie, you may want to check the connectivity between them. But first you need to know how to address them - and here's where the real fun begins. Erlang allows you to call a node (shell instance) by either a short or a long name (the "-sname" and "-name" command line parameters, respectively). A short name is a simple name like "test", while a long name is a fully qualified domain name like "test@host.domain". Short names are shared across machines in the same domain, while FQDN names can (theoretically) be used across the whole Internet. So you start a short-name REPL with:
/usr/bin/erl -sname test1 -setcookie mysecretcookie
So far so good. Now try another one within the same domain:
/usr/bin/erl -sname test2 -setcookie mysecretcookie
And now you want to ping test2 from test1 to check if everything is OK. So you input the following command in test1 REPL:
net_adm:ping(test2).
And you see "pang", which means that something went wrong. So you start to tear your hair out, until you realize that test1 is not a node name in Erlang! Now go back to your REPL and look carefully at the prompt. You will probably see something like:
(test1@localhost)1>
Now try to ping test2 again, but this time use a full name as displayed in REPL prompt:
net_adm:ping(test2@localhost).
And what you should now see is "pong", which means the nodes can see each other. Note that test2@localhost, although it doesn't seem so, is still a short name, not a fully qualified domain name (it lacks a domain part).

You can always see a list of all other nodes in the cluster by issuing the command:
nodes().
in the REPL. But remember one important thing: new nodes are not seen by Erlang automatically. If you start a new Erlang instance and want it to show up on the nodes() list, you have to ping one of the nodes already in the cluster. Information about the new node will then be automatically propagated among all other nodes. To avoid this, you can use the .hosts.erlang file in your home directory, whose role is similar to that of .erlang.cookie (including the side effects) - it holds the list of all nodes which will be automatically informed about every new Erlang instance started from your user account, for example:
'test1@localhost'.
'test2@localhost'.
You need one empty line at the end of the file (look here for more information about the syntax).

So here are the basic things you should remember about when building an Erlang cluster:
1) Choose carefully between short and fully qualified domain names, since the former cannot communicate with the latter (to make it simple: you cannot mix short- and long-named nodes in one Erlang cluster).
2) When using more than one machine in your cluster, make sure all DNS records on all machines are set up properly to avoid communication problems.
3) Use the same cookie for all Erlang instances you want to put in a cluster.
4) When you start a new node always inform the cluster about it by pinging one of the running nodes.
5) Open port 4369 TCP/UDP on your firewall for all clustered machines - it is used by the epmd daemon, which maps node names to the ports used for inter-node Erlang communication.
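Point 5 is easy to verify from an Erlang shell: net_adm:names/0 asks the local epmd daemon which nodes it has registered (the exact list depends on what is running on your machine):

```erlang
case net_adm:names() of
    {ok, Names} ->
        %% Names is a list of {NodeName, Port} tuples
        io:format("epmd knows: ~p~n", [Names]);
    {error, Reason} ->
        %% typically means no epmd daemon is running on this host
        io:format("epmd not reachable: ~p~n", [Reason])
end.
```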

PHP Zend Framework on Yaws

Web developers looking for an efficient web server should definitely give Yaws a chance - a lightweight and very fast open source web server developed in Erlang. For those who do some work in Erlang, this should be enough of a recommendation, since Erlang has been designed with high performance and scalability in mind. Those who haven't heard of Yaws and Erlang should have a look at this Yaws vs Apache comparison to get the idea.

Unfortunately, Yaws is not as mature as Apache in terms of available functionality. The one thing I particularly missed was easily definable rewrite rules, which would allow me to set up my favourite PHP framework (namely Zend Framework) to work under the control of Yaws. Browsing through the Yaws Wiki I found a post explaining how to use the rewrite feature in Yaws. I used it as a starting point for my own rewriting module, which allowed me to run ZF on Yaws. I published it on Trapexit as free code under no particular licence, because - as I said - I didn't write it from scratch, but built on a public domain work. Here's how it works (I assume you already have Erlang and Yaws installed on your system):

First you need to configure your MVC environment. I suggest following the manual at Akra's DevNotes to create the basic directory structure. Supposing your Yaws document directory is /opt/yaws, you'll have to create the folders /opt/yaws/application, /opt/yaws/library and /opt/yaws/public. Also create a /opt/yaws/log folder for web server logs. Then create /opt/yaws/ebin and put rewriter.erl there. Compile the rewriter module from the command line with
erlc rewriter.erl
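The Trapexit download contains the actual module; for the curious, here is a hedged sketch of roughly what a Yaws rewrite module of this kind can look like. The module name, the include path and the extension check are my own choices - the published rewriter.erl differs. Yaws calls arg_rewrite/1 with an #arg{} record (defined in yaws_api.hrl) and serves whatever #arg{} we return:

```erlang
-module(rewriter_sketch).
%% adjust the include path to where your Yaws installation keeps its headers
-include("yaws_api.hrl").
-export([arg_rewrite/1]).

arg_rewrite(Arg) ->
    Req = Arg#arg.req,
    {abs_path, Path} = Req#http_request.path,
    %% naive check: anything that is not an explicit .php/.yaws script
    %% is routed to the ZF bootstrap (real code must also handle static
    %% assets and query strings)
    case filename:extension(Path) of
        ".php"  -> Arg;
        ".yaws" -> Arg;
        _ ->
            NewReq = Req#http_request{path = {abs_path, "/index.php"}},
            Arg#arg{req = NewReq}
    end.
```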
Create the Yaws configuration file yaws.conf and tell the web server to use PHP scripts and the rewrite module
php_exe_path = /usr/bin/php-cgi
ebin_dir = /opt/yaws/ebin
logdir = /opt/yaws/log

<server localhost>
port = 8080
listen = 0.0.0.0
docroot = /opt/yaws/public
allowed_scripts = yaws php
arg_rewrite_mod = rewriter
</server>
Finally, run your web server
yaws --conf yaws.conf
and go to http://localhost:8080 to see if it works.

From now on, all client requests to localhost will be redirected through a bootstrap file /opt/yaws/public/index.php. Make sure all paths in your PHP scripts lead to correct locations.

Tested on Erlang R11B with yaws 1.68 on Linux Mint 4.0 32-bit and yaws 1.73 on Ubuntu Server 8.04 64-bit with Zend Framework 1.5.

Good luck!