Wednesday, December 30, 2009

Go memcache client package

Recently I needed to access memcached from Go. I couldn't find a suitable package anywhere on the web, so I created one. Gomemcache provides basic operations to store, retrieve and delete data using the memcache text protocol. You can download the package from its GitHub repository.
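For reference, the text protocol itself is simple enough to sketch by hand. Below is a minimal, hypothetical illustration (these helper names are mine, not Gomemcache's actual API) of how the storage, retrieval and deletion commands are framed on the wire:

```go
package main

import "fmt"

// BuildSet frames a storage command according to the memcache text protocol:
// set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
func BuildSet(key string, flags uint32, exptime int, value []byte) string {
	return fmt.Sprintf("set %s %d %d %d\r\n%s\r\n", key, flags, exptime, len(value), value)
}

// BuildGet frames a retrieval command: get <key>\r\n
func BuildGet(key string) string {
	return fmt.Sprintf("get %s\r\n", key)
}

// BuildDelete frames a deletion command: delete <key>\r\n
func BuildDelete(key string) string {
	return fmt.Sprintf("delete %s\r\n", key)
}

func main() {
	fmt.Printf("%q\n", BuildSet("answer", 0, 0, []byte("42")))
	fmt.Printf("%q\n", BuildGet("answer"))
	fmt.Printf("%q\n", BuildDelete("answer"))
}
```

A real client sends these frames over a TCP connection to the server and parses the textual responses (STORED, VALUE, DELETED and so on).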

Edit: Gomemcache is now distributed under the terms of the LGPL license with a static linking exception. This means that you can link it statically with independent modules to produce an executable, regardless of the license terms of those independent modules, and copy and distribute the resulting executable under terms of your choice, provided that you meet the terms and conditions of the LGPL for the package itself. The unmodified GNU Lesser General Public License does not allow such unproblematic static linking with proprietary code.

Sunday, December 27, 2009

Go Programming Language Resources

Go is a fairly new programming language, so at the moment it is hard to find interesting projects associated with it. Go Programming Language Resources is a web site that tries to gather them in one place. It also contains links to mailing lists, discussion groups and IRC archives, as well as Go ports to different operating systems. You can also find a few interesting development tools there, along with syntax highlighting files for the most popular programmers' editors. If you are interested in Go, this site is definitely worth adding to your bookmarks.

Thursday, December 24, 2009

Go - a new programming language from Google

Go is a new programming language developed at Google which, according to its FAQ, "was born out of frustration with existing languages and environments for systems programming". Some people ask whether the world needs another programming language, but those who know that Go's authors include Ken Thompson and Rob Pike, the famous Unix hackers, usually don't. If any language has a chance to replace plain C in systems programming, Go is a perfect candidate. It features a syntax derived from the C tree (which makes the learning curve gentle for most programmers), fast compilation to native machine code, and fast execution of the compiled binaries. Additionally, Go provides a built-in garbage collector and language constructs that simplify parallel programming, especially the concept of goroutines - regular program functions executed concurrently. Goroutines can communicate with each other and with the main thread through channels, which can also be used for synchronization.
I have prepared a few simple programs to compare Go with C in terms of speed and to play with concurrent programming in Go. First, let's have a look at a typical recursive Fibonacci example. Below is a C version:
#include <stdio.h>
#include <stdlib.h>

int fib(int n) {
  if (n < 2) {
    return n;
  }
  return fib(n - 2) + fib(n - 1);
}

int main(int argc, char *argv[]) {
  if (argc < 2) {
    fprintf(stderr, "usage: %s <n>\n", argv[0]);
    return 1;
  }
  int n = atoi(argv[1]);
  printf("%d\n", fib(n));
  return 0;
}
and here is a Go version:
package main

import (
  "flag"
  "fmt"
)

var f = flag.Int("f", 1, "Fibonacci number")

func fib(n int) int {
  if n < 2 {
    return n
  }
  return fib(n-2) + fib(n-1)
}

func main() {
  flag.Parse()
  fmt.Println(fib(*f))
}
A quick test shows that the single-threaded Go program is actually slightly faster than the C one:
$ gcc -O2 -o fib fib.c
$ time ./fib 40
102334155

real 0m1.987s
user 0m1.980s
sys 0m0.004s

$ 8g fib.go; 8l fib.8
$ time ./8.out -f=40
102334155

real 0m1.934s
user 0m1.932s
sys 0m0.004s
I also prepared a Go program that calculates the sum of all Fibonacci numbers up to a given index in parallel. It uses the run function as a goroutine to calculate each number independently, and a shared channel ch to gather the results, which are finally summed up (so we don't care about the order in which they arrive on the channel):
package main

import (
  "flag"
  "fmt"
  "runtime"
)

var n = flag.Int("n", 1, "Number of CPUs to use")
var f = flag.Int("f", 1, "Fibonacci number")

func fib(n int) int {
  if n < 2 {
    return n
  }
  return fib(n-2) + fib(n-1)
}

func run(n int, ch chan int) {
  ch <- fib(n)
}

func main() {
  flag.Parse()
  runtime.GOMAXPROCS(*n)
  ch := make(chan int)
  for i := 0; i <= *f; i++ {
    go run(i, ch)
  }
  sum := 0
  for i := 0; i <= *f; i++ {
    sum += <-ch
  }
  fmt.Println(sum)
}
The program takes an additional parameter, -n, to indicate the number of CPU cores to use. According to the Go runtime package documentation, the call to GOMAXPROCS is temporary and will go away when the scheduler improves. Until then you have to remember to make this call; otherwise your application will use only one CPU core by default.

I ran the program for 40 Fibonacci numbers on a dual Intel Xeon L5420 2.50GHz machine, using from one up to all available CPU cores. The execution time improved most dramatically between -n=1 (5.073s) and -n=2 (3.141s), then it improved more gradually, from -n=3 (2.574s) down to -n=8 (2.013s).

What I like about Go is that it puts a set of powerful tools into the programmer's hands, but at the same time does not try to hide the complexity of parallel programming behind bloated libraries or awkward language constructs. It nicely follows the KISS principle and borrows some good ideas from Unix design (like channels, which work similarly to Unix pipelines). If you think seriously about the future of systems programming, Go is definitely a language worth learning. Not only because it's Google ;-)
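The pipeline analogy can be made concrete. Here is a minimal sketch of the classic concurrent prime sieve, where each goroutine acts like a process in a Unix pipeline, reading from one channel and writing to the next:

```go
package main

import "fmt"

// generate sends successive integers into out, like a process
// writing to the head of a pipeline.
func generate(out chan<- int) {
	for i := 2; ; i++ {
		out <- i
	}
}

// filter passes along values from in that are not divisible by prime,
// like a filtering process in the middle of a pipeline.
func filter(in <-chan int, out chan<- int, prime int) {
	for {
		if n := <-in; n%prime != 0 {
			out <- n
		}
	}
}

// Primes chains one filter goroutine per prime found, producing
// the first k primes from the pipeline.
func Primes(k int) []int {
	ch := make(chan int)
	go generate(ch)
	primes := make([]int, 0, k)
	for len(primes) < k {
		p := <-ch
		primes = append(primes, p)
		next := make(chan int)
		go filter(ch, next, p)
		ch = next
	}
	return primes
}

func main() {
	fmt.Println(Primes(5))
}
```

Each prime adds one more stage to the pipeline, just as you would add another command to a shell pipe; the goroutines block on their channels exactly like processes block on full or empty pipes.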

Tuesday, December 15, 2009

Funding Clojure 2010

There is no such thing as a free lunch - everybody knows that. But when it comes to software, we tend to think that it is (or should be) free. Free as in beer, not as in speech - quite the opposite of what the free software philosophy actually says. But software doesn't grow on trees; it requires long hours of work from the people who create it. Many people write software and release it under one of the open source licenses in the hope that some day it becomes popular; but when it finally does, it turns out that its development takes so much time that you either have to drop it or start working on it full-time.
This is what recently happened to Clojure. Rich Hickey, the creator and main developer of this fascinating programming language, has announced his financial expectations towards the individuals and businesses who benefit from it, asking them to fund its further development. I hope Rich gets enough funding to continue his work on Clojure. If you use Clojure in your development, maybe you should think about donating too.

Sunday, November 22, 2009

MagLev public alpha

GemStone has just announced an alpha version of its concurrent Ruby engine, MagLev, available for download. MagLev is based on GemStone's Smalltalk virtual machine and supports 64-bit Linux, Mac OS X and Solaris x86 operating systems. There are no plans for a 32-bit version of MagLev.
MagLev does not support Rails yet, but neither does Fabio Kung's JMagLev. However, the advantage of MagLev over Fabio's machine is that GemStone is determined to create an enterprise-class product, while JMagLev was just a demonstration of the power of Terracotta and does not seem to be developed any further. It seems that the next step for GemStone will be to implement Rails functionality and allow RoR applications to run in a clustered environment, just as Grails applications can on Terracotta.

Friday, November 20, 2009

Erlang project crawler

Today I received an email from Erlang Training and Consulting Ltd. - the owner of the popular Erlang community site Trapexit - announcing its own Erlang open source project crawler. The crawler gathers information on open source Erlang projects from a number of code repositories, such as GitHub, Bitbucket, SourceForge and Google Code. At the time of writing it includes information on 1228 projects. The number may not be impressive, but it is good to have information about the most interesting open source Erlang projects gathered in one place.

Sunday, September 6, 2009

Top gear(man)

Gearman is an open source project providing a flexible and universal framework for writing distributed applications. It differs from similar projects in its ease of use and the number of language bindings it provides: C, C++, Java, Perl, PHP and Python. In fact, Gearman has a simple command-line client that allows you to run jobs written in any language you want - all you need to do is provide the client with input data and then fetch its output. The Gearman API is simple and consistent, and it makes writing distributed applications easy, quick and fun.

The Gearman architecture is equally simple: it consists of job servers that accept task requests from clients, forward them to workers, and send the results back to clients. Each worker can be connected to many job servers, and a client can choose which job server to use - this way there is no single point of failure that could bring down the whole cluster. Job servers have their own queues, and in case of worker failure they can reassign tasks to other workers. According to High Scalability, Gearman has been successfully used by LiveJournal, Yahoo!, and Digg (which claims to run 300,000 jobs a day through Gearman without any issues).
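Stripped of the networking, the dispatch pattern described above - clients submit jobs to a queue, any free worker picks them up, and results come back to the client - can be sketched in a few lines of Go (the names here are mine, not Gearman's API; the "function" the workers register for is squaring a number):

```go
package main

import "fmt"

// job and result model one Gearman-style task round-trip.
type job struct {
	id, n int
}
type result struct {
	id, value int
}

// worker consumes jobs from the shared queue until it is closed,
// like a Gearman worker registered for a single function.
func worker(jobs <-chan job, results chan<- result) {
	for j := range jobs {
		results <- result{j.id, j.n * j.n}
	}
}

// Dispatch fans the inputs out to w workers over a shared job queue
// and collects the results, restoring the original order by job id.
func Dispatch(inputs []int, w int) []int {
	jobs := make(chan job)
	results := make(chan result, len(inputs))
	for i := 0; i < w; i++ {
		go worker(jobs, results)
	}
	for i, n := range inputs {
		jobs <- job{i, n}
	}
	close(jobs)
	out := make([]int, len(inputs))
	for range inputs {
		r := <-results
		out[r.id] = r.value
	}
	return out
}

func main() {
	fmt.Println(Dispatch([]int{1, 2, 3, 4}, 2))
}
```

The shared jobs channel plays the role of the job server's queue: whichever worker is free receives the next task, so the work balances itself across workers automatically.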

I decided to try out Gearman at home, and I must say it was a really pleasant experience. I wrote a simple C++ worker and an even simpler Python client. The worker recursively computes the Fibonacci number for a given n:
#include <cstring>
#include <cstdlib>
#include <iostream>
#include <sstream>
#include <unistd.h>            /* getopt() */
#include <netinet/in.h>        /* in_port_t */
#include <libgearman/gearman.h>
#include <libgearman/worker.h>

using namespace std;

void *fib_worker(gearman_job_st *job, void *cb_arg, size_t *result_size, gearman_return_t *ret_ptr);
long fib(long n);
static void usage(char *name);

int main(int argc, char *argv[])
{
  int c;
  const char *host = "127.0.0.1";
  in_port_t port = 0;
  gearman_worker_st worker;

  while ((c = getopt(argc, argv, "h:p:")) != -1) {
    switch(c) {
      case 'h':
        host = optarg;
        break;
      case 'p':
        port = (in_port_t) atoi(optarg);
        break;
      default:
        usage(argv[0]);
        exit(1);
    }
  }

  if (argc != optind) {
    usage(argv[0]);
    exit(1);
  }

  gearman_worker_create(&worker);
  gearman_worker_add_server(&worker, host, port);
  gearman_worker_add_function(&worker, "fib", 10, fib_worker, NULL);
  while (1) {
    gearman_worker_work(&worker);
  }

  return 0;
}

void *fib_worker(gearman_job_st *job, void *cb_arg, size_t *result_size, gearman_return_t *ret_ptr) {
  char ch[256];
  ostringstream os;
  size_t size = gearman_job_workload_size(job);
  strncpy(ch, (char *) gearman_job_workload(job), size);
  ch[size] = 0;
  long n = atol(ch);
  os << fib(n);
  string s = os.str();
  *result_size = s.size();
  *ret_ptr = GEARMAN_SUCCESS;
  return strdup(s.c_str());
}

long fib(long n) {
  if (n < 2) {
    return 1;
  } else {
    return fib(n - 2) + fib(n - 1);
  }
}

static void usage(char *name) {
  cout << "Usage: " << name << " [-h <host>] [-p <port>] <string>" << endl;
  cout << "\t-h <host> - job server host" << endl;
  cout << "\t-p <port> - job server port" << endl;
}
The Python client prepares a set of jobs for a sequence of n numbers, runs them simultaneously through a job server, and sums up the results:
import optparse
from gearman import *

parser = optparse.OptionParser()
parser.add_option('--host', help = "Specifies gearman job server")
parser.add_option('-n', '--num', help = "Amount of Fibonacci numbers to compute")
(opts, args) = parser.parse_args()

client = GearmanClient([opts.host])

ts = Taskset()
for i in range(1, int(opts.num)):
  t = Task(func = "fib", arg = i)
  ts.add(t)

client.do_taskset(ts)

sum = 0
for task in ts.values():
  sum += int(task.result)

print sum
You can download the source code of both the worker and the client here. First, compile and install Gearman with the traditional:
./configure
make
sudo make install
sudo ldconfig
Then install the Python extension with:
easy_install gearman
And compile the C++ example with:
make
Then you can run a job server as a daemon:
gearmand -d -L 127.0.0.1
or in debug mode:
gearmand -vv -L 127.0.0.1
Next, run a couple of Gearman workers:
./GearmanWorker -h 127.0.0.1
And the Python client:
python GearmanClient.py --host 127.0.0.1 -n 45
For a single machine, it makes sense to run at most as many workers as there are CPUs (or CPU cores) available. For a network cluster, you can run more job servers and workers (and clients) respectively.

I've made some tests with the client and worker above, using my home laptop and an Intel Atom-based nettop running together in a local network. With only one laptop worker, computing the sum of 45 Fibonacci numbers took 66.955 seconds; with two laptop workers it took 35.702 seconds, and adding a remote worker reduced the total time to 25.593 seconds. Adding more workers didn't reduce the computation time any further - it even slightly slowed the cluster down - which is quite understandable, as the number of workers exceeded the number of free CPUs (the Intel Atom in fact has only one physical core, although applications see it as a dual-core CPU).

Sunday, July 5, 2009

Jabase: Jabber cluster on HBase

Java has often been compared with Erlang by Erlang advocates, who emphasize Erlang's advantage over Java in thread creation and message passing. Some even claim that Erlang could be Java's successor in concurrent programming. Of course such comparisons and benchmarks have some value, but the truth is that Erlang has never been, and never will be, a Java competitor. The reason for this is simple, and was perfectly explained by Dennis Byrne in his article "Integrating Java and Erlang":
Java and Erlang are not mutually exclusive, they complement each other. I personally have learned to embrace both because very few complex business problems can be modeled exclusively from an object oriented or functional paradigm.

Integration and interoperability are now the key words that make the modern IT business go round. This is also understood very well by the Erlang team, who created Jinterface - a set of Java classes for communicating with Erlang.

Jinterface is also the key element that allowed me to build a highly scalable Jabber cluster based on ejabberd as the XMPP server and the HBase distributed database as storage. Ejabberd is a distributed Jabber server written in Erlang, but unfortunately Erlang's native storage, Mnesia, can't handle large amounts of data. To overcome this limitation, ejabberd provides ODBC drivers for MySQL, MS SQL and PostgreSQL, but this is only a partial solution to the scalability problem. First, the whole ejabberd cluster still uses a single database instance as its data storage, and second, user sessions are still kept in Mnesia.

Jabase is a middleware set of components written in Erlang and Java that provides a communication layer between the ejabberd XMPP server and the HBase distributed database. While ejabberd ensures communication between users and server instances, HBase provides a highly scalable, distributed database to store large amounts of data and serve them efficiently. Additionally, the Java instances are responsible for caching user sessions and providing efficient methods of serving and searching session data, while the Erlang code ensures session data integrity among the Jabber server instances.
The source code of Jabase has been released under the GPL, and the project website contains a manual on how to compile and set up a simple Jabber cluster based on Jabase. However, technical support is provided exclusively by Division-by-Zero, for which I built this software. If you have any questions or are interested in using Jabase in your company, please contact Division-by-Zero.

Edit: You can get the original code here. Keep in mind that it supports ejabberd-2.0.3 and you may need to adjust the source code to make it work with the latest version of ejabberd.

Wednesday, March 4, 2009

Incoming Revolution: Clojure + Terracotta

For some time I have been working quite extensively with Java and Java-related technologies in addition to all the Erlang and functional stuff I do every day, and I must say that I am really impressed with what is going on in the area where the two worlds overlap. A few months ago I was experimenting with JScheme running on Terracotta, but as I told Ari from Terracotta Inc., who became interested in the project, combining their product with Clojure would be much more interesting. I knew that some people had already been thinking about it.
Not much time has passed since then, and guess what: Paul Stadig announced on his blog that he managed to run Clojure code on Terracotta. Today the same guy left me looking for my jaw on the floor: he made the whole Clojure environment (together with the REPL) work on Terracotta! Now imagine Clojure concurrent applications using Software Transactional Memory distributed across a computer network through Terracotta: you can build massive software clusters that work with incredible performance; you can add Hadoop (a distributed file system) and HBase (a distributed database) and build a system that handles hundreds of thousands of parallel operations and stores petabytes of data; you can scale your system up and down just by adding or removing machines from the cluster. And with modern cloud computing services like AWS, you can build a large computation cluster or a social networking website on a relatively small budget.
Basically, you don't need much money to start another Facebook. Your imagination and programming skills are your only limits. Good luck!