Forgot to mention: grinderl now uses sinan as its build system (makefiles are still around but should disappear as soon as the tests are handled by sinan). Hopefully sinan will help deal with dependencies and release handling.

Btw I continue to follow the development of the CEAN repository …

I decided to change the roadmap of the grinderl project.

Its first application was to be a tiny load-test framework (at least a testing framework, as I'm not sure it is efficient enough to create heavy load on a well-designed server). And I still have to finish the part that creates useful statistics from all the results.

But now I think that I could implement a better node management API, and then grinderl could become one way to call code on remote hosts (not exactly RPC, since commands are asynchronous and supervised by the remote host). One application could be to install an external application on all remote hosts, or to manage deployments like Capistrano does (but with the full Erlang environment available!).
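To make the "not exactly RPC" idea concrete, here is a minimal sketch of an asynchronous, monitored remote call: the caller gets a handle back immediately, and the result (or a crash report) arrives later as a message. Module and function names are invented for the example; this is not grinderl's API.

```erlang
-module(async_cmd).
-export([run/4, await/1]).

%% Spawn apply(Mod, Fun, Args) on Node; a monitor reports crashes
%% back to the caller instead of blocking like rpc:call/4 would.
run(Node, Mod, Fun, Args) ->
    Caller = self(),
    Pid = spawn(Node, fun() ->
        Caller ! {cmd_result, self(), apply(Mod, Fun, Args)}
    end),
    MonRef = erlang:monitor(process, Pid),
    {Pid, MonRef}.

%% Wait for either the result or the DOWN message from the monitor.
await({Pid, MonRef}) ->
    receive
        {cmd_result, Pid, Result} ->
            erlang:demonitor(MonRef, [flush]),
            {ok, Result};
        {'DOWN', MonRef, process, Pid, Reason} ->
            {error, Reason}
    end.
```

Between `run/4` and `await/1` the caller is free to do other work, or to fire off commands to many nodes and collect the answers in any order.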

So here is the new (of course, it's the first!) roadmap:

  • Stabilize the current version:
    • turn tests into commands
    • better handling of results sent by remote hosts
  • Develop the remote node API and a UI concurrently:
    • API to manage remote hosts
      • use the slave module
      • have a look at this how-to
      • can we install a minimal Erlang node on a remote machine through ssh … using CEAN or erlware maybe?
    • UI: network tools seem to require a network-able UI (probably using yaws) to be able to:
      • see/manage nodes/hosts
      • send commands to all/any hosts and monitor results (it looks like each command will need a specialized UI)
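For the "use the slave module" item, this is roughly what starting a remote node could look like. Everything here is hypothetical: the host and node names are invented, and slave:start/2 needs ssh access, matching cookies, and the same Erlang version installed on the remote host.

```erlang
%% Start an Erlang node on a remote host via the slave module and run
%% code on it with rpc. Host/name are made-up placeholders.
start_remote_node() ->
    {ok, Node} = slave:start(remotehost, grinderl_worker),
    %% The slave node is linked to us: it dies when this node dies.
    pong = net_adm:ping(Node),
    Remote = rpc:call(Node, erlang, node, []), % run code remotely
    io:format("started ~w, rpc says ~w~n", [Node, Remote]),
    Node.
```

Note that slave:start/2 only removes the need to launch the VM by hand; the remote machine still has to have Erlang available, which is why the CEAN/erlware question above matters.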


  • Sad note: nothing new in grinderl (this project doesn't seem to spark my creativity)
  • Good note: simple project to learn Erlang distributed programming

I did it! I have tried to code my first OTP applications following most of the recommendations of the documentation (and the French book Erlang programmation was a great help).

One of those applications became an open-source project: grinderl. I blogged a bit about it in this blog post, and now it has evolved from the all-in-one-module code into a small ongoing project (hosted on Google Code). To do that I used:

  • application/supervisor/server/fsm/event … all the generic modules: not complicated
  • eunit: simple enough, and the tests should be compatible with the test_server application
  • dialyzer: wow! It can save a lot of time, especially because Erlang has no static type system: a MUST.
  • appmon: a great and easy way to see your supervision tree
  • debugger: I still prefer to write messages to the console … please don't tell that to anybody!
  • edoc: I learned how to use it, but sadly it didn't teach me how to write good documentation…
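To show how little ceremony eunit needs, here is a toy test module (the mean/1 function is invented for the example, it is not grinderl code):

```erlang
-module(stats_tests).
-include_lib("eunit/include/eunit.hrl").

%% Tiny function under test: arithmetic mean of a list of numbers.
mean([]) -> 0.0;
mean(L)  -> lists:sum(L) / length(L).

%% A test generator: each ?_assertEqual becomes one test case.
mean_test_() ->
    [?_assertEqual(0.0, mean([])),
     ?_assertEqual(2.0, mean([1, 2, 3]))].
```

Running `eunit:test(stats_tests).` in the shell executes both assertions and reports any failure with the expected and actual values.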

I would like to have a look at cover and the profiling tools, and I may want to learn about Erlang ports, but I guess the two main points I missed are:

  • mnesia: looks so great!
  • release handling: this is a tricky part of OTP application distribution I think!

And for the last one, I hope the sinan build system can help me (Eric Merritt promised us high-level user documentation soon).

My only regret is that I haven't found a complete tutorial going through all the steps of transforming an Erlang program into an OTP application release. And it's too sad that I don't feel able to write one!

It took me a long time to decide to learn Erlang instead of Haskell or OCaml/F#. What I really want to learn for now are the OTP principles of distributed applications. So I wrote my first Erlang script (not yet an application) and it confirmed my point of view: Erlang makes developing distributed programs easy! Really!!!

At work we needed to test the performance of a TCP server (and we'll use Grinder, because most of our API is Python). I decided to use Erlang to do the same thing as an exercise: grind.erl (of course it will never become as huge as Grinder nor as efficient as Tsung; I don't even hope it can be useful). But with a few lines of code, I was able to reproduce the same behavior as Grinder: launch multiple tester agents on multiple machines, have them execute some test function, and gather the results of all the tests.

Ok, I did not tell the whole truth: it's not a very small number of lines: around 260 (with tests and all). But all that code is for running a test function multiple times and gathering statistics on its results (run_task). Distributing this behaviour among multiple Erlang nodes is only a map call (distribute_task)!

Here is a code extract:

distribute_task(NodeLst, Task) ->
    NodeLstLen = length(NodeLst),
    StatListener = spawn(grind, statistic_gathering, [now(), NodeLstLen]),
    io:format(
        "~w create statistic gathering process ~w on node ~w~n",
        [self(), StatListener, node()]
    ),
    %% spawn a run_task runner on each node, reporting to StatListener
    TaskRunnerCreator = fun(Node) -> spawn(
        Node, grind, run_task,
        [StatListener, Task]
    ) end,
    Runners = lists:map(TaskRunnerCreator, NodeLst),
    io:format(
        "~w create a list of task runners: ~w~n",
        [self(), Runners]
    ).

And here is a usage example:

grind:distribute_task(
    [node(), Node2], % use 2 nodes (the second one was lost from the extract)
    { % task description (grouping as a tuple reconstructed from context)
        % on each node run the test function in 50 concurrent processes
        {concurrent, 50},
        % two statistics to gather
        [{mean, writetime},  % a real value (mean, std. dev., min, max, med will be retrieved)
         {count, writer_val} % an occurrence counter
        ],
        % foo function to test: must return a tuple {ok|error, Pid, ValLst}
        fun(Writer, WritenValue) ->
            FWrite = fun() -> io:format("~s got ~w~n", [Writer, WritenValue]) end,
            {WriteTime, _Res} = timeit(FWrite),
            {ok, self(), [WriteTime, {Writer, WritenValue}]}
        end,
        % arguments to use for each call of the test function
        [{rr, ["bea", "pouf"]},        % first is taken from the list in round-robin style
         {choice, [0, 1, 12, 42, 77]}] % second argument is randomly chosen from the list
    }).
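The example relies on a timeit/1 helper that is not shown in the extract. A minimal version based on timer:tc could look like this (grind.erl's actual implementation may differ; the module name is invented):

```erlang
-module(grind_time).
-export([timeit/1]).

%% Run Fun and return {Seconds, Result}, where Seconds is the
%% wall-clock run time as a float. Sketch only, not the original.
timeit(Fun) ->
    {MicroSecs, Res} = timer:tc(Fun),
    {MicroSecs / 1000000, Res}.
```

timer:tc/1 measures elapsed time in microseconds; dividing with `/` yields a float number of seconds, which is convenient when feeding the mean/std-dev statistics above.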

This first step was to become more familiar with the Erlang language. My next steps will be:

  • use edoc …
  • use the logger/trace services instead of printing every function call
  • use the generic behaviours (gen_event, gen_server)
  • create an OTP application (application, supervisor)
  • use Mnesia to gather statistics
  • what about OTP releases?
  • embed everything in a distribution package (automake/autoconf?)