At The Grid we do a lot of computationally heavy work server-side in order to produce websites from user-provided content. This includes image analytics (for understanding the content), constraint solving (for page layout), and image processing (optimization and filtering to achieve a particular look). Currently we serve a few thousand sites, with hundreds of thousands expected by the time we complete beta – so scalability is a core concern.

All computationally intensive work is put as jobs on an AMQP/RabbitMQ message queue and consumed by Heroku workers. To make it easy to manage many queues and worker roles we also use MsgFlo.
This provides the flexibility we need to scale: the queues buffer in-progress work, the broker distributes it evenly between available workers, and with Heroku we can change the number of workers with one command. But it still leaves us to decide how much compute capacity to provision. And when load is dynamic, doing this manually is tedious and inefficient – especially as Heroku bills workers by the second.

RabbitMQ and Heroku dashboards: monitoring queues and scaling workers manually when demand changes; not fun.

If we instead adjusted capacity every 1-5 minutes based on demand, we would reduce costs. Or alternatively, with a fixed budget, provide a better quality of service. And most importantly, let developers worry about other things.

Of course, a number of solutions for this already exist. However, some depend on particular metrics providers we were not using, some use metrics with an unclear relationship to the required number of workers (like number of users), and others have unacceptable limitations (only one worker per service, or only available as a service billed by number of workers).


guv 0.1 implements a simple proportional scaling model. Based on the current number of jobs in the queue, and an estimate of job processing time, it calculates the number of workers required for all work to be completed within a configured deadline.

guv system model

The deadline is the maximum time you allow your users to wait for a completed job. The job processing time (average and deviation) can be calculated from metrics of previous jobs, and the number of jobs in the queue is read directly from RabbitMQ.

# A simple guv config for one worker role.
# One guv instance typically manages many worker roles
analyze:                # name of this worker role
  app: my-heroku-app
  queue: 'analyze.IN'   # RabbitMQ queue name
  worker: analyzeworker # Heroku dyno role name
  process: 20           # estimated job processing time, in seconds
  deadline: 120.0       # max seconds from job queued until completed
  min: 1                # keep something always running
  max: 15               # budget limits
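
Given these inputs, the core calculation fits in a few lines. A minimal sketch in Python of the model as described above (my illustration, not guv's actual source):

import math

def workers_needed(jobs_in_queue, process_time, deadline, min_workers, max_workers):
    # Total seconds of compute needed to drain the queue
    total_work = jobs_in_queue * process_time
    # Parallel workers needed for the last job to finish within the deadline
    needed = math.ceil(total_work / deadline)
    # Clamp to the configured bounds
    return max(min_workers, min(max_workers, needed))

# With the config above and 60 jobs in 'analyze.IN':
# ceil(60 * 20 / 120.0) = 10 workers
print(workers_needed(60, 20, 120.0, 1, 15))  # 10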

This model has a couple of limitations. Primarily, it is completely reactive: we do not attempt to predict how traffic will develop in the future. Prediction is, after all, terribly tricky business – better not to go there if it can be avoided.
And since it takes a non-zero amount of time to spin up a new worker (about 45-60 seconds), a sudden spike in demand may cause some jobs to miss a tight deadline, as the workers can't spin up fast enough. To compensate, there is some simple hysteresis: scale up aggressively, and scale down a bit reluctantly – we might need the workers again in the next couple of minutes.

As a bonus, guv includes integration with common metrics services: the statuspage.io 'jobs-in-flight' metric on status.thegrid.io comes directly from guv, and using New Relic Insights we can analyze how the scaling is performing.

Last 2 days of guv scaling history on some of the worker roles at The Grid.

Had we instead scaled manually with a constant number of workers over this 48-hour period, workers=35 (Max), we would have paid at least 3-4 times more than we did with autoscaling (the difference between the area under the Max line and the area under the 10-minute line). Alternatively, we could have provisioned fewer workers, but then during spikes above that number our users would have suffered, because things would take longer than normal.

We’ve been running this in production since early June. Back then we had 25 users, whereas now we have several thousand. Apart from updating the configuration to reflect service changes, we do not deal with scaling – the minute-to-minute decisions are all made by guv. Not much is planned in terms of new features for guv, apart from some more tools to analyze configuration. For more info on using guv, see the README.



At The Grid we do a lot of CPU-intensive work on the backend as part of producing web pages. This includes content extraction, normalization, image analytics, webpage auto-layout using constraint solvers, webpage optimization (GSS-to-CSS compilation) and image processing.

The system runs on Heroku, and is spread over some 10 different dyno roles, communicating with each other using AMQP message queues. Some of the dyno separation also isolates interaction with external APIs, allowing us to handle service failures and API rate limiting in a robust manner.

The majority of the workers are implemented using NoFlo, a flow-based programming runtime for Node.js (and the browser), with Flowhub as our IDE. This gives us a strictly encapsulated, visual, introspectable view of each worker, making for a testable and easy-to-understand architecture.

Inside a process: In NoFlo each node is a JavaScript class

However, NoFlo is only concerned with an individual worker process: it does not comprehend that it is part of a bigger system.

Enter MsgFlo

MsgFlo is a new FBP runtime designed for distributed systems. Each node represents a separate process, and the connections (edges) between nodes are message queues in a broker process.
To make this distinction clearer, we’ve adopted the term participant for a node which participates in a MsgFlo network.
Because MsgFlo implements the same FBP runtime protocol and JSON graph format as NoFlo, imgflo and MicroFlo, we can use the same tools, including the .FBP DSL and the Flowhub IDE.

Distributed MsgFlo system: HTTP frontends + workers in separate processes.

The graph above represents how different roles are wired together. There may be 1-N participants in the same role, for instance 10 dynos of the same dyno type on Heroku.
There can also be multiple participants in a single process. This can be useful to make different independent facets show up as independent nodes in a graph, even if they happen to be executing in the same process. One could use the same mechanism to implement a shared-nothing message-passing multithreading model, with the limitation that every message will pass through a broker.

Connections have pub-sub semantics, so generally each of the individual dynos will receive messages sent on the connection.
The special component msgflo/RoundRobin specifies that messages should be delivered in a round-robin fashion: a new message goes only to the next process in that role with available capacity. The RoundRobin component also supports dead-lettering, so failed jobs can be routed to another queue – for instance to be re-processed later automatically, or manually after developers have located and fixed the issue. This way one never loses pending work.
On AMQP, round-robin delivery and dead-lettering can be fulfilled by the broker (e.g. RabbitMQ), so there is no dedicated process for that node.
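
On RabbitMQ, dead-lettering amounts to a couple of queue arguments. A sketch using the pika Python client (queue names follow the config example earlier; this is what such a setup looks like, not MsgFlo's actual code):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Rejected/failed messages on 'analyze.IN' are rerouted by the broker
# to 'analyze.DEADLETTER' instead of being dropped
channel.queue_declare(queue='analyze.DEADLETTER', durable=True)
channel.queue_declare(queue='analyze.IN', durable=True, arguments={
    'x-dead-letter-exchange': '',  # default exchange routes by queue name
    'x-dead-letter-routing-key': 'analyze.DEADLETTER',
})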

Messaging systems

People use different messaging systems, so we’ve tried to make sure that the MsgFlo architecture and tools can be used with many of them. The format and delivery of discovery messages is specified, and the tools have a transport abstraction layer. Currently there is production-level support for AMQP 0-9-1 (tested with RabbitMQ). Basic support exists for MQTT, a simple protocol popular in distributed “Internet of Things”-type systems. Support for more transports can be added by implementing two classes, roughly as sketched below.
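
As an illustration, the split looks roughly like this (a hypothetical Python rendering; the names are mine, not MsgFlo's actual API):

class Client:
    """Used by a participant: declare its queues, send, subscribe, ack."""
    def connect(self): raise NotImplementedError
    def create_queue(self, direction, queue_name): raise NotImplementedError
    def send_to(self, queue_name, message): raise NotImplementedError
    def subscribe_to_queue(self, queue_name, handler): raise NotImplementedError
    def ack_message(self, message): raise NotImplementedError

class MessageBroker(Client):
    """Used by the MsgFlo coordinator: can additionally wire queues together."""
    def add_binding(self, from_queue, to_queue): raise NotImplementedError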

Polyglot participation

MsgFlo itself only handles the discovery of participants and the setup of the connections between them, as well as providing debug capabilities like Flowhub endpoint support. Having participants in a particular language requires implementing the participant conventions: sending the discovery message, and sending/receiving messages on the queues. We do provide a set of libraries that makes this easy for popular languages:

Using noflo-runtime-msgflo makes it very simple to use NoFlo graphs as MsgFlo participants. The exported ports of the NoFlo graph or component (for instance ‘in’, ‘out’ and ‘error’) will automatically be made available as queues in MsgFlo, and one can connect these into a bigger system.

noflo-runtime-msgflo --name compute_foo --graph project/MyGraph


If you have some plain Node.js code, you can use msgflo-nodejs – as in this real-life example from imgflo-server.

msgflo = require 'msgflo'

ProcessImageParticipant = (client, role) ->

  definition =
    component: 'imgflo-server/ProcessImage'
    icon: 'file-image-o'
    label: 'Executes image processing jobs'
    inports: [
      id: 'job'
      type: 'object'
    ]
    outports: [
      id: 'jobresult'
      type: 'object'
    ]

  func = (inport, job, send) ->
    throw new Error 'Unsupported port: ' + inport if inport != 'job'

    # XXX: use an error queue?
    @executor.doJob job, (result) ->
      send 'jobresult', null, result

  return new msgflo.participant.Participant client, definition, func, role

In addition to Node.js and NoFlo, there is basic participant support for Python and for C++ (with AMQP). Each took about half a day and 200-300 lines of code, so adding support for more languages should be pretty simple. There are even tests you can reuse.

Example in Python using msgflo-python:

import msgflo

class Repeat(msgflo.Participant):
  def __init__(self, role):
    d = {
      'component': 'PythonRepeat',
      'label': 'Repeat input data without change',
    }
    msgflo.Participant.__init__(self, d, role)

  def process(self, inport, msg):
    self.send('out', msg.data)
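
To run it, you hand the class to the library entry point. From what I recall of the msgflo-python README (treat the helper name and environment variable as assumptions):

if __name__ == '__main__':
  # Connects to the broker (e.g. MSGFLO_BROKER=amqp://localhost) and starts consuming
  msgflo.main(Repeat)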

The example becomes a bit more verbose in C++11, using msgflo-cpp.

#include <iostream>
#include "msgflo.h" // header name may differ; see msgflo-cpp

class Repeat : public msgflo::Participant
{
    struct Def : public msgflo::Definition {
        Def(void) : msgflo::Definition()
        {
            component = "C++Repeat";
            label = "Repeats input on outport unchanged";
            outports = {
                { "out", "any", "" }
            };
        }
    };

public:
    Repeat(std::string role)
        : msgflo::Participant(role, Def())
    {
    }

    virtual void process(std::string port, msgflo::Message msg)
    {
        std::cout << "Repeat.process()" << std::endl;
        msgflo::Message out;
        out.json = msg.json;
        send("out", out);
    }
};

Since MsgFlo 0.3 we are using MsgFlo in production for all workers across The Grid backends. After migrating we’ve also moved more things into dedicated participants, because we now have the tooling that makes managing that complexity easy. Our short-term focus now is more tools around MsgFlo, like deadline-based autoscaling and integration of data-driven testing using fbp-spec. Features planned for MsgFlo itself include live introspection of messages in Flowhub.

Looking further ahead, we would like to make more use of the polyglot capabilities, for instance by moving some of our image analytics from NoFlo/Node.js participants (with C/C++ libs) to pure C++11 participants.
I also hope to do some fun projects with MQTT and MicroFlo – and validate MsgFlo for embedded/Internet-of-Things-type systems.



Time for a new release of imgflo, the image processing server and dataflow runtime based on GEGL. This iteration has mostly focused on ironing out various workflow issues, including documentation – primarily so that the creatives on our team can be productive in developing new image filters and processing. Eventually this will also be an extension point for third parties on our platform.

By porting the PNG and JPEG loading operations in GEGL to GIO, we’ve added support for loading images into imgflo over HTTP or data URLs. The latter enables opening a local file through a file selector in Flowhub. Eventually we’d like to also support picking images from web services.

Loading a local file using HTML5 input type="file"


Another big feature is the ability to live-code new GEGL operations (in C) and load them. This works by sending the code over to the runtime, which compiles it into a new .so file and loads it. Newly instantiated operations then use that revision of the code. We currently do not change the active operation of already-running instances, though we could.
Operations are never unloaded, due both to a GLib limitation and the general trickiness of guaranteeing that this is safe for native code. This is not a big deal, as it is a development-only feature and the memory growth is slow.

Live-coding new image processing operations in C


imgflo now supports showing the data going through edges, which is very useful for understanding how a particular graph works.

Selecting edges shows the buffer at that point in the graph

Using Heroku, one can get started without installing anything locally. Eventually we might have installers for common OSes as well.

Get started with imgflo using Heroku


Vilson Viera added a set of new image filters to the server, inspired by Instagram. Vilson is also working on our image analytics pipeline, the other piece required for intelligent automatic and semi-automatic image processing.

Various insta filters


GEGL has long supported meta-operations: operations which are built as a sub-graph of other operations. However, they had to be built programmatically using the C API, which limited tooling support, and their platform-specific nature made them hard to distribute.
Now GEGL can load such operations from the JSON format also used by imgflo (and several other runtimes). This lets one use operations built with Flowhub+imgflo in GIMP:


This makes Flowhub+imgflo a useful tool also outside the web-based processing workflow it is primarily built for. The feature is available in GEGL and GIMP master as of last week, and will be released in GIMP 2.10 / GEGL 0.3.


The next iteration will be primarily about scaling out: both allowing multiple “apps” (including individual access to graphs and usage monitoring/quotas) to be served from a single service, and scaling performance horizontally. The latter will be critical when the ~20k+ users who have signed up start coming onboard.
If you have an interest in using our hosted imgflo service outside of The Grid, get in contact.



SuperCollider is an open source project for real-time audio synthesis and algorithmic composition.
It is split into two parts: an interpreter (sclang) implementing the SuperCollider language, and the audio synthesis server (scsynth).
The server has a directed acyclic graph of nodes which it executes to produce the audio output (paper|book on internals). It is essentially a dataflow runtime, specialized for the problem domain of real-time audio processing. The client controls the server through OSC messages which manipulate this graph. Typically the client is some SuperCollider code in the sclang interpreter, but one can also use Clojure, Python or other clients. It is in many ways quite similar to the Flowhub visual IDE (an FBP protocol client) and runtimes like NoFlo, imgflo and MicroFlo.
So we decided to make SuperCollider a runtime too: sndflo.
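
To give a feel for that control protocol: creating and tweaking a synth node on scsynth is a couple of OSC messages. A sketch using the python-osc library (synthdef name and node ID are arbitrary):

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient('127.0.0.1', 57110)  # scsynth's default UDP port

# /s_new: instantiate synthdef 'default' as node 1000, add-action 0 (head), group 0
client.send_message('/s_new', ['default', 1000, 0, 0])
# /n_set: set a control value on the running node
client.send_message('/n_set', [1000, 'freq', 440.0])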


Growing list of runtimes that Flowhub can target

We used SuperCollider for Piksels & Lines Orchestra, an audio performance system which hooked into graphics applications like GIMP, Inkscape, MyPaint and Scribus – and sonified the users’ actions in the applications. A lot of time was spent wrestling with SuperCollider, due to the number of new concepts, the myriad of ways to do things, and the lack of (well-documented) best practices.
There is also a tendency to favor very short, expressive constructs (often opaque). As an extreme example, here is an album of SuperCollider pieces composed in <140 characters (plus an analysis of some of them).

In contrast, sndflo is very focused and opinionated. It exposes Synths as components, which can be wired together using Busses (the edges in the graph), allowing one to build audio effect pipelines. There are several known issues and limitations, but it has now reached a minimally useful state. Creating Synth components (the individual effects) as a visual graph of UGen components (primitives like Sin, Cos, Min, Max, LowPass) is also within scope and planned for the next release.

Simple subtractive audio synthesis using a saw wave and a low-pass filter

The sndflo runtime is itself written in SuperCollider, as an extension. This makes it easier for those familiar with SuperCollider to understand the code, and facilitates integration with existing SuperCollider code and tools – for instance setting up an audio pipeline visually using Flowhub+sndflo, then using the Event/Pattern/Stream system in SuperCollider to create an algorithmic composition that drives this pipeline.
Because a web browser cannot talk OSC (UDP/TCP) and SuperCollider does not talk WebSocket, a node.js wrapper converts FBP protocol messages between JSON over WebSocket and JSON over OSC.

sndflo also implements the remote runtime part of the FBP protocol, which allows seamless interconnection between runtimes. One can export ports in one runtime, and then use it as a component in another runtime, communicating over one of the supported transports (typically JSON over WebSocket).

YouTube demo video

In the above example sndflo runs on a Raspberry Pi, and is used as a component in a NoFlo browser runtime that provides a web interface, both programmed with Flowhub. We could in the same way wire up another FBP runtime, for instance using MicroFlo on an Arduino to integrate physical sensors into the system.
Pretty handy for embedded systems, interactive art installations, Internet-of-Things or other heterogeneous systems.



When I announced the first release of the imgflo project in April, it was perhaps difficult to see what exactly it was useful for and why we were developing it. This has changed now: 3 weeks ago we launched The Grid, our AI-based web publishing platform. We are on a bold mission to have “websites build themselves”; because until posting to personal websites becomes easier and more rewarding than posting to social media, content on the web will continue to pile up in closed silos.


To help solve this problem we built several open source technologies:

NoFlo: for creating highly testable, component-based, distributed software.
Flowhub: for visually and interactively building programs and extensions.
GSS: for building constraint-based, responsive layouts
And of course imgflo: for on-demand server-side image processing.

In total this is over 100k lines of code, and around 5000 commits over the last 12 months. Some of the stack is explained in more detail in a recent interview with Libre Graphics World.

imgflo on The Grid

The thegrid.io launch site is of course built with The Grid. In the particular layout filter used, the look & feel is driven largely by the content. Colors for text captions are extracted from tweets and social media posts, and the featured images are largely unfiltered. Other Grid layout filters may style all provided content, including images, towards a uniform look specified by a color scheme. Or a layout filter may mix and match content-driven versus style-driven design.

The background texture on this section was created with imgflo, by passing the featured image through a blur graph:


It is important to note that no one chose this exact image to be used in this particular layout section (and thus have the given image filter applied), which is why processing happens on-demand. The layout section with an image inside a computer screen is available for content which has images of type “screenshot”. This property may be automatically detected by our image analytics pipeline, or manually annotated by the user. The system allows describing many other such constraints, which are all taken into account when it works to create the appropriate layout for the given content.

Even without considering styling, imgflo has a couple of important roles on a Grid site. One is the ability to create multiple scaled-down versions of an image, to optimize download size. For this we also created a helper library called RIG, which is used to generate a set of CSS media queries with imgflo request URLs.

> rig = require 'rig-up'
> css = rig content, serverconfig, 'passthrough', parameters, ...
# passthrough is the name of the graph to process through

@media (max-width: 503px) {
  .media, .background {
    background-image: url('https://imgflo.herokuapp.com/graph/apikey/6bb56129dc707894baa88d10a02a12b9/passthrough?input=https%3A%2F%2Fa.com%2Fb.png&width=400&height=225');
  }
}
@media (min-width: 504px) and (max-width: 1007px) {
  .media, .background {
    background-image: url('https://imgflo.herokuapp.com/graph/apikey/d099f7222293d335a6192d742f523bfa/passthrough?input=https%3A%2F%2Fa.com%2Fb.png&width=800&height=450');
  }
}

Processing images through imgflo also means that they are cached. So if the original image becomes unavailable, the website still has versions it can use. This can happen for instance on Twitter, when people change their profile picture.
Note that while we optimize images when presenting them on a site, we don’t touch the original image (non-destructive). This means an image uploaded to The Grid has its full data & metadata preserved, unlike on some other social/web services. However, we are currently not preserving metadata in processed images.


imgflo 0.2

imgflo is now split into three repositories: the GEGL-based Flowhub runtime, the HTTP API server, and the native dependencies. The runtime itself is plain C with GLib, and could be used in non-web applications on desktop, mobile or embedded.

A major feature is that processing requests can now be authenticated, so that non-legitimate users cannot disrupt legitimate ones by overloading the server. We also use Amazon S3 for caching processed images, offloading a large portion of the work. Servicing 10k+ visits a day with a 2-dyno Heroku app has been no problem with this setup.

In imgflo-server we’ve also added support for using processors other than imgflo (which uses GEGL), in particular NoFlo with noflo-canvas. One can now build and deploy image processing pipelines using JavaScript, including all the libraries that work with the <canvas> element.

Building a NoFlo image processing graph in Flowhub, then requesting it from imgflo-server

Full details about the changes can be found in the changelogs: server, runtime.


Flowhub provides imgflo with a node-based, visual & interactive IDE for developing new image filters for The Grid. It is similar to established tools like FilterForge, the Blender compositor, vvvv and Nuke – which many designers and visual artists are familiar with. However, there are still many snags in the workflow for non-technical people. Smoothing these out is a major part of the next imgflo milestone.
After that, the focus will be on horizontal scalability, to handle the load as The Grid enters beta and opens to founding members in spring.



It’s been nearly 6 months since the previous release of MicroFlo, which was the first that allowed you to visually program your Arduino using NoFlo UI. While we are still just getting started, lots of things have changed since then.


Flowhub is the name of the officially supported, packaged and hosted version of the open source NoFlo UI. This is the IDE used for programming with MicroFlo, and a lot of work has been put into it over the last couple of months. Today we released the beta version.

You can test it out for browser, node.js and microcontrollers now.

Other runtimes are in development, and you can add your own by implementing the FBP runtime protocol. For instance there are runtimes for desktop development for GNOME, image processing using GEGL, audio synthesis using SuperCollider and for Python.

Heterogenous FBP

Lightbulb idea: two microcontrollers programmed with MicroFlo used as components in a NoFlo program

Often microcontrollers act as sensors and actuators in a larger system, where embedded computers, mobile devices and servers are used to provide the data storage and processing as well as user interfaces and connectivity with other systems.

With MicroFlo 0.3 one can visually create a microcontroller program, and then export ports on this program to make the entire microcontroller available as a component in NoFlo on Node.js. This allows one to seamlessly create programs which combine microcontrollers and embedded computers.
We made use of this functionality when we created an interactive table that shows the status of the Ingress virtual reality game.

Platform support

Lunchbox electronics: Some boards that can run MicroFlo

MicroFlo has worked on AVR-based Arduinos from day 1, but it was always the goal to not be specific to Arduino. Therefore I’m happy to say that there are now basic platform implementations for:

  • AVR-based Arduino and derivatives
  • Atmel AVR8 (without using Arduino)
  • mbed LPC1768 (ARM Cortex M3)
  • Texas Instruments Tiva-C (ARM Cortex M4)
  • Embedded Linux (RPi, BeagleBone Black)

Basic bring-up of a new platform can be done in a couple of days, and components which do not use platform-dependent libraries can be used immediately. The goal is to be portable enough that you can pick up whatever capable device you find in your parts bin, prototype your initial code there – then move the program over to a more suitable device if/when appropriate.
If you have particular platforms you’d like to see supported, leave a note.

Simulation & Automated testing

Automated testing in the embedded world using C/C++ is very painful compared to that of recent web technologies. CoffeeScript (or JavaScript) in combination with a modern BDD framework like Mocha makes for simple and beautiful tests.

These tests are run in a simulator which implements the MicroFlo I/O backend (C++) in JavaScript using a Node.js addon. Unfortunately there are not many good open source instruction-level simulators for the platforms MicroFlo supports, so the platform backends can only be tested once we support on-device testing.

Automated testing is a critical piece to making sure that MicroFlo is not only a fun and rewarding way to create microcontroller programs, but also an excellent way to make industrial quality devices.

Easier to get started

A Chrome app is slightly less intimidating for users than a terminal

MicroFlo now ships a Chrome app, used to let Flowhub communicate with the MicroFlo runtime running on a device over serial/USB/Bluetooth. This means it is no longer necessary to run node.js in a terminal, removing a usability issue in getting started.

In the future, this functionality will be baked into the Flowhub Chrome app itself. With time component code editing, building the firmware and flashing the device will also be available there, making Flowhub a true integrated development environment for MicroFlo.






At The Grid we are in need of a flexible service for doing server-side image processing. So after some discussion at LGM in Leipzig, I started writing one based on GEGL, the image processing library that will power the upcoming GIMP 2.10 release. The library provides a demand-driven, graph-based API, with a ton of operations included and support for GPU processing using OpenCL.


For creating image processing pipelines for the server, imgflo acts as a runtime for Flowhub, our open source visual programming IDE. It adds to the existing NoFlo browser, Node.js and MicroFlo microcontroller runtime targets, all possible thanks to the runtime-agnostic protocol.

The preview is live and updates whenever changes are made to the graph. This allows one to quickly experiment and develop new graphs.


As a server, imgflo provides a simple HTTP API where you specify the input image as a URL, the graph to process it through, and any parameters exposed on that graph.
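
A request could look something like this (hypothetical host and graph name; see the project README for the exact URL shape):

https://imgflo.example.org/graph/passthrough?input=https%3A%2F%2Fexample.com%2Fphoto.jpg&width=400&height=225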

Requests being made to the imgflo server HTTP API on demo page

Processed images are cached, so that subsequent requests for the same URL are served straight out of the cache.
The git repository includes configuration and build setup for Heroku, so deploying an instance is a 5-minute job.
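
Conceptually the cache key covers the whole request: the graph name plus all parameters, so any change produces a new processed image. A sketch of that idea in Python (illustrative only, not imgflo's actual scheme):

import hashlib

def cache_key(graph, params):
    # One cached result per unique (graph, parameters) combination
    canonical = graph + '?' + '&'.join('%s=%s' % (k, v) for k, v in sorted(params.items()))
    return hashlib.sha1(canonical.encode('utf-8')).hexdigest()

print(cache_key('passthrough', {'input': 'https://a.com/b.png', 'width': 400}))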


imgflo 0.1 is now minimally useful as an image processing server, but there are many more enhancements on the todo list. Scalability and integration with NoFlo are two big topics, as is expanding the pool of available operations and graphs. Porting filters from GIMP to GEGL is one way of helping with the latter.

Additionally, it would be interesting to provide a way of using imgflo with GIMP. First of all, one could use graphs made with Flowhub via GEGL meta-operations. A crazier idea is to integrate directly, using Flowhub as a companion node-based editor.





LGM2014 will happen April 2-5 in Leipzig, Germany, and this will be my fifth year attending. In fact, LGM 2010 in Brussels was my first international conference ever, and it convinced me that I wanted to work on open source professionally.

I’m very excited about this year’s program, because once again we have managed to combine bleeding-edge developments in open source software for graphics and visuals with a wide range of connecting fields: open hardware, design, art activism, free cultural works, research and education.

Personally, I especially look forward to:

  • Richard Hughes: Building an OpenHardware Spectrograph for Color Profiling in Linux
  • Johannes Hanika: Wavelets for image processing
  • Manuel Quiñones: GEGL is not GIMP – creating graphic applications with GEGL (workshop)
  • Libre Graphics Magazine: Beating the drums, Why we made gender an issue

I am also hosting a BoF session on visual programming of libre graphics tools. Curious to see what comes out of that.

If you are interested in open source and graphics, don’t miss Libre Graphics Meeting.
Register now (it’s free and open for all)!

Can’t go to LGM, but would still like to contribute? Please consider donating to our travel fund.

I would like to thank the GIMP project for sponsoring my trip to LGM2014.



Two months after MicroFlo 0.1.0, another important milestone has been reached. This release brings a basic visual programming environment and initial support for all major desktop platforms (Win/OSX/Linux). The project is still very much experimental, but it is now starting to demonstrate potential advantages over traditional Arduino programming.

Official release notes and announcement here.

The start of something visual

The “Hello World” adopted from Arduino, a program that blinks the built-in LED a couple of times per second. Pressing Play (>) uploads the program to the Arduino using MicroFlo.

The IDE shown is NoFlo UI, a visual programming environment which can also be used to program JavaScript for the browser and Node.js using the NoFlo runtime. This project is developed by Henri Bergius and the rest of the NoFlo team. For more details about the NoFlo IDE project, check their latest update and follow their Kickstarter project.


At Piksel 2013 in Bergen, I also presented MicroFlo for the first time, to an audience of mostly new media and experimental sound artists. The talk goes into detail about the motivations behind the project, from the quite practical to the more philosophical considerations. Not my most coherent talk, but it gives some insight.



For the next milestone, MicroFlo 0.3, several things are already planned. The focus is mostly on practical improvements to the system, but I also hope to complete prototype support for “heterogeneous FBP”: allowing one to program systems consisting of both host computer and microcontroller programs in a unified manner, using NoFlo+MicroFlo.

I am also planning a MicroFlo workshop at Bitraf some time in December and to demo the project at Maker Faire Oslo.

In the meantime, you can get started with MicroFlo for Arduino by following this tutorial. Feedback and contributions welcomed!



Lately I’ve been playing with microcontrollers again: Atmel AVRs, with and without Arduino boards. I’ve made a couple of tiny projects myself, helped an artist friend do interactive works, and helped integrate a microcontroller into an embedded product at work. With Arduino, one does not have to worry about interrupts, registers and custom hardware programmers to get things done using a microcontroller. This has opened the door for many more people than pre-Arduino. But the Arduino language is just a collection of C++ classes and functions; users are still left with telling the microcontroller how to do things: “first do this, then this, then this…”.

I think always having to work at such a low level limits what people make with Arduino, both in who is able to use it and in what current users are able to achieve. So I created a new experimental project: MicroFlo. It has a couple of goals, the first two being the most important:

People should not need to understand text-based, C-style programming to be able to program microcontrollers. But those who do know it should be able to use that knowledge, and be able to mix and match it with higher-level paradigms within a single program.

It should be possible to verify correctness of a microcontroller program in an automated way, and ideally in a hardware-independent manner.

Inspired by NoFlo, and designed for integration with it, MicroFlo implements flow-based programming (FBP). In FBP, a program is constructed by connecting a set of independent components. Each component has in-ports and out-ports, and can only communicate with the others through these. The connections can be defined programmatically, using a declarative text language, or using a visual editor. 2D/3D artists will recognize the concept from node compositors like the one in Blender; sound artists from applications like Reaktor.

Current status: A fridge

I have an old used fridge, by the looks of it made in the GDR some time before I was born. Not long after I got it, the thermostat broke and the cooler would not turn off. Instead of throwing it away and getting a new one – which would be the cool and practical* thing to do – I decided to fix it. Using an Arduino and MicroFlo.
* especially considering that it is several months since it broke…

A fridge is a simple system, something that should be simple for hobbyists to create, so it was a decent first use case to test the framework on. In principle, such a system looks something like this:


The thermostat decides whether to turn the cooler on or off, and the cooler switch realizes this decision. There are many alternative methods of implementing each of these two components. I used a DS1820 digital thermometer IC to read the temperature, and a hacked NEXA remote-controlled relay for the switch.
All the logic, including the temperature thresholds, is done in software on an Arduino Uno.
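
The interesting logic lives in the HysteresisLatch component: it only changes the cooler state outside the threshold band, so the cooler is not toggled rapidly around a single setpoint. A Python sketch of that behaviour (MicroFlo's real component is C++; this is just the logic, with the thresholds from the graph configuration below):

class HysteresisLatch:
    def __init__(self, low_threshold, high_threshold):
        self.low = low_threshold
        self.high = high_threshold
        self.cooling = False  # current output state

    def process(self, temperature):
        # Turn on above the high threshold, off below the low one;
        # anywhere in between, keep the previous state
        if temperature >= self.high:
            self.cooling = True
        elif temperature <= self.low:
            self.cooling = False
        return self.cooling

# With the fridge config below: cooler on at 5 degrees C, off again at 2
latch = HysteresisLatch(2, 5)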

The code below for the cooler switch would have been simpler (a one-liner, left as an exercise for the reader) if I had instead used an active-high relay directly on the mains (illegal unless you are a certified electrician), or alternatively reverse-engineered the 433 MHz protocol used.


MicroFlo code for the fridge, in the .FBP domain-specific language (examples/fridge.fbp):
# Thermostat
timer(Timer) OUT -> TRIGGER thermometer(ReadDallasTemperature)
thermometer() OUT -> IN hysteresis(HysteresisLatch)

# On/Off switch
hysteresis() OUT -> IN switch(BreakBeforeMake)
switch() OUT1 -> IN ia(InvertBoolean) OUT -> IN turnOn(DigitalWrite)
switch() OUT2 -> IN ic(InvertBoolean) OUT -> IN turnOff(DigitalWrite)
# Feedback cycle to switch required for synchronizing break-before-make logic
turnOn() OUT -> IN ib(InvertBoolean) OUT -> MONITOR1 switch()
turnOff() OUT -> IN id(InvertBoolean) OUT -> MONITOR2 switch()

# Config
'5000' -> INTERVAL timer() # milliseconds
'2' -> LOWTHRESHOLD hysteresis() # Celsius
'5' -> HIGHTHRESHOLD hysteresis() # Celsius
'["0x28", "0xAF", "0x1C", "0xB2", "0x04", "0x00", "0x00", "0x33"]' -> ADDRESS thermometer()
board(ArduinoUno) PIN9 -> PIN thermometer()
board() PIN12 -> PIN turnOff()
board() PIN11 -> PIN turnOn()

Is the above solution nicer than using the Arduino IDE and writing in C++? At the moment, maybe not significantly so. But it does prove that this kind of high-level, dynamic programming model is feasible to implement even on devices with 2 kB RAM and 32 kB program memory. And it is a starting point for more interesting exploration.

Next steps

I will continue to experiment with using MicroFlo for new projects, to develop more components and to test/validate the architecture and programming model. I also need to read through the canonical book on FBP, by J. Paul Morrison.

Some bigger things that I want to add include:

  • Ability to introspect the graph running on the device, in particular the packets moving between components.
  • Automated testing (of the framework, individual components and application graphs) using JavaScript BDD test frameworks like Mocha or Vows.
  • Ability to change graphs at runtime, and then persist them to EEPROM so they are loaded on the next reset.

And eventually: allowing one to manipulate and monitor running graphs visually, using the NoFlo development environment. See bug #1.

Curious still? Check out the code, and ask on the FBP mailing list if you have any questions!
