2014
02.12

LGM2014 will happen April 2-5th in Leipzig, Germany, and this will be my fifth year attending. In fact, LGM 2010 in Brussels was my first international conference ever, and it convinced me that I wanted to work on open source professionally.

I’m very excited about this year’s program, because once again we have managed to combine bleeding-edge developments in open source software for graphics and visuals with a wide range of connecting fields: open hardware, design, art activism, free cultural works, research and education.

Personally, I especially look forward to:

  • Richard Hughes: Building an OpenHardware Spectrograph for Color Profiling in Linux
  • Johannes Hanika: Wavelets for image processing
  • Manuel Quiñones: GEGL is not GIMP – creating graphic applications with GEGL (workshop)
  • Libre Graphics Magazine: Beating the drums, Why we made gender an issue

I am also hosting a BoF session on visual programming of libre graphics tools. Curious to see what comes out of that.

If you are interested in open source and graphics, don’t miss Libre Graphics Meeting.
Register now (it’s free and open for all)!

Can’t go to LGM, but would still like to contribute? Please consider donating to our travel fund.

I would like to thank the GIMP project for sponsoring my trip to LGM2014.

2013
11.27

Two months after MicroFlo 0.1.0, another important milestone has been reached. This release brings a basic visual programming environment and initial support for all major desktop platforms (Win/OSX/Linux). The project is still very much experimental, but it is now starting to demonstrate potential advantages over traditional Arduino programming.

Official release notes and announcement here.

The start of something visual

The “Hello World” adapted from Arduino: a program that blinks the built-in LED a couple of times per second. Pressing Play (>) uploads the program to the Arduino using MicroFlo.

The IDE shown is NoFlo UI, a visual programming environment which can also be used to program JavaScript for the browser and Node.js using the NoFlo runtime. This project is developed by Henri Bergius and the rest of the NoFlo team. For more details about the NoFlo IDE project, check their latest update and follow their Kickstarter project.

Talk

At Piksel 2013 in Bergen, I also presented MicroFlo for the first time, to an audience of mostly new media and experimental sound artists. The talk goes into detail about the motivations behind the project, from the quite practical to the more philosophical considerations. Not my most coherent talk, but it gives some insight.


Next

For the next milestone, MicroFlo 0.3, several things are already planned. The focus is mostly on practical improvements to the system, but I also hope to complete prototype support for “heterogeneous FBP”: programming systems that span both a host computer and microcontrollers in a unified manner, using NoFlo+MicroFlo.

I am also planning a MicroFlo workshop at Bitraf some time in December and to demo the project at Maker Faire Oslo.

In the meantime, you can get started with MicroFlo for Arduino by following this tutorial. Feedback and contributions welcomed!

2013
09.23

Lately I’ve been playing with microcontrollers again; Atmel AVRs, with and without Arduino boards. I’ve made a couple of tiny projects myself, helped an artist friend with interactive works, and helped integrate a microcontroller into an embedded product at work. With Arduino, one does not have to worry about interrupts, registers and custom hardware programmers to get things done with a microcontroller. This has opened the door for many more people than pre-Arduino. But the Arduino language is just a collection of C++ classes and functions; users are still left with telling the microcontroller how to do things: “first do this, then this, then this…”.
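
To make the contrast concrete, here is that imperative style in its canonical form – the standard Arduino blink sketch, written with the plain Arduino API (nothing MicroFlo-specific):

// The classic Arduino blink, in the imperative "first do this, then this" style.
const int ledPin = 13;           // the built-in LED on an Arduino Uno

void setup() {
  pinMode(ledPin, OUTPUT);       // first configure the pin as an output...
}

void loop() {
  digitalWrite(ledPin, HIGH);    // ...then turn the LED on,
  delay(500);                    // then wait half a second,
  digitalWrite(ledPin, LOW);     // then turn it off,
  delay(500);                    // then wait again – forever.
}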

I think always having to work at such a low level limits what people make with Arduino, both in who is able to use it and in what current users are able to achieve. So I created a new experimental project: MicroFlo. It has a couple of goals, the first two being the most important:

People should not need to understand text-based, C-style programming to be able to program microcontrollers. But those who do know it should be able to use that knowledge, and be able to mix and match it with higher-level paradigms within a single program.

It should be possible to verify correctness of a microcontroller program in an automated way, and ideally in a hardware-independent manner.

Inspired by NoFlo, and designed for integration with it, MicroFlo implements Flow-Based Programming (FBP). In FBP, a program is constructed by connecting a set of independent components. Each component has in-ports and out-ports, and components can only communicate with each other through these. The connections can be defined programmatically, using a declarative text language, or using a visual editor. 2D/3D artists will recognise the concept from node compositors like the one in Blender, and sound artists from applications like Reaktor.

Current status: A fridge

I have an old used fridge, by the looks of it made in the GDR some time before I was born. Not long after I got it, the thermostat broke and the cooler would not turn off. Instead of throwing it away and getting a new one, which would be the cool and practical* thing to do, I decided to fix it. Using an Arduino and MicroFlo.
* especially considering that it is several months since it broke…

A fridge is a simple system, something that should be simple for hobbyists to create, so it was a decent first use case to test the framework on. In principle, such a system looks something like this:

 

The thermostat decides whether to turn the cooler on or off, and the cooler switch realizes this decision. There are many alternative ways of implementing each of these two components. I used a DS1820 digital thermometer IC to read the temperature, and a hacked NEXA remote-controlled relay for the switch.
All the logic, including the temperature thresholds, is done in software on an Arduino Uno.

The code below for the cooler switch would have been simpler (a one-liner, left as an exercise for the reader) if I had instead used an active-high relay directly on the mains (illegal unless you are a certified electrician), or alternatively reverse-engineered the 433 MHz protocol used.

 

MicroFlo code for the fridge, in the .FBP domain specific language (examples/fridge.fbp)
# Thermostat
timer(Timer) OUT -> TRIGGER thermometer(ReadDallasTemperature)
thermometer() OUT -> IN hysteresis(HysteresisLatch)

# On/Off switch
hysteresis() OUT -> IN switch(BreakBeforeMake)
switch() OUT1 -> IN ia(InvertBoolean) OUT -> IN turnOn(DigitalWrite)
switch() OUT2 -> IN ic(InvertBoolean) OUT -> IN turnOff(DigitalWrite)
# Feedback cycle to switch required for synchronizing break-before-make logic
turnOn() OUT -> IN ib(InvertBoolean) OUT -> MONITOR1 switch()
turnOff() OUT -> IN id(InvertBoolean) OUT -> MONITOR2 switch()

# Config
'5000' -> INTERVAL timer() # milliseconds
'2' -> LOWTHRESHOLD hysteresis() # Celsius
'5' -> HIGHTHRESHOLD hysteresis() # Celsius
'["0x28", "0xAF", "0x1C", "0xB2", "0x04", "0x00", "0x00", "0x33"]' -> ADDRESS thermometer()
board(ArduinoUno) PIN9 -> PIN thermometer()
board() PIN12 -> PIN turnOff()
board() PIN11 -> PIN turnOn()

Is the above solution nicer than using the Arduino IDE and writing in C++? At the moment, maybe not significantly so. But it does prove that this kind of high-level, dynamic programming model is feasible to implement even on devices with 2 kB of RAM and 32 kB of program memory. And it is a starting point for more interesting exploration.

Next steps

I will continue to experiment with using MicroFlo for new projects, to develop more components and to test and validate the architecture and programming model. I also need to read the canonical book on FBP by J. Paul Morrison from cover to cover.

Some bigger things that I want to add include:

  • Ability to introspect the graph running on the device, in particular the packets moving between components.
  • Automated testing (of the framework, individual components and application graphs) using JavaScript BDD test frameworks like Mocha or Vows.
  • Ability to change graphs at runtime, and then persist them to EEPROM so they will be loaded on the next reset.

And eventually: allowing running graphs to be manipulated and monitored visually, using the NoFlo development environment. See bug #1.

Curious still? Check out the code, and ask on the FBP mailing list if you have any questions!

2013
02.01

Are you interested in the overlap between technology, art and design, and in the free, open, libre tools that join these domains? Do you use Libre Graphics software like GIMP, Blender, Krita, Inkscape, Scribus, MyPaint (and similar), and want to meet the people behind them?
Are you a developer of free and open source software in the areas of photography, graphics, page layout, design, publishing, typography, animation or video?

Come to the 8th annual Libre Graphics Meeting, from Wednesday 10th to Saturday 13th April in Madrid, Spain!

Registration is open (no attendance fee, sponsorship possible), and presentation & workshop proposals are accepted until 15th of February (2 weeks from today!).

2012
11.25

A first set of performance improvements for the brush engine has just landed in MyPaint master. My goals for this work were, in order of priority: a) making sure that moving to a GEGL backend in MyPaint does not reduce performance, b) improving performance when integrating the MyPaint brush engine in other applications, and lastly c) improving performance in MyPaint itself.

TL;DR:

  • Users of the soon-to-be-released MyPaint 1.1 should experience about 15% faster drawing of strokes for medium to big brushes.
  • Switching to the GEGL-based backend for MyPaint 1.2 is now both feasible and highly desirable from a performance perspective.


Optimizations

The optimizations are implemented through three complementary strategies:

1. Deferred data access to minimize fetching and updating of tiles

All dab drawing operations that happen as a result of a motion update event are queued up. When the brush engine has calculated where all the dabs should go, the affected tiles are fetched, all dabs are drawn, and the tiles are updated. This is in contrast to before, where each individual dab drawing operation would fetch and update tiles.
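
In rough pseudo-C++ the idea looks something like this (made-up types and names, not the actual MyPaint code):

// Sketch of deferred tile access: queue dabs per event, touch each tile once per flush.
#include <map>
#include <set>
#include <utility>
#include <vector>

constexpr int TILE_SIZE = 64;

struct Tile  { float pixels[TILE_SIZE * TILE_SIZE] = {}; };
struct DabOp { float x, y, radius, opacity; };

struct Surface {
    std::map<std::pair<int, int>, Tile> tiles;  // stand-in for the tile backing store
    std::vector<DabOp> queue;                   // dabs queued since the last flush

    // Called for every motion update event: no tile access, just record the dab.
    void draw_dab(const DabOp &op) { queue.push_back(op); }

    // Called once the brush engine has calculated where all the dabs should go.
    void flush() {
        // 1. Work out which tiles the queued dabs touch
        //    (simplified: assumes each dab fits inside one tile).
        std::set<std::pair<int, int>> touched;
        for (const DabOp &d : queue)
            touched.insert({int(d.x) / TILE_SIZE, int(d.y) / TILE_SIZE});

        // 2. Fetch each tile once, draw all relevant dabs into it, update it once.
        for (const auto &idx : touched) {
            Tile &tile = tiles[idx];
            for (const DabOp &d : queue)
                render_dab(tile, d);
        }
        queue.clear();
    }

    void render_dab(Tile &, const DabOp &) { /* composite one dab into the tile */ }
};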

2. Coarse grained parallelism using multi-threading via OpenMP directives

The tiles to be processed are divided evenly between processing threads (one per core). Each tile is processed completely independently of the other tiles, so there is no locking or synchronization in the drawing code. The tile backing store must naturally be thread-safe, and may ensure this using locks.
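
A minimal sketch of what that looks like with OpenMP (again made-up names, not the real code; compile with -fopenmp):

#include <vector>

struct TileWork { /* tile data plus the dabs to draw into it */ };

static void process_tile(TileWork &work)
{
    // Draw all queued dabs into this one tile. This touches no shared state,
    // so no locking or synchronization is needed here.
    (void)work;
}

void process_tiles(std::vector<TileWork> &tiles)
{
    // One loop iteration per tile; OpenMP divides the iterations between the
    // threads (one per core), and each tile is written by exactly one thread.
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < (int)tiles.size(); ++i)
        process_tile(tiles[i]);
}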

3. Fine grained parallelism using SSE via GCC auto-vectorization

Within each tile, we attempt to make use of auto-vectorization to create the brush dab mask and to composite the dab onto the tile. Currently this is only implemented for part of the mask calculation.
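
The kind of loop the auto-vectorizer handles well looks roughly like this – a simplified stand-in for the mask calculation, not the actual MyPaint code – with contiguous arrays, no aliasing and a branch-free body, compiled with -O3/-ftree-vectorize and SSE enabled:

// Per-pixel dab mask with a simple linear falloff (illustration only).
void compute_mask(const float *__restrict dist_sq,  // squared distance of each pixel from the dab centre
                  float *__restrict mask,           // output: per-pixel dab opacity
                  int n, float radius_sq, float hardness)
{
    for (int i = 0; i < n; ++i) {
        float r = dist_sq[i] / radius_sq;   // 0.0 at the centre, 1.0 at the edge
        float v = 1.0f - r * hardness;      // linear falloff
        mask[i] = v > 0.0f ? v : 0.0f;      // clamp; vectorizes to a max/blend rather than a branch
    }
}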

Results

Details of the results, and how they can be reproduced, are found in the original email thread.

Gains for MyPaint 1.1

Starting with the lowest-priority goal, but the one most relevant to users: the performance impact on MyPaint right now.

Surface drawing results for existing Python-based backend

In terms of raw speed of drawing brushes onto the underlying surface, speedups range from 20% to 50% for larger brushes (16 px+). This sets an upper bound for the speedup perceived by the user.

Looking at the UI-enabled benchmarks of MyPaint, which do everything a normal application instance does, including layer compositing and rendering to screen, around a 15% speedup was observed. As the UI benchmark only tests a single brush at size=8.0 px, it is possible that larger brushes will see a higher speedup.

Users of the soon-to-be-released MyPaint 1.1 should experience about 15% faster drawing of strokes for medium to big brushes.

Note that the backend currently in use does not make use of the multi-threading introduced by (2), because the tile store is not thread-safe, and because it already had a cache mitigating the problem fixed by (1).

GEGL-based backend results, outlook for MyPaint 1.2

Surface drawing results, GEGL versus Python-based surface.
Test results by Till Hartmann on his Phenom II X6.

So in terms of raw surface rendering speed, the GEGL-based backend is now significantly faster than the Python-based one. With 1 and 2 threads it is respectively up to 25% and 100% faster for big brush sizes. With 6 threads, it can be up to 4 times faster.

Switching to the GEGL-based backend for MyPaint 1.2 is now both feasible and highly desirable from a performance perspective.

Note that to see UI performance increases approaching the raw surface drawing increase, we may also need to make the layer compositing multi-threaded.

Gains in other applications

I’m trying to convince the Krita guys to update to the new version and to provide some feedback on the impact. Other consumers of the MyPaint brush engine do not tend to communicate much with us (some are proprietary).
I have strong hopes that (1) will increase their performance radically, as their tile get/set cost is significantly higher than in the MyPaint case: they need to convert between Krita’s internal colorspace and the MyPaint brush engine’s working colorspace each time. They may also be able to enable multi-threading and see speedups similar to the GEGL-based backend as a result.

Future Work

This only lays the groundwork for a better-optimized MyPaint brush engine; many areas still have room for improvement. For one, only a small subset of the heavy code is vectorized. There may be inner loops that can be tweaked. It may be that, with a different tile access pattern than before, a different tile size would be more ideal. Perhaps the expensive calculation of brush dabs could sometimes be avoided by caching them… Thinking bigger, one could move all the drawing (and rendering) to the GPU.

More details on these ideas can be found here. If you are interested in working on any of it, get in touch and start hacking!

2012
11.05

Following the Piksels & Lines research meeting in June, and being accepted in the call for proposals, I am now taking part in the Piksels & Lines Orchestra residency, hosted by Piksel and LGRU, together with media artist and engineer Brendan Howell. Together we will further develop the Piksels & Lines Orchestra, a system that turns the traditional libre graphics tools (like MyPaint, GIMP, Inkscape, Scribus) into instruments for use in a performance setting.

A prototype of this system is to be demonstrated at Piksel X in Bergen, November 21-25th. Piksel is an annual festival where artists and developers working with free and open source software, hardware and art come together. Its diverse program will include presentations, workshops, performances, and installations.

Within the Piksels & Lines Orchestra residency we will also realize an artwork that will be performed in Madrid in April, at the Future Tools conference. This event combines Libre Graphics Meeting, an annual artist and developer meeting around free and open source graphics software, with Interactivos?’13, two weeks of project-centric workshops focused on collaborative creation using open hardware and open software. The call for projects is now open, focusing on “Tools for a Read-Write World”.

Don’t miss any of these events if you are interested in the intersection and interaction between artistic works and open tools!


2012
10.22

Two years after moving to Berlin and joining Openismus, it is time for another big change. I learned a lot while at Openismus, and had a lot of fun both in and outside of working hours. Those of you who have been to the barbecue parties know what a great bunch of people they are. I’m very glad to see that they are now going strong again.

In December I will join Squarehead Technology in Oslo as a Software Developer. There I will work on their advanced microphone array systems for acoustic cameras and acoustical zoom. The role includes both real-time programming, digital signal processing of audio/video and embedded Linux, which is pretty much exactly what I was looking for.

Here is a very quick demo of the technology, used for noise analysis: Nor 848 video

The array microphone system records 200+ channels of audio simultaneously. By exploiting the time difference between channels, digital signal processing can extract and/or visualize audio content at different positions in the recording. This can be done both in real time, and in retrospect (unlike parabolic microphones).
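
The textbook technique behind this is delay-and-sum beamforming; the sketch below is just that textbook version, not Squarehead’s implementation:

// Delay-and-sum beamforming: to "listen" at a chosen position, delay each
// microphone channel by its sound travel time from that position and sum the
// aligned signals. Sound from that spot adds up coherently; the rest tends to cancel.
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static double distance(const Vec3 &a, const Vec3 &b)
{
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// channels[m][t] is sample t of microphone m, all recorded at sample_rate Hz.
std::vector<double> listen_at(const std::vector<std::vector<double>> &channels,
                              const std::vector<Vec3> &mic_positions,
                              const Vec3 &focus,
                              double sample_rate,
                              double speed_of_sound = 343.0)
{
    const std::size_t n_mics = channels.size();
    const std::size_t n_samples = channels[0].size();
    std::vector<double> out(n_samples, 0.0);

    for (std::size_t m = 0; m < n_mics; ++m) {
        // How many samples later does sound from the focus point reach this microphone?
        const std::size_t delay =
            (std::size_t)(distance(focus, mic_positions[m]) / speed_of_sound * sample_rate);
        for (std::size_t t = 0; t + delay < n_samples; ++t)
            out[t] += channels[m][t + delay] / n_mics;   // align and average
    }
    return out;
}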

Fun times ahead!

2012
06.15

Last week I was so lucky as to attend the 3rd Libre Graphics Research Unit (LGRU) meeting in beautiful Bergen, Norway.

The meeting was titled Piksels and Lines and had “a particular focus on improvements, interoperability between and fringe use of F/LOSS graphic bitmap and vector software, as well as generative software used in performative contexts.”

The meeting was structured into three different areas: Seminar, Workshop and Performance.

Seminar

The invited attendees each gave a presentation of their choosing. The presentations were recorded and are available in the online archive of the seminar. The video quality leaves something to be desired, but the audio is generally good.

The presentations I found particularly interesting were:

I gave a presentation titled MyPaint and cross-application workflows. It was an introduction to MyPaint as a creative tool, how it combines raster and vector (piksels and lines) concepts, and my perspective on interoperability between libre graphics applications.

 

 

Workshop

I had hoped to hack some code for one of my existing ideas during the workshops. That did not happen. Instead I ended up hacking specifications. Maybe that is just as good: hacking one can always do later, while hashing out and documenting ideas has to be done while they are fresh.

First, the results of some discussions with Øyvind Kolås, the GEGL maintainer:

A journal for GEGL: a transaction log of the changes made to a GEGL graph. Specification. Discussion. A purely illustrative sketch of what a log entry might look like follows the list below. This feature would allow applications based on GEGL to:

  • Implement non-linear histories (undo/redo), and a timeline of the changes
  • Store the history in a document like OpenRaster
  • Share the history between different applications
  • Let multiple applications work on the same document at the same time
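
Here is that hypothetical sketch of a log entry (illustration only; the real design is in the linked specification):

#include <cstdint>
#include <string>
#include <vector>

// One recorded change to the GEGL graph (hypothetical types and fields).
struct GraphChange {
    enum class Kind { NodeAdded, NodeRemoved, PropertyChanged, Linked, Unlinked };
    Kind         kind;
    std::string  node_id;        // which node in the graph was affected
    std::string  detail;         // e.g. "operation=gegl:gaussian-blur" or "std-dev-x: 2.0 -> 4.0"
    std::int64_t timestamp_us;   // when the change was made
};

// The journal itself is just an append-only sequence of such changes:
// undo/redo is walking the list, sharing history is sharing the list, and
// concurrent editing is merging appends from several applications.
struct GraphJournal {
    std::vector<GraphChange> entries;
    void append(const GraphChange &change) { entries.push_back(change); }
};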

A strategy for improved file format support in GEGL, and using this to improve file support and interoperability in libre graphics applications. Proposed plan.

Executing this plan would move a lot of the existing file format support from GIMP (PSD, XCF, OpenRaster) down into GEGL so that it can be reused across applications. GEGL could then provide image support plugins for GdkPixbuf and QImage, so that – at the very least – previews will work everywhere.
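
To give a feel for what this means from an application’s point of view: loading would go through a small GEGL graph like the sketch below (GEGL’s C API used from C++, assuming the gegl:load and gegl:buffer-sink operations, and that gegl_init() has already been called). A GdkPixbuf or QImage plugin would essentially wrap this kind of graph.

#include <gegl.h>

GeglBuffer *load_with_gegl(const char *path)
{
    GeglNode *graph = gegl_node_new();
    GeglNode *load  = gegl_node_new_child(graph,
                                          "operation", "gegl:load",
                                          "path", path,
                                          NULL);
    GeglBuffer *buffer = NULL;
    GeglNode *sink  = gegl_node_new_child(graph,
                                          "operation", "gegl:buffer-sink",
                                          "buffer", &buffer,
                                          NULL);
    gegl_node_link(load, sink);
    gegl_node_process(sink);    // runs whatever loader handles the format of 'path'
    g_object_unref(graph);
    return buffer;              // caller owns the buffer
}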

 

Chatting with Egil Möller, creator of Sketchspace, also resulted in:

A web-based system supporting a continuous workflow from freehand sketch to finished product. Concept and mockups.

“Imagine starting from a freehand drawing or imported raster image and gradually refining this into a technical document with illustrations, UML-diagrams or even running code or a 3d model.”

Refining here means that the user guides the tool to transform a freehand sketch into vector paths, then into vector shapes, then into something domain-specific and formal like UML – by adding additional data such as annotations or strengthened lines to “disambiguate” the transformation.

Needless to say, this is more of a visionary thing. Realizing it would involve finding good solutions to a fair number of computer vision problems.

Making GEGL available for use in web-based applications. Proposal.

More on the concrete side: allow GEGL to be used in interactive or batch-oriented web applications, or in native applications based on web technologies (Javascript, HTML5 user interfaces).

Some of the discussions also resulted in me writing down the strategy for GEGL integration in MyPaint and the related ideas/plans for how to improve the performance of the MyPaint brush engine.

Now we just need to implement all the stuff… Contributions welcomed!

 

Performance

Since Piksel, with its long history in generative performance arts, was the hosting organization, it was not surprising that a project in that area materialized.

A workshop session hosted by media artist Brendan Howell, called Demonstrating the Unexpected, came up with the idea of the Piksels & Lines Orchestra (PLO): think of the collaborative use of our traditional libre graphics software as an orchestra. The applications, from MyPaint to Scribus, are the instruments; the people using them are the players; a performance is the use of these instruments. Can we create an experience for an audience based on this framework? How would it sound? How would it look?

Architecture diagram - no need for them to be dull looking.

With plenty of code-crafting people available, we decided to spend a couple of hours the next afternoon realizing a prototype. The LGRU blog has the details. We recorded video of our initial performances with this prototype as well, but that has sadly not made it online yet…

 

Thanks!

Thanks a lot to Piksel and LGRU for sponsoring my attendance, and the EU Culture Programme and Bergen municipality for funding activities that support libre graphics and free culture!

2012
05.13

Already covered in the news from LGM was the release of GIMP 2.8, and the fact that GIMP 2.10 will be fully GEGLified. The goat-invasion branch, which has most of that work – the result of 3 weeks of pippin and mitch hacking together on a couch – has already landed in master. This means that GIMP now has support for high bit-depth workflows for most operations. Finally.

Putting the goat in MyPaint

During LGM I started working on using GEGL in MyPaint. I have already mentioned this idea several times, so it was time to stop talking and get hacking.

As a first step in making use of GEGL, I wanted to replace the current surface implementation with one based on GeglBuffer. Since GeglBuffer already provides tiling, and can store any buffer data supported by Babl, this turned out to be easy. Øyvind (pippin) added the semi-quirky pixel format we currently use* in MyPaint to Babl, and I was able to get a rough working GEGL-based surface implementation the first evening.

The MyPaint brush engine working on top of GeglBuffer

* RGBA with premultiplied alpha, in 16-bit unsigned integers, with 2^15 being the maximum value.
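
Spelled out as code, the format’s arithmetic looks something like this (an illustration, not MyPaint’s or Babl’s actual implementation):

#include <cstdint>

// Each of R, G, B, A is a uint16_t, premultiplied by alpha, where 1.0 maps to
// 2^15 = 32768 rather than 65535. With a power-of-two "one", the product of two
// channel values fits in 32 bits and can be renormalized with a simple shift.
constexpr std::uint16_t FIX15_ONE = 1 << 15;

inline std::uint16_t to_fix15(float v)           { return (std::uint16_t)(v * FIX15_ONE + 0.5f); }
inline float         from_fix15(std::uint16_t v) { return (float)v / FIX15_ONE; }

// Multiplying two such values, e.g. premultiplying a colour channel by alpha:
inline std::uint16_t fix15_mul(std::uint16_t a, std::uint16_t b)
{
    return (std::uint16_t)(((std::uint32_t)a * b) >> 15);
}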

The next couple of days went into moving to the GeglBufferIterator API instead of gegl_buffer_{get,set}, to get zero-copy access and better performance, and into improving GEGL and GEGL-GTK so that some of the hacks in the initial implementation could be removed.

Most of the work is in the gegl branch of MyPaint. A simple test application, mypaint-gegl.py, is included, and you can read README.gegl for how to try it out. Warning: only intended for curious developers at this stage.

A lot of work remains to be done for MyPaint to be able to fully use GEGL. The progress is tracked in two bugs, one for MyPaint work and one for GEGL issues. Because one cannot combine PyGObject with PyGTK, it will likely not be possible to fully integrate GEGL in MyPaint before porting to PyGI and GTK+ 3.

Oh, in case the goat references are lost on you – check the GEGL page on wikipedia.

2012
03.24

The standard way of deploying Maliit is to have a single maliit-server instance (per user session), hosting the actual input method (virtual keyboard, handwriting). Applications then communicate with the server (and by extension, the IM) through an IPC.

This allows a single instance of Maliit to serve all applications, which is memory-efficient and robust: a crash in a Maliit IM plugin cannot take down the application and risk the loss of significant user data. The disadvantages are the increased system complexity (a separate server process needs to be running at all times*) and the need to composite the application and input method windows. The latter can be quite challenging to do in a well-performing way on low-powered mobile/embedded devices. See Jan’s blog post for how we handled that on the Nokia N9.

* By default we make use of DBus autostarting, of course.

Application-hosted Maliit

To make Maliit more suitable for systems where only a single application runs (embedded), or where compositing performance is not good enough, we now also allow Maliit to be “application-hosted”: the Maliit server and input method plugins live in the application process, not in a separate server process. Enabling this feature has been a long-running task of mine: all the code in the input-context and server was made transport-independent, a direct transport (no IPC) was introduced, and setting up the server for a given configuration (X11, QPA, app-hosted) was simplified. Other motivations for this work include being able to run the server and IM plugin easily for automated end-to-end system or acceptance testing, or just to easily start the server with a given IM plugin loaded for quick manual testing during development (see Michael’s merge request).

An example application exists as part of the Maliit SDK that demonstrates the feature: maliit-exampleapp-embedded

Maliit running in application-hosted mode: the Maliit server and input method plugin are embedded in the application instead of running in a standalone server process.

This works by having a special input context, “MaliitDirect”, which, instead of connecting to the server over DBus, creates the server and a direct connection to it. As when running standalone, the server will instantiate and manage the necessary input method plug-ins.

Because the IM does not have its own window in this configuration, the application is responsible for retrieving the IM widget from the server and re-parenting it into the appropriate place in the widget hierarchy. For all other purposes the application uses the same interface as if the IM was hosted remotely, making sure the abstraction is not broken and that one can easily use the application with Maliit deployed in different configurations.
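
In Qt terms, the re-parenting step amounts to something like the sketch below; how the widget is obtained from the in-process Maliit server is left abstract here – see maliit-exampleapp-embedded for the real calls:

#include <QLayout>
#include <QWidget>

// imWidget is the input method widget retrieved from the application-hosted server.
void embedInputMethod(QLayout *applicationLayout, QWidget *imWidget)
{
    // Adding the widget to one of the application's layouts re-parents it,
    // so the virtual keyboard renders inside the application's own window.
    applicationLayout->addWidget(imWidget);
    imWidget->hide();   // the server decides when it should be shown
}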

This feature currently works with Qt4 applications, and has been in Maliit since the latest release (0.90.0). One issue is that, with the current input method API, the plugin assumes a fullscreen window: overlays extending beyond the base area of the IM will be clipped, and the size needs to be overridden. This is something we are fixing in the new, improved API.

Compositor-hosted Maliit

Another approach to making rendering perform better is to host the input method in the process responsible for the compositing. This also reduces the number of processes involved in rendering/compositing, and the associated overhead. This could be an X11 compositing window manager (like KWin or mcompositor), but a more realistic use case is a Wayland compositor (for instance one based on QtCompositor).

The API allows the consumer to inject a class instance for the configuration-dependent logic, so that the Maliit server can be integrated with the logic in the rest of the compositor. Applications will use the normal “Maliit” input context and communicate with the server through an IPC like DBus.

After the work on application-hosted Maliit, this feature was completed by making the server and connection libraries available as public API. The API is available in the latest Maliit release (0.90.0), but is considered unstable until Maliit hits the 1.0 mark.