Ubuntu 9.10 Release Party, Oslo

While all my physical machines run Arch Linux, I do have some virtualized servers running Ubuntu. And in general, it is the GNU/Linux distro I recommend to people new to Linux. Not only because it packages most of the things you need in a decent way and strikes a balance between free and proprietary that I like, but also because it has a large and welcoming community. It is one of the most popular distros, and this makes it easy for people to find help when they need it. So, being the curious geek that I am, I had to check out the nearest release party for Ubuntu 9.10 (Karmic Koala).

The release party in Oslo was hosted by Redpill-Linpro, with the food being sponsored by Freecode. Despite this, I kinda feared that it would be a tiny and unprofessional event. But I’m glad to say that fear was unjustified, as there was both a decent number of people (50+) and many good presentations. The talks given were:

  • What’s wrong with the Bourne shell and what can be done about it? – Axel Liljencrantz
  • easypeasy! A Norwegian Ubuntu-based distro for netbooks – Jon Ramvi and Tor Grønsund
  • Getting to know Upstart – Stig Sandbeck Mathisen
  • Stop the Data Retention Directive (Datalagringsdirektivet) – Fribit

The first talk was about fish (the friendly interactive shell), and why and how it has improved the command-line shell for both beginners and experts. I found the author’s reasoning to be good, and it did have a lot of nifty features, but I don’t think I’ll switch away from bash. Mostly because fish breaks compatibility with it, and I think I’d find that tiresome pretty fast.

The thing that amazed me about the easypeasy guys was how ambitious and serious they were about their distro. They didn’t just want to package something in a slightly different way, but instead do customization and even application development, and try to strike deals with companies producing web content. The core ideas centered around simplifying the UI, tailoring the OS to the hardware platform, and bringing the “web”/”cloud” down to the desktop. Not really my cup of tea, but I’m curious to see how they will do.

Upstart is the program that has replaced the nearly 25-year-old init system (which is responsible for the programs run in early userspace) in Ubuntu 9.10. It is asynchronous, has a lot more features, and can be used in an init-compatible way. So it might well end up replacing init in other distros as well. Time will tell.

At last there was a short plea to register against “Datalagringsdirektivet” (the Data Retention Directive), an EU directive that Norway might pass if not vetoed. This directive obliges all ISPs to store information about which users had a given IP address for at least 6 months. This means that even if you are not under suspicion, your activities are being logged just in case you might be doing something illegal. This is in stark contrast to the principle that one is “innocent until proven guilty”. As such I have signed as being against it here. And so should you.

Here are some (not very good) pictures I took during the event. I have to link to them because WordPress refuses to generate thumbnails for the images. Licensed CC BY-SA.

All in all it was a great event, and I might very well end up going to the next one as well. Hopefully that one will be even bigger and better!

Linux everywhere?

I stopped by my local library today (must have been 5 years since last time!) and guess what I found:

[Image: library-linux-thinclient]

Yes, Linux. Debian or a derivative with KDE3, running on an HP thin client. It even has Tux! At first I thought it was just this one terminal, but a quick peek told me that all the machines available to the public ran this system. And several people were surfing around on the web using Firefox with no apparent struggles. The computers for the employees ran Windows XP, though. Definitely a step in the right direction, but the fact that I was somewhat surprised tells me that Linux adoption still has a long way to go. Here is another machine:
[Image: library-linux-workstation]

Maybe I’ll ask them about their experiences with the system when I go there next time. Would be interesting to hear.

At home I’ve migrated all my machines to Arch Linux x86_64 as of yesterday. Previously it was a mix of Arch 32/64-bit and Debian. No major issues yet, so I guess I can say it went pretty smoothly. All the machines are now KVM capable, so I’ll set them up as a small virtualization cluster. But first I want to install Linux (OpenWrt) on my router so that it can run DNS/BOOTP/DHCP and give me a bit more flexibility in the configuration.

I’m now also a Linux Foundation member. I’m not sure if I’ll ever use any of the benefits I get, but the cause is worth 25 USD per year nonetheless. I even asked them not to send me the free t-shirt. I’d never use it anyway, as it was bright white and had big logos on it. And the fit on such shirts is always the same crappy “loose” (read: “shoulder-mounted tent” on a slim guy like me). Not my style.

Today I sent in my first deliverable for my senior project. We also had the first meeting and agreed on tasks for the next weeks/months. More about that later!

GPGPU in ALICE workshop

I’ve attended a small workshop/seminar in Bergen (at IFT) about nVidia CUDA and its use in the ALICE project at CERN. The goal for me and the other students from Vestfold University College was to get a concrete task for our senior project and to get up to speed on some of the things we need to get started.

The three days were heavily packed with information, so this will just be a tiny tiny summary. I will probably write up some posts explaining some parts in more detail later. If I can get a hold of the presentations given I will link them here.

Talks/topics

Day 1:

Why parallelize and why GPGPU?

Computing units do not scale simply by frequency anymore, so we need to parallelize to increase performance. GPUs are commodity hardware designed for massive parallelism, driven forward by a huge consumer market in 3D graphics => high performance and low cost, flexible and easy to use compared to DSPs, FPGAs, custom vector CPUs etc.

The Nvidia CUDA platform

Describes the underlying hardware platform and a concept for accessing it from a high-level language (C with some C++ and custom extensions). The current generation has approximately 200 stream processors and is designed to have several thousand threads running at the same time. There are cards made especially for GP computing (the Tesla series), but for a lot of computations the high-end GeForce cards are good enough. Practically all newer GPUs from nVidia are CUDA capable, all the way down to the Ion found in netbooks/nettops.
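
Just to make those numbers tangible, here is a minimal sketch of my own (not something shown at the workshop) that queries them from the CUDA runtime API; the exact figures will of course vary from card to card:

    // Minimal device query sketch using the CUDA runtime API (illustrative only).
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaGetDeviceCount(&count);

        for (int dev = 0; dev < count; ++dev) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            printf("Device %d: %s\n", dev, prop.name);
            printf("  Multiprocessors:       %d\n", prop.multiProcessorCount);
            printf("  Max threads per block: %d\n", prop.maxThreadsPerBlock);
            printf("  Global memory:         %lu MB\n",
                   (unsigned long)(prop.totalGlobalMem / (1024 * 1024)));
        }
        return 0;
    }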

CUDA programming model

Functions called kernels are executed on the device. All threads run this same code; differentiating what each thread does is done by using its IDs in statements. Threads are grouped into blocks, and blocks are grouped into grids. The number of threads/blocks is specified on kernel invocation and is used for scaling to different devices. Memory is divided into several parts, each of which needs to be handled manually. Device <=> host transfers are also done manually.
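
To make the model concrete, here is a tiny vector-addition example of my own (illustrative, not code from the workshop) showing the kernel qualifier, the thread/block IDs, the manual host <=> device transfers and the launch configuration:

    #include <stdio.h>
    #include <cuda_runtime.h>

    // Kernel: every thread computes one element, selected via its thread/block IDs.
    __global__ void vec_add(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                      // guard against the last, partial block
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = i; h_b[i] = 2 * i; }

        // Device memory and host <=> device transfers are handled manually.
        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        // Threads per block and number of blocks are given at kernel invocation.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vec_add<<<blocks, threads>>>(d_a, d_b, d_c, n);

        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[100] = %f\n", h_c[100]);

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }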

Day 2:

The TPC and HLT in ALICE.

Incredibly high data rates mean it is neither feasible nor desirable to store all the raw data => some processing/analysis needs to be done in real time (“online”). This is currently done using a large cluster of machines (the HLT). Using GPUs will allow this cluster to consist of a smaller number of nodes => cheaper, and it will free resources for other things like offline analysis.

Optimizing CUDA code.

Calculations are fast, (global) memory is slow => focus on reducing and optimizing memory accesses. Designed for massive parallelism => make sure your problem sizes are big enough and that you split the problem up in a good way.
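
As an illustration (again a sketch of my own, not code from the workshop), here is a block-wise sum that stages data in fast on-chip shared memory, so each element is read from slow global memory only once:

    // Block-wise reduction: one coalesced global read per element, the rest
    // of the work happens in shared memory, one global write per block.
    __global__ void block_sum(const float *in, float *out, int n)
    {
        extern __shared__ float buf[];       // size is given at kernel launch
        int tid = threadIdx.x;
        int i = blockIdx.x * blockDim.x + tid;

        buf[tid] = (i < n) ? in[i] : 0.0f;   // single coalesced global read
        __syncthreads();

        // Tree reduction entirely in shared memory (blockDim.x must be a power of two).
        for (int s = blockDim.x / 2; s > 0; s >>= 1) {
            if (tid < s)
                buf[tid] += buf[tid + s];
            __syncthreads();
        }

        if (tid == 0)
            out[blockIdx.x] = buf[0];        // single global write per block
    }

    // Launched for example as:
    //   block_sum<<<blocks, 256, 256 * sizeof(float)>>>(d_in, d_out, n);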

Day 3:

OpenCL

Basically the CUDA model plus vendor/platform independence, minus the C++ features; otherwise only minor differences => should be easy to port to and to understand when you know CUDA. Implementations are on the way, also for CPUs (x86 with SSE, and Cell) and DSPs (Texas Instruments). Likely to be a good candidate for scientific computing in the future.
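
As a rough comparison of my own, here is the vector-addition kernel from above in CUDA, with the OpenCL C equivalent shown as a comment; it really is mostly a renaming exercise:

    // CUDA version:
    __global__ void vec_add(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    // OpenCL C equivalent (as a comment, to keep this a single CUDA snippet):
    //   __kernel void vec_add(__global const float *a, __global const float *b,
    //                         __global float *c, int n)
    //   {
    //       int i = get_global_id(0);
    //       if (i < n) c[i] = a[i] + b[i];
    //   }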

CUDA-ified tracking code

We had a look at how CUDA has already been used to implement a task within the AliRoot framework used in the HLT. Some concept mismatches lead to having to wrap code in somewhat ugly ways. This is further uglified/complicated by the fact that the same code should be able to run on either the CPU or the GPU.
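
As a hypothetical sketch (not the actual AliRoot/HLT code) of the kind of wrapping involved: with nvcc a function can be marked for both targets, and a small macro hides the qualifiers so the same routine also compiles as plain C++ for the CPU:

    // __CUDACC__ is defined when nvcc is compiling; otherwise the qualifiers
    // expand to nothing and the function is ordinary host code.
    #ifdef __CUDACC__
    #define HOSTDEV __host__ __device__
    #else
    #define HOSTDEV
    #endif

    // Hypothetical helper usable from both CPU and GPU code paths.
    HOSTDEV float distance2(float x1, float y1, float x2, float y2)
    {
        float dx = x2 - x1;
        float dy = y2 - y1;
        return dx * dx + dy * dy;
    }

    #ifdef __CUDACC__
    // GPU entry point: one thread per point pair.
    __global__ void distance_kernel(const float *x1, const float *y1,
                                    const float *x2, const float *y2,
                                    float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = distance2(x1[i], y1[i], x2[i], y2[i]);
    }
    #endif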

Our task: Vertex finding

More about this in a later post!

Other

I brought a camera along, but as I’m not used to having one with me I completely forgot to take pictures. We did get to see Bergen a little bit, though we spent most of our time on the CUDA/CERN stuff.
And there was rain. Every day, pretty much all day. Not sure how I’d cope with that if I lived there.

Senior project confirmed

I will be implementing algorithms on nVidia graphics cards using CUDA for use at CERN. It’s a collaboration project between my school, the University of Bergen and a university in Germany (presumably Heidelberg). While a lot of the specifics are still up in the air, it will most likely be for the High Level Trigger system of the ALICE detector.

ALICE is the detector system that will be used for studying the biggest particle collisions at CERN when their new particle accelerator gets up and running again this fall and throughout next year. They will be colliding lead nuclei in order to recreate the conditions that existed less than one billionth of a second after the Big Bang. In these experiments one hopes to find out more about quark-gluon plasma.

Needless to say, this is a really exciting project for me. Not a lot of people are lucky enough to get to work on such a grand thing for their senior project. I expect plenty of good technical challenges. And who knows, maybe there is even an opportunity for more than a senior project here?

First up is an introductory course in Bergen on the 5th-7th of October, to get up to speed and get a concrete assignment. I also hope to do some exploratory coding with CUDA before then.

MoinMoin, MyPaint & Me

One of my latest addictions is wikis. So much so that writing a blog post like this, without being able to use wiki markup syntax, is quite annoying. I might have to fix that some day. Specifically, I’ve set up my own MoinMoin wiki, where I can put all my silly ideas and thoughts. Circumstances made it so that I fixed up the Norwegian translation of the 1.8.3 version. Due to pure foolishness on my part I did not check the state of the translation for the upcoming 1.9 first, so now we have a lot of conflicting strings (circa 200). Yay… Hopefully it will turn out for the better as Jørg Cassens, the translator focusing on 1.9, and I get them in sync again.

Recently I’ve also become involved in MyPaint development. It’s “a fast and easy open-source graphics application for digital painters”. You can follow development over at Gitorious. Things fixed so far:

[Screenshot: MyPaint with the filename in the title bar]

  • Filenames in the title bar!
  • Improved file handling; nice and consistent error messages when trying to open a file that doesn’t exist or that you don’t have permission to read.

My artistic skills are severely limited, and I’ve never used a tablet before, but here is the obligatory screenshot. Needless to say, I don’t do this awesome program any justice at all. Here is someone who does (David Revoy). But hey, I’m at least halfway there, right?!

If you are on Arch Linux, packages are available from the AUR, both the stable version and a -git one. Somehow I’m also the maintainer of those now… If you are on anything else, you will have to go to the homepage and get it there. Hopefully packages will be in the official Debian and Ubuntu repos shortly.
Do note that it does not build on Windows or under Cygwin at the moment. So if you are the type of person who can make such magic happen, please step up to the task!

I hope to do some more adventurous coding on some of my own project ideas soon, but for now I expect to keep contributing bits and pieces to MyPaint to gain some experience.