Blodbad at Rock In, Oslo

As the previous post explained, I was in Oslo yesterday for the Ubuntu Karmic Release party. But as I was heading for the release party I stumbled upon a friend of mine from Tønsberg. It turned out three bands from my home town were playing in Oslo that very evening, one of them being buddies of mine. So I got to combine two of my passions, music (metal) and free software, in one evening.

The bands playing were Niku, Concrete, Framferd and Serepa Deformed. The event was part of a two-day mini festival called Blodbad (bloodbath), and the venue was Rock In.

There was a decent crowd for a Thursday, but I’m sure both the bands and the hosts had hoped for more. In any case the sound was pretty good, and my friends in Concrete (myspace) delivered a great performance, so it was very enjoyable.

Ubuntu 9.10 Release Party, Oslo

Whereas all my physical machines run Arch Linux, I do have some virtualized servers running Ubuntu. And in general, it is the GNU/Linux distro I recommend to people new to Linux. Not only because it has most of the things you need in a decent package, and a balance between free and proprietary that I like, but also because it has a large and welcoming community. It is one of the most popular distros, and this makes it easy for people to find help when they need it. So, being the curious geek that I am, I had to check out the nearest release party for Ubuntu 9.10 (Karmic Koala).

The release party in Oslo was hosted by Redpill-Linpro, with food sponsored by Freecode. Despite this, I kinda feared that it would be a tiny and unprofessional event. But I’m glad to say that fear was unjustified, as there was both a decent amount of people (50++) and many good presentations. The talks given were:

  • What’s wrong with the Bourne shell and what can be done about it? – Axel Liljencrantz
  • easypeasy! A Norwegian Ubuntu-based distro for netbooks – Jon Ramvi and Tor Grønsund
  • Getting to know Upstart – Stig Sandbeck Mathisen
  • Stop Datalagringsdirektivet (the Data Retention Directive) – Fribit

The first talk was about fish (the friendly interactive shell), and why and how it has improved the command line shell for both beginners and experts. I found the author’s reasoning to be good, and it does have a lot of nifty features, but I don’t think I’ll switch away from bash. Mostly because fish breaks compatibility with it, and I think I’d find that tiresome pretty fast.

The thing that amazed me about the easypeasy guys was how ambitious and serious they were about their distro. They didn’t just wanna package something in a slightly different way, but instead do customization and even application development, and try to get deals with companies producing web content. The core ideas centered around simplifying the UI, tailoring the OS to the hardware platform and bringing the “web”/“cloud” down to the desktop. Not really my cup of tea, but I’m curious to see how they will do.

Upstart is the program that has replaced the nearly 25-year-old init system (which is responsible for the programs run in early userspace) in Ubuntu 9.10. It is asynchronous, has a lot more features, and can be used in an init-compatible way. So it might well end up replacing init in other distros as well. Time will tell.
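To illustrate (this is my own sketch, not from the talk), an Upstart job is a small declarative file in /etc/init/ that reacts to events instead of relying on a fixed ordering of init scripts. The service name and paths here are hypothetical:

```
# /etc/init/mydaemon.conf -- hypothetical example job
description "An example daemon"

# Started by events, not runlevel ordering
start on filesystem
stop on runlevel [016]

# Upstart supervises the process and restarts it if it dies
respawn

exec /usr/sbin/mydaemon
```

Compared to a SysV init script, there is no hand-written start/stop/status boilerplate; Upstart tracks the process itself.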

At last there was a short plea to register against “Datalagringsdirektivet”, an EU directive that Norway might pass if it is not vetoed. The directive obliges all ISPs to store information about which users had a given IP address for at least 6 months. This means that even if you are not under suspicion, your activities are being logged just in case you might be doing something illegal. This is in stark contrast to the principle that one is “innocent until proven guilty”. As such I have signed against it here. And so should you.

Here are some (not very good) pictures I took during the event. Have to link them because WordPress refuses to generate thumbnails for the images. Licensed CC-by-sa.

All in all it was a great event, and I might very well end up going to the next one as well. Which will hopefully be even bigger and better!

Linux everywhere?

I stopped by my local library today (must have been 5 years since last time!) and guess what I found:

library-linux-thinclient

Yes, Linux. Debian or a derivative with KDE3, running on an HP thin client. It even has Tux! At first I thought it was just this one terminal, but a quick peek told me that all the publicly available machines ran this system. And several people were surfing around on the web using Firefox with no apparent struggles. The computers for the employees ran Windows XP, though. Definitely a step in the right direction, but the fact that I was somewhat surprised tells me that Linux adoption still has a long way to go. Here is another machine:
library-linux-workstation

Maybe I’ll ask them about their experiences with the system when I go there next time. Would be interesting to hear.

At home I’ve migrated all my machines to Arch Linux x86_64 as of yesterday. Previously it was a mix of Arch 32/64 bit and Debian. No major issues yet, so I guess I can say that it went pretty smoothly. All the machines are now KVM capable, so I’ll set them up as a small virtualization cluster. But first I want to install Linux (OpenWrt) on my router so that it can run DNS/BOOTP/DHCP and give me a bit more flexibility in the configuration.

I’m now also a Linux Foundation member. I’m not sure if I’ll ever use any of the benefits I get, but the cause is worth 25 USD per year nonetheless. I even asked them not to send me the free t-shirt. I’d never use it anyway, as it is bright white with big logos on it. And the fit on such shirts is always the same crappy “loose” (read: “shoulder-mounted tent” on a slim guy like me). Not my style.

Today I sent in my first deliverable for my senior project. We also had the first meeting and agreed on tasks for the next weeks/months. More about that later!

GPGPU in ALICE workshop

I’ve attended a small workshop/seminar in Bergen (at IFT) about nVidia CUDA and its use in the ALICE project at CERN. The goal for me and the other students from Vestfold University College was to get a concrete task for our senior project and to get up to speed on some of the things we need to get started.

The three days were heavily packed with information, so this will just be a tiny, tiny summary. I will probably write up some posts explaining parts of it in more detail later. If I can get hold of the presentations given I will link them here.

Talks/topics

Day 1:

Why parallelize and why GPGPU?

Computing units no longer scale simply by frequency, so we need to parallelize to increase performance. GPUs are commodity hardware designed for massive parallelism, driven forward by a huge consumer market in 3D graphics => high performance and low cost, and flexible and easy to use compared to DSPs, FPGAs, custom vector CPUs etc.

The Nvidia CUDA platform

Describes the underlying hardware platform and a concept for accessing it from a high-level language (C with some C++ and custom extensions). The current generation has approximately 200 stream processors and is designed to have several thousand threads running at the same time. There are cards especially for GP computing (the Tesla series), but for a lot of computations the high-end GeForce cards are good enough. Practically all newer GPUs from Nvidia are CUDA capable, all the way down to the Ion found in netbooks/nettops.

CUDA programming model

Functions called kernels are executed on the device. All threads run this same code; differentiating which threads do what is done by using their IDs in statements. Threads are grouped into blocks, and blocks are grouped into grids. The number of threads/blocks is specified on kernel invocation and is used for scaling to different devices. Memory is divided into several parts, each of which needs to be handled manually. Device <=> host transfers are also done manually.
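To make the model concrete, here is a minimal vector-add sketch of my own (not from the workshop); all names in it are mine. It shows the pieces described above: a kernel, per-thread indexing via IDs, manual device memory management and host <=> device copies, and block/grid sizes given at invocation.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Hypothetical example kernel: element-wise vector addition.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    // Each thread handles one element; its global index is built
    // from its block and thread IDs.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                // guard the last, partially filled block
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float h_a[1024], h_b[1024], h_c[1024];
    for (int i = 0; i < n; i++) { h_a[i] = i; h_b[i] = 2 * i; }

    // Device memory and host <=> device transfers are managed manually.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Block/grid dimensions are chosen at kernel invocation.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vecAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %f\n", h_c[10]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}
```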

Day 2:

The TPC and HLT in ALICE.

Incredibly high data rates; it is not feasible or desirable to store all the raw data => some processing/analysis needs to be done in real time (“online”). Today this is done using a large cluster of machines (the HLT). Using GPUs will allow this cluster to consist of a smaller number of nodes => cheaper, and it will free resources for other things like offline analysis.

Optimizing CUDA code.

Calculations are fast, (global) memory is slow => focus on reducing and optimizing memory access. The hardware is designed for massive parallelism => make sure your problem sizes are big enough and that you split the problem in a good way.
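A concrete example of the memory point (my own illustration, not from the talks): when neighbouring threads touch neighbouring global memory addresses, the hardware can coalesce the loads into few transactions; strided access patterns cannot be combined and run much slower, even though the arithmetic is identical.

```cuda
// Hypothetical sketch: coalesced vs. strided global memory access.
__global__ void scale_coalesced(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;  // thread i touches element i: coalesced, fast
}

__global__ void scale_strided(float *data, float factor, int n, int stride)
{
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
    if (i < n)
        data[i] *= factor;  // neighbouring threads hit addresses far
                            // apart: cannot be coalesced, slow
}
```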

Day 3:

OpenCL

Basically the CUDA model plus vendor/platform independence, minus the C++ features; otherwise only minor differences => should be easy to port to and understand once you know CUDA. Implementations are on the way, also for CPUs (x86 with SSE, and Cell) and DSPs (Texas Instruments). Likely to be a good candidate for scientific computing in the future.

CUDA-ified tracking code

We had a look at how CUDA has already been used to implement a task within the AliRoot framework used in the HLT. Some concept mismatches lead to having to wrap code in somewhat ugly ways. This is further uglified/complicated by the fact that the same code should be able to run on either the CPU or the GPU.

Our task: Vertex finding

More about this in a later post!

Other

I brought a camera along, but as I’m not used to having one with me I completely forgot to take pictures. We did get to see Bergen a little bit, though we spent most of our time on the CUDA/CERN stuff.
And there was rain. Every day, pretty much all day. Not sure how I’d cope with that if I lived there.