Predicting the next Top500 list: the outcome

A couple of months ago I discussed my bets for the Top500 predict contest. I still owe you an update on the outcome...

After placing my original bets, I kept a close eye on their value. The contest allowed you to sell bets for a part of their possible return value. I quickly realized this was an important mechanism: selling was a way to stack up credits in my virtual wallet with a 100% guarantee, as opposed to hoping that my bets were correct and winning a (possibly only slightly) larger return.

The value of a bet was determined by the degree to which other bettors agreed with it. The more strongly people agreed with one of your bets, the closer its value got to the possible return value. When a lot of people agreed with a bet, selling it was often clearly the best way to go. I opted to sell bets valued at at least 65% of their possible return value.

After selling a bet, it was often interesting to place another slightly different bet on the same aspect of the Top500 list. Although the initial value of that bet would be fairly low, it was possible to either sell that bet again in a couple of days, or just leave the bet there, hope you got it right and cash in at the end. By just betting a small part of what you earned by selling the original bet, you could quickly stack up lots of credits and still hope for a large total return at the end.

Here's my entire trade history, and my portfolio at the end of the competition:

When I joined the contest, I spent my entire budget of 12,000 credits on bets. By selling high-valued bets, I collected about 41,500 credits throughout the contest, and placed other bets for about 3,300 credits. Of the 8 bets still open at the end of the contest, 4 were correct, which yielded roughly another 17,500 credits (only about 4,600 credits were lost on incorrect bets). The resulting total of 55,700 credits, a 4.6x increase over my original budget, was enough to keep first place in the bettors ranking, which resulted in a nice shiny iPad on my doorstep a couple of weeks later (thanks deplhit!).
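
As a quick sanity check on the arithmetic above (using the rounded figures from the text):

```python
# Rough sanity check of the credit bookkeeping described in the text.
budget = 12_000   # initial credits, all spent on the original bets
sold = 41_500     # credits collected by selling high-valued bets
rebets = 3_300    # credits spent on new bets after selling
won = 17_500      # returns from the 4 correct bets still open at the end

total = sold - rebets + won
print(total)                      # 55700
print(round(total / budget, 1))   # 4.6
```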

I can't really say I've put that iPad to good use since then (unless you consider playing games during daily commutes or letting a drooling 1.5-year old smash his fists on it useful), but it's a nice toy to have nevertheless. More importantly, competing in the Top500 predict made me dive into a little bit of supercomputer history, which was quite interesting.

Predicting the next Top500 list

(yes, it's been a while)

Two weeks ago, one of my colleagues mentioned the Top500 predict website to me, a game in which you need to predict the next Top500 list of supercomputers, which is due in November 2011.

In the game, you receive a number of credits (12K in total, if you provide some personal details) that you can spend on bets. Each bet concerns some feature of the Top500 list; you bet by predicting that particular feature and staking a number of credits on the prediction. For most features, you need to specify an interval rather than a fixed value.

The interesting aspect is that the narrower you make the interval, the more your bet may return, but of course the bigger the chance that the actual value falls outside the interval you specified. Competing in the game is absolutely free; although it involves betting, no real money is involved at all (e.g. you can't buy additional credits to bet with). The winner of the game does win an iPad 2 though...
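
To make the tradeoff concrete, here's a toy model; the payout rule is invented for illustration, since the contest's actual odds formula wasn't published:

```python
# Toy model of interval betting: the payout rule here is made up for
# illustration and is not the contest's actual formula.
def bet(stake, width, plausible_range=100):
    payout = stake * plausible_range / width  # narrower interval, bigger payout
    p_hit = width / plausible_range           # ...but less likely to contain the value
    return payout, p_hit

for width in (5, 20, 50):
    payout, p_hit = bet(1000, width)
    print(f"width {width}: payout {payout:.0f}, hit chance {p_hit:.0%}")
```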

I decided to take this game seriously instead of just entering bets straight away. That same evening, with some free time on my hands, I studied the Top500 lists of the past couple of years in terms of the features to predict. After a couple of hours, I was confident I could place well-informed bets with a good chance of being correct. In this post I'll briefly discuss each of those bets and the reasoning behind them. Feel free to disagree with me.

Predicting the future top supercomputer

Let's look at the bets involving the future top supercomputer first, which involve three features: overall performance obtained on the LINPACK benchmark (Rmax), the country hosting the top system, and the power consumption of that system.

In the last two editions of the Top500 list, China (Nov. 2010) and Japan (June 2011) hosted the top supercomputer. This heavily contrasts with prior lists, in which the USA was firmly on top for 12 editions in a row (since Nov. 2004). Both China and Japan came up with significantly better performing systems than the former champion. The Chinese Tianhe-1A system, which relies heavily on GPUs to boost performance, has an Rmax of 2.57 petaflops, while the most recent Japanese champion, the K computer, has an Rmax of 8.16 petaflops. Both offer significantly more performance than the 1.76 petaflops of Jaguar (USA), which topped the June 2010 list.

Especially the huge performance leap of the K computer in the most recent edition of the Top500 list makes me think that no system will be able to beat the current champion in the next edition. The USA has two large systems in the pipeline, Mira (~10 petaflops) and Sequoia (~20 petaflops), but these won't be ready until early 2012 according to the latest news. IBM's withdrawal from the Blue Waters project only makes me more confident. I'm unaware of other systems that might compete with the K computer (but I may be missing some).

Thus, under that assumption, predicting these three features is fairly easy: pick Japan as the country that will host the next top supercomputer, and choose fairly narrow intervals for both Rmax and power consumption, just wide enough to contain the current values. If the K computer does indeed stay on top, this results in a huge return: ~23.7K credits for the 4.5K credits spent. If I'm wrong, however, I lose over a third of my total budget, so this group of bets is kind of a game changer for me.

  • RMax of Top 1 Machine

    my bet: 8 - 9 PFLOPS (2K, return: 12.6K)

  • Country with the Top 1 Supercomputer

    my bet: Japan (1.5K, return: 6.5K)

  • Power Consumption of the Top 1 Machine

    my bet: 9 - 12MWatt (1K, return: 4.6K)
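
Totting up the three bets above confirms the stakes and the possible return mentioned earlier:

```python
# Totting up the three bets on the top system (credits in thousands).
stakes = {"Rmax": 2.0, "country": 1.5, "power": 1.0}
returns = {"Rmax": 12.6, "country": 6.5, "power": 4.6}
budget = 12.0

print(sum(stakes.values()))                     # 4.5K credits staked
print(round(sum(returns.values()), 1))          # 23.7K possible return
print(round(sum(stakes.values()) / budget, 2))  # 0.38, over a third of my budget
```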

Predicting national performance development

Another group of features to predict involves the overall performance development of three selected countries: Germany, China and the USA. This is the ratio of a country's total Rmax (summed over all of its systems) to the same sum on the previous list. A bit of a dubious measure, as adding or removing a single system might make a big difference, but hey...
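
A quick illustration of why the measure is noisy (the Rmax values here are made up):

```python
# Why a single system can move the ratio a lot; Rmax values are made up.
prev_edition = [2.5, 1.1, 0.8]        # a country's systems (PFLOPS), previous list
next_edition = [2.5, 1.1, 0.8, 1.2]   # same systems plus one new machine

ratio = sum(next_edition) / sum(prev_edition)
print(round(ratio, 2))   # 1.27, i.e. +27% from one new system
```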

A little analysis of the performance gains across consecutive editions of the Top500 list showed that this is a particularly hard feature to predict. Over the years, the ratios vary wildly, and no real trend can be found. China has shown remarkable progress by improving its total Rmax by more than 75% three times in a row (Nov. 2009 - Nov. 2010). In general, however, the ratios are usually somewhere around 1.2, i.e. about 20% more total Rmax compared to the previous Top500 list.

Making accurate predictions for these features is very difficult (if not impossible), so I chose fairly large intervals. Naturally this results in lower returns, but I hope it lets me avoid losing the credits I bet. If the actual ratios fall within the specified intervals for each country, this group of bets can get me a total return of 11.7K credits.

  • Performance Development Germany

    my bet: 15 - 35% (1K, return: 4K)

  • Performance Development China

    my bet: 25 - 50% (1K, return: 3.2K)

  • Performance Development USA

    my bet: 15 - 30% (1K, return: 4.5K)

Number of systems using GPUs

To estimate how many systems will rely on GPUs, I didn't spend much time studying the evolution across previous editions of the Top500 list. Although GPUs have been around for quite some time, only recently have they been actively used in supercomputers for general-purpose computing instead of high-quality graphics. The June 2011 edition of the list featured only 14 systems (partially) powered by GPUs, as far as I can tell from the raw data available on the Top500 website.

In my view, GPU-powered systems won't take over the Top500 list just yet. GPUs are still an add-on to the more traditional compute power delivered by CPUs, are not easy to use because they require specialized knowledge (e.g. about data locality and languages like CUDA, OpenCL, ...), and are only applicable to certain types of applications.

Therefore, I picked an interval on the lower end of the spectrum, with a maximum of 40 systems using GPUs. I highly doubt the actual value will go over 25, but I opted for a safe bet on this one.

  • my bet: 10 - 40 (1K, return: 9.3K)

Replacement Rate

Predicting the replacement rate involves estimating how many systems will be replaced by new ones, and thus how many systems from the June 2011 edition will drop out of the list. Again, historical data showed that this is fairly hard to predict.

On average, about 45% of systems got replaced over the past couple of years, but there's significant variation between editions of the list (from 29% up to 60%). Therefore, again, a fairly safe bet with a wide interval, and a fairly low return as a result.

  • my bet: 25 - 50% (1K, return: 3.3K)

Total Sum of RMax Value

One feature that does show a strong trend is the total sum of Rmax over the entire list. The June 2011 edition reached a total of 58.93 petaflops, and on average an increase of about 35% between editions is observed.

I adjusted my prediction slightly downward though, because the highest performing system on the last edition was responsible for 14% of the total Rmax. Since I don't expect the top supercomputer to change in November, I anticipate the increase in total Rmax will be somewhat lower this time.

  • my bet: 64 - 70 PFLOPS (1.5K, return: >5.1K)
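
The reasoning can be spelled out with the figures from the text:

```python
# Projection for the total Rmax; figures are taken from the text above.
prev_total = 58.93                 # PFLOPS, June 2011 total
trend = prev_total * 1.35          # ~79.6 PFLOPS if the 35% trend held
implied_max = 70 / prev_total - 1  # my bet's upper bound implies ~19% growth
print(round(trend, 1), round(implied_max, 2))
```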

Entry Level

The last feature to predict is the entry-level Rmax, in other words the minimal LINPACK performance a system needs to deliver to make it into the Top500 list.
This feature also shows a fairly strong trend over time: on average, an increase of about 34% between editions is observed.

The previous edition had an entry-level Rmax of 40.19 teraflops, and because the increase has been slightly lower in the most recent editions of the list, I again adjusted my prediction a bit downward. The interval is fairly small, which is a risk, but one with a potentially large return.

  • my bet: 45 - 50 TFLOPS (1K, return: 9.3K)
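
Checked against the trend with the figures above, the 45 - 50 interval indeed sits below the historical extrapolation, reflecting the deliberate downward adjustment:

```python
# Entry-level projection; figures are taken from the text above.
prev_entry = 40.19             # TFLOPS, June 2011 entry-level Rmax
projected = prev_entry * 1.34  # the average 34% growth would suggest ~53.9
print(round(projected, 1))
```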


Whether my predictions turn out to be right or not, figuring out how to make decent predictions for the next Top500 list was fun. The competitive aspect makes it more interesting, and overall it's a cool experiment to try and see how predictable the list really is.

The current rankings (with me on top at the time of writing) are not really indicative of how the final rankings will look. The net worth used to rank competitors only measures how well your predictions match those of others. Just because a large share of users predicts a feature in a particular way doesn't mean the actual value will match those predictions. The creators of the competition hope that "the wisdom of the masses" will make the predictions match the actual values closely. Let's see how that turns out...

How we almost won VPW-2011, using only Haskell

Last week, I competed in this year's Flemish Programming Contest (VPW for short in Flemish/Dutch). After attending the first VPW edition in Leuven in 2009 as an impartial observer and co-organizing VPW-2010 in Ghent, I wanted to experience the contest from the competitors' perspective.

First, a brief description of the contest: each team of at most 3 members is given 5 assignments of varying difficulty, and needs to solve as many as it can in just 3 hours. Naturally, the team that solves the most assignments wins. Teams that correctly solve an equal number of problems are ranked by summing, per solved problem, the time elapsed since the start of the contest; a penalty of 20 minutes is added for each incorrect submission, and a bonus deduction of 60 minutes is granted to teams that solve one particular problem sufficiently efficiently. Important side notes: each team is only allowed to use a single laptop/keyboard/mouse, teams are not allowed to access the internet during the contest, and solutions are tested on undisclosed input sets.
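
The ranking rules can be summarized as a small scoring function. This is just my reading of the rules sketched above, not the contest's official jury code:

```python
# Sketch of the VPW ranking rules as I understand them; illustrative
# code, not the contest's official scoring implementation.
def team_rank_key(solve_times, wrong_submissions, bonus=False):
    # solve_times: minutes since the contest start for each solved problem
    # wrong_submissions: incorrect submissions (20-minute penalty each)
    # bonus: True if the special problem was solved efficiently (60-minute deduction)
    time = sum(solve_times) + 20 * wrong_submissions - (60 if bonus else 0)
    # More solved problems wins; ties are broken by less accumulated time.
    return (-len(solve_times), time)

teams = {"A": team_rank_key([30, 90], wrong_submissions=1),
         "B": team_rank_key([40, 95], wrong_submissions=0)}
print(sorted(teams, key=teams.get))   # ['B', 'A']
```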

To make things more interesting (and perhaps harder), I picked Haskell as the sole language my team would use to implement our solutions. After finding two like-minded teammates in the GhentFPG community, we chose "Avoid Success At All Costs" (the Haskell motto) as our team name and started prepping for the contest.

From second-hand experience, I knew that it's a big plus to have a set of functions handy to process input and write out answers in a structured way, to implement some standard programming paradigms (backtracking, binary search, etc.) and to handle 'special' data structures like matrices. Since only the Haskell Platform would be available on the contest servers, we put together a template containing a whole bunch of functions that might come in useful during the contest. We also held a couple of training sessions in which we mimicked the contest setting as closely as possible, to figure out which tactics to use, decide who would sit behind the single available laptop (that turned out to be me) and who would draft the algorithms for the hardest problems. A last-minute dropout of one of the team members caused some minor headaches, but we found a very worthy replacement in the end, i.e. a guy who co-organized VPW-2009.

I was fairly confident that we would perform well, given that our team was well prepared and that two of us had been closely involved in designing assignments and/or organizing previous contests. However, it turned out I grossly underestimated the impact of the contest setting. Working with people you're not very familiar with, on a single laptop, under the pressure of time and competition, was harder than I anticipated.

We finished 6th out of 24 teams in our category, solving two of the five assignments correctly in the 3-hour time span we were given. We felt we were pretty close to solving a third one, but didn't manage to weed out the bugs before the end of the contest. After seeing the final ranking (see screenshot), we realized we were really close to actually winning: if we had managed to solve the third assignment in time, we might have taken 1st place, because we had very little penalty time compared to the only team that solved three assignments. The morning after the contest, I finished the assignment we had been working on in just 10 minutes on the train ride to work, which makes things even more painful.

Nevertheless, we had fun. The only way we could have been better prepared for such a contest was by gaining experience through competing, so we had no regrets. Using another language wouldn't have changed much, because the emphasis was on the algorithms to solve the assignments rather than on the code you write. Haskell is a very expressive language, which is a significant advantage under time pressure, and its type system weeds out stupid bugs early, which is again a win in terms of time. Having only one laptop available for 3 people is a huge bottleneck, but something you can learn to deal with by working out assignments thoroughly on paper (something you usually don't do when you have a computer handy).

I haven't decided yet what I'll do next year. Maybe I'll compete again, and if I do, I'll again only use Haskell. Another option is to rejoin, at the informal request of some jury members, the team that designs and selects the assignments (which is more work but easier than actually competing, in my opinion). Both options are interesting, so I'll refrain from deciding for a while longer and enjoy the good feeling I retain from competing in VPW-2011.

Visiting CERN

Last week, I was visiting the CERN site near Geneva, Switzerland for work. I was attending a workshop on Quattor, a powerful tool for automating the system configuration of large numbers of systems. Learning Quattor is a necessity, because the UGent HPC team I'm part of relies on it for managing roughly half a thousand systems today, and will be relying on it for many more in the upcoming years.

The CERN site lives and breathes science, you could just feel it when strolling between the various buildings. There was a very professional feel to it all, with fairly strict access control to the site, very well occupied meeting rooms and monitoring screens with live updates on the Large Hadron Collider (LHC) operation status in pretty much every building. Having lunch in one of the large on-site restaurants only added to the feeling that working at CERN must be a dream come true for a geek/scientist.

Unfortunately, we had little time to visit many of the really interesting things to see there. We had a quick glance at the massive Tier 0 data center, took the visitors exhibition tour in the giant wooden globe across the street and walked through the "Microcosm" exhibition, where we saw a rather old but very interesting movie clip on the discovery of the W boson back in the 80s.

A guy taking part in the Quattor workshop was able to explain to me the basic goal of the LHC and its detectors. The whole deal is to find the Higgs boson, also called the God particle. The way I understood it, the Higgs boson is a really heavy particle that cannot exist without huge amounts of energy available to sustain its existence. It is roughly known how heavy this particle should be (if it exists), and thus also how much energy is needed to keep it alive, if you will. The LHC will be able to deliver this required energy, and should thus make it possible to observe the Higgs boson.

Here's the interesting part: if it is not observed, not even when the LHC reaches its maximum capacity in the next couple of years, then the theoretical model of physics that has been widely accepted over the last couple of decades is wrong. That would mean most of the recent research in physics would have to be questioned all over again. Just imagine the impact that would have.

However, there's a catch: observing the particle is not as easy as it sounds. The huge detectors built for LHC experiments like ATLAS and CMS generate enormous amounts of data. I was told the noise-to-signal ratio is a couple of billion to one, meaning that a huge number of observations needs to be made to get one potentially interesting one. Even after filtering out the trivially uninteresting observations, the amount of data that needs to be processed is enormous. Hence the need for a huge data center like the Tier 0 on the CERN site, which is assisted by eleven Tier 1s and hundreds of Tier 2s all over the world.

Even though we didn't get to see the LHC up close, visiting CERN has been a very interesting experience. I hope I'll be able to return in the near future, and learn more about it all while scientific history is being made.

Back to the future: Haskell

(Before you ask: yes, I'm growing a beard, and yes, I'm using the lack of a Belgian government 250+ days after the elections as a lousy excuse.)

The last couple of months, I've been trying to get back to an old love of mine: Haskell. I hadn't really used it for years until recently, but I'm planning to change that in the near future. I'll briefly outline my world-shocking (well, almost) plans here. But first, a bit of history...

Back in 2005, I used Haskell for my Master's thesis, entitled "Modeling and implementing a ray tracer using functional languages", which resulted in HRay. Apart from a couple of minor hiccups, I haven't really done anything serious with Haskell since. It's not that I didn't want to; I just didn't find the time or the opportunity for it during my PhD. The tools I used for my research usually needed to chew through large amounts of data quickly, and I resorted to C for that, because I didn't have enough experience with Haskell to produce tools that are competitive performance-wise with ones written in C.

During my PhD I promised myself I would get back to Haskell once I finished, and so I did.

I took my first steps back to Haskell during the Google AI Challenge, by putting together a bot in Haskell (code here) that played the contest version of Galcon against other bots. I relied on jaspervdj's Haskell starter package for the bare-metal stuff. The bot finished in 497th place (6th Belgian, 6th Haskeller), a nice result with just over 4600 bots competing.

During BelHac, the first Belgian Haskell hackathon, I had some discussions about starting an effort for a new Haskell benchmark suite, i.e. one worthy of that name, and I also made some minor patches to HRay, which I had ignored for too long. I also met a couple of infamous Haskell hackers, including Don Stewart a.k.a. dons and Duncan Coutts, who helped me with some GUI-related stuff for HRay back in 2005. And last but not least, I won a copy of Real World Haskell (RWH), because I submitted stuff to Hackage during the hackathon and won the lottery draw. That earned me the right to be in a picture together with dons, see below (I'm on the left, dons is right next to me \o/). He even signed my copy of RWH, how cool is that!

Since BelHac, my hands have been itching to use Haskell for all sorts of things. My sysadmin job doesn't really allow using Haskell at work (I mostly use Python there), and finding time for side projects isn't easy since my wife and 8-month-old son are also competing for attention (and frankly, they are winning hands down over anything else). Nevertheless, I do have plans for Haskell in the near future.

First of all, I will be competing in the Flemish Programming Contest next month, and would like to do so using only Haskell. I found two team mates in the GhentFPG community, and we've been preparing for the contest for a couple of weeks now. We're hoping to blow the competition away by exploiting some of the strengths of Haskell, i.e. fast, obvious-bug-free programming, and chose the Haskell motto "Avoid Success At All Costs" as team name.

Something I would like to look into is using genetic programming to evolve Haskell programs. Recently, a Haskell library for genetic programming was announced on the Haskell mailing lists: genprog. I played with it for a while, and found the concept of evolving expressions that evaluate as closely as possible to a specified value very interesting. Several problems come up when trying to evolve actual Haskell programs, and although I have no idea what I'm getting myself into, I'd love to dive in and see what I can come up with. One idea that might turn out useful is to somehow use Hoogle to figure out how two existing Haskell programs can be recombined into two others (the crossover operation in genetic algorithms).
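
To give a flavor of the idea, here's a toy sketch in Python (not how genprog actually works; the real library operates on typed Haskell expression trees): a minimal evolutionary loop that pushes small encoded arithmetic expressions toward a target value.

```python
import random

# Toy sketch of the genetic-programming idea: evolve little encoded
# expressions toward a target value. Not genprog's actual approach.
random.seed(0)
TARGET = 42

def value(genes):
    a, b, c, d = genes            # genes encode the expression a + b*c - d
    return a + b * c - d

def fitness(genes):
    return abs(value(genes) - TARGET)   # lower is better

def crossover(p1, p2):            # combine two parents at a random cut point
    cut = random.randrange(1, 4)
    return p1[:cut] + p2[cut:]

def mutate(genes):                # randomly replace one gene
    g = list(genes)
    g[random.randrange(4)] = random.randint(1, 9)
    return g

pop = [[random.randint(1, 9) for _ in range(4)] for _ in range(30)]
initial_best = min(fitness(g) for g in pop)
for _ in range(50):
    pop.sort(key=fitness)
    parents = pop[:10]            # elitism: the best individuals survive
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
pop.sort(key=fitness)
final_best = fitness(pop[0])
print(final_best <= initial_best)   # True: elitism never loses the best
```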

Another Haskell-related project I find interesting is looking into the low-level behavior of Haskell programs and comparing it to that of programs written in procedural languages. My attention was drawn to this after reading a tweet by dons, which pointed to excellent slides by David Peixotto on exactly this topic. David compared the dynamic instruction mix of Haskell programs with that of programs written in C/C++, and found them to be remarkably different. Using MICA, a tool I implemented during my PhD for collecting so-called microarchitecture-independent workload characteristics, it would be interesting to compare not only the instruction mix but also other aspects of low-level program behavior, e.g. the spatial and temporal locality of memory accesses, or the amount of instruction-level parallelism (ILP). I believe that kind of research could yield very interesting insights into how Haskell programs differ from programs in other languages, and might help improve code generation for Haskell.

To exascale, or not to exascale?

A couple of days ago a Slashdot post entitled "Supercomputer Advancement Slows?" caught my attention. It concerns an IEEE Spectrum article on Next-Generation Supercomputers, which is well worth the read imho.

In short, the article mentions various reasons why supercomputers won't break the exaflop (1,000,000,000,000,000,000 operations per second) barrier anytime soon. The major concerns are well known: power usage, cooling, cost, physical footprint, etc. Besides these, the article also touches on the degree of utilization (actual vs. peak flops), the huge memory/storage requirements and the need for fault tolerance.

First of all, I'm not sure whether I fully agree with the conclusion of the article. Although some of the problems mentioned seem too hard to handle right now, we've seen some amazing things accomplished over the past couple of decades. Also, I kind of had a "the earth is flat" feeling when reading the article, if you know what I mean. I might be wrong, though. When YouTube started to gain momentum a couple of years ago, I felt it would never work out because people wouldn't want to put videos of themselves online for everyone to see. Boy, was I wrong...

Nevertheless, the reason I'm bringing up this Slashdot post is because I feel the author(s) of the IEEE Spectrum article missed something.

During my PhD I "wasted" a couple of centuries of computing time on the university HPC infrastructure myself. Also, in the last couple of months since I've become part of the HPC-team at Ghent University, I've worked with scientists from various fields who run experiments on our (currently rather modest) HPC infrastructure. This experience with HPC systems from the point of view of the end users made me realize there is another important aspect which contributes to a successful HPC infrastructure, or supercomputer (if you insist): the users. Yes, them.

Even if you have a massive beast of a system, with a state-of-the-art network and storage infrastructure, the best processors money can buy and no budget limitations to pay for operational cost, it's the users that will determine whether or not it all pays off. Users need to know what they are doing, how to efficiently use the system, and how to avoid doing downright useless stuff on it. You won't believe how much computing time gets wasted by typos in scripts or quick-lets-submit-this-because-its-Friday-afternoon-and-I-need-a-beer experiments.

Frankly, I have no idea how they handle this at really large supercomputer sites, like the ones in the Top500 list. I hope they only start the really big experiments after thorough preparation, testing smaller-scale stuff first and making damn sure they've done the best they can to optimize the experiments. Otherwise, why even bother breaking the exascale limit?

2010 - a year of change

I intend to focus on technical stuff on this blog, but I can't resist briefly mentioning how my life changed in 2010. Last year was the best year of my life, and I'm not exaggerating. That may sound cocky, but let me explain.

In 2010, I experienced things that opened my eyes and achieved things that changed my life for good. Some of those events are fairly minor, others are pretty big. A brief overview:

- (February 2010) Broke my ankle; the first time I broke something since I outgrew nappies. Now I know why I should try to avoid breaking anything ever again (it hurts like hell!).

- (first quarter 2010) Helped organize the Flemish Programming Contest 2010. A truly rewarding experience, on various levels. I'm glad we can look back on a successful event, with over 200 geeks competing in more than 80 teams. Planning to compete myself in the 2011 edition.

- (March 2010) First wedding anniversary. Not exactly a life-changer, but worth mentioning nonetheless. Hi honey!

- (June 2010) Applied for a real job for the first time in my life (HPC sysadmin @ Ghent University). Although I still needed to finish my PhD, I didn't want to miss the chance to at least try and score a job that really appeals to me. Got a job offer in August, and gladly accepted.

- (July 2010) Became father for the first time. I know it's a cliche, but this is the best day of my life so far. Seeing a new life being born, and realizing you helped create it is a stunning experience. Seeing my son grow up since then is possibly even more amazing.

- (August-September 2010) Defended my PhD dissertation successfully and obtained the degree "PhD in Engineering: Computer Science", after 5 years of hard work. I'm sure my wife is pretty happy that the non-stop keeping an eye on experiments and pulling all-nighters to meet paper deadlines is over and done with.

- (October 2010) Started my new job as a system administrator in the HPC team of Ghent University. This was kind of a leap in the dark, since I had never "maintained" systems (i.e. kept them running) beyond my own workstations and laptop. I haven't regretted the decision since though, as I'm learning new things almost every day, am helping out researchers from various scientific fields with their experiments and have interesting things to look forward to, i.e. a real supercomputer that will be supported by the team I'm part of. w00!

Next post will be less personal and more technical, I promise!

Back from the e-dead

Good news everyone! I'm back.

Bringing my blog back to life is one of my New Years resolutions for 2011.

Back soon with a better looking design and a real post (hopefully)!
