
Eurovision 2011 Voting Analysis Essay

With the release of the full data from the European Broadcasting Union, we are able to perform a statistical analysis of the voting in the Eurovision Song Contest 2014. For the second ‘Voting Insight’ article, we analyse the impact of running order in this year’s Eurovision Song Contest. Previously on ESC Insight we have compared how the performance position affects other contests with producer-led running orders, and before this year’s final in Denmark we gave our analysis of what we could infer from trends in the Danish-produced running order.

Now we are able to look at where songs placed on the scoreboard based on their running order. This analysis looks to see whether the EBU’s position on the running order holds up, and furthermore compares this impact for both juries and televotes. Ben Robertson investigates.

Read This First, Here’s The Summary Of Our Findings

By its nature, this article is going to be long, densely packed, and have quite a bit of maths going on. That said, the conclusions are worth highlighting clearly. If you don’t want to be spoiled, skip over this bullet-pointed summary.

  • Through the use of correlation analysis on the jury votes, the public votes, and the running order, we will investigate if the running order has a statistical impact on the results of the Eurovision Song Contest 2014.
  • Firstly, we will show that the running order on its own is not enough to ensure a victory for any one song at the Contest.
  • Secondly, we will find evidence that shows with statistical confidence that songs do benefit from a later running order slot.
  • Finally, we will examine how both televote patterns and jury patterns vary from the Semi Finals through to the Grand Final, and show that juries have a wider range of opinion than the televote and are more likely than the public to change their vote between the two shows.

The overall conclusion is this. The running order does have an impact on the Eurovision Song Contest. In general the best songs will continue to do well, no matter their running order location, but there are circumstances where the running order can decide the winning song.

This confirms previous studies both by ESC Insight and other academic research (including Page, L. and Page, K., 2008. A Field Study Of Biases In Sequential Performance Evaluation On The Idol Series. Journal of Economic Behavior and Organization; and Bruine de Bruin, W., 2005. Save the last dance for me: unwanted serial position effects in jury evaluations. Acta Psychologica 118).

How this affects the long-term health of the Song Contest, why this is important, and what can be done to address it, will be areas for discussion throughout the summer and into the 2015 season here on ESC Insight.

Okay, summary over. Time for the science part.

They knew the result an hour before you did (Eurovision 2014 Broadcast)

What Effect Do We Expect Running Order To Have?

Before conducting our analysis, let’s go back over the Song Contest history and look at some of the prevailing thinking about the impact of the running order.

Before Conchita Wurst’s victory for Austria this year, you have to go all the way back to 2004 to find a winner that sang outside the final ten songs in the running order; nobody has ever won from second place in the running order; and if you are looking to qualify from the Semi Finals you want to be in the second half of the draw rather than the first.

Going into a bit more detail, our previous analysis of earlier Eurovision Song Contests suggests a variance of around 5% imparted by the running order on the points score of a song, with a later start resulting in a better performance. That is not enough to change the entire Contest, but enough to offer a 20 to 30 point spread on a song’s ultimate performance.

That means ‘Euphoria‘ would have still won the 2012 Contest if Loreen had been drawn in the first half. And yes, that means that if Sweden had been singing second, the ‘curse’ would have been broken. It also means that in a close Song Contest the running order could change the result – the example here being 2003 with Turkey (singing 4th) scoring 167, Belgium (singing 22nd) scoring 165, and Russia (singing 11th) scoring 164 points. Switch Russia and Belgium, and t.A.T.u. would have beaten Sertab (but then if you switch Sertab to 11th and Russia to 22nd you probably would have got a tie-breaker).

Running order alone would not have been enough to topple Loreen from winning the contest in 2012

Returning to the 2014 Contest, our analysis will firstly test the ‘5% variance’ imparted by the running order, before going further to see how songs have benefited or lost out as they qualify through from the Semi Finals to the Final and have their running order position changed.

Our Methodology and Assumptions

With the full results and rankings available from the EBU for the 2014 Song Contest, we are able to analyse the full voting patterns. This enables us to look at the individual jury rankings, the overall country rankings, and the final results using Spearman’s Rank Correlation, a statistical calculation that measures how strongly two sets of rankings are related.

For example, if the correlation analysis returns +1, that implies a perfect positive correlation in the data. Comparing the running order to the result, a +1 would occur if the winning song performed last, the second placed song sang second-last, and so on, with the last placed song opening the Contest.

If the result of the correlation is -1, then the correlation is perfectly negative. In this case that would imply the song that won the Contest opened the show, the second placed song performed second, and so on through the order, with the song drawn last finishing last.

If the result of the analysis is 0, this implies that there is zero mathematical correlation between the two sets of data.

To determine whether a statistically significant correlation exists, we consult standard tables of critical values. These values are used to decide whether a correlation is significant, and how likely it would be to occur by chance alone. As you might expect, the more data points are correlated together, the less perfect the ordering needs to be to show a statistically significant trend.

We also use means (averages) and standard deviations (a measure of the spread of the data), as in our first Voting Insights 2014 article, for further comparison of the data. We also make use, where appropriate, of Pearson’s Correlation. Pearson’s produces a similar number to Spearman’s Rank, but uses the full range of the data rather than just the ranking. We will come back to this later in the analysis as required.
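To make these two measures concrete, here is a minimal sketch in Python using SciPy. The six-song show and its scores below are invented purely for illustration; they are not the 2014 data.

```python
# A minimal sketch of the two correlation measures used in this article.
# The six-song show below is invented for illustration; it is not 2014 data.
from scipy.stats import pearsonr, spearmanr

running_order = [1, 2, 3, 4, 5, 6]         # 1 = opened the show
televote_points = [12, 8, 25, 31, 54, 70]  # higher = scored better

# Spearman's Rank compares only the orderings of the two lists:
# +1 means the later a song performed, the better it scored, without exception;
# -1 means the opposite; 0 means no relationship between the rankings.
rho, p_rank = spearmanr(running_order, televote_points)

# Pearson's uses the full range of the scores, not just their ranks, so a
# runaway favourite pulls this figure around more than it does Spearman's.
r, p_linear = pearsonr(running_order, televote_points)

print(f"Spearman's rho = {rho:.3f} (p = {p_rank:.3f})")
print(f"Pearson's r    = {r:.3f} (p = {p_linear:.3f})")
```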

Can The Running Order Alone Guarantee Victory?

First of all, let’s look for any strong bias between the running order and the rankings made by the jury and the public in both the Semi-Finals and the Grand Final.

We take the individual rankings from each jury member and from each country’s accumulated televote, convert them into a ranking from 1 to 25 (or 26 for non-qualifying countries), give the highest points to the latest running order positions, and then perform Spearman’s Rank Correlation on these results.
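As a sketch of that per-voter procedure (with invented voters and rankings rather than the released 2014 data), each ranking is converted into points so that a positive correlation means later slots do better, and the individual correlations are then averaged:

```python
# A minimal sketch of the per-voter calculation behind the table below.
# Rankings and voter labels are invented placeholders, not the 2014 data.
import numpy as np
from scipy.stats import spearmanr

running_order = [1, 2, 3, 4, 5, 6]                 # 1 = opened the show
rankings_by_voter = {                              # 1 = that voter's favourite
    "Juror A1":   [3, 6, 1, 4, 2, 5],
    "Juror A2":   [4, 5, 2, 3, 1, 6],
    "Televote B": [6, 5, 4, 3, 2, 1],
}

n_songs = len(running_order)
correlations = []
for voter, ranking in rankings_by_voter.items():
    # Turn rank (1 = best) into points (higher = better), so a positive rho
    # means that songs performing later in the show scored better.
    points = [n_songs + 1 - rank for rank in ranking]
    rho, _ = spearmanr(running_order, points)
    correlations.append(rho)

print("mean correlation:  ", round(np.mean(correlations), 3))
print("standard deviation:", round(np.std(correlations, ddof=1), 3))
```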

| | Average of all televote Spearman’s Rank Correlations | Average of all jury Spearman’s Rank Correlations | Standard deviation of televote correlations | Standard deviation of jury correlations |
|---|---|---|---|---|
| First Semi Final | 0.119 | -0.089 | 0.197 | 0.244 |
| Second Semi Final | 0.271 | -0.017 | 0.261 | 0.325 |
| Grand Final | -0.014 | 0.028 | 0.167 | 0.201 |

At this stage we can note the higher standard deviation for the juries compared to the televotes. This backs up our previous study’s finding that juries are less consistent than televoters at following trends and patterns. The difference here between the juries and the televoters (around twenty per cent) is in keeping with the figures for spread we found before.

At first glance, these results show a split between juries and televoters, suggesting that in the Eurovision Semi Finals a later slot in the running order is better for gaining televotes. We attribute the disparity between the jury scores and the televotes mostly to the styles of the songs in question. Malta was a much stronger jury favourite than its televote placing, which skews our results significantly: Malta opened the show and therefore, per our hypothesis, had the worst running order position.

However, for any result to be statistically significant we would need to see stronger correlations than those seen here. The Second Semi Final shows a positive televote correlation of 0.271, but for us to be 90% confident that such a result did not occur by chance alone, the correlation would need to reach at least 0.35.

Because of the lack of correlation, we can argue that the running order alone is not a decisive factor in the Eurovision scoring, but as we are about to show, the running order does have an impact.

The running order on its own is not enough to ensure a victory (picture: Andreas Putting / EBU)

Let Us Now Consider The Impact Of The Producer-Led Draw

Before jumping to a conclusion about the lack of correlation above, we need to consider the running order itself. Simply put, it is not completely random, and nor is it completely chosen by the host broadcaster. The Eurovision songs are drawn at each stage into the first half or second half; and from these allocations the running order is decided by the host broadcaster. As it so happened both pre-contest favourites, as well as our eventual winner, were drawn in the first half of both their respective Semi-Finals and the Grand Final.

As we at ESC Insight commented before the Final, the running order produced by DR was worth analysing in its own right. What happens if we split up each section of the running order, and treat it not as three running orders, but as six?

Under these circumstances, we can perform the same correlation analysis as above, but on the first and second halves of each show, to test for any bias where the running order for a block is under complete producer control.

We will perform this part of the analysis using the Pearson’s Correlation technique to account for the full range of scores from the juries and the televotes in each section of the contest, rather than the relative ranks of songs in each half of the Contest.
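As a sketch of this half-by-half treatment (with invented running order positions and scores rather than the 2014 figures), we split one show at its midpoint and run Pearson’s Correlation separately on each block:

```python
# A minimal sketch of the half-show treatment described above. The
# (running order position, televote score) pairs are invented placeholders.
from scipy.stats import pearsonr

show = [
    (1, 34), (2, 12), (3, 56), (4, 89), (5, 23),
    (6, 47), (7, 15), (8, 72), (9, 38), (10, 91),
]

midpoint = len(show) // 2
for label, block in (("1st half", show[:midpoint]), ("2nd half", show[midpoint:])):
    positions = [position for position, _ in block]
    scores = [score for _, score in block]
    # Pearson's uses the actual scores, so one or two strong favourites can
    # dominate a block of only a handful of songs, as happened in Semi Final One.
    r, p = pearsonr(positions, scores)
    print(f"{label}: r = {r:+.3f} (p = {p:.3f})")
```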

| | Average of all televote Pearson’s Correlations | Average of all jury Pearson’s Correlations | Standard deviation of televote correlations | Standard deviation of jury correlations |
|---|---|---|---|---|
| 1st half Semi Final 1 | -0.331 | 0.006 | 0.327 | 0.342 |
| 2nd half Semi Final 1 | 0.273 | 0.263 | 0.214 | 0.386 |
| 1st half Semi Final 2 | 0.351 | -0.083 | 0.271 | 0.332 |
| 2nd half Semi Final 2 | 0.314 | -0.009 | 0.35 | 0.388 |
| 1st half Grand Final | 0.259 | 0.076 | 0.221 | 0.289 |
| 2nd half Grand Final | 0.198 | 0.155 | 0.191 | 0.288 |

The first half of the first Semi Final is an anomaly: Armenia (drawn first) and Sweden (drawn fourth) had strong scoring potential regardless of their running order positions, which is enough to drown out any smaller trends and pushes that block strongly negative. Excepting that block, we can see a weak but visible trend of the running order influencing the final position of songs. The jury figures may be more spread out, but they also suggest that juries are less affected by running order bias than televoters.

Overall, the running order produced by DR shows a slight increase in running order bias, suggesting that where the producers can control the running order, that control amplifies any bias. Note in particular that we found a slightly negative correlation for the Grand Final as a whole, but splitting it into two halves shows that each half carries a weak but clear running order bias based on the producers’ choices.

This is in common with other producer-led running orders we have examined previously.

We must remember that the quality of the songs and the performances themselves, as well as any cultural links exhibited in Eurovision voting, play a major role in the outcome. However, these results still show that the running order does contribute towards the ultimate performance of a song in the Eurovision Song Contest.

Songs do benefit from a later running order slot.

How To Solve A Problem Like Removing The Songs From The Analysis?

Twenty of the twenty-six songs in the Grand Final had to qualify through a Semi Final. With the full spread of data, we can see how the voting patterns for these songs changed as they progressed from the Semi Final to the Grand Final.

The question we test is simply whether or not a song drawn later in the show moves up in the relative rankings above other songs that qualified from the same Semi Final.

We are aware of the potential flaws in this approach: different people could be watching and voting at home, there is a wider pool of countries able to vote, and the performances themselves may vary in quality from night to night. For the purposes of this investigation we assume these factors have no bearing on the result, and we would point to the professionalism of everyone who took to the Eurovision stage this year.

For an example of how this works, let’s assume that the televote in country X places country Y fifth among the ten songs that qualified from its Semi Final. If country Y then finishes fourth in country X’s televote among those same ten songs in the Grand Final, we give country Y a score of +1, indicating it gained one place in country X’s televote. If it had lost two places relative to the other qualifiers from its Semi Final, it would score -2.

We do this for each televote ranking and each jury member’s ranking. We then compare the sum of all these changes to the change in running order position, expressed as a percentage. For example, Sweden was drawn fourth in its Semi Final and thirteenth in the Grand Final. The change in Sweden’s running order was as follows:

13/26 – 4/16 = 0.25

This means a 25% benefit to Sweden’s running order position, assuming our hypothesis about the running order is correct. We ignore any effects from the particular songs performing around each entry, and take the purely mathematical approach that later in the running order is better.
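Here is a minimal sketch of the two quantities being compared, using Sweden’s running order figures quoted above as the worked example; the rank-change illustration below it is hypothetical.

```python
# A minimal sketch of the two quantities compared in this section.

def running_order_benefit(semi_slot, semi_size, final_slot, final_size=26):
    """Relative change in running order position; positive = a later,
    and therefore (by our hypothesis) better, slot in the Grand Final."""
    return final_slot / final_size - semi_slot / semi_size

# Sweden: drawn 4th of 16 in Semi Final One, 13th of 26 in the Grand Final.
print(running_order_benefit(semi_slot=4, semi_size=16, final_slot=13))  # 0.25

def rank_change(semi_rank, final_rank):
    """Places gained (+) or lost (-) among the ten qualifiers from the same
    Semi Final, for one televote or one juror's ranking."""
    return semi_rank - final_rank

# A song ranked 5th of the ten qualifiers in the Semi Final and 4th of the
# same ten in the Grand Final gains +1 from that voter; these per-voter
# changes are then summed across every televote and every juror.
print(rank_change(semi_rank=5, final_rank=4))  # +1
```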

Here is our table of running order winners and losers as we go from each Semi Final qualifier to the Grand Final.

| Country (1st Semi Final) | Percentage change in running order | Rank of running order benefit/loss | Country (2nd Semi Final) | Percentage change in running order | Rank of running order benefit/loss |
|---|---|---|---|---|---|
| Armenia | +20.7 % | 3 | Malta | +77.9 % | 1 |
| Sweden | +25 % | 1 | Norway | -0.8 % | 5 |
| Iceland | -15.9 % | 6 | Poland | +1.3 % | 4 |
| Russia | +13.9 % | 4 | Austria | +2.3 % | 3 |
| Azerbaijan | -38.5 % | 8 | Finland | +15.9 % | 2 |
| Ukraine | -52.4 % | 9 | Belarus | -59.0 % | 9 |
| San Marino | +21.2 % | 2 | Switzerland | -3.1 % | 6 |
| The Netherlands | +4.8 % | 5 | Greece | -48.2 % | 8 |
| Montenegro | -63.0 % | 10 | Slovenia | -27.9 % | 7 |
| Hungary | -19.2 % | 7 | Romania | -73.1 % | 10 |

This table shows which countries saw the biggest change in running order position as they went from the Semi Final to the Grand Final. We would therefore be looking especially closely for possible drops from Montenegro, Ukraine and Azerbaijan out of Semi Final One, and from Belarus, Greece and Romania out of Semi Final Two, with Malta a strong beneficiary.

To see if these trends come true, we compare these figures, both as percentages (for Pearson’s Correlation) and as ranks (for Spearman’s Rank Correlation), to assess whether the running order alters the results of each televote and jury.

Here is a table showing, for the songs from Semi Finals One and Two, how much they gained or lost across the televotes as they progressed to the Grand Final. Note that countries whose televote did not meet the required threshold in either the Semi Final or the Grand Final are not included.

| Country (1st Semi Final) | Increase or decrease in televote rank from each country | Rank of televote change | Country (2nd Semi Final) | Increase or decrease in televote rank from each country | Rank of televote change |
|---|---|---|---|---|---|
| Armenia | +11 | 2 | Malta | +14 | 2 |
| Sweden | +9 | 3 | Norway | +25 | 1 |
| Iceland | +4 | 4 | Poland | +6 | 4 |
| Russia | +21 | 1 | Austria | +5 | 5 |
| Azerbaijan | +1 | 5 | Finland | -4 | 7 |
| Ukraine | -2 | 6 | Belarus | -16 | 9 |
| San Marino | -7 | 8 | Switzerland | +7 | 3 |
| The Netherlands | -5 | 7 | Greece | -3 | 6 |
| Montenegro | -14 | 9 | Slovenia | -13 | 8 |
| Hungary | -18 | 10 | Romania | -21 | 10 |

When we compare the data here with the running order benefit or loss table we are able to calculate the correlation over the entire televote.

| | Pearson’s Correlation for Televote to Running Order Difference | Spearman’s Rank Correlation for Televote to Running Order Difference |
|---|---|---|
| First Semi Final | 0.523 | 0.479 |
| Second Semi Final | 0.718 | 0.544 |

The results here show a strong positive correlation between the change in running order and the change in relative rankings of songs that qualified from the same Semi Final. The average of the Spearman’s Rank values (0.512) is, for a data set of ten values, clearly above the 0.4424 required for 90% certainty that this trend is not random. The result is statistically significant, and both Semi Final results meet this requirement individually. Pearson’s Correlation, which uses the full spread of the percentage benefit/loss figures, shows an even stronger positive relationship.
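As a rough cross-check on that 0.4424 figure, which comes from published critical value tables, a permutation sketch like the one below estimates the one-sided 90 per cent point of Spearman’s rho for ten items; the seed and trial count are arbitrary choices.

```python
# A minimal sketch estimating the 90 % significance threshold for Spearman's
# rho with ten items by simulation, as a cross-check on the published tables.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2014)
n_songs = 10            # ten qualifiers from each Semi Final
trials = 20_000

reference = np.arange(n_songs)
null_rhos = np.empty(trials)
for i in range(trials):
    # Correlate a fixed ordering against a completely random ordering.
    rho, _ = spearmanr(reference, rng.permutation(n_songs))
    null_rhos[i] = rho

# The correlation that a purely random ordering beats only 10 % of the time;
# this lands close to the tabulated value of roughly 0.44 used above.
print(round(np.quantile(null_rhos, 0.90), 3))
```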

It is not perfect, and we note that Norway’s ‘Silent Storm’ benefitted the most from televoting despite not receiving a notably different draw in the final.

However, it is important to remember what we are suggesting with this conclusion. The running order does not, as we investigated above, impart a significant bias to the Contest as a whole, but it does make a measurable difference to the outcome. For Norway, their +25 increase in rankings, spread across the 16 countries with televotes in both the Second Semi Final and the Grand Final, works out at an average boost of around +1.6 places per televote relative to the other countries from their Semi Final.

It is unlikely these kinds of figures alone would win or lose anybody the Song Contest, but they are enough to make a noticeable difference, and in a closely fought Contest the points margin of victory could easily be less than the points margin gained through a better running order position.

Want to see running order bias in action? Russia, Turkey, and Belgium, in 2003.

Are The Juries Affected By The Running Order?

You will recall that the EBU brought the juries back in part to diminish the impact of the public vote. Let us perform the same calculations on the jury rankings (Georgia’s results are not taken into consideration because the jury votes were disqualified in the Grand Final).

Firstly, the gain or loss of a country’s ranking in the jury results.

| Country (1st Semi Final) | Increase or decrease in individual jury ranks from each country | Rank of jury vote change | Country (2nd Semi Final) | Increase or decrease in individual jury ranks from each country | Rank of jury vote change |
|---|---|---|---|---|---|
| Armenia | -15 | 6 | Malta | -10 | 7 |
| Sweden | +41 | 1 | Norway | +38 | 1 |
| Iceland | +5 | 5 | Poland | +16 | 4 |
| Russia | -31 | 10 | Austria | +27 | 2 |
| Azerbaijan | +10 | 4 | Finland | +7 | 5 |
| Ukraine | -17 | 7 | Belarus | -22 | 9 |
| San Marino | +14 | 3 | Switzerland | +3 | 6 |
| The Netherlands | -19 | 8 | Greece | -11 | 8 |
| Montenegro | -26 | 9 | Slovenia | +20 | 3 |
| Hungary | +28 | 2 | Romania | -68 | 10 |

…and the correlation…

| | Pearson’s Correlation for Jury Rankings to Running Order Difference | Spearman’s Rank Correlation for Jury Rankings to Running Order Difference |
|---|---|---|
| First Semi Final | 0.270 | 0.382 |
| Second Semi Final | 0.476 | 0.527 |

The average of the Spearman’s Rank values here is 0.455, which is still above the 0.4424 threshold and therefore gives us statistical evidence that the juries, just like the televoters, are susceptible to the same kind of running order bias.

The net changes in this data average out smaller than the corresponding televote changes. Our biggest mover is Romania, losing 68 ranking places relative to the other songs from the second Semi Final. However, this total was accumulated across eighty-five jury members, taking the average loss per jury member down to only 0.8 places.

Nevertheless, even when we know the people voting are the same, and that they have listened to the songs on more than one occasion, some running order bias still shines through for both juries and televoters. Even so, some of the trends in the data are polar opposites: the biggest televote gainer in Semi Final One was Russia, yet Russia lost the most ground with the jury groups. Armenia showed a similar pattern on a smaller scale as it progressed to the Friday and Saturday night performances.

The overall trend in the data, though, is statistically significant and suggests that the running order does have an impact on the voting in the Eurovision Song Contest. Not enough on its own to decide whether you win or lose, but enough for a small pool of points that may prove vital.

A Statistical Aside About Our Juries

The fact that the juries also exhibit running order bias may be a surprise to many. It was a surprise to me. I would have expected jury members to hold strong pre-formed judgments about the songs they like, and therefore to show little change in their results. The change may not be as dramatic as for televoters, but it is still significant.

If you look at the 180 jury members who voted in both a Semi Final and the Grand Final, you will find that 179 of them changed their relative rankings of the songs that qualified from the Semi Final they voted in. Something triggered them to change their minds.

The obvious answer would be that this is due to the performance quality on the night, especially with ‘vocal capacity’ and ‘the overall impression of the act’ being amongst the criteria that juries have to work to. However, the trends are less clear than we might expect. Norway was our main beneficiary of jury rankings progressing to the Grand Final (gaining thirty-eight places), yet twenty-three of the eighty-five jurors (27.1 %) still placed it in a lower relative position in the Grand Final than in the Semi Final. This is one example of many showing that the juries do not display a clear consensus.

The juries are also less able than televoters to follow the trends of their own previous voting. We run Spearman’s Rank Correlation between each voter’s ranking of the qualifiers in their Semi Final and their ranking of the same songs in the Grand Final, then average the results separately for the televotes and for the jury votes. This shows how well a song’s placing in the Semi Final predicts its placing in the Grand Final.
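Here is a minimal sketch of that consistency check, with invented rankings for two voters rather than the real 2014 data:

```python
# A minimal sketch of the Semi Final to Grand Final consistency check.
# Voter labels and rankings are invented placeholders, not the 2014 data.
import numpy as np
from scipy.stats import spearmanr

semi_final_ranks = {                  # each voter's order of the ten qualifiers
    "Televote A": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    "Juror B":    [2, 1, 5, 3, 4, 7, 6, 10, 8, 9],
}
grand_final_ranks = {                 # the same songs, ranked again on Saturday
    "Televote A": [1, 3, 2, 4, 5, 6, 8, 7, 9, 10],
    "Juror B":    [5, 1, 2, 4, 3, 9, 6, 8, 10, 7],
}

correlations = []
for voter in semi_final_ranks:
    rho, _ = spearmanr(semi_final_ranks[voter], grand_final_ranks[voter])
    correlations.append(rho)

# A mean close to +1 with a small spread means voters largely kept the same
# order between the two shows; low or negative values mean they re-ranked.
print("mean:              ", round(np.mean(correlations), 3))
print("standard deviation:", round(np.std(correlations, ddof=1), 3))
```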

| | Mean Spearman’s Rank Correlation of Semi Final televotes to Grand Final televotes | Standard deviation of the correlations |
|---|---|---|
| First Semi Final | 0.868 | 0.087 |
| Second Semi Final | 0.860 | 0.101 |

| | Mean Spearman’s Rank Correlation of Semi Final jury scores to Grand Final jury scores | Standard deviation of the correlations |
|---|---|---|
| First Semi Final | 0.744 | 0.231 |
| Second Semi Final | 0.673 | 0.209 |

As you would expect, we see very strong positive correlations showing that the televotes are similar between the Semi Final and the Grand Final. The effect is much weaker for the jury groups, and their results are more widely spread. Juries are therefore less predictable, even against their own votes from the previous show, than televoters, who follow steadier patterns in voting for their favourite entries.

The standard deviation is notably larger for the jury groups than for the televoting, suggesting the data is widely spread, with some jury members showing very low correlations between the two voting occasions. Four jury members, two of whom are jury chairpersons, showed either zero or negative correlation between their voting in the Semi Final and in the Grand Final. Mathematically speaking, how they voted in the Semi Final had no bearing on how they voted in the Final.

As an example, the lowest correlation recorded was -0.08. The biggest outlying vote here moved Greece from twelfth place in that juror’s Semi Final ranking (eighth among the songs that qualified from that Semi Final) to eighth in their Grand Final ranking (second highest among the qualifiers from Semi Final Two). This apparent boost was something many other jurors seemed to miss, as Greece was a net loser in jury votes heading from Semi Final Two into the Grand Final.

Freaky Fortune at Eurovision 2014 (picture: Andreas Putting)

This article is not the place to speculate on how and why these changes happened, but the fact that all but one jury member changed their mind about the songs between the two performances, and that some did so quite wildly, is a further sign of the random noise the jury vote adds to the final score read out on Saturday night.

It will be interesting to see whether this holds true in later years, as EBU rules now require jury members to wait at least three years before being invited onto a jury again, so next year’s jury members will be completely different from this year’s.

Drawing Conclusions About The Running Order

Our investigation of the actual voting data from the 2014 Song Contest has shown that the running order has a statistical impact on Eurovision. This backs up previous research into the topic. What we have also established directly for the first time is that both juries and televoters can be susceptible to the same kinds of bias caused by it.

The large spread of jury data and values, and their lack of consistency, adds an extra confusing dimension to this analysis.

The EBU should examine the options available to limit running order bias, perhaps by varying the running order shown to each jury member, as we have previously discussed as an option on ESC Insight. Further ideas from that discussion, such as providing training or increasing the number of voters on each jury, should also be investigated.

Based on the findings here, which show that juries are less consistent with each other, and even with themselves, than we would expect and wish from our industry experts, the issues surrounding jury voting are likely to remain a live debate.

No matter what running order Valentina had, it would not have been enough for San Marinese victory

Overall though, the big trends at the Song Contest are still apparent.

A good song is needed to do well, and voting traits between different countries are visible both positively and negatively. Running order is a small but significant factor in the final result, and could be the decisive factor in a close Contest. It may not have changed our winner this year, but if, as an example, San Marino and Latvia had swapped starting positions in Semi Final One, I would expect, based on the trends above, that Valentina might not have delighted the microstate with her qualification, and we might have eaten cake twice on Saturday night.

The running order has an impact on the Eurovision Song Contest, and it should be a factor that is closely and carefully noted by the European Broadcasting Union for future Contests.


Keep a look out in the near future for our third ‘Voting Insight’ which will examine the trends between the ages, gender and professions of our jury members and how they have voted.

If you find any mathematics published here to be incorrect please drop an email to Ben via ben@escinsight.com.  


About The Author: Ben Robertson

Ben Robertson focuses on hot issues across the continent as well as piling through the minefield of statistics Eurovision creates. Ben moved from the UK to Sweden in 2011 and is the Stockholm Co-ordinator of Melodifestivalklubben and a Bureau Member of OGAE International.


"Eurovision 2011" redirects here. For the Junior Contest, see Junior Eurovision Song Contest 2011.

The Eurovision Song Contest 2011 was the 56th edition of the annual Eurovision Song Contest. It took place in Düsseldorf, Germany, following Lena's win at the 2010 contest in Oslo, Norway with the song "Satellite". This was the first contest to take place outside the host nation's capital city since the 2004 contest in Istanbul. The event was held at the Esprit Arena, with semi-finals held on 10 and 12 May, and the final held on 14 May 2011.[2]

Forty-three countries participated in the contest,[3] with those returning including Austria, which last participated in 2007; Hungary, which last competed in 2009; and San Marino, which last participated in 2008. Italy also returned to the Contest, marking its first participation since 1997. No country withdrew from the contest.

The winner was Azerbaijan with the song "Running Scared" performed by Ell & Nikki. The runner-up was Italy, and Sweden finished in third place. Italy (2nd) and Germany (10th) were the only members of the "Big Five" to make it into the top 10, with the United Kingdom close behind in 11th place; France and Spain failed to make it into the top half of the leaderboard, coming 15th (82 points) and 23rd (50 points) respectively. 2010 hosts Norway were eliminated in the first semi-final. Azerbaijan obtained its first ever Eurovision victory since its debut in 2008. Azerbaijan won the viewer voting with Sweden in second place and Greece in third place. Italy won the jury voting, with Azerbaijan in second place and Denmark in third place. This was the first time since the juries were reintroduced alongside the televoting in 2009 that the winner did not place first in the jury voting.

The broadcast of the final won the Rose d'Or award for Best Live Event.[4]

Location

Following Lena's win at the 2010 contest with the song "Satellite", Germany became host nation for the 2011 edition.

Bidding phase

Twenty-three cities submitted official bids to the German broadcaster Norddeutscher Rundfunk (NDR) to be the host city for the 2011 contest.[5] Eight of these cities continued to show interest in hosting the event, including Berlin, Hamburg, Hanover, Gelsenkirchen,[6] Düsseldorf, Cologne, Frankfurt and Munich.[7] NDR announced on 21 August 2010 that four of those cities had officially applied to host the 2011 Contest: Berlin, Hamburg, Hanover, and Düsseldorf. A number of possible venues within these cities were identified.[8][9]


Media reports regarding host city

Berlin

Concerns were raised about Berlin's bid concept which consisted of an inflatable tent to be built on Tempelhof's hangar area. Decision makers at NDR reportedly doubted the venue's ability to provide advantageous acoustic conditions. Berlin's speaker Richard Meng neither confirmed nor denied that because, he stated, "secrecy about the bid concepts was promised to the NDR".[10]

Düsseldorf

On 24 September 2010, it was announced that Fortuna Düsseldorf football club had applied to the Deutsche Fußball Liga for permission to move its home matches to the Paul-Janes-Stadion if the Esprit Arena in Düsseldorf was awarded the 2011 Song Contest. This message indicated that talks with Düsseldorf to host the song contest in the Esprit Arena were already at an advanced stage.[11] The club later announced on 6 October 2010 that it had obtained permission to move its games if necessary.[12]

The Neue Ruhr Zeitung newspaper reported on 12 December 2010 that Fortuna Düsseldorf were to be moved to the Paul-Janes-Stadion due to the contest. Fortuna Düsseldorf's training venue next to the Esprit Arena would be equipped with mobile stands from a Swiss event construction specialist, Nussli Group, creating 20,000 extra seats.[13] This decision was made because the Arena Sportpark in Düsseldorf offered better logistics.

Hamburg

On 2 October 2010 the Hamburger Abendblatt newspaper announced that Hamburg would be unable to host the 2011 Song Contest, because the city could no longer fulfil the required financial conditions.[14]

Esprit Arena Düsseldorf

Further information on the host city: Düsseldorf

The Esprit Arena in Düsseldorf was announced by German broadcaster Norddeutscher Rundfunk (NDR) as the venue for the 2011 Eurovision Song Contest on 12 October 2010.[15][16] This was the first Eurovision Song Contest held in Germany since German reunification, with West Germany having previously hosted the contest in 1957[17] and 1983.[18] Germany was also the first member of the "Big Five" to host the Contest since the implementation of the rule in 2000 that permits the five largest contributors to the European Broadcasting Union (EBU) – Germany, France, the United Kingdom, Spain and Italy – to qualify automatically for the final alongside the previous year's winner.

The stadium was secured for a rental period of six weeks, in order to allow construction and dismantling work within the Esprit Arena to be carried out.[19] The stadium accommodated 38,000 spectators during the Eurovision Song Contest.[20] Düsseldorf offered 23,000 hotel beds, with 2,000 additional beds in the surrounding area and on ships on the River Rhine.[21]

Format

The four countries that were part of the Big Four, along with the host of the contest, automatically qualified for a place in the final. Since Germany was both a "Big Four" country and the host for the 2011 contest, there was a vacant spot in the final. At a Reference Group meeting in Belgrade it was decided that the existing rules would remain in place, and that the number of participants in the final would simply be lowered from twenty-five to twenty-four.[22] On 31 December 2010, the official participation list was published by the EBU, which stipulated that with the return of Italy to the contest, this nation would become a member of the "Big Five". This change gave Italy automatic qualification for the final, alongside France, Spain, the United Kingdom, and host nation Germany, restoring the number of participants in the final to twenty-five nations.[23]

On 30 August 2010 it was announced that Svante Stockselius, Executive Supervisor of the Eurovision Song Contest, would be leaving his position on 31 December 2010.[24] On 26 November 2010, EBU reported that Jon Ola Sand would be the new Executive Supervisor of the Eurovision Song Contest.[25]

Semi-final allocation draw

The draw to determine the semi-final running orders was held on 17 January 2011. All of the participating countries excluding the automatic finalists were split into six pots, based on the voting history of those countries in previous years. From these pots, half (or as close to half as was possible) competed in the first semi-final on 10 May 2011. The other half in that particular pot competed in the second semi-final on 12 May 2011. This draw doubled as an approximate running order, in order for the delegations from the countries to know when their rehearsals commenced. The draw also determined in which of the semi-finals the automatic finalists would be able to cast their votes.[26]

Israeli broadcaster IBA requested to take part in the second semi-final due to the Israeli Memorial Day, which was held during the first semi-final. German broadcaster NDR also requested that it be allowed to vote in the second semi-final for scheduling reasons.[26]

Graphic design

The design of the contest was built around the motto "Feel your heart beat", with the logo and on-screen graphics designed by Turquoise Branding.[27] The postcard introducing each performance included the logo in the colours of the performing country (e.g. the United Kingdom in red, white and blue); then a German place was shown in a toy-like view using tilt-shift photography, and a short story unfolded there, whose main characters were either people living in Germany or tourists from the performing country. The contest's motto, "Feel your heart beat", was then shown or spoken in the country's national or native language.[28] For example, in the first postcard shown (Poland's), the boyfriend drops a piece of paper. The camera then pans down to the paper, to show the Polish phrase "Poczuj bicie serca" handwritten on it. In the second postcard shown (Norway's), a mountain climber from Norway climbs to the top of a mountain and yells the Norwegian phrase "Kjenn ditt hjerte slå." Then the heart appeared once again, and the stage and the crowd could be seen, with heartbeat sounds and pink lights pulsating in rhythm with the heartbeat, before the performance started.

The main colours of the letterboxes were black and pink. The scoreboard showed a spokesperson from the country giving their votes on the right, while showing a table of results on the left. The large points (8, 10 and 12) were highlighted in pink, whilst the lower points, (1–7) were in purple.[29] This scoreboard design was used again the following year, with minor changes such as the large points appearing progressively larger in size compared to the lower points and the highlighted colours changed to match the 2012 theme, "Light your fire!"[30]

National host broadcaster

ARD, the European Broadcasting Union member that broadcasts the Eurovision Song Contest in Germany, is a joint organisation of Germany's regional public-service broadcasters. The ARD has 10 members. The venues under consideration were located in the areas of three different members: Berlin within the Rundfunk Berlin-Brandenburg (RBB) area, Hamburg and Hanover within the Norddeutscher Rundfunk (NDR) area, and Düsseldorf within the Westdeutscher Rundfunk (WDR) broadcasting area. While NDR had been responsible for the transmission of the Eurovision Song Contest in recent years when the final took place in other countries, the financial scope of the three broadcasters appeared to become a decisive factor in the application procedure for the 2011 Eurovision Song Contest. The Tagesspiegel reported on 7 October 2010 that the costs of hosting the event resulted in a tense discussion about necessary savings on other programme content made by the three broadcasters.

Hosts

On 16 December 2010, NDR announced that Anke Engelke, Judith Rakers, and Stefan Raab were to be the presenters for the contest. It was the third time the contest would be hosted by three people, the previous such contests being 1999 and 2010.[31] Raab is known as the German representative in 2000 with "Wadde hadde dudde da?", whereas Engelke is an actress and comedian, and Rakers a journalist and television presenter.

Event concept and ticket sale

On 13 October 2010 Thomas Schreiber, coordinator at ARD, outlined details of Düsseldorf's event concept. The Esprit Arena was to be split in two parts separated from each other. On one side of the stadium the stage would be installed while the other side would function as background dressing rooms for the artist delegations. An athletics arena next to the Esprit Arena would serve as the press centre for the event. The Esprit Arena offered comfortable seats relatively near to the stage that created an indoor event arena atmosphere rather than a football-stadium ambiance. There were plans to allow the public the chance to attend the dress rehearsals.[32] Altogether, tickets were sold for seven shows (the final, two semi-finals and four dress rehearsals).[33]

He also said in that interview that tickets for the event were likely to go on sale "within the next four weeks" (by mid-November 2010). NDR had already opened a preregistration e-mail-newsletter on its website for all people interested in tickets for the event.[34]

Ticket sales started on 12 December 2010 at 12:12 CET on the website www.dticket.de, the only authorised seller.[35] However, the ticket page opened for sales approximately two hours earlier than originally advertised; this announcement was made by an email newsletter sent to preregistered buyers minutes before opening, giving them a slight benefit in acquiring tickets. The final 32,000 tickets that were put on sale on 12 December sold out in less than six hours. Once camera positions had been determined, a few thousand extra tickets were put on sale.

Tickets for the semi-finals were put on sale in mid-January, when it was known which countries would take part in each semi-final.[36]

Participating countries

Further information: List of countries in the Eurovision Song Contest

Esprit Arena, Düsseldorf – host venue of the 2011 contest.

