23 May 2015

Thanks for Reading

This blog is no longer being updated.

Thanks for reading, please enjoy the archives!

13 May 2015

Thoughts on the New Science Advisory Structure of the European Commission

Update 15 May: Science | Business has an interesting interview with Robert-Jan Smits, Director-General for Research and Innovation, on the new advisory mechanism. He explains that the new mechanism was designed in part to placate the fussy Brits, still smarting from the dismissal of Anne Glover as EC CSA. He also provides some more hints on how the structure might work.

Today, the European Commission released its new plans for science advice under Jean-Claude Juncker, the EC President. The new structure comes after Juncker terminated the office of the Chief Scientific Advisor, led by Anne Glover, which was put in place by his predecessor, José Manuel Barroso. The new structure is shown visually in the figure at the top of this post.

While there are many details of the plan still to be announced (here is the EC press release with a link to a presentation given earlier today), here are three quick reactions.

1. The 7-person super committee will likely be problematic

My first reaction is that this supra-committee -- called a "High Level Group of eminent scientists" and shown in a blue bubble above -- is a recipe for future problems, for several reasons.

One is just math: the EU has 28 member states and the group has seven seats, so most countries will go unrepresented at any given time. You can do the political math. Another problem is one that dogs the EC more generally: democracy. Whom will the "eminent scientists" actually represent? They will not, as explained today, work for the EC. It is unlikely too that they will work for member states (as, say, a chief scientific advisor or a national government employee would), since such individuals serve governments whose interests may or may not coincide with those of the EU. That leaves industry, NGOs and academia. Appointing members from industry or NGOs seems impractical, leaving academics.

If this is correct, then the "eminent scientists" will probably look a bit like the group that Juncker met with today, which included Nobel and other grand prize winners: Sir Paul Nurse, Jules Hoffmann, Serge Haroche, László Lovász, Jean Tirole and Edvard Ingjald Moser. I love Nobel Prize winners as much as anyone, but until there is a Nobel Prize in science policy, it is safe to say that this group probably does not have great knowledge of science advisory processes or of the byzantine politics of the European Union, although some have relevant experience. A group of the "great and the good" is politically defensible, but may be practically problematic.

Thus, I question whether the "High Level" group is either necessary or desirable.

2. The model appears to confuse "science for policy" with "policy for science."

As I have written before, the EC already has a significant expert body within its ranks that stands ready to enhance science advice to the Commission. There are undoubtedly considerable internal politics (Gambling? I am shocked, shocked!) within the EC between its Directorate-General for Research and Innovation and the Joint Research Centre that I am utterly and blissfully unaware of. That said, it is important to recognize that the support of science (which we might call "policy for science") is a different sort of beast than the support of policy (which we might call "science for policy").

In fact, putting the same body in charge of both functions could create some unhealthy potential for conflicts across roles, for the simple reason that DG RTD is in the business of advocating for science budgets. If advocating for the use of certain science is added to its portfolio, then these different missions could come into conflict. It is a little bit like putting an agricultural ministry in charge of advocating both for farmers and for healthy diets. You'd like to think that these two interests always go hand in hand, but experience shows that they sometimes don't. Experience also shows that separating institutions that support science from those that support policy helps to avoid unnecessary conflicts.

In my opinion, the JRC is much better placed to negotiate the connections between the "demand" for science advice and the "supply" of science advice. Along those lines, see this paper by Dan Sarewitz and me from a few years back, which proposed a model for reconciling the supply of and demand for science (here in PDF). It is a notable positive that the EC proposal adopts the framing of "supply and demand," as this will highlight the significance of the interface between the two, and the need for expertise at that interface.

3. Roles and responsibilities have yet to be described

This is a point made by Corporate Europe Observatory (one of the groups which campaigned to have the EC CSA office terminated): specifically, that "independence" and "transparency" have yet to be defined. I agree. Much better than an "independence" framing might be a "conflict of interest" framing. This is another reason why coordinating the reconciliation of supply of and demand for science might be better done from inside the EC rather than by a body of supposedly heroic, independent superhumans.

More broadly, the role of such a "science advisory mechanism" needs to be made clear. Is it a science arbiter? An honest broker? Or something else? How is demand to be identified? What products are to be produced? Other such questions might be raised as well.

To be fair, there is time to provide answers to these questions. But they should be answered. One of the problems with the EC CSA office was that its creation was seen by some as "mission accomplished," and it started with few resources or mechanisms (see the discussion in Wilsdon and Doubleday, here in PDF). It would be very easy for the EC to move on from today's announcement and let the issue fade into the background, only to reemerge sometime down the road in problematic fashion, as occurred with the CSA.

For anyone interested in improving connections of expertise and decision making, the EC stands at a significant fork in the road. Which route it takes will be significant. We should all keep paying attention.

08 May 2015

Evaluating UK Election Predictions

UPDATE: I have a piece up at the Guardian which draws upon and extends this analysis. See it here. Comments welcomed!

Back in March, I posted a survey of thirteen forecasts of the outcome of the UK election and promised to perform an evaluation when the results were in. Well, the results are in. Let's jump right into the evaluation. After that, I'll offer some more general comments about predicting elections, data journalism and democracy.

Here are the forecasts that I am evaluating:
Many of these forecasts were dynamic, meaning that they changed over time based on updated information. The evaluation below is based on this snapshot of the forecasts from the end of March. But the results won't be particularly sensitive to the date of the forecast, given how badly wrong they were.

The methodology is a very simple one, and one that I have used frequently (e.g., to evaluate predictions of Olympic Medals, the World Cup, the NCAA tournament etc.) and is the basis of a chapter in my new book.

First, I identified a "naive baseline" forecast. In this case I chose to use the composition of the UK Parliament in March 2015. My expectation was that these numbers were (for the most part) determined in 2010, so any forecasters claiming to have skill really ought to improve upon those numbers. Let me emphasize that the March 2015 composition of the UK Parliament is an extremely low threshold for calculating skill.

Second, I calculate the improvement upon or degradation from the naive baseline. I do this by performing a very simple calculation. I (a) take the difference between the forecasted number of seats for a particular political party and the actual results, (b) square this number, (c) sum these squares across the political parties for each forecast, and then (d) take the square root of the resulting sum. The result is a measure of the total number of seats the forecast had in error.
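For the calculation-minded, here is a minimal sketch of that skill calculation in Python. The naive baseline uses the March 2015 composition of Parliament (see the table in my March post below), the actual results are the official 2015 seat totals for the three largest parties, and the forecast shown is a made-up illustration, not one of the thirteen forecasts evaluated here.

```python
import math

# Naive baseline: composition of the UK Parliament in March 2015.
naive = {"CON": 302, "LAB": 256, "LD": 56}

# Official 2015 election results for the same three parties.
actual = {"CON": 330, "LAB": 232, "LD": 8}

def seat_error(forecast, actual):
    """Steps (a)-(d) above: root of summed squared seat errors across parties."""
    return math.sqrt(sum((forecast[p] - actual[p]) ** 2 for p in actual))

def skill(forecast, naive, actual):
    """Positive values mean the forecast improves on the naive baseline."""
    return seat_error(naive, actual) - seat_error(forecast, actual)

# A made-up forecast, for illustration only.
forecast = {"CON": 280, "LAB": 270, "LD": 27}

print(f"naive error:    {seat_error(naive, actual):5.1f} seats")
print(f"forecast error: {seat_error(forecast, actual):5.1f} seats")
print(f"skill:          {skill(forecast, naive, actual):+5.1f} seats")
```

A positive skill value means a forecast beat the naive baseline; as the graphs below show, very few did.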

Let's start with a look at the forecasts for the two biggest parties, the Tories and Labour, which were the only parties which had a realistic chance of forming a government. Here are those results, with RED indicating a worse performance than the naive baseline, and BLACK indicating an improvement (no, there is no black on this graph).
It is difficult to describe this graph as anything other than mass carnage for the forecasters. The predictions were off, and not by a small amount. Nate Silver, who visited the UK before the election to opine on the race, explained to the British public, "What we know is that it's highly likely you won't have a majority." Um, no.

Let's bring in the Liberal Democrats and see how that affected the results. (Note: Only 12 of the 13 forecasts included 3 parties.)
Here we have 2 of 12 forecasts outperforming the naive baseline. Stephen Fisher, who ran a great blog during the election at Elections Etc., did the best as compared to the naive baseline, but this result is tempered a bit by the fact that his forecast degraded after the March prediction was made, with his election-day forecast performing worse. Even the best performers -- the naive forecast, Fisher and Murr -- did pretty poorly overall, missing between 46 and 60 seats across the 3 parties.

The other forecast to outperform the naive baseline was produced by Andreas Murr at LSE and used a "wisdom of the crowds" approach. This method was based on asking people who they thought would win their constituency, not who they would vote for. The fact that this method outperformed every other approach, save one, is worth noting.

Overall, the track record of the forecasters for the three-party vote was also pretty dismal.

Let's bring in the SNP and UKIP. (Note: Only 8 of the 13 forecasts included SNP.)
With the SNP revolution occurring in Scotland, we would expect that this would improve the forecasts, since the naive baseline had only 6 SNP members in Parliament. (UKIP turns out to be mathematically irrelevant in this exercise.) Even so, adding in the SNP only raises two other forecasters above the naive baseline. It is worth noting that the worst performing forecast method (Stegmaier & Williams) had the very best prediction for the number of SNP seats.

Even with advance knowledge that the SNP would gain a large number of seats, that head start led only 50% of the forecasters who predicted SNP seats to improve upon the naive baseline.

Overall, if we take the set of forecasts as an ensemble and ask how they did collectively (simply by summing their seat errors and dividing by the number of parties predicted), the picture remains pretty sorry:
  • Two-Party Forecasts (13): degraded from Naive Baseline by ~38 seats per party
  • Three-Party Forecasts (12): degraded from Naive Baseline by ~17 seats per party
  • Five-Party Forecasts (8): degraded from Naive Baseline by ~0.3 seats per party
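As a rough sketch of how I read that ensemble calculation (the figures above being the group's average seat error relative to the naive baseline, divided by the number of parties predicted), here it is in Python with made-up error totals:

```python
def ensemble_degradation(forecast_errors, naive_error, n_parties):
    """Average degradation of a group of forecasts relative to the naive
    baseline, in seats per party predicted. Positive = worse than naive."""
    mean_error = sum(forecast_errors) / len(forecast_errors)
    return (mean_error - naive_error) / n_parties

# Illustrative only: three two-party forecasts whose total seat errors
# were 130, 145 and 160 against a naive baseline error of 60 seats.
print(ensemble_degradation([130, 145, 160], 60, 2))  # -> 42.5 seats per party
```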
So what lessons should we take from this exercise?

One lesson is that while predicting elections is interesting and fun from an academic perspective, it may not add much to our democratic practices. Nate Silver at FiveThirtyEight, for better or worse, has become the face of poll-driven "horse-race journalism," in which the politics and policy choices are stripped out and numbers are pretty much all that matters. This is of course ironic, because Silver used to complain about punditry and horse-race journalism. Yet during his recent PR tour of the United Kingdom he was the ultimate pundit weighing in on the horse race. Not discussed by Silver were questions about subjects such as the future of the NHS, recharging UK productivity, or the desirability of Scottish independence or a possible EU referendum.

My criticism of election forecasts goes back a long way. Back in 2004 I wrote:
Rather than trying to see the future, political science might serve us better by helping citizens to create that future by clarifying the choices we face and their possible consequences for policy.
By simply predicting seats and treating politics like a sporting event, we diminish the partisanship, the choices, and the fundamental values that lie at the core of politics. Politics is about people and our collective future. I fear that data journalists have diminished our politics.

A second lesson is that we often forget our ignorance. Back in 2012 Nate Silver wrote very smartly:
Can political scientists “predict winners and losers with amazing accuracy long before the campaigns start”?

The answer to this question, at least since 1992, has been emphatically not. Some of their forecasts have been better than others, but their track record as a whole is very poor.
The 2015 UK General Election reminds us of this fact. Sure, it does seem possible to anticipate US elections, but this may say something about American exceptionalism (e.g., a highly partisan electorate, well-gerrymandered districts, and a relatively simple electoral system that is overwhelmingly well-surveyed) rather than anything about the predictability of politics more generally.

I don't mean to pick on Nate Silver (disclaimer: I worked for him briefly in 2014, and admit to sometimes being seduced by horse-race journalism!) but at the same time, his overwhelming presence in the UK elections (and that of other forecasters) was influential enough to warrant critique. I have long had a lot of respect for Nate, not least because in the US at least, he figured out how to systematically integrate and evaluate polls, something that academic political scientists utterly failed to do.

At the same time, here is one example of the overwhelming influence of a dominant "narrative" in popular discourse. One pollster, Survation, conducted a survey before the election that proved remarkably accurate. But they chose not to publish it. Why not?
We had flagged that we were conducting this poll to the Daily Mirror as something we might share as an interesting check on our online vs our telephone methodology, but the results seemed so “out of line” with all the polling conducted by ourselves and our peers – what poll commentators would term an “outlier” – that I “chickened out” of publishing the figures – something I’m sure I’ll always regret.
While Survation has to live with the decision not to release their poll, I can understand the pressures that exist not to contradict popular narratives expressed by loud and powerful media bodies. These pressures can mean narrow perspectives that exclude other, inconvenient expertise. Sometimes, the popular narrative is wrong.

The role of data journalists (and their close cousins, the explainer journalists) should not be to limit public discourse, either intentionally or unintentionally by weight of influence, but rather to open it. This means going beyond the numbers and into all the messiness of policy and politics. Data journalism, like our democracies, remains a work in progress.

05 May 2015

Handicapping the UK General Election

I am trained as a political scientist but I am in no way an expert in UK politics. With that out of the way, I thought it might be fun and educational (both for me) to have a go at making sense of the upcoming UK election. So please read on and comment if you'd like.

The UK parliament has 650 seats, which would make a majority 326 seats. Presently, the UK is governed by a coalition of the Conservative party (holding 302 seats) and the Liberal Democrats (holding 56 seats). The upcoming election is particularly interesting because there is a significant chance that, after losses, the combined Conservative and LibDem seats will total less than the number needed to form a government. The number actually needed to secure a majority is less than 326 because Sinn Fein does not take its seats, due to a longstanding boycott of Westminster. A working majority is generally thought to be 323.
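That threshold arithmetic is simple enough to sketch in code. A minimal illustration (the abstention count of five is Sinn Fein's seat total in the current parliament, per the table in my March post):

```python
TOTAL_SEATS = 650

def working_majority(abstaining_seats):
    """Smallest bloc that outnumbers all other members who actually vote.
    With no abstentions this is 650 // 2 + 1 = 326; roughly each pair of
    abstaining members lowers the threshold by one seat."""
    voting_members = TOTAL_SEATS - abstaining_seats
    return voting_members // 2 + 1

print(working_majority(0))  # -> 326
print(working_majority(5))  # -> 323, the figure usually quoted

# Does a proposed coalition clear the bar? (March 2015 seat counts.)
coalition_seats = 302 + 56  # Conservative + Liberal Democrat
print(coalition_seats >= working_majority(5))  # -> True
```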

So here are the numbers to watch as the election results come in.

1. The total seats won by the current coalition.

The current (May 5) forecast by electionforecast.co.uk suggests that this total will fall between 272 and 343 seats, with a central forecast of 307 seats. If the coalition secures 323 seats or more, then the odds would appear to strongly favor a continuation of the current coalition, with David Cameron continuing on as Prime Minister. This scenario is the least complicated but also seems fairly unlikely.

Things get more complicated if the current coalition does not win a working parliamentary majority. It is of course also possible that, if the Conservative-LibDem coalition gets close enough to a working majority, several smaller parties could be added to the coalition to secure one.

But what if such a majority is not possible? Let's break down some additional key numbers.

2. The total seats won by the Conservatives.

The current forecast by electionforecast.co.uk has the Conservatives on 281 seats and Labour on 267. Another group of academics, Polling Observatory, has it at 274 Conservative and 272 Labour. May2015 has it at 274 Conservative and 269 Labour. Ladbrokes (a betting market) has it at 286.5 Conservative and 266.5 Labour.

What these various predictions suggest is an expectation that the Conservatives will win more seats than Labour, but not by much and with substantial uncertainty. So, let's consider each possibility in turn.

2a. Conservatives win more seats than Labour

If this occurs, we can expect David Cameron to quickly assert victory and claim a mandate for a second term. This mandate might be implemented via a Conservative-led minority government or a continuation of the Conservative-LibDem coalition as a minority government.

A big wild card here is what Ed Miliband and the Labour party decide to do. If the combined total of Labour and Scottish National Party seats (perhaps plus some other minority parties) reaches 323 or more, then together they will have the votes to pass a motion of no confidence, which would force Cameron to step down.

Such a vote would almost certainly lead to a constitutional crisis as the UK would be in uncharted territory under the provisions of the 2011 Fixed-term Parliaments Act (FTPA). Catherine Haddon at the Institute for Government explains that "the Act substantially changes the rules of politics; and that nobody can yet tell exactly how these new rules will change the game." She gets into more detail:
If a motion of no confidence is passed or there is a failed vote of confidence, there is a 14-day period in which to pass an act of confidence in a new government. If no such vote is passed, a new election must be held, probably a mere 17 working days later.

So far, so clear. But from there we start to get into uncharted territory on two fronts. One is that some of the crucial mechanisms are not set out; the other is how the operation of the Act could affect political dynamics and party bargaining.
It is conceivable that the election winds up in the UK courts.

Thus, if the Conservatives win more seats but parliament is hung, a first big decision will be Ed Miliband and Labour's: whether to join with the SNP to pass a vote of no confidence. Of course, under a minority government such a vote could occur at any time. Perhaps Miliband waits for the first big screw-up by the minority government to force a new election weeks or months later.

Of course, if Miliband cannot assemble 323+ no confidence votes, then the point is moot and the coalition continues to govern with David Cameron as PM, but with a continuing risk of the government falling.

2b. Labour wins more seats than the Conservatives

If this occurs it has the potential to be a game changer. Under this scenario a first big decision shifts to Nick Clegg and the Liberal Democrats. Do they then switch alliances from the Conservatives to Labour? Maybe so, if David Cameron keeps an in/out vote on Europe as a "red line." Clegg could then be known as the man who saved the UK's role in Europe, and perhaps saved the LibDems as a meaningful political party. Of course, if the UK public really values having that vote, it could swing things the other way come the next election.

Under this scenario there would be essentially no risk of a constitutional crisis, as no conceivable combination of the Conservatives and other parties would have the votes to pass a no confidence vote against a Labour government. A Labour-LibDem coalition may be a minority government, but with the SNP as a backstop it might as well be a majority. (For those unaware, Labour has ruled out a coalition with the SNP, and the SNP has ruled out supporting any scenario that includes a Conservative government.)

Bottom Line

My sense of the above is that Labour and Ed Miliband are in the driver's seat regardless of who is PM next week. While it is possible that the current coalition receives a mandate for another term, that seems unlikely. What seems more likely, in order of my qualitative estimation, is the following:

1. Cameron hangs on as PM over a minority government. It lasts somewhere between 2 weeks and 6 months before a second election of 2015.

2. Labour and the LibDems form a new and stable, albeit minority, coalition government.

3. The UK courts settle a constitutional crisis over the FTPA, as there are not enough votes either to form a government or to declare no confidence.

For those wanting to dive deeper, here is an analysis of seats to watch as results come in to get a sense of which way things are turning.

Whatever happens, it'll be fun to watch from Boulder, and a wonderful expression of modern democracy in action!

What do you think?

04 May 2015

Focus of Attention in 10 Years of Charlie Hebdo Covers

The graph above and accompanying analysis come from Jean-François Mignot and Céline Goffette, writing in Le Monde last February. The graph shows a content analysis of more than 10 years of Charlie Hebdo covers, from 2005 to 2015. Of 523 covers, religion was the subject of 38, and of those 38, Islam was the subject of just 7.

Charlie Hebdo is in the news this week because its staff has been given the Toni and James C. Goodale Freedom of Expression Courage Award by the PEN American Center, which supports free expression. The award prompted a backlash by some PEN members and others, who think that the award is inappropriate.

Whatever your views on free expression, Charlie Hebdo or the PEN Award, the data shown above are an important part of understanding what Charlie Hebdo does -- it is not a magazine obsessed with (or even focused on) religion or Islam.

On the PEN controversy, I think that The Economist gets it just about right:
Unfettered free speech is good for humanity. Charlie Hebdo was firebombed, and its journalists were threatened and attacked for what they wrote—yet they persisted. That they persisted in drawing crass, juvenile cartoons is beside the point. Defending free speech means defending speech you don’t like; otherwise it’s just partisanship, not principle.

28 April 2015

Earthquakes: Death Rates and Frequency of Big Events

The earthquake disaster in Nepal continues to unfold, now with more than 5,000 confirmed deaths, a number that is sure to rise.  Several excellent pieces on earthquakes (such as these by Brad Plumer at Vox and Andy Revkin at the NYT) have me thinking about long-term trends. So I have quickly put together some data, with some interesting results.

1. Are we, globally, doing better with respect to earthquakes?

The graph above shows data for 1900 through 2014 on global death rates, expressed as deaths per million global population. (Data: Our World in Data and Daniell et al. 2011, which runs through 2010, with population estimated through 2014 and earthquake fatalities updated for 2012, 2013 and 2014.)

Since 1900 earthquake death rates have dropped by more than 80% (based on a linear trend, shown in red). This is extremely good news, and suggests that even as global population increased by a factor of more than 4, we are collectively doing much better with respect to earthquakes.
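Claims like "dropped by more than 80%, based on a linear trend" are straightforward to reproduce. Here is a minimal sketch with numpy, using a synthetic series for illustration only; it is not the actual Daniell et al. data behind the figures:

```python
import numpy as np

# Synthetic annual death rates (deaths per million people), for
# illustration only -- NOT the actual Daniell et al. series.
rng = np.random.default_rng(0)
years = np.arange(1900, 2015)
rates = 20.0 - 0.15 * (years - 1900) + rng.normal(0, 3.0, years.size)
rates = rates.clip(0.1)  # death rates cannot go negative

# Fit a linear trend and compare its fitted start and end values;
# this is how a statement like "dropped by more than 80%" is derived.
slope, intercept = np.polyfit(years, rates, 1)
fit_start = slope * years[0] + intercept
fit_end = slope * years[-1] + intercept
print(f"trend change since {years[0]}: "
      f"{100 * (fit_end - fit_start) / fit_start:.0f}%")
```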

But let's zoom in to the most recent 25 years and take a closer look.
Since 1990, death rates have increased by a factor of about 3, based on a linear trend (red). What exactly is going on here?

The answer appears to lie in a recent increase in the occurrence of large earthquakes, which I address next.

2. Have earthquakes become more common? 

The graph above comes from my colleague here at CU-Boulder, Professor Roger Bilham, a world authority on earthquakes, and on central Asia in particular (here is Bilham on the Nepal quake). That figure clearly shows a big gap in the incidence of quakes of magnitude 8.4 and greater from the mid-1960s through the early 2000s. The swarm of big events in the 2000s coincides with the increased death rates over the same period shown above.

The pattern also shows up at slightly lower earthquake intensities. The figure below shows that same 1980s and 1990s "lull." The result is a pattern of an increasing incidence of strong quakes since 1990 which also shows up at magnitudes 8.0 and 7.5 (from Ben-Naim et al. 2013).
Another study concluded, "Obvious increases in the global rate of large (M ≥ 7.0) earthquakes happened after 1992, 2010, and especially during the first quarter of 2014" (here in PDF). So it seems clear that the world is in a recent period of increased earthquake activity. But does that mean that earthquakes are increasing? Or is it that we are having a run of bad luck?

The figure immediately above, showing long-term incidence back to 1900, provides a good sense of where experts presently stand. Here are the conclusions of three recent papers that looked at these questions:
  • Parsons and Geist 2014 (PDF): "we cannot find a strong signal associated with global M ≥ 7.0 earthquakes that rises above the random fluctuations that are observed between regular 48 h periods; the largest rate increases we see are not associated with global main shock."
  • Ben-Naim et al. 2013: "in the magnitude threshold range 7.0≤Mmin≤8.3 which constitutes the vast majority of great earthquakes on record, the earthquake sequence does not exhibit significant deviations from a random set of events." (They do note the two clusters of >8.3 events, one mid-20th century and one during the past decade.)
  • Shearer and Stark (2012): "Global clustering of large earthquakes is not statistically significant: The data are statistically consistent with the hypothesis that these events arise from a homogeneous Poisson process."
None of the studies listed above rules out the possibility that earthquakes are becoming more common; they simply find no evidence to support such an assertion, given the historical occurrence of events (compare Dimer de Oliveira 2012). This has to do with both statistics (no evidence of a long-term trend) and physics (no reason to expect such an increase beyond variability).
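To make the statistical point concrete, here is a rough Monte Carlo version of the kind of test these papers perform: compare the year-to-year clustering in a record of annual earthquake counts against what a homogeneous Poisson process would produce. The counts below are invented for illustration and are not taken from any of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented annual counts of great earthquakes over 30 years
# (illustration only -- not data from the papers cited above).
observed = np.array([0, 1, 0, 0, 2, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0,
                     3, 1, 2, 0, 2, 1, 2, 0, 1, 2, 0, 1, 3, 2, 1])

# Under a homogeneous Poisson process the variance of the annual counts
# should be close to their mean; excess variance indicates clustering.
lam = observed.mean()
stat_obs = observed.var()

sims = rng.poisson(lam, size=(100_000, observed.size))
stat_sim = sims.var(axis=1)

# One-sided p-value: how often does pure Poisson randomness produce
# at least this much apparent clustering?
p = (stat_sim >= stat_obs).mean()
print(f"mean={lam:.2f}, variance={stat_obs:.2f}, p={p:.3f}")
```

A large p-value here would mean the record is statistically consistent with a run of bad luck, which is essentially the Shearer and Stark conclusion quoted above.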

That is the science. But from a practical perspective, large earthquakes clearly have become more common since the 1990s. This is also reflected in death rates, which overall have improved since 1900, but at least some part of that improvement is an artifact of infrequent large earthquakes in the 1980s and 1990s.

The Bottom Line

The world is doing better overall with respect to earthquakes. But some of that improvement is based on the good fortune of the 1980s and 1990s. For whatever reason -- and the current consensus is that it is just a random uptick in a variable geophysical process -- strong earthquakes have become more common over the recent decade than they were in the two decades before that.

The recent devastation should provide a reminder that we still have a lot of work to do to reduce vulnerability to earthquakes. We can't control where, when or how strong, but we can exert a huge influence on the death and destruction that results.

27 April 2015

The "Sweet FA Prediction Model" and The UK General Election

[Warning: Electoral outcome spoilers ahead.]

The UK has a big election coming up. Here I'll be evaluating some of the pre-election predictions after the results come in. But it turns out that may be a futile exercise, as it appears the results are already in. Congratulations are in order for Ed Miliband, the next PM of the United Kingdom.

Let me explain.

Back in 2000, Roger Mortimore, director of political analysis for Ipsos MORI, discovered a remarkable predictive relationship for the outcome of UK general elections.

Elec. | Winner | FA Cup holders (year of final) | Shirt colour(s) | Correct?
1997  | Lab    | Manchester U. (1996)  | RED             | Y
1992  | Con    | Tottenham H. (1991)   | WHITE           | Y
1987  | Con    | Coventry City (1987)  | Sky BLUE        | Y
1983  | Con    | Manchester U. (1983)  | RED             | N*
1979  | Con    | Ipswich Town (1978)   | BLUE            | Y
O'74  | Lab    | Liverpool (1974)      | RED             | Y
F'74  | Hung   | Sunderland (1973)     | RED and WHITE   | Y
1970  | Con    | Chelsea (1970)        | BLUE            | Y
1966  | Lab    | Liverpool (1965)      | RED             | Y
1964  | Lab    | West Ham U. (1964)    | RED ("Claret")  | Y
1959  | Con    | Nott'm Forest (1959)  | RED             | N
1955  | Con    | Newcastle U. (1955)   | Black and WHITE | Y
1951  | Con    | Newcastle U. (1951)   | Black and WHITE | Y
1950  | Lab    | Wolves (1949)         | YELLOW          | Y
* Would have been correct if Brighton & Hove Albion (BLUE) had not missed an open goal in the dying seconds of the FA Cup final, before losing the replay.
Mortimore explained:
All you have to do to predict which of the major parties will have an overall majority in the Commons following the election is to note the shirt colours usually worn by the current holders (on election day) of the FA Cup. If their shirts are predominantly in the Conservative colours of blue or white, a Conservative victory will ensue; on the other hand if the predominant colour is red or yellow, Labour will be successful. (Black stripes are ignored.)

The table shows that the Tories win an election held when the FA Cup is held by a club who play in predominantly Blue or White shirts; Labour wins when the cup holders wear a shade of Red or Yellow. A hung Parliament results when the Cup holders wear both parties' colours.
The FA Cup holders for the 2015 general election are Arsenal (as it should be, but I digress), who wear red, and sometimes yellow. This implies a Labour victory and thus Ed Miliband as Prime Minister.
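For the spreadsheet jockeys, the model's entire "methodology" fits in a few lines of Python. This is my own rendering of Mortimore's rule, with the colour classifications as I read them:

```python
LABOUR_COLOURS = {"RED", "YELLOW"}
CONSERVATIVE_COLOURS = {"BLUE", "WHITE"}

def sweet_fa_prediction(shirt_colours):
    """Mortimore's rule: Cup holders wearing predominantly red or
    yellow -> Labour; blue or white -> Conservative; both parties'
    colours -> hung parliament. Black stripes are ignored."""
    colours = {c.upper() for c in shirt_colours} - {"BLACK"}
    labour = bool(colours & LABOUR_COLOURS)
    tory = bool(colours & CONSERVATIVE_COLOURS)
    if labour and tory:
        return "Hung parliament"
    return "Labour" if labour else "Conservative"

print(sweet_fa_prediction(["red"]))             # Arsenal, 2015 -> Labour
print(sweet_fa_prediction(["black", "white"]))  # Newcastle, 1951 -> Conservative
```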

Now, skeptics of the "Sweet FA Prediction model" (as Mortimore calls it) may point out, rightly, that any fool with a spreadsheet can mine data to identify spurious past relationships. You can probably even write academic papers on such things. The real test, you might argue, is how an alleged relationship fares in an out-of-sample prediction context.

Well, let's see what has happened since Mortimore first published his model in 2000.

Elec. | Winner     | FA Cup holders (year of final) | Shirt colour(s)     | Correct?
2010  | Con        | Chelsea (2009)        | BLUE                | Y
2005  | Lab        | Manchester U. (2004)  | RED                 | Y
2001  | Lab        | Liverpool (2001)      | RED                 | Y
1997  | Lab        | Manchester U. (1996)  | RED                 | Y
1992  | Con        | Tottenham H. (1991)   | WHITE               | Y
1987  | Con        | Coventry City (1987)  | SKY BLUE            | Y
1983  | Con        | Manchester U. (1983)  | RED                 | N*
1979  | Con        | Ipswich Town (1978)   | BLUE                | Y
O'74  | Lab        | Liverpool (1974)      | RED                 | Y
F'74  | Indecisive | Sunderland (1973)     | RED & WHITE         | Y
1970  | Con        | Chelsea (1970)        | BLUE                | Y
1966  | Lab        | Liverpool (1965)      | RED                 | Y
1964  | Lab        | West Ham U. (1964)    | RED ("Claret")      | Y
1959  | Con        | Nott'm Forest (1959)  | RED                 | N
1955  | Con        | Newcastle U. (1955)   | BLACK & WHITE       | Y
1951  | Con        | Newcastle U. (1951)   | BLACK & WHITE       | Y
1950  | Lab        | Wolves (1949)         | YELLOW ("Old Gold") | Y
* Would have been correct if Brighton & Hove Albion (BLUE) had not missed an open goal in the dying seconds of the FA Cup final, before losing the replay.

Dare I say ... BOOM?

The Sweet FA Prediction model has gone 3-0 since it was first introduced. That is some fine predicting and clearly validates the model. Despite this remarkable success, Mortimore remains humble: "I must reluctantly point out that the Sweet FA Prediction model© is not entirely serious." Of course, Mortimore then applied the model to the London Mayoral elections with similar success.

It turns out that the FA Cup is a veritable treasure trove of oracle-like prognostication. After the final at Wembley late next month, I'll provide my updated analysis of expected US hurricane damage for 2015 based on the FA Cup final score.

Don't laugh. It anticipated Superstorm Sandy.

Predicting the future turns out to be pretty easy, if you just know where to look.

21 April 2015

PACITA Keynote: Technology Assessment as Political Myth?



Above is a talk I gave in February at the PACITA Conference on Technology Assessment in Berlin. My talk was titled "Technology Assessment as Political Myth?"

In the talk I discussed the phrase "basic research" and the so-called "Green Revolution" as examples of the stories that we tell ourselves about how innovation works. It turns out that the stories we tell about innovation -- about science and technology in the economy and broader society -- are grounded in more than just the empirical.

This is work in progress, imperfect and incomplete, but indicative of where my future interests lie. Comments welcomed. Thanks again to my hosts at PACITA for the opportunity!

16 April 2015

I'm Giving a Talk Next Week

What peer-reviewed research motivated the White House science advisor to write a six-page screed about me and post it on the White House web site? Instigated a social and mainstream media campaign to have me fired from my job? And was the basis for a member of Congress to open an investigation of me?

Next Tuesday, April 21, I'll be giving a lecture here at CU-Boulder at noon in Ekeley W166, sponsored by the Forum on Science, Ethics and Policy, a student group here on campus. I have titled the talk "On Witch Burning and Other Incendiary Topics." It will intermix (a) a narrative of my experiences working on extreme events and climate change over more than 20 years, and (b) some of the actual research on the subject. In the talk there will be some drama and some science. It'll be fun.

It will not be webcast. If you are around, please come and say hi. Thanks!

08 April 2015

Science & Politics Lessons from Ernest Moniz in the Iran Talks

At The Guardian today I have an essay on the role of US Secretary of Energy Ernest Moniz (pictured in one of the photos above) in the Iran nuclear talks, and what we can learn from it for thinking about "science advice." Here is an excerpt:
The good news is that beyond the few issues that occupy the attention of those fighting the latest science wars – over climate change or GMOs to name two of the most prominent partisan battlefields – science is well established in high level politics. That doesn’t mean that we cannot improve how we make use of experts in the political process, but we do have a track record of success to work from.
Head over there for the whole thing, and please feel welcome to return here and offer any comments.

06 April 2015

The Cost of College and the Price of Tuition

Writing in the New York Times yesterday, my University of Colorado colleague Paul Campos, a professor of law, makes the decidedly contrarian argument that decreasing state subsidies are not the primary factor behind rising tuition.

Campos writes:
Once upon a time in America, baby boomers paid for college with the money they made from their summer jobs. Then, over the course of the next few decades, public funding for higher education was slashed. These radical cuts forced universities to raise tuition year after year, which in turn forced the millennial generation to take on crushing educational debt loads, and everyone lived unhappily ever after.

This is the story college administrators like to tell when they’re asked to explain why, over the past 35 years, college tuition at public universities has nearly quadrupled, to $9,139 in 2014 dollars. It is a fairy tale in the worst sense, in that it is not merely false, but rather almost the inverse of the truth.
Campos concludes:
What cannot be defended, however, is the claim that tuition has risen because public funding for higher education has been cut. 
Campos, I am afraid, is wrong. Badly wrong. His major error is to confuse the price of tuition with the cost of delivering a college education. Let me explain.

First, it is important to distinguish between the cost of delivering a college education and the price of tuition. At US public universities, the cost equals the price of tuition plus a state subsidy. Tuition is the price that the student or their family pays to attend college. So what has happened to state subsidies? They have gone down, almost everywhere.

The graph below, from the Center on Budget and Policy Priorities, shows the deep cuts that have been made by state governments in terms of a per-student subsidy.

For a given cost, if the state subsidy goes down, then tuition necessarily must go up to compensate. And the data show this is exactly what has happened. Here is the overall national data:
Tuition revenue more than doubled from 1987 to 2012. And here are the absolute numbers courtesy of the Economist:

But let's get more specific and look at costs, the state subsidy and tuition at the University of Colorado where Campos and I are both professors. Here is some specific data for the University of Colorado:
Here are the details (from this post):
So over the 10 years the price of tuition went up by 293% -- inflation only increased 27%. This is a big increase, and certainly increases the burden on those who pay the tuition. However, over that same period the inflation-adjusted cost of delivering that education went down by 14%. How can this be? The simple answer is that the state has cut its subsidy per student by 60% (closer to 70% after inflation), transferring a large portion of the costs of an education from the state to the student. 
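The accounting identity driving those numbers is easy to sketch. The figures below are rounded, hypothetical illustrations in the spirit of the Colorado example, not the university's actual budget data:

```python
def required_tuition(cost_per_student, subsidy_per_student):
    """Cost = tuition + subsidy, so tuition = cost - subsidy."""
    return cost_per_student - subsidy_per_student

# Rounded, hypothetical per-student figures (inflation-adjusted
# dollars), not actual University of Colorado budget data.
cost_2001, subsidy_2001 = 16_000, 12_000
cost_2011, subsidy_2011 = 14_000, 4_000  # cost down ~12%, subsidy down ~67%

t0 = required_tuition(cost_2001, subsidy_2001)  # -> 4,000
t1 = required_tuition(cost_2011, subsidy_2011)  # -> 10,000
print(f"tuition up {100 * (t1 - t0) / t0:.0f}% while cost fell "
      f"{100 * (cost_2001 - cost_2011) / cost_2001:.0f}%")
```

The point of the sketch: tuition can rise sharply even as the cost of delivery falls, so long as the subsidy falls faster.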

The University of Colorado became more efficient from 2001 to 2011, in that the overall cost-per-student of delivering an education dropped by about 15%. Maybe, as Campos alleges, there are more administrators with swanky salaries. Even so, the cost of delivering an education went down. Perhaps that is not true at every university.

Yet, at the same time, tuition -- as seen by the student and their family -- almost tripled.

Campos is likely correct that overall public money for higher education has increased over recent decades (certainly true if R&D spending is included). But that is utterly irrelevant to the question of why the price of tuition has increased. Tuition is a cost borne by the individual student, and is a function of more variables than just the overall spending on higher education.

The data show clearly, and I think irrefutably, that the pull-back in state subsidies for students attending public universities has led to an increase in the price of tuition. No doubt there are also market factors at play (e.g., the salaries of professors and administrators) and institutional factors (e.g., infrastructure, operations, and yes, the size of administrative budgets). But as the University of Colorado example shows, the overall cost can be reduced, yet tuition can go up dramatically.

So when Campos concludes that "the claim that tuition has risen because public funding for higher education has been cut... flies in the face of facts," he is looking at the wrong facts in the wrong way. The pullback in state funding is indeed a primary driver of the increased price of tuition.

27 March 2015

Evaluating Predictions of the UK General Election

The United Kingdom is going to have a general election on May 7th, just over five weeks from now. There are high stakes and a lot of uncertainties, probably more so than usual. Yesterday, David Cameron (Conservative leader and Prime Minister) and Ed Miliband (Labour leader) squared off in parallel interviews with Jeremy Paxman, in a broadcast watched by 12% of the UK TV audience.

These days, where there are elections there are also election forecasters. But political scientists have been doing this for a long while. Back in the early 1990s, when I was in graduate school in political science, I wrote a seminar paper on methodologies of election forecasting, which at that time I took a pretty dim view of (I still do!).

Where prediction is concerned, it is always worth evaluating our forecasts, lest we trick ourselves into thinking we know more than we actually do. So for fun, I am going to evaluate predictions of the upcoming UK elections.

Courtesy of Will Jennings (@drjennings), a political scientist at the University of Southampton, via Twitter, below is a summary of various predictions of the outcome of the upcoming election.
The 12 predictions span a huge range: +/-33 seats for the Conservatives and +/-25.5 for Labour. Six of the 12 forecast Labour holding more seats than the Conservatives, and six forecast fewer. With such a wide spread, it is mathematically safe to say that some predictions will be better than others.

I am going to address 2 questions using these data after the results are in.

1. Which forecast showed the most skill?
2. Does the collection of forecasts demonstrate any skill?

To evaluate #1, I will use a naive baseline as the basis for calculating a simple skill score. The naive baseline I will use is just the composition of the current UK Parliament.

Party                               | Seats
Conservative                        | 302
Labour                              | 256
Liberal Democrat                    | 56
Democratic Unionist                 | 8
Scottish National                   | 6
Independent                         | 5
Sinn Fein                           | 5
Plaid Cymru                         | 3
Social Democratic & Labour Party    | 3
UK Independence Party               | 2
Alliance                            | 1
Green                               | 1
Respect                             | 1
Speaker                             | 1
Total number of seats               | 650
Current working Government Majority | 73


It is important to note that there are, effectively, a limitless number of ways that a forecast evaluation might be structured, with different results as a consequence. Always beware post hoc forecast evaluations. I have no horse in this race, so I am producing a very simple evaluation, based on methods I have used before on many occasions. These choices could of course be made differently.

Some methodological details:

  • I am evaluating predictions of actual seats, not percentage gains or losses.
  • I am counting all seats equally.
  • I am not evaluating the prediction of specific seats, but overall parliamentary composition. Yes, this means that skill may occur for spurious reasons.
  • Yes, there are other, likely "better," naive baselines that could be used (e.g., using recent opinion poll results). Such a choice will reflect upon absolute skill, but not relative skill.

Given that there is a wide spread among multiple forecasts, we have to be very careful about committing the logical fallacy of using the election outcomes to select among forecasts. This is of course a very common problem in science (which I described in this paper, in PDF, in the context of hurricane forecasts in reinsurance applications). I have described this problem as the hot hand fallacy meets the guaranteed winner scam. It is easy to confuse luck with skill.

So I will also be evaluating the forecast ensemble. I will do this in 2 ways: I will evaluate the average among forecasts, and I will evaluate the distribution of forecasts, both against the naive forecast as well as the election outcome. We can expect to be able to conclude very little about the skill of a forecasting method (as compared to a specific forecast) because we are looking at only one election. So my post-election analysis will necessarily include the empirical and the metaphysical. But we'll cross those bridges when we get there.

This exercise is mainly for fun, but because my new book has a chapter on prediction (that I am in the midst of completing) it is also a useful way for me to re-engage some of the broader literature and data in the context of a significant upcoming election.

Comments, suggestions most welcomed from professionals and amateurs alike!

24 March 2015

The University of Colorado-Boulder's New "Degree in Three" Initiative

The University of Colorado-Boulder has started a new initiative, called "Degree in Three." The Daily Camera today has some background:
The University of Colorado is launching a new initiative for cost-conscious and decisive undergraduate students who want to finish their degree in three years.

Traditionally, students and parents have thought of college as a four-year experience, but that doesn't always need to be the case, said Michael Grant, CU-Boulder vice provost and associate vice chancellor for undergraduate education.

The goal of "Degree in Three" is to make students aware that it's possible to finish all the requirements for a bachelor's degree in three years, an effort that could save them money and help move them along to the next step in their life.
Over the past year, I have been working with Michael Grant, CU-Boulder vice provost and associate vice chancellor for undergraduate education, to roll out the initiative. From the Camera article:
Roger Pielke Jr., a faculty member who has been working with Grant on the initiative, said he thinks of it as an "experiment" to see what kind of demand exists.

"It seems that there's space for expanding the options that are available to students these days, with concern about the cost of college and the job market and so on," he said.

Though students may need to pay for additional courses during the summer, he estimated that finishing in three years could save a student 10 to 20 percent on the total cost of their degree—every little bit counts, he said.

Currently, undergraduate tuition in the College of Arts and Sciences is $9,048 for in-state students, $31,410 for out-of-state students and $32,910 for international students. Students in other colleges and schools pay different tuition rates.

Even if students take slightly longer than three years to finish, that's still a win for the campus.
It's also a win for students and their families. If you are interested in the initiative, please check out its new website - Degree in Three. And please feel free to email me with any questions.

20 March 2015

New Review of Disasters & Climate Change


The Weekly Standard has a positive review, by Robert Bryce, of Disasters & Climate Change. Here is an excerpt:
In The Rightful Place of Science, Pielke acknowledges the massive challenges, and inherent conflicts, in the energy/climate debate—and in so doing, he reveals himself to be both rationalistic and humanistic. I’ll take those stances over religious zealotry every day of the week. 
See the whole review here. Other reviews can be found here.

19 March 2015

My Review of Galileo's Middle Finger

In this week's Nature, I have a review of Galileo's Middle Finger: Heretics, Activists, and the Search for Justice in Science, by Alice Dreger, a medical historian who has spent much of the past decade studying controversies in science.  My review can be found here and here in PDF.

The book is engaging and eye-opening. It chronicles a series of issues, mainly related to the science of sexuality and gender, in which activists (a category that also includes academics) engage in professional and personal attacks on scholars whose work they find inconvenient, unhelpful or just offensive to their sensibilities about how the world should work. Most academic work is like the proverbial tree falling in the forest, but every so often (and probably more often than many of us would like to think), scholarship becomes the focus of a political battle.

I should know. My review was completed and filed the week before I was targeted with an "investigation" by a member of Congress for the audacity of testifying before that august body with the results of peer-reviewed, government-funded research, widely accepted as scientific consensus. But even before that, my own career had led me to be sympathetic to Dreger's arguments.

I have lots of experience with personal and professional attacks based on my research and advocacy. For instance, it was one year ago today that I published a piece at FiveThirtyEight on that same research, which prompted a social and mainstream media campaign to have me fired for voicing such heresies. The Guardian, New York Times, Slate, Salon and even the American Geophysical Union all joined the campaign. Unsurprisingly, FiveThirtyEight succumbed to the pressure, explaining "Reception to the article ran about 80 percent negative in the comments section and on social media. A reaction like that compels us to think carefully about the piece and our editorial process."

So, scientific consensus vs. Facebook likes -- guess who won?

The pressures on academics to conform (or not deviate) are very high. But Dreger is an example of the rare scholar willing to take the heat for taking a good hard look at the "wisdom" of the crowd. She shows that in some instances the crowd is just a mob, utterly indifferent to evidence and scholarship. Dreger has taken many lumps for her work, but I have little doubt that Galileo's Middle Finger will secure her role as a champion of the integrity of evidence, despite the various efforts to delegitimize her and her work, which will probably continue as well.

Here is an excerpt from my review:
Even as her “stomach hurt from the thought of the backlash”, Dreger published her findings (A. Dreger Arch. Sexual Behav. 37, 503–510; 2008). She faced online accusations and e-mails about her funding and politics; ethics charges were filed against her with her dean. Ultimately, however, she won a Guggenheim Fellowship to look at other conflicts involving scientists and activists.

Dreger ends this powerful book by calling for her fellow academics to counter the “stunningly lazy attitude toward precision and accuracy in many branches of academia”. In her view, chasing grants and churning out papers now take the place of quality and truth. It is a situation exacerbated by a media that can struggle when covering scientific controversies, and by strong pressures from activists with a stake in what the evidence might say.

She argues, “If you must criticize scholars whose work challenges yours, do so on the evidence, not by poisoning the land on which we all live.” There is a lot of poison in science these days. Dreger is right to demand better.
If you are interested in the politics of science in the 21st century and the challenges faced by scholars who do work deemed politically taboo, then you will benefit from Dreger's engaging writing and exploration of numerous cases. You'll also learn something about human sexuality and its many complexities. Galileo's Middle Finger will be on my fall syllabus in my graduate seminar in Science & Technology Policy.

Read my full review here in PDF.