It’s the most wonderful time of the year … the IA Summit chair is making his list and checking it twice – the acceptance and rejection emails for presentations have been sent out, sparking a flurry of conversation on the twitters about the selection process and (mainly) the value of blind reviews.
Although I haven’t been heavily involved in organizing the conference since 2008, when I was the conference chair, I thought I’d offer some transparency into how we did things then – I don’t think the process has changed significantly since.
Back in 2008 we received about 150 regular session proposals and had 45 available session slots. The first step in the selection process was a blind review – 50 volunteers reviewed about 20 proposals each against a set of standard criteria, giving each proposal about 6-7 reviews and scores.
These reviews were done “blind” (without knowing the author) because the final selection committee wanted to gather research on what topics and presentations potential attendees might find attractive without that research being polluted by speaker name recognition.
Every proposal was then reviewed by a final selection committee of three, which I led – taking into account the blind reviews, scores, and speaker identities, as well as our own experience and opinions – and the list was narrowed down to the final 45.
We found that the blind reviews and scores were an excellent starting point and a way to gather different perspectives, but they were just that – a starting point. We picked 26 of the final lineup (58%) from the top 45 as scored by the blind reviews and 19 (42%) from outside it. Three sessions in the final lineup were ranked 137th, 127th and 111th by the blind scores – so you can see that the research needed interpretation!
Bonus Information about Scheduling: Once we had the final 45, we didn’t create the schedule straight away – we published the list on the website and asked attendees to pick their favorites. This enabled us to do 3 things:
- Put popular sessions in large rooms.
- Pre-schedule the 6 most popular sessions into the flex-track as repeats.
- Use a pairs analysis to avoid scheduling two sessions against each other when many attendees wanted to see both.
So there you are – a little window into our process. Hopefully it sheds some light on the role of blind reviews.