

4th Int'l AAAI Conference on Weblogs and Social Media

May 23-26, 2010, George Washington University, Washington, DC

Sponsored by the Association for the Advancement of Artificial Intelligence.


ICWSM 2010 Blog

Friday, March 5, 2010


Response to Ed Chi about the review process

In a comment to a previous post Ed Chi asked:

Can you give some transparency to how decisions are made to reduce a full paper to a poster / short paper?
In answer: the review process this year was somewhat different from previous years.

- For each paper, one SPC-member meta-reviewer and three PC-member reviewers were assigned, primarily based on bidding, with some manual balancing of the reviewers by the PC chairs (in part based on requests from the SPC).

- The PC members submitted their reviews independently (i.e., without seeing the other reviews).

- The SPC member then led a discussion of the paper in the case of any substantive disagreement, and wrote a fourth review summarizing the results. Sometimes additional reviews were requested at this point. The discussion itself is not available to the authors, although the SPC members were encouraged to summarize it in the meta-review.

At this point, if everyone has done their job, there is a lot of information about each paper. But all the decisions so far were based on a very local view of the pool of submissions (each reviewer sees only 8-10 papers), and reviewers tend to calibrate their scores differently in different sub-areas. In the final stage, a small group (Sam and I, with some help from Marti and previous ICWSM organizers) looked over all the papers, reading the meta-reviews for all of them, the reviews for many of them, and sometimes the papers themselves, to produce a final ranking. Aside from the raw scores for each paper, we considered the overall topical balance of the conference and the perceived breadth of interest in a paper, but these were secondary considerations. This discussion is also not provided to the authors.

By the way, my comment about this year having a strong pool of submissions wasn't just polite conversation. There were a half-dozen papers with an average score of 8.5 or above, 16 papers with an average score of 8 or above, and 30 with an average score of 7 or above. After looking the papers over, one of our first steps was to figure out the maximum number of papers we could present as plenaries (25), given the time constraints of the conference, and we spent a lot of time discussing the papers that were close to the cut-off. We also considered, but rejected, the possibility of introducing parallel sessions this year.
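The score-based part of the final ranking described above can be sketched in a few lines of code. This is only an illustration, not the actual tooling used by the chairs, and it deliberately omits the secondary considerations (topical balance, breadth of interest) that the post says also factored in; the paper IDs and scores below are invented.

```python
PLENARY_CAP = 25  # slots available given the 2010 schedule's time constraints

def rank_papers(reviews):
    """reviews: dict mapping paper id -> list of reviewer scores.

    Returns (average score per paper, ids selected as plenaries),
    selecting the highest-averaging papers up to the plenary cap.
    """
    averages = {pid: sum(scores) / len(scores) for pid, scores in reviews.items()}
    ranked = sorted(averages, key=averages.get, reverse=True)
    return averages, ranked[:PLENARY_CAP]

# Toy example with three hypothetical papers:
avgs, plenary = rank_papers({"p1": [9, 8, 9], "p2": [7, 7, 8], "p3": [6, 5, 7]})
print(plenary[0])  # the paper with the highest average comes first
```

In practice the chairs read reviews and meta-reviews rather than sorting on averages alone, which is exactly why papers near the cut-off got extended discussion.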

William Cohen
co-PC chair














Sponsored by the Association for the Advancement of Artificial Intelligence. For more info: icwsm10@aaai.org