
Effectively planning a large conference requires understanding the preferences and constraints of organizers, authors, and attendees. Traditionally, the onus of scheduling the program falls on a few dedicated organizers. Resolving conflicts becomes difficult due to the size and complexity of the schedule and the lack of insight into community members’ needs and desires. Working mostly on paper without tool support, organizers often find the scheduling process painstaking.

Cobi addresses these challenges by drawing on the people and expertise within the community, and by embedding intelligence for resolving conflicts into a scheduling interface. We deployed Cobi at CHI 2013 and CSCW 2014, transforming a paper-based process that previously took over a month into a community-informed process in which organizers resolved hundreds of newly surfaced scheduling constraints in just hours.

Committeesourcing

Program committee members group papers that share a common theme, providing affinity information between papers. This step offers a lightweight way to cluster qualitative data using only a small amount of experts' time. The collected affinities inform the initial grouping of papers into sessions.
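As a rough illustration of how such groupings might be turned into affinity data, the sketch below counts how often each pair of papers is placed in the same group; the input format, paper IDs, and weighting are hypothetical and not Cobi's actual pipeline.

```python
from collections import defaultdict
from itertools import combinations

def affinities_from_groupings(groupings):
    """Count how often each pair of papers is placed in the same group.

    `groupings` is a list of groups (one per committee-member grouping),
    where each group is a list of paper IDs. Pairs grouped together by
    more committee members receive a higher affinity score.
    """
    affinity = defaultdict(int)
    for group in groupings:
        for a, b in combinations(sorted(set(group)), 2):
            affinity[(a, b)] += 1
    return dict(affinity)

# Two committee members each propose a themed group of paper IDs.
groupings = [
    [101, 102, 105],   # e.g. a "crowdsourcing" theme
    [101, 105, 230],   # an overlapping theme from another member
]
print(affinities_from_groupings(groupings))
# {(101, 102): 1, (101, 105): 2, (102, 105): 1, (101, 230): 1, (105, 230): 1}
```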



Authorsourcing

Authors mark which other papers they find relevant to their own paper, and which papers they would personally like to attend. Cobi encodes these responses into preferences and constraints, surfacing latent preferences and constraints that were previously not codified. This step provides fine-grained feedback for refining affinities and sessions.
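A minimal sketch of one possible encoding, assuming each author reports a "relevant" list and an "attend" list; the field names and the choice of constraints are illustrative assumptions rather than Cobi's actual scheme.

```python
from collections import defaultdict

def encode_author_responses(responses):
    """Translate authorsourcing responses into affinities and constraints.

    `responses` maps an author's paper ID to two lists:
      - 'relevant': papers the author considers related, which boosts the
                    affinity for sessioning them together
      - 'attend':   papers the author wants to see, treated as a constraint
                    that they should not run in parallel with the author's paper
    """
    affinity = defaultdict(int)
    conflicts = set()
    for paper, answer in responses.items():
        for other in answer.get("relevant", []):
            affinity[tuple(sorted((paper, other)))] += 1
        for other in answer.get("attend", []):
            conflicts.add(tuple(sorted((paper, other))))
    return dict(affinity), conflicts

# A response from the author of paper 101.
responses = {101: {"relevant": [105, 230], "attend": [412]}}
affinity, conflicts = encode_author_responses(responses)
print(affinity)   # {(101, 105): 1, (101, 230): 1}
print(conflicts)  # {(101, 412)}
```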



Cobi Scheduling Tool
Conference chairs use a web-based, visual scheduling interface that combines community input with constraint-solving intelligence. The tool enables chairs to resolve conflicts while weighing other subjective factors, and to make informed improvements to the schedule.
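To illustrate the kind of intelligence involved, the sketch below scores a hypothetical move of a paper into a session by the affinity it would gain and the parallel conflicts it would create; the data structures and scoring are assumptions for illustration, not the tool's implementation.

```python
def evaluate_move(paper, target_session, schedule, affinity, conflicts):
    """Score a candidate move of `paper` into `target_session`.

    `schedule` maps session name -> (time slot, set of paper IDs);
    `affinity` maps sorted paper-ID pairs -> affinity weight;
    `conflicts` holds sorted pairs that should not run in parallel.
    Returns (affinity gained, new parallel conflicts created), which a chair
    can weigh against subjective factors the tool cannot encode.
    """
    slot, session_papers = schedule[target_session]
    gained = sum(affinity.get(tuple(sorted((paper, p))), 0)
                 for p in session_papers)
    parallel = {p for s, (t, papers) in schedule.items()
                if t == slot and s != target_session for p in papers}
    new_conflicts = sum(1 for p in parallel
                        if tuple(sorted((paper, p))) in conflicts)
    return gained, new_conflicts

# Moving paper 101 into "Crowdsourcing 1" gains affinity but creates a conflict
# with paper 412, which runs in parallel and shares an interested author.
affinity = {(101, 105): 1, (101, 230): 1}
conflicts = {(101, 412)}
schedule = {
    "Crowdsourcing 1": ("Mon 11:00", {105, 230}),
    "Visualization": ("Mon 11:00", {412}),
}
print(evaluate_move(101, "Crowdsourcing 1", schedule, affinity, conflicts))  # (2, 1)
```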



Confer
Conference attendees use the web app to decide where to spend their time during the conference: they bookmark favorite papers, receive social recommendations, and browse the schedule. The recommendation engine bootstraps from authorsourcing data and improves as more attendees indicate which papers they like.
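As one way such bootstrapping could work, the sketch below combines co-bookmark counts with a small bonus for author-provided relevance links; the scoring scheme is an assumption for illustration, not Confer's actual algorithm.

```python
from collections import defaultdict

def recommend(paper_id, bookmarks, seed_links, top_n=3):
    """Recommend papers similar to `paper_id`.

    `bookmarks` is a list of per-attendee sets of bookmarked paper IDs;
    `seed_links` is a set of (paper, paper) pairs from authorsourcing, used
    to bootstrap recommendations before bookmark data accumulates.
    Similarity here is simple co-bookmark counting plus a small seed bonus.
    """
    scores = defaultdict(float)
    for marked in bookmarks:
        if paper_id in marked:
            for other in marked - {paper_id}:
                scores[other] += 1.0
    for a, b in seed_links:
        if a == paper_id:
            scores[b] += 0.5
        elif b == paper_id:
            scores[a] += 0.5
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Seeded from authorsourcing, refined by two attendees' bookmarks.
seed_links = {(101, 105), (101, 230)}
bookmarks = [{101, 105, 412}, {101, 105}]
print(recommend(101, bookmarks, seed_links))  # [105, 412, 230]
```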


People

Haoqi Zhang, Northwestern
Steven Dow, CMU HCII
Rob Miller, MIT CSAIL
Juho Kim, MIT CSAIL
Lydia Chilton, UW CSE
Paul André, CMU HCII

Papers