We publish research articles in conferences and journals primarily in the field of computer science, but also in other fields including psychology, sociology, and medicine. See our blog for research highlights and our publications page for a comprehensive view of our research contributions. Here are excerpts from recent articles:
Value Sensitive Algorithm Design: Method, Case Study and Lessons
Intelligent algorithmic systems are assisting humans in making important decisions in a wide variety of critical domains. Examples include helping judges decide whether defendants should be detained or released while awaiting trial, assisting child protection agencies in screening referral calls, and helping employers filter job resumes.
However, technically sound algorithms can fail in multiple ways. First, automation may reduce engagement with key users and stakeholders: a series of studies has shown that even when algorithmic predictions are demonstrably more accurate than human predictions, domain experts and laypeople remain reluctant to use the algorithms. Second, an approach that relies largely on automated processing of historical data can repeat and amplify historical stereotypes, discrimination, and prejudice. For instance, African-American defendants were substantially more likely than Caucasian defendants to be incorrectly classified as high-risk offenders by recidivism algorithms.
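The disparity described above is often made concrete by comparing false positive rates across groups. Below is a hypothetical sketch of that comparison; the data, group names, and numbers are invented for illustration and are not drawn from any real study.

```python
# Hypothetical illustration: group-wise false positive rates can expose the
# kind of disparity described above. All labels and predictions are made up.

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives (non-reoffenders) flagged as high-risk."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(p for _, p in negatives) / len(negatives)

# Made-up outcomes (1 = reoffended) and predictions (1 = flagged high-risk)
records = {
    "group_a": ([0, 0, 0, 1, 0, 1, 0, 0], [1, 0, 1, 1, 0, 1, 1, 0]),
    "group_b": ([0, 0, 0, 1, 0, 1, 0, 0], [0, 0, 1, 1, 0, 1, 0, 0]),
}

for group, (y_true, y_pred) in records.items():
    print(group, round(false_positive_rate(y_true, y_pred), 2))
```

Even when two groups reoffend at the same rate, as in this toy data, the classifier can wrongly flag one group far more often; that gap is what the studies above report.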
Simulation Experiments on (the Absence of) Ratings Bias in Reputation Systems
Rating systems for building reputation are used everywhere in the gig economy (e.g., Upwork, Uber, Rover, Instacart), and much prior research suggests they will show race- and gender-based biases. Our research tells a more complex story.
It seems like every day there is a new gig work platform (e.g. Upwork, Uber, Airbnb, or Rover) that uses a 5-star scale to rate workers. This helps workers build reputation and develop the trust necessary for gig work interactions, but there is a big concern: lots of prior work finds that race and gender biases occur when people evaluate each other. In an upcoming paper at the 2018 ACM CSCW conference, we describe what we thought would be a straightforward study of race and gender biases in 5-star reputation systems. However, it turned into an exercise in repeated experimentation to verify surprising results and careful statistical analysis to better understand our findings. Ultimately, we ended up with a future research agenda composed of compelling new hypotheses about race, gender, and five-star rating scales.
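One way to build intuition for such simulation experiments is to model raters who hold a continuous opinion of a worker, apply a small group penalty, and then report on a coarse 5-star scale. The sketch below is purely illustrative; the distribution, bias size, and rounding rule are our assumptions, not the paper's actual simulation code.

```python
import random

# Hypothetical ratings-bias simulation (not the paper's actual code): raters
# form a latent continuous opinion, one group receives a small penalty, and
# the opinion is reported on a clipped, rounded 1-5 star scale.

def simulate_mean_rating(bias, n_raters=10000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(n_raters):
        opinion = rng.gauss(4.2, 0.5) - bias    # latent continuous opinion
        stars = min(5, max(1, round(opinion)))  # reported 1-5 star rating
        total += stars
    return total / n_raters

baseline = simulate_mean_rating(bias=0.0)
biased = simulate_mean_rating(bias=0.1)  # small penalty for one group
print(round(baseline - biased, 3))       # gap in observed star ratings
```

The interesting question such a simulation can probe is how much of a latent bias survives the coarse discretization of a five-star scale, which is one route to surprising "absence of bias" results in the observed ratings.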
MovieLens is a web site that helps people find movies to watch. It has hundreds of thousands of registered users. We conduct online field experiments in MovieLens in the areas of automated content recommendation, recommendation interfaces, tagging-based recommenders and interfaces, member-maintained databases, and intelligent user interface design.
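A core mechanic of online field experiments like these is assigning each user to a stable, pseudo-random condition. The sketch below shows one common way to do that with a hash; the experiment name and arm labels are invented, and this is not MovieLens's actual infrastructure.

```python
import hashlib

# Hypothetical sketch of experiment assignment: hashing the user id together
# with an experiment name gives each user a stable, pseudo-random bucket, so
# the same user always sees the same condition across sessions.

def assign_condition(user_id, experiment, arms):
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

arms = ["control", "new_recommender"]
print(assign_condition(42, "rec-interface-study", arms))
```

Keying the hash on the experiment name as well as the user id keeps assignments independent across concurrent experiments.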
Find bike routes that match the way you ride. Share your cycling knowledge with the community. Cyclopath is a geowiki: an editable map where anyone can share notes about roads and trails, enter tags about special locations, and fix map problems – like missing trails. Hundreds of Twin Cities cyclists are already doing this, making Cyclopath the most comprehensive and up-to-date bicycle information resource in the world.
LensKit is an open source toolkit for building, researching, and studying recommender systems. Do you need a recommender for your next project? LensKit provides high-quality implementations of well-regarded collaborative filtering algorithms and is designed for integration into web applications and other similarly complex environments.
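To illustrate the family of algorithms LensKit provides, here is a minimal sketch of item-item collaborative filtering in plain Python. This is not LensKit's actual API, and the ratings data is invented; it only shows the core idea of predicting a rating from similarity-weighted ratings on other items.

```python
import math

# Minimal item-item collaborative filtering sketch (illustrative only, NOT
# LensKit's API). Predict a user's rating of an item from that user's ratings
# of similar items, with cosine similarity between item rating vectors.

ratings = {  # user -> {item: rating}
    "alice": {"m1": 5, "m2": 3, "m3": 4},
    "bob":   {"m1": 4, "m2": 2, "m3": 5},
    "carol": {"m1": 1, "m2": 5, "m4": 4},
}

def item_vector(item):
    """Ratings of one item, keyed by user."""
    return {u: r[item] for u, r in ratings.items() if item in r}

def cosine(a, b):
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[u] * b[u] for u in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def predict(user, item):
    """Similarity-weighted average of the user's ratings on other items."""
    num = den = 0.0
    for other, rating in ratings[user].items():
        sim = cosine(item_vector(item), item_vector(other))
        num += sim * rating
        den += abs(sim)
    return num / den if den else 0.0

print(round(predict("carol", "m3"), 2))  # carol's predicted rating for m3
```

Production implementations like LensKit's add the pieces this sketch omits: similarity truncation to top-k neighbors, mean-centering, and efficient sparse data structures.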