Computers Interviewing Humans (Part 2)

Given that we’re using computer algorithms to evaluate humans, can these systems be gamed or fooled? And is it possible that computers are less biased than humans? On any given day, humans can be distracted, tired, sick, or just flat-out biased against people for any number of reasons. Should these systems be more transparent? How do we know if they’re being fair? Do we need to regulate these services? Is there a happy medium here? And finally, if you feel you’ve been unfairly discriminated against by one of these systems, is there anything you can do about it?

John Davisson is Senior Counsel at EPIC. John works on a variety of appellate litigation and Freedom of Information Act cases. John first came to EPIC in 2015 as a clerk in the Internet Public Interest Opportunities Program. He has previously clerked at Levine Sullivan Koch & Schulz, served as a student attorney in the Civil Rights Section of Georgetown’s Institute for Public Representation, and interned at the Appignani Humanist Legal Center. John is a 2016 magna cum laude graduate of Georgetown University Law Center, where he was managing editor of the Georgetown Journal on Poverty Law & Policy, a Georgetown Law Fellow, and an NGO observer to the 9/11 military commission at Naval Station Guantanamo Bay. He worked as a journalist before entering the law and earned his B.A. at Columbia University. John is a member of the New York and District of Columbia bars.

Further Info: