Artificial Intelligence (AI) has been around for decades, but has only recently begun to fulfill the promise of truly replicating human-like decision making. The Information Age has generated enormous quantities of data, and modern technology has given us unprecedented power to ingest and analyze it. AI systems today control airplanes, financial and insurance systems, and even criminal sentencing recommendations. We can use AI to conduct law enforcement and intelligence-gathering operations. AI has even generated audio, video, and photos that are completely fake but nearly impossible for a human to detect. Our guest today, Lorraine Kisselburgh, is working with international organizations to define common-sense guidelines for the creation and use of these AI systems, to maximize their potential and minimize abuse.
Lorraine Kisselburgh (Ph.D., Purdue University) is a Scholar with the Electronic Privacy Information Center in Washington, D.C., a former professor of media, technology, and society, and a visiting lecturer in the Center for Entrepreneurship at Purdue University. She studies the social implications of emerging technologies, including privacy and ethics in emerging technology contexts. Her research has been awarded funding from the National Science Foundation and the Department of Homeland Security, and recognized by the National Academy of Engineering. She currently serves on the executive committee of the Association for Computing Machinery's (ACM) US Technology Policy Committee (USTPC) and was a member of the ACM Task Force on Code of Ethics.
- Email: firstname.lastname@example.org
- Website: www.lkisselburgh.net
- Twitter: @lkisselburgh, @EPICPrivacy
- Facebook: EPICPrivacy
- Universal Guidelines for AI: https://thepublicvoice.org/AI-universal-guidelines/
- Electronic Privacy Information Center (EPIC): https://www.epic.org/
- “Deep Fake” Obama PSA: https://www.youtube.com/watch?v=cQ54GDm1eL0
- Lyrebird fake Trump and Obama voices: https://soundcloud.com/user-535691776/dialog
- OpenAI fake news articles: https://arstechnica.com/information-technology/2019/02/researchers-scared-by-their-own-work-hold-back-deepfakes-for-text-ai/
- AI Now Institute: https://ainowinstitute.org/
- Berkman Klein Center for Internet and Society: https://cyber.harvard.edu/
- Data & Society Intelligence and Autonomy Initiative: https://autonomy.datasociety.net/
- WEF’s AI and Machine Learning: https://www.weforum.org/communities/artificial-intelligence-and-machine-learning