AIES 2019: Honolulu, HI, USA
- Vincent Conitzer, Gillian K. Hadfield, Shannon Vallor:
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES 2019, Honolulu, HI, USA, January 27-28, 2019. ACM 2019, ISBN 978-1-4503-6324-2
Invited Talk I
- Ryan Calo:
How We Talk About AI (and Why It Matters). 1
Spotlight 1: Normative Perspectives
- Ava Thomas Wright:
Rightful Machines and Dilemmas. 3-4
- The Anh Han, Luís Moniz Pereira, Tom Lenaerts:
Modelling and Influencing the AI Bidding War: A Research Agenda. 5-11
- Emanuelle Burton, Kristel Clayville, Judy Goldsmith, Nicholas Mattei:
The Heart of the Matter: Patient Autonomy as a Model for the Wellbeing of Technology Users. 13-19
- Bertram F. Malle, Paul Bello, Matthias Scheutz:
Requirements for an Artificial Agent with Norm Competence. 21-27
- Naveen Sundar Govindarajulu, Selmer Bringsjord, Rikhiya Ghosh, Vasanth Sarathy:
Toward the Engineering of Virtuous Machines. 29-35
- Sophie F. Jentzsch, Patrick Schramowski, Constantin A. Rothkopf, Kristian Kersting:
Semantics Derived Automatically from Language Corpora Contain Human-like Moral Choices. 37-44
- Han Yu, Chunyan Miao, Yongqing Zheng, Lizhen Cui, Simon Fauvel, Cyril Leung:
Ethically Aligned Opportunistic Scheduling for Productive Laziness. 45-51
- Tathagata Chakraborti, Subbarao Kambhampati:
(When) Can AI Bots Lie? 53-59
- Thomas Krendl Gilbert, Yonatan Mintz:
Epistemic Therapy for Bias in Automated Decision-Making. 61-67
- Christian Borgs, Jennifer T. Chayes, Nika Haghtalab, Adam Tauman Kalai, Ellen Vitercik:
Algorithmic Greenlining: An Approach to Increase Diversity. 69-76
Session 1: Algorithmic Fairness
- Alejandro Noriega-Campero, Michiel A. Bakker, Bernardo Garcia-Bulle, Alex 'Sandy' Pentland:
Active Fairness in Algorithmic Decision Making. 77-83
- Andrew Morgan, Rafael Pass:
Paradoxes in Fair Computer-Aided Decision Making. 85-90
- Amanda Coston, Karthikeyan Natesan Ramamurthy, Dennis Wei, Kush R. Varshney, Skyler Speakman, Zairah Mustahsan, Supriyo Chakraborty:
Fair Transfer Learning with Missing Protected Attributes. 91-98
- Nripsuta Ani Saxena, Karen Huang, Evan DeFilippis, Goran Radanovic, David C. Parkes, Yang Liu:
How Do Fairness Definitions Fare?: Examining Public Attitudes Towards Algorithmic Definitions of Fairness. 99-106
Session 2: Norms and Explanations
- Adam Lerer, Alexander Peysakhovich:
Learning Existing Social Conventions via Observationally Augmented Self-Play. 107-114
- Dylan Hadfield-Menell, McKane Andrus, Gillian K. Hadfield:
Legible Normativity for AI Alignment: The Value of Silly Rules. 115-121
- Michael Hind, Dennis Wei, Murray Campbell, Noel C. F. Codella, Amit Dhurandhar, Aleksandra Mojsilovic, Karthikeyan Natesan Ramamurthy, Kush R. Varshney:
TED: Teaching AI to Explain its Decisions. 123-129
- Himabindu Lakkaraju, Ece Kamar, Rich Caruana, Jure Leskovec:
Faithful and Customizable Explanations of Black Box Models. 131-138
Session 3: Artificial Agency
- Joe Cruz:
Shared Moral Foundations of Embodied Artificial Intelligence. 139-146
- Beishui Liao, Marija Slavkovik, Leendert W. N. van der Torre:
Building Jiminy Cricket: An Architecture for Moral Agreements Among Stakeholders. 147-153
- Antonio Daniele, Yi-Zhe Song:
AI + Art = Human. 155-161
- Eric P. S. Baumer, Micki McGee:
Speaking on Behalf of: Representation, Delegation, and Authority in Computational Text Analysis. 163-169
Session 4: Autonomy and Lethality
- Daniel Lim:
Killer Robots and Human Dignity. 171-176
- Sean Welsh:
Regulating Lethal and Harmful Autonomy: Drafting a Protocol VI of the Convention on Certain Conventional Weapons. 177-180
- Timothy Geary, David Danks:
Balancing the Benefits of Autonomous Vehicles. 181-186
- Tracy Hresko Pearl:
Compensation at the Crossroads: Autonomous Vehicles and Alternative Victim Compensation Schemes. 187-193
Session 5: Rights and Principles
- Jess Whittlestone, Rune Nyrup, Anna Alexandrova, Stephen Cave:
The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions. 195-200
- Jack Parker, David Danks:
How Technological Advances Can Reveal Rights. 201
Spotlight 2: Fairness and Explanations
- Bishwamittra Ghosh, Kuldeep S. Meel:
IMLI: An Incremental Framework for MaxSAT-Based Learning of Interpretable Classification Rules. 203-210
- Junaid Ali, Muhammad Bilal Zafar, Adish Singla, Krishna P. Gummadi:
Loss-Aversively Fair Classification. 211-218
- Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H. Chi, Alex Beutel:
Counterfactual Fairness in Text Classification through Robustness. 219-226
- Luca Oneto, Michele Donini, Amon Elders, Massimiliano Pontil:
Taking Advantage of Multitask Learning for Fair Classification. 227-237
- Stefano Teso, Kristian Kersting:
Explanatory Interactive Machine Learning. 239-245
- Michael P. Kim, Amirata Ghorbani, James Y. Zou:
Multiaccuracy: Black-Box Post-Processing for Fairness in Classification. 247-254
- Lior Wolf, Tomer Galanti, Tamir Hazan:
A Formal Approach to Explainability. 255-261
- Daniel McNamara, Cheng Soon Ong, Robert C. Williamson:
Costs and Benefits of Fair Representation Learning. 263-270
- Stephen Pfohl, Ben J. Marafino, Adrien Coulet, Fátima Rodriguez, Latha Palaniappan, Nigam H. Shah:
Creating Fair Models of Atherosclerotic Cardiovascular Disease Risk. 271-278
- Mark Ibrahim, Melissa Louie, Ceena Modarres, John W. Paisley:
Global Explanations of Neural Networks: Mapping the Landscape of Predictions. 279-287
- Alexander Amini, Ava P. Soleimany, Wilko Schwarting, Sangeeta N. Bhatia, Daniela Rus:
Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure. 289-295
- Naman Goel, Boi Faltings:
Crowdsourcing with Fairness, Diversity and Budget Constraints. 297-304
- Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan IV, Mark D. M. Leiserson, Adam Tauman Kalai:
What are the Biases in My Word Embedding? 305-311
- Daniel McNamara:
Equalized Odds Implies Partially Equalized Outcomes Under Realistic Assumptions. 313-320
- Jeanna N. Matthews, Marzieh Babaeianjelodar, Stephen Lorenz, Abigail Matthews, Mariama Njie, Nathaniel Adams, Dan Krane, Jessica Goldthwaite, Clinton Hughes:
The Right To Confront Your Accusers: Opening the Black Box of Forensic DNA Software. 321-327
Invited Talk III
- Anca D. Dragan:
Specifying AI Objectives as a Human-AI Collaboration Problem. 329
Spotlight 3: Empirical Perspectives
- Stephen Cave, Kate Coughlan, Kanta Dihal:
"Scary Robots": Examining Public Responses to AI. 331-337
- Ching-Hua Chuan, Wan-Hsiu Sunny Tsai, Su Yeon Cho:
Framing Artificial Intelligence in American Newspapers. 339-344
- Huao Li, Stephanie Milani, Vigneshram Krishnamoorthy, Michael Lewis, Katia P. Sycara:
Perceptions of Domestic Robots' Normative Behavior Across Cultures. 345-351
- Wenjie Hu, Jay Harshadbhai Patel, Zoe-Alanah Robert, Paul Novosad, Samuel Asher, Zhongyi Tang, Marshall Burke, David B. Lobell, Stefano Ermon:
Mapping Missing Population in Rural India: A Deep Learning Approach with Satellite Imagery. 353-359
- Bradley J. Gram-Hansen, Patrick Helber, Indhu Varatharajan, Faiza Azam, Alejandro Coca-Castro, Veronika Kopacková, Piotr Bilinski:
Mapping Informal Settlements in Developing Countries using Machine Learning and Low Resolution Multi-spectral Data. 361-368
- Ravi Pandya, Sandy H. Huang, Dylan Hadfield-Menell, Anca D. Dragan:
Human-AI Learning Performance in Multi-Armed Bandits. 369-375
- De'Aira G. Bryant, Ayanna M. Howard:
A Comparative Analysis of Emotion-Detecting AI Systems with Respect to Algorithm Performance and Dataset Diversity. 377-382
- Ray Jiang, Silvia Chiappa, Tor Lattimore, András György, Pushmeet Kohli:
Degenerate Feedback Loops in Recommender Systems. 383-390
- Vahid Behzadan, James Minton, Arslan Munir:
TrolleyMod v1.0: An Open-Source Simulation and Data-Collection Platform for Ethical Decision Making in Autonomous Vehicles. 391-395
- Charles M. Giattino, Lydia Kwong, Chad Rafetto, Nita A. Farahany:
The Seductive Allure of Artificial Intelligence-Powered Neurotechnology. 397-402
Session 6: Social Science Models for AI
- Daniel Susser:
Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures. 403-408
- Alexander Peysakhovich:
Reinforcement Learning and Inverse Reinforcement Learning with System 1 and System 2. 409-415
- Dylan Hadfield-Menell, Gillian K. Hadfield:
Incomplete Contracting and AI Alignment. 417-422
- Sky Croeser, Peter Eckersley:
Theories of Parenting and Their Application to Artificial Intelligence. 423-428
Session 7: Measurement and Justice
- Inioluwa Deborah Raji, Joy Buolamwini:
Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products. 429-435
- Rodrigo L. Cardoso, Wagner Meira Jr., Virgílio A. F. Almeida, Mohammed J. Zaki:
A Framework for Benchmarking Discrimination-Aware Models in Machine Learning. 437-444
- McKane Andrus, Thomas K. Gilbert:
Towards a Just Theory of Measurement: A Principled Social Measurement Assurance Program for Machine Learning. 445-451
- Alex Beutel, Jilin Chen, Tulsee Doshi, Hai Qian, Allison Woodruff, Christine Luu, Pierre Kreitmann, Jonathan Bischof, Ed H. Chi:
Putting Fairness Principles into Practice: Challenges, Metrics, and Improvements. 453-459
Session 8: AI for Social Good
- Shiwali Mohan, Frances Yan, Victoria Bellotti, Ahmed Elbery, Hesham Rakha, Matthew Klenk:
On Influencing Individual Behavior for Reducing Transportation Energy Expenditure in a Large Population. 461-467
- Zhiyuan Lin, Alex Chohlas-Wood, Sharad Goel:
Guiding Prosecutorial Decisions with an Interpretable Statistical Model. 469-476
- Cristina Cornelio, Lucrezia Furian, Antonio Nicolò, Francesca Rossi:
Using Deceased-Donor Kidneys to Initiate Chains of Living Donor Kidney Paired Donations: Algorithm and Experimentation. 477-483
- Paul Duckworth, Logan Graham, Michael A. Osborne:
Inferring Work Task Automatability from AI Expert Evidence. 485-491
Session 9: Human and Machine Interaction
- Arifah Addison, Christoph Bartneck, Kumar Yogeeswaran:
Robots Can Be More Than Black And White: Examining Racial Bias Towards Robots. 493-498
- Ryan Blake Jackson, Ruchen Wen, Tom Williams:
Tact in Noncompliance: The Need for Pragmatically Apt Responses to Unethical Commands. 499-505
- José Hernández-Orallo, Karina Vold:
AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI. 507-513
- Shervin Shahrdar, Corey Park, Mehrdad Nojoumian:
Human Trust Measurement Using an Immersive Virtual Reality Autonomous Vehicle Simulator. 515-520
Invited Talk IV
- David Danks:
The Value of Trustworthy AI. 521-522
AIES'19 Doctoral Consortium
- Ryan Blake Jackson:
Generating Appropriate Responses to Inappropriate Robot Commands. 523-524
- Maayan Shvo:
Towards Empathetic Planning and Plan Recognition. 525-526
- Filip Michalsky:
Fairness Criteria for Face Recognition Applications. 527-528
- Himan Abdollahpouri:
Popularity Bias in Ranking and Recommendation. 529-530
- Amanda Coston:
Risk Assessments and Fairness Under Missingness and Confounding. 531
- Michelle C. Ausman:
Artificial Intelligence's Impact on Mental Health Treatments. 533-534
- Daniel McNamara:
Algorithmic Stereotypes: Implications for Fairness of Generalizing from Past Data. 535-536
- Nripsuta Ani Saxena:
Perceptions of Fairness. 537-538
- Vasanth Sarathy:
Learning Context-Sensitive Norms under Uncertainty. 539-540
- Kacper Sokol:
Fairness, Accountability and Transparency in Artificial Intelligence: A Case Study of Logical Predictive Models. 541-542
- Aaron Springer:
Enabling Effective Transparency: Towards User-Centric Intelligent Systems. 543-544
- Elija Perrier:
AIES 2019 Student Submission. 545-546
- De'Aira Bryant:
Towards Emotional Intelligence in Social Robots Designed for Children. 547-548
- Duncan C. McElfresh:
A Framework for Technically- and Morally-Sound AI. 549-550
- Meir Friedenberg:
Towards Formal Models of Blameworthiness. 551-552
- Sina Mohseni:
Toward Design and Evaluation Framework for Interpretable Machine Learning Systems. 553-554
- Alan Mishler:
Modeling Risk and Achieving Algorithmic Fairness Using Potential Outcomes. 555-556
- Fernando A. Delgado:
Machine Learning in Legal Practice: Notes from Recent History. 557-558
- McKane Andrus:
On Serving Two Masters: Directing Critical Technical Practice towards Human-Compatibility in AI. 559-560