The ANU is launching a major new project on Humanising Machine Intelligence, uniting computer scientists, philosophers, and social scientists in the pursuit of a more ethical future for AI and Machine Learning.
Machine intelligence (MI) is already used in innumerable applications that, while not explicitly morally loaded, have clear and profound social implications, from facial recognition to the distribution of online attention. It is also used to support decisions with explicit moral dimensions, for example about how to allocate welfare resources, and whom to grant bail or parole. And the application of MI in fully autonomous decision-making systems (robotic and otherwise) is picking up pace. Self-driving vehicles, autonomous weapons systems, and companion robots are the first wave of such systems; many more are on the way. Many companies and governments are also heavily invested in developing more general, multipurpose forms of AI. All of these autonomous systems must be able to make morally loaded decisions by themselves.
In each of these fields, inadequate attention to ethics in the design of MI systems will predictably have negative social consequences, some of which could be catastrophic. The goal of the HMI project is to forestall those risks, and to help realise the tremendous social benefits promised by MI. The project has three components: (1) Discovery: formulate the design problem by identifying the social risks and opportunities of widespread reliance on MI. (2) Foundations: identify and answer the fundamental theoretical questions on which progress towards ethical MI depends. (3) Design: develop ethical algorithms and broader MI systems in partnership with industry and government.
The HMI project chief investigators are: Associate Professors Seth Lazar (Project Leader), Colin Klein and Katie Steele (Philosophy), Professors Marcus Hutter, Sylvie Thiébaux, Bob Williamson and Lexing Xie (Computer Science), Dr. Jenny Davis (Sociology), Associate Professor Idione Meneghel (Economics), and Professor Toni Erskine (Political Science).
We are looking for up to eight talented academics to help us humanise machine intelligence. Our primary criteria are demonstrated research excellence in a discipline area relevant to the project, and clear potential to be a research leader both in that discipline and in the field of moral AI. An interdisciplinary background is not required, but successful applicants must be ready and equipped to engage with scholars from other disciplines, and will be expected to work actively with scholars from at least two of the project's discipline areas.
Three of these new academics will be based in the Research School of Computer Science. Within this discipline, we strongly encourage applications from researchers with a wide range of technical backgrounds, including but not limited to computational social choice and game theory, decision theory, information theory, logic and automated reasoning, general artificial intelligence, machine learning, optimisation, planning & scheduling, reasoning about constraints & preferences, and reinforcement learning. Though these positions will be housed in the Research School of Computer Science, anyone who meets the selection criteria is encouraged to apply, regardless of disciplinary background.
Successful applicants will help us design the next generation of more ethical MI systems, in part through publishing internationally influential research in the leading peer-reviewed venues (as suited to their discipline). We expect them to become leaders in academia, industry or government. As well as conducting research at the highest level, they will help build the HMI community at ANU and globally, through convening a regular seminar series and international workshops. They will also contribute, at a reduced intensity, to the education and outreach agendas of the School, in a manner appropriate to the level of appointment.
Positions are available at academic Level B (Research Fellow) and Level C (Senior Fellow), for an initial fixed term of 3 years, with the potential for extension to 5 years following a mid-term project review. For candidates who currently hold tenure-track or permanent appointments at universities, in industry or in government, and in other exceptional circumstances, a tenure-track or continuing appointment may be considered.