Brian is the Founding Director of Concordia Consulting and a leading expert on the intersection of the long-term impacts of AI and China-Western relations. He is a Policy Affiliate at the Centre for the Governance of AI, a former Senior Advisor at the Partnership on AI, and has advised some of the world’s leading AI firms and think tanks on AI safety and governance.
He has served on the program committee of AI safety workshops at AAAI, IJCAI, and ICFEM. He has been a member of the IEEE P2894 Explainable AI Working Group, IEEE P2863 Organizational Governance of AI Working Group, and the UNICRI-INTERPOL AI Expert Working Group.
Brian is a thought leader on topics including global catastrophic risks and great power relations. He is an advisor at 80,000 Hours. He has been invited to speak at Stanford, Oxford, Tsinghua, and Peking University.
Brian is a former investment banker at J.P. Morgan.
Brian's current writing centers on the governance of artificial intelligence.
For AI developers to earn trust from users, civil society, governments, and other stakeholders, they need to move beyond principles to mechanisms for demonstrating responsible behavior. Making and assessing verifiable claims, to which developers can be held accountable, is one step in this direction. This report suggests steps that different stakeholders can take to make it easier to verify claims made about AI systems and their associated development processes. The report has received extensive media coverage, including the Financial Times, VentureBeat, 机器之心 (Synced Review), AI科技评论 (within Leiphone), 中国经营报 (China Business Journal), and 专知人工智能 (zhuangzhi.ai). Brian was a participant in the workshop held in April 2019, a co-author, and the corresponding translator of the Chinese version.
From healthcare to education to transportation, AI could improve the delivery of public services. But how can governments position themselves to take advantage of this AI-powered transformation? In this report, Oxford Insights and the International Development Research Centre (IDRC) present the findings of their Government AI Readiness Index to answer this question. Brian was invited to comment on the report as an expert on East Asia (AI Readiness in East Asia: An Emerging Powerhouse).
The report reviews key developments in AI governance in 2019. Fifty experts from 44 institutions contributed to it, including AI scientists, academic researchers, industry representatives, and policy experts. This group covers a wide range of regional developments and perspectives, including those in the United States, Europe, and Asia. The report has been cited by the Montreal AI Ethics Institute’s The State of AI Ethics and received extensive media coverage, including 中国科学报 (ScienceNet.cn), 文汇报 (Wen Wei Po), and 澎湃新闻 (The Paper, thepaper.cn). Brian is a co-executive director of the report.
This syllabus aims to broadly cover the research landscape surrounding China’s AI ecosystem, including the context, components, capabilities, and consequences of China’s AI development. The materials presented range from blogs to books, with an emphasis on English translations of Mandarin source materials.
This report summarizes the main findings of the 2019 AGI Strategy Meeting held by Foresight Institute. The meeting sought to map out concrete strategies toward cooperation. This includes both reframing adversarial coordination topics in cooperative terms and sketching concrete positive solutions to coordination issues.
Brian has directly contributed to and/or advised the translation and publishing of the following works.
Drawing on over a decade of research, The Precipice explores the cutting-edge science behind the existential risks we face. It puts them in the context of the greater story of humanity: showing how ending these risks is among the most pressing moral issues of our time. And it points the way forward, to the actions and strategies that can safeguard humanity.
Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem.
If the predicted breakthroughs occur and superhuman AI emerges, we will have created entities far more powerful than ourselves. How can we ensure they never, ever, have power over us? Russell suggests that we can rebuild AI on a new foundation, according to which machines are designed to be inherently uncertain about the human preferences they are required to satisfy.
This book discusses artificial intelligence and its impact on the future of life on Earth and beyond. It examines a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology, and combinations thereof.
OpenAI’s mission is to ensure that advanced artificial intelligence—by which they mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. The charter aims to guide OpenAI in acting in the best interests of humanity throughout its development.
This essay analyzes how AI changes the international order from the perspective of international structures and norms. It suggests that countries should discuss the future international norms of AI from the perspective of building a community of shared future for mankind and the principle of common security.
Select talks and seminars
“AI Toolkit for Law Enforcement”, The Third Global Meeting on AI for Law Enforcement, United Nations Interregional Crime and Justice Research Institute’s (UNICRI) Centre for AI and Robotics and the International Criminal Police Organization (INTERPOL) Global Complex for Innovation, 2020
“AI and US-China Relations”, Carnegie-Tsinghua Center for Global Policy, Beijing, 2019
“AI Safety and Global Cooperation”, UC Berkeley’s Center for Human-Compatible AI Annual Workshop, Asilomar, 2019
“Responsible AI Development and Global Cooperation”, AI Summit Asia, Hong Kong, 2019
“The Future of Humanity and China Specialists”, Tsinghua University’s Schwarzman College, Beijing, 2019
“The Future of Humanity and Asian Philanthropy”, Asian Philanthropy Circle, Singapore, 2019
Select panel discussions
“AI for Children: Beijing Principles”, Beijing Academy of AI, Beijing, 2020
“AI and Coronavirus”, Open Austria, 2020
“AI Governance Forum”, Beijing Academy of AI, Beijing, 2019
“World Economic Forum Global Shapers Technology and Leadership Summit”, Beijing, 2019
“Opportunities for Cooperation on AI at Academic and Corporate Levels”, Beneficial AI Conference, Puerto Rico, 2019 (Video, 28 min)
“AI Industry Immersion”, Tsinghua University’s Schwarzman College, Beijing, 2019
“Dialogue with Prof. Yew Kwang Ng”, The 11th International Youth Summit on Energy and Climate Change, Shenzhen, 2019
Select recorded talks
“Towards a Global Community of Shared Future in AGI”
Beneficial AGI Conference, Puerto Rico, 2019
“Sino-Western Cooperation in AI Safety”
Effective Altruism Global, San Francisco, 2019
This page focuses on Brian's Chinese-language media coverage.