    Brian Tse 谢旻希

     

    • Home
    • Writing
    • Translating
    • Speaking
    • Recorded Talks
    • Media

      • Brian is the Founding Director of Concordia Consulting and a leading expert at the intersection of the long-term impact of AI and China-Western relations. He is a Policy Affiliate at the Center for the Governance of AI, former Senior Advisor at the Partnership on AI, and has advised some of the world’s leading AI firms and think tanks on AI safety and governance.

         

        He has served on the program committee of AI safety workshops at AAAI, IJCAI, and ICFEM. He has been a member of the IEEE P2894 Explainable AI Working Group, IEEE P2863 Organizational Governance of AI Working Group, and the UNICRI-INTERPOL AI Expert Working Group.

         

        Brian is a thought leader on topics including global catastrophic risks and great power relations. He is an advisor at 80,000 Hours. He has been invited to speak at Stanford, Oxford, Tsinghua, and Peking University.

        Brian is a former investment banker at J.P. Morgan.

      • Writing

        Brian's current writing centers mainly on the governance of artificial intelligence.

        Toward Trustworthy AI: Mechanisms for Supporting Verifiable Claims (迈向可信赖的人工智能:可验证声明的支持机制)

        In order for AI developers to earn trust from users, civil society, governments, and other stakeholders, there is a need to move beyond principles to a focus on mechanisms for demonstrating responsible behavior. Making and assessing verifiable claims, to which developers can be held accountable, is one step in this direction. This report suggests various steps that different stakeholders can take to make it easier to verify claims made about AI systems and their associated development processes. The report has received extensive media coverage, including the Financial Times, VentureBeat, 机器之心 (Synced Review), AI科技评论 (part of Leiphone), 中国经营报 (China Business Journal), and 专知人工智能 (zhuangzhi.ai). Brian was a participant in the workshop held in April 2019, a co-author of the report, and the corresponding translator of the Chinese version.

        AI Readiness Index 2020

        From healthcare to education to transportation, AI could improve the delivery of public services. But how can governments position themselves to take advantage of this AI-powered transformation? In this report, Oxford Insights and the International Development Research Centre (IDRC) present the findings of their Government AI Readiness Index to answer this question. Brian was invited to comment on the report as an expert on East Asia (AI Readiness in East Asia: An Emerging Powerhouse).

        AI Governance in 2019: Observations from 50 Global Experts

        The report reviews some of the key progress in AI governance in 2019. It features contributions from 50 experts at 44 institutions, including AI scientists, academic researchers, industry representatives, policy experts, and others. This group of experts covers a wide range of regional developments and perspectives, including those in the United States, Europe, and Asia. The report has been cited in the Montreal AI Ethics Institute's The State of AI Ethics and received extensive media coverage, including 中国科学报 (ScienceNet.cn), 文汇报 (Wen Wei Po), and 澎湃新闻 (The Paper). Brian served as a co-executive director of the report.

        Syllabus: Artificial Intelligence and China

        This syllabus aims to broadly cover the research landscape surrounding China’s AI ecosystem, including the context, components, capabilities, and consequences of China’s AI development. The materials presented range from blogs to books, with an emphasis on English translations of Mandarin source materials.

        Artificial General Intelligence: Toward Cooperation

        This report summarizes the main findings of the 2019 AGI Strategy Meeting held by Foresight Institute. The meeting sought to map out concrete strategies toward cooperation. This includes both reframing adversarial coordination topics in cooperative terms and sketching concrete positive solutions to coordination issues.

      • Translating

        Brian has directly contributed to and/or advised the translation and publication of the following works.

        The Precipice: Existential Risk and the Future of Humanity (Chinese edition: forthcoming)

        Drawing on over a decade of research, The Precipice explores the cutting-edge science behind the existential risks we face. It puts them in the context of the greater story of humanity: showing how ending these risks is among the most pressing moral issues of our time. And it points the way forward, to the actions and strategies that can safeguard humanity.

        The Alignment Problem: Machine Learning and Human Values (Chinese edition: forthcoming)

        Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem.

        Human Compatible: Artificial Intelligence and the Problem of Control 《AI新生:破解人机共存密码——人类最后一个大问题》

        If the predicted breakthroughs occur and superhuman AI emerges, we will have created entities far more powerful than ourselves. How can we ensure they never, ever, have power over us? Russell suggests that we can rebuild AI on a new foundation, according to which machines are designed to be inherently uncertain about the human preferences they are required to satisfy.

        Life 3.0: Being Human in the Age of Artificial Intelligence《生命3.0:人工智能时代,人类的进化与重生》

        This book discusses artificial intelligence and its impact on the future of life on Earth and beyond. It explores a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology, and combinations thereof.

        The OpenAI Charter (OpenAI纲领)

        OpenAI’s mission is to ensure that advanced artificial intelligence—by which they mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. The charter aims to guide OpenAI in acting in the best interests of humanity throughout its development.

        Fu Ying on AI and International Relations (人工智能对国际关系的影响初析)

        This essay analyzes how AI changes the international order from the perspective of international structures and norms. It suggests that countries should discuss the future international norms of AI from the perspective of building a community of shared future for mankind and the principle of common security.

      • Speaking

        Select talks and seminars

        • “AI Toolkit for Law Enforcement”, The Third Global Meeting on AI for Law Enforcement, United Nations Interregional Crime and Justice Research Institute’s (UNICRI) Centre for AI and Robotics and the International Criminal Police Organization (INTERPOL) Global Complex for Innovation, 2020
        • “AI and US-China Relations”, Carnegie-Tsinghua Center for Global Policy, Beijing, 2019 
        • “AI Safety and Global Cooperation”, UC Berkeley’s Center for Human-Compatible AI Annual Workshop, Asilomar, 2019
        • “Responsible AI Development and Global Cooperation”, AI Summit Asia, Hong Kong, 2019 
        • “The Future of Humanity and China Specialists”, Tsinghua University’s Schwarzman College, Beijing, 2019 
        • “The Future of Humanity and Asian Philanthropy”, Asian Philanthropy Circle, Singapore, 2019

        Select panel discussions

        • “AI for Children: Beijing Principles”, Beijing Academy of AI, Beijing, 2020 
        • “AI and Coronavirus”, Open Austria, 2020 
        • “AI Governance Forum”, Beijing Academy of AI, Beijing, 2019 
        • “World Economic Forum Global Shapers Technology and Leadership Summit”, Beijing, 2019 
        • “Opportunities for Cooperation on AI at Academic and Corporate Levels”, Beneficial AI Conference, Puerto Rico, 2019 (Video, 28 min)
        • “AI Industry Immersion”, Tsinghua University’s Schwarzman College, Beijing, 2019 
        • “Dialogue with Prof. Yew Kwang Ng”, The 11th International Youth Summit on Energy and Climate Change, Shenzhen, 2019 
           
      • Select recorded talks

        "Towards A Global Community Of Shared Future in AGI"

         

        Beneficial AGI Conference, Puerto Rico, 2019

        “Sino-Western Cooperation in AI Safety”

        Effective Altruism Global, San Francisco, 2019

      • Media

        This page focuses on Brian's Chinese-language media coverage.

        Beijing Academy of AI Releases China's First Development Principles on AI for Children (智源研究院发布我国首个儿童人工智能发展原则《面向儿童的人工智能北京共识》)

        On September 14, 2020, the Beijing Academy of Artificial Intelligence (BAAI), together with university and research institutes including the Institute for Artificial Intelligence at Peking University, the Institute for AI at Tsinghua University, the Institute of Computing Technology, the Institute of Automation, and the Institute of Psychology of the Chinese Academy of Sciences, and the Institute for AI International Governance at Tsinghua University, as well as AI companies and industry alliances such as Xiaomi, Megvii, TAL Education, Gaosi, Geekbang, Qihoo 360, and the New Generation AI Industry Technology Innovation Strategic Alliance, jointly released the Beijing Consensus on Artificial Intelligence for Children (《面向儿童的人工智能北京共识》), China's first set of AI development principles for children.

        Fu Ying Meets with Visiting Oxford Scholars to Discuss International Dialogue on AI (傅莹会见牛津大学来访学者共同探讨人工智能国际对话)

        On August 27, 2019, Fu Ying met at Building 27 of Shengyin Yuan, Tsinghua University, with visiting scholars from the University of Oxford: Allan Dafoe, Director of the Centre for the Governance of AI, Jaan Tallinn, a co-founder of Skype, and Brian Tse (谢旻希), Policy Researcher at the Centre for the Governance of AI. The two sides exchanged views on the international governance of AI.

        A Delegation from Oxford's Centre for the Governance of AI, Future of Humanity Institute, Visits the Gaoling School of Artificial Intelligence (牛津大学人类未来研究所人工智能治理中心一行来访高瓴人工智能学院)

        Lu Bin and Zhang Guofu, Vice Deans of the Gaoling School of Artificial Intelligence, and Wang Yiwei, Professor at the School of International Studies, met with the Centre's Director Allan Dafoe (艾伦•达福), Policy Researcher Brian Tse (谢旻希), and Jaan Tallinn (贾安•塔林), founding engineer of Skype and founder of the investment firm Ambient Sound Investments.

        The Ethical Governance Challenges of AI: How Do Experts from China, the US, Europe, Japan, and the UK See Them? (AI的伦理治理挑战:中美欧日英各方专家怎么看?)

        In May 2019, the Beijing Academy of Artificial Intelligence released the Beijing AI Principles (《人工智能北京共识》). Later that year, the Academy held the 2019 BAAI Conference at the China National Convention Center; at the afternoon forum on AI ethics, safety, and governance, AI experts from the European Union, the United Kingdom, the United States, Japan, and China shared their views on the ethics of AI.

        How Can Scientific Openness and Risk Management Be Reconciled? An Oxford Researcher on Safe and Reliable AI (科学开放与风险管理如何兼得?牛津大学研究员谈安全可靠AI)

        "Artificial intelligence (AI) should be safe and reliable, and safety covers risks on multiple fronts, including algorithmic vulnerabilities, the application level, and the motives behind its use," said Brian Tse (谢旻希), Policy Researcher at the Center for the Governance of AI at Oxford's Future of Humanity Institute, in an interview with The Paper (澎湃新闻) on the sidelines of the 11th International Youth Summit on Energy and Climate Change, held recently in Shenzhen.

        Oxford University AI Policy Researcher Says Trump’s AI Initiative Falls Short on Immigration and Ethics Issues (牛津大学AI政策研究员:特朗普的人工智能计划在移民和道德问题上有负期待)

        In March 2019, US President Trump signed an executive order launching the American AI Initiative to stimulate US government investment in artificial intelligence and promote the development of the American AI industry. Synced (机器之心) invited Brian Tse (谢旻希), Policy Researcher at the Center for the Governance of AI at Oxford, to share his views on the new American AI Initiative.

        Navigating AI Governance in a Global Context: A conversation with Brian Tse

        Interview by China Tech Blog, incubated at Tsinghua's Schwarzman College.

      © 2022
