布莱切利宣言 The Bletchley Declaration 2023年11月1日

Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential. 
人工智能(AI)提供了巨大的全球机遇:它有潜力改变和增强人类福祉、和平与繁荣。为了实现这一点,我们确认,为了所有人的利益,人工智能的设计、开发、部署和使用应该以安全的方式,以人为中心,值得信赖和负责任。我们欢迎国际社会迄今为止努力在人工智能方面进行合作,以促进包容性经济增长、可持续发展和创新,保护人权和基本自由,并培养公众对人工智能系统的信任和信心,从而充分发挥其潜力。

AI systems are already deployed across many domains of daily life including housing, employment, transport, education, health, accessibility, and justice, and their use is likely to increase. We recognise that this is therefore a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally. This includes for public services such as health and education, food security, in science, clean energy, biodiversity, and climate, to realise the enjoyment of human rights, and to strengthen efforts towards the achievement of the United Nations Sustainable Development Goals.
人工智能系统已经部署在日常生活的许多领域,包括住房、就业、交通、教育、卫生、无障碍和司法,它们的使用可能会增加。因此,我们认识到,这是一个独特的时刻,需要采取行动,确认人工智能安全发展的必要性,以及在我们的国家和全球范围内以包容性的方式为所有人造福的人工智能变革机会。这包括卫生和教育、粮食安全、科学、清洁能源、生物多样性和气候等公共服务,以实现人权的享受,并加强实现联合国可持续发展目标的努力。

Alongside these opportunities, AI also poses significant risks, including in those domains of daily life. To that end, we welcome relevant international efforts to examine and address the potential impact of AI systems in existing fora and other relevant initiatives, and the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed. We also note the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content. All of these issues are critically important and we affirm the necessity and urgency of addressing them. 
除了这些机会,人工智能还带来了重大风险,包括在日常生活领域。为此,我们欢迎相关国际努力在现有论坛和其他相关举措中审查和解决人工智能系统的潜在影响,并认识到需要解决对人权的保护、透明度和可解释性、公平性、问责制、监管、安全性、适当的人类监督、道德、减少偏见、隐私和数据保护等问题。我们还注意到,操纵内容或生成欺骗性内容的能力可能会带来不可预见的风险。所有这些问题都至关重要,我们申明解决这些问题的必要性和紧迫性。

Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks - as well as relevant specific narrow AI that could exhibit capabilities that cause harm - which match or exceed the capabilities present in today’s most advanced models. Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict. We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models. Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent.
特定的安全风险出现在人工智能的“前沿”:这里的“前沿”指那些能力极强、可执行各种各样任务的通用人工智能模型(包括基础模型),也包括可能表现出造成伤害能力的相关特定狭义人工智能,其能力达到或超过当今最先进模型所具备的能力。潜在的故意滥用,或与对齐人类意图相关的意外控制问题,都可能产生重大风险。出现这些问题,部分是因为人们尚未完全理解这些能力,因此难以预测。我们特别关注网络安全和生物技术等领域的此类风险,以及前沿人工智能系统可能放大虚假信息等风险的情形。这些人工智能模型最强大的能力可能造成严重甚至灾难性的伤害,无论是故意还是无意。鉴于人工智能变化速度之快且充满不确定性,并考虑到技术投资的加速,我们确认,加深对这些潜在风险的理解并采取行动加以应对尤为紧迫。

Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI. In doing so, we recognise that countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI. This could include making, where appropriate, classifications and categorisations of risk based on national circumstances and applicable legal frameworks. We also note the relevance of cooperation, where appropriate, on approaches such as common principles and codes of conduct. With regard to the specific risks most likely found in relation to frontier AI, we resolve to intensify and sustain our cooperation, and broaden it with further countries, to identify, understand and as appropriate act, through existing international fora and other relevant initiatives, including future international AI Safety Summits.
人工智能产生的许多风险本质上具有国际性,因此最好通过国际合作来解决。我们决心以包容的方式共同努力,确保人工智能以人为中心、值得信赖、负责任且安全,并通过现有的国际论坛和其他相关举措造福所有人,促进合作以应对人工智能带来的广泛风险。在此过程中,我们认识到,各国应考虑采取有利于创新且适度的治理和监管方法的重要性,这种方法既能最大限度地发挥效益,又能兼顾与人工智能相关的风险。这可能包括酌情根据国情和适用的法律框架,对风险进行分类和分级。我们还注意到,酌情就共同原则和行为守则等方法开展合作的意义。关于最有可能与前沿人工智能相关的具体风险,我们决心通过现有的国际论坛和其他相关举措(包括未来的国际人工智能安全峰会),加强并持续我们的合作,并将其扩大到更多国家,以识别、理解并酌情采取行动。

All actors have a role to play in ensuring the safety of AI: nations, international fora and other initiatives, companies, civil society and academia will need to work together. Noting the importance of inclusive AI and bridging the digital divide, we reaffirm that international collaboration should endeavour to engage and involve a broad range of partners as appropriate, and welcome development-orientated approaches and policies that could help developing countries strengthen AI capacity building and leverage the enabling role of AI to support sustainable growth and address the development gap.
所有行为者都应在确保人工智能安全方面发挥作用:国家、国际论坛和其他倡议、公司、民间社会和学术界需要共同努力。注意到包容性人工智能和弥合数字鸿沟的重要性,我们重申,国际合作应酌情努力吸纳并让广泛的合作伙伴参与;我们欢迎以发展为导向的方法和政策,帮助发展中国家加强人工智能能力建设,并利用人工智能的赋能作用支持可持续增长、缩小发展差距。

We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures. We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks.
我们确认,尽管必须在人工智能的整个生命周期中考虑安全问题,但开发前沿人工智能能力的行为者,特别是针对那些异常强大且可能有害的人工智能系统,对确保这些系统的安全负有特别重大的责任,包括通过安全测试体系、评估以及其他适当措施。我们鼓励所有相关行为者就其衡量、监测和减轻潜在有害能力及可能随之出现的相关影响的计划,提供与具体情况相适应的透明度和问责,特别是为了防止滥用和控制问题,以及防止其他风险被放大。

In the context of our cooperation, and to inform action at the national and international levels, our agenda for addressing frontier AI risk will focus on:
在我们合作的背景下,为了为国家和国际层面的行动提供信息,我们应对前沿人工智能风险的议程将侧重于:

  • identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
    识别共同关注的人工智能安全风险,建立对这些风险的共同的、科学且基于证据的理解,并在以更广泛的全球视角理解人工智能对我们社会影响的背景下,随着能力的不断提升,持续保持这一理解。
  • building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.
    根据这些风险,在我们各国制定各自的基于风险的政策以确保安全,并酌情开展合作,同时认识到各自的方法可能因国情和适用的法律框架而异。这包括:在开发前沿人工智能能力的私人行为者提高透明度的同时,制定适当的评估指标和安全测试工具,并发展相关的公共部门能力和科学研究。

In furtherance of this agenda, we resolve to support an internationally inclusive network of scientific research on frontier AI safety that encompasses and complements existing and new multilateral, plurilateral and bilateral collaboration, including through existing international fora and other relevant initiatives, to facilitate the provision of the best science available for policy making and the public good.
为了推进这一议程,我们决心支持建立一个具有国际包容性的前沿人工智能安全科学研究网络,该网络涵盖并补充现有和新的多边、诸边和双边合作,包括通过现有国际论坛和其他相关举措,促进为政策制定和公共利益提供现有的最佳科学依据。

In recognition of the transformative positive potential of AI, and as part of ensuring wider international cooperation on AI, we resolve to sustain an inclusive global dialogue that engages existing international fora and other relevant initiatives and contributes in an open manner to broader international discussions, and to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all. We look forward to meeting again in 2024.
认识到人工智能的变革性积极潜力,并作为确保更广泛的人工智能国际合作的一部分,我们决心维持一场包容性的全球对话,让现有的国际论坛和其他相关举措参与进来,并以开放的方式为更广泛的国际讨论做出贡献,并继续对前沿人工智能安全进行研究,以确保能够负责任地利用这项技术的好处,造福所有人。我们期待着在2024年再次会晤。

The countries represented were:代表的国家有:

  • Australia澳大利亚
  • Brazil巴西
  • Canada加拿大
  • Chile智利
  • China中国
  • European Union欧盟
  • France法国
  • Germany德国
  • India印度
  • Indonesia印度尼西亚
  • Ireland爱尔兰
  • Israel以色列
  • Italy意大利
  • Japan日本
  • Kenya肯尼亚
  • Kingdom of Saudi Arabia沙特阿拉伯王国
  • Netherlands荷兰
  • Nigeria尼日利亚
  • The Philippines菲律宾
  • Republic of Korea大韩民国
  • Rwanda卢旺达
  • Singapore新加坡
  • Spain西班牙
  • Switzerland瑞士
  • Türkiye土耳其
  • Ukraine乌克兰
  • United Arab Emirates阿拉伯联合酋长国
  • United Kingdom of Great Britain and Northern Ireland大不列颠及北爱尔兰联合王国
  • United States of America美利坚合众国

References to ‘governments’ and ‘countries’ include international organisations acting in accordance with their legislative or executive competences.
“政府”和“国家”包括根据其立法或行政权限行事的国际组织。

参考资料:https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023

posted @ 2024-02-19 11:39 Cong0ks