OpenAI co-founder: "Superintelligent" AI must be controlled


  Artificial intelligence company OpenAI says superintelligence will be 'most impactful' tech ever invented

  A co-founder of artificial intelligence leader OpenAI is warning that superintelligence must be controlled in order to prevent the extinction of the human race.

  "Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction," Ilya Sutskever and head of alignment Jan Leike wrote in a Tuesday blog post, saying they believe such advancements could arrive as soon as this decade.

  They said managing such risks would require new institutions for governance and solving the problem of superintelligence alignment: ensuring AI systems much smarter than humans "follow human intent."

  "Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us and so our current alignment techniques will not scale to superintelligence," they wrote. "We need new scientific and technical breakthroughs."

  To solve these problems, they said they are leading a new team and dedicating 20% of the compute power secured to date to this effort over the next four years.

  "While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem," they said.

  In addition to work improving current OpenAI models like ChatGPT and mitigating risks, the new team is focused on the machine learning challenges of aligning superintelligent AI systems with human intent.

  Its goal is to devise a roughly human-level automated alignment researcher, using vast amounts of compute to scale it and "iteratively align superintelligence."

  In order to do so, OpenAI will develop a scalable training method, validate the resulting model and then stress test its alignment pipeline.

  The company's efforts are backed by Microsoft.
