
Hawking and others call for vigilance over the side effects of artificial intelligence



Dozens of scientists, entrepreneurs and investors involved in the field of artificial intelligence, including Stephen Hawking and Elon Musk, have signed an open letter warning that greater focus is needed on its safety and social benefits.

The letter and an accompanying paper from the Future of Life Institute (FLI), which suggests research priorities for “robust and beneficial” artificial intelligence, come amid growing nervousness about the impact on jobs, or even humanity’s long-term survival, from machines whose intelligence and capabilities could exceed those of the people who created them.

“Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls,” the FLI’s letter says. “Our AI systems must do what we want them to do.”

The FLI was founded last year by volunteers including Jaan Tallinn, a co-founder of Skype, to stimulate research into “optimistic visions of the future” and to “mitigate existential risks facing humanity”, with a focus on those arising from the development of human-level artificial intelligence.

Mr Musk, the co-founder of SpaceX and Tesla, who sits on the FLI’s scientific advisory board alongside actor Morgan Freeman and cosmologist Stephen Hawking, has said that he believes uncontrolled artificial intelligence is “potentially more dangerous than nukes”.

Other signatories to the FLI’s letter include Luke Muehlhauser, executive director of the Machine Intelligence Research Institute; Frank Wilczek, professor of physics at the Massachusetts Institute of Technology and a Nobel laureate; and the entrepreneurs behind artificial intelligence companies DeepMind and Vicarious, as well as several employees at Google, IBM and Microsoft.

Rather than fear-mongering, the letter is careful to highlight both the positive and negative effects of artificial intelligence.

“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase,” the letter reads. “The potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.”

Benefits from artificial intelligence research that are already coming into use include speech and image recognition, and self-driving vehicles. Some in Silicon Valley have estimated that more than 150 start-ups are working on artificial intelligence today.

As the field draws in more investment, and entrepreneurs and companies such as Google eye huge rewards from creating computers that can think for themselves, the FLI warns that greater focus on the social ramifications would be “timely”, drawing not only on computer science but on economics, law and IT security.

