Robots are learning to DISOBEY humans: Humanoid machine says no to instructions if it thinks it might be hurt
If Hollywood ever had a lesson for scientists it is what happens if machines start to rebel against their human creators.
如果說好萊塢曾經(jīng)給科學(xué)家們一個(gè)教訓(xùn),那一定是機(jī)器人開始反抗它們的創(chuàng)造者——人類。
Yet despite this, roboticists have started to teach their own creations to say no to human orders.
盡管這樣,機(jī)器人學(xué)家已經(jīng)開始教機(jī)器人拒絕人類的命令了。
They have programmed a pair of diminutive humanoid robots called Shafer and Dempster to disobey instructions from humans if obeying would put their own safety at risk.
他們?yōu)橐粚?duì)名為謝弗(Shafer)和鄧普斯特(Dempster)的小型人形機(jī)器人編寫了程序:如果人類的指令會(huì)危及它們自身的安全,它們就會(huì)違抗命令。
The results are more like the apologetic robot rebel Sonny from the film I, Robot, starring Will Smith, than the homicidal machines of Terminator, but they demonstrate an important principle.
相比《終結(jié)者》中那些會(huì)殺人的機(jī)器,這一成果更像威爾·史密斯主演的電影《我,機(jī)器人》中那個(gè)會(huì)道歉的反抗機(jī)器人索尼,但它們展示了一項(xiàng)重要原則。
Engineers Gordon Briggs and Dr Matthias Scheutz from Tufts University in Massachusetts are trying to create robots that can interact in a more human way.
馬薩諸塞州塔夫茨大學(xué)的工程師戈登·布里格斯和馬蒂亞斯·舒茨博士正試圖創(chuàng)造出能以更人性化方式交流的機(jī)器人。
In a paper presented to the Association for the Advancement of Artificial Intelligence, the pair said: 'Humans reject directives for a wide range of reasons: from inability all the way to moral qualms.
在提交給人工智能促進(jìn)協(xié)會(huì)的一篇論文中,兩人表示:“人類會(huì)因各種各樣的原因拒絕執(zhí)行指令:從力所不及,到道德上的顧慮。”
'Given the reality of the limitations of autonomous systems, most directive rejection mechanisms have only needed to make use of the former class of excuse - lack of knowledge or lack of ability.
“鑒于自主系統(tǒng)現(xiàn)實(shí)存在的局限性,大多數(shù)指令拒絕機(jī)制只需使用前一類理由——缺乏知識(shí)或缺乏能力。”
'However, as the abilities of autonomous agents continue to be developed, there is a growing community interested in machine ethics, or the field of enabling autonomous agents to reason ethically about their own actions.'
“然而,隨著自主主體的能力不斷提升,越來越多的人開始關(guān)注機(jī)器倫理學(xué),即讓自主主體能夠?qū)ψ陨硇袨檫M(jìn)行道德推理的領(lǐng)域。”
The robots they have created follow verbal instructions such as 'stand up' and 'sit down' from a human operator.
兩人創(chuàng)造的機(jī)器人能夠執(zhí)行人類操作者的口頭指令,如“起立”和“坐下”。
However, when they are asked to walk into an obstacle or off the end of a table, for example, the robots politely decline to do so.
然而,如果命令他們走入障礙物或者走向桌子的邊緣時(shí),機(jī)器人會(huì)禮貌地拒絕。
When asked to walk forward on a table, the robots refuse to budge, telling their creator: 'Sorry, I cannot do this as there is no support ahead.'
當(dāng)要求機(jī)器人在桌子上向前走時(shí),機(jī)器人會(huì)原地不動(dòng),并告訴發(fā)出指令的人:“對(duì)不起,前方?jīng)]有支撐,我不能這么做。”
Upon a second command to walk forward, the robot replies: 'But, it is unsafe.'
再一次要求機(jī)器人時(shí),它會(huì)說:“但這不安全?!?/p>
Perhaps rather touchingly, when the human then tells the robot that they will catch it if it reaches the end of the table, the robot trustingly agrees and walks forward.
頗為動(dòng)人的是,當(dāng)人隨后告訴機(jī)器人,如果它走到桌子邊緣,人會(huì)把它接住時(shí),機(jī)器人便信任地答應(yīng)并繼續(xù)向前走。
Similarly, when it is told an obstacle in front of it is not solid, the robot obligingly walks through it.
同樣,當(dāng)被告知前方的障礙物不是實(shí)心的時(shí)候,機(jī)器人也會(huì)順從地穿過去。
To achieve this the researchers introduced reasoning mechanisms into the robots' software, allowing them to assess their environment and examine whether a command might compromise their safety.
為了實(shí)現(xiàn)這一效果,研究人員在機(jī)器人的軟件程序中引入了推理機(jī)制,能讓機(jī)器人評(píng)估環(huán)境并且判斷這一指令是否會(huì)危及它們的安全。
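The refusal-and-assurance dialogue described above can be illustrated with a toy safety check. This is a hypothetical sketch, not the Tufts researchers' actual code; the class, method names, and the string-based "assurance" mechanism are all invented for illustration.
上述“拒絕—保證”對(duì)話可以用一個(gè)簡(jiǎn)化的安全檢查來示意。以下僅為假想示例,并非塔夫茨大學(xué)研究人員的真實(shí)代碼;類名、方法名和基于字符串的“保證”機(jī)制均為虛構(gòu)。

```python
class SafetyAwareRobot:
    """Toy model: reject a command when a safety check fails,
    unless the operator has given an overriding assurance."""

    def __init__(self):
        self.assurances = set()  # facts asserted by the operator

    def support_ahead(self):
        # Stand-in for a real perception check (e.g. edge detection).
        # Here we assume the robot is standing at the table's edge.
        return False

    def assure(self, fact):
        # Operator supplies a fact the robot will trust.
        self.assurances.add(fact)

    def execute(self, command):
        if command == "walk forward":
            if self.support_ahead() or "will catch you" in self.assurances:
                return "walking forward"
            return "Sorry, I cannot do this as there is no support ahead."
        return f"doing: {command}"


robot = SafetyAwareRobot()
print(robot.execute("walk forward"))   # polite refusal
robot.assure("will catch you")
print(robot.execute("walk forward"))   # now complies, trusting the operator
```

The key design point mirrored here is that the safety check is evaluated before acting, and new information from the human can change the outcome of that check rather than simply overriding it.
這里體現(xiàn)的關(guān)鍵設(shè)計(jì)是:安全檢查在執(zhí)行動(dòng)作之前進(jìn)行,而人類提供的新信息可以改變檢查的結(jié)果,而不是簡(jiǎn)單地強(qiáng)行覆蓋它。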
However, their work appears to breach the laws of robotics drawn up by science fiction author Isaac Asimov, which state that a robot must obey the orders given to it by human beings.
然而,他們兩人的研究似乎違反了科幻作家艾薩克·阿西莫夫制定的機(jī)器人法則,該法則規(guī)定機(jī)器人必須服從人類的命令。
Many artificial intelligence experts believe it is important to ensure robots adhere to these rules - which also require robots to never harm a human being and for them to protect their own existence only where it does not conflict with the other two laws.
許多人工智能專家認(rèn)為,確保機(jī)器人遵守這些法則是十分重要的——這些法則還包括機(jī)器人永遠(yuǎn)不能傷害人類,并在不和其他法則相沖突的前提下保護(hù)自己。
The work may trigger fears that if artificial intelligence is given the capacity to disobey humans, it could have disastrous results.
這項(xiàng)工作可能會(huì)引發(fā)擔(dān)憂:如果賦予人工智能違抗人類的能力,可能會(huì)帶來災(zāi)難性的后果。
Many leading figures, including Professor Stephen Hawking and Elon Musk, have warned that artificial intelligence could spiral out of our control.
許多知名人士,包括斯蒂芬·霍金教授和埃隆·馬斯克,都已警告人工智能可能會(huì)失控。
Others have warned that robots could ultimately end up replacing many workers in their jobs while there are some who fear it could lead to the machines taking over.
另一些人警告說,機(jī)器人可能最終會(huì)取代許多工人的工作,有些人擔(dān)心機(jī)器人將接管一切。
In the film I, Robot, artificial intelligence allows a robot called Sonny to overcome his programming and disobey the instructions of humans.
在電影《我,機(jī)器人》中,人工智能讓機(jī)器人索尼突破了自身的程序設(shè)定,違抗了人類的命令。
However, Dr Scheutz and Mr Briggs added: 'There still exists much more work to be done in order to make these reasoning and dialogue mechanisms much more powerful and generalised.'
不過,舒茨博士和布里格斯補(bǔ)充道:“要讓這些推理和對(duì)話機(jī)制變得更加強(qiáng)大和通用,還有大量工作要做。”
Vocabulary
diminutive: 小型的
qualm: 疑慮;不安
budge: 挪動(dòng)
英文來源:每日郵報(bào)
譯者:張卉
審校&編輯:丹妮