Cambridge IELTS 16 Test 4 Passage 3: Original Reading Passage

2023-06-12 10:14:09 Source: 中國教育在線 (China Education Online)

In this article, 中國教育在線 presents the original reading passage of Cambridge IELTS 16 Test 4 Passage 3.

Part A

Artificial intelligence (AI) can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Many decisions in our lives require a good forecast, and AI is almost always better at forecasting than we are. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don't like relying on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.

Part B

Take the case of Watson for Oncology, one of technology giant IBM's supercomputer programs. Their attempt to promote this program to cancer doctors was a PR disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world's cases. But when doctors first interacted with Watson, they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much point in Watson's recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment.

On the other hand, if Watson generated a recommendation that contradicted the experts' opinion, doctors would typically conclude that Watson wasn't competent. And the machine wouldn't be able to explain why its treatment was plausible, because its machine-learning algorithms were simply too complex to be fully understood by humans. Consequently, this caused even more suspicion and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

Part C

This is just one example of people's lack of confidence in AI and their reluctance to accept what AI has to offer. Trust in other people is often based on our understanding of how others think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. Even if it can be technically explained (and that's not always the case), AI's decision-making process is usually too difficult for most people to comprehend. And interacting with something we don't understand can cause anxiety and give us a sense that we're losing control.

Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes wrong. Embarrassing AI failures receive a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren't.

Part D

Feelings about AI run deep. In a recent experiment, people from a range of backgrounds were given various sci-fi films about AI to watch and then asked questions about automation in everyday life. It was found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants' attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more guarded.

This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as "confirmation bias". As AI is represented more and more in media and entertainment, it could lead to a society split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.

Part E

Fortunately, we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people's opinions about the technology, as was found in the study mentioned above. Evidence also suggests the more you use other technologies such as the internet, the more you trust them.

Another solution may be to reveal more about the algorithms which AI uses and the purposes they serve. Several high-profile social media companies and online marketplaces already release transparency reports about government requests and surveillance disclosures. A similar practice for AI could help people have a better understanding of the way algorithmic decisions are made.

Part F

Research suggests that allowing people some control over AI decision-making could also improve trust and enable AI to learn from human experience. For example, one study showed that when people were allowed the freedom to slightly modify an algorithm, they felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future.
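The "slightly modify an algorithm" setup described above can be sketched roughly as follows. This is a minimal illustration only: the mean-based forecast, the sales data and the ±10% adjustment bound are all hypothetical assumptions, not details from the study the passage cites.

```python
def model_forecast(history):
    """A stand-in 'algorithm': forecast the mean of past observations."""
    return sum(history) / len(history)

def adjusted_forecast(history, user_tweak, max_tweak=0.10):
    """Let the user shift the AI forecast, but only within a small bound.

    Clamping the adjustment to +/-10% (an illustrative choice) preserves
    most of the algorithm's accuracy while still giving the user a sense
    of control over the final number.
    """
    base = model_forecast(history)
    tweak = max(-max_tweak, min(max_tweak, user_tweak))  # clamp to bound
    return base * (1 + tweak)

sales = [100, 110, 90, 104]           # hypothetical past sales figures
print(model_forecast(sales))          # the raw AI forecast
print(adjusted_forecast(sales, 0.25)) # user asked for +25%, clamped to +10%
```

The design choice mirrors the finding in the passage: the user's freedom is real but bounded, so satisfaction and perceived ownership rise without letting a heavy-handed tweak erase the algorithm's advantage.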

We don't need to understand the intricate inner workings of AI systems, but if people are given a degree of responsibility for how they are implemented, they will be more willing to accept AI into their lives.
