Cambridge IELTS 16 Reading Test 4 Passage 3: Original Text and Translation


11/08/2023

Cambridge IELTS 16 Reading Test 4 Passage 3 discusses why people distrust artificial intelligence (AI) predictions and are reluctant to accept them.

Although AI is often more accurate than humans at making predictions, people remain sceptical of it. One reason is that AI's decision-making process is too complex for most people to understand or explain. In addition, people usually build trust on an understanding of how others think and on experience of their reliability, and they lack this experience with AI. Media coverage of AI failures further reinforces the perception that it is unreliable. However, research shows that experience with AI, together with greater transparency and control, can increase people's trust in and acceptance of the technology. To get people to genuinely trust and accept AI, therefore, we need to provide more information and opportunities for involvement, so that people better understand how AI works.

Section A

Artificial intelligence (AI) can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

 

Many decisions in our lives require a good forecast, and AI is almost always better at forecasting than we are. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

 

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.

 

Section B

Take the case of Watson for Oncology, one of technology giant IBM’s supercomputer programs. IBM’s attempt to promote this program to cancer doctors was a PR disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. But when doctors first interacted with Watson, they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much point in Watson’s recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment.

 

On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible, because its machine-learning algorithms were simply too complex to be fully understood by humans. Consequently, this caused even more suspicion and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

 

Section C

This is just one example of people’s lack of confidence in AI and their reluctance to accept what AI has to offer. Trust in other people is often based on our understanding of how others think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. Even if it can be technically explained (and that’s not always the case), AI’s decision-making process is usually too difficult for most people to comprehend. And interacting with something we don’t understand can cause anxiety and give us a sense that we’re losing control.

 

Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes wrong. Embarrassing AI failures receive a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren’t.

 

Section D

Feelings about AI run deep. In a recent experiment, people from a range of backgrounds were given various sci-fi films about AI to watch and then asked questions about automation in everyday life. It was found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants’ attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more guarded.

 

This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as “confirmation bias”. As AI is represented more and more in media and entertainment, it could lead to a society split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.

 

Section E

Fortunately, we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people’s opinions about the technology, as was found in the study mentioned above. Evidence also suggests the more you use other technologies such as the internet, the more you trust them.

 

Another solution may be to reveal more about the algorithms which AI uses and the purposes they serve. Several high-profile social media companies and online marketplaces already release transparency reports about government requests and surveillance disclosures. A similar practice for AI could help people have a better understanding of the way algorithmic decisions are made.

 

Section F

Research suggests that allowing people some control over AI decision-making could also improve trust and enable AI to learn from human experience. For example, one study showed that when people were allowed the freedom to slightly modify an algorithm, they felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future.

We don’t need to understand the intricate inner workings of AI systems, but if people are given a degree of responsibility for how they are implemented, they will be more willing to accept AI into their lives.

