Cambridge IELTS 18 Reading, Test 2, Passage 2. This passage explores the importance of keeping powerful artificial intelligence (AI) reliably aligned with human values, and asks whether AI will eventually need to police those values.
Original Text
The article describes the rapid development of AI and the potential arrival of artificial general intelligence (AGI), and notes the advantages machines may have in both thinking capacity and moral reasoning. However, ensuring that AI's goals align with human values poses real challenges. The article stresses the need to create 'human-friendly' AI whose goals are reliably aligned with human values, and considers the possibility of machines surpassing humans in the moral domain. Finally, it emphasises the need to weigh carefully the balance between safety and personal autonomy, and to think hard about what we actually want from AI.
Introduction

Powerful artificial intelligence (AI) needs to be reliably aligned with human values, but does this mean AI will eventually have to police those values?

Paragraph 1

This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? That's yesterday's news; what's next? True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) – machines that can achieve human-level performance on the full range of tasks that we ourselves can tackle.

Paragraph 2

If so, there's little reason to think it will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run at slow biochemical processing speeds on the power of a light bulb, and their size is restricted by the dimensions of the human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the incredibly powerful Webb Space Telescope.

Paragraph 3

Once machines are better than us at designing even smarter machines, progress toward these limits could accelerate. What would this mean for us? Could we ensure safe and worthwhile coexistence with such machines? On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas, for example, might have wished that everything he touched turned to gold, but didn't really intend this to apply to his breakfast.
Paragraph 4

So we need to create powerful AI machines that are 'human-friendly' – that have goals reliably aligned with our own values. One thing that makes this task difficult is that we are far from reliably human-friendly ourselves. We do many terrible things to each other and to many other creatures with whom we share the planet. If superintelligent machines don't do a lot better than us, we'll be in deep trouble. We'll have powerful new intelligence amplifying the dark sides of our own fallible natures.

Paragraph 5

For safety's sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they'll be smart enough for the job. If there are routes to the moral high ground, they'll be better than us at finding them, and steering us in the right direction.

Paragraph 6

However, there are two big problems with this utopian vision. One is how we get the machines started on the journey, the other is what it would mean to reach this destination. The 'getting started' problem is that we need to tell the machines what they're looking for with sufficient clarity that we can be confident they will find it – whatever 'it' actually turns out to be. This won't be easy, given that we are tribal creatures and conflicted about the ideals ourselves. We often ignore the suffering of strangers, and even contribute to it, at least indirectly. How, then, do we point machines in the direction of something better?

Paragraph 7

As for the 'destination' problem, we might, by putting ourselves in the hands of these moral guides and gatekeepers, be sacrificing our own autonomy – an important part of what makes us human. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own communities, for example.
Paragraph 8

Loss of freedom to behave badly isn't always a bad thing, of course: denying ourselves the freedom to put children to work in factories, or to smoke in restaurants, are signs of progress. But are we ready for ethical silicon police limiting our options? They might be so good at doing it that we won't notice them; but few of us are likely to welcome such a future.

Paragraph 9

These issues might seem far-fetched, but they are to some extent already here. AI already has some input into how resources are used in our National Health Service (NHS) here in the UK, for example. If it was given a greater role, it might do so much more efficiently than humans can manage, and act in the interests of taxpayers and those who use the health system. However, we'd be depriving some humans (e.g. senior doctors) of the control they presently enjoy. Since we'd want to ensure that people are treated equally and that policies are fair, the goals of AI would need to be specified correctly.

Paragraph 10

We have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if it is, it will require a cooperative spirit, and a willingness to set aside self-interest.

Paragraph 11

Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we'll need to give them moral authority, too. And where exactly would that leave human beings? All the more reason to think about the destination now, and to be careful about what we wish for.