What if robots were in charge of the world? | BBC Ideas
Here's a thought experiment. Could artificial intelligence govern us? Populism and disinformation are on the rise, and politics across the world seems to be dominated by emotions and strongman personalities. Leaders often seem more interested in short-term political gains than in the long-term needs of their electorate. But could machines do a better job?
Imagine a world where decisions are made based on impartial facts and data, where the decision-makers are unconcerned by scandals, immune to corruption, and have no vested interest in maintaining their popularity. A world where climate change is a more pressing issue than the results of the latest focus group, and where global leaders don't risk instigating World War Three by ranting on Twitter at 2 AM. Sounds too good to be true?
In fact, scientists believe there are no plausible circumstances in which machines would, or could, replace governments entirely. While a machine might be able to make incredibly complex calculations, it would have no objective concept of right and wrong, and no definitive way of deciding what's best. For example, it might be able to objectively analyse the financial cost of keeping someone alive through medical treatment, but it cannot quantify whether a human life is worth that cost.
And while you could argue our current politicians may not be subject to enough accountability, it would be impossible to hold a machine accountable for its mistakes. After all, what do you do when a machine misbehaves? Tell its motherboard? It's not quite the Terminator, but perhaps the biggest risk in the medium term is the use of lethal autonomous weapons. While there is currently human oversight, if drones were ever authorised to make life-or-death decisions, one mistake could trigger an automatic reaction and cause an accidental flash war. Which, frankly, sounds a tad more terrifying than Arnie stealing your clothes, boots and motorcycle.
As hard as it might be to believe, technology which surpasses human intelligence is decades if not centuries away. But even if it existed, scientists argue that it would be no more useful in government than the world's most intelligent human. Instead, it is far more likely that the use of artificial intelligence in government will continue on its current trajectory as an aid in decision-making, with humans having ultimate power.
AI is already being used to assist in deciding who gets grants or benefits, and in healthcare and policing. But think of it like VAR, the video assistant referee in football, with a human still acting as the referee. Of course, as machines are programmed by humans and their conclusions are used to support human decisions, they can be susceptible to human bias, and their findings can be used selectively. Machines learn from data, which is gathered from the world we live in, as opposed to the world we'd like to live in.
In places like the US, where African-Americans are often disproportionately and, in some cases, lethally targeted by the police, predictive policing could interpret existing data in ways that perpetuate those discriminatory patterns. Sadly, it would seem that machine learning is no more equipped than human beings to make big ethical calls. AI would not be an infallible replacement for flawed human beings. How we use AI to govern, whether or not it is manipulated, and how mistakes are made are all down to human beings themselves.
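To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python. The neighbourhood labels, counts and arrest figures are entirely invented for illustration, and this is not a description of any real predictive-policing system: it simply shows how a model that estimates risk only from historical arrest records will rank the historically over-policed area as the "riskiest", and how acting on that ranking produces more of exactly those records.

```python
# A minimal sketch (invented, illustrative data only) of how a model that
# learns purely from historical records reproduces the bias in those records.

from collections import Counter

# Invented "historical" data: (neighbourhood, was_arrested) pairs.
# Neighbourhood B was patrolled more heavily, so it has twice the recorded
# arrests even though the underlying behaviour is assumed identical.
history = (
    [("A", True)] * 50 + [("A", False)] * 950 +
    [("B", True)] * 100 + [("B", False)] * 900
)

# "Training": estimate an arrest rate per neighbourhood from the records.
arrests = Counter(n for n, arrested in history if arrested)
totals = Counter(n for n, _ in history)
predicted_risk = {n: arrests[n] / totals[n] for n in totals}

# "Deployment": send patrols where the predicted risk is highest.
print(predicted_risk)                               # {'A': 0.05, 'B': 0.1}
print(max(predicted_risk, key=predicted_risk.get))  # 'B' — the over-policed area

# More patrols in B generate more recorded arrests in B, which raises B's
# estimated risk next time round: the historical bias feeds back on itself.
```

The point is not the arithmetic but the loop: the data reflects where the police looked, not where crime actually happened, and the model has no way to tell the difference.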
In short, AI is much more human than we ever realised, which is perhaps the scariest notion of all.