
Reading Comprehension Text 3

This year marks exactly two centuries since the publication of Frankenstein; or, The Modern Prometheus, by Mary Shelley. Even before the invention of the electric light bulb, the author produced a remarkable work of speculative fiction that would foreshadow many ethical questions to be raised by technologies yet to come.
Today the rapid growth of artificial intelligence (AI) raises fundamental questions: “What is intelligence, identity, or consciousness? What makes humans humans?”
What is being called artificial general intelligence, machines that would imitate the way humans think, continues to elude scientists. Yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, similar to those recently depicted on popular sci-fi TV series such as Westworld and Humans.
Just how people think is still far too complex to be understood, let alone reproduced, says David Eagleman, a Stanford University neuroscientist. “We are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there.”
But that doesn't mean crucial ethical issues involving AI aren't at hand. The coming use of autonomous vehicles, for example, poses thorny ethical questions. Human drivers sometimes must make split-second decisions. Their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment. AI "vision" today is not nearly as sophisticated as that of humans. And to anticipate every imaginable driving situation is a difficult programming problem.
Whenever decisions are based on masses of data, “you quickly get into a lot of ethical questions,” notes Tan Kiat How, chief executive of a Singapore-based agency that is helping the government develop a voluntary code for the ethical use of AI. Along with Singapore, other governments and mega-corporations are beginning to establish their own guidelines. Britain is setting up a data ethics center. India released its AI ethics strategy this spring.
On June 7 Google pledged not to "design or deploy AI" that would cause "overall harm," or to develop AI-directed weapons or use AI for surveillance that would violate international norms. It also pledged not to deploy AI whose use would violate international laws or human rights.
While the statement is vague, it represents one starting point. So does the idea that decisions made by AI systems should be "explainable, transparent, and fair."
To put it another way: How can we make sure that the thinking of intelligent machines reflects humanity's highest values? Only then will they be useful servants and not Frankenstein's out-of-control monster.
31. Mary Shelley's novel Frankenstein is mentioned because it
[A] fascinates AI scientists all over the world.
[B] has remained popular for as long as 200 years.
[C] involves some concerns raised by AI today.
[D] has sparked serious ethical controversies.
32. In David Eagleman's opinion, our current knowledge of consciousness
[A] helps explain artificial intelligence.
[B] can be misleading to robot making.
[C] inspires popular sci-fi TV series.
[D] is too limited for us to reproduce it.
33. The solution to the ethical issues brought by autonomous vehicles
[A] can hardly ever be found.
[B] is still beyond our capacity.
[C] causes little public concern.
[D] has aroused much curiosity.
34. The author's attitude toward Google's pledge is one of
[A] affirmation.
[B] skepticism.
[C] contempt.
[D] respect.
35. Which of the following would be the best title for the text?
[A] AI's Future: In the Hands of Tech Giants
[B] Frankenstein, the Novel Predicting the Age of AI
[C] The Conscience of AI: Complex But Inevitable
[D] AI Shall Be Killers Once Out of Control

Answers and Explanations

31. [C] involves some concerns raised by AI today.
Explanation: The first paragraph states that Frankenstein, a work of speculative fiction, "would foreshadow many ethical questions to be raised by technologies yet to come," which echoes the ethical concerns raised by AI today.

32. [D] is too limited for us to reproduce it.
Explanation: In the fourth paragraph, David Eagleman notes that how people think is far too complex to be understood, "let alone reproduced," and that there are no good theories explaining what consciousness actually is. This shows that our knowledge of consciousness is too limited for us to reproduce it.

33. [B] is still beyond our capacity.
Explanation: The fifth paragraph points out that driving requires complex split-second decisions, that AI "vision" today is not nearly as sophisticated as that of humans, and that "to anticipate every imaginable driving situation is a difficult programming problem." A solution therefore remains beyond our current capacity.

34. [A] affirmation.
Explanation: The second-to-last paragraph states explicitly that although Google's statement is vague, "it represents one starting point." Treating the pledge as a starting point shows that the author views it with affirmation.

35. [C] The Conscience of AI: Complex But Inevitable.
Explanation: The text opens with the novel, then discusses AI's consciousness, ethics, and safety and how governments and corporations are beginning to establish guidelines, concluding with the question of how to ensure that intelligent machines reflect humanity's highest values. "Conscience" runs through the whole text, and "complex but inevitable" captures its main idea.

Key Long and Difficult Sentences (Highlighted Sentences)

1. Relative clause and passive infinitive:
"Even before the invention of the electric light bulb, the author produced a remarkable work of speculative fiction that would foreshadow many ethical questions to be raised by technologies yet to come."
[Analysis] "that would foreshadow" is a relative clause modifying "fiction." "to be raised" is a passive infinitive serving as a post-modifier of "questions," meaning "that will be raised (in the future)."
[Paraphrase] Even before the light bulb was invented, the author wrote an outstanding work of speculative fiction that anticipated many of the ethical questions future technologies would raise.
2. Subject clause and appositive:
"What is being called artificial general intelligence, machines that would imitate the way humans think, continues to elude scientists."
[Analysis] "What is being called..." is a subject clause. "machines that..." is an appositive explaining that subject. "elude" means "to escape the understanding or grasp of."
[Paraphrase] So-called artificial general intelligence, that is, machines that would imitate the way humans think, still escapes scientists' grasp.
3. Inversion and conditional logic:
"Only then will they be useful servants and not Frankenstein's out-of-control monster."
[Analysis] "Only then" placed at the head of the sentence triggers partial inversion ("will they be"). This sentence serves as the cautionary conclusion of the whole text.
[Paraphrase] Only then will intelligent machines be useful servants rather than Frankenstein's out-of-control monster.

Practice makes perfect.