In one horrifying film plot, AI eventually outsmarts humans and takes over computers and factories. In another, large language models (LLMs) of the sort that power generative AI like ChatGPT give bad guys the know-how to create destructive cyberweapons.
It is time to think hard about these film plots, not because they have become more probable but because policymakers around the world are considering measures to guard against them. The idea that AI could drive humans to extinction is speculative — no one yet knows how such a threat might materialise and no common methods exist for determining what counts as risky. Plenty of research needs to be done before standards and rules can be set.
Governments cannot ignore a technology that could change the world deeply. Regulators have been too slow in the past, but there is danger, too, in acting hurriedly. If they go too fast, policymakers could create global rules that are aimed at the wrong problems and are ineffective against the real ones.
Because of the computing resources and technical skills required, only a handful of companies have so far developed powerful “frontier” models. Hurried new regulations could easily lock out competitors to these firms, especially because the firms themselves are working closely with governments on writing the rule book. A focus on extreme risks is also likely to make regulators wary of open-source models, which are freely available and can easily be modified.
The best that governments can do now is to set up the basic systems to study the technology and its potential risks, and to ensure that those working on the problem have enough resources. As AI develops further, regulators will have a far better idea of what risks they are guarding against, and consequently of what the rule book should look like. A fully fledged regulatory body could eventually take shape. But creating it will take time and reflection.
1. What is the function of the first paragraph?
A. An argument.
B. An explanation.
C. A lead-in.
D. A comment.
2. What does the author think of AI driving humans to extinction?
A. He believes it is a realistic possibility.
B. He considers it fictional and unworthy of policymakers' attention.
C. He views it as an uncertain threat that needs more research.
D. He perceives it as a seemingly reasonable scenario that requires serious consideration.
3. What is the harm of regulators' acting too fast on the AI issue?
A. Competition in this area is prevented.
B. The development of AI is restricted.
C. AI will be applied to a limited degree.
D. The public will be misled about the danger.
4. Which can be the best title of the text?
A. AI: a Real Threat?
B. Don't Rush into Policing AI
C. AI: Humans' Friend or Enemy?
D. Time for Governments to Regulate AI
C.AI: Humans’ Friend or Enemy? | D.Time for Government to Regulate AI |