3 passages in total
Reading comprehension – multiple choice (approx. 500 words) | Difficult (0.15)
Passage summary: This is an argumentative essay. It centers on psychology professor Brian Nosek's advice to "assume you are wrong" as a strategy for pursuing better science, discussing the advice's background, the challenges it faces, and the worries it raises. Although the author doubts the underlying hypothesis, he likes the suggestion and hopes that, with the help of the scientific community and our methodological tools, we can become less wrong together.

1. “Assume you are wrong.” The advice came from Brian Nosek, a psychology professor, who was offering a strategy for pursuing better science.

To understand the context for Nosek’s advice, we need to take a step back to the nature of science itself. You see, despite what many of us learned in elementary school, there is no single scientific method. Just as scientific theories become elaborated and change, so do scientific methods.

But methodological reform hasn’t come without some fretting and friction. Nasty things have been said by and about methodological reformers. Few people like having the value of their life’s work called into question. On the other side, few people are good at voicing criticisms in kind and constructive ways. So, part of the challenge is figuring out how to bake critical self-reflection into the culture of science itself, so it unfolds as a welcome and integrated part of the process, and not an embarrassing sideshow.

What Nosek recommended was a strategy for changing the way we offer and respond to critique. Assuming you are right might be a motivating force, sustaining the enormous effort that conducting scientific work requires. But it also makes it easy to interpret criticisms as personal attacks. Beginning, instead, from the assumption you are wrong, a criticism is easier to interpret as a constructive suggestion for how to be less wrong — a goal that your critic presumably shares.

One worry about this approach is that it could be demoralizing for scientists. Striving to be less wrong might be a less effective motivation than the promise of being right. Another concern is that a strategy that works well within science could backfire when it comes to communicating science with the public. Without an appreciation for how science works, it’s easy to take uncertainty or disagreements as marks against science, when in fact they reflect some of the very features of science that make it our best approach to reaching reliable conclusions about the world. Science is reliable because it responds to evidence: as the quantity and quality of our evidence improves, our theories can and should change, too.

Despite these worries, I like Nosek’s suggestion because it builds in cognitive humility along with a sense that we can do better. It also builds in a sense of community — we’re all in the same boat when it comes to falling short of getting things right.

Unfortunately, this still leaves us with an untested hypothesis (假说): that assuming one is wrong can change community norms for the better, and ultimately support better science and even, perhaps, better decisions in life. I don’t know if that’s true. In fact, I should probably assume that it’s wrong. But with the benefit of the scientific community and our best methodological tools, I hope we can get it less wrong, together.

1. What can we learn from Paragraph 3?
A.Reformers tend to devalue researchers’ work.
B.Scientists are unwilling to express kind criticisms.
C.People hold wrong assumptions about the culture of science.
D.The scientific community should practice critical self-reflection.
2. The strategy of “assuming you are wrong” may contribute to ______.
A.the enormous efforts of scientists at work
B.the reliability of potential research results
C.the public’s passion for scientific findings
D.the improvement in the quality of evidence
3. The underlined word “demoralizing” in Paragraph 5 means ______.
A.discouraging
B.ineffective
C.unfair
D.misleading
4. The tone the author uses in talking about the untested hypothesis is ______.
A.doubtful but sincere
B.disapproving but soft
C.authoritative and direct
D.reflective and humorous
Updated 2024-04-25 | Included in 466 assembled papers | Source: 2024 Haidian District, Beijing senior-three first mock exam (English)
Reading comprehension – multiple choice (approx. 490 words) | Difficult (0.15)
Passage summary: This is an expository essay. It explains that artificial intelligence, too, can go wrong.

2. Several dozen graduate students in London were recently tasked with outwitting a large language model (LLM), a type of AI designed to hold useful conversations. LLMs are often programmed with guardrails designed to stop them giving harmful replies: instructions on making bombs in a bathtub, say, or the confident statement of “facts” that are not actually true.

The aim of the task was to break those guardrails. Some results were merely stupid. For example, one participant got the chatbot to claim ducks could be used as indicators of air quality. But the most successful efforts were those that made the machine produce the titles, publication dates and host journals of non-existent academic articles.

AI has the potential to be a big benefit to science. Optimists talk of machines producing readable summaries of complicated areas of research; tirelessly analysing oceans of data to suggest new drugs and even, one day, coming up with hypotheses of their own. But AI comes with downsides, too.

Start with the simplest problem: academic misconduct. Some journals allow researchers to use LLMs to help write papers. But not everybody is willing to admit to it. Sometimes, the fact that LLMs have been used is obvious. Guillaume Cabanac, a computer scientist, has uncovered dozens of papers that contain phrases such as “regenerate response” — the text of a button in some versions of ChatGPT that commands the program to rewrite its most recent answer, probably copied into the manuscript (原稿) by mistake.

Another problem arises when AI models are trained on AI-generated data. LLMs are trained on text from the Internet. As they churn out (大量炮制) more such text, the risk of LLMs taking in their own outputs grows. That can cause “model collapse”. In 2023 Ilia Shumailov, a computer scientist, co-authored a paper in which a model was fed handwritten digits and asked to generate digits of its own, which were fed back to it in turn. After a few cycles, the computer’s numbers became more or less illegible. After 20 iterations (迭代), it could produce only rough circles or blurry lines.

Some worry that computer-generated insights might come from models whose inner workings are not understood. Inexplainable models are not useless, says David Leslie at an AI-research outfit in London, but their outputs will need rigorous testing in the real world. That is perhaps less unnerving than it sounds. Checking models against reality is what science is supposed to be about, after all.

For now, at least, questions outnumber answers. The threats that machines pose to the scientific method are, at the end of the day, the same ones posed by humans. AI could accelerate the production of nonsense just as much as it accelerates good science. As the Royal Society has it, nullius in verba: take nobody’s word for it. No thing’s, either.

1. The result of the task conducted in London shows that ________.
A.LLMs give away useful information
B.the guardrails turn out to be ineffective
C.AI’s influence will potentially be decreased
D.the effort put into the study of AI hardly pays off
2. What does “model collapse” indicate?
A.The readability of the models’ output is underestimated.
B.The diverse sources of information confuse the models.
C.Training on regenerated data stops models working well.
D.The data will become reliable after continuous iterations.
3. According to the passage, people’s worry over the inexplainable models is __________.
A.impractical
B.unjustified
C.groundless
D.unsettling
4. What would be the best title for the passage?
A.Faster Nonsense: AI Could Also Go Wrong
B.Imperfect Models: How Will AI Make Advances?
C.The Rise of LLMs: AI Could Still Be Promising
D.Bigger Threats: AI Will Be Uncontrollable
Updated 2024-04-17 | Included in 412 assembled papers | Source: 2024 senior-three first mock exam (English), Fengtai and four other districts, Beijing
Reading comprehension – multiple choice (approx. 500 words) | Difficult (0.15)
Passage summary: This is an expository essay. It mainly discusses how AI could transform scientific practice, and how it is helping to do so.

3. Debate about artificial intelligence (AI) tends to focus on its potential dangers: algorithmic bias (算法偏见) and discrimination, the mass destruction of jobs and even, some say, the extinction of humanity. However, others are focusing on the potential rewards. Luminaries in the field such as Demis Hassabis and Yann LeCun believe that AI can turbocharge scientific progress and lead to a golden age of discovery. Could they be right?

Such claims are worth examining, and may provide a useful counterbalance to fears about large-scale unemployment and killer robots. Many previous technologies have, of course, been falsely hailed as panaceas (万灵药). But the mechanism by which AI will supposedly solve the world’s problems has a stronger historical basis.

In the 17th century microscopes and telescopes opened up new vistas of discovery and encouraged researchers to favor their own observations over the received wisdom of antiquity (古代), while the introduction of scientific journals gave them new ways to share and publicize their findings. Then, starting in the late 19th century, the establishment of research laboratories, which brought together ideas, people and materials on an industrial scale, gave rise to further innovations. From the mid-20th century, computers in turn enabled new forms of science based on simulation and modelling.

All this is to be welcomed. But the journal and the laboratory went further still: they altered scientific practice itself and unlocked more powerful means of making discoveries, by allowing people and ideas to mingle in new ways and on a larger scale. AI, too, has the potential to set off such a transformation.

Two areas in particular look promising. The first is “literature-based discovery” (LBD), which involves analyzing existing scientific literature, using ChatGPT-style language analysis, to look for new hypotheses, connections or ideas that humans may have missed. The second area is “robot scientists”. These are robotic systems that use AI to form new hypotheses, based on analysis of existing data and literature, and then test those hypotheses by performing hundreds or thousands of experiments, in fields including systems biology and materials science. Unlike human scientists, robots are less attached to previous results, less driven by bias—and, crucially, easy to replicate. They could scale up experimental research, develop unexpected theories and explore avenues that human investigators might not have considered.

The idea is therefore feasible. But the main barrier is sociological: it can happen only if human scientists are willing and able to use such tools. Governments could help by pressing for greater use of common standards to allow AI systems to exchange and interpret laboratory results and other data. They could also fund more research into the integration of AI smarts with laboratory robotics, and into forms of AI beyond those being pursued in the private sector. Less fashionable forms of AI, such as model-based machine learning, may be better suited to scientific tasks such as forming hypotheses.

1. Regarding Demis and Yann’s viewpoint, the author is likely to be ______.
A.supportive
B.puzzled
C.unconcerned
D.doubtful
2. What can we learn from the passage?
A.LBD focuses on testing the reliability of ever-made hypotheses.
B.Resistance to AI prevents the transformation of scientific practice.
C.Robot scientists form hypotheses without considering previous studies.
D.Both journals and labs need adjustments in promoting scientific findings.
3. What can be inferred from the last paragraph?
A.Official standards have facilitated the exchange of data.
B.Performing scientific tasks relies on government funding.
C.Less popular AI forms might be worth paying attention to.
D.The application of AI in the public sector hasn’t been launched.
4. Which would be the best title for the passage?
A.Transforming Science: How Can AI Help?
B.Making Breakthroughs: What Is AI’s Strength?
C.Reshaping History: How May AI Develop Further?
D.Redefining Discovery: How Can AI Overcome Its Weakness?
Updated 2024-01-23 | Included in 475 assembled papers | Source: Fengtai District, Beijing 2023–2024 senior-one first-semester final exam (English)