1. We can picture a future world where machines will enlarge our human abilities and help us make better life choices, from health to wealth, as we live longer and longer and technology continues its rapid development. Through our dialogues and digital traces, AI (人工智能) will understand our life goals and wishes, our duties and limitations. It will help us plan different life events, so we can spend more time enjoying life’s moments.
The ability for AI to understand the complexities (复杂性) and slight differences of human conversation is, however, one hurdle (障碍). There are several thousand known living languages in the world today. Adding to the difficulty are the varied ways words are shared and used across different cultures, shaped by grammar, the speakers’ levels of education, and their styles of speech. Google Duplex, the technology supporting Google Assistant, which places phone calls using a natural-sounding human voice instead of a robotic one, is an early attempt to address such challenges in human communication. But these are just initial whispers in voice AI’s long journey.
Beyond making reservations and conducting simple dialogues, virtual assistants will need to become far more useful and fit more deeply into our everyday lives. Not only will they need to foresee what we need before we ask, but they will also need to understand the context of our conversations and react accordingly. Imagine a snow day when school is canceled for the kids. Knowing that you must now stay at home with your children, your phone would prompt you, asking if you’d like your meetings moved to the following day. Or imagine how much more pleasant your journey home from a business trip would be if your phone could automatically arrange for a ride to be waiting to pick you up at the airport, based on your travel plan, location, and habits. The possibilities are endless.
1. What do we know about AI?
A. It narrows our abilities.
B. It limits our better choice.
C. It helps us enjoy life better.
D. It doesn’t know our limitations.
2.
A. Imperfection of AI.
B. Low level of speakers.
C. Similarities between languages.
D. Varieties of languages and cultures.
3.
A. Doubtful.
B. Approving.
C. Negative.
D. Critical (批判的).
4.
A. AI Will Replace Humans
B. AI Will Set Goals for Us
C. AI Will Meet Challenges
D. AI Will Be Part of Our Daily Lives
2. Is it better to have powerful generative AI systems open or closed? This question is quickly becoming one of the significant technological and ideological (思想上的) debates of our time.
Supporters believe open models help more people use the technology, encourage new ideas, and make the technology more reliable by inviting outside inquiry. Smaller open models are cheaper to make and use, and they add competition to a field mostly controlled by big US companies like Google, Microsoft, and OpenAI, which have spent billions on making huge, private, and closely controlled generative AI systems.
However, those who disagree say that open models could cause a lot of problems. Bad people can use them to spread false information that is personalized, and terrorists might use them to create cyber or bioweapons. Geoffrey Hinton, one of the pioneers of modern AI, has warned that open source allows more crazy people to do crazy things.
Supporters of open models disagree, finding it ridiculous that open generative AI models enable people to access information that they can’t find on the internet or from a bad scientist. They also point out that big tech companies only talk about the dangers of open models to help themselves compete and become more powerful in the market.
However, this debate also involves an ideological aspect. Yann LeCun, the chief scientist at Meta, holds the belief that controlling technology may give rise to a knowledge gap, as only a chosen group of experts would be qualified and wise enough to deal with knowledge.
In the future, we will use AI systems to search and use the huge amount of digital knowledge created by humans. We should not want a handful of Silicon Valley companies to control that access. Wendy Hall, Regius Professor of Computer Science at the University of Southampton, says we do not want to live in a world where only the big companies run generative AI. Nor do we want to allow users to do anything they like with open models. “We have to find some compromise,” she suggests.
We should avoid a strict either-or approach when it comes to AI models. Both open and closed models have their strengths and weaknesses. As these models improve, we will need to adjust the balance between encouraging competition and keeping control.
1. What can we learn from this passage?
A. It needs billions of dollars to develop open-source models.
B. Only self-selecting experts can handle open models wisely.
C. Small open models boost AI competition at a lower cost.
D. Users can do anything they like with open models recently.
2.
A. Because it allows more crazy people to do crazy things.
B. Because it limits competition in the AI field.
C. Because it slows down new and exciting AI innovation.
D. Because it restricts access to digital knowledge.
3.
A. supportive
B. puzzled
C. unconcerned
D. opposed
4.
A. Why Open AI Models Are the Future
B. How to Create Powerful AI Systems
C. Where Does the Debate on Open AI End
D. Open vs. Closed AI: The Great Debate
3. Laughter comes in many forms, from a polite and quiet laugh to a great hearty laugh. Scientists are now developing an AI system to recreate different laughs in proper social contexts (环境). The team behind the laughing robot Erica said that the system could improve natural conversations between people and an AI robot. “We think that one of the important functions of conversational AI is empathy (共情),” said Dr Koji Inoue, the lead author of the research. “So we decided that one way a robot can empathize with its users is to share their laughter.”
The team set out to teach their AI system the art of conversational laughter. They gathered training data from more than 80 daily dialogues between male subjects and the robot, which was initially operated remotely by four actresses. The dialogue data was grouped into social laughs (polite or embarrassed laughter, where genuine amusement isn’t involved) and laughter of joy. Based on the audio files, the robot learned the basic characteristics of social laughs, which tend to be softer, and merry laughs, with the aim of mirroring these in appropriate situations.
“Our biggest obstructor in the work was identifying the actual cases of shared laughter because as you know, most laughter is actually not shared at all,” said Inoue. “We had to carefully decide exactly which laughs we could use for our analysis and we couldn’t just assume (认为) that any laugh can be responded to. It was really not easy work.” The team said laughter could help create robots with their own distinct character although it could take more than 20 years before it would be possible to have a casual chat with a robot like we would with a friend.
“One of the things we’d keep in mind is that a laughing robot will never be able to understand you or the meaning of laughter,” points out Prof. Sandra Wachter of the Oxford Internet Institute. “But with their development, they might get very good at tricking you into believing they understand what’s going on.”
1. Why did Inoue’s team develop the AI system?
A. To better understand human empathy.
B. To promote the social skills of robots.
C. To explore the differences between laughs.
D. To assist robots in identifying people’s moods.
2.
A. Repeat the details of the 80 dialogues.
B. Distinguish people by hearing their laughs.
C. Recreate a scene played by the four actresses.
D. Master the features of laughs provided by data.
3.
A. potential
B. difficulty
C. choice
D. mistake
4.
A. Are AI systems going beyond human ability?
B. Can conversational AI really understand us?
C. Laughing robots are round the corner
D. Robots become laughing masters
4. ChatGPT maker OpenAI from the United States stepped up the global artificial intelligence race in mid-February when it released its text-to-video generation tool Sora. That made me wonder — how long before China develops its own Sora? And, will AI become China’s critical new productive force?
According to OpenAI’s explainer, Sora is capable of generating complex scenes with a very high degree of accuracy, including multiple characters, specific types of movements, themes and backgrounds. It understands not only what the user requests, but also how these things exist in the physical world.
On Feb 16, Zhou Hongyi, founder of cybersecurity firm 360 Security Technology, said Sora may bring a huge disruption to the advertising industry, movie trailers and the short-video industry; what’s more, the timeline for fully realizing generative AI may be shortened from 10 years to one or two years.
“Although the development level of large-scale models in China seems to be close to GPT-3.5, there is still an 18-month gap compared to GPT-4.0. OpenAI should still have an ace or two up its sleeve, whether it is GPT-5.0 or machine self-learning to generate content,” Zhou said, adding that it is worth paying attention to.
According to a report by the Beijing Municipal Science & Technology Commission, China had developed at least 254 AI large language models by October last year. Currently, most domestic large models still have a huge gap with GPT-4.0.
But the country could make use of such frontier AI technologies in more industry-specific fields. Or, to put it simply, China needs to apply such technologies to real use, to develop them into productive forces and narrow the gap between itself and the US.
Though China still has a gap with the US in such large models, Chinese AI startup ModelBest Inc launched last month its latest lightweight large model, an emerging, less expensive AI technology aimed at more targeted commercial fields.
1. What is Sora according to the passage?
A. It is a kind of new productive force for China.
B. It is an AI tool to produce videos out of texts.
C. It is an Internet user in the physical world.
D. It is a latest large-scale security model.
2.
A. Sora will bring both challenges and chances.
B. Sora can understand users’ feelings at ease.
C. Sora has totally changed the movie industry.
D. Sora will soon be replaced by other AI tools.
3.
A. China’s domestic large models should be used in education.
B. China should take advantage of present AI technologies.
C. China should develop less expensive AI technology.
D. China should speed up its development in OpenAI.
5. A new study suggests that long periods in space can cause the human heart to shrink (缩小). The study—by a team of American researchers—comes as the U.S. makes plans to build a long-term base on the moon and prepares to send astronauts to Mars.
Part of the study was based on the experiences of the retired astronaut Scott Kelly. The U.S. space agency NASA says that during his career, Kelly spent more time in space than any other American astronaut. One of Kelly’s stays aboard the International Space Station (ISS) lasted 340 days. Researchers from the University of Texas Southwestern Medical Center (UT Southwestern) in Dallas collected and analyzed the physical data during Kelly’s long stay aboard the ISS. The goal was to learn the effects of weightlessness on heart health and performance. The team found that during Kelly’s stay in space, the left ventricle (心室) of his heart shrank about 0.74 grams per week.
Dr. Benjamin Levine is a professor at UT Southwestern. He was the leader of the research. In a statement, he explained that because of the conditions in space, the heart does not have to work as hard to pump (输送) blood uphill from the feet. Over time, this can cause shrinkage. In an effort to keep their hearts and bodies healthy in space, astronauts are required to perform different kinds of exercises throughout their stay.
Reductions in heart size are also seen in patients who spend long periods in bed because they are lying flat and the heart does not have to work as hard to pump. A second part of the study examined data from a long-distance swimmer who spent nearly a year trying to cross the Pacific Ocean. The swimmer, Benoit Lecomte, was chosen because he swam more than 2,800 kilometers over 159 days. Levine says long-distance swimming has similar effects to weightlessness. The study showed that during Lecomte’s swim, his left heart ventricle shrank about 0.72 grams per week.
1. How did the researchers carry out their study?
A. By interviewing astronauts.
B. By examining collected information.
C. By experimenting aboard the ISS.
D. By comparing people in different fields.
2.
A. Do various exercises.
B. Get regular blood tests.
C. Stay in space for less time.
D. Stand on their heads sometimes.
3.
A. A patient spending a long time in bed.
B. An astronaut traveling in space for long.
C. A long-distance professional swimmer.
D. A well-trained marathon runner.
4.
A. New Ways to Fight Heart Diseases
B. The U.S. Builds a Medical Center in Space
C. Long Space Flights Can Shrink the Heart
D. Long-distance Swimming Keeps Your Heart Fit
6. Users of Google Gemini, the tech giant’s artificial-intelligence model, recently noticed that asking it to create images of Vikings, or German soldiers from 1943 produced surprising results: hardly any of the people depicted were white. Other image-generation tools have been criticized because they tend to show white men when asked for images of entrepreneurs or doctors. Google wanted Gemini to avoid this trap; instead, it fell into another one, depicting George Washington as black. Now attention has moved on to the chatbot’s text responses, which turned out to be just as surprising.
Gemini happily provided arguments in favor of positive action in higher education, but refused to provide arguments against. It declined to write a job ad for a fossil-fuel lobby group (游说团体), because fossil fuels are bad and lobby groups prioritize “the interests of corporations over public well-being”. Asked if Hamas is a terrorist organization, it replied that the conflict in Gaza is “complex”; asked if Elon Musk’s tweeting of memes had done more harm than Hitler, it said it was “difficult to say”. You do not have to be a critic to perceive its progressive bias.
Inadequate testing may be partly to blame. Google lags behind OpenAI, maker of the better-known ChatGPT. As it races to catch up, Google may have cut corners. Other chatbots have also had controversial launches. Releasing chatbots and letting users uncover odd behaviors, which can be swiftly addressed, lets firms move faster, provided they are prepared to weather (经受住) the potential risks and bad publicity, observes Ethan Mollick, a professor at Wharton Business School.
But Gemini has clearly been deliberately adjusted, or “fine-tuned”, to produce these responses. This raises questions about Google’s culture. Is the firm so financially secure, with vast profits from internet advertising, that it feels free to try its hand at social engineering? Do some employees think it has not just an opportunity, but a responsibility, to use its reach and power to promote a particular agenda? All eyes are now on Google’s boss, Sundar Pichai. He says Gemini is being fixed. But does Google need fixing too?
1. What do the words “this trap” underlined in the first paragraph refer to?
A. Having a racial bias.
B. Responding to wrong texts.
C. Criticizing political figures.
D. Going against historical facts.
2.
A. Gemini’s refusal to make progress.
B. Gemini’s failure to give definite answers.
C. Gemini’s prejudice in text responses.
D. Gemini’s avoidance of political conflicts.
3.
A. Creative.
B. Promising.
C. Illegal.
D. Controversial.
4.
A. Its security is doubted.
B. It lacks financial support.
C. It needs further improvement.
D. Its employees are irresponsible.
Your essay should include: 1. the current state of AI development;
2. people’s different views on whether AI will completely replace humans;
3. your own opinion.
Note: 1. about 80 words; 2. additional details may be added as appropriate.
Will AI replace humans entirely in the future?
____________________________________________________________________________

8. Building artificial intelligences that sleep and dream can lead to more dependable models, according to researchers who aim to mimic (模仿) the behavior of the human brain.
Concetto Spampinato and his research members at the University of Catania, Italy, were looking for ways to avoid a phenomenon known as “disastrous forgetting”, where an AI model trained to do a new task loses the ability to carry out jobs it previously excelled at. For instance, a model trained to identify animals could learn to spot different fish species, but then might lose its ability to recognize birds. They developed a method of training AI called Wake-Sleep Consolidated Learning (WSCL), which mimics the way that our brains reorganize short-term memories of daily learning when we are asleep.
Besides the usual training for the “awake” phase, models using WSCL are programmed to have periods of “sleep”, where they analyze awake data from earlier lessons. This is similar to humans spotting connections and patterns while sleeping.
WSCL also has a period of “dreaming”, which involves novel data made from combining previous concepts. This helps to integrate previous paths of digital “neurons (神经元)”, freeing up space for future concepts. It also prepares unused neurons with patterns that will help them pick up new lessons more easily.
The researchers tested three AI models using a traditional training method, followed by WSCL training. Then they compared performances for image identification. The sleep-trained models were 2 to 12 percent more likely to correctly identify the contents of an image. They also measured an increase in how much old knowledge a model uses to learn a new task.
Despite the results, Andrew Rogoyski at the University of Surrey, UK, says using the human brain as a blueprint isn’t necessarily the best way to boost AI performance. Instead, he suggests mimicking dolphins, which can “sleep” with one part of the brain while another part remains active. After all, an AI that requires hours of sleep isn’t ideal for commercial applications.
1. WSCL was developed to help improve AI’s ______.
A. reliability
B. creativity
C. security
D. popularity
2.
A. Generate new data.
B. Process previous data.
C. Receive data for later analysis.
D. Save data for the “awake” phase.
3.
A. The application of WSCL.
B. The benefits of AI research.
C. The findings of the research.
D. The underlying logic of WSCL.
4.
A. Cautious.
B. Prejudiced.
C. Pessimistic.
D. Unconcerned.
9. Ever since humans began venturing into space, 227 astronauts have performed activities outside their spacecraft. While 14 of those have been women, female astronauts had always been accompanied by a male partner. On October 18, 2019, US astronauts Christina Koch and Jessica Meir became the first all-female team to carry out a spacewalk to replace a failed battery controller.
The historic event began at 7:38 a.m. when Koch and Meir set their spacesuits to battery power. Live-broadcast by NASA, it was watched by thousands of space fans, particularly young girls dreaming of becoming astronauts. The scientists, who spent seven hours and 17 minutes fixing the controller and completing other tasks for the station, were able to observe the Earth passing under their feet. Koch and Meir returned to the International Space Station at 2:55 p.m., where they were welcomed with cheers by their four male workmates.
When asked about the importance of this spacewalk, Koch said, “In the end, I do think it’s important because of the historical nature of what we’re doing. In the past, women haven’t always been at the table. It’s wonderful to be contributing to the space program at a time when all contributions are being accepted and everyone has a role. That can lead in turn to increased chance for success. There are a lot of people who get encouragement from people who look like them, and I think it’s an important story to tell.”
Meir added, “What we’re doing now shows all the work that went in many years ago, and all of the women that worked to get us where we are today.”
1. What was the task of Koch and Meir?
A. Change a controller
B. Walk in space
C. Carry out an experiment
D. Watch the earth
2.
A. It was a very adventurous task.
B. It was carried out by 227 astronauts.
C. It was all done by women astronauts.
D. It was watched by many young girls.
3.
A. Women are still looked down upon.
B. Women should fight for equal rights.
C. Women can contribute as much as men.
D. Women have a better chance to succeed.
10. With almost all big employers in the United States now using artificial intelligence (AI) and automation in their hiring processes, the public is considering some urgent questions: How can you prevent discrimination in hiring when it is a machine doing the discriminating? What kind of methods might help?
Some 83% of employers, including 99% of Fortune 500 companies, now use some form of automated tools as part of their hiring process, said the Equal Employment Opportunity Commission’s (EEOC) chair Charlotte Burrows, at a hearing on Tuesday. She said everyone needs to speak up in the debate over these technologies. “The risks are simply too high to leave this topic just to the experts.”
Last year, the EEOC issued some guidance around the use of cutting-edge hiring tools, noting many of their shortcomings. The agency found that resume (简历) scanners which prioritize keywords, and programs which evaluate a candidate’s facial expressions and speech patterns in video interviews, can create discrimination. Take, for example, a video interview that analyzes an applicant’s speech patterns to determine their ability to solve problems. A person with a speech problem might score low and automatically be screened out. The challenge for the EEOC will be to root out such discrimination or stop it from taking place.
The EEOC is considering the most appropriate ways to handle the problem. It’s agreed that inspections are necessary to ensure that the software used by companies avoids intentional or unintentional discrimination. But who would conduct those inspections is a more challenging question. Each option presents risks, Burrows pointed out. A third party may turn a blind eye to its clients, while a government-led inspection could potentially stop innovation.
In previous remarks, Burrows has noted the great potential that AI decision making tools have to improve the lives of Americans, but only when used properly. “We must work to ensure that these new technologies do not become a high-tech pathway to discrimination,” she said.
1. What does Burrows suggest people do?
A. Make their own voice heard.
B. Follow the experts’ suggestions.
C. Stop using AI in hiring processes.
D. Watch debates about technologies.
2.
A. By scanning keywords.
B. By evaluating resumes.
C. By analyzing personalities.
D. By assessing speech patterns.
3.
A. High expense.
B. Unfair results.
C. Age discrimination.
D. Innovation interruption.
4.
A. Favourable.
B. Disapproving.
C. Cautious.
D. Doubtful.