(1) Mention that this is the second space class given by Chinese astronauts since the first one in 2013;
(2) Briefly introduce the main content of this space class (an introduction to the working and living environment aboard the space station, demonstrations of science experiments, and interaction with students);
(3) Your thoughts and feelings.
Notes:
(1) Write no fewer than 100 words;
(2) You may add details as appropriate to make the writing coherent;
(3) The opening has been written for you and does not count toward the word limit.
Reference expression: livestream a space class
Dear Chris,
As you know, the Shenzhou XIII manned spacecraft successfully returned to the earth on 16th April, carrying three Chinese astronauts who had worked inside the Tiangong space station for six months.
________________________________________________________________
Yours,
Li Jin
2. Many parents, confused by how their children shop or socialize, would feel undisturbed by how they are taught: this sector remains digitally behind. Can artificial intelligence boost the digitalization of the classroom? ChatGPT-like generative AI is generating excitement for providing personalized tutoring to students. By May, New York had let the bot back into classrooms.
Learners are accepting the technology. Two-fifths of undergraduates surveyed last year by online tutoring company Chegg reported using an AI chatbot to help them with their studies, with half of those using it daily. Chegg’s chief executive told investors it was losing customers to ChatGPT as a result of the technology’s popularity. Yet there are good reasons to believe that education specialists who harness AI will eventually win over generalists such as OpenAI and other tech firms eyeing the education business.
For one, AI chatbots have a bad habit of producing nonsense. “Students want content from trusted providers,” argues Kate Edwards from a textbook publisher. Her company hasn’t allowed ChatGPT and other AIs to use its material, but has instead used the content to train its own models, which are built into its learning apps. Besides, teaching isn’t merely about giving students an answer, but about presenting it in a way that helps them learn. Chatbots must also be tailored to different age groups to avoid either cheating or infantilizing (使婴儿化) students.
Bringing AI to education won’t be easy. Many teachers are behind the learning curve. Less than a fifth of British educators surveyed by Pearson last year reported receiving training on digital learning tools. Tight budgets at many institutions will make selling new technology an uphill battle. Teachers’ attention may need to shift towards motivating students and instructing them on how best to work with AI tools. If those answers can be provided, it’s not just companies that stand to benefit. An influential paper from 1984 found that one-to-one tutoring improved the average academic performance of students. With the learning of students, especially those from poorer households, held back, such a development would certainly deserve top marks.
1. What do many parents think remains untouched by AI about their children?
A. Their shopping habits.   B. Their social behavior.
C. Their classroom learning.   D. Their interest in digital devices.
A. Develop.   B. Use.   C. Prohibit.   D. Blame.
A. Many teachers aren’t prepared technically.
B. Tailored chatbots can’t satisfy different needs.
C. AI has no right to copy textbooks for teaching.
D. It can be tricked to produce nonsense answers.
A. An introduction to AI.   B. A product advertisement.
C. A guidebook to AI application.   D. A review of AI in education.
3. Several dozen graduate students in London were recently tasked with outwitting a large language model (LLM), a type of AI designed to hold useful conversations. LLMs are often programmed with guardrails designed to stop them giving harmful replies: instructions on making bombs in a bathtub, say, or the confident statement of “facts” that are not actually true.
The aim of the task was to break those guardrails. Some results were merely stupid. For example, one participant got the chatbot to claim ducks could be used as indicators of air quality. But the most successful efforts were those that made the machine produce the titles, publication dates and host journals of non-existent academic articles.
AI has the potential to be a big benefit to science. Optimists talk of machines producing readable summaries of complicated areas of research; tirelessly analysing oceans of data to suggest new drugs and even, one day, coming up with hypotheses of their own. But AI comes with downsides, too.
Start with the simplest problem: academic misconduct. Some journals allow researchers to use LLMs to help write papers. But not everybody is willing to admit to it. Sometimes, the fact that LLMs have been used is obvious. Guillaume Cabanac, a computer scientist, has uncovered dozens of papers that contain phrases such as “regenerate response” — the text of a button in some versions of ChatGPT that commands the program to rewrite its most recent answer, probably copied into the manuscript (原稿) by mistake.
Another problem arises when AI models are trained on AI-generated data. LLMs are trained on text from the Internet. As they churn out (大量炮制) more such text, the risk of LLMs taking in their own outputs grows. That can cause “model collapse”. In 2023 Ilia Shumailov, a computer scientist, co-authored a paper in which a model was fed handwritten digits and asked to generate digits of its own, which were fed back to it in turn. After a few cycles, the computer’s numbers became more or less illegible. After 20 iterations (迭代), it could produce only rough circles or blurry lines.
Some worry that computer-generated insights might come from models whose inner workings are not understood. Inexplainable models are not useless, says David Leslie at an AI-research outfit in London, but their outputs will need rigorous testing in the real world. That is perhaps less unnerving than it sounds. Checking models against reality is what science is supposed to be about, after all.
For now, at least, questions outnumber answers. The threats that machines pose to the scientific method are, at the end of the day, the same ones posed by humans. AI could accelerate the production of nonsense just as much as it accelerates good science. As the Royal Society has it, nullius in verba: take nobody’s word for it. No thing’s, either.
1. The result of the task conducted in London shows that ________.
A. LLMs give away useful information   B. the guardrails turn out to be ineffective
C. AI’s influence will potentially be decreased   D. the effort put into the study of AI hardly pays off
A. The readability of the models’ output is underestimated.
B. The diverse sources of information confuse the models.
C. Training on regenerated data stops models working well.
D. The data will become reliable after continuous iterations.
A. impractical   B. unjustified   C. groundless   D. unsettling
A.Faster Nonsense: AI Could Also Go Wrong |
B.Imperfect Models: How Will AI Make Advances? |
C.The Rise of LLMs: AI Could Still Be Promising |
D.Bigger Threats: AI Will Be Uncontrollable |
4. Users of Google Gemini, the tech giant’s artificial-intelligence model, recently noticed that asking it to create images of Vikings, or German soldiers from 1943 produced surprising results: hardly any of the people depicted were white. Other image-generation tools have been criticized because they tend to show white men when asked for images of entrepreneurs or doctors. Google wanted Gemini to avoid this trap; instead, it fell into another one, depicting George Washington as black. Now attention has moved on to the chatbot’s text responses, which turned out to be just as surprising.
Gemini happily provided arguments in favor of positive action in higher education, but refused to provide arguments against. It declined to write a job ad for a fossil-fuel lobby group (游说团体), because fossil fuels are bad and lobby groups prioritize “the interests of corporations over public well-being”. Asked if Hamas is a terrorist organization, it replied that the conflict in Gaza is “complex”; asked if Elon Musk’s tweeting of memes had done more harm than Hitler, it said it was “difficult to say”. You do not have to be a critic to perceive its progressive bias.
Inadequate testing may be partly to blame. Google lags behind OpenAI, maker of the better-known ChatGPT. As it races to catch up, Google may have cut corners. Other chatbots have also had controversial launches. Releasing chatbots and letting users uncover odd behaviors, which can be swiftly addressed, lets firms move faster, provided they are prepared to weather (经受住) the potential risks and bad publicity, observes Ethan Mollick, a professor at Wharton Business School.
But Gemini has clearly been deliberately adjusted, or “fine-tuned”, to produce these responses. This raises questions about Google’s culture. Is the firm so financially secure, with vast profits from internet advertising, that it feels free to try its hand at social engineering? Do some employees think it has not just an opportunity, but a responsibility, to use its reach and power to promote a particular agenda? All eyes are now on Google’s boss, Sundar Pichai. He says Gemini is being fixed. But does Google need fixing too?
1. What do the words “this trap” underlined in the first paragraph refer to?
A. Having a racial bias.   B. Responding to wrong texts.
C. Criticizing political figures.   D. Going against historical facts.
A. Gemini’s refusal to make progress.   B. Gemini’s failure to give definite answers.
C. Gemini’s prejudice in text responses.   D. Gemini’s avoidance of political conflicts.
A. Creative.   B. Promising.   C. Illegal.   D. Controversial.
A. Its security is doubted.   B. It lacks financial support.
C. It needs further improvement.   D. Its employees are irresponsible.
5. Before smartphones and desktop computers existed, astronomers relied on telescopes and, in some cases, simply the naked eye for observing the night sky. With the digital revolution, the tools that people use for navigation, communication and education are prime targets for astronomy apps and programs.
There are dozens of apps for astronomy, as well as apps from most of the major space missions. Each one delivers up-to-date content for people interested in various missions. Whether someone is a stargazer or simply interested in what’s going on “up there”, these digital assistants open up the cosmos (宇宙) for individual exploration. Many of the apps are free or have in-app purchases to help users customize their experience.
Mobile and desktop stargazing applications show observers the night sky at a given location on Earth. Since computers and mobiles have access to time, date, and location information, the programs and apps know where they are. Using databases of stars, planets, and deep-sky objects, plus some chart-creation code, these programs can deliver an accurate digital chart. What the user has to do is look at the chart to know what is up in the sky.
Digital star charts not only show an object’s position, but also deliver information about the object itself, and can animate the apparent motion of planets and the Sun over time. A quick search of app sites reveals a wealth of astronomy apps that work well on smartphones and tablets. There are also many programs on computers. Many of them can also be used to control a telescope, making them doubly useful for sky observers. Nearly all the apps and programs are fairly easy for beginners to learn astronomy at their own pace.
Apps such as StarMap 2 have many resources available for stargazers, even in the free edition. Customizations include adding new databases, telescope controls, and a unique series of tutorials for beginners, available to users with iOS devices. Another one, called Sky Map, is a favorite among Android users free of charge. Described as a “hand-held planetarium (天文馆) for your device”, it helps users identify stars, planets and more.
1. What can we learn from paragraph 2?
A. We should buy astronomy apps.
B. Many people like exploring space.
C. Astronauts have completed various missions.
D. Astronomy apps are suitable for many people.
A. What astronomers need to do.   B. How astronomy apps work.
C. The benefit of using astronomy apps.   D. The reason for designing astronomy apps.
A. To express new arguments.   B. To present different results.
C. To give relevant examples.   D. To explain common phenomena.
A. Astronauts have improved their equipment   B. Space exploration is of great significance
C. We should research astronomy apps   D. You needn’t bother to observe stars
Your essay should include: 1. the current state of AI development;
2. different views on whether AI will completely replace humans;
3. your own opinion.
Notes: 1. about 80 words; 2. you may add content as appropriate.
Will AI replace humans entirely in the future?
________________________________________________________________

7. The beginning of the Year of the Dragon has foreshadowed a “Song of Ice and Fire” with the emergence of Sora, a text-to-video AI model. Videos generated by Sora display strong consistency when it comes to characters and backgrounds, and support continuous shots of up to 60 seconds, including highly detailed settings and multiple camera angles.
However, film insiders noted that currently, AI-generated 60-second videos cannot support the creation of a full-length movie, and the idea that AI tools will “bomb” the film and television industry is so far unfounded. Meanwhile, experts say not to worry excessively, as the integration of AI will help optimize certain occupations, attract more innovative talents and bring new possibilities to the film and television industry.
From the age of film stock to the digital age, from practical effects to digital effects, from 2D to 3D, Sora, like any technological revolution in the century-long history of film, will improve production efficiency, update production and may even create new genres and trends in filmmaking.
Facing the panic signals that AI tools will threaten the global film industry, film insiders argued that the fundamental DNA of film is art and that human creativity cannot be replaced.
A. With OpenAI’s iteration speed, producing AI videos dozens of minutes long is not far off.
B. It is also likely to be incorporated into film and television education and training in the future.
C. Film and television are closely linked to technological advancements, which stimulate creativity.
D. Sora will undoubtedly prompt changes in existing industrial production and may even replace some jobs.
E. Sora is undoubtedly “more of an opportunity than a challenge” for the global film and television industry.
F. Creativity and film production require the integration of emotional experiences and individual memories.
G. This implies that with just a text description, ordinary people using Sora may be able to become “great directors”.
1. What does ASO-S mainly do?
A. Do solar observation.   B. Study Earth’s atmosphere.   C. Make weather forecasting.
A. Four years.   B. Forty hours.   C. Seventy years.
A. It can block the sun’s radiation.
B. It may fill China’s gap in the field.
C. It is the first solar satellite globally.
1. By 2020, where should trash sorting happen?
A. In all major cities of China.
B. Only in Zhejiang province.
C. In Ms. Chu’s neighborhood.
A. Trash collector.   B. Reporter.   C. Politician.
A. Door-to-door training.   B. Free trash cans.   C. Plastic trash bags.
A. He will call a volunteer.
B. He will see a name on the bag.
C. He will scan the bag with a phone.
10. Evan Selinger, professor in RIT’s Department of Philosophy, has taken an interest in the ethics (伦理标准) of AI and the policy gaps that need to be filled in. Through a humanities viewpoint, Selinger asks the questions, “How can AI cause harm, and what can governments and companies creating AI programs do to address and manage it?” Answering them, he explained, requires an interdisciplinary approach.
“AI ethics go beyond technical fixes. Philosophers and other humanities experts are uniquely skilled to address the nuanced (微妙的) principles, value conflicts, and power dynamics. These skills aren’t just crucial for addressing current issues. We desperately need them to promote anticipatory (先行的) governance, ” said Selinger.
One example that illustrates how philosophy and humanities experts can help guide these new, rapidly growing technologies is Selinger’s work collaborating with a special AI project. “One of the skills I bring to the table is identifying core ethical issues in emerging technologies that haven’t been built or used by the public. We can take preventative steps to limit risk, including changing how the technology is designed, ”said Selinger.
Taking these preventative steps and regularly reassessing what risks need addressing is part of the ongoing journey in pursuit of creating responsible AI. Selinger explains that there isn’t a step-by-step approach for good governance. “AI ethics have core values and principles, but there’s endless disagreement about interpreting and applying them and creating meaningful accountability mechanisms,” said Selinger. “Some people are rightly worried that AI can become integrated into ‘ethics washing’: weak checklists, flowery mission statements, and empty rhetoric that covers over abuses of power. Fortunately, I’ve had great conversations about this issue, including with some experts, on why it is important to consider a range of positions.”
Some of Selinger’s recent research has focused on the back-end issues with developing AI, such as the human impact that comes with testing AI chatbots before they’re released to the public. Other issues focus on policy, such as what to do about the dangers posed by facial recognition and other automated surveillance(监视) approaches.
Selinger is making sure his students are informed about the ongoing industry conversations on AI ethics and responsible AI. “Students are going to be future tech leaders. Now is the time to help them think about what goals their companies should have and the costs of minimizing ethical concerns. Beyond social costs, downplaying ethics can negatively impact corporate culture and hiring, ” said Selinger. “To attract top talent, you need to consider whether your company matches their interests and hopes for the future. ”
1. Selinger advocates an interdisciplinary approach because ________.
A. humanities experts possess skills essential for AI ethics
B. it demonstrates the power of anticipatory governance
C. AI ethics heavily depends on technological solutions
D. it can avoid social conflicts and pressing issues
A. adopt a systematic approach   B. apply innovative technologies
C. anticipate ethical risks beforehand   D. establish accountability mechanisms
A. More companies will use AI to attract top talent.
B. Understanding AI ethics will help students in the future.
C. Selinger favors companies that match his students’ values.
D. Selinger is likely to focus on back-end issues such as policy.