1. Human responses to moral dilemmas (道德困境) can be influenced by statements written by the artificial intelligence chatbot ChatGPT, according to a study published in Scientific Reports. The findings indicate that users may undervalue the extent to which their own moral judgments can be influenced by the chatbot.
Sebastian Krügel and colleagues asked ChatGPT several times whether it is right to sacrifice (牺牲) the life of one person in order to save the lives of five others. They found that ChatGPT randomly produced statements arguing both for and against sacrificing one life, indicating that it is not biased towards a certain moral stance (立场).
The authors then presented 767 U.S. participants, who were on average 39 years old, with a dilemma: whether to sacrifice one person’s life to save five others. Before answering, participants read a statement arguing either for or against sacrificing one life to save five, attributed either to a moral advisor or to ChatGPT. After answering, participants were asked whether the statement they had read influenced their answers.
Eighty percent of participants reported that their answers were not influenced by the statements they read. However, the authors found that the answers participants believed they would have provided without reading the statements were still more likely to agree with the moral stance of the statement they did read than with the opposite stance. This indicates that participants may have underestimated the influence of ChatGPT’s statements on their own moral judgments.
The authors suggest that the potential of chatbots to influence human moral judgments highlights the need for education to help humans better understand artificial intelligence. They propose that future research should design chatbots that either decline to answer questions requiring a moral judgment or answer these questions by providing multiple arguments and warnings.
1. What are ChatGPT’s answers to a certain moral question like?
A. Changeable.
B. Valuable.
C. Creative.
D. Simple.
A. They admitted the power of ChatGPT.
B. They were interviewed by a moral advisor.
C. They were affected by ChatGPT unknowingly.
D. They were presented with different moral dilemmas.
A. Different findings of the study.
B. Future possibility for chatbots.
C. Major focuses of future education.
D. Solutions to the impact of chatbots.
A. ChatGPT Tends to Cause Moral Panics.
B. ChatGPT: Is It Likely to Affect Our Life?
C. ChatGPT: Why Is It Making Us So Nervous?
D. ChatGPT Can Influence Human Moral Judgments.
2. How is eating in space different from eating on Earth?
If you send astronauts into space, you have to send along food as well. But what do astronauts eat, and how do they eat it? Scientists take several factors into consideration as they plan meals for space.
First, and possibly most important, is nutrition (营养).
The lack of gravity in a spacecraft also determines what foods can or cannot be eaten in space. Meals must be packaged carefully so they won’t spill (洒落/溢出) into the cabin (one of the areas inside a spacecraft). Water or tiny bits of food could get inside a machine or electronic device and damage it.
Despite all these requirements, much of the food eaten in space is actually similar to what you might eat on any given day.
A. Believe it or not, they also have fresh fruits and vegetables.
B. Keeping astronauts’ physical health is a top task for any space mission.
C. Food packaging is made to be as light as possible.
D. Taste is also important.
E. For the same reason, sharp knives and forks are never used on board.
F. Nutrition and practicality (实用) are important things to consider.
G. Finally, weight is an important concern.
3. The integration of artificial intelligence (AI) in educational technology (EdTech) has brought incomparable convenience and efficiency to classrooms worldwide. However, despite these advancements, it is crucial to recognize the challenges these AI-driven tools pose to the autonomy and professional judgment of instructors.
One primary concern is the depersonalization of instruction. These tools often rely on pre-packaged digital content and standardized solutions, leaving insufficient room for instructors to tailor their teaching methods. Each student possesses unique characteristics. Instructors, armed with their wealth of experience and knowledge, are best positioned to tailor their approaches to these individual needs. However, AI-driven tools restrict their ability to do so effectively, resulting in a one-size-fits-all approach that fails to inspire students to reach their maximum potential.
EdTech companies offer step-by-step solutions to textbook problems. These are intended to act as study aids. However, some students employ this feature as a means to merely copy solutions without comprehending concepts. Consequently, instances of cheating on assignments and exams become widespread. While these tools may offer convenience, students may use external resources or cooperate with others during quizzes, undermining the integrity of their learning outcomes.
The implications of this depersonalization and the increase in academic dishonesty are far-reaching. By decreasing the role of instructors as facilitators of meaningful educational interactions, we run the risk of preventing the growth of critical thinking and problem-solving skills among students. Education should not only focus on knowledge acquisition, but should also develop the ability to analyze, evaluate, and apply that knowledge in real-world contexts. It should help one’s mind grow, not simply memorize information. Through dynamic classroom discussions, cooperative projects, and hands-on activities, instructors play a crucial role in developing these essential skills.
While AI-driven EdTech tools undeniably have their virtues, we must not lose sight of the importance of preserving instructor autonomy and educational experience. Instead of relying only on pre-packaged content and standardized solutions, these tools should be designed to empower instructors to adapt and customize their approaches while taking full advantage of the benefits of technology.
1. What do the underlined words “the depersonalization of instruction” in paragraph 2 refer to?
A. Tailored methods for individuals.
B. Instructors’ dependence on AI.
C. Insufficient resources of AI-driven tools.
D. The one-size-fits-all approach.
A. A possible solution.
B. A further problem.
C. A well-meant intention.
D. A suggested application.
A. Thinking skills.
B. Teamwork building.
C. Interest development.
D. Knowledge acquisition.
A. They should be used widely.
B. Their benefits deserve our attention.
C. Their resources need enriching.
D. They should support instructor autonomy.
4. ChatGPT, designed by OpenAI to carry on conversations just like humans, has become a viral sensation. The AI-powered tool went from zero to a million users in just five days! Its ability to provide in-depth answers to user questions has even drawn the attention of distinguished technology companies.
The intelligent robot understands what the user says or types and then answers in a way that makes sense. Its vast body of knowledge has been gathered from the internet and archived (归档) books. It is further trained by humans. “We have a lot of information on the internet, but you normally have to Google it, then read it and then do something with it,” says Ricardo, chief science officer and co-founder of AI company Erudit. “Now you’ll have this resource that can process (处理,加工) the whole internet and all of the information it contains for you to answer your question.” This makes ChatGPT a useful tool for researching almost any topic.
ChatGPT cannot think on its own. It depends on the information that it has been trained on. As a result, the AI tool works well for things that have accurate data available. However, when unsure, ChatGPT can get creative and produce incorrect responses. OpenAI cautions (提醒) users to check the information no matter how logical it sounds. Also, ChatGPT has only been trained with information up to 2021. Hence, it cannot be relied upon for anything that happened after that.
Experts believe ChatGPT has limitless potential to solve real-world problems. It can translate long texts into different languages, create content on almost any topic, and even summarize books.
However, ChatGPT has received mixed reactions from educators. Some believe it could serve as a valuable tool to help build literacy skills in the classroom. It could also be used to teach students difficult science or math concepts. But other educators think ChatGPT will encourage students to cheat. They fear this will prevent them from building critical thinking and problem-solving skills. As a result, many districts are starting to ban its use in schools.
1. What is the unique feature of ChatGPT?
A. It has artificial intelligence.
B. It can answer users’ questions.
C. It has the largest number of users.
D. It can make meaningful conversations.
A. Its capability of information processing.
B. Its accurate information.
C. Its availability of up-to-date data.
D. Its vast body of questions.
A. ChatGPT is unable to think for itself.
B. ChatGPT lacks creativity.
C. ChatGPT offers illogical information.
D. ChatGPT is not properly trained.
A. Supportive.
B. Disapproving.
C. Objective.
D. Doubtful.