1. Here are four books for you and your children. Pick any one you like!
Football with Dad by Frank Berrios
After watching the big game on TV every Sunday, a boy and his dad head outside to throw around a football like their favorite players. With its focus on playing safe and having fun, Football With Dad is the perfect way to introduce your little reader to the game of football.
There’s an Alligator Under My Bed by Mercer Mayer
Fear of the dark often brings people another fear, like having a monster (怪物) under your bed. This story uses clever thinking to show children how to deal with their fear during the night. And it has helped lots of children overcome their fear of the dark.
Room on the Broom by Julia Donaldson
The story has a great message of friendship. A witch (女巫) is happily flying around on her broom when the wind blows away her hat, then her bow, and then her wand. Luckily, a helpful animal finds her missing belongings each time.
7 Days till Ice Cream by Bernardo Feliciano
Jerron, A. J., and Cha are so excited about ice cream day! But sometimes, the ice cream car drives down a different street. Can they work together to get the ice cream car right to their house? This fun, easy-to-read story also shows us problem-solving and hands-on activities that your children will love!
1. What is the book Football With Dad about?
A. How to enjoy an exciting football game.
B. How to play football.
C. How to be a father.
D. How to get kids to be athletic.
A. Football with Dad.
B. There’s an Alligator Under My Bed.
C. Room on the Broom.
D. 7 Days till Ice Cream.
A. Boring.
B. Responsible.
C. Sensitive.
D. Intelligent.
2. Artificial intelligence (AI) is showing promise in earthquake prediction, challenging the long-held belief that it is impossible. Researchers at the University of Texas, Austin, have developed an AI algorithm (算法) that correctly predicted 70% of earthquakes a week in advance during a trial in China and provided accurate strength calculations for the predicted earthquakes.
The research team believes their method succeeded because they stuck with a relatively simple machine learning approach. The AI was provided with a set of statistical features based on the team’s knowledge of earthquake physics, and then instructed to train itself using a five-year database of earthquake recordings. Once trained, the AI provided its prediction by listening for signs of incoming earthquakes within the background rumblings (隆隆声) in the Earth.
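The training recipe described above — physics-informed statistical features plus a relatively simple learner — can be sketched at toy scale. Everything below (the rolling-energy features, the synthetic "rumble" traces, the tiny logistic-regression learner) is an illustrative assumption, not the Texas team's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def statistical_features(signal, window=50):
    """Summarize a ground-motion trace with simple statistics
    (rolling energy and variance), standing in for the physics-based
    features described in the passage."""
    chunks = signal[: len(signal) // window * window].reshape(-1, window)
    return np.stack([np.mean(chunks**2, axis=1),
                     np.var(chunks, axis=1)], axis=1).mean(axis=0)

# Synthetic "database": quiet traces (label 0) vs. traces carrying a
# stronger background rumble (label 1).
X, y = [], []
for label in (0, 1):
    for _ in range(100):
        trace = rng.normal(0, 1.0, 1000)
        if label:
            trace += rng.normal(0, 2.0, 1000)  # incoming-quake rumble
        X.append(statistical_features(trace))
        y.append(label)
X, y = np.array(X), np.array(y)
X = (X - X.mean(axis=0)) / X.std(axis=0)       # normalize features

# Tiny logistic-regression "AI", trained by gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))         # predicted probability
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

accuracy = ((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
```

On this toy data the two classes separate cleanly on energy alone, which mirrors the point in the passage: with well-chosen statistical features, a simple model suffices.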
This work is clearly a milestone in research for AI-driven earthquake prediction. “You don’t see earthquakes coming,” explains Alexandros Savvaidis, a senior research scientist who leads the Texas Seismological Network Program (TexNet). “It’s a matter of milliseconds, and the only thing you can control is how prepared you are. Even with the 70% accuracy, that’s a huge result and could help minimize economic and human losses and has the potential to remarkably improve earthquake preparation worldwide.”
While it is unknown whether the same approach will work at other locations, the researchers are confident that their AI algorithm could produce more accurate predictions if used in areas with reliable earthquake tracking networks. The next step is to test artificial intelligence in Texas, since UT’s TexNet has 300 earthquake stations and over six years’ worth of continuous records, making it an ideal location for these purposes.
Eventually, the authors hope to combine the system with physics-based models. This strategy could prove especially important where data is poor or lacking. “That may be a long way off, but many advances such as this one, taken together, are what moves science forward,” concludes Scott Tinker, the bureau’s director.
1. How does the AI forecast earthquakes?
A. By identifying data from the satellites.
B. By analyzing background sounds in the Earth.
C. By modeling data based on earthquake recordings.
D. By monitoring changes in the Earth’s magnetic field.
A. The ways to reduce losses in earthquakes.
B. The importance of preparing for earthquakes.
C. The significance of developing the AI prediction.
D. The limitation of AI algorithms in earthquake prediction.
A. Conducting tests in different locations.
B. Applying the AI approach to other fields.
C. Building more earthquake stations in Texas.
D. Enlarging the database to train the calculation accuracy.
A. Stable but outdated.
B. Effective but costly.
C. Potential and economical.
D. Pioneering and promising.
3. Until the Road Ends (By Phil Earle)
In this Second World War story, Peggy and her dog Beau are separated when she is sent to the countryside for safety. Left behind in the city, Beau becomes an unlikely hero, searching the streets and helping families as bombs (炸弹) fall around them. Will he and Peggy ever be reunited?
Finding Bear (By Hannah Gold)
In the continuation of The Last Bear, April is home from her adventure but she can't stop thinking about Bear. When she hears that a polar bear has been shot and injured in Svalbard, she believes it is her friend and she sets out on a journey to the northernmost reaches of the Arctic to find him.
Calling the Whales (By Jasbinder Bilan)
Tulsi and her friend Satchen discover a whale trapped in a fishing net. Aiming to free the creature, they repeatedly dive (潜水) down into the sea. But in the end, they have to accept the failure. Heading home to ask for help, they get caught in a storm and their boat overturns. Just as they think all is lost, help arrives from an unexpected source.
City of Horses (By Frances Moloney)
Thirteen-year-old Misty’s life is turned upside down when she has to move far away from her friends. Her new home is on an estate (庄园) where horses run free, and she soon gets to know Dylan, a mysterious local boy who loves horses. When the horses come under threat, Misty must find the courage to help save her new home.
1. Which book is set in a war?
A. Finding Bear.
B. Until the Road Ends.
C. Calling the Whales.
D. City of Horses.
A. Hannah Gold.
B. Phil Earle.
C. Jasbinder Bilan.
D. Frances Moloney.
A. They remind people of the future.
B. They show high technology.
C. They offer advice on raising animals.
D. They are about adventures with animals.
4. It is no secret that building a large language model (LLM) requires huge amounts of data. In conventional training, an LLM is fed mountains of texts and encouraged to guess each word before it appears. With each prediction, the LLM makes small adjustments to improve its chances of guessing right. The end result is something that has a certain statistical "understanding" of what is proper language and what isn't.
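The "guess each word before it appears" objective can be shown at toy scale. A count-based bigram model stands in here for real gradient-trained networks; the corpus and the function names are invented for illustration:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which: this toy model's entire statistical
# "understanding" of proper language is these follow-on counts.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    """Predict the word most often seen after `word` in training."""
    return following[word].most_common(1)[0][0]
```

After training, `guess_next("sat")` returns `"on"`, because "on" always followed "sat" in the corpus; a real LLM adjusts millions of weights instead of counts, but the prediction target is the same.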
But an LLM that has only undergone this so-called "pretraining" is not yet particularly useful. When asked for a joke to cheer you up, for instance, the pretrained model GPT-2 just repeated the question back three times. Clearly, improved training methods have to be found.
Here comes the so-called Reinforcement Learning From Human Feedback (RLHF), which normally involves three steps. First, human volunteers are asked to choose which of several potential LLM responses better fits a given situation. This process is repeated many thousands of times over. Then the final data set is used to train a reward model. Finally, the well-trained reward model is employed to train the original LLM. But this way of doing RLHF is quite complex, and using two separate LLMs takes time and money.
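The three steps can be sketched with a toy reward model. The scalar linear rewards and the Bradley-Terry-style preference loss below are standard modelling assumptions, and the synthetic "volunteer" data is invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1 (the volunteers' job): preference pairs. Each response is
# represented here by a small feature vector; the first of each pair
# was "chosen" because it leans toward a hidden human-taste vector.
dim = 8
true_taste = rng.normal(size=dim)
chosen = rng.normal(size=(500, dim)) + 0.5 * true_taste
rejected = rng.normal(size=(500, dim))

# Step 2: fit a linear reward model r(x) = w @ x so that
# r(chosen) > r(rejected), minimizing the Bradley-Terry-style loss
#   L = -log sigmoid(r(chosen) - r(rejected)).
w = np.zeros(dim)
for _ in range(300):
    margin = (chosen - rejected) @ w
    s = 1.0 / (1.0 + np.exp(-margin))                  # sigmoid
    grad = -((1.0 - s)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= 0.5 * grad

# Step 3 (not shown): the trained reward model would now score candidate
# responses while the original LLM is tuned to maximize that score.
pref_rate = float(((chosen - rejected) @ w > 0).mean())  # training fit
```

The two-model structure is visible even in this sketch: the reward model is a separate learner that must be trained before it can guide the LLM, which is exactly the overhead the passage says DPO removes.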
It now turns out that the same result can be achieved with much less effort. Dr Rafailov and his colleagues, including Archit Sharma and Eric Mitchell, presented this alternative in December 2023 at an AI conference. Their method, Direct Preference Optimisation (DPO), relies on a satisfying mathematical trick.
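The trick, as published, is that the reward model becomes implicit: the LLM's own log-probability ratio against a frozen reference copy plays the reward's role, so preference pairs train the LLM directly. A minimal sketch of the DPO loss for one pair (the toy log-probabilities and the β value are made up):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * ((logp_c - ref_logp_c) - (logp_r - ref_logp_r))).
    The policy's implicit reward is its log-prob ratio to the frozen
    reference model, so no separate reward model is needed."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If the tuned model favors the chosen response more than the reference
# does, the margin is positive and the loss drops below log(2).
better = dpo_loss(-3.0, -9.0, -5.0, -6.0)   # margin = 0.1 * (2 - (-3))
neutral = dpo_loss(-5.0, -6.0, -5.0, -6.0)  # margin = 0
```

Minimizing this loss pushes the model's probabilities directly toward the human-preferred responses, which is why the middleman reward model of RLHF can be dropped.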
According to the authors, removing the middleman makes DPO between three and six times more efficient than RLHF, and capable of better performance at tasks such as text summarisation. Its ease of use is already allowing smaller companies to train their own models. "A year ago, only a few world-leading models, such as Google's Gemini and OpenAI's GPT-4, could afford to use RLHF," says Dr Rafailov. "But as of March 12, eight out of the ten LLMs used DPO."
1. What is the second paragraph mainly about?
A. The applications of GPT-2.
B. The secret of building LLMs.
C. The process of pretraining.
D. The limitations of pretrained LLMs.
A. Expensive.
B. Efficient.
C. Useless.
D. Simple.
A. It has to use more LLMs.
B. It is much more popular than RLHF.
C. It is still too complex to use.
D. It is not cheap enough for small companies.
A. DPO, the Perfect LLM Training Method
B. The Development of Large Language Models
C. A Brief Introduction to LLM Training Methods
D. GPT-4, the Most Intelligent Large Language Model
1. Who will help deliver the bottles on Monday morning?
A. Lisa.
B. Steven.
C. The teacher.
A. To decorate the room for the party.
B. To make gifts for the homeless.
C. To use them as money cans.
A. A community club.
B. A charity event.
C. An entertainment activity.
1. Where will listeners go first?
A. The coast.
B. The park.
C. The zoo.
A. Prepare some snacks.
B. Learn about its history.
C. Book tickets.
A. Play volleyball.
B. Have a picnic.
C. Visit museums.
A. A hat.
B. A camera.
C. A map.
1. Why does the woman suggest going to Egypt by ship?
A. It’s more interesting.
B. It’s cheaper.
C. It’s more comfortable.
A. She is not in good health.
B. She is busy with her work.
C. She always worries too much.
A. Go to Egypt.
B. Stay at home.
C. Go to the seaside.
1. What was the matter with the first room?
A. It had a bad view.
B. It was noisy.
C. It was untidy.
A. A garden.
B. The ocean.
C. A parking lot.
A. She could get her money back.
B. She could be upgraded at a lower price.
C. She could be accommodated for free next time.
1. Why did Billy perform poorly last year?
A. He didn’t adapt to the new school.
B. He didn’t get help from his family.
C. He didn’t try his best to study.
A. Writing.
B. Spelling.
C. Reading.
A. Running.
B. Swimming.
C. Hiking.