Textbook Giant Pearson Is Launching an AI Study Buddy


Pearson, one of the top five textbook makers in the world, is leaping into AI with all the speed and subtlety of a cinderblock-sized primer tumbling to the floor. The company said it plans to offer a new kind of study buddy AI chatbot, one it promises will be “free from the noise and corruption of web-based AI models.”

Pearson’s early plans for new AI-based study tools emerged in its mid-year shareholder presentation on Monday. The company said the AI study tool would launch in its Pearson+ and Mastering subscription services in time for this fall’s back-to-school season. The service would be trained on Pearson’s textbooks and learning materials to offer students “real-time support.”

CEO Andy Bird told investors the bot wouldn’t be a “shortcut” to answers, but that it would try to personalize the tutoring experience for each student.

A good chunk of the company’s video message was spent talking up AI’s use across different Pearson products, such as its employment data analytics services and its software that checks for cheaters on job certification tests. On the question of whether AI would replace the people who write the company’s textbooks, VP of product management Marcia Horton told investors that authors “would not be replaced by technology,” though she anticipated authors could work with large language models to craft lesson plans.

The textbook company is likely watching what its competitor Chegg has been saying about generative AI. The textbook rental company hosts a human tutoring service called Chegg Study, where users can post homework questions that get answered by live tutors. The company told investors back in May that chatbots like ChatGPT have taken a chunk out of new Study sign-ups, despite Chegg working with ChatGPT-maker OpenAI to create its own AI bot, CheggMate.

Pearson’s stock price took a big hit after news of Chegg’s misfortunes came to light, as the market feared ChatGPT would siphon off would-be students’ test questions. Let’s also not forget that the language models ChatGPT is based on, GPT-3.5 and GPT-4, may very well contain some of Pearson’s published textbooks in their training data.

That’s not to say the textbook company is feeling the same sort of burn. Pearson said in its mid-2023 shareholders report that its Pearson+ app saw a 200% year-over-year increase in paid subscribers, to 938,000. That app lets users download eTextbooks they’re forced to rent through Pearson’s services and discuss that content with other users. The company also offers the Mastering services, which give students and teachers access to online homework and testing products.

Despite the fear of AI eating into the company’s bottom line, Pearson touted growth amid all this AI hubbub thanks to what Bird called “proprietary IP and vast data sets.” Pearson, along with the other top textbook companies Scholastic, McGraw-Hill, Cengage Learning, and Houghton Mifflin Harcourt, forms an effective oligopoly that has seen textbook prices rise to the point where students can barely afford them.

Pearson has thought of other ways to deal with the problem of free access to online information. Last year, the company floated the idea of creating an NFT service for its textbooks, a move that critics told us would only undermine the all-too-necessary secondhand textbook market.

Of course, Pearson isn’t the only group talking about AI-powered online learning services. Khan Academy has rolled out a whole swath of beta AI features on its platform with the similar goal of a personalized learning experience. Still, there’s widespread concern in academia about how students could use services like ChatGPT to cheat in classrooms, or even generate entire essays from a single prompt. Even as some education leaders are cooling on overt AI antipathy, that doesn’t mean there aren’t obvious problems with these systems in an education environment.

Despite Pearson’s promises of an uncorrupted AI, these chatbots often fabricate and spit out incoherent answers in what some have called “hallucinations.” Even training the AI on your own proprietary learning material doesn’t guarantee it won’t generate falsehoods. As Anthropic co-founder Daniela Amodei put it to the AP, “I don’t think that there’s any model today that doesn’t suffer from some hallucination.” That’s because AI isn’t actually intelligent; it simply uses highly capable deep learning to “predict the next word” in a sentence.
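To make the “predict the next word” idea concrete, here’s a deliberately toy sketch in Python: a bigram model that picks the word most often seen after the current one in a tiny corpus. Real chatbots use neural networks trained on vast text collections, not lookup tables, and the corpus and function names here are invented for illustration — but the underlying task (score candidate next words, emit a likely one) is the same, which is also why a plausible-sounding wrong answer can fall out of the process.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus for demonstration purposes only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# "cat" follows "the" twice in the corpus, beating "mat" and "fish".
print(predict_next("the"))
```

Note that the model has no notion of truth, only of frequency: if the corpus contained a falsehood often enough, the model would happily predict it.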


Want to know more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to The Best Free AI Art Generators, The Best ChatGPT Alternatives, and Everything We Know About OpenAI’s ChatGPT.
