
If you’ve ever used the internet to plan a trip, chances are you’ve taken advice on what to see and do from someone who has never been to your destination. In fact, your guide probably has no direct knowledge of—or even personal interest in—sunbathing on the Gulf Coast, rock climbing in Moab, or marveling at the architecture of Milan. And yet, on travel websites across the internet, writers provide jet-setters with terrifically specific guidance: what time of day to head out, what kind of shoes to wear, and where to score a deal.

In the past, you might have purchased a travel book written by someone who actually went to a place (or who, at the very least, did old-school reporting on it, making phone calls to gather and verify information from people who had been there). Today the recommendations you find via Google are made by people who, well, also used Google.

This problem isn’t limited to travel advice. It infects everything recommendation-related. Every day, writers are paid a pittance by marketing companies, big brands, and a swarm of content mills attempting to capture a place in our search results and hoover up our attention with very specific advice. I am one of those writers, churning out that work: During my decade as a word monkey, I’ve recommended drinks and dishes from bars and restaurants I’ve never been to and waxed lyrical about hunting equipment despite having shot precisely one gun in my life. I’ve even written product descriptions for items that aren’t available in my country. (There are about half a dozen compression-sleeve brands that apparently ship only to the U.S., not my native England, much to the disappointment of my dodgy knee.)

The information in these articles is pulled from a number of sources. Sometimes those sources are official, like brand webpages. But often they’re sites like Tripadvisor, Amazon reviews, or even random posts on niche subreddits. And not every writer will be like me, putting a history degree’s research training to use and taking care to include only information that has been repeated in multiple places with strong reputations. When deadlines and bills are circling, the temptation to cut corners is extremely powerful.

Even though I research extensively and pride myself on accuracy, without direct experience, things go wrong. In the past, I’ve accidentally given incorrect public transit directions when writing about how to get to a museum, and reported the wrong number of poles in the product description for a tent. These are small mistakes, but they’re ones that don’t happen when you take a journey yourself or hold an item in your hands. Such errors can be corrected, and they aren’t always consequential. But they can be: Imagine someone with impaired mobility expecting a ramp at a museum and showing up to find steps—having their meticulously planned day out ruined, all because someone had to hit a deadline and assumed that a beloved tourist attraction was accessible.

Through search engine optimization and other nifty page-ranking subterfuge, this unverified content climbs to the top of search results and into people’s consciousness. Yes, there’s really good travel—and product, and drink—advice out there, based on real experiences. But better-researched pieces by actual experts may not have the benefit of being buoyed by SEO tricks, because the people producing that content won’t know the importance of internal linking, keyword repetition, and the other factors that can help a page shoot up in search results.

With the rise of large language models, the problem of not-quite-right advice will only get worse. The quickly written, often shoddily verified content is going to become what the LLMs take as the truth.

LLMs don’t search for information the way we do. Instead, they produce responses via token prediction, effectively a more complex version of predictive text. (Tokens are numerical values assigned to words, parts of words, and sometimes even individual letters, allowing the computer to “read” them.) These predictions are based on the data fed to the machines, and information that is consistent and considered “higher quality” can be given more weight in the model’s internal logic during training. An LLM doesn’t know whether what it is saying is right. It is designed not to provide the truth—simply to provide answers. You can see this clearly when models lead their users into “A.I. psychosis.” The LLM doesn’t care where it’s taking you. It simply chooses the most plausible word to follow the previous one, based on preset parameters and the vast quantities of information it has been trained on.
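To make “choosing the most plausible next word” concrete, here is a minimal sketch in Python of greedy next-token prediction. A toy bigram model (simple word-pair counts) stands in for a real neural network, and every training sentence in it is invented for illustration:

```python
from collections import Counter, defaultdict

# Invented "training data": three short sentences about places.
corpus = (
    "the hotel is near the beach . "
    "the hotel is near the station . "
    "the museum is near the beach ."
).split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token: str) -> str:
    # Pick the single most frequent follower: "most plausible," not "true."
    return following[token].most_common(1)[0][0]

# Generate a continuation greedily, one most-plausible word at a time.
token = "the"
output = [token]
for _ in range(5):
    token = predict_next(token)
    output.append(token)
print(" ".join(output))
# Output: "the hotel is near the hotel" -- grammatical-looking text
# assembled purely from frequency counts, with no notion of whether
# any hotel is actually near anything.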

Although LLM engineers can tinker with source weighting, as those at X do to Grok every time it veers too close to the actual truth rather than to whatever Elon Musk thinks, the people who run popular large language models like ChatGPT and Google Gemini say they prioritize training their models on sources that are generally seen as more authoritative. However, that doesn’t mean those sources will always provide the truth, or that the chatbot will always repeat it. It means that chatbots will try to collect information from sources that tick the correct boxes. Those sources can be wrong, and facts can be lost or warped in the game of telephone. What’s more, marketing professionals are already studying how LLMs rank sources to ensure that their content is picked up in A.I. overviews. That is, an incorrect fact in hastily produced copy—intended, at the end of the day, to capture as many eyeballs as possible rather than to inform—can all too easily be repeated by an LLM.
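For a sense of what “weighting sources” can mean mechanically, here is another minimal sketch, again in Python, in which documents from sources tagged as more authoritative are simply sampled more often during training. The source labels, weights, and sentences are all invented; real training pipelines are far more elaborate:

```python
import random

# Hypothetical corpus: (source, text, quality weight). Higher weight means
# the pipeline trusts the source more.
documents = [
    ("peer-reviewed journal", "Evidence on magnesium and immunity is mixed.", 3.0),
    ("government health site", "Zinc may modestly shorten some colds.", 2.5),
    ("supplement-brand blog", "Our zinc-magnesium blend boosts immunity!", 0.5),
]

texts = [doc[1] for doc in documents]
weights = [doc[2] for doc in documents]

# Draw a training batch: higher-weight sources appear more often, so their
# phrasing exerts more pull on what the model later treats as "probable."
random.seed(0)  # reproducible for the example
batch = random.choices(texts, weights=weights, k=10)
for text in batch:
    print(text)
```

Even in this toy version, the brand blog still shows up in the batch occasionally; down-weighting a dubious source makes its claims rarer in training, not absent.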

The stakes aren’t very high when a model believes that a hotel is 50 feet from the beach when it’s really 500, or that the stain remover some copywriter was paid hardly anything to “review” doesn’t actually work on colors. But the number of people using generative A.I. for things like mental health support and nutrition advice makes these discrepancies troubling. Leaders in the A.I. space, like Nvidia and OpenAI, claim that robust safeguards exist against this crystallization of falsity into fact, but OpenAI researchers have already admitted that “hallucinations are mathematically inevitable,” and industry experts note that there are real issues with homogeneous errors across multiple models.

Consider the following hypothetical: A natural health brand is looking to sell its supplements to a broader audience. It might hire a writer to extol, in a piece on its site, the virtues of zinc and magnesium, focusing on the alleged immunity-boosting properties of a supplement with a particular blend of the two (which the company, of course, sells). This writer, keen to do a good job, reads some studies that seem to support the claim but, lacking a grounding in the science or the statistics, draws an erroneous conclusion. (One of the most spurious phrases in modern advertising is studies show.) The writer, thanks to their knack for improving page rankings via keywords and section headings, will have created an article that looks like information but is really a thinly disguised advertisement. It floats to the top of Google … and is copied again and again by others selling vitamins. This claim will then be included in top-line A.I. responses about the benefits of magnesium and zinc supplements, as the LLM considers it the most “probable” answer to, say, common questions about staying healthy during cold and flu season.

The tips and tricks I use to avoid being taken in by sloppy A.I.-generated content are the same ones that have always existed for combating disinformation, honed mainly during my humanities degree. I double-check facts and figures and ensure they come from reputable sources, ideally with multiple additional sources backing them up. (Often, articles on a topic will all cite the same incorrect source—so be careful!) Polarized viewpoints often rise to the top: If I read something that either makes my blood boil or completely aligns with my own perspective, I make sure to check the source. When it comes to your health, experts stress the importance of having a “human in the loop”—that is, checking with your doctor before taking advice from a machine. And on your next vacation? Well, if you use ChatGPT to plan it, maybe just bake in extra time in case things go awry.


