Meta’s AI keeps pretending to be me and giving my phone number out to strangers

On a lazy Sunday afternoon in mid-July, I got a WhatsApp notification. A Peruvian stranger called Franco had added me to a Shrek-themed group on the Meta-owned messaging app.

Called “grupo alfa buena maravilla onda dinamita escuadron lobo” — Team Alpha Super Awesome Cool Dynamite Wolf Squadron — its dozen or so members immediately peppered me with instructions in Spanish.

“Tell Franco to go to sleep, little one,” one told me. “Tell Gretel to stop studying,” said another.

I asked what was going on, and the group members reacted with laughter and confusion. Then Franco shared a screenshot of the chat, and I noticed he’d saved me in his contacts as Meta AI, a chatbot Meta built for WhatsApp and other services.

“I don’t work for Meta and I have no idea who you all are,” I replied.


“Sorry it was a mistake and we all thought you were Meta,” replied a member called Harumi in English.

Franco chimed in again, sharing a screenshot of a one-on-one conversation he’d had with Meta AI in which he’d asked how to add the chatbot to a group chat.

“You can add me to a WhatsApp group as if I were just another contact. You only need to save my phone number,” the AI responded in Spanish — before sharing my phone number.

Meta AI had falsely told someone that it owned my phone number, and that they should message me on WhatsApp to contact it.

About a week later, it happened again.

And then a third time.

AI is suddenly everywhere


Tech companies are baking artificial intelligence into anything and everything.

Google is inserting “AI Overviews” at the top of search results. LinkedIn has added “AI-powered insights” to the bottom of posts. Amazon now automatically summarizes product reviews. Meta has got in on the action, adding its generative “Meta AI” chatbot to the search bar of Facebook, Instagram, and WhatsApp, where users can message it individually or as part of a larger group chat.

It’s a gold-rush frenzy to capitalize on the AI wave and demonstrate its utility — but there can be hiccups.

About a week after my run-in with Team Alpha Super Awesome Cool Dynamite Wolf Squadron, I got another unsolicited WhatsApp message.

“Someone named Meta AI told me to talk to you, hi,” a user with an Argentinian phone number wrote to me in Spanish. (I’ve translated her messages and others throughout this story using AI.)

“I’m Moon, a girl in a Taylor Swift group,” she later explained in English. Someone in this WhatsApp group had asked Meta AI for its phone number — and again it shared my number. “You can send me a message there, and I’ll be happy to help you with any questions or tasks you have,” it had helpfully added.

A screenshot that Moon shared of a message sent by Meta AI in a Taylor Swift WhatsApp group, claiming my phone number belongs to it.

A few days later, it happened yet again.

Another Argentinian messaged me, this one called Maxi. “An AI told me that you were a bot and that you generate photos. Is that true?” they wrote.

They had been chatting one-on-one with Meta AI in WhatsApp, they explained, and asked if it had image-generating capabilities like Midjourney. It had replied that it did, and Maxi simply had to message a phone number — my phone number — to access the content.

Even after I explained myself, Maxi was skeptical that I was a real person, demanding that I call them “to confirm you are a human.” I did so; then Maxi blocked me.

I’m used to getting messages and calls from strangers.

I’ve been a reporter for a decade, several years of that spent covering Facebook and its family of apps. I’ve shared my phone number liberally online and in articles. In return, I got a steady flow of tips, as well as the occasional person trying to get their Facebook account unbanned.

But this was different. Meta’s own technology was lying to people about me and my phone number.

What was going on?

My three hypotheses

After the first incident, I wondered if it was just a random fluke: Franco had asked for a phone number, so Meta AI may have simply generated a random 10-digit one. The odds of a random 10-digit number matching mine are about one in ten billion, and low as they are, it could happen once. But it had now happened three separate times.

Option two: It’s a Pan-South-American conspiracy to confuse a journalist. I was highly doubtful. Over the space of several weeks, people from multiple countries shared screenshots of Meta AI’s messages sharing my phone number. They seemed legitimately convinced that I was Meta AI, and appeared to have zero ulterior motive (both the first group chat and Maxi blocked me upon belatedly concluding I was human, after all). But I couldn’t definitively rule out this theory.

Are the Argentinians conspiring against me? I’m unconvinced.

My preferred hypothesis came down to how generative AI functions.

The large-language models that underpin chatbots are incredibly sophisticated. But one way to think about them is “fancy autocomplete.” They take the user’s prompt (their message or question) and try to figure out what most likely should come next, based on a corpus of training data. The responses seem convincing, and are often factually correct — but not always.
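To make the “fancy autocomplete” idea concrete, here is a toy sketch of my own (not Meta’s actual system, and vastly simpler than a real LLM): a bigram model that predicts the next word purely from how often word pairs appeared together in its training text. The training sentences are invented for illustration.

```python
from collections import Counter, defaultdict

# Invented training text for illustration only.
training_text = (
    "contact meta ai by phone . "
    "contact the reporter by phone . "
    "the reporter covers meta ai ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    following[prev][nxt] += 1

def autocomplete(word):
    """Return the word most often seen after `word` in the training text."""
    return following[word].most_common(1)[0][0]

print(autocomplete("by"))  # "phone" — the only word ever seen after "by"
```

Real models work over billions of parameters rather than a lookup table, but the core behavior is similar: the output is whatever the training data makes statistically likely, not what is verified to be true. If a phone number frequently appears near the words “Meta” and “AI” in the training text, the model can learn to produce it in that context.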

That training data is, in practice, just about anything the AI companies can get their hands on — often large-scale scrapes of the web. If Meta’s systems had scraped B-17’s website and my old stories, that might point to an answer.

I’ve written almost 300 B-17 stories that mention Facebook while including my phone number soliciting tips, plus many more about AI. All of that could have been scooped up into Meta’s training data. Then, during the complex training and inference process that forms AI models and chatbots, this frequent confluence of topics — Facebook, Meta, AI, and my phone number — may have forged a spurious association in the bowels of the LLM.

So if you ask the LLM for a phone number that corresponds to Meta and AI, mine might seem like a pretty good one to output.

The thorny side of LLMs


I reached out to Meta’s (human) spokespeople to ask what was going on — and if they could please stop random South Americans from messaging me.

The company said the error was likely because it had scraped my phone number from public sources online.

“Meta AI was trained on publicly available information online, which may include personal information. For example, if we collect a public blog post it may include the author’s name and contact information,” Meta’s Emil Vazquez said in a statement. “This is new technology, and it may not always return the response we intend, which is the same for all generative AI systems.”

Meta declined to confirm whether it had scraped B-17 stories specifically. In a recent blog post, CEO Mark Zuckerberg said Meta’s AI models “are trained by information that’s already on the internet” — a category that certainly might include B-17’s website.

There’s a raging debate at the moment around the ethics of AI training data. Some argue that it’s fair game for AI companies to trawl the open web; others say publishers should be compensated for their work. OpenAI, the maker of ChatGPT, has struck deals with multiple publishers to use their content to train models, including B-17.

“We don’t have any agreement in place that would give Meta access to our data for AI training purposes,” a B-17 spokesperson told me.

Facebook, like most other major tech firms, is going all-in on AI.

My experience also pointed to the “black box” nature of LLMs and AI. While you can talk generally about how they tend to behave, their inner workings and the exact reasons for their outputs often remain an un-auditable mystery. Even AI companies themselves don’t necessarily know why a specific output occurs.

This isn’t just an academic or intellectual-property problem. Errors made by LLMs can have very real consequences. In April, CBC News reported, a Canadian man asked Meta AI if a phone number he’d found online claiming to belong to Facebook’s support team was legitimate. Meta AI said it was. But it actually belonged to scammers, and the man was defrauded out of hundreds of dollars. As more and more tech companies integrate AI, positioning it as a helpful source of information, these incidents will likely only multiply.

Since I asked Meta’s communications team for help, I haven’t received any more messages from people looking for Meta AI.

I was never able to replicate the issue myself. When I tried, Meta AI incorrectly claimed it wasn’t available in WhatsApp (even though I was messaging it in WhatsApp). Another time, Meta AI said it didn’t have a phone number (it actually does, +1 313-555-0002, although it doesn’t advertise it publicly). Once, the chatbot gave me a non-working number with a Memphis, Tennessee, area code.

It’s unclear if I’ll ever get answers to all my questions about what happened (not least: Why only South Americans?). In the meantime, I’ll keep reporting on AI.
