I use these numbers for my job, and sometimes for my own peace of mind. I build decks. I check reports. I talk with lab folks and policy people. And you know what? The stats on animal testing often don’t match the way people repeat them online. They look clean in a post. But in the real world, they get messy fast. For the nitty-gritty version of how I see these numbers spin out, check out my full breakdown.
I’m not here to scold. I’m here to share what I’ve seen, up close, with coffee on my sleeve and a spreadsheet that won’t behave.
The “90%” Drug Stat That Won’t Sit Still
You’ve seen it. “Ninety percent of drugs that pass animal tests fail in humans.” I even put that line on a slide once. It sounded sharp. A friend in pharma called me after the talk and said, “Kayla, that’s not quite it.”
He was right. The FDA has said that about 90% of drugs that enter human trials don’t make it to approval. Those drugs did have animal studies first. But the stat isn’t proof that animal tests “passed” those drugs. It’s not that simple. Some drugs looked OK in animals, then failed for many reasons in people—safety, dosing, side effects, or just no benefit. Also, folks quote 90%, 92%, even 95%. It shifts by time and field.
So when you see that one-liner? Ask: Is this about drugs entering trials, or about animal tests “passing” them? Big difference.
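If it helps to see the denominator spelled out, here's a toy calculation in Python. The counts are made up, not real FDA figures; the only point is what the percentage does and doesn't measure.

```python
# Hypothetical pipeline counts, for illustration only -- not real FDA data.
entered_human_trials = 100   # all of these had animal studies first
approved = 10

failure_rate = 1 - approved / entered_human_trials
print(f"{failure_rate:.0%} of drugs that entered human trials were not approved")

# The denominator is "drugs that entered human trials," not
# "drugs that animal tests passed." The stat says nothing about
# how many candidates animal studies screened out earlier.
```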
My Spreadsheet vs. The Mouse Room
Here’s a real one from my week. I pulled USDA APHIS numbers for my state. The report showed a few thousand animals used at local labs. That seemed low. Then I walked through a mouse room with a lab manager. She said, gently, “We use tens of thousands of mice and rats each year. Those aren’t in that report.”
She’s right. In the U.S., the Animal Welfare Act does not cover rats, mice, or birds bred for research. The USDA reports leave them out. But they are most of the animals used. Groups like NC3Rs and Speaking of Research say mice and rats make up the big majority, by a lot.
So the official number looked neat and small. The real number was much larger. That gap matters. It changes how we talk about scale, cost, and care.
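Here's a rough sketch of that gap in Python, with invented numbers standing in for a real state report, just to show why the "official" total and the working total can live so far apart.

```python
# Invented counts for one state, for illustration only.
usda_reported = {          # AWA-covered species show up in USDA APHIS reports
    "dogs": 300,
    "rabbits": 1_200,
    "nonhuman_primates": 150,
}
not_reported = {           # purpose-bred mice, rats, and birds are excluded
    "mice_and_rats_estimate": 40_000,
}

official_total = sum(usda_reported.values())
estimated_total = official_total + sum(not_reported.values())

print(f"Official USDA total:  {official_total:,}")
print(f"Estimated real total: {estimated_total:,}")
# The "official" figure isn't wrong -- it just counts a different thing.
```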
One Country Counts “Animals.” Another Counts “Procedures.”
A chart fooled me once. It compared the U.S. and Europe and said one side used more animals. But the EU counts “procedures.” The U.S. counts animals (and again, not all animals). If one mouse has two procedures, the EU count goes up by two. Same mouse, two lines. In the U.S., that might be one animal in the tally.
I learned this the hard way while fixing a slide before a meeting. I called our IACUC office (they oversee animal care). They walked me through it, step by step. “Make sure you say what the number is counting,” they said. Simple, but easy to miss.
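A tiny sketch of the same idea, with invented records: the unit of count changes the headline number even when nothing about the lab changes.

```python
# Invented procedure records: (animal_id, procedure) -- illustration only.
records = [
    ("mouse_001", "blood draw"),
    ("mouse_001", "imaging"),     # same mouse, second procedure
    ("mouse_002", "blood draw"),
]

procedures_counted = len(records)                         # EU-style: count procedures
animals_counted = len({animal for animal, _ in records})  # US-style: count animals

print(f"Procedures: {procedures_counted}")  # 3
print(f"Animals:    {animals_counted}")     # 2
```

Same three records, two honest totals; the chart just has to say which one it's plotting.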
Pain Categories: Not All Columns Mean the Same Thing
Another place where people trip: pain and distress. The USDA has columns—C, D, and E. C means no or little pain. D means pain was relieved with drugs. E means pain was not relieved, usually because pain-relief drugs would have interfered with the results. I’ve seen posts that say, “Most animals feel no pain.” But that’s not what Column C means every time. It’s more like “no or minor pain,” or “no more than a needle.” It’s still a serious topic.
Across the ocean, the EU reports “severity.” Different words, different bins. So you can’t match them line by line. I once tried. I ended up with a color-coded mess and a headache.
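What finally worked for me was keeping the two schemes as separate lookup tables and refusing to merge them. The wording below is my paraphrase, not official regulatory text.

```python
# Paraphrased category labels -- not official regulatory wording.
usda_pain_columns = {
    "C": "no more than slight or momentary pain or distress",
    "D": "pain or distress relieved with anesthesia, analgesia, or sedation",
    "E": "pain or distress not relieved (relief would interfere with the study)",
}

eu_severity_levels = {
    "non-recovery": "done entirely under anesthesia; animal not awakened",
    "mild": "short-term mild pain, suffering, or distress",
    "moderate": "short-term moderate or long-lasting mild pain or distress",
    "severe": "severe pain or distress, or long-lasting moderate",
}

for col, meaning in usda_pain_columns.items():
    print(f"USDA Column {col}: {meaning}")
for level, meaning in eu_severity_levels.items():
    print(f"EU severity '{level}': {meaning}")

# Different bins, different definitions: there is no clean line-by-line
# mapping between USDA columns and EU severity levels, so I report each
# scheme in its own terms instead of forcing a crosswalk.
```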
The Cosmetics Tangle
I once shared a post that said, “Cosmetics animal testing is banned in the U.S.” A friend who works in policy texted me, “Not quite.” There’s no blanket federal ban. Some states—like California and New York—ban the sale of new cosmetics tested on animals. The EU bans most cosmetics animal testing and the sale of newly tested products. But even there, chemicals rules under REACH can push testing for worker safety. It’s not a tidy story.
For a brand-specific snapshot, check out how Neutrogena navigates animal testing commitments and how Pantene’s policy reveals its own complications.
Also, Congress passed the FDA Modernization Act 2.0 in 2022. It removed the old legal requirement that new drugs be tested on animals. That doesn’t ban animal tests. It just means other methods can count too. See how the words shift? I now read those posts twice.
Newer in-vitro skin models, such as those highlighted by InvitroDerm, show how non-animal methods are stepping up to fill the gap.
Missing Years, Blocked Pages, Slow Data
A quick memory: in 2017, USDA APHIS pulled some inspection files from the web for a while. I had a deadline and sat there refreshing the page. Data goes missing. Or it comes a year late. Or the format changes. When people share old charts like they are fresh, I cringe a little. Time stamps matter.
Rehoming and End-of-Study: The Quiet Blind Spot
People ask me, “How many animals get adopted out?” I wish I had a clean number. Some schools post nice stories about beagle adoptions. Some don’t share much at all. There isn’t a standard national rollup. I’ve tried to track it. I made a small sheet with notes from facility reports and local laws. It felt like trying to hold water in my hands.
Two Talks That Changed How I Read These Numbers
- A vet tech showed me how refinement works. She talked about better housing, handling, and pain care. I saw how small changes helped. The 3Rs—Replace, Reduce, Refine—weren’t just words on a poster. NC3Rs has good guides on this. It made me less quick to judge a whole lab by one stat.
- A toxicology lead walked me through false positives and false negatives in tests. It sounded dry. It wasn’t. It explained why a test can look “right” in animals and still steer a team wrong for people. Or flag risk where there isn’t one. It’s not just “animal vs human.” It’s the test, the model, the dose, the method. (I sketch that math right after this list.)
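For anyone who likes seeing the math, here's a minimal confusion-matrix sketch with invented counts. It's not data from any real test; it just shows what "catches real risks" and "cries wolf" look like as numbers.

```python
# Invented counts for a hypothetical animal safety test -- illustration only.
true_positives  = 40   # test flagged a risk that was real in humans
false_positives = 20   # test flagged a risk that never appeared in humans
false_negatives = 10   # test missed a risk that did appear in humans
true_negatives  = 130  # test correctly saw no risk

sensitivity = true_positives / (true_positives + false_negatives)  # share of real risks caught
specificity = true_negatives / (true_negatives + false_positives)  # how often "safe" really means safe

print(f"Sensitivity: {sensitivity:.0%}")  # 80%
print(f"Specificity: {specificity:.0%}")  # ~87%
# A test can score decently on both and still steer a team wrong on a
# given drug -- it depends on the model, the dose, and the method.
```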
So… Are The Stats Useless?
No. I still use them. I just treat them like a weather report: helpful, but not the whole sky. Numbers guide budgets, 3Rs work, and policy talks. They also get spun. Mine have too. I’m not proud of that one slide. But I learned.
Quick Checks I Use Now
- What is being counted? Animals, procedures, or studies?
- Which animals? Are mice, rats, and birds included?
- What year is the data? Is it national, state, or just one facility?
- How were pain and severity scored? By USDA columns or EU levels?
- What’s the source? USDA APHIS, EU Commission, NC3Rs, FDA, or a group with a stance?
- Does the claim match the source words, or is it a leap?
I keep these on a sticky note. It lives next to my keyboard with a coffee ring on it.
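If you'd rather keep the sticky note in code form, a bare-bones version might look like this. The field names are mine, not any agency's schema.

```python
# Hypothetical claim record -- the fields mirror my sticky-note questions.
claim = {
    "unit": "procedures",            # animals, procedures, or studies?
    "includes_mice_rats_birds": False,
    "year": 2023,
    "scope": "national",             # national, state, or one facility?
    "severity_scheme": "EU levels",  # USDA columns or EU severity levels?
    "source": "EU Commission report",
    "matches_source_wording": True,  # or is the claim a leap beyond it?
}

required = ["unit", "includes_mice_rats_birds", "year", "scope",
            "severity_scheme", "source", "matches_source_wording"]

missing = [field for field in required if claim.get(field) is None]
if missing:
    print("Don't share this stat yet -- missing context:", ", ".join(missing))
else:
    print("Context noted. Now read the source one more time.")
```

It won't make a number true, but it forces me to write down the context before I hit share.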
One More Real Example
A local reporter asked me if our city’s labs used “only a few hundred animals” last year. She had a neat PDF from a USDA report. I said, “That’s not the full set. It leaves out mice and rats.” We called the lab’s comms team. They shared a high-level figure that was much higher. The story changed. It got more honest. No one loved the new number, but it was true to the data.
My Take, Plain and Simple
- The 90% drug stat