Ethics in Marketing Research

Overview

The realities you will face in practice, especially in the age of AI.

Presented by:
Larry Vincent
Professor of the Practice, Marketing
Presented to:
MKT 512
April 23, 2026

Widening the lens

Algorithmic bias is about downstream outcomes — what happens when machines make decisions at scale.

But research ethics doesn’t start with the algorithm. It starts the moment we decide to ask a question.

Three lenses for the next 20 minutes:

  • Obligations to the people who give us data
  • Integrity in the research act itself
  • Honesty in how we report, and in how findings get used

Part 1 — Obligations to respondents

The re-identification problem

“Anonymized” rarely means anonymous.

Netflix Prize (2006)

100M “anonymized” ratings released as a competition dataset.

Researchers Narayanan and Shmatikov cross-referenced the ratings with public IMDb reviews and re-identified users within weeks.

Netflix cancelled the sequel contest and settled a privacy lawsuit.
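How little the attack takes is worth seeing. A minimal sketch of the linkage in Python/pandas (every dataset, name, and value below is invented for illustration): the mechanism is just a join on the quasi-identifiers the two datasets share.

```python
import pandas as pd

# Toy stand-ins for the two datasets. All IDs, titles, and values are
# hypothetical; the point is the mechanism, not the data.
netflix = pd.DataFrame({
    "user_id": ["u1", "u1", "u2"],             # "anonymized" internal ID
    "movie":   ["Brazil", "Heat", "Alien"],
    "rating":  [5, 3, 4],
    "date":    ["2005-03-01", "2005-03-04", "2005-06-10"],
})
imdb = pd.DataFrame({
    "imdb_user": ["jdoe", "jdoe"],             # public, named account
    "movie":     ["Brazil", "Heat"],
    "rating":    [5, 3],
    "date":      ["2005-03-01", "2005-03-05"],
})

# Join on shared quasi-identifiers. (The real attack also tolerated
# fuzzy dates and imperfect rating matches.)
linked = netflix.merge(imdb, on=["movie", "rating"])
print(linked[["user_id", "imdb_user"]].drop_duplicates())
# u1 -> jdoe: the "anonymous" rating history now has a name.
```

In the published attack, a handful of ratings of less-popular titles, with dates known only approximately, was enough to identify most records uniquely.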

Target (2012)

Statistician Andrew Pole built a pregnancy prediction score from purchase patterns — unscented lotion, calcium, large purses that could double as diaper bags.

A father learned his teenage daughter was pregnant from the coupons Target mailed.

Target never asked. They inferred.
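Mechanically, such a score is unremarkable. A minimal sketch of a purchase-based propensity model (scikit-learn; every feature, label, and number here is fabricated for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented purchase flags: unscented lotion, calcium supplement, large tote.
X = rng.integers(0, 2, size=(500, 3))
# Invented outcome, loosely correlated with the basket pattern.
y = (X.sum(axis=1) + rng.normal(0, 1, 500) > 2).astype(int)

model = LogisticRegression().fit(X, y)

# Score a shopper who bought all three items: a "pregnancy score"
# inferred entirely from behavior. No one was ever asked anything.
print(model.predict_proba([[1, 1, 1]])[0, 1])
```

The ethical weight sits not in the model but in the inference: the shopper never volunteered the fact being predicted.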

Sugging and frugging

Sugging: Selling Under the Guise of research.

Frugging: FundRaising Under the Guise of research.

A fake “survey” that’s really a sales pitch or a donation ask.

Still rampant. Erodes public trust in legitimate research — a significant reason survey response rates have collapsed over the past twenty years.

Part 2 — Integrity in the research act

Leading questions

Loaded

“How important is it for companies to act ethically?”

“Do you agree that our product is the best on the market?”

“Would you rather have a safe car or a cheap car?”

Balanced

“How important, if at all, is … ?”

“Compared to alternatives you’ve used, how would you rate … ?”

Balanced choices with genuine trade-offs.

A question that can only produce one answer isn’t measurement. It’s theater.

How researchers fool themselves

HARKing — Hypothesizing After the Results are Known. You ran twenty tests, one was significant, and you report it as your hypothesis.

p-hacking — Trying many cuts of the data, reporting the ones that “worked” (simulated in the sketch after this list).

The garden of forking paths (Gelman & Loken) — Even without explicit cheating, every analyst makes dozens of implicit choices. Researcher degrees of freedom produce false positives at scale.

Confirmation bias — The most corrosive bias is often the one no one names out loud. Especially when the sponsor has an expected answer.
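The arithmetic behind “you ran twenty tests, one was significant” is worth making concrete. A minimal simulation sketch (Python/SciPy; pure-noise data, so the true effect in every test is zero):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(512)  # arbitrary seed
n_sims, n_tests, n_per_group = 10_000, 20, 50

runs_with_hit = 0
for _ in range(n_sims):
    for _ in range(n_tests):
        # Both groups drawn from the SAME distribution: no real effect.
        a = rng.normal(size=n_per_group)
        b = rng.normal(size=n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            runs_with_hit += 1
            break

print(f"Analyses with at least one 'significant' result: "
      f"{runs_with_hit / n_sims:.0%}")
# ~64%, i.e. 1 - 0.95**20. Report only the hit, and noise becomes a finding.
```

Roughly two analysts in three who run twenty independent tests on pure noise will find something to report at p < .05.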

Cherry-picking in qualitative

You have 40 interview transcripts. You pick three vivid quotes to illustrate each theme.

Were they chosen because they were representative — or because they sounded good?

Qualitative research is not exempt from rigor. If anything, it requires more honesty, because the constraints are softer.

Part 3 — Reporting and downstream use

Loyalty to the story vs. loyalty to the data

Research exists to inform decisions. That means it must perform in the boardroom.

But “performs well” should mean performs well because it is true — not because it is tidy.

The most common soft fraud in marketing research is not fabrication. It is the slow smoothing of inconvenient findings until a clean narrative emerges.

The “sign your name” test

Would you put your name on this slide?

On this finding?

On this recommendation?

If not — why are you sending it forward?

Insights weaponized: Cambridge Analytica

A Facebook personality quiz app harvested data on roughly 87 million users — quiz takers and their friends, who never consented.

That data built psychographic profiles used for micro-targeted political advertising in the 2016 US election.

Standard marketing-research techniques, applied at scale, without consent, to exploit psychological vulnerabilities.

The lesson isn’t that targeting is evil. It’s that knowing how your findings will be used is part of your job.

Dark patterns and vulnerability targeting

Research can be used to help customers — or to exploit them.

  • “Distressed gambler” as a segment
  • Addiction-optimized app design
  • FOMO countdown timers that aren’t real
  • “Only 1 left!” when there are hundreds

The question isn’t “Is this legal?”

It’s “Would I be proud to explain this to my own customers?”

Discussion

Pick one

  1. Is there a real ethical difference between a question designed to get a truthful answer and one designed to produce the answer the client wants to hear?

  2. You discover that your findings, presented honestly, will probably kill your client’s pet project. What do you owe them?

  3. In your own team project this semester — at what point, if ever, did you pause to ask whether respondents fully understood what you were doing with their data?

  4. Which is the bigger ethical threat in marketing research today: dishonest researchers, or honest researchers who don’t think carefully about how their findings will be used?

Bringing it back

Algorithmic bias, revisited

Every failure we just discussed — consent, integrity, honest reporting, thoughtful use —

becomes harder to police when the decisions happen:

  • in milliseconds
  • at the level of millions of individuals
  • inside a system no one fully understands

Algorithmic bias is not a new ethics.

It is the old ethics, automated.

Will you sign your name to this?