Measuring the Invisible: Scales and Latent Variables

Teaching Note

Author

Larry Vincent

Published

February 11, 2026

Introduction

Imagine you’re conducting research for a retail brand that wants to understand customer loyalty. You add a question to your survey: “How loyal are you to this brand?” Respondents answer on a scale from 1 to 7. You collect the data, run your analysis, and report that the average loyalty score is 5.2.

But what have you actually measured? When a respondent circles “5,” what are they telling you? One person might be saying they’d recommend the brand to friends. Another might mean they’d pay a premium for it. A third might simply mean they haven’t bothered to switch. They’re all potentially correct, because each of those is a facet of loyalty. Loyalty is a multi-dimensional psychological construct, and a single question lets each person report on whichever dimension is most salient to them. The result is a column of numbers that looks precise, but you have no way of knowing which facet of loyalty each number represents.

This is one of the most important concepts in survey research, and one of the most frequently mishandled. Latent variables are everywhere in marketing: brand trust, customer satisfaction, purchase intent, perceived quality, emotional attachment. You cannot measure any of them directly. You can only trace their presence through carefully designed questions that capture different facets of the underlying construct. Get this wrong and your research produces numbers that feel precise but mean very little. Get it right and you gain genuine insight into what’s happening inside your customers’ heads.

When and Why

You’ll encounter latent variables whenever your research question involves something psychological—an attitude, a belief, a perception, a tendency. If you’re measuring purchase frequency or dollars spent, you’re dealing with observable behavior. But some of the things we most want to measure in marketing, like satisfaction, trust, perceived quality, and emotional attachment, are amorphous psychological constructs. They’re real; they exist in people’s minds, but they have no single, directly observable representation. The only way to detect them is by measuring the qualities that constitute them—the specific beliefs, perceptions, and judgments that ladder up to the construct itself.

Consider some common marketing research objectives: understanding how customers perceive your brand relative to competitors, measuring satisfaction to predict retention, identifying attitude-based segments, testing whether an advertisement shifts brand perceptions. Each of these requires measuring something invisible. And that measurement challenge shapes how you design your survey, how many questions you ask, and how you analyze the results.

Scales—sets of survey items designed to measure a latent variable—are the standard solution. You’ve seen them countless times: the series of statements where respondents indicate their level of agreement, the batteries of questions that all seem to be asking about the same thing in slightly different ways. That apparent redundancy is the point. Because you cannot observe the construct directly, you triangulate. Multiple items, each capturing a slightly different aspect of the same underlying variable, combine to produce a more reliable and valid measure than any single question could provide.

How It Works

Start with the construct, not the questions

The most common mistake in scale development is jumping straight to writing survey questions. Resist this impulse. Before you write a single item, you need to define the construct you’re trying to measure. What is it, theoretically? What are its dimensions? How does it differ from related constructs?

Consider brand trust—a construct that matters enormously in marketing but proves slippery when you try to pin it down. The researcher Elena Delgado-Ballester, whose work on brand trust is widely cited, didn’t start by asking “what questions should I put on my survey?” She started by asking “what is brand trust, conceptually?” Her answer: brand trust has two distinct dimensions. The first is reliability—the belief that the brand will deliver on its promises and meet your expectations. The second is intentions—the belief that the brand would act in your interest if problems arose, that it wouldn’t take advantage of you.

This theoretical structure came before any survey items were written. It draws on decades of research into interpersonal trust and applies that framework to consumer-brand relationships. Only after the conceptual work was complete did scale development begin.

Write items that trace the construct

Once you have a clear theoretical structure, you write items that capture each dimension. The items should feel related—they’re all getting at the same underlying thing—but they shouldn’t be identical. Each item approaches the construct from a slightly different angle, which is why scales often feel repetitive to respondents. That repetition is doing important work.

Here are sample items from Delgado-Ballester’s brand trust scale, organized by dimension:

Reliability dimension:

  • “[Brand] is a brand that meets my expectations.”
  • “I feel confidence in [Brand].”
  • “[Brand] is a brand that never disappoints me.”

Intentions dimension:

  • “[Brand] would be honest and sincere in addressing my concerns.”
  • “I could rely on [Brand] to solve a problem with the product.”
  • “[Brand] would make any effort to satisfy me.”

Notice how the items within each dimension are asking similar questions in different ways. They’re not identical—each brings a slightly different nuance—but they all trace the same underlying facet of trust. When responses to these items correlate strongly with each other (and less strongly with items from the other dimension), you have evidence that you’re measuring something real.
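To see what that correlation pattern looks like, here is a small Python sketch using entirely hypothetical simulated data (not real survey responses). Two latent factors stand in for the reliability and intentions dimensions, and each 7-point item is its factor plus noise. Items that share a factor correlate strongly with each other; items from different dimensions barely correlate at all.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500  # hypothetical number of respondents

# Two hypothetical latent factors standing in for the trust dimensions
reliability = rng.normal(size=n)
intentions = rng.normal(size=n)

def item(latent):
    """Simulate a 7-point item: the latent factor plus noise, rounded and clipped."""
    raw = latent + rng.normal(scale=0.6, size=n)
    return np.clip(np.round(raw * 1.2 + 4), 1, 7)

# Three items per dimension, mirroring the scale above
# (columns 0-2: reliability items, columns 3-5: intentions items)
items = np.column_stack([item(reliability) for _ in range(3)] +
                        [item(intentions) for _ in range(3)])

corr = np.corrcoef(items, rowvar=False)
within = corr[0, 1]   # two reliability items
between = corr[0, 3]  # a reliability item vs. an intentions item
print(f"within-dimension r = {within:.2f}, between-dimension r = {between:.2f}")
```

Run on this simulated data, the within-dimension correlation comes out strong while the between-dimension correlation hovers near zero—exactly the signature you hope to see in real scale data.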

Validating that your items work

How do you know whether your scale items actually measure the construct you intend? Two statistical tools do most of the heavy lifting. Exploratory Factor Analysis (EFA) helps you discover whether your items group together into coherent dimensions. If you’ve written six items expecting them to capture two dimensions of brand trust, EFA will reveal whether the data supports that structure—or whether your items are loading onto factors you didn’t anticipate. Cronbach’s alpha measures internal consistency—the degree to which items within a dimension are correlated with each other. A high alpha (typically 0.70 or above) suggests your items are reliably capturing the same underlying construct, which gives you confidence to combine them into a single score for your analysis.
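The alpha computation itself is simple enough to preview here. The formula is alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), where k is the number of items. The six respondents and their answers below are invented for illustration; the function is a minimal sketch, not a full treatment.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) array.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of each respondent's total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 3 items on a 7-point scale
responses = np.array([
    [6, 7, 6],
    [5, 5, 6],
    [3, 2, 3],
    [7, 7, 7],
    [4, 4, 5],
    [2, 3, 2],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # → alpha = 0.97
```

Because these invented respondents answer the three items very consistently, alpha is high—well above the conventional 0.70 threshold—which would justify averaging the items into a single score.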

We’ll cover the application of these methods later in the course. For now, understand that they exist and what they accomplish: EFA explores structure, and Cronbach’s alpha tests reliability. Both help you move from individual survey items to validated measures you can trust.
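For the curious, here is a rough preview of the EFA side of that work. One common first pass at "how many dimensions do my items contain?" is the Kaiser criterion: count the eigenvalues of the inter-item correlation matrix that exceed 1. The sketch below applies it to simulated data in which six hypothetical items are driven by two latent factors; a full EFA adds rotation and loading interpretation on top of this.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400  # hypothetical number of respondents

# Simulated data: six items driven by two hypothetical latent factors
f1 = rng.normal(size=n)  # e.g., "reliability"
f2 = rng.normal(size=n)  # e.g., "intentions"

def noise():
    return rng.normal(scale=0.6, size=n)

items = np.column_stack([
    f1 + noise(), f1 + noise(), f1 + noise(),  # dimension 1 items
    f2 + noise(), f2 + noise(), f2 + noise(),  # dimension 2 items
])

# Kaiser criterion: count eigenvalues of the correlation matrix above 1
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)
n_factors = int((eigenvalues > 1).sum())
print(f"suggested number of factors: {n_factors}")  # → 2
```

The criterion recovers the two-dimensional structure that was built into the simulation—the same kind of evidence you would look for when checking whether brand trust items really separate into reliability and intentions.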

Respect the difficulty

Developing a valid, reliable scale is genuinely hard. It requires theoretical grounding, careful item writing, extensive testing, and statistical validation. Researchers spend entire careers refining measures of constructs like customer satisfaction or brand personality.

This is why, for most applied research projects, you should look for existing validated scales rather than inventing your own. Published scales have been tested across multiple samples, refined based on statistical analysis, and scrutinized by peer reviewers. When you use a validated scale, you inherit all of that work.

Where do you find them? Marketing journals—the Journal of Marketing, Journal of Consumer Research, Journal of Marketing Research—are full of scale development articles and studies that use established measures. The method sections of empirical papers typically include the scale items or cite their source. Our library provides full digital access to these journals, and the librarians can help you navigate the literature.

AI can also assist your search. Ask it to identify validated scales for whatever construct you’re measuring, but—and this is critical—always ask for specific citations and then verify them yourself. AI occasionally invents plausible-sounding references that don’t exist. The citations are your path back to the original work, where you can evaluate whether the scale fits your context and review the validation evidence.

AI Exploration Prompts

Use these prompts to deepen your understanding of scales and latent variables:

  1. “I want to measure [construct] in a survey. Can you explain what this construct means theoretically, identify its key dimensions, and suggest validated scales from published marketing research? Please provide specific citations I can verify.”
  2. “Here are the survey items I’m considering for measuring [construct]: [list items]. Do these items appear to capture a single dimension or multiple dimensions? Are there gaps in what I’m measuring?”
  3. “What’s the difference between [construct A] and [construct B]? They seem related but I want to make sure I’m measuring the right thing for my research question.”

Further Resources

USC Gaughan & Tiberti Business Library: Access the major marketing journals (JM, JCR, JMR) through the library’s digital collections. Search for “scale development” plus your construct of interest.

Looking Ahead: In Session 26, we’ll work through the application of Exploratory Factor Analysis and Cronbach’s alpha so you can validate your own scales. For now, focus on understanding what latent variables are and why measuring them requires this careful, multi-item approach.