The MaxDiff (BWS) methodology

MaxDiff was pivotal in unlocking our understanding of customer preferences. Without it, we would have relied on instincts and headed in the wrong direction. It was a game-changer for making informed decisions.
Paul D'Abruzzo, 2021

When to use MaxDiff

MaxDiff, short for "maximum difference," is a methodology for evaluating an individual's preferences across a range of options. Conceived by Professor Jordan Louviere, and originally called Best-Worst Scaling (BWS), this approach compares various options in order to rank them by popularity. The analysis can then be extended to specific groups, such as 'youth' or 'women', or other targeted sub-populations.

Nowadays the term 'MaxDiff' encompasses 'BWS', especially in marketing, but from an academic point of view, it should be the reverse! Typical items to test include:

Logos

Brand names

Slogans

Reasons to believe

Subscription models

Packaging

Favorite product

New features

A unique process at your service

Get an instant quote
01
Identify Your Needs

You determine what you want to test (e.g., logos, advertising messages, claims, etc.)

02
Request an Instant Quote

You get a quote by simply filling out our dedicated form and submitting your request

03
Initial Review

We review your project, provide feedback, and confirm its feasibility with our panelists.

04
Survey Preparation

We prepare the experimental design and the online survey. You have the option to review and provide feedback on the survey design.

05
Payment for Field Work

We work on trust up to this point; payment is only required when you are ready to start data collection.

06
Data Collection Begins

We commence the field work, typically lasting 3-5 days.

07
Analysis and Reporting

We need an additional 24-48 hours after field work to analyze the data and prepare our report.

How the MaxDiff methodology works

Imagine we have a list of potential slogans for our dog treats, and we're eager to determine which ones resonate most with dog owners. To effectively gauge their popularity and appeal, we'll use a MaxDiff analysis, which relies on a specialized survey technique. This approach presents respondents with different sets of slogans and asks them to choose the most and least appealing options in each set. Through this method, we can rank the slogans based on consumer preference, ensuring that our final branding choices are backed by solid data.

In a MaxDiff task, respondents are shown sets of 4-6 items from a larger list. They are asked to choose the most and least appealing item in each set. This process is repeated with various combinations of items.
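
As a sketch of how such question sets can be assembled, the Python snippet below draws repeated subsets from a list of candidate items. It is illustrative only: the slogans and function names are invented, and this simple random draw stands in for a real survey design.

```python
import random

def make_maxdiff_tasks(items, set_size=4, n_tasks=6, seed=42):
    """Draw repeated subsets of items for a MaxDiff questionnaire.

    A simplified random design for illustration; a production design
    would also control how often each item (and each pair of items)
    appears across tasks.
    """
    rng = random.Random(seed)
    return [rng.sample(items, set_size) for _ in range(n_tasks)]

# Hypothetical dog-treat slogans, invented for this example:
slogans = ["Tail-wagging taste", "Vet approved", "All-natural bites",
           "Crunchy and healthy", "Made with real chicken", "Treats they beg for"]

tasks = make_maxdiff_tasks(slogans, set_size=4, n_tasks=6)
for i, task in enumerate(tasks, 1):
    print(f"Task {i}: {task}")
```

Each printed task is one screen of the survey: the respondent picks the most and least appealing slogan from the four shown.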



At the end of data collection, in which typically 50 to 400 respondents are surveyed, we are able to estimate a mathematical model that quantitatively reflects their preferences. This model uses the 'most preferred' and 'least preferred' choices from each task to calculate preference scores for each slogan. These scores allow us to rank the slogans by their overall attractiveness. The model quantifies the relative importance of each slogan, providing a precise view of consumer preferences.
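
The full analysis fits a choice model (for example a multinomial logit or hierarchical Bayes model), but a simple counting approximation conveys the intuition: an item's score is how often it was picked as best, minus how often it was picked as worst, divided by how often it was shown. The sketch below uses invented responses and function names.

```python
from collections import defaultdict

def count_scores(responses):
    """Approximate MaxDiff preference scores by simple counting.

    responses: list of (shown_items, best, worst) tuples.
    Returns {item: (best_count - worst_count) / times_shown}.
    A real analysis would fit a choice model; counting is a
    simplified stand-in for illustration.
    """
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for items, b, w in responses:
        for it in items:
            shown[it] += 1
        best[b] += 1
        worst[w] += 1
    return {it: (best[it] - worst[it]) / shown[it] for it in shown}

# One hypothetical respondent's answers to three tasks:
responses = [
    (["A", "B", "C", "D"], "A", "D"),
    (["A", "C", "D", "E"], "A", "E"),
    (["B", "C", "D", "E"], "B", "D"),
]
scores = count_scores(responses)
print(sorted(scores.items(), key=lambda kv: -kv[1]))  # items ranked best to worst
```

Here item A, always chosen as best when shown, lands at the top of the ranking, while D, chosen as worst twice, lands at the bottom.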

What’s wrong with asking people to rate or rank items?

Problems with RATING

(Illustration: a 1-to-5 rating scale from "Not appealing at all" to "Extremely appealing")

Results may not be discriminating (respondents rate everything as important)

The scale is arbitrary and does not convey the strength of importance

People in different cultures use scales differently

Cannot handle a long list

Problems with RANKING

People are good at picking the extremes, but their preferences for anything in between may be inaccurate

Only tells you the order of importance, not the strength of importance

Cannot handle a long list

How does MaxDiff (BWS) solve these problems?



Always generates discriminating results, as respondents are asked to choose the BEST and WORST option, which simulates real-world behaviour – people make choices and trade-offs

The results will tell you the order and strength of importance of all items

There is no scale bias and results are NOT subject to cultural differences

Can handle a long list of items as people are given a few items in each task to evaluate

Can get accurate preferences of all items as respondents evaluate only a few items in each exercise

Identifying Optimal Marketing Messages with MaxDiff

By engaging customers in making comparative judgments, MaxDiff reveals the most persuasive messages through a clear, quantitative preference ranking. This method beats traditional ratings by minimizing biases, accounting for inconsistencies, and delivering insights that are both actionable and accurate. The insights gained enable businesses to refine their marketing strategies across various channels, ensuring that every message resonates strongly with their target audience.

Ultimately, MaxDiff empowers marketers to craft data-driven campaigns that significantly enhance engagement and return on investment, making it an indispensable tool for optimizing marketing effectiveness.

Our innovation: The dual response

Our MaxDiff studies can include a "Dual Response" question to enhance the insights we gather. This option is customizable to fit the unique needs of your study.

What is it?

After identifying the top choice in a MaxDiff task, we ask participants a "Dual Response" question to gauge their true commitment to their preference, determining if they would realistically subscribe to, buy, or interact with it.

Customization for relevance:

We tailor Dual Response to fit your study, from assessing willingness to "join" a party, "subscribe" to a gym, to decisions on "buying" or "renting." The aim is to understand the impact of each option on your specific outcome.

We innovate further: Balanced design

Unlike our competitors, we employ an in-house algorithm to create a 'balanced design', rather than relying on a random display. This approach maximizes the number of pairs, triplets, and so on that are presented together, leading to a comprehensive and fair assessment.

By opting for a balanced design over a random one, we eliminate potential biases and uneven distributions of items across the survey. This method ensures more accurate and reliable data collection. The balanced distribution of items for comparison not only enhances the integrity of the results but also provides deeper insights and more actionable outcomes from the survey.
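
The in-house algorithm itself is not described here, but the goal of a balanced design can be illustrated with a simple greedy heuristic: when filling each task, prefer the item that has co-appeared least often with the items already chosen, so that pair co-appearances spread out evenly. Everything in this sketch (function names, the eight-item list) is hypothetical.

```python
from itertools import combinations
import random

def pair_coverage(design):
    """Count how often each pair of items appears together in a design."""
    counts = {}
    for task in design:
        for pair in combinations(sorted(task), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return counts

def greedy_balanced_design(items, set_size, n_tasks, seed=0):
    """Greedily build tasks that spread pair co-appearances evenly.

    Illustrative only; a real balanced-design algorithm would also
    balance how often each individual item appears.
    """
    rng = random.Random(seed)
    seen = {p: 0 for p in combinations(sorted(items), 2)}
    design = []
    for _ in range(n_tasks):
        task = [rng.choice(items)]
        while len(task) < set_size:
            # Pick the item whose pairs with the current task are least seen.
            nxt = min((it for it in items if it not in task),
                      key=lambda it: sum(seen[tuple(sorted((it, t)))] for t in task))
            task.append(nxt)
        for p in combinations(sorted(task), 2):
            seen[p] += 1
        design.append(task)
    return design

design = greedy_balanced_design(list("ABCDEFGH"), set_size=4, n_tasks=8)
counts = pair_coverage(design)
print("pair co-appearance counts range from", min(counts.values()), "to", max(counts.values()))
```

A purely random design tends to show some pairs many times and others never; the narrower the range printed above, the more evenly respondents' comparisons are distributed.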

Some competitors merely tally selections of 'best' or 'worst,' losing nuanced data. For instance, if A is preferred over B and B over C, it logically follows that A should be preferred over C. While this may seem simplistic, it highlights the depth of analysis that complex statistics can provide. Such analysis not only catches inconsistencies but also enhances the accuracy of the results. It is also crucial for implementing dual response mechanisms.
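
The transitivity point above can be checked directly from the data: each best/worst answer implies a set of pairwise preferences, and a cycle such as A > C, C > E, E > A signals an inconsistency that plain tallying would miss. The sketch below, with invented items and function names, shows the idea.

```python
def implied_pairs(task_items, best, worst):
    """Pairwise preferences implied by one best/worst answer.

    Choosing 'best' implies best > every other shown item; choosing
    'worst' implies every other shown item > worst.
    """
    pairs = set()
    for it in task_items:
        if it != best:
            pairs.add((best, it))
        if it != worst:
            pairs.add((it, worst))
    return pairs

def find_intransitivities(all_pairs):
    """Return (a, b, c) triples where a > b and b > c, yet also c > a."""
    bad = []
    for a, b in all_pairs:
        for b2, c in all_pairs:
            if b2 == b and (c, a) in all_pairs:
                bad.append((a, b, c))
    return bad

pairs = implied_pairs(["A", "B", "C", "D"], best="A", worst="D")
pairs |= implied_pairs(["A", "C", "E", "F"], best="C", worst="A")  # contradicts A > C
print(find_intransitivities(pairs))
```

A single answer can never contradict itself, so the first task alone yields no cycles; the second answer, which reverses A versus C, produces detectable intransitive triples.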

The optimal methodology

Criteria compared across MaxDiffPro, MaxDiff (BWS), and traditional ranking:

Consumer Preferences

Enhanced Discrimination

Clear Preference Hierarchy

AI Quality Control

Balanced Design

Flexible Dual Response

Modeling Choice Data

Preference Structure

Multi Attribute MaxDiff

Confidence Intervals


MaxDiff (Maximum Difference Scaling) offers a distinct advantage in gathering precise and reliable data. It enhances the survey experience by prompting respondents to make specific choices, accurately reflecting their real preferences. This method stands in stark contrast to rating scales, which can often yield ambiguous results.