MaxDiff, short for "maximum difference," is a methodology for evaluating an individual's preferences across a range of options. Conceived by Professor Jordan Louviere and originally labelled Best-Worst Scaling (BWS), this approach compares various options in order to rank them by popularity. The analysis can then be extended to specific groups, such as 'youth' or 'women', or other targeted sub-populations.
Nowadays the term 'MaxDiff' subsumes 'BWS', especially in marketing, but from an academic point of view it should be the reverse!
Logos
Brand names
Slogans
Reasons to believe
Subscription models
Packaging
Favorite product
New features
You determine what you want to test (e.g., logos, advertising messages, claims, etc.)
You get a quote by simply filling out our dedicated form and submitting your request
We review your project, provide feedback and confirm its feasibility with our panelists.
We prepare the experimental design and the online survey. You have the option to review and provide feedback on the survey design.
We work on trust up to this point: payment is only required when you are ready to start data collection.
We commence the fieldwork, which typically lasts 3-5 days.
We need an additional 24-48 hours after fieldwork to analyze the data and prepare our report.
At the end of data collection, where typically 50 to 400 respondents are surveyed, we are able to estimate a mathematical model that quantitatively reflects their preferences. This model uses the 'most preferred' and 'least preferred' choices for each task to calculate preference scores for each slogan. These scores allow us to rank the slogans based on their overall attractiveness. The model quantifies the relative importance of each slogan, thus providing a precise view of consumer preferences.
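In practice the model described above is typically a choice model (such as a multinomial logit) fitted to the best/worst picks. As a rough illustration of the underlying intuition, the sketch below computes a simple count-based score: best picks minus worst picks, normalised by how often each item was shown. The item names and responses are invented for the example.

```python
# Hypothetical MaxDiff responses: (items shown in the task, best pick, worst pick).
from collections import Counter

tasks = [
    (["A", "B", "C", "D"], "A", "D"),
    (["A", "B", "C", "D"], "B", "D"),
    (["A", "C", "D", "B"], "A", "C"),
]

best, worst, shown = Counter(), Counter(), Counter()
for items, b, w in tasks:
    shown.update(items)   # how often each item was presented
    best[b] += 1          # times chosen as 'most preferred'
    worst[w] += 1         # times chosen as 'least preferred'

# Best-minus-worst score, normalised by exposure, then ranked.
scores = {i: (best[i] - worst[i]) / shown[i] for i in shown}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # → ['A', 'B', 'C', 'D']
```

A production analysis would estimate utilities statistically rather than by raw counts, but the ranking logic is the same: items picked as 'best' often and 'worst' rarely rise to the top.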
Results may not be discriminating (respondents tend to rate everything as important)
The scale is arbitrary and does not convey the strength of importance
People in different cultures use scales differently
Cannot handle a long list
People are good at picking the extremes, but their preferences for anything in between may be inaccurate
Only tells you the order of importance, not the strength of importance
Cannot handle a long list
Always generates discriminating results, as respondents are asked to choose the BEST and WORST option, which simulates real-world behaviour: people make choices and trade-offs
The results will tell you the order and strength of importance of all items
There is no scale bias and results are NOT subject to cultural differences
Can handle a long list of items, as people are given only a few items in each task to evaluate
Can get accurate preferences for all items, as respondents evaluate only a few items in each exercise
By engaging customers in making comparative judgments, MaxDiff reveals the most persuasive messages through a clear, quantitative preference ranking. This method outperforms traditional ratings by minimizing biases, accounting for inconsistencies and delivering insights that are both actionable and accurate. The insights gained enable businesses to refine their marketing strategies across various channels, ensuring that every message resonates strongly with their target audience.
Ultimately, MaxDiff empowers marketers to craft data-driven campaigns that significantly enhance engagement and return on investment, making it an indispensable tool for optimizing marketing effectiveness.
Our MaxDiff studies can include a "Dual Response" question to enhance the insights we gather. This option is customizable to fit the unique needs of your study.
What is it?
After identifying the top choice in a MaxDiff task, we ask participants a "Dual Response" question to gauge their true commitment to their preference, determining if they would realistically subscribe to, buy, or interact with it.
Customization for relevance:
We tailor Dual Response to fit your study, from assessing willingness to "join" a party, "subscribe" to a gym, to decisions on "buying" or "renting." The aim is to understand the impact of each option on your specific outcome.
Unlike our competitors, we employ an in-house algorithm to create a 'balanced design', rather than relying on a random display. This approach maximizes the number of pairs, triplets, and larger combinations that are presented, leading to a comprehensive and fair assessment.
By opting for a balanced design over a random one, we eliminate potential biases and uneven distributions of items across the survey. This method ensures more accurate and reliable data collection. The balanced distribution of items for comparison not only enhances the integrity of the results but also provides deeper insights and more actionable outcomes from the survey.
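To illustrate what "balance" means here, the sketch below counts how often each pair of items appears together across tasks. In a well-balanced design these counts are as even as possible, with no pair left unseen; in the deliberately imperfect toy design below, one pair is never shown together. The design and item names are invented for the example, not our actual algorithm.

```python
# Toy MaxDiff design: 6 items, 6 tasks of 3 items each (illustrative only).
from itertools import combinations
from collections import Counter

items = ["A", "B", "C", "D", "E", "F"]
design = [
    ("A", "B", "C"), ("A", "D", "E"), ("B", "D", "F"),
    ("C", "E", "F"), ("A", "C", "F"), ("B", "D", "E"),
]

# Count how often each pair of items is shown together in the same task.
pair_counts = Counter()
for task in design:
    for pair in combinations(sorted(task), 2):
        pair_counts[pair] += 1

all_pairs = set(combinations(items, 2))
uncovered = all_pairs - set(pair_counts)
print(f"uncovered pairs: {uncovered}")          # {('C', 'D')} never co-occurs
print(f"max co-occurrence: {max(pair_counts.values())}")  # some pairs appear twice
```

A balanced-design algorithm would drive the gap between the most- and least-frequent pair counts toward zero, which is exactly the unevenness a random display cannot guarantee to avoid.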
Some competitors merely tally selections of 'best' or 'worst,' losing nuanced data. For instance, if A is preferred over B and B over C, it logically follows that A should be preferred over C. While this may seem simplistic, it highlights the depth of analysis that complex statistics can provide. Such analysis not only catches inconsistencies but also enhances the accuracy of the results. It is also crucial for implementing dual response mechanisms.
MaxDiffPro
MaxDiff (BWS)
Traditional ranking
Consumer Preferences
Enhanced Discrimination
Clear Preference Hierarchy
AI Quality Control
Balanced Design
Flexible Dual Response
Modeling Choice Data
Preference Structure
Multi Attribute MaxDiff
Confidence Intervals
MaxDiff (Maximum Difference Scaling) offers a distinct advantage in gathering precise and reliable data. It enhances the survey experience by prompting respondents to make specific choices, accurately reflecting their real preferences. This method stands in stark contrast to rating scales, which can often yield ambiguous results.