Pulling Consumer Perceptions From the JAR

Christine Homsey, Contributing Editor

May 28, 2009


Just about right (JAR) questions are used in consumer taste tests to give product developers feedback on whether an attribute or ingredient in a product is at its ideal level or requires adjustment. The scale used for these questions is bipolar: the first and last scale points represent too little or too much of an attribute, and the midpoint represents the just about right level.
Examples of typical JAR questions include:
     Rate the COLOR of this product: Much Too Light; Somewhat Too Light; Just About Right; Somewhat Too Dark; Much Too Dark.
     Rate the SWEETNESS of this product: Not Nearly Sweet Enough; Not Quite Sweet Enough; Just About Right; Somewhat Too Sweet; Much Too Sweet.
     Rate the AMOUNT OF SAUCE in this product: Not Nearly Enough; Not Quite Enough; Just About Right; Somewhat Too Much; Far Too Much.
When a 5-point scale is used, the results are typically collapsed into three categories: not enough, just about right and too much. JAR results are generally reported as three numbers per attribute. For example, a result of 20/70/10 means that 20% of the consumers thought the attribute intensity was too low (not enough), 70% thought it was just about right, and 10% thought it was too strong (too much). A general rule of thumb is that an attribute can be considered sufficiently optimized when at least 70% to 75% of the consumers marked the just about right option.
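To make the arithmetic concrete, here is a minimal Python sketch that collapses hypothetical 5-point JAR responses into the three reported categories and checks the result against the 70% rule of thumb. The response data and the exact cutoff are illustrative assumptions, not taken from any standard.

```python
from collections import Counter

# Hypothetical 5-point JAR responses for one attribute:
# 1 = much too weak ... 3 = just about right ... 5 = much too strong.
responses = [2, 3, 3, 1, 3, 4, 3, 3, 2, 3, 3, 3, 3, 3, 2, 3, 3, 3, 4, 3]

def collapse(score):
    """Map a 5-point rating to the three reported JAR categories."""
    if score < 3:
        return "not enough"
    if score > 3:
        return "too much"
    return "just about right"

counts = Counter(collapse(s) for s in responses)
n = len(responses)
pct = {k: 100 * counts[k] / n
       for k in ("not enough", "just about right", "too much")}

# Report in the conventional not enough / JAR / too much order, e.g. 20/70/10.
print(f"{pct['not enough']:.0f}/{pct['just about right']:.0f}/{pct['too much']:.0f}")

# Rule of thumb: roughly 70% to 75% JAR suggests the attribute is
# sufficiently optimized (70% is used here as the cutoff).
if pct["just about right"] >= 70:
    print("Attribute looks sufficiently optimized.")
else:
    print("Consider adjusting the attribute.")
```

With the made-up responses above, the script prints 20/70/10, matching the example reported earlier.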

JARred advantages
JARs have been widely used in food industry research for at least two decades, for the following reasons:
JARs are easy to understand. Provided that the words used to anchor the scales are true semantic opposites and the attributes are well understood by consumers, JARs are easily comprehended. They are a simple way to gather information from a target population of consumers, and the results are relatively easy to communicate in presentations.
JARs shortcut the development process. Well-optimized products can be developed via experimental design, in which attribute levels or treatments are systematically varied and the resulting products tested with consumers, or by using key attribute-driver studies, in which several products representing a variety of attributes and attribute levels are presented to consumers. From the data gathered in these studies, the optimal attribute levels for the target product can be predicted based on consumers' liking of each product in the test set.
Although the studies described above remain the gold standard in product research using consumers, they are not always undertaken because they require a greater up-front investment of time and money. JAR questions provide one means of shortcutting the product-development and consumer-testing process. Rather than making a sensory attribute an experimental factor in a study, JARs allow us to make the attribute a question on a questionnaire. The results can then be used to provide direction as to whether an attribute should be increased, decreased or left unchanged.
JARs can be used in tandem with liking (hedonic) questions to better understand which attributes have a large impact on consumer liking. This information can be used to prioritize which attributes should be adjusted.
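One common way to combine the two question types is a mean-drop (penalty) analysis: compare mean liking among consumers who rated an attribute just about right with mean liking in the not-enough and too-much groups. The Python sketch below illustrates the idea; the respondent data, the 9-point liking scale and the group labels are purely illustrative assumptions.

```python
# Illustrative mean-drop ("penalty") analysis for a single attribute.
# Each tuple is (liking score on a 9-point scale, collapsed JAR category);
# the numbers are made up for demonstration.
data = [
    (7, "just about right"), (8, "just about right"), (7, "just about right"),
    (6, "just about right"), (8, "just about right"), (7, "just about right"),
    (5, "not enough"), (4, "not enough"), (6, "not enough"),
    (4, "too much"), (5, "too much"),
]

def mean_liking(category):
    scores = [liking for liking, jar in data if jar == category]
    return sum(scores) / len(scores) if scores else None

jar_mean = mean_liking("just about right")

for category in ("not enough", "too much"):
    group = [liking for liking, jar in data if jar == category]
    if not group:
        continue
    drop = jar_mean - sum(group) / len(group)   # the "penalty" in liking points
    share = len(group) / len(data)              # proportion of all respondents
    print(f"{category}: {share:.0%} of consumers, mean liking drops by {drop:.1f} points")
```

Attributes that show both a large penalty and a large share of not-JAR respondents are usually the first candidates for adjustment.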

Understanding the pitfalls
Although JARs are commonly included in consumer questionnaires, their use still creates some debate. One reason for this controversy is that JARs are often employed incorrectly, or the resulting scores are interpreted too literally, which leads the developer astray and ultimately results in failure to improve the product.

However, the benefits of using JARs can outweigh the negatives if we understand the pitfalls, several of which are outlined below. The value of JAR scores in providing direction is directly related to the consumers' ability to use the words accurately and the researcher's ability to translate product differences into specific just about right scales. The risks and pitfalls of using JARs include:
     Trying to relate JARs directly to an ideal attribute intensity. JARs can provide directional information, but do not correspond to an exact percentage that an attribute or ingredient needs to be increased or decreased.
     Manipulating the wrong ingredient in response to a not-JAR rating. Consider chocolate flavor. Increasing cocoa doesn't necessarily increase the JAR ratings for chocolate flavor, but increasing the fat or vanilla in the formula very well may. Darkening the product color may also increase JAR ratings for chocolate.
     Halo/horns effects. Consumers sometimes have a tendency to rate most attributes either positively or negatively depending on their overall liking of the product. Rather than thinking about each attribute in an isolated, objective manner, the consumer may just be telling us over and over again, "I love this" or "I really dislike this."
     Failure to recognize a contrast effect. The JAR ratings for one product may be influenced by other products in the test set. When the product is retested within a different set of products, the JAR scores may change.
     Not recognizing a flavor profile (vs. flavor intensity) issue. Sometimes we'll find splits in JAR scores where large numbers of consumers rate the attribute as not enough and an equally large number rate it as too much. For instance, a banana flavor question may result in JAR ratings of 40/20/40. Rather than answering the question as it was intended (as a flavor intensity question), consumers may simply be indicating that they don't like the character of the flavor. Consumers may have marked not enough because the flavor did not taste like a real banana to them, or they may have rated it as too much because a bad flavor in any amount is too much. (A simple screen for such splits is sketched after this list.)
     Failure to recognize preference segments, as for spicy, hot foods. When developing a hot salsa, consumers who are recruited for taste testing should identify themselves as hot-food likers. Failure to recruit for specific preference segments will result in high percentages of not-JAR ratings.
     Lack of understanding about how people tend to rate certain attributes or ingredients. There are certain ingredients that consumers tend to rate as not enough, regardless of the amount in the formula. Commonly used examples are cheese on pizza, chocolate chips in cookies, and the amount of meat in soup. However, increasing the ingredients past a certain point will result in a product that is too greasy, too unhealthy, too costly, etc.
     Not recognizing that respondents may be giving you a morally or socially correct answer. For attributes that carry negative health connotations, JARs can induce a response bias. Examples are saltiness in soup and sweetness in desserts. Desserts are often rated as too sweet because the respondents feel that so much sugar is not good for them, but liking of the dessert would go down if the sweetness were reduced.
     Trying to use JARs for attributes that dont have a logical midpoint. An example is color hue (e.g., too orange to too red). Not all attributes are JARable.
     Using scale anchors that are not semantic opposites (too dry to too greasy, too sweet to too sour, and so on).
     Using words that have negative connotations. Greasiness is one example. Few consumers would say that a product is not nearly greasy enough. Some attributes don't lend themselves to JARs because one side of the scale would never be used.
     Using words that are complex or have multiple meanings (such as creaminess, richness or spiciness). Creaminess may mean percent of cream or percent of milkfat to the scientist, but can mean smoothness, thickness or dairy flavor to the consumer. When you ask consumers about the creaminess of ice cream, creaminess becomes synonymous with overall liking of the product. They may be thinking, "Isn't all good ice cream creamy?"
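As noted in the flavor-profile item above, a split such as 40/20/40 often signals a character problem rather than an intensity problem. A quick screen is to flag any attribute where both tails of the collapsed JAR distribution are large; in this Python sketch, the 30% cutoff and the example numbers are arbitrary illustrative assumptions.

```python
# Flag attributes whose collapsed JAR results look like a two-sided split,
# which may point to a flavor-profile rather than a flavor-intensity issue.
# Percentages are (not enough, just about right, too much); values are made up.
jar_results = {
    "sweetness": (20, 70, 10),
    "banana flavor": (40, 20, 40),
}

SPLIT_CUTOFF = 30  # percent in each tail; an illustrative choice, not a standard

for attribute, (low, jar, high) in jar_results.items():
    if low >= SPLIT_CUTOFF and high >= SPLIT_CUTOFF:
        print(f"{attribute}: split ratings ({low}/{jar}/{high}) -- "
              "probe the flavor character, not just its intensity")
```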
These are just some of the items to watch out for when using JAR scales. For a more comprehensive overview of JAR scales, see the manual recently published by ASTM International, West Conshohocken, PA: "Just-About-Right (JAR) Scales: Design, Usage, Benefits, and Risks."
Although the use and interpretation of JAR questions are somewhat artful, there is no need to reject JARs as a tool in your kit of methodologies. With a deeper awareness of what the JARs may be telling us, we can begin to understand what the consumer is really trying to say.

Christine M. Homsey is a senior project manager with Food Perspectives, a consumer guidance research firm in Plymouth, MN. After a decade of developing products for the grocery and restaurant industries, she has spent the past several years designing and managing consumer research projects. For more information about Food Perspectives, please visit foodperspectives.com.

 
