Translating Measurement Theory Into Everyday Language: Tips For Effective Communication
By Chris Orem
Reprinted with permission from the National Council on Measurement in Education Newsletter
As I have confessed in earlier columns, I am partial to higher education assessment. I spend the majority of my day learning about assessment, conducting assessment, or consulting with someone about assessment. Over the past two years, I've worked with faculty members to craft objectives, analyze test items, and write reports that summarize assessment results. One of the lessons I learned quite quickly from working with faculty—and other professionals in higher education—is that those of us with measurement expertise speak a language that very few people outside our field understand. Measurement and statistics are technical, complicated subjects, and most clients I've worked with don't have the time or desire to understand the finer points of a mixed ANOVA or to learn how to interpret the output from a CFA. I've watched multiple faculty and staff zone out when I start to stress the importance of reporting inter-rater reliability. These experiences have taught me that being able to communicate measurement concepts to someone with no background in the field is incredibly important. A client who doesn't understand why we have assumptions in statistics, or why reliability matters when making inferences about test scores, is more likely to make ill-advised decisions about assessment results. Although I still have a long way to go in honing my communication skills, the following are some tips I've used to communicate technical concepts effectively to higher education stakeholders.
- Examples are crucial. I'm sure we've all been in a class where the professor explains a complicated concept and uses an everyday example to make the point clearer. If you're like me, these examples are not only useful but tend to stick with you for a while. The same can be said about communicating measurement concepts such as reliability or item performance to faculty or clients who lack this foundational knowledge. When presenting technical information to anyone with a specialty outside of statistics or measurement, think of examples your audience will understand. If you know that the person with whom you're working enjoys sports, think of ways to relate baseball to reliability. If you're meeting with your client for the first time, consider drawing on his or her field of interest for material. By using familiar examples, even basic ones, you can translate measurement concepts into a language your client can understand. For instance, if you are helping a professor of Theater and Dance understand the importance of inter-rater reliability, try comparing multiple raters scoring a rubric to multiple judges critiquing a dance performance to drive home your point.
- Your clients are experts in their own fields. Just as you may not understand the finer points of mechanical engineering or 15th century British literature, the people with whom you work may not fully understand that scales have factor structures or that low reliability can severely limit the use of test scores for their intended purposes. With that said, remember to be patient with your clients, even when they can't seem to grasp the basic points of what you're saying. When explaining a measurement concept, watch for social cues that tell you whether your clients are following. Check in with them at various points along the way, but let them guide the conversation and trust that if they have questions, they will ask. Showing faculty respect and patience while explaining difficult measurement concepts builds trust, and that trust facilitates the learning process, making it more likely that the faculty member or client will ask clarifying questions until he or she feels comfortable with the concept you are explaining. If the faculty member understands why the psychometric properties of a test matter, then he or she may be more likely to recognize how a psychometrically sound assessment can provide better evidence of student learning.
- Practice your pitch. A few semesters ago, I took a statistics consulting seminar in which my professor role-played various consulting scenarios and I would have to consult with her as if she were my client. During this class, I fumbled multiple times to explain simple concepts to this "client" with no experience in statistics. I learned that even though I could interpret output and discuss results with my professor or other students, I needed a lot of practice explaining these concepts to someone who lacked any knowledge of statistics. One way I have worked on this skill is to practice what faculty in my program call elevator pitches—one-minute spiels on anything from defining reliability to explaining p-values (something you could deliver in the span of an elevator ride, get it?). Having well-rehearsed, succinct explanations of complicated measurement- and assessment-related concepts can improve communication with your client and helps build your credibility as a measurement expert. Obviously, not every concept can be explained in a minute, but hopefully you get the point: when explaining test theory to a faculty member, practice makes perfect.
I've written a lot in this column about working with faculty, primarily because they are my main clients. However, the above rules of thumb apply in any situation, whether you're teaching students in a class or consulting with lawmakers about important test-related policies. And even if your current career aspirations won't carry you anywhere near a classroom or Washington, D.C., it's still important to be able to communicate technical language effectively. Take the advice offered by Dr. Kurt Geisinger during a panel discussion at the 2010 Northeastern Educational Research Association conference. When asked whether it was important for psychometricians to be able to communicate advanced techniques to a lay audience, Dr. Geisinger commented (and I paraphrase) that psychometricians are often called into court to defend their analytic practices, especially regarding high-stakes assessments. These testimonies won't do much good if the judge (who by all accounts will not be an expert in measurement) cannot understand what the psychometrician is explaining. Thus, being unable to explain and justify your work may end up costing you or your company much more than an afternoon in court.
Dr. Geisinger's comments bring home an important point: No matter what path we take in our graduate work, and no matter what career choices we make, we need to be able to communicate the importance of our work to outside stakeholders. Whether it's to educate students more effectively, help faculty members design better tests, or convince a judge that your company is conducting meaningful, accurate, and effective work, we always need to be thinking of ways to express ourselves in a language that anyone can understand.