By: Chelsie Kuhn, MEL Associate, Headlight Consulting Services, LLP
In full disclosure, prior to joining Headlight, I was a quantitative methodologist with some experience building surveys, building quantitative datasets from secondary sources, and running statistical analyses. When I started, I had facilitated a few semi-structured interviews and focus groups to gather data, but I had never done qualitative coding before. In the quantitative realm, I really enjoyed the perception of rigor and reliability that statistics could provide. And even though I knew statistics could never answer all of my questions (e.g., what does it mean practically, in people’s lived experiences, that x thing is related to y thing?), the bad reputation qualitative methods sometimes get for being “too soft” or “not as valid as numbers” made me avoid them.
Thinking about this in retrospect and knowing what I do now, the bigger underlying concern is less about data collection method/type and much more about rigor (can qualitative responses be substantiated across different sources to say something about the bigger picture?). Qualitative data collection and analysis are just as valid as quantitative data collection and analysis, and equally, if not more, important. As we shared in a previous post, there are plenty of ways to set up a qualitative analysis process to address concerns about scrutiny and rigor (sampling saturation, triangulation, secondary analysis, etc.). With my concerns and questions about rigor mostly addressed by a well-designed framework, I was able to dive into qualitative analysis headfirst and see what the process was like.
To help those considering implementing qualitative analysis or those who are already working in the space and want to “go back to the basics” of qualitative coding, I’ve pulled together some Dos and Don’ts that helped me streamline the process and get the most out of the data while maintaining a high level of rigor and confidence in the findings:
- Do take time to build out and define your codebook. Reviewing the definitions set for each code helped re-ground me each day that I was working, and I was able to focus and decide how to label excerpts much more quickly.
- Once you’re a few documents in, do try to figure out the best way for you to tackle the whole task, whether by the length of the document, the type of document (annual report, assessment, evaluation, interview, focus group, etc.), or whatever works for you. For my most recent project, I found that I made more progress by working through the assessments and evaluations first and returning to the annual reports at the end, because I needed to understand more of the big-picture context. Sometimes you need to start with documents that give more context, which then feeds into your understanding of other materials. Other times, you need ground-level details first, which build upward into a wider view as you read other types of materials. Find which way works best for your current project and plan from there.
- Do keep a reminder of what you’re coding for in front of you in some form. For this project, I kept sticky notes with my coding hierarchy and code definitions next to my screen. These helped redirect my attention as I switched document/data types or came back after a break. This practice also helps maintain consistent code application.
- Do code enough of each passage to give sufficient context for analysis, but not so much that you’ll need to do a lot of re-reading. Someone else should be able to read a passage that you’ve coded and understand why you coded it that way without needing to look at the full data source/document or re-read an entire paragraph. For this project, excerpts 2-3 sentences long were the sweet spot: just enough context without being too lengthy.
- As much as possible, do get yourself into a focused headspace where you can do the deep work necessary to read, sort, and code excerpts. This means turning off your email and phone notifications as much as you can and setting a timer for paced sprints. Set a deadline and just keep going!
- Bonus: Outside of your time reading and coding, do plan some social time. Coding and analysis are naturally isolating tasks because of the brainpower you’re using to stay focused and to sort where quotes or sections of text fit within your codebook, so plan intentionally and give yourself a break when you can. We’re human beings who need social interaction, and planning ahead to meet this need helps you come back to larger datasets more refreshed.
- When you have gathered your documents for the task, don’t forget to review your document list to make sure everything has been included before you move into coding. Run this list past your client to confirm that you’ve captured all of the data you agreed on. This takes minimal time, and it’s better to find out that you’ve missed something now than at the end of the project, when you think you’ve finished your coding and analysis. When you’re near the end of your document review and ready to analyze, double-check internally again to make sure you haven’t missed coding any documents.
- Also, when you have gathered your documents for the task, don’t forget to look for duplicate uploads! You don’t want to double-code a duplicate, both for the sake of efficiency and to avoid over-weighting certain evidence and findings. Similarly, don’t double-code the same thing within a single document (e.g., the same finding in the executive summary and again in the detailed findings section), because this will also improperly over-weight evidence.
- As you make progress, your expertise with your codebook will build and grow with you. Don’t worry: your codebook should evolve as you become further grounded in the content of the documents chosen for your review. It’s better to add codes as you identify possible emergent trends than to miss out on what the data is saying.
- As things evolve, don’t back-code documents you’ve already looked through. It’s unlikely that you’ll change much, the time it would take isn’t worth the trade-off against pursuing potential new trends, and there is an inherent bias in hunting for particular excerpts or trends in the data. Unless something is seriously wrong, avoid back-coding.
- When you’re in the analysis phase, don’t just leave findings as an identified trend! Do the secondary analysis coding needed to get even more nuance and insight out of all the data you worked hard to gather and code. For example, you might be able to triangulate that something about a program made it sustainable, but can you say more about what that factor was as a sub-trend? This additional level of rigor lets us name actionable insights with even more confidence. More actionable insights then lead to better recommendations and more wins for our clients in improving their work: a win-win for all parties involved.
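If your gathered documents live in a shared folder, the duplicate-upload check can be partially automated by hashing each file and grouping identical checksums. This is a minimal sketch, assuming a local folder of files; the function name and folder layout are illustrative, not part of any particular coding tool:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicate_uploads(folder):
    """Group files by SHA-256 checksum; any group with more than
    one file name is a set of byte-identical duplicate uploads."""
    groups = defaultdict(list)
    for path in Path(folder).iterdir():
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path.name)
    return [names for names in groups.values() if len(names) > 1]
```

Note that this only catches byte-identical files; a re-exported or re-scanned copy of the same report will slip through, so a manual scan of document titles is still worthwhile.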
In the next and final piece of the series, we will tackle Findings, Conclusions, Recommendations (FCR) Matrices: what they are, how to set them up, and why you should use them in your data analysis. In the meantime, if you have liked our blog so far and want to be alerted when a new post is published, subscribe to our email notifications. For other questions, please reach out to us at <firstname.lastname@example.org>.