Introduction to Primary Analysis

By: Maxine Secskas, CLAME Associate, Headlight Consulting Services, LLC

This blog post is the second in a 3-part series on components of qualitative methods.

This blog post (along with the previous post on qualitative coding tips and the upcoming post on Secondary Analysis, coming April 7) is intended to expand on our 101 qualitative analysis guidance and to offer insight into the approaches we take here at Headlight to ensure qualitative rigor. Part of Headlight’s vision is to use and promote integrated systems thinking, collaborative solution building, and rigorous learning and evidence-driven decision-making in international development. Toward this goal, Headlight holds the principles of being data-driven, utilization-focused, learning-oriented, and systems-oriented as guidance for our work. Putting these principles into action, we strive to bring a high level of rigor to our qualitative research and analysis and to produce learnings and recommendations that enable our partners to adapt in real time.

This blog focuses on one of the first steps we take after coding qualitative data, for instance, when conducting a Learning Review. Once the relevant qualitative data has been coded, the next step is to conduct primary analysis of the coded excerpts. At Headlight, primary analysis focuses on understanding the basics of your dataset: an overall assessment of how rigorous the evidence base is, the geographic representation in the data, differences in the broader themes that various stakeholder groups focused on, and so on. Primary analysis can help direct a more targeted and meaningful secondary analysis process and provide early indications of what the data can and cannot speak to.

Note: At Headlight, we use the Dedoose software for our qualitative analysis, and some of the information below is specific to Dedoose. However, most of the processes discussed here can be replicated no matter which software you use.

ORGANIZING AN ANALYSIS SPACE

To analyze your coded excerpts, we suggest first setting up a workspace: a workbook in Google Sheets or Excel with the following separate worksheets (a sketch for scaffolding this workbook programmatically follows the list):

Worksheet A is a space to determine what to prioritize for secondary analysis (columns with code name, code count, triangulation check, interesting co-occurrences)

Worksheet B is a space for note-taking during secondary analysis (columns with theme name, name of the reviewer, source breakdown, and draft findings write up)

Worksheet C is the space for an FCR Matrix (columns for theme name, Findings, Conclusions, and Recommendations)

Worksheets D are multiple sheets for primary analysis of each descriptor set used during coding (with one axis as the list of codes and the other as the descriptor fields)

Worksheets E are multiple sheets for each code/theme you are secondarily analyzing (columns for code category, code, media, excerpt, and then any descriptors or codes you are comparing for co-occurrence)
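If you would rather scaffold this workbook programmatically than build it by hand, the snippet below is a minimal sketch in Python using pandas (with the openpyxl engine). The sheet and column names simply mirror the structure described above; the Worksheets D and E tabs are placeholders you would duplicate for each of your own descriptor sets and themes.

```python
# Minimal sketch: scaffold the analysis workbook described above as an .xlsx file.
# Sheet and column names mirror Worksheets A-E and are placeholders to adapt.
import pandas as pd

worksheets = {
    "A_Secondary_Priorities": ["Code Name", "Code Count", "Triangulation Check", "Interesting Co-occurrences"],
    "B_Secondary_Notes": ["Theme Name", "Reviewer", "Source Breakdown", "Draft Findings Write-up"],
    "C_FCR_Matrix": ["Theme Name", "Findings", "Conclusions", "Recommendations"],
    # Worksheets D: one tab per descriptor set (codes on one axis, descriptor fields on the other).
    "D_Descriptors_Region": ["Code"],
    "D_Descriptors_Stakeholder": ["Code"],
    # Worksheets E: one tab per code/theme selected for secondary analysis.
    "E_Theme_Placeholder": ["Code Category", "Code", "Media", "Excerpt"],
}

with pd.ExcelWriter("primary_analysis_workbook.xlsx") as writer:
    for sheet_name, columns in worksheets.items():
        # Write only the header row for each worksheet; data gets pasted in later.
        pd.DataFrame(columns=columns).to_excel(writer, sheet_name=sheet_name, index=False)
```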

Note: You could also do much of this analysis within Dedoose itself, leveraging a secondary round of coding of excerpts within triangulated parent and child codes and the various excerpt filtering windows. At Headlight, for large datasets, we find the combination that best supports our robust qualitative analysis approach is to use Dedoose for the initial coding and its Code Application, Code Co-Occurrence, and various Descriptors charts, and then to use Google Sheets as a more flexible working space for writing up findings and running pivot tables.

PULLING DATA FROM DEDOOSE

If using Dedoose, data should be exported from the Analyze tab in Dedoose and then organized and analyzed in your workbook (a sketch for loading these exports programmatically follows the steps below).

  1. When beginning analysis, it is useful to export the total application counts for each code in your codebook. To export this helpful information, go to the Analyze screen, click on Qualitative Charts, open the Code Application chart, and click on the export button in the upper-right corner. 
  2. For exporting descriptor data from Dedoose, go to the Analyze screen, click on Descriptors Charts, open the Descriptor x Code Count Table chart, choose a descriptor set, and click on the export button in the upper-right corner. Do this for each of your descriptor sets.
  3. While it is not necessary to export, the Code Co-Occurrence chart is where you can visually see how different codes have overlapped on excerpts. The top and left axes show the entire codebook, and the grey diagonal line of boxes is where each code intersects with itself. We discuss how to review co-occurrences for analysis in the section below.
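If you want to work with these exports outside of Google Sheets, the snippet below is a minimal sketch of loading them into Python with pandas. The file names and column layouts here are assumptions, since Dedoose names export files based on your project, so inspect your own exports first and adjust accordingly.

```python
# Minimal sketch: load the Dedoose exports from steps 1 and 2 above for analysis
# in Python. File names are placeholders; adjust them to match your own exports.
import pandas as pd

# Step 1 export: total application counts per code (Code Application chart).
code_counts = pd.read_excel("code_application_export.xlsx")

# Step 2 exports: one Descriptor x Code Count table per descriptor set.
descriptor_tables = {
    "region": pd.read_excel("descriptor_region_x_code.xlsx"),
    "stakeholder_group": pd.read_excel("descriptor_stakeholder_x_code.xlsx"),
}

# Quick sanity checks that the exports loaded as expected.
print(code_counts.head())
for name, table in descriptor_tables.items():
    print(name, table.shape)
```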

PRIMARY ANALYSIS

The first stage of analysis outside of Dedoose is primary analysis, parts of which can also be conducted within Dedoose through the various analytics found on the Analyze screen. Primary analysis involves narrowing down codes for secondary analysis, identifying co-occurrences, reviewing descriptor data, and creating tables and charts that speak to the diversity and representation of the dataset/evidence base. The following steps provide implementation guidance for primary analysis.

  1. Paste your code application counts into Worksheet A.
  2. Determine which codes require secondary analysis (a code sketch of this narrowing logic follows the list). We first eliminate codes applied fewer than 3 times, looking for trends with a higher likelihood of triangulation (occurrence in at least 3 different sources). We then eliminate parent codes that are mostly encompassed by their child codes; for codes with fewer than 10 applications, we check the number of sources and eliminate those not found in at least 3 different media; and we set aside specialized bucket codes (for example, “illustrative quotes,” “bright spots,” and “issues with the quality of evidence”), which are reviewed separately to bolster triangulated findings and the report. The remaining codes are the ones with the most potential to lead to rigorous and actionable learnings for the client, and they should be put through a round of secondary analysis (discussed in the next blog post).
  3. Identify significant code co-occurrences (a sketch of this triangulation check follows the list). To analyze code co-occurrences, we recommend reviewing the Code Co-Occurrence chart in Dedoose and noting any cross-codes that have numerous co-occurrences and are neither intuitive nor overlapping within the same parent code group. If you identify potentially interesting co-occurrences, check that they are triangulated (that the two codes co-occur in at least 3 different documents). To check for triangulation, click on the numbered box you are interested in within the chart, and it will tell you the number of “Matching Resources.” Make a note in Worksheet A of any significant, triangulated co-occurring codes. During secondary analysis, you will look for triangulated sub-trends related to these co-occurring codes, which would establish a meaningful association between the two codes.
  4. Analyze descriptor applications (a sketch of this pivot-style count follows the list). Paste the descriptor data into Worksheets D. When looking at the descriptor data, examine the application counts to find significant trends for additional analysis through pivot tables. Is there a significant difference in evidence by region? By stakeholder group related to a particular parent code grouping? What is the breakdown of types and rigor of evidence in the dataset? The analyst will have to decide how best to describe the dataset according to the purpose of the evaluation or qualitative effort, as well as what appears unusual or significant in the data. Sometimes this is easier to see after adding charts from a pivot table, so feel free to use these visual options to make exploring trends easier. These tables allow for a closer examination of a subset of codes against a subset of descriptor fields.
  5. Create illustrative tables and graphs. Now that you have done an initial review of the code counts, co-occurrences, and descriptors, the next step is to create some visuals that highlight interesting and noteworthy initial findings. The goal of creating tables and charts here is to look at the evidence base as it relates to the descriptors that have been assigned, the evaluation questions, and any implications the final dataset will have for how audiences should frame the findings. Take time to reflect on the previous steps and identify subsets where the code application count is unusually high or low, or where there are unexpected co-occurrences. The particular subset of information you want to feature will determine whether it is best reflected in a pivot table, pie chart, or some other type of visual. We recommend keeping the visuals you create in the same spreadsheet tab (Worksheets D) as the related descriptor information for easy reference.
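To make the narrowing logic in Step 2 concrete, here is a minimal Python sketch. It uses a small, made-up table with one row per code and hypothetical columns for the application count and the number of distinct sources (media) a code appears in; the parent and bucket code lists are placeholders to replace with your own codebook.

```python
# Minimal sketch of the Step 2 narrowing logic, assuming a DataFrame with one row per
# code and (hypothetical) columns "Code", "Applications", and "Sources" (the number of
# distinct media the code appears in). Parent/bucket code lists are placeholders.
import pandas as pd

codes = pd.DataFrame({
    "Code": ["Partnerships", "Partnerships: Local Govt", "Illustrative Quotes", "Training Quality"],
    "Applications": [12, 9, 25, 2],
    "Sources": [6, 4, 10, 2],
})

# Parent codes mostly encompassed by their children, and specialized bucket codes,
# are reviewed separately rather than carried into secondary analysis.
parent_codes_covered_by_children = {"Partnerships"}
bucket_codes = {"Illustrative Quotes", "Bright Spots", "Issues with the Quality of Evidence"}

keep = codes[
    (codes["Applications"] >= 3)                                  # drop rarely applied codes
    & (~codes["Code"].isin(parent_codes_covered_by_children))     # drop redundant parent codes
    & (~codes["Code"].isin(bucket_codes))                         # set aside bucket codes
    # Codes with fewer than 10 applications must still appear in at least 3 different media.
    & ((codes["Applications"] >= 10) | (codes["Sources"] >= 3))
]
print(keep["Code"].tolist())   # candidates for secondary analysis
```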
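For Step 3, the triangulation check that Dedoose surfaces as “Matching Resources” can be reproduced from an excerpt-level table. The sketch below assumes a hypothetical structure with one row per excerpt, its source document, and the codes applied to it; it counts, for each code pair, how many distinct documents the pair co-occurs in.

```python
# Minimal sketch of the Step 3 co-occurrence triangulation check, assuming a
# (hypothetical) excerpt-level table: one row per excerpt, with its source document
# and the set of codes applied to it.
from itertools import combinations
from collections import defaultdict

excerpts = [
    {"source": "KII_01.docx", "codes": {"Training Quality", "Local Ownership"}},
    {"source": "KII_02.docx", "codes": {"Training Quality", "Local Ownership", "Funding"}},
    {"source": "FGD_01.docx", "codes": {"Training Quality", "Local Ownership"}},
    {"source": "KII_03.docx", "codes": {"Funding"}},
]

# For each code pair, collect the distinct sources in which the pair co-occurs.
pair_sources = defaultdict(set)
for excerpt in excerpts:
    for pair in combinations(sorted(excerpt["codes"]), 2):
        pair_sources[pair].add(excerpt["source"])

# A co-occurrence is triangulated when it appears in at least 3 different documents.
for pair, sources in sorted(pair_sources.items()):
    flag = "triangulated" if len(sources) >= 3 else "not triangulated"
    print(pair, len(sources), flag)
```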
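For Step 4, the descriptor comparison is essentially a pivot-table count. The sketch below uses a hypothetical application-level table (one row per code application, with descriptor fields attached) and pandas’ crosstab, which mirrors what a pivot table would show in Google Sheets or Excel; the commented-out line shows how the same table could feed a quick chart for Step 5.

```python
# Minimal sketch of the Step 4 descriptor review, assuming a hypothetical table with
# one row per code application and descriptor fields (e.g., Region, Stakeholder Group).
import pandas as pd

applications = pd.DataFrame({
    "Code": ["Training Quality", "Training Quality", "Funding", "Funding", "Local Ownership"],
    "Region": ["North", "South", "North", "North", "South"],
    "Stakeholder Group": ["Implementer", "Donor", "Donor", "Implementer", "Implementer"],
})

# Count code applications by region -- equivalent to a pivot-table count in a spreadsheet.
by_region = pd.crosstab(applications["Code"], applications["Region"])
print(by_region)

# Optional (Step 5): a quick bar chart of the same table; requires matplotlib.
# by_region.plot(kind="bar", title="Code applications by region")
```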

Primary Analysis allows us to identify where secondary analysis will be most illuminating, and it helps us to illustrate interesting co-occurrences and unexpected coding trends within the dataset. For further information on conducting qualitative analysis, we recommend the Qualitative Data Analysis: Practical Strategies reference book by Bazeley, especially Chapter 13, “Exploring, Seeing, and Investigating Connections in Data.”

Please check back on April 7 for the next post on an Introduction to Secondary Analysis or sign up for blog notifications to receive updates on all future blogs!

If you have any questions about qualitative analysis or need help implementing a Learning Review, Headlight would love to support you! We have the breadth and depth of experience to tailor our support to your needs. For more information about our services, please email info@headlightconsultingservices.org. Headlight is a certified woman-owned small business and therefore eligible for sole source procurements. We can be found on the Dynamic Small Business Search or on SAM.gov via our name or DUNS number (081332548).

