ePortfolio: Dr. Simon Priest

RESEARCH & STATISTICS

 

 

THE RESEARCH INQUIRY PYRAMID: The Research Pyramid (1994) explains my philosophy regarding inquiry. It describes a sequence of steps that I believe researchers ought to take when investigating a phenomenon (called "IT" here). Answering a good research question leads to more good research questions. Therefore, I always recommend that the direction of future inquiry be built upon past evidence-based inquiries. Otherwise, researchers take a scattered (shotgun) approach to investigation, and much time, money, and other resources are wasted.

I like to use the example of earthquakes or cancer to explain this concept. If we attempted to predict and control these phenomena (the causality and discrimination points, where studies are currently focused) without understanding influence, relationships, differentiation, or description, then we would most certainly fail. However, by first reporting what these phenomena look like, how they are similar to other geophysical events (volcanic eruptions) or other diseases (failed immune systems), what else is associated with them, and what can impact them, we build a strong foundation to inform our inquiry toward stabilizing cities or finding a common cure.

 

 

POSITIVISTIC vs. NATURALISTIC PARADIGMS: In 1987, I was a devout positivist (a deductive and objective scientist) and my then life partner was a confirmed naturalist (an inductive and subjective artist). Her 1988 dissertation comprised three studies: a statistical analysis of quantitative data collected under positivism, a thick and rich description of qualitative data on the same topic conducted under naturalism, and a comparison of the knowledge generated by both approaches in seeking "truth" from evidence. She won a top award for her groundbreaking research. Together, we developed this comparative table by debating philosophy around the dinner table and on long dog walks.

 

 

BRIDGING QUALITATIVE / QUANTITATIVE GAPS: I grew up in Vancouver, and this photo of the Lions Gate Bridge over the First Narrows is a great metaphor for bridging the gap between quantitative and qualitative data. Regardless of one's affirmed paradigm for conducting research (Naturalistic or Positivistic), bridging is the same.

Researchers are like photographers. The Positivist photographs wildlife and frequently chooses a telephoto lens to get in close to the subject. The Naturalist photographs scenery and often selects a wide-angle lens to take in the whole view. Since the photographers seek to create different images, they pick different lenses. As researchers, they ask different questions and therefore gravitate toward different methods. The Positivist prefers to measure quantity, while the Naturalist prefers to describe quality.

However, sometimes the wildlife photographer will use the other (wide-angle) lens to gain a broader perspective of the entire herd, and occasionally the scenery photographer will switch to the other (telephoto) lens to capture a particular object of interest. Researchers are the same. Despite their predetermined research philosophies and paradigmatic preferences for inquiry, many are deciding to use both types of data in the same single study.

The content added within this attractive picture lists the key hallmarks of each data type: its name, definition, formats, sources, examples, and analysis techniques. The key factors that may bridge the gap between them are listed in the middle:

  • MEDIUM sample size is the first opportunity: since large randomized samples are more common in quantitative work and small purposeful samples fit better with qualitative work, the middle ground of a medium-sized sample enables the use of both data analyses.
  • MIXED methods can be sequential (one before the other, in either order) or convergent (both in parallel, then joined together).
  • Doing BOTH in the same study can be a common focal point, since one approach tests hypotheses and the other creates theory.
  • Data can be converted from one type to the other.
  • Triangulation involves using other data, or one data set, to confirm and corroborate another.
  • Longitudinal designs offer the chance to combine methods, as both can examine changes over the long term.

Irrespective of beliefs, researchers can partially integrate these two methodologies by many means.
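As a minimal sketch of the data-conversion and triangulation ideas above (the participant labels, codes, and scores are invented for illustration, not drawn from any of my studies), qualitative interview codes can be "quantitized" into counts so they sit alongside quantitative survey scores in the same analysis:

```python
from collections import Counter

# Hypothetical qualitative codes assigned to interview excerpts (one list per participant).
interview_codes = {
    "P01": ["trust", "risk", "trust"],
    "P02": ["risk", "challenge"],
    "P03": ["trust", "challenge", "challenge"],
}

# Hypothetical quantitative scores from a survey of the same participants.
survey_scores = {"P01": 4.2, "P02": 3.1, "P03": 4.8}

# Data conversion: turn qualitative codes into counts (quantitizing).
code_counts = {pid: Counter(codes) for pid, codes in interview_codes.items()}

# Triangulation: place both data types side by side for each participant,
# so one source can corroborate (or contradict) the other.
for pid in sorted(survey_scores):
    print(pid, "survey =", survey_scores[pid], "| codes =", dict(code_counts[pid]))
```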

 

 

FIVE KINDS OF PROGRAM EVALUATION: Most of my students get confused about program evaluation and mix everything together into a single amalgam. In order to understand the five kinds of program evaluation, one must first understand the five phases of program development, because evaluation is conducted at different places in this developmental sequence. Jude Hirsch and I created these phases (2002), building on earlier work I did with Lee Gillis and Mike Gass.

  • DIAGNOSE: Assess need by multiple methods/sources (use combination of conversations, surveys, interviews & observations)
  • DESIGN: Plan program based on assessed needs/desires (refine purpose, goals & plan LETS: logistics, events, timing, staff)
  • DELIVER: Present program as planned, but be flexible (include intro, body of events, conclusion & remain open to changes)
  • DEBRIEF: Facilitated discussion around events (facilitate by funneling, front-loading, framing, freezing, focusing & fortifying)
  • DEPARTURE: Closing activities before clients leave (action planning, evaluations, anchors/souvenirs & schedule follow ups)

 

 

COMPARISON OF THE FIVE KINDS: As you can see in the table below, created the year before the phases above (2001), each type of program evaluation (and its "also known as" names) fits into the sequence at a different location and for a different reason. The methods used are the same as in social science research with descriptive data analysis. This isn't rocket science, but you do need a map to avoid getting lost.

 

 

RESOLVING ETHICAL DILEMMATA: Dilemmata (the plural of dilemma) are choices to be made among multiple alternatives, where no single option is clearly preferable and all appear undesirable. This happens often in life, and I use this approach to reach resolution in those dilemmatic times. However, when conducting research with humans or animals (in whole as subjects or in part as tissue samples), I have the additional responsibility of ensuring that my chosen methodology does not harm them.

While the decision-making and problem-solving processes are familiar (see multiphasic methods), these five approaches inform one's judgment when previous experience is unavailable to draw upon, by providing guidance for the unknown in times of uncertainty. By definition, a dilemma is rife with uncertainty and the unknown. Thanks to Mike Gass and Jasper Hunt for the clarifying conversations about ethics; the approach is adapted from Kitchener's (1984) model of moral reasoning.

The approach takes the form of signposts. I like this metaphor because, on those rare occasions when I have been lost while route finding in uncharted territory with map and compass or modern technology, I am thrilled to discover a sign on a post that points me in the direction I want to go! For me, the signposts are, at best, salvation from disaster or devastation and, at worst, reinforcement for what I already know in my heart or mind to be true. Signposts offer a series of steps arranged in sequence and, most of the time, resolution can be reached by considering only the first few signs. However, in a more complex dilemma, the last few signs become necessary, and they are thought-provoking and reinforcing.

These five steps are shown and described below. I follow them in sequence and, when resolution becomes obvious, I stop at that point. Sometimes, however, I go all the way to the end to reach resolution, where the compromising choices are more about whom or what gets harmed the least and whether I could defend making the same choices in similar situations. I have always passed these five approaches on to my research students and other faculty, and I hope they have found them useful.

 

 

THE FAMILY TREE OF STATISTICS: When I was working on my doctorate, I had the great pleasure of supporting Lorraine Davis, teacher extraordinaire of statistics. I learned a great deal from her, but the greatest gift she gave me was the opportunity to be her teaching assistant and a lead instructor of the lab sessions that complemented her lectures. This forced me to know my statistics really well and to struggle to be as good as she was at teaching them.

While working together on a small handbook of statistics in 1985, we developed this family tree to show how some of the more basic tests relate to one another. The arrangements below are rather self-explanatory.

 

 

THE CONFIRMATION MATRIX FOR STATISTICS: Something Lorraine shared with me early on in our conductor-passenger relationship was a basic confirmation matrix, used as a double check to make sure one is using the correct test. I expanded it to its present content in 1986 and have applied it over the years since then. The key to using this matrix is knowing your data type. Its format (for both independent and dependent variables) can be: INTERVAL (numbers in sequence that can have equivalent differences among values), NOMINAL (no sequence and therefore no numerical difference among values), or ORDINAL (numbers in sequence, but the difference among values cannot be determined). When necessary, interval data can be converted to ordinal data, but not the reverse. Simply find the corresponding list of tests for both data formats and then triple-check against the family tree above.
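As a rough companion to the matrix, here is a small lookup sketch. The pairings below reflect conventional test choices for each combination of variable formats; they are illustrative assumptions, not a reproduction of the actual matrix contents:

```python
# Illustrative lookup of common test choices by (independent, dependent) data format.
# These pairings are conventional practice, not the exact contents of the matrix.
test_lookup = {
    ("nominal", "nominal"): ["chi-square test of association"],
    ("nominal", "interval"): ["t-test (2 groups)", "one-way ANOVA (3+ groups)"],
    ("nominal", "ordinal"): ["Mann-Whitney U (2 groups)", "Kruskal-Wallis (3+ groups)"],
    ("interval", "interval"): ["Pearson correlation", "linear regression"],
    ("ordinal", "ordinal"): ["Spearman rank correlation"],
}

def suggest_tests(iv_format: str, dv_format: str) -> list[str]:
    """Return candidate tests for the given variable formats, if listed."""
    return test_lookup.get((iv_format, dv_format), ["check the family tree or a statistics text"])

print(suggest_tests("nominal", "interval"))  # ['t-test (2 groups)', 'one-way ANOVA (3+ groups)']
```

Whatever the lookup suggests should still be triple-checked against the family tree above, as the prose recommends.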

To help understand the types of data, consider that you and I are in a 10K race, where I finish second to your first place. With just this much information, we don't know how close or far apart we were at the finish. These are ordinal data. If we knew our run times, then these would be interval data (where we can quantify the gap between us). Obviously, interval data (times) can be converted to ordinal data (placement), but not vice versa. Nominal refers to classification variables such as gender (male/female) or brand of running shoes (Adidas, Nike, etc.), and these nominal data cannot be turned into interval or ordinal data, because nominal values have no numeric worth or order.
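To make the race example concrete, here is a minimal sketch (the finishing times and shoe brands are invented): interval times can be reduced to ordinal ranks, while nominal labels cannot be ordered at all.

```python
# Invented 10K finishing times in minutes (interval data: gaps between values are meaningful).
times = {"you": 41.5, "me": 44.0, "runner3": 44.2}

# Convert interval to ordinal: rank the runners by time (1 = first place).
# The gap information (2.5 minutes vs 0.2 minutes) is lost in the ranks.
placements = {name: rank for rank, (name, _) in
              enumerate(sorted(times.items(), key=lambda item: item[1]), start=1)}
print(placements)  # {'you': 1, 'me': 2, 'runner3': 3}

# Nominal data: shoe brands have no numeric worth or order,
# so they can only be counted or classified, never ranked or averaged.
shoes = {"you": "Nike", "me": "Adidas", "runner3": "Nike"}
```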

 

 

TRANSDISCIPLINARY INQUIRY: Students often confuse the terms multidisciplinary, interdisciplinary, and transdisciplinary, so I made this graphic in 1995 to help them see the evolving difference. Late one night, I was explaining to a doctoral student my thoughts on the difference between basic and applied inquiry as it relates to interdisciplinary and transdisciplinary inquiry into complexity and diversity. I drew this diagram of four purposes for inquiry on a napkin; here is a better version from that same year.

 

RESEARCH INSTRUMENTS: Here is a collection of instruments that I authored or co-authored in the 1990s.

General Surveys

  • TEAM EFFECTIVENESS ASSESSMENT MEASURE: LONG (100 items, 30 minutes, single use) & SHORT (50 items, 15 minutes, repeated use)
  • LEADERSHIP EFFICACY ASSESSMENT DIAGNOSTIC: LONG (100 items, 30 minutes, single use) & SHORT (50 items, 15 minutes, repeated use)
  • PRIEST ATTARIAN RISK TAKING INVENTORY: BUSINESS (10 scenarios, 30 minutes, use pre- and post-) & OUTDOOR (10 scenarios, 30 minutes, use during)
  • DIMENSIONS of the ADVENTURE EXPERIENCE: LONG (26 items, 15 minutes, once daily) & SHORT (2 items, 1 minute, multiple times)

virtualteamworks.com

  • VIRTUAL TEAMWORK (50 items, 10 minutes): SELF (use for self) & OTHERS (use for team members)
  • ELECTRONIC FACILITATION (50 items, 10 minutes): SELF (use for self) & OTHERS (use for team members)
  • GLOBAL LEADERSHIP (50 items, 10 minutes): SELF (use for self) & OTHERS (use for team members)
  • ONLINE TRUST (50 items, 10 minutes): SELF (use for self) & OTHERS (use for team members)

Interpersonal Trust

  • Measures INTERPERSONAL TRUST on 6 dimensions (A = Acceptance, B = Believability, C = Confidentiality, D = Dependability, E = Encouragement, and O = Overall Trustworthiness) for GROUPS, PARTNERS, SELF & ORGANIZATIONS