Research in psychology
Do their examples show that Stam was off the mark when he argued that, although such research is highly desirable, it is something that we see rarely if at all in the field? Do the examples demonstrate that my approach fails to incorporate an important range of research efforts?
In fact, I believe that this research offers us excellent examples of ``good'' quantitative research. I disagree with Dawson et al.'s characterizations of their own research, however. As I see it, the research in question is a fascinating example of one of the situations I described earlier: the case in which initial findings in a particular domain or a few domains suggest a general principle. In particular, in this situation, the general principle is the developmental sequence of hierarchical integration. There is a real risk here (given our philosophical tradition) of imagining that this sequence is a fully abstract, reified structure that ``lies behind'' concrete phenomena and failing to recognize the ways in which interpretation enters into the research.
The studies by Fischer, Dawson, and their colleagues employ measures that are extremely useful, but not ``strong'' in the positivist sense marked out by classical notions of measurement or Stam's idea about measures that ``refer back to some concrete feature of the world.'' Consider examples from Dawson's (2006) Lectical™ Assessment System. In that system, a child's understanding is said to be at the level of single representations if the child offers a statement like ``Camping is fun'' in an assessment interview. By contrast, the child's understanding would be at the higher level of representational mappings if he or she employed an expression describing a ``linear'' relationship, such as ``If you don't do what your father tells you to do, he will be really mad at you.'' But determining the level of such responses is by no means a transparent process. For one thing, there is no one-to-one relationship between developmental level and form of speech. A child might say, ``If I go camping, I have fun'' and still be at the level of single representations, if the statement really boils down to ``Camping is fun'' because the child cannot actually coordinate relevant single representations in a mapping relationship. Dawson (2006) herself noted that meaning is ``central'' to the scoring procedure and gave an example concerning the interview question, ``Could you have a good life without having had a good education?'' In this example, a rater found it difficult to score a response that included the word ``richer'' because it was not clear whether this word referred to having more money or having a life with broader/deeper significance.
Dawson (2006) claimed that the ``developmental ruler'' provides a way to ``look through'' content in assessments of structure, but in my view, the brief remarks I have just offered point to a crucial sense in which hierarchical integration is a concretely meaningful idea. When we apply the developmental ruler to a new domain, we have to discover not only whether the developmental sequence holds in that domain but also what counts as single representations, representational mappings, and so forth in this context. The ``ruler'' provides us with valuable ideas about how to think about complexity, but in itself it is empty. To use it, investigators have to proceed with the crucial steps of designing an assessment procedure and preparing scoring manuals for each domain. These steps reflect the investigators' rich appreciation of the concretely meaningful practices in the domain, including what kinds of connections can obtain within this range of phenomena. This rich appreciation is largely prereflective understanding. Hence, the procedures and scoring manuals for each domain play a truly central role that is not ``given'' by the general principles. Furthermore, they do not offer exhaustive concrete specifications of the phenomena of interest. Raters have to draw on their own prior familiarity with the way things work.
Some further comments are in order concerning the fact that most or all of the studies under consideration were based on assessing individuals' developmental level in a structured interview or by means of some other similar structured procedure. These assessments provide measures that have considerable precision. Moreover, they unquestionably tap important skills. Nevertheless, we should recognize that such investigations differ from other possible studies that would examine what an individual does when he or she is engaged in ongoing activities. For example, consider ``leadership,'' one of the research areas discussed by Dawson et al. Dawson (2006) described a carefully developed system for assessing a subject's level of understanding of this concept from responses during an assessment interview. But instead of proceeding this way, an investigator could examine what a subject does upon encountering a particular item while going through his or her ``inbox'' or when a subordinate asks the subject a specific question in a naturally occurring situation. Thinking about such in situ examples makes it clear that we could not possibly map what a person might do in such situations onto the developmental sequence without drawing on rich prior familiarity with the relevant practices. Therefore, these examples underscore the role played by interpretation. Looking at this matter the other way around, the in situ examples help us to see how much actually is involved when we do use the structured assessment procedures that these investigators have so successfully developed. A great wealth of interpretive appreciation of the phenomena is concretized in those measurement procedures.
The in situ examples also raise a new issue: is the 13-level sequence relevant for some or all naturally occurring situations, or is its relevance limited to the kinds of skills that can be assessed in the particular ways typically employed in the research in question, which could be called skills at understanding of a more reflective sort? I am not asserting that the complexity sequence would not hold for a broad range of skills involving in situ behavior. I only wish to point out that the sequence might be limited in these ways. The work is interpretive. It is based on procedures that provide concrete examples of certain meaningful phenomena. Therefore, we can ask whether the assessments made in these investigations actually serve as concrete examples of clearly in situ psychological phenomena and, more generally, we can ask what range of phenomena is successfully tapped by the structured assessments. None of this is to argue against the value of this research. It is possible for raters to draw upon their prereflective understanding and employ the carefully developed manuals and the developmental model to assess complexity levels. Furthermore, it is of great interest that research efforts along these lines have demonstrated that the developmental sequence holds in many different areas when skills are assessed using the kinds of procedures that have been employed. In sum, I believe that the research by Dawson, Fischer, and their colleagues represents examples of excellent, ``apparently strong'' quantitative research.
5. Caveats concerning possible pitfalls
In my position paper, I made several specific suggestions about how researchers should change the ways they use quantitative methods. For example, I argued for using relational codes instead of always coding discrete behaviors. It should now be clear that my comments along those lines were misleading if they suggested that I believe certain quantitative methods (e.g., discrete behavior codes) are always problematic--or, if they suggested that I wanted to rule out quite generally what others might call ``strong'' quantitative methods. According to my approach, ``good'' quantitative research includes many examples of what others consider ``strong'' methods in addition to many examples of ``soft'' methods.
To forestall possible confusion, I also want to state at this juncture that none of this amounts to wholesale approval of all quantitative research. I agree with Stam that there are real dangers in what he calls ``Pythagoreanism.'' Quantitative methods frequently are employed in a problematic manner. In my opinion, this occurs when they are used in such a way that they cannot serve their interpretive function. For example, measures of decibel levels are likely to fail at assessing angry behavior if the vocalizations in question do not occur in a structured situation in which loudness serves as a concrete example of such behavior (this is circular, and that is the point). In general, quantitative methods are unhelpful in a particular case insofar as they are actually used in a way that conforms to traditional positivist conceptualizations about ``real'' measures and the like. Hence, I hope that my position paper and this rejoinder serve to mark out a view about how to employ quantitative methods and how not to employ them, rather than a position about which quantitative methods we should use.
Some brief comments are in order about techniques for statistical analysis. In large measure, I agree with the cautionary remarks Stiles offered in his commentary about ``high end statistics.'' How researchers use these techniques very often reflects what I view as a misguided understanding of quantitative research. As Stiles noted, these sophisticated techniques are often applied to decontextualized variables. One of the examples I mentioned in my position paper is relevant here. I argued that investigators studying parent-child interaction from a social learning theory vantage point attempt to explain interactions by breaking them down into isolable, elemental behaviors (e.g., prosocial child behavior, parental praise) instead of taking as their starting point what parent and child are doing together. These researchers then try to put together an account of the exchanges by statistically examining sequential dependencies between these putative building block behaviors. In my opinion, there is a great deal that cannot be recovered about the interactions when we proceed in this way, no matter how sophisticated we may get at looking at dependencies across multiple lags. At the same time, I also agree with Stiles when he urges us not to throw out inferential statistics because of its historical association with misguided notions about methodology. Notwithstanding Stam's interesting points about longstanding problems that remain unresolved in the logic of hypothesis testing, I think these techniques can be useful. But I do wonder whether something could be gained by reexamining the assumptions of the statistical procedures we employ and considering whether some other analytic techniques are called for in light of the approach to quantitative research I have offered.
I would like to underscore one point from my position paper that is highly relevant when it comes to pitfalls associated with quantitative research. I believe that theory plays an extremely important role in whether quantitative researchers proceed in ways that are truly useful. In particular, I believe that researchers are likely to conduct ``good'' quantitative studies if they are guided by theories that are based on the idea that people are always already involved in practical activities in the world. In my position paper, I briefly discussed my participatory approach, which is an attempt to mark out a general framework for theories of this sort (also see Westerman & Steen, in press). I also gave the example of scaffolding research and pointed out that investigators in that area--in contrast to social learning theory researchers--often use relational codes rather than discrete behavior coding. In this rejoinder, I noted that when Wood did code discrete behaviors in his investigations of scaffolding (e.g., Wood & Middleton, 1975), he did so in a way (his specificity scale) that still made it possible to examine what parent and child were doing (i.e., the parent was attempting to teach the child how to build the puzzle) instead of breaking down what they were doing into isolable behaviors (e.g., prosocial behaviors, praise). He even examined sequential dependencies in a simple statistical way to study maternal homing in and out (also see Westerman, 1990), which suggests that statistical analyses, too, are likely to be useful so long as a study is based on helpful theory.
6. Concluding remarks
I have attempted to offer a reconceptualization of quantitative procedures that is much more focused on how we should employ these procedures than on endorsing some of these methods over others. This reconceptualization also puts the distinction between quantitative and qualitative research in a new light. There are differences between the two kinds of research--for example, quantitative research directs more attention to concretely specifying phenomena--but the contrast is less fundamental than most researchers think. From my vantage point, both types of research are aimed at learning about concretely meaningful practices and both are pursued by investigators who are themselves participants in the world of practices.
In their commentary, Dawson et al. suggested that my view is a transitional one because, while it attempts to integrate quantitative and qualitative methods, it comes down on the side of interpretation, privileges qualitative research over quantitative, and excludes positivist approaches. They claimed that their problem-focused methodological pluralism represents a fully integrative model because it includes both positivist and what they call post-positivist approaches. In my opinion, it is the other way around. I believe that in order to integrate the two types of research we need to incorporate all useful examples of both types of work in a new overarching framework that differs from the notions that typically have served to guide each kind of inquiry in the past. As I see it, Dawson et al.'s position is a transitional attempt at integration because it does not go beyond calling for blending the two approaches and their guiding viewpoints. Remarks Yanchar offered in his position paper about mixed-model approaches very effectively present the problems with this strategy for integration (also see Yanchar & Williams, in press). By contrast, I believe that my approach offers the requisite appropriately inclusive overarching framework, which itself is derived from a hermeneutic perspective based on practices. In particular, in this rejoinder, I have tried to show that my approach does not exclude what others would call ``strong'' quantitative procedures. In addition, my approach does not subordinate this type of quantitative research to ``soft'' quantitative research, nor does it lead to subordinating quantitative research to qualitative. I believe that all of these research endeavors represent ways of understanding concretely meaningful phenomena while they differ in the degree to which they focus on concretely specifying those phenomena versus characterizing them in meaning-laden terms. All, however, are interpretive.
I will conclude with some comments on a related issue: what can we say about when to use quantitative and/or qualitative approaches? All three commentaries include the idea that choice of methods should depend on the research problem at hand. I agree with this viewpoint. In fact, I believe it is another example of the limits of inquiry, a notion that is central to my perspective. General considerations can only provide what might be called an ``outer envelope'' for thinking about how to proceed in any given research situation. This outer envelope tells us that we need to find some interpretive method for investigating the phenomenon of interest, that the phenomenon is concretely meaningful in nature, and that the challenge is to find a method or set of methods that is appropriate for this particular problem given where the possible methods fall along a continuum that ranges from the concrete to the meaning laden--although all points along this continuum have concrete and meaningful aspects. Beyond this, however, we must decide just how to explore the particular research problem at hand as investigators who ultimately pursue our investigations--as Dawson et al. said--in medias res.
References
1. Dawson, T.L. (2006). The Lectical™ Assessment System. Retrieved September 26, 2006, from http://www.lectica.info.
2. Dawson, T.L., Fischer, K. W., & Stein, Z. (2006). Reconsidering qualitative and quantitative research approaches: A cognitive developmental perspective. New Ideas in Psychology, 24, 229-239.
3. Fischer, K.W. (1980). A theory of cognitive development: The control and construction of hierarchies of skills. Psychological Review, 87, 477-531.
4. Fischer, K.W., & Bidell, T.R. (1998). Dynamic development of psychological structures in action and thought. In W. Damon (Series Ed.) & R.M. Lerner (Vol. Ed.), Handbook of child psychology: Vol. 1. Theoretical models of human development (5th ed., pp. 467-561). New York: Wiley.
5. Merleau-Ponty, M. (1962). Phenomenology of perception (C. Smith, Trans.). London: Routledge and Kegan Paul.
6. Stam, H.J. (2006). Pythagoreanism, meaning and the appeal to number. New Ideas in Psychology, 24, 240-251.
7. Stiles, W.B. (2006). Numbers can be enriching. New Ideas in Psychology, 24, 252-262.
8. Strand, P.S. (2002). Coordination of maternal directives with preschoolers' behavior: Influence of maternal coordination training on dyadic activity and child compliance. Journal of Clinical Child Psychology, 31, 6-15.
9. Sugarman, J., & Martin, J. (2005). Toward an alternative psychology. In B. D. Slife, J.S. Reber, & F.C. Richardson (Eds.), Critical thinking about psychology: Hidden assumptions and plausible alternatives (pp. 251-266). Washington, DC: APA Books.
10. Westerman, M.A. (1990). Coordination of maternal directives with preschoolers' behavior in compliance problem and healthy dyads. Developmental Psychology, 26, 621-630.
11. Westerman, M.A. (2004). Theory and research on practices, theory and research as practices: Hermeneutics and psychological inquiry. Journal of Theoretical and Philosophical Psychology, 24, 123-156.
12. Westerman, M.A. (2006). Quantitative research as an interpretive enterprise: The mostly unacknowledged role of interpretation in research efforts and suggestions for explicitly interpretive quantitative investigations. New Ideas in Psychology, 24, 189-211.
13. Westerman, M.A., & Steen, E. M. (in press). Going beyond the internal-external dichotomy in clinical psychology: The theory of interpersonal defense as an example of a participatory model. Theory & Psychology.
14. Wood, D., & Middleton, D. (1975). A study of assisted problem-solving. British Journal of Psychology, 66, 181-191.
15. Yanchar, S.C. (2006). On the possibility of contextual-quantitative inquiry. New Ideas in Psychology, 24, 212-228.
16. Yanchar, S.C., & Williams, D.D. (in press). Reconsidering the compatibility thesis and eclecticism: Five proposed guidelines for method use. Educational Researcher.