MDDE 602 - Research Methods in Distance Education

My instructor is Dr. Cynthia Blodgett


This is a core course in the Master of Distance Education degree program on research design and data collection methods. The course was created for students in a professional distance education program. It focuses on the tenets of sound research practice so that students can make reasoned judgments about research they read or undertake, and it develops an understanding of the relationship between research and knowledge development in distance education.

On completion of MDDE 602, students will understand and be able to evaluate project design, data collection, and data analyses common in academic and professional journals. Students who wish to complete a project or thesis will have foundational knowledge in research design and methods, including a decision-making framework for identifying research questions and choosing an appropriate research design.


Calendar to help organize myself

My Assignments

Grade: 13.5/15 = 90%
  • Note: I could have improved the essay by expanding the discussion of the problem in DE and connecting the pros and cons to that problem. I had one page left that I could have used for this purpose.
  • If you choose to write about qualitative and quantitative methods, I recommend reading the relevant chapters a few weeks before they are assigned. It would have been easier for me if I had done so.

Course Objectives

  1. Understanding the research process: It is the role of those with a graduate-level education to manage society’s knowledge base, such that it is an appropriate and usable entity to guide and shape human existence. The research process is the mechanism by which society’s knowledge base is developed and managed. Understanding the research process teaches students how new knowledge is generated and evaluated, and former knowledge is checked, replicated and reconstituted.
  2. Differentiating between small ‘r’ and big ‘R’ research: In this course, small ‘r’ research means research completed to develop and inform our individual knowledge and decisions. Small ‘r’ research taps the collective knowledge base in order to develop our own! Big ‘R’ research refers to adding to the collective knowledge held by society. Big ‘R’ research starts with a comprehensive understanding of what society knows about a topic. Research is then designed to replicate, verify or augment that which is already known. This research may be descriptive, exploratory (in reference to possible relationships between concepts) or explanatory (testing the plausibility of cause-and-effect relationships between concepts). Graduate education focuses on big ‘R’ research.
  3. Becoming an informed consumer: Students will understand the research process so they can analyze and evaluate the research concepts, designs and processes they are exposed to. For students who wish to be discerning ‘consumers’ of knowledge, this course is adequate. For students who wish to be researchers themselves, more courses in specific research methods and research tools are required.
  4. Becoming critical thinkers: Learning about and informing research activity facilitates the development of well-reasoned argument. Learning the process of identifying a sound research or project question requires exposure to, and understanding of a rational, careful, thorough thought process. In addition, identification of the question must be made in reference to an already well-reasoned body of literature. Analyzing, synthesizing and evaluating current knowledge on an issue or question is a central part of the research design process. These activities contribute to the development of higher-order thinking skills desirable in graduate students in either professional or discipline-based programs, doing thesis or project work as their culminating assignment.

Module 1--Science, social science and the construction of knowledge

Relevant resource I found: Major Research Paradigms (YouTube video 5:45)


Week 1
History of science and research methods

Week 2 Epistemology, knowledge and research
  • Neuman, Chapter 4
  • Garrison, D. R. & Shale, D. (1994). Methodological issues: Philosophical differences and complementary methodologies. In Garrison, D. R. (Ed.), Research perspectives in adult education. Florida: Krieger. (21 pages). (Module 1, Unit 2.)

Week 3 Theory and knowledge construction
  • Neuman, Chapter 3
  • Garrison, D. R. (2000). Theoretical challenges for distance education in the 21st century. International Review of Research in Open and Distance Learning 1(1). Retrieved November 29, 2002, (9 pages). (Module 1, Unit 3.)
  • Jarvis, P. (1999). Theory re-conceptualized. In The practitioner-researcher: Developing theory from practice. San Francisco: Jossey-Bass. (10 pages). (Module 1, Unit 3.)
  • Jarvis, P. (1999). From practice to theory. In The practitioner-researcher: Developing theory from practice. San Francisco: Jossey-Bass. (7 pages). (Unit 1, topic 3.)

Recommended Readings

  • Gibbons, M. (1996). The new production of knowledge: Dynamics of science and research in contemporary societies. Thousand Oaks: Sage Publications. (Module 1, unit 1.)
  • Stronach, I. & MacLure, M. (1997). Educational research undone: The postmodern embrace. Philadelphia: Open University Press. (Module 1, unit 2.)
  • Dewar, T. D. (1998). Women and adult education: A postmodern perspective. In G. Selman & P. Dampier (Eds.), The foundations of adult education in Canada. Toronto: Thompson Educational Publishing. Available electronically at: (Module 1, unit 2.)
  • Beare, H. & Slaughter, R. (1993). Beyond scientific materialism: Accepting other ways of knowing. In H. Beare & R. Slaughter, Education in the twenty-first century. London: Routledge. (Module 1, unit 3.)
  • Heyman, R. (1994). Beyond mind reading: The power of strategic talk. In R. Heyman, Why didn't you say that in the first place? San Francisco: Jossey-Bass. (Module 1, unit 3.)


Neuman, Chapter 1

1. What sources of knowledge are alternatives to social research? (p. 2-7)
  • Authority: females are taught to make, select, mend, and clean clothing as part of a female focus on physical appearance and on caring for children or others in a family. Women do the laundry based on their childhood preparation.
  • Tradition: Women have done the laundry for centuries, so it is a continuation of what has happened for a long time.
  • Common Sense: Men just are not as concerned about clothing as much as women are, so it only makes sense that women do the laundry more often.
  • Media Myth: Television commercials show women often doing laundry and enjoying it, so they do laundry because they think it's fun.
  • Personal Experience: My mother and the mothers of all my friends did the laundry. My female friends did it for their boyfriends, but never the other way around. It just feels natural for the woman to do it.
    • Overgeneralization: statements that go far beyond what can be justified based on the data or empirical observations that one has.
    • Selective observation: making observations in a way that reinforces preexisting thinking, rather than observing in a neutral and balanced manner.
    • Premature closure: making a judgment, or reaching a decision and ending an investigation, before one has the amount or depth of evidence required by scientific standards.
    • Halo effect: allowing the prior reputation of persons, places, or things to color one's evaluations, rather than evaluating all in a neutral, equal manner.

2. Why is social research usually better than the alternatives?
  • Scientists gather data using specialized techniques and use the data to support or reject theories. (p. 8)

3. Is social research always right? Can it answer any question? Explain.
  • Knowledge from the alternatives is often correct, but knowledge based on research is more likely to be true and has fewer errors. It is important to recognize that research does not always produce perfect knowledge; nonetheless, compared to the alternatives, it is less likely to be flawed. (p. 2-3)

4. How did science and oracles serve similar purposes in different eras?
  • Before science became fully entrenched, people used prescientific or nonscientific methods, that is, methods that are less widely accepted in modern society (e.g., oracles, mysticism, magic, astrology, or spirits). Such prescientific systems were an unquestioned way to produce knowledge that people took to be true. Prescientific methods still exist but are secondary to science. Some people use nonscientific methods to study topics beyond the scope of science (e.g., religion, art, or philosophy). Today few people seriously question science as a legitimate way to produce knowledge about modern society. (p. 8)

5. What is the scientific community? What is its role?
  • Scientific community: A collection of people who share a system of attitudes, beliefs, and rules that sustains the production and advance of scientific knowledge.

6. What are the norms of the scientific community? What are their effects? (p. 11)
  1. Universalism. Irrespective of who conducts research (e.g., old or young, male or female) and regardless of where it was conducted (e.g., United States or France, Harvard or Unknown University), the research is to be judged only on the basis of scientific merit.
  2. Organized skepticism. Scientists should not accept new ideas or evidence in a carefree, uncritical manner. They should challenge and question all evidence and subject each study to intense scrutiny. The purpose of their criticism is not to attack the individual, but to ensure that the methods used in research can stand up to close, careful examination.
  3. Disinterestedness. Scientists must be neutral, impartial, receptive, and open to unexpected observations or new ideas. They should not be rigidly wedded to a particular idea or point of view. They should accept, even look for, evidence that runs against their positions and should honestly accept all findings based on high-quality research.
  4. Communalism. Scientific knowledge must be shared with others; it belongs to everyone. Creating scientific knowledge is a public act, and the findings are public property, available for all to use. The way in which the research is conducted must be described in detail. New knowledge is not formally accepted until other researchers have reviewed it and it has been made publicly available in a special form and style.
  5. Honesty. This is a general cultural norm, but it is especially strong in scientific research. Scientists demand honesty in all research; dishonesty or cheating in scientific research is a major taboo.

7. How does a study get published in a scholarly social science journal?

8. What steps are involved in conducting a research project? (p. 15)

page 15 from Neuman


9. What does it mean to say that research steps are not rigidly fixed? (p. 15)
  • Research is an interactive process in which steps blend into each other. A later step may stimulate reconsideration of a previous one. The process is not strictly linear; it may flow in several directions before reaching an end. Research does not abruptly end at step 7. It is an ongoing process, and the end of one study often stimulates new thinking and fresh research questions. (Iterative Design Process)

10. What types of people do social research? For what reasons? (p. 20)
  • Students, professors, professional researchers, and scientists in universities, research centers, and the government, with an army of assistants and technicians, conduct much social research.

  • Most quantitative data techniques are data condensers. They condense data in order to see the big picture...
  • Qualitative methods are best understood as data enhancers. Data are enhanced to see key aspects of cases more clearly.
  • Qualitative researchers begin with a self-assessment and reflections about themselves as situated in a sociohistorical context. It is a highly self-aware acknowledgment of social self or of a researcher's position in society. (p. 14/15)
  • We need to be aware of our hidden assumptions and biases.
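The "condenser versus enhancer" contrast can be made concrete. Here is a minimal Python sketch, using hypothetical Likert-scale survey data, showing what "condensing" looks like in practice: many individual observations reduce to a handful of summary numbers.

```python
from statistics import mean, stdev
from collections import Counter

# Hypothetical Likert-scale responses (1 = strongly disagree ... 5 = strongly agree)
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 1, 4, 3, 5, 4]

# "Condensing": fifteen individual observations reduce to a few summary numbers
summary = {
    "n": len(responses),
    "mean": round(mean(responses), 2),
    "sd": round(stdev(responses), 2),
    "counts": dict(sorted(Counter(responses).items())),
}
print(summary)
```

A qualitative analysis of the same topic would instead keep the full responses (e.g., interview transcripts) and enhance them with context, which is why Neuman calls qualitative methods data enhancers.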
external image deconstruction.JPG

Neuman, Chapter 2

1. When is exploratory research used, and what can it accomplish?
  • Research in which the primary purpose is to examine a little-understood issue or phenomenon to develop preliminary ideas and move toward refined research questions by focusing on the "what" question.
  • Purpose: To gain background information and better understand and clarify a problem (from this ppt)
  • Can be used to develop hypotheses and to develop questions to be answered
  • Can be used to help a researcher understand how to measure something.
  • Exploratory research is less formal, sometimes even unstructured. (Techniques for exploratory research: pilot studies, i.e., sample experiments with fewer subjects, which can be very basic; focus groups, which are like group interviews; surveys; case studies.)
  • Exploratory research is generally a precursor to a more formal study. It helps save time, resources, and lives.
  • If a researcher is starting a new project, they probably should start with exploration.
  • Results from exploratory research are generally limited.

2. What types of results are produced by a descriptive research study? (from the same ppt)
  • Used to answer questions of who, what, where, when, and how, but not why. What is the current status of a phenomenon?
  • Descriptive research is generally quantitative.
  • Techniques of descriptive research: Surveys, Correlation studies, Observation studies, Interviews
  • Cannot answer questions of causality
  • Descriptive research can help understand a topic and lead to causal analysis.
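The "describes but cannot answer causality" point can be illustrated with the core statistic of the correlation studies listed above. This is a small Python sketch with entirely hypothetical data: a strong Pearson r describes an association, but it cannot, by itself, say why the association exists.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient: strength of linear association, not causation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical descriptive data: weekly study hours vs. final grade for eight students
hours = [2, 4, 5, 7, 8, 10, 11, 12]
grades = [55, 60, 62, 70, 74, 80, 83, 88]

r = pearson_r(hours, grades)
# An r close to +1 describes a strong positive association; an explanatory design
# (e.g., an experiment) would be needed to argue that studying causes the grades.
```

This is exactly the hand-off Neuman describes: descriptive work surfaces the association, and explanatory research then tests the causal claim.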

3. What is explanatory research? What is its primary purpose? (from the same ppt)
  • Explanatory research seeks to “explain” a phenomenon. It generally involves revealing causes, but can also be structural and interpretive. It builds on exploratory and descriptive research.
  • Techniques for explanatory research: experiments; quasi-experimental designs – experiments that lack random assignment.
  • Ultimately, we always want to explain what we are studying.
  • Exploration and description are a vital part of this.
  • Some researchers may want the description and couldn't care less about the explanation.

4. What are the major differences between basic and applied research? (from the same ppt)
  • Basic – aka pure, academic. Used to advance general knowledge. Understanding something for the sake of knowledge. Sometimes basic researchers do miss the big picture.
  • Basic research is really the source of all knowledge. Applied research needs it.
  • Basic research is often not presented to the public in an intelligible fashion.
  • Applied Research – applying knowledge gained from research for a particular application.
  • Applied Research – addresses more practical concerns (Examples: Which toy will be more popular? How effective are online classes? How can you reduce anticipatory nausea in cancer patients receiving chemotherapy?)

5. Who is likely to conduct basic research, and where are results likely to appear?
  • Hard-core scientists at the center of the scientific community conduct and consume most of the basic research. (p. 25)
  • Basic research is more likely to enter the public domain through publications. (p. 25)

6. Explain the differences among the three types of applied research.
Evaluation research
  • Does it work? What is the merit of a particular project?
  • Focuses on outcomes
  • Measures the effectiveness of a strategy or program.
  • Industry and business are likely to use this.
  • Example: Effectiveness of psychotherapy
  • Evaluation research can include:
    • Formative evaluation – built-in monitoring, continuous feedback (e.g., classroom effectiveness)
    • Summative evaluation – looks at program outcomes
  • Techniques: Experiments, Surveys, Many other types
  • Potential problems in evaluation research: unrealistic expectations, potential pressure from vested interests, the need to maintain objectivity during the research.

7. How do time-series, panel, and cohort studies differ?

8. What are some potential problems with cost-benefit analysis?

9. What is a needs assessment? What complications can occur when conducting one?

10. Explain the differences between qualitative and quantitative research.

Exploration: If little is known or understood, exploration is the purpose. For example, we know a great deal about distance education instructional design, but not as much about individual responses to various distance education instructional design modes. (From course notes)

Descriptive: If the phenomenon is well established, but little is known about the internal mechanisms and causal relationships embedded in the phenomenon, your purpose may be descriptive. This approach will identify critical aspects and the boundaries of an issue or event. A case study of a successful agency delivering distance education in rural areas will tell us what makes up that agency, what roles people play, the characteristics of students, the delivery modes used, etc. From this descriptive work, ideas about relationships among the characteristics emerge, and the research design process begins again. (From course notes)



Shulman Article

Dimensions of Analysis (p. 4-6)
  • General: nature of learning (How does learning occur?)
  • Focused: educational practice and policy (How do students' prior conceptions of conservation affect their learning of physics?)
  • Control: psychological laboratories, controlled classrooms, carefully designed questionnaires, inventories, or interviews (Argument: safer generalizations could be proposed.)
  • No Controls: real classrooms with real teachers. (Argument: impossible to generalize)
  • Traditionally: exclusively disciplinary specialists: psychologists, historians, philosophers, or sociologists.
  • Currently: a wider range of social scientists (anthropologists, linguists, economists), humanists, subject-matter specialists, and classroom teachers.
  • Traditionally: used psychology's experimental and correlational methods.
  • Currently: augmented by qualitative or field research methods. (ethnographic methods of anthropology, discourse analysis procedures from linguistics and sociolinguistics, "think-aloud" and other forms of protocol analysis from cognitive science, ...) Also, new quantitative techniques for analyzing data.
  • discover/invent new theoretical understandings of particular educational processes or phenomena
  • develop new methods, techniques, or strategies for solving specific problems
  • acquire a more complete description or accounting of the conditions associated with particular schools, students, or content areas
  • apply previously acquired understandings in the amelioration or improvement of current educational conditions, whether of practice or policy
  • connect or integrate previously distinct areas of theory, practice, or policy
  • improve particular forms of practice or to inform specific policies
  • test or extend a theoretical formulation in a related discipline such as psychology or sociolinguistics
  • evaluate or understand the impact of practice in a particular school or classroom
  • formulation of broad generalizations and principles
  • ...

Disciplined Inquiry
Research definition for Disciplined Inquiry – the systematic, controlled, empirical, and critical investigation of phenomena of interest to the decision-maker.
Disciplined Inquiry has moved away from the dichotomized Qualitative-versus-Quantitative model

external image ParadigmsLost1.jpg

to a revised model of Disciplined Inquiry.

external image ParadigmsLost2.jpg

Interesting: Education is not a discipline, but a field of study.
  • "[Education] is a field of study, a locus containing phenomena, events, institutions, problems, persons, and processes that themselves constitute the raw material for inquiries of many kinds. The perspectives and procedures of many disciplines can be brought to bear on the questions arising from and inherent in education as a field of study. As each of these disciplinary perspectives is brought to bear on the field of education, it brings with it its own set of concepts, methods, and procedures, often modifying them to fit the phenomena or problems of education. Such modifications, however, can rarely violate the principles defining those disciplines from which the methods were drawn." (Shulman, p. 9)

"different procedures are used to ask different questions and to solve different problems for different purposes." (p. 11)

  • Generalizability across people--generalization from the particular sample of individuals who are tested, taught, or observed in a given study to some larger population of individuals or groups of which they are said to be representative (p. 13)
  • Generalizability across situations--generalization is from the particular tasks or settings in which a piece of work is conducted to that population of tasks or settings that the research situation is claimed to represent (p. 14)

  • Correlationists: study the natural covariations occurring in nature. (Goal: to understand and exploit the natural and, presumably, enduring variations among individuals) (p. 17)

  • Experimentalists are interested in the variation they themselves create. The experimental method is one in which scientists change conditions in order to observe the consequences of those changes. They are interested in understanding how nature is put together, not by inspecting nature as it is but by introducing modifications or changes in nature in order to better understand the consequences of those changes for subsequent states. They argue that only through the systematic study of planned modifications can we distinguish causal relationships between events or characteristics from mere chance co-occurrences. (p. 15) (Goal: to create conditions to reduce those variations.) (p. 17)

  • One of the enduring problems in research methodology has been the tendency to treat selection of method as primarily a technical question not associated with the underlying theoretical or substantive rationale of the research to be conducted. Selecting the method most appropriate for a particular disciplined inquiry is one of the most important, and difficult, responsibilities of a researcher. The choice requires an act of judgment grounded in knowledge both of methodology and of the substantive area of the investigation. (p. 17)


1. Consider the methodological positions outlined by Neuman and Garrison & Shale. Is there one position that attracts you the most? If, yes, what about it is most appealing and why? If no, why not? In this case, which position would you take doing research?

2. Find an article that reports the findings from research on a topic of interest. Evaluate the link between:
a) the authors/researchers apparent position regarding the production of knowledge and
b) the research methods. Are they consistent?

3. What is the possible outcome in research if data collection methods do not match a researcher's conceptions of knowledge?


Neuman, Chapter 4

1. What is the purpose of social research according to each approach?
2. How does each approach define social reality?
3. What is the nature of human beings according to each approach?
4. How are science and common sense different in each approach?
5. What is social theory according to each approach?
6. How does each approach test a social theory?
7. What does each approach say about facts and how to collect them?
8. How is value-free science possible in each approach? Explain.
9. How are the criticisms of positivism by the interpretive and critical science approaches similar?
10. How does the model of science and the scientific community presented in Chapter 1 relate to each of the three approaches?


My quick visualization of The Three Approaches to Social Research.

Garrison & Shale Article

Positivists: rationalistic (realist, objective) (link to ppt)
  • the belief in a logically ordered, objective reality that we can come to know better and better through science (link to a paper)
  • Criticisms of Positivism: Objectivity is a myth, Not truly systematic, Lacks external validity

Assumptions of Interpretivism (link to ppt)
  • Meanings are constructed by humans as they engage with the world they are interpreting.
  • Humans make sense of the world based on their historical and social perspective. They seek to understand the context and then make an interpretation of what they find which is shaped by their own experiences and backgrounds.
  • The basic generation of meaning is always social.
  • assumes that the social world does not have an existence separate from the investigator and that it is the manner by which the investigator interprets the social world that determines reality (link to a paper)
  • From the general to the particular, interpretivists look for the intricacies of everyday life and the characteristics of interaction with objects and individuals. Theory is developed inductively in that it emerges from research. This approach expects that perception (the process of selecting and interpreting information) results in varying realities. It is made up of descriptions that illuminate the way meaning is created and sustained. Interpretivism is associated with qualitative research methods.

Positivists and interpretivists commonly recognize that an investigator may affect the subjects of a research investigation.
  • The difference between the two ideological perspectives is that positivists believe that there are deliberate steps researchers can take to control investigator interference, other potential contaminants, and confounding variables, among others.
  • On the other hand, interpretivists respond with the view that efforts toward such objectivity are illusory at best. Indeed, interpretivists maintain that such interference is not limited to physical/social interaction (or other disturbances of the social “reality”). Such interference begins with the construction of the original research question: “In the view of (interpretivist), scientists construct an image of reality that reflects their own preferences and prejudices and their interactions with others” (Schutt, 1999, p. 393). (link to a paper)

Naturalism (philosophy): the system of thought holding that all phenomena can be explained in terms of natural causes and laws.
  • Factual or realistic representation, especially: a. The practice of describing precisely the actual circumstances of human life in literature. b. The practice of reproducing subjects as precisely as possible in the visual arts.
  • Criticisms of Naturalism: Superficial, Lacks rigor, Unscientific (unsystematic), Subjective, Lacks internal validity (link to ppt)

Phenomenology is used to refer to subjective experiences or their study.
  • The experiencing subject can be considered to be the person or self, for purposes of convenience. In phenomenological philosophy 'experience' is a considerably more complex concept than it is usually taken to be in everyday use. Instead, experience (or Being, or existence itself) is an 'in-relation-to' phenomenon, and it is defined by qualities of directedness, embodiment and worldliness which are evoked by the term 'Being-in-the-World'. Nevertheless, one abiding feature of 'experiences' is that, in principle, they are not directly observable by any external observer.
  • Describes and interprets the meaning of everyday experiences, concepts and phenomena from the perspective of several individuals
  • Phenomenology refers to the relationship between consciousness and social life, and the creation of social action and situations

Quantitative Research: two major subtypes
  • experimental
  • non-experimental
Qualitative Research: five major subtypes
  • Phenomenology: a form of qualitative research in which the researcher attempts to understand how one or more individuals experience a phenomenon.
  • Ethnography: a form of qualitative research focused on describing the culture of a group of people.
  • Case study research: a form of qualitative research that is focused on providing a detailed account of one or more cases.
  • Grounded theory research: a qualitative approach to generating a theory from the data that the researcher collects.
  • Historical research: research about events in the past.

external image 0200210402005.png
external image 0200210402005.png

  • Epistemology is a branch of philosophy that underpins the research process. It refers to the study of the character, scope, and nature of knowledge, with particular concern for the limits and validity of knowledge (episteme = knowledge, -ology = study).
  • Study of the origin, nature, and limits of human knowledge. Nearly every great philosopher has contributed to the epistemological literature. Some historically important issues in epistemology are:
    1. whether knowledge of any kind is possible, and if so what kind;
    2. whether some human knowledge is innate (i.e., present, in some sense, at birth) or whether instead all significant knowledge is acquired through experience (see empiricism; rationalism);
    3. whether knowledge is inherently a mental state (see behaviourism);
    4. whether certainty is a form of knowledge; and
    5. whether the primary task of epistemology is to provide justifications for broad categories of knowledge claim or merely to describe what kinds of things are known and how that knowledge is acquired.
    Issues related to (1) arise in the consideration of skepticism, radical versions of which challenge the possibility of knowledge of matters of fact, knowledge of an external world, and knowledge of the existence and natures of other minds. (link)
According to Plato, knowledge is a subset of that which is both true and believed
QN = Quantitative research approach
QL = Qualitative research approach

Theoretical Perspectives = Knowledge Paradigms
  • What and how can we know about it?
  • Overarching perspective concerning appropriate research practice, based on ontological and epistemological assumptions.

Methodology = Research Designs
  • How can we go about acquiring knowledge?
  • What procedures can we use to acquire it?
Objectivism holds that knowledge exists whether we are conscious of it or not. Objectivism (similar to behaviorism) states that reality is external and objective, and knowledge is gained through experiences.
(Empiricism, postpositivism...)

Constructionism holds that we come to “know” through our interactions. Interpretivism (similar to constructivism) states that reality is internal, and knowledge is constructed.
(Interpretivism, ... )

Subjectivism holds that everyone has a different understanding of what we know. Pragmatism (similar to cognitivism) states that reality is interpreted, and knowledge is negotiated through experience and thinking.
(Realism, pragmatism...)

Don't forget... Advocacy/Participatory
  • political empowerment, issue-oriented
  • collaborative, change-oriented

Look at Driscoll p 14, Figure 1.3


  • Symbolic interactionism
  • Phenomenology
  • Hermeneutics


  • Critical Inquiry
  • Feminism

Post Modernism

Experimental (QN)
Quasi experimental (QN)
Survey (QN)

Case Study (QL)
Ethnography (QL)
Phenomenological Research (QL)
Ethnology (QL)
Field Research (QL)
Narratives (QL)
Grounded Theory (QL)
Heuristic Inquiry (QL)
Naturalistic Inquiry (QL)
Action Research (QL)
Biography (QL)

Sequential (MM)
Concurrent (MM)
Transformative (MM)

Discourse Analysis
Feminist Standpoint Research
Statistical analysis (QN)
Performance, attitude, observation, census data (QN)
Instrument based questions (QN)
Predetermined (QN)
Content analysis (QN)
Descriptive (QN)
Document analysis (QN)
Conversation analysis (QN)

Text and image analysis (QL)
Interview, observation, document, audiovisual data (QL)
Open-ended questions (QL)
Emerging methods (QL)
Historical (QL)
Case study (QL)
Interview (QL)
Observation (QL)
· Participant
· Non-participant
Descriptive Statistics (QN)

Measurement and scaling

Focus group
Life history
Visual ethnographic methods
Data reduction
Theme identification
Comparative analysis
Cognitive mapping
Interpretative methods

  • What is out there to know?
  • the study of being - essentially studying questions of what kinds of entities exist
  • the study of what there is in the world (objects, properties, relations, etc.)
  • basic assumptions about the nature of reality.
  • involves the philosophy of reality
  • However, besides knowing what the experience of gravity is like, there is also the fact that gravity exists. How is it that gravity exists? What establishes its existence? What is the underlying nature of gravity? All these are ontological questions because they are concerned with the nature of existence.
  • claims and assumptions that are made about the nature of social reality, claims about what exists, what it looks like, what units make it up and how these units interact with each other. In short, ontological assumptions are concerned with what we believe constitutes social reality.(Blaikie, 2000, p. 8)
  • What is the nature of reality? If there were no human beings, might there still be galaxies, trees and rocks? Would they still be beautiful?

  • what and how can we know about it?
  • What is knowledge? How is knowledge acquired? What do people know? How do we know what we know?
  • the study of knowing - essentially studying what knowledge is and how it is possible
  • the study of knowledge and justification
  • basic assumptions about what we can know about reality, and about the relationship between knowledge and reality.
  • addresses how we come to know reality
  • there are things we know because we have experienced them sufficiently (like gravity), or we accept something as "known" because others have experienced it sufficiently.
  • Epistemology poses the following questions: What is the relationship between the knower and what is known? How do we know what we know? What counts as knowledge?
  • the possible ways of gaining knowledge of social reality, whatever it is understood to be. In short, claims about how what is assumed to exist can be known. (Blaikie, 2000, p. 8)
  • What is knowledge? What is the relationship between knowledge and reality? If there were no human beings, would there still be three basic types of rock? Did the unconscious exist before Freud?
  • ontology is the paving slab itself,
  • where epistemology is when philosophers wonder HOW we know it is a paving slab.

  • Overarching perspective concerning appropriate research practice, based on ontological and epistemological assumptions

  • How can we go about acquiring knowledge?
  • Identifies the particular practices used to attain knowledge of it.
  • Specifies how the researcher may go about practically studying whatever he / she believes can be known.

  • What procedures can we use to acquire it?

Comprehensive View of Validity (Standards advanced by Eisenhart and Howe, 1992)
Standard one: the conceptualization of validity is a unitary construct. Five general standards for educational research are: (p. 29/30)
  • Data collection and analysis techniques fit with the research questions. (Internal Validity)
  • Data collection and analysis techniques are applied effectively.(Internal Validity)
  • Background assumptions are coherent and consistent with research questions and methods. (alertness to and coherence of prior knowledge) (Internal Validity)
  • Conclusions are warranted and credibility strategies employed (e.g., looking for confirming and disconfirming evidence).(Internal Validity)
  • The study has value, in that it contributes understanding to the educational community and in that it has been ethically conducted. Comprehensiveness/overall warrant. (External Validity)

Background assumptions, both with respect to the literature and the researcher's own subjectivities, must be made explicit if they are to clarify, rather than obscure, research design and findings.

Regardless of epistemological or methodological perspective, "good research practice obligates the researcher to triangulate, that is, to use multiple methods, data sources, and researchers to enhance the validity of research findings" (p. 13). She went on to say that the use of any single method would be biased, as would the view of any single individual; therefore, triangulation "is the methodological counterpart to intersubjective agreement" (p. 31).

Internal Validity
  • Internal validity means there are no errors internal to the design of the research project. It is used primarily in experimental research to talk about possible errors or alternative explanations of results that arise despite attempts to institute controls.

External Validity
  • External validity is used primarily in experimental research. It is the ability to generalize findings from a specific setting and small group to a range of settings and people. It addresses the question: If something happens in a laboratory or among a particular group of subjects (e.g., college students), can the findings be generalized to the "real" (nonlaboratory) world or to the general public (nonstudents)?


Neuman, Chapter 3

1. How do concrete and abstract concepts differ? Give examples. (p. 54)
  • Concepts range from very concrete ones easily evident in the familiar empirical world to highly abstract mental creations far removed from direct, daily empirical life. Abstract concepts refer to aspects of the world we do not directly or easily experience but nonetheless help organize thought and expand understanding.
  • Concrete concepts, such as books or height, are defined by simple nonverbal processes.
  • Complex abstract concepts require formal dictionary-like definitions and are defined by other concepts. We often define higher-level, more abstract concepts with lower-level ones. Top, bottom, and distance are less abstract than height and are used in its definition. Similarly, the concept of aggression is more abstract than hit, slap, push, ...

2. How do researchers use ideal types and classifications to elaborate concepts? (p. 55)
  • Ideal types are pure, abstract models that define the essence of the phenomenon in question. They are mental pictures that define the central aspects of a concept.
  • for example, the ideal student.

3. How do concepts contain built-in assumptions? Give examples. (p. 52)
  • Concepts sometimes include things that are not observable or testable, and we accept them as a starting point. Often assumptions remain hidden or unstated.
  • For example, a theory about a book assumes a system of writing, people who can read, and the existence of paper. Without such assumptions, the idea of a book makes little sense.

4. What is the difference between inductive and deductive approaches to theorizing?
  • Deductive reasoning: propositions made up of concepts and their relationships are established first, and then data are collected to verify or refute the propositions.
  • Inductive reasoning: the process begins with data or concrete observations. Through thoughtful and rigorous examination of the data, propositions are generalized from the data. Sometimes researchers use inductive reasoning at the exploratory stage of reviewing an issue and then move to explanation, where they test the explanatory power of the propositions they created at the exploratory stage. This results in a cycle of theory building through generation, testing, regeneration, and retesting (an iterative design process).
(Figure: inductive vs. deductive reasoning)

5. Describe how the micro, meso, and macro levels of social reality differ.(p. 61/2)
  • Micro-level theory Social theory focusing on the micro level of social life that occurs over short durations (e.g., face-to-face interactions and encounters among individuals or small groups).
  • Meso-level theory Social theory focusing on the relations, processes, and structures at a midlevel of social life (e.g., organizations, movements, and communities) and events operating over moderate durations (many months, several years, or a decade).
  • Macro-level theory Social theory focusing on the macro level of social life (e.g., social institutions, major sectors of society, entire societies, or world regions) and processes that occur over long durations (many years, multiple decades, or a century or longer).

6. Discuss the differences between prediction and theoretical explanation. (p. 62/3)
  • Theoretical explanation: a logical argument that tells why something takes a specific form or occurs. It refers to a general rule or principle.
  • Prediction: a statement that something will occur.
  • It is easier to predict than to explain, and an explanation has more logical power than prediction because good explanations also predict.
  • An explanation rarely predicts more than one outcome, but the same outcome may be predicted by opposing explanations.
  • Although a prediction is less powerful than an explanation, many people are entranced by the dramatic visibility of a prediction.
  • A weak explanation can produce an accurate prediction. A good explanation depends on a well-developed theory and is confirmed by empirical observations (e.g., the earth revolves around the sun, rather than a turtle carrying the sun across the sky on its back).
  • explanation implies logically connecting what occurs in a specific situation to a more abstract or basic principle about "how things work."

7. What are the three conditions for causality? Which one is never completely demonstrated? Why? (p. 63-67)
  • temporal order:
    • cause must come before an effect,
    • This establishes the direction of causality (cause-->effect)
    • Simple causal relations are unidirectional
    • Most studies examine unidirectional relations
    • More complex theories specify reciprocal-effect causal relations (mutual causal relationship) or simultaneous causality
  • association: The co-occurrence of two events, characteristics, or factors such that when one happens/is present, the other is likely to happen/be present as well.
  • the elimination of plausible alternatives (this one is never completely demonstrated, because ruling out every possible alternative explanation is impossible)
    • A researcher tries to eliminate major alternative explanations in two ways: through built-in design controls and by measuring potential hidden causes.
    • Researchers also try to eliminate alternatives by measuring possible alternative causes. This is common in survey research and is called controlling for another variable.
  • *An implicit fourth condition is an assumption that a causal relationship makes sense or fits with broader assumptions or a theoretical framework.

8. Why do researchers use diagrams to show causal relationships?
  • Researchers often draw diagrams of the causal relations to present a simplified picture of a relationship and see it at a glance. Such symbolic representations supplement verbal descriptions of causal relations and convey complex information. They are a shorthand way to show theoretical relations.

9. How do structural and interpretive explanations differ from one another?
  • Structural: Aspects of social life are explained by noting where they fit within the larger structure emphasizing locations, interdependences, distances, or relations among positions in that structure. (p. 69)
  • Interpretive theorists attempt to discover the meaning of an event or practice by placing it within a specific social context. Understanding.

10. What is the role of the major theoretical frameworks in research?
  • Structural Functionalism (p. 75)
  • Major Concepts: system, equilibrium, dysfunction, division of labor
  • Key Assumptions: Society is a system of interdependent parts that is in equilibrium or balance. Over time, society has evolved from a simple to a complex type, which has highly specialized parts. The parts of society fulfill different needs or functions of the social system. A basic consensus on values or a value system holds society together.
    Exchange Theory (also Rational Choice)
  • Major Concepts: opportunities, rewards, approval, balance, credit
  • Key Assumptions: Human interactions are similar to economic transactions. People give and receive resources (symbolic, social approval, or material) and try to maximize their rewards while avoiding pain, expense, and embarrassment. Exchange relations tend to be balanced. If they are unbalanced, persons with credit can dominate others.
    Symbolic Interactionism
  • Major Concepts: self, reference group, role-playing, perception
  • Key Assumptions: People transmit and receive symbolic communication when they socially interact. People create perceptions of each other and social settings. People largely act on their perceptions. How people think about themselves and others is based on their interactions.
    Conflict Theory
  • Major Concepts : power, exploitation, struggle, inequality, alienation
  • Key Assumptions: Society is made up of groups that have opposing interests. Coercion and attempts to gain power are ever-present aspects of human relations. Those in power attempt to hold onto their power by spreading myths or by using violence if necessary.

Ideology and Theory (p. 50/51)
  • A nonscientific quasi-theory, often based on political values or faith, with assumptions, concepts, relationships among concepts, and explanations. It is a closed system that resists change, cannot be directly falsified with empirical data, and makes normative claims.
  • Ideologies are belief systems closed to contradictory evidence that use circular reasoning.
  • Ideologies selectively present and interpret empirical evidence.
  • "Don't confuse me with facts, I know I'm right!"
  • Contains a set of assumptions or a starting point
  • Explains what the social world is like, how/why it changes
  • Offers a system of concepts/ideas
  • Specifies relationships among concepts, tells what causes what
  • Provides an interconnected system of ideas

Ideology:
Offers absolute certainty
Has all the answers
Fixed, closed, finished
Avoids tests, discrepant findings
Blind to opposing evidence
Locked into specific moral beliefs
Highly partial
Has contradictions, inconsistencies
Rooted in specific position

Social Theory:
Conditional, negotiated understandings
Incomplete, recognizes uncertainty
Growing, open, unfolding, expanding
Welcomes tests, positive and negative evidence
Changes based on evidence
Detached, disconnected from a moral stand
Neutral, considers all sides
Strongly seeks logical consistency, congruity
Transcends/crosses social positions

Three major forms of theoretical explanation:
  • Causal explanation: Three conditions to establish causality: Temporal order, Association, Eliminating alternatives
  • Structural explanation: sequential theory, network theory, functional theory.
  • Interpretative explanation: The purpose of interpretive explanation is to foster understanding. The interpretive theorist attempts to discover the meaning of an event or practice by placing it within a specific social context. He or she tries to comprehend or mentally grasp the operation of the social world, as well as get a feel for something or to see the world as another person does. Because each person's subjective worldview shapes how he or she acts, the researcher attempts to discern others' reasoning and view of things. (p. 72)
p 70 Neuman


1. Consider an aspect of distance education that interests you. Create an informal list of all that you know about this phenomenon. How do you know these things? Can you see relationships between the different aspects of the phenomenon?

2. Identify aspects of your practice that are of interest to you. What unanswered questions do you have about these aspects of your practice?

Week 4
Neuman, Chapter 6

"Reciprocity of perspectives" or intersubjectivity: By making use of systems of signs and expressions, a person can not only communicate her subjective interpretations of the situation to her partner, but she can also cross-examine the typifications that her partner has imputed to her. As a result, partners in a human interaction may arrive at a consensus on their perspectives regarding their encounter. (ppt)

Module 2 -- The Research Process

Week 4 Creating a research design
  • Neuman, Chapter 6
  • Jarvis, P. (1999). Action research. In The practitioner-researcher: Developing theory from practice. San Francisco: Jossey-Bass. (13 pages). (Module 1, Unit 1.)
  • Saba, F. (2000). Research in distance education: A status report. International Review of Research in Open and Distance Learning, 1(1). Retrieved February 21, 2002 (9 pages). (Module 2, Unit 1.)

Week 5 Measurement and sampling
  • Neuman, Chapter 7 (pp. 169-189, 206-207) & Chapter 8

Week 6 Quantitative research design
  • Neuman, Chapters 9 & 10
  • McLean, S. & Morrison, D. (2000). Sociodemographic characteristics of learners and participation in computer conferencing. Journal of Distance Education, 15(2), 17-36. (20 pages). (Module 2, Unit 3).

Week 7 Qualitative research design
  • Neuman, Chapter 13 & 14
  • McMillan, J. H. & Schumacher, S. (2001). Case study design. In Research in education: A conceptual introduction. Toronto: Addison Wesley Longman, Inc. (3 pages). (Module 2, Unit 4. )

Recommended Readings
Conrad, D. (2002). Deep in the hearts of learners: Insights into the nature of online community. Journal of Distance Education, 17(1), 1-19. (20 pages). (Module 2, Unit 4).
Hoepfl, M. C. (1997). Choosing qualitative research: A primer for technology education researchers. Journal of Technology Education, 9(1). (Module 2, unit 4.)


  1. Review research design opportunities that lead to quantitative data or qualitative data.
  2. Contrast the character of research designs that generate quantitative data versus qualitative data.
  3. Review the requirements of a causal relationship.
  4. Identify the requirements of a hypothesis, hypothesis testing, and a null hypothesis design.
  5. Explain units of analysis, ecological fallacy, reductionism, tautology, teleology and spuriousness in relation to causal hypotheses.
  6. Explain the place of ‘context’ in qualitative research.
  7. Consider the practitioner-researcher and the research design used.
  8. Outline research design issues in the field of distance education.

Chapter 6
1. What are the implications of saying that qualitative research uses more of a logic in practice than a reconstructed logic? (p. 151)

2. What does it mean to say that qualitative research follows a nonlinear path? In what ways is a nonlinear path valuable? (p. 152)

3. Describe the differences between independent, dependent, and intervening variables. (p. 161)

4. Why don't we prove results in social research? (p. 162/3)
  • Evidence supports or confirms, but does not prove, the hypothesis.

5. Take a topic of interest and develop two research questions for it. For each research question, specify the units of analysis and universe.

6. What two hypotheses are used if a researcher uses the logic of disconfirming hypotheses? Why is negative evidence stronger? (p. 164)

7. Restate the following in terms of a hypothesis with independent and dependent variables: "The number of miles a person drives in a year affects the number of visits a person makes to filling stations, and there is a positive unidirectional relationship between the variables."
  • The number of miles a person drives in a year (independent) will increase the number of visits a person makes to filling stations (dependent).

8. Compare the ways quantitative and qualitative researchers deal with personal bias and the issue of trusting the researcher.

9. How do qualitative and quantitative researchers use theory? (bottom p.173)
  • Researchers use general theoretical issues as a source of topics. Theories provide concepts that researchers turn into variables as well as the reasoning or mechanism that helps researchers connect variables into a research question.

11. Explain how qualitative researchers approach the issue of interpreting data. Refer to first-, second-, and third-order interpretations. (p. 160)

Week 5 Measurement and sampling

Chapter 7
1. What are the three basic parts of measurement, and how do they fit together?
  1. Process: Abstract Construct --> Conceptualization--> Conceptual Definition --> Operationalization --> Indicator/Measure
  2. Reliability and Validity:
  3. Measurement:

2. What is the difference between reliability and validity, and how do they complement each other?
  • Reliability means dependability or consistency.
    • Measurement reliability: the numerical results produced by an indicator do not vary because of characteristics of the measurement process or measurement instrument itself. (bathroom scale)
    • Stability reliability is reliability across time. It addresses the question: Does the measure deliver the same answer when applied in different time periods? (bathroom scale)
    • Representative reliability: reliability across subpopulations or groups of people. It addresses the question: Does the indicator deliver the same answer when applied to different groups? An indicator has high representative reliability if it yields the same result for a construct
      when applied to different subpopulations.
    • Equivalence reliability applies when researchers use multiple indicators--that is, when a construct is measured with multiple
      specific measures. Does the measure yield consistent results across different indicators?
  • Validity: sometimes used to mean "true" or "correct." When a researcher says that an indicator is valid, it is valid for a particular purpose and definition.
  • Validity and reliability are usually complementary concepts, but in some special situations they conflict with each other. Sometimes, as validity increases, reliability is more difficult to attain, and vice versa. This occurs when the construct has a highly abstract and not easily observable definition.
    • Face Validity: judgment by the scientific community that the indicator really measures the construct. It addresses the question: On the face of it, do people believe that the definition and method of measurement fit?
    • Content Validity: addresses the question: Is the full content of a definition represented in a measure? A conceptual definition holds ideas; it is a "space" containing ideas and concepts. Measures should sample or represent all ideas or areas in the conceptual space. Content validity involves three steps. First, specify the content in a construct's definition. Next, sample from all areas of the definition. Finally, develop one or more indicators that tap all of the parts of the definition.
    • Criterion Validity: uses some standard or criterion to indicate a construct accurately. The validity of an indicator is verified by comparing it with another measure of the same construct in which a researcher has confidence. There are two subtypes of this kind of validity.
      • Concurrent- agrees with a preexisting measure
      • Predictive-agrees with future behavior (conservatism measure-test on conservative groups and they should score high, then test on liberal groups and they should score low. If true then the measure is "validated" by the pilot testing.)
    • Construct Validity: measures with multiple indicators. It addresses the question: If the measure is valid, do the various indicators operate in a consistent manner?
      • Convergent Validity. This kind of validity applies when multiple indicators converge or are associated with one another and multiple measures of the same construct hang together or operate in similar ways.
      • Discriminant Validity: the opposite of convergent validity and means that the indicators of one construct hang together or converge,
        but also are negatively associated with opposing constructs. (My measure of conservatism has discriminant validity if the 10 conservatism items both hang together and are negatively associated with the 5 liberalism ones.)
Neuman, Page 195

Neuman text, page 197
3. What are ways to improve the reliability of a measure? (p. 190)
  • Four ways to increase the reliability of measures are
  • clearly conceptualize constructs,
  • use a precise level of measurement
  • use multiple indicators, and
  • use pilot tests.

4. How do the levels of measurement differ from each other?

5. What are the differences between convergent, content, and concurrent validity? Can you have all three at once? Explain your answer.
  • convergent validity means that multiple measures of the same construct operate in the same way
  • content validity means whether or not your instrument reflects the content you are trying to measure.
  • concurrent validity refers to a measurement's ability to correlate or vary directly with a currently accepted measure of the same construct.
  • Yes, one can have all three at once. The measure can correlate with previously accepted measures, and the measures of a construct can be convergent, and I would hope that the measure accurately reflects the construct such that it has content validity.

6. Why are multiple indicators usually better than one indicator?
  • Multiple indicators give confidence that the measure is reliable. (page 190)

7. What is the difference between the logic of a scale and that of an index?
  • Index: The summing or combining of many separate measures of a construct or variable to create a single score. (Consumer Price Index)
  • Scale: A class of quantitative data measures often used in survey research that captures the intensity, direction, level, or potency of a variable construct along a continuum. Scales are common in situations in which a researcher wants to measure how an individual feels or thinks about something.(Likert Scale) (page 207)
  • A researcher can combine several Likert scale questions into a composite index if they all measure a single construct. (p. 208)
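As a minimal sketch of that idea, the snippet below sums three Likert items (1 = strongly disagree ... 5 = strongly agree) into one composite score. The item wordings are borrowed from the introversion-scale example in these notes; the response values are invented for one hypothetical respondent.

```python
# Combining several Likert-scale items into a single composite index score.
# Response values are invented for one hypothetical respondent.
responses = {
    "I blush easily.": 4,
    "At parties, I tend to be a wallflower.": 5,
    "I prefer small gatherings to large gatherings.": 3,
}

composite = sum(responses.values())     # unweighted composite score
mean_item = composite / len(responses)  # mean item score carries the same information

print(composite)  # 12
print(mean_item)  # 4.0
```

A weighted index would multiply each item by a weight before summing; the unweighted sum above implicitly weights every item equally.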

8. Why is unidimensionality an important characteristic of a scale?
  • A scale places individuals along a single continuum from one extreme to the other (e.g., a Likert scale); if its items tapped more than one dimension, a single score would be ambiguous.

9. What are advantages and disadvantages of weighting indexes?

10. How does standardization make comparisons easier?

The difference between a scale and an index:
Scale: A scale is a cluster of items (questions) that taps into a unitary dimension or single domain of behavior, attitudes, or feelings. They are sometimes called composites, subtests, schedules, or inventories. Aptitude, attitude, interest, performance, and personality tests are all measuring instruments based on scales. A scale is always unidimensional, which means it has construct and content validity. A scale is always at the ordinal or interval level, but it's conventional for researchers to treat them as interval or higher. Scales are predictive of outcomes (like behavior, attitudes, or feelings) because they measure underlying traits (like introversion, patience, or verbal ability). It's probably an overstatement, but scales are primarily used to predict effects, as the following example shows:

An Example of a Scale Measuring Introversion:
  • I blush easily.
  • At parties, I tend to be a wallflower.
  • Staying home every night is all right with me.
  • I prefer small gatherings to large gatherings.
  • When the phone rings, I usually let it ring at least a couple of times.

Index: An index is a set of items (questions) that structures or focuses multiple yet distinctly related aspects of a dimension or domain of behavior, attitudes, or feelings into a single indicator or score. They are sometimes called composites, inventories, tests, or questionnaires. Like scales, they can measure aptitude, attitude, interest, performance, and personality, but the only kind of validity they have is convergent (hanging together), content, and face validity. It is possible to use some statistical techniques (like factor analysis) to give them better construct validity (or factor weights), but it is a mistake to think of indexes as multidimensional (no such word exists) since even the most abstract constructs are assumed to have unidimensional characteristics. Indexes are usually at the ordinal, but mostly interval level. Indexes can be predictive of outcomes (again, using statistical techniques like regression), but they are designed mainly for exploring the relevant causes or underlying symptoms of traits (like criminality, psychopathy, or alcoholism). It's probably an overstatement, but indexes are used primarily to collect causes or symptoms, as the following example shows:

An Example of an Index Measuring Delinquency:
  • I have defied a teacher's authority to their face.
  • I have purposely damaged or destroyed public property.
  • I often skip school without a legitimate excuse.
  • I have stolen things worth less than $50.
  • I have stolen things worth more than $50.
  • I use tobacco.
  • I like to fight.
  • I like to gamble.
  • I drink beer, wine, or other alcohol.
  • I use illicit drugs.

Neuman, Chapter 8
1. When is purposive sampling used?
  • Purposive Sampling: A nonrandom sample in which the researcher uses a wide range of methods to locate all possible cases of a highly specific and difficult-to-reach population. (p. 222) Get all possible cases that fit particular criteria, using various methods. (p. 220)
  • It is used in exploratory research or in field research. (p. 222)
  • It uses the judgment of an expert in selecting cases or it selects cases with a specific purpose in mind.
  • Purposive sampling is appropriate to select unique cases that are especially informative.
  • Another situation for purposive sampling occurs when a researcher wants to identify particular types of cases for in-depth investigation. The purpose is to gain a deeper understanding of types. (p. 223)
  • Intensive interviews are a device for generating insights, anomalies, and paradoxes, which later may be formalized into hypotheses that can be tested by quantitative social science methods. (p. 223)

2. When is the snowball sampling technique appropriate?(p.222/3)
  • Social researchers are often interested in an interconnected network of people or organizations.
  • A nonrandom sample in which the researcher begins with one case and then, based on information about interrelationships from that case, identifies other cases.
  • The crucial feature is that each person or unit is connected with another through a direct or indirect linkage.
  • Snowball sampling (also called network, chain referral, or reputational sampling) is a method for sampling (or selecting) the cases in a network. It is based on an analogy to a snowball, which begins small but becomes larger as it is rolled on wet snow and picks up additional snow. Snowball sampling is a multistage technique. It begins with one or a few people or cases and spreads out on the basis of links to the initial cases.
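The multistage, link-following logic of snowball sampling can be sketched as a simple breadth-first walk over a referral network. The network, names, and stopping rule below are entirely invented for illustration.

```python
from collections import deque

# Toy referral network: each person names the contacts they refer the
# researcher to. Entirely invented data.
referrals = {
    "Ana": ["Ben", "Caro"],
    "Ben": ["Dee"],
    "Caro": ["Dee", "Eli"],
    "Dee": [],
    "Eli": ["Ana"],
}

def snowball(seed, max_cases):
    """Start from one seed case and spread outward along referral links."""
    sampled, queue, seen = [], deque([seed]), {seed}
    while queue and len(sampled) < max_cases:
        person = queue.popleft()
        sampled.append(person)  # "interview" this case
        for contact in referrals.get(person, []):
            if contact not in seen:  # follow each new referral only once
                seen.add(contact)
                queue.append(contact)
    return sampled

print(snowball("Ana", 4))  # ['Ana', 'Ben', 'Caro', 'Dee']
```

Each stage of the queue corresponds to one "wave" of the snowball: the seed, the seed's referrals, their referrals, and so on until the target number of cases is reached.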

3. What is a sampling frame and why is it important? (p.224/5/6)
  • A list of cases in a population, or the best approximation of it.
  • A researcher operationalizes by developing a specific list that closely approximates all the elements in the population--this list is a sampling frame.
  • A good sampling frame is crucial to good sampling. A mismatch between the sampling frame and the conceptually defined population can be a major source of error. Just as a mismatch between the theoretical and operational definitions of a variable creates invalid measurement, so a mismatch between the sampling frame and the population causes invalid sampling. Researchers try to minimize mismatches.
  • With a few exceptions (e.g., a list of all students enrolled at a university), sampling frames are almost always inaccurate.

4. Which sampling method is best when the population has several groups and a researcher wants to ensure that each group is in the sample?
  • Quota Sampling for nonprobability samples (p. 220)
  • Stratified Sampling: a researcher first divides the population into subpopulations (strata) on the basis of supplementary information. After dividing the population into strata, the researcher draws a random sample from each subpopulation. In stratified sampling, the researcher controls the relative size of each stratum, rather than letting random processes control it. This guarantees representativeness or fixes the proportion of different strata within a sample. (p. 231)
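The logic of stratified sampling can be sketched in a few lines of Python. This is a minimal illustration, not Neuman's procedure; the population, strata, and sizes are invented for the example.

```python
import random

def stratified_sample(population, strata_key, total_n, seed=0):
    """Draw a proportionate stratified random sample.

    population: list of dicts; strata_key: the field defining the strata.
    The researcher fixes each stratum's share of the sample to match its
    share of the population, rather than leaving it to random chance.
    """
    rng = random.Random(seed)
    # Divide the population into strata using the supplementary information.
    strata = {}
    for person in population:
        strata.setdefault(person[strata_key], []).append(person)
    sample = []
    for members in strata.values():
        # Fix this stratum's size in the sample, then draw randomly within it.
        n = round(total_n * len(members) / len(population))
        sample.extend(rng.sample(members, n))
    return sample

# Invented population: 80 undergraduates and 20 graduates.
people = [{"id": i, "level": "undergrad" if i < 80 else "grad"}
          for i in range(100)]
picked = stratified_sample(people, "level", total_n=10)
# A 10-person sample keeps the 80/20 split: 8 undergrads and 2 grads.
```

Note how the guarantee of representativeness comes from fixing each stratum's count before any random draw happens.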

5. How can you get a sampling interval from a sampling ratio?
  • Take the inverse of the sampling ratio: (sample size/population)^-1
  • ex. 300/900 = .33333; (.33333)^-1 = 3, so the sampling interval is 3
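That arithmetic can be checked with a tiny helper (the 300-from-900 numbers are just an illustration):

```python
def sampling_interval(sample_size, population):
    """Sampling interval = inverse of the sampling ratio (sample/population)."""
    ratio = sample_size / population
    return round(ratio ** -1)

# Illustrative: a sample of 300 drawn from a population of 900.
interval = sampling_interval(300, 900)  # ratio 1/3, so select every 3rd case
```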

6. When should a researcher consider using probability proportionate to size?
  • When the cluster groups are of different sizes. For instance, when selecting 300 students from 3,000 universities, some universities are large and some small. (A principle of random sampling is that each element has an equal chance to be selected into the sample.) (p. 237)
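A minimal sketch of the idea, assuming invented cluster names and sizes: each cluster's chance of being drawn is weighted by its size, so an element in a large cluster and an element in a small cluster keep roughly equal overall chances of selection.

```python
import random

def pps_draw(clusters, n_draws, seed=0):
    """Draw clusters with probability proportionate to size (with replacement)."""
    rng = random.Random(seed)
    names = list(clusters)
    sizes = [clusters[name] for name in names]
    # random.choices weights each cluster by its enrollment.
    return rng.choices(names, weights=sizes, k=n_draws)

# Invented example: three universities of very different sizes.
unis = {"Large U": 40000, "Mid U": 10000, "Small U": 1000}
chosen = pps_draw(unis, n_draws=5)
# "Large U" is about 40x more likely per draw than "Small U".
```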

7. What is the population in random-digit dialing? Are sampling frame problems avoided? Explain. (p. 237)
  • A technique used in research projects in which the general public is interviewed by telephone.
  • A researcher using RDD randomly selects telephone numbers, thereby avoiding the problems of telephone directories. The population is telephone numbers, not people with telephones.
  • Because three kinds of people are missed when the sampling frame is a telephone directory: people without telephones, people who have recently moved, and people with unlisted numbers. Those without phones (e.g., the poor, the uneducated, and transients) are missed in any telephone interview study, but 95 percent of people have a telephone in advanced industrialized nations.
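RDD's core step, generating numbers at random so unlisted and recently assigned numbers are reachable, can be sketched like this (the area code and North-American-style format are illustrative assumptions, not part of the text):

```python
import random

def random_digit_dial(n_numbers, area_code="416", seed=0):
    """Generate random 7-digit local numbers. The population is phone
    numbers, not a directory, so unlisted and new numbers are included."""
    rng = random.Random(seed)
    return [f"{area_code}-{rng.randrange(10**7):07d}" for _ in range(n_numbers)]

sample = random_digit_dial(3)
# Many generated numbers will be unassigned, businesses, or fax lines --
# a practical cost of avoiding directory-based sampling frames.
```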

8. How do researchers decide how large a sample to use?

9. How are the logic of sampling and the logic of measurement related?

10. When is random-digit dialing used, and what are its advantages and disadvantages?

Week 6 Quantitative research design

Neuman, Chapter 9
1. What are the seven elements or parts of an experiment?
  1. Treatment or independent variable
  2. Dependent variable
  3. Pretest
  4. Posttest
  5. Experimental group
  6. Control group
  7. Random assignment

2. What distinguishes preexperimental designs from the classical design?
  • Preexperimental designs are experimental designs that lack random assignment or use shortcuts and are much weaker than the classical experimental design. They are substituted in situations in which an experimenter cannot use all the features of a classical experimental design, but they have weaker internal validity.
Neuman, page 256

3. Which design permits the testing of different sequences of several treatments? (p. 257)
  • Latin Square Designs: researchers interested in how several treatments given in different sequences or time orders affect a dependent variable use a Latin square design.

4. A researcher says, "It was a three by two design, with the independent variables level of fear (low, medium, high) and ease of escape (easy/difficult) and the dependent variable anxiety." What does this mean? What is the design notation, assuming that random assignment with posttest only was used?
  • Three by two design: there are two numbers, 3 and 2, so there are two independent variables; one variable has three levels and the other variable has two levels. (p. 259)
  • I think the design notation would show six randomly assigned groups, one for each combination of fear level and ease of escape, each receiving its treatment combination followed by a posttest only.

5. How do the interrupted and the equivalent time-series designs differ? (p. 256/7)
  • In the interrupted time-series design, a researcher uses one group and makes multiple pretest measures before and after the treatment.
  • The equivalent time-series design is another one-group design that extends over a time period. Instead of one treatment, it has a pretest, then a treatment and posttest, then treatment and posttest, then treatment and posttest, and so on.

6. What is the logic of internal validity and how does the use of a control group fit into that logic? (p. 259)
  • Researchers use control groups to eliminate potential alternative explanations for associations between the treatment and dependent variable.
  • Internal validity is when the hypothesized independent variable alone affects the dependent variable.

7. How does the Solomon four-group design show the testing effect? (p. 257)
  • A Solomon four-group design is used if the subjects might learn from taking the test itself. The researcher randomly divides clients into four groups. Two groups receive the pretest: one of them gets the new training method and the other gets the old method. Another two groups receive no pretest; one of them gets the new method and the other the old method. All four groups are given the same posttest and the posttest results are compared. If the two treatment (new method) groups have similar results, and the two control (old method) groups have similar results, then the researcher knows pretest learning is not a problem. If the two groups with a pretest (one treatment, one control) differ from the two groups without a pretest, then the researcher concludes that the pretest itself may have an effect on the dependent variable.

8. What is the double-blind experiment and why is it used? (p. 263/4)
  • Designed to control researcher expectancy. In it, people who have direct contact with subjects do not know the details of the hypothesis or the treatment. It is double blind because both the subjects and those in contact with them are blind to details of the experiment.
  • The double-blind design is nearly mandatory in medical research because experimenter expectancy effects are well recognized.

9. Do field or laboratory experiments have greater internal validity? External validity? Explain.
  • Laboratory experiments tend to have greater internal validity but lower external validity. They are logically tighter and better controlled, but less generalizable.
  • Field experiments tend to have greater external validity but lower internal validity. They are more generalizable but less controlled.

10. What is the difference between experimental and mundane realism? (p. 265)
  • Experimental realism is the impact of an experimental treatment or setting on subjects; it occurs when experimental subjects are caught up in the experiment and are truly influenced by it. It is weak if subjects remain unaffected by the treatment, which is why researchers go to great lengths to create realistic conditions.
  • Mundane realism asks: is the experiment like the real world? For example, a researcher studying learning has subjects memorize four-letter nonsense syllables. Mundane realism would be stronger if he or she had subjects learn factual information used in real life instead of something invented for an experiment alone.

Neuman, page 266

Neuman, Chapter 10
1. What are the six types of things surveys often ask about? (p. 273)
  1. Behavior. How frequently do you brush your teeth? Did you vote in the last city election? When did you last visit a close relative?
  2. Attitudes/beliefs/opinions. What kind of job do you think the mayor is doing? Do you think other people say many negative things about you when you are not there? What is the biggest problem facing the nation these days?
  3. Characteristics. Are you married, never married, single, divorced, separated, or widowed? Do you belong to a union? What is your age?
  4. Expectations. Do you plan to buy a new car in the next 12 months? How much schooling do you think your child will get? Do you think the population in this town will grow, shrink, or stay the same?
  5. Self-classification. Do you consider yourself to be liberal, moderate, or conservative? Into which social class would you put your family? Would you say you are highly religious or not religious?
  6. Knowledge. Who was elected mayor in the last election? About what percentage of the people in this city are nonwhite? Is it legal to own a personal copy of Karl Marx's Communist Manifesto in this country?

2. Why are surveys called correlational, and how do they differ from experiments? (p. 276)
  • I think experiments are typically easier to interpret as demonstrating causal relationships, whereas the results of surveys do not usually demonstrate causal relationships. Data generated by surveys can only show correlation--I guess that's why they are called correlational.
  • In experiments, researchers place people in small groups, test one or two hypotheses with a few variables, control the timing of the treatment, note associations between the treatment and the dependent variable, and control for alternative explanations.
  • By contrast, survey researchers sample many respondents who answer the same questions, measure many variables, test multiple hypotheses, and infer temporal order from questions about past behavior, experiences, or characteristics.

3. What five changes occurred in the 1960s and 1970s that dramatically affected survey research? (p. 275)
  1. Computers. Computer technology that became available to social scientists by the 1960s made the sophisticated statistical analysis of large-scale survey data sets feasible for the first time. Today, the computer is an indispensable tool for analyzing data from most surveys.
  2. Organizations. New social research centers with an expertise and interest in quantitative research were established at universities. About 50 such centers were created in the years after 1960.
  3. Data storage. By the 1970s, data archives were created to store and permit the sharing of large-scale survey data for secondary analysis (discussed in Chapter 11). The collection, storage, and sharing of information on hundreds of variables for thousands of respondents expanded the use of surveys.
  4. Funding. For about a decade (late 1960s to late 1970s), the U.S. federal government expanded funds for social science research. Total federal spending for research and development in the social sciences increased nearly tenfold from 1960 to the mid-1970s before it declined in the 1980s.
  5. Methodology. By the 1970s, substantial research was being conducted on ways to improve the validity of surveys. The survey technique advanced as errors were identified and corrected. In addition, researchers created improved statistics for analyzing quantitative data and taught them to a new generation of researchers. Since the 1980s, new cognitive psychology theories have been applied to survey research.

4. Identify 5 of the 10 things to avoid in question writing.
Neuman, page 282

5. What topics are threatening to respondents, and how can a researcher ask about them? (p. 283)
Neuman, page 283
  • One technique is to establish a comfortable setting before asking the questions. Researchers state guarantees of anonymity and confidentiality explicitly and emphasize the need for honest answers. They ask sensitive questions following a "warm-up period" of other nonthreatening questions and after creating an atmosphere of trust and comfort.
  • A second technique is to use an "enhanced" phrasing of questions. For example, rather than asking, "Have you shoplifted?"--which has an accusatory tone and uses the word shoplift, which implies committing an illegal act--instead get at the same behavior by asking, "Have you ever taken anything from a store without paying for it?"
  • Randomized response technique (RRT): a specialized technique in survey research that is used for very sensitive topics. With it, a respondent randomly receives a question without the interviewer being aware of which question the respondent is answering.
  • Studies show that survey formats that permit greater respondent anonymity, such as a self-administered questionnaire or web-based survey, increase the likelihood of honest responses over formats that involve interacting with another person, such as a face-to-face or telephone interview.
  • Technological innovations such as computer-assisted self-administered interviews (CASAI) and computer-assisted personal interviewing (CAPI) also increase respondent comfort and honesty in answering questions on sensitive topics. In CASAI, respondents are "interviewed" with questions asked on a computer screen or over earphones. They answer by moving a computer mouse or typing on a keyboard.

6. What are advantages and disadvantages of open-ended versus closed-ended questions? (p. 287)
  • Open-ended question is a type of survey research question in which respondents are free to offer any answer they wish to the question.
  • Closed-ended question is a type of survey research question in which respondents must choose from a fixed set of answers.
Advantages of Closed
  • It is easier and quicker for respondents to answer.
  • The answers of different respondents are easier to compare.
  • Answers are easier to code and statistically analyze.
  • The response choices can clarify question meaning for respondents.
  • Respondents are more likely to answer about sensitive topics.
  • There are fewer irrelevant or confused answers to questions.
  • Less articulate or less literate respondents are not at a disadvantage.
  • Replication is easier.
Disadvantages of Closed
  • They can suggest ideas that the respondent would not otherwise have.
  • Respondents with no opinion or no knowledge can answer anyway.
  • Respondents can be frustrated because their desired answer is not a choice.
  • It is confusing if many (e.g., 20) response choices are offered.
  • Misinterpretation of a question can go unnoticed.
  • Distinctions between respondent answers may be blurred.
  • Clerical mistakes or marking the wrong response is possible.
  • They force respondents to give simplistic responses to complex issues.
  • They force people to make choices they would not make in the real world.

Advantages of Open
  • They permit an unlimited number of possible answers.
  • Respondents can answer in detail and can qualify and clarify responses.
  • Unanticipated findings can be discovered.
  • They permit adequate answers to complex issues.
  • They permit creativity, self-expression, and richness of detail.
  • They reveal a respondent's logic, thinking process, and frame of reference.
Disadvantages of Open
  • Different respondents give different degrees of detail in answers.
  • Responses may be irrelevant or buried in useless detail.
  • Comparisons and statistical analysis become very difficult.
  • Coding responses is difficult.
  • Articulate and highly literate respondents have an advantage.
  • Questions may be too general for respondents who lose direction.
  • Responses are written verbatim, which is difficult for interviewers.
  • A greater amount of respondent time, thought, and effort is necessary.
  • Respondents can be intimidated by questions.
  • Answers take up a lot of space in the questionnaire.

7. What are filtered, quasi-filtered, and standard-format questions? How do they relate to floaters? (p. 289)
  • Satisficing: when respondents pick a no-opinion response to avoid expending the effort of answering.
  • Standard-format question: a type of survey research question in which the answer categories do not include a "no opinion" or "don't know" option. (Advocates of the standard format prefer not to offer neutral choices, putting pressure on respondents to give a response.)
  • Quasi-filter question: a survey research question that includes the answer choice "no opinion," "unsure," or "don't know."
  • Full-filter question: a survey research question in which respondents are first asked whether they have an opinion or know about a topic; then only those with an opinion or knowledge are asked a specific question about the topic.
  • Floaters: survey research respondents without the knowledge or an opinion to answer a survey question but who answer it anyway, often giving inconsistent answers. They "float" from giving a response to not knowing.
  • Some believe it is best to offer the no-opinion options because when respondents are pressured for an answer and they don't have one, they will express opinions on fictitious issues, objects, and events.

8. How does ordinary conversation differ from a survey interview?

Neuman, p. 305

9. Under what conditions are mail questionnaires, telephone interviews, web surveys, or face-to-face interviews best?
Neuman, page 300

10. What are CATI and IVR, and when might they be useful?
  • Computer-assisted telephone interviewing (CATI): survey research telephone interviewing in which the interviewer sits before a computer screen and keyboard, reads questions from the screen, and enters answers directly into the computer.
  • Interactive Voice Response (IVR) A technique in telephone interviewing in which respondents hear computer-automated questions and indicate their responses by touch-tone phone entry or voice-activated software.

Week 7

Neuman, Chapter 13
1. What were the two major phases in the development of the Chicago school, and what are the journalistic and anthropological models? (p. 380/1)
  • In the first phase, from the 1910s to 1930s, the school used a variety of methods based on the case study or life history approach, including direct observation, informal interviews, and reading documents or official records.
  • Journalistic and anthropological models of research were combined in the first phase.
    • The journalistic model has a researcher get behind fronts, use informants, look for conflict, and expose what is "really happening"
    • In the anthropological model, a researcher attaches himself or herself to a small group for an extended period of time and reports on the members' views of the world.
  • In the second phase, from the 1940s to the 1960s, the Chicago school developed participant observation as a distinct technique. It expanded an
    anthropological model to groups and settings in the researcher's society.
  • Three principles emerged:
    1. Study people in their natural settings, or in situ.
    2. Study people by directly interacting with them.
    3. Gain an understanding of the social world and make theoretical statements based on the members' perspective.
  • Over time, the method moved from strict description to theoretical analyses based on involvement by the researcher in the field.

2. List 5 of the 10 things the "methodological pragmatist" field researcher does. (P. 383)
  • A field researcher does the following:
  1. Observes ordinary events and everyday activities as they happen in natural settings, in addition to any unusual occurrences
  2. Becomes directly involved with the people being studied and personally experiences the process of daily social life in the field setting
  3. Acquires an insider's point of view while maintaining the analytic perspective or distance of an outsider
  4. Uses a variety of techniques and social skills in a flexible manner as the situation demands
  5. Produces data in the form of extensive written notes, as well as diagrams, maps, or pictures to provide very detailed descriptions
  6. Sees events holistically (e.g., as a whole unit, not in pieces) and individually in their social context
  7. Understands and develops empathy for members in a field setting, and does not only record "cold" objective facts
  8. Notices both explicit (recognized, conscious, spoken) and tacit (less recognized, implicit, unspoken) aspects of culture
  9. Observes ongoing social processes without imposing an outside point of view
  10. Copes with high levels of personal stress, uncertainty, ethical dilemmas, and ambiguity

3. Why is it important for a field researcher to read the literature before beginning fieldwork? How does this relate to defocusing? (P. 385)
  • As with all social research, reading the scholarly literature helps you learn concepts, potential pitfalls, data collection methods, and techniques for resolving conflicts. In addition, you may find diaries, novels, journalistic accounts, and autobiographies useful for gaining familiarity and preparing emotionally for the field.
  • You should not get locked into any initial misconceptions, but be open to discovering new ideas. Finding the right questions to ask about the field takes time.
  • Defocusing A technique early in field research when the researcher removes his or her past assumptions and preconceptions to become more open to events in a field site.
  • Reading the literature can assist in opening your mind to greater possibilities and help you defocus, in terms of not focusing exclusively on the role of the researcher.

4. Identify the characteristics of a field site that make it a good one for a beginning field researcher. (p. 386)
  • Three factors are relevant when choosing a field research site: richness of data, unfamiliarity, and suitability.
  • Sites that present a web of social relations, a variety of activities, and diverse events over time provide richer, more interesting data.
  • Beginning field researchers should choose an unfamiliar setting because it is easier to see cultural events and social relations in a new site. Bogdan and Taylor (1975:28) noted, "We would recommend that researchers choose settings in which the subjects are strangers and in which they have no particular professional knowledge or expertise."
  • When "casing" possible field sites, you must consider such practical issues as your time and skills, serious conflicts among people in the site, personal characteristics and feelings, and access to parts of a site.

5. How does the "presentation of self" affect a field researcher's work? (p. 389/90)
  • People explicitly and implicitly present themselves to others. We display who we are--the type of person we are or would like to be--through our physical appearance, what we say, and how we act. The presentation of self sends a symbolic message.
  • A good field researcher is very conscious of the presentation of self in the field
  • A researcher must be aware that self-presentation will influence field relations to some degree.

6. What is the attitude of strangeness, and why is it important? (p. 390/1)
  • Attitude of strangeness A field research technique in which researchers mentally adjust to "see" events in the field as if for the first time or as an outsider.
  • Researchers adopt the attitude of strangeness in familiar surroundings because it is easy to be blinded by the familiar. In fact, "intimate acquaintance with one's own culture can create as much blindness as insight".
  • This confrontation of cultures, or culture shock, has two benefits: It makes it easier to see cultural elements and it facilitates self-discovery.

7. What are relevant considerations when choosing roles in the field, and how can the degree of researcher involvement vary? (p.392/3)
  • The assigned role and how a researcher performs in that role influence not only the ease and degree of access but also the success in developing
    social trust and securing cooperation in the field. Some existing roles provide access to all areas of the site, the ability to observe and interact with all members, the freedom to move around, and a way to balance the requirements of researcher and member. At other times, a researcher creates a new role or modifies an existing one. A researcher may adopt several different field roles over time in the field.
  • The field roles open to you are affected by ascriptive factors and physical appearance. You can change some aspects of appearance, such as dress or hairstyle. but not ascriptive features such as age, race, gender, and attractiveness
  • Because many roles are sex-typed, gender is an important consideration. Female researchers often have more difficulty when the setting is perceived as dangerous or seamy and where males are in control (e.g., police work, fire fighting, etc.). Male researchers have more problems in routine and administrative sites where females are in control (e.g., courts, large offices, etc.). They may not be accepted in female-dominated territory. In sites where both males and females are involved, both sexes may be able to enter and gain acceptance.
  • A role can help you gain acceptance into or be excluded from a clique, be treated as a person in authority or as an underling, and be a friend or an enemy of some members. You need to be aware that by adopting a role, you may be forming allies and enemies who can assist or limit research.
  • A field researcher should be aware of risks to his or her safety, assess the risks, and then decide what he or she is willing to do.

8. Identify three ways to ensure quality field research data.
  • watch and listen (p. 396)
  • taking notes (p. 398)
  • data quality (p. 402)
    • An interpretive approach suggests a different kind of data quality. Instead of assuming one single, objective truth, field researchers hold that members subjectively interpret experiences within a social context. What a member takes to be true results from social interaction and interpretation. Thus, high-quality field data capture such processes and provide an understanding of the member's viewpoint. A field researcher does not eliminate subjective views to get quality data; rather, quality data include his or her subjective responses and experiences. Quality field data are detailed descriptions from the researcher's immersion and authentic experiences in the social world of members.
    • Reliability: Are researcher observations about a member or field event internally and externally consistent? (p. 404)
      • Internal consistency refers to whether the data are plausible given all that is known about a person or event, eliminating common forms of human deception. In other words, do the pieces fit together into a coherent picture? For example, are a member's actions consistent over time and in different social contexts?
      • External consistency is achieved by verifying or cross-checking observations with other, divergent sources of data. In other words, does it all fit into the overall context? For example, can others verify what a researcher observed about a person? Does other evidence confirm the researcher's observations?
      • Reliability in field research also includes what is not said or done, but is expected or anticipated. Such omissions or null data can be significant but are difficult to detect.

  • Validity in field research comes from a researcher's analysis and data as accurate representations of the social world in the field. (p. 405)
    • Ecological validity is the degree to which the social world described by a researcher matches the world of members. It asks: Is the natural setting described relatively undisturbed by the researcher's presence or procedures? A study has ecological validity if events would have occurred without a researcher's presence.
    • Natural history is a detailed description of how the project was conducted. It is a full and candid disclosure of a researcher's actions, assumptions, and procedures for others to evaluate. A study is valid in terms of natural history if outsiders see
      and accept the field site and the researcher's actions.
    • Member validation occurs when a researcher takes field results back to members, who judge their adequacy. A study is member valid if members recognize and understand the researcher's description as reflecting their intimate social world. Member validation has limitations because conflicting perspectives in a setting produce disagreement with the researcher's observations, and members may object when results do not portray their group in a favorable light. In addition, members may not recognize the description because it is not from their perspective or does not fit with their purposes.
    • Competent insider performance is the ability of a nonmember to interact effectively as a member or pass as one. This includes the ability to tell and understand insider jokes. A valid study gives enough of a flavor of the social life in the field and sufficient detail so that an outsider can act as a member. Its limitation is that it is not possible to know the social rules for every situation. Also, an outsider might be able to pass simply because members are being polite and do not want to point out social mistakes.

9. Compare differences between a field research interview and a survey research interview, and between a field interview and a friendly conversation. (p. 407)
Typical Survey Interview
  1. It has a clear beginning and end.
  2. The same standard questions are asked of all respondents in the same sequence.
  3. The interviewer appears neutral at all times.
  4. The interviewer asks questions, and the respondent answers.
  5. It is almost always with one respondent alone.
  6. It has a professional tone and businesslike focus; diversions are ignored.
  7. Closed-ended questions are common, with infrequent probes.
  8. The interviewer alone controls the pace and direction of the interview.
  9. The social context in which the interview occurs is ignored and assumed to make little difference.
  10. The interviewer attempts to mold the communication pattern into a standard framework.
Typical Field Interview
  1. The beginning and end are not clear. The interview can be picked up later.
  2. The questions and the order in which they are asked are tailored to specific people and situations.
  3. The interviewer shows interest in responses, encourages elaboration.
  4. It is like a friendly conversational exchange, but with more interviewer questions.
  5. It can occur in a group setting or with others in the area, but varies.
  6. It is interspersed with jokes, asides, stories, diversions, and anecdotes, which are recorded.
  7. Open-ended questions are common, and probes are frequent.
  8. The interviewer and member jointly control the pace and direction of the interview.
  9. The social context of the interview is noted and seen as important for interpreting the meaning of responses.
  10. The interviewer adjusts to the member's norms and language usage.
You are familiar with a friendly conversation, which has its own informal rules and the following elements: (p. 407/8)
  • a greeting ("Hi, it's good to see you again");
  • the absence of an explicit goal or purpose (we don't say, "Let's now discuss what we did last weekend");
  • avoidance of explicit repetition (we don't say, "Could you clarify what you said about ... ");
  • question asking ("Did you see the race yesterday?");
  • expressions of interest ("Really? I wish I could have been there!");
  • expressions of ignorance ("No, I missed it. What happened?");
  • turn taking, so the encounter is balanced (one person does not always ask questions and the other only answers);
  • abbreviations ("I missed the Derby, but I'm going to the Indy," not "I missed the Kentucky Derby horse race but I will go to the Indianapolis 500 automotive race");
  • a pause or brief silence when neither person talks is acceptable;
  • a closing (we don't say, "Let's end this conversation"; instead, we give a verbal indicator before physically leaving: "I've got to get back to work now. See ya tomorrow.").

The field interview differs from a friendly conversation. (p. 408)
  • It has an explicit purpose--to learn about the informant and setting. A researcher includes explanations or requests that diverge from friendly conversation.
  • The field interview is less balanced. A higher proportion of questions come from the researcher, who expresses more ignorance and interest.
  • Also, it includes repetition, and a researcher asks the member to elaborate on unclear abbreviations.
  • Most importantly, the interviewer listens. He or she does not interrupt frequently, repeatedly finish the respondent's sentences, offer associations (e.g., "Oh, that is just like X"), insist on finishing asking a question that the respondent has begun to answer, fight for control over the interview process, or stay fixed with a line of thought and ignore new leads.

10. What are the different types or levels of field notes, and what purpose does each serve? (p. 401)
  • Jotted notes: Field notes inconspicuously written while in the field site on whatever is convenient in order to "jog the memory" later.
  • Direct observation notes: Field research notes that attempt to include all details and specifics of what the researcher heard or saw in a field site, written to permit multiple interpretations later.
  • Separation of inference: A field researcher writes direct observation notes in a way that keeps what was observed separate from what was inferred or believed to have occurred.
  • Analytic memos: Notes a qualitative researcher takes while developing more abstract ideas, themes, or hypotheses from an examination of details in the data.
  • Personal journal: Personal feelings and emotional reactions become part of the data and color what a researcher sees or hears in the field, so the researcher keeps a section of notes that is like a personal diary. Personal notes provide a way to cope with stress; they are a source of data about personal reactions; and they help to evaluate direct observation or inference notes when those notes are later reread. For example, if you were in a good mood during observations, it might color what you observed.

Neuman, Chapter 14
1. What are some of the unique features of historical-comparative research?
  • The evidence for H-C research is usually limited and indirect. Direct observation or involvement by a researcher is often impossible. An H-C researcher reconstructs what occurred from the evidence. The researcher is limited to what has not been destroyed and what leaves a trace, record, or other evidence behind.
  • Historical-comparative researchers interpret the evidence. The researcher becomes immersed in and absorbs details about a context.
  • A researcher's reconstruction of the past or another culture is easily distorted. Compared to the people being studied, a researcher is usually more aware of events occurring prior to the time studied, events occurring in places other than the location studied, and events that occurred after the period studied. This awareness gives the researcher a greater sense of coherence than was experienced by those living in the past or in an isolated social setting. A researcher's broader awareness can create the illusion that things happened because they had to, or that they fit together neatly.
  • A researcher cannot easily see through the eyes of those being studied. Knowledge of the present and changes over time can distort how events, people, laws, or even physical objects are perceived.
  • Historical-comparative researchers recognize the capacity of people to learn, make decisions, and act on what they learn to modify the course of events. People's capacity to learn introduces indeterminacy into historical-comparative explanations.
  • An H-C researcher wants to find out whether the people involved saw various courses of action as plausible. Thus, the worldview and knowledge of those people is a conditioning factor, shaping what the people being studied saw as possible or impossible. The researcher asks whether people were conscious of certain things.
  • Historical-comparative research takes a contingent view of causality. An H-C researcher often uses combinational explanations. They are analogous to a chemical reaction in which several ingredients (chemicals, oxygen) are added together under specified conditions (temperature, pressure) to produce an outcome (explosion).
  • Historical-comparative research focuses on whole cases versus separate variables across cases. A researcher approaches the whole as if it has multiple layers. He or she grasps surface appearances as well as reveals the general, hidden structures, unseen mechanisms, or causal processes.
  • A historical-comparative researcher integrates the micro (small-scale, face-to-face interaction) and macro (large-scale social structures) levels. Instead of describing micro-level or macro-level processes alone, the researcher describes both levels or layers of reality and links them to each other.
  • H-C research shifts between a specific context and a general comparison. A researcher examines specific contexts, notes similarities and differences, then generalizes. He or she then looks again at the specific contexts using the generalizations.

2. What are the similarities between field research and H-C research? (p. 424/5)
  • Both H-C research and field research recognize that the researcher's point of view is an unavoidable part of research. Both involve interpretation, which introduces the interpreter's location in time, place, and worldviews. Historical-comparative research does not try to produce a single, unequivocal set of objective facts. Rather, it is a confrontation of old with new or of different worldviews. It recognizes that a researcher's reading of historical or comparative evidence is influenced by an awareness of the past and by living in the present.
  • Both field and H-C research examine a great diversity of data. In both, the researcher becomes immersed in data to gain an empathic understanding of events and people. Both capture subjective feelings and note how everyday, ordinary activities signify important social meaning.
  • Both field and H-C researchers often use grounded theory. Theory usually emerges during the process of data collection. Both examine the data without beginning with fixed hypotheses. Instead, they develop and modify concepts and theory through a dialogue with the data, then apply theory to reorganize the evidence. Thus, data collection and theory building interact.
  • Both field and H-C research involve a type of translation. The researcher's meaning system usually differs from that of the people he or she
    studies, but he or she tries to penetrate and understand their point of view. Once the life, language, and perspective of the people being studied have been mastered, the researcher "translates" it for others who read his or her report.
  • Both field and H-C researchers focus on action, process, and sequence and see time and process as essential. Both are sensitive to an ever-present tension between agency (fluid social action and changing social reality) and structure (the fixed regularities and patterns that shape social actions and perceptions). Both see social reality simultaneously as something created and changed by people and as imposing a restriction on human choice.
  • Generalization and theory are limited in field and H-C research. Historical and cross-cultural knowledge is incomplete and provisional, based on selective facts and limited questions. Neither deduces propositions or tests hypotheses in order to uncover fixed laws. Likewise, replication is unrealistic because each researcher has a unique perspective and assembles a unique body of evidence. Instead, researchers offer plausible accounts and limited generalizations.

3. What is the Annales school, and what are three characteristics or terms in its orientation toward studying the past? (p. 427)
  • The Annales school is a research method associated with a group of French historians and named after the scholarly journal Annales: Economies, Societes, Civilisations, founded in 1929. The school's orientation can be summarized by four interrelated characteristics.
  1. One characteristic is the school's synthetic, totalizing, holistic, or interdisciplinary approach. Annales researchers combine geography, ecology, economics, and demography with cultural factors to give a total picture of the past. They blend together the diverse conditions of material life and collective beliefs or culture into a comprehensive reconstruction of the past civilization.
  2. A second characteristic is illustrated by a French term of the school, the mentalities of an era. This term is not directly translatable into English. It
    means a distinctive worldview, perspective, or set of assumptions about life--the way that thinking was organized, or the overall pattern of conscious and unconscious cognition, belief, and values that prevailed in an era. Thus, researchers try to discover the overall arrangement of thought in a historical period that shaped subjective experience about fundamental aspects of reality: the nature of time, the relationship of humans to the physical environment, how truth is created, and the like.
  3. The Annales approach mixes concrete historical specificity and abstract theory. Theory takes the form of models or deep underlying structures, which are causal or organizing principles that account for everyday events. Annales historians look for both the deep-running currents that shape the surface events and individual actions.
  4. A last characteristic is an interest in long-term structures or patterns. In contrast to traditional historians, who focus on particular individuals or events over short time spans of several years to a few decades, Annales historians examine long-term changes, over periods of a century or more, in the fundamental way that social life is organized. They use the term longue duree, meaning a long duration or a historical era in geographic space (e.g., feudalism in western Europe, or the fifteenth to eighteenth centuries in the Mediterranean region).

4. What is the difference between a critical indicator and supporting evidence? (p. 430)
  • Critical indicator: A clear, unambiguous measure or indicator of a concept in a specific cultural or historical setting.
  • Supporting evidence: Evidence for less central parts of a model that builds the overall background or context. It is less abundant or weaker, and lacks a clear and unambiguous theoretical interpretation.
  • In short: a critical indicator is unambiguous and clear; supporting evidence is weaker.

5. What questions are asked by a researcher using external criticism? (p. 436)

6. What are the limitations of using secondary sources? (p. 433/4)
  • The limitations of secondary historical evidence include problems of inaccurate historical accounts and a lack of studies in areas of interest. Such sources cannot be used to test hypotheses. Post facto (after-the-fact) explanations cannot meet positivist criteria of falsifiability, because few statistical controls can be used and replication is impossible. Yet, historical research by others plays an important role in developing general explanations, among its other uses. For example, such research substantiates the emergence and evolution of tendencies over time.
  • One problem is in reading the works of historians. Historians do not present theory-free, objective "facts". They implicitly frame raw data, categorize
    information, and shape evidence using concepts. The historian's concepts are often a mixture drawn from journalism, the language of historical actors, ideologies, philosophy, everyday language in the present, and social science. They may be vague, applied inconsistently, and neither mutually exclusive nor exhaustive.
  • A second problem is that the historian's selection procedure is not transparent. Historians select some information from all possible evidence. Yet, the H-C researcher does not know how this was done. Without knowing the selection process, a historical-comparative researcher must rely on the historian's judgments, which can contain biases.
  • A third issue is in the organization of the evidence. Historians organize evidence as they write narrative history. This compounds problems of undefined concepts and the selection of evidence.

7. What was Galton's problem and why is it important in comparative research? (p. 440)

8. What strengths or advantages are there to using a comparative method in social research?

9. In what ways is cross-national survey research different from survey research within one's own culture?

10. What is the importance of equivalence in H-C research, and what are the four types of equivalence?

Module 3 - Understanding data collection and analysis


Week 8/9 Collecting and analyzing quantitative data
  • Neuman, Chapter 12

Week 10/11 Collecting and analyzing qualitative data
  • Neuman, Chapter 15

Recommended Readings

  • Rowntree, D. (1981). Statistics without tears: A primer for non-mathematicians. New York: Penguin. (Module 3, unit 1.)
  • Faculty of Education, University of Alberta. Understanding statistics. Instructional CD. (Module 3, unit 1.)

Week 8 Collecting and analyzing quantitative data

Chapter 12
1. What is a codebook and how is it used in research? (p. 344)
  • A document that describes the procedure for coding variables and their location in a format that computers can use.
  • When you code data, it is very important to create a well-organized, detailed codebook and make multiple copies of it. If you do not write down the
    details of the coding procedure, or if you misplace the codebook, you have lost the key to the data and may have to recode the data again.
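As an illustration of the idea, a codebook can be sketched as a small data structure that records, for each variable, where it lives in the data file and what its numeric codes mean (all variable names and codes below are invented):

```python
# A minimal codebook sketch (hypothetical survey variables): each entry
# records the variable's column position, its meaning, and its valid
# codes, so anyone (or a computer program) can decode the raw data.
codebook = {
    "sex":  {"column": 1, "label": "Respondent sex",
             "codes": {1: "Male", 2: "Female", 9: "Missing"}},
    "educ": {"column": 2, "label": "Educational attainment",
             "codes": {0: "Less than H.S.", 1: "Some H.S.", 2: "H.S. degree",
                       3: "Some college", 4: "College degree", 5: "Post college"}},
}

def decode(variable, value):
    """Translate a raw numeric code back into its category label."""
    return codebook[variable]["codes"].get(value, "INVALID CODE")

print(decode("sex", 2))    # Female
print(decode("educ", 7))   # INVALID CODE
```

Losing this mapping is exactly the problem Neuman warns about: without it, the raw numbers in the data file cannot be interpreted.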

2. How do researchers clean data and check their coding? (p. 346)
  • After very careful coding, the researcher checks the accuracy of coding, or "cleans" the data. He or she may code a 10 to 15 percent random sample of the data a second time. If no coding errors appear, the researcher proceeds; if he or she finds errors, the researcher rechecks all coding.
  • Researchers verify coding after the data are in a computer in two ways.
    • Possible code cleaning (or wild code checking) involves checking the categories of all variables for impossible codes. For example, respondent sex is coded 1 = Male, 2 = Female. Finding a 4 for a case in the field for the sex variable indicates a coding error.
    • Contingency cleaning (or consistency checking) involves cross-classifying two variables and looking for logically impossible combinations.
      For example, education is cross-classified by occupation. If a respondent is recorded as never having passed the eighth grade and also is recorded as being a legitimate medical doctor, the researcher checks for a coding error.
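Both checks can be sketched in a few lines of Python; the records, valid-code set, and the education cutoff for the physician example are all invented for illustration:

```python
# Toy records with one wild code and one impossible combination.
records = [
    {"id": 1, "sex": 1, "educ": 2, "occupation": "clerk"},
    {"id": 2, "sex": 4, "educ": 3, "occupation": "teacher"},    # wild code: sex=4
    {"id": 3, "sex": 2, "educ": 1, "occupation": "physician"},  # impossible combo
]

VALID_SEX = {1, 2}  # 1 = Male, 2 = Female

# Possible code (wild code) cleaning: flag values outside the codebook.
wild = [r["id"] for r in records if r["sex"] not in VALID_SEX]

# Contingency (consistency) cleaning: cross-classify two variables and
# flag logically impossible combinations, e.g. a physician who never
# got past "Some H.S." (educ < 2 here, an illustrative cutoff).
inconsistent = [r["id"] for r in records
                if r["occupation"] == "physician" and r["educ"] < 2]

print(wild)          # [2]
print(inconsistent)  # [3]
```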

3. Describe how researchers use the optical scan sheets. (p. 346/7)
  • Optical scan sheets work like the answer sheets used to record high school attendance or multiple-choice exams: values are marked by darkening the appropriate spots, and an optical scanner reads the sheets directly into a computer.

4. In what ways can a researcher display frequency distribution information? (p. 347/8)
  • Frequency distribution is a table that shows the distribution of cases into the categories of one variable, that is, the number or percent of cases in each category.
  • There are many ways to display frequency distribution information; some examples are a raw count frequency distribution, a percentage frequency distribution, a bar chart, a grouped data frequency distribution, or a frequency polygon.
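A raw-count and percentage frequency distribution, for instance, can be computed with Python's standard library (the response data are invented):

```python
from collections import Counter

# Raw-count and percentage frequency distributions for one variable.
responses = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]

counts = Counter(responses)                       # raw counts per category
total = sum(counts.values())
percents = {cat: round(100 * n / total, 1)        # percent per category
            for cat, n in counts.items()}

print(counts["agree"])    # 3
print(percents["agree"])  # 50.0
```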
  • In nominal measurement the numerical values just "name" the attribute uniquely. No ordering of the cases is implied. For example, jersey numbers in basketball are measures at the nominal level. A player with number 30 is not more of anything than a player with number 15, and is certainly not twice whatever number 15 is.
  • In ordinal measurement the attributes can be rank-ordered. Here, distances between attributes do not have any meaning. For example, on a survey you might code Educational Attainment as 0=less than H.S.; 1=some H.S.; 2=H.S. degree; 3=some college; 4=college degree; 5=post college. In this measure, higher numbers mean more education. But is distance from 0 to 1 same as 3 to 4? Of course not. The interval between values is not interpretable in an ordinal measure.
  • In interval measurement the distance between attributes does have meaning. For example, when we measure temperature (in Fahrenheit), the distance from 30-40 is the same as the distance from 70-80. The interval between values is interpretable. Because of this, it makes sense to compute an average of an interval variable, whereas it doesn't make sense to do so for ordinal scales. But note that in interval measurement ratios don't make any sense - 80 degrees is not twice as hot as 40 degrees (although the attribute value is twice as large).
  • Finally, in ratio measurement there is always an absolute zero that is meaningful. This means that you can construct a meaningful fraction (or ratio) with a ratio variable. Weight is a ratio variable. In applied social research most "count" variables are ratio, for example, the number of clients in past six months. Why? Because you can have zero clients and because it is meaningful to say that "...we had twice as many clients in the past six months as we did in the previous six months."
  • It's important to recognize that there is a hierarchy implied in the level of measurement idea. At lower levels of measurement, assumptions tend to be less restrictive and data analyses tend to be less sensitive. At each level up the hierarchy, the current level includes all of the qualities of the one below it and adds something new. In general, it is desirable to have a higher level of measurement (e.g., interval or ratio) rather than a lower one (nominal or ordinal).

5. Describe the differences between mean, median, and mode. (p. 349/50)
  • Mean: can only be used with interval or ratio level data. The mean is strongly affected by changes in extreme values (very large or very small).
    • In general, the median is best for skewed distributions, although the mean is used in most other statistics.
  • Median: middle point. It is also the 50th percentile, or the point at which half the cases are above it and half below it. It can be used with ordinal-, interval- or ratio-level data (but not nominal level). Note that the median does not change easily.
  • Mode: can be used with nominal, ordinal, interval, or ratio data. It is simply the most common or frequently occurring number. There will always be at least one case with a score that is equal to the mode.

  • If the frequency distribution forms a normal distribution or bell-shaped curve, the three measures of central tendency equal each other.
Neuman, page 350
  • If most cases have lower scores with a few extreme high scores, the mean will be the highest, the median in the middle, and the mode the lowest.
Neuman, page 350
  • If most cases have higher scores with a few extreme low scores, the mean will be the lowest, the median in the middle, and the mode the highest.

Neuman, page 350
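The skew pattern described above can be verified with Python's `statistics` module on invented scores with a few extreme high values: the mode comes out lowest, the median in the middle, and the mean highest:

```python
import statistics

# A distribution skewed by one extreme high score: the mean is pulled
# upward, the median sits in the middle, and the mode stays lowest,
# matching the "few extreme high scores" case above. (Toy data.)
scores = [2, 2, 2, 3, 4, 5, 30]

mode = statistics.mode(scores)      # most frequent value: 2
median = statistics.median(scores)  # middle value: 3
mean = statistics.mean(scores)      # 48/7, about 6.86

print(mode, median, round(mean, 2))
assert mode < median < mean
```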
6. What three features of a relationship can be seen from a scattergram? (p. 355/6)
  • A scattergram is a graph on which a researcher plots each case or observation, where each axis represents the value of one variable. It is used for variables measured at the interval or ratio level, rarely for ordinal variables, and never if either variable is nominal.
  • Form: Relationships can take three forms: independence, linear, and curvilinear.
    • Independence or no relationship is the easiest to see. It looks like random scatter with no pattern, or a straight line that is exactly parallel to the horizontal or vertical axis.
    • A linear relationship means that a straight line can be visualized in the middle of a maze of cases running from one corner to another.
    • A curvilinear relationship means that the center of a maze of cases would form a U curve, right side up or upside down, or an S curve.
  • Direction: Linear relationships can have a positive or negative direction.
  • Precision: Precision is the amount of spread in the points on the graph.
    • A high level of precision occurs when the points hug the line that summarizes the relationship.
    • A low level occurs when the points are widely spread around the line.
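Precision can also be summarized numerically: Pearson's r (computed by hand below on invented data) is near ±1 when the points hug the summarizing line and near 0 when they are widely scattered:

```python
# Pearson's r as a numeric summary of scattergram precision.
def pearson_r(xs, ys):
    """Correlation coefficient: covariation scaled to the range -1..1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
tight = [2.1, 3.9, 6.0, 8.1, 9.9]   # hugs a line: high precision
loose = [5, 1, 9, 2, 7]             # widely spread: low precision

print(round(pearson_r(x, tight), 3))  # close to 1
print(round(pearson_r(x, loose), 3))  # much closer to 0
```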

7. What is covariation and how is it used? (p. 353)
  • Covariation is the idea that two variables vary together, such that knowing the values on one variable provides information about values found on another.
  • For example, people with higher values on the income variable are likely to have higher values on the life expectancy variable. Likewise, those with lower incomes have lower life expectancy. This is usually stated in a shorthand way by saying that income and life expectancy are related to each other, or covary. We could also say that knowing one's income tells us one's probable life expectancy, or that life expectancy depends on income.
  • Most researchers state hypotheses in terms of a causal relationship or expected covariation; if they use the null hypothesis, the hypothesis is that the variables are independent. Covariation is used in formal hypothesis testing and is frequently found in inferential statistics.
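As a sketch of the income/life-expectancy example, a positive sample covariance confirms that higher values on one variable go with higher values on the other (all numbers invented):

```python
# Covariation sketch: invented income / life-expectancy pairs.
income = [20, 35, 50, 80, 120]   # thousands (toy data)
life_exp = [72, 75, 77, 80, 83]  # years (toy data)

n = len(income)
mi = sum(income) / n
ml = sum(life_exp) / n
# Sample covariance: average product of paired deviations from the means.
cov = sum((x - mi) * (y - ml) for x, y in zip(income, life_exp)) / (n - 1)

print(cov > 0)  # True: income and life expectancy covary positively
```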

8. When can a researcher generalize from a scattergram to a percentaged table to find a relationship among variables? (p. 361)
  • The circle-the-largest-cell rule works, with one important caveat: the categories in the percentaged table must be ordinal or interval and in the same order as in a scattergram. In scattergrams the lowest variable categories begin at the bottom left. If the categories in a table are not ordered the same way, the rule does not work.
Neuman, page 361
  • If there is no relationship in a table, the cell percentages look approximately equal across rows or columns.
  • A linear relationship looks like larger percentages in the diagonal cells.
  • If there is a curvilinear relationship, the largest percentages form a pattern across cells. For example, the largest cells might be the upper right, the bottom middle, and the upper left.

9. Discuss the concept of control as it is used in trivariate analysis. (p. 365-9)
  • In order to meet all the conditions needed for causality, researchers want to "control for" or see whether an alternative explanation explains away a causal relationship. If an alternative explanation explains a relationship, then the bivariate relationship is spurious (false). Alternative explanations are operationalized as third variables, which are called control variables because they control for alternative explanations.
  • One way to take such third variables into consideration and see whether they influence the trivariate relationship is to statistically introduce
    control variables using trivariate or three-variable tables.
  • Elaboration paradigm A system for describing patterns evident among tables when the bivariate contingency table is compared with partials after the control variable has been added.
  • Different patterns can emerge from the elaboration paradigm
    • Replication pattern A pattern in the elaboration paradigm in which the partials show the same relationship as in a bivariate contingency table of the independent and dependent variable alone.
    • Specification pattern A pattern in the elaboration paradigm in which the bivariate contingency table shows a relationship. One of the partial tables shows the relationship, but other tables do not.
    • Interpretation pattern A pattern in the elaboration paradigm in which the bivariate contingency table shows a relationship, but the partials show no relationship and the control variable is intervening in the causal explanation.
    • Explanation pattern A pattern in the elaboration paradigm in which the bivariate contingency table shows a relationship, but the partials show no relationship and the control variable occurs prior to the independent variable.
    • Suppressor variable pattern A pattern in the elaboration paradigm in which no relationship appears in a bivariate contingency table, but the partials show a relationship between the variables. The control variable is a suppressor variable because it suppressed the true relationship. The true relationship appears in the partials.
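The explanation (spuriousness) pattern can be sketched with invented counts for the classic ice-cream-and-drownings example: the bivariate table shows a relationship, but the partials, controlling for season, show none:

```python
# counts[(season, ice_cream_sales, drownings)] = number of cases (invented)
counts = {
    ("summer", "hi", "hi"): 64, ("summer", "hi", "lo"): 16,
    ("summer", "lo", "hi"): 16, ("summer", "lo", "lo"): 4,
    ("winter", "hi", "hi"): 4,  ("winter", "hi", "lo"): 16,
    ("winter", "lo", "hi"): 16, ("winter", "lo", "lo"): 64,
}

def pct_drown_hi(season=None, ice=None):
    """% of cases with high drownings, optionally within a partial table."""
    match = [((s, i, d), n) for (s, i, d), n in counts.items()
             if season in (None, s) and ice in (None, i)]
    total = sum(n for _, n in match)
    hi = sum(n for (s, i, d), n in match if d == "hi")
    return round(100 * hi / total, 1)

# Bivariate table: high ice-cream cases apparently drown more often.
print(pct_drown_hi(ice="hi"), pct_drown_hi(ice="lo"))              # 68.0 32.0
# Partials: within each season the difference disappears (spurious).
print(pct_drown_hi("summer", "hi"), pct_drown_hi("summer", "lo"))  # 80.0 80.0
print(pct_drown_hi("winter", "hi"), pct_drown_hi("winter", "lo"))  # 20.0 20.0
```

Here the control variable (season) occurs prior to the independent variable and accounts for the bivariate relationship, which is the explanation pattern above.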

10. What does it mean to say "statistically significant at the 0.001 level," and what type of error is more likely: Type I or Type II? (p. 307-)
  • Statistical significance means that results are not likely to be due to chance factors.
  • Statistical significance tells only what is likely. It cannot prove anything with absolute certainty. It states that particular outcomes are more or less
    probable. Statistical significance is not the same as practical, substantive, or theoretical significance. Results can be statistically significant but theoretically meaningless or trivial.
  • If a researcher says that results are significant at the 0.001 level, this means the following:
    • Results like these are due to chance factors only 1 in 1000 times.
    • There is a 99.9% chance that the sample results are not due to chance factors alone, but reflect the population accurately.
    • The odds of such results based on chance alone are .001, or 0.1%.
    • One can be 99.9% confident that the results are due to a real relationship in the population, not chance factors.
    • Because such a high standard attributes results to chance unless they are quite rare (0.1%), the researcher is more likely to make the mistake of saying the results are due to chance when in fact they are not. He or she might falsely accept the null hypothesis when a relationship actually exists (a Type II error).
  • Type I error occurs when the researcher says that a relationship exists when in fact none exists. It is the logical error of falsely rejecting the null hypothesis.
  • Type II error occurs when a researcher says that a relationship does not exist, when in fact it does. It is the logical error of falsely accepting the null hypothesis.
  • For example, the researcher might use the .0001 level. He or she attributes the results to chance unless they are so rare that they would occur by chance only 1 in 10,000 times. Such a high standard means that the researcher is most likely to err by saying results are due to chance when in fact they are not. He or she may falsely accept the null hypothesis when there is a causal relationship (a Type II error).
  • By contrast, a risk-taking researcher sets a low level of significance, such as .10. His or her results indicate a relationship would occur by chance 1 in 10 times. He or she is likely to err by saying that a causal relationship exists when in fact random factors (e.g., random sampling error) actually cause the results. The researcher is likely to falsely reject the null hypothesis (Type I error).
  • In sum, the .05 level is a compromise between Type I and Type II errors.
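The trade-off can be sketched with a small simulation (the test statistic, cutoffs, and trial counts are illustrative): when the null hypothesis is true, roughly alpha of all tests come out falsely "significant" (Type I errors), so a stricter alpha produces fewer of them, at the cost of more missed real relationships (Type II errors):

```python
import random
import statistics

# Simulate many studies in which the null hypothesis is TRUE (group mean
# really is 0). Any "significant" result is therefore a Type I error.
random.seed(42)

def z_stat(n=50):
    """z statistic for a sample mean against a true population mean of 0."""
    sample = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(sample) / (1 / n ** 0.5)

trials = 2000
zs = [abs(z_stat()) for _ in range(trials)]
type1_at_05 = sum(z > 1.96 for z in zs) / trials   # two-sided .05 cutoff
type1_at_001 = sum(z > 3.29 for z in zs) / trials  # two-sided .001 cutoff

print(round(type1_at_05, 3))   # near 0.05
print(round(type1_at_001, 3))  # near 0.001
```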

Week 10 Collecting and analyzing qualitative data

Neuman - Chapter 15

1. Identify four differences between quantitative and qualitative data analysis. (p. 458/459)
  1. Quantitative data analysis is more standardized; hypothesis testing and statistical methods are similar across different social research projects and across the natural and social sciences. Qualitative data analysis, by contrast, is less standardized. The wide variety in qualitative research is matched by the many approaches to data analysis.
  2. Quantitative researchers do not begin data analysis until they have collected all of the data and condensed them into numbers. They then manipulate the numbers in order to see patterns or relationships. Qualitative researchers look for patterns or relationships, early in a research project, while they are still collecting data. The results of early data analysis guide subsequent data collection. Thus, analysis is less a distinct final stage of research than a dimension of research that stretches across all stages.
  3. Another difference is the relation to social theory. Quantitative researchers manipulate numbers that represent empirical facts in order to test an abstract hypothesis with variable constructs. By contrast, qualitative researchers create new concepts and theory by blending together empirical evidence and abstract concepts. Instead of testing a hypothesis, a qualitative analyst may illustrate or color in evidence showing that a theory, generalization, or interpretation is plausible.
  4. In quantitative analysis, data analysis is clothed in statistics, hypotheses, and variables. Quantitative researchers assume that social life can be measured by using numbers, then manipulate the numbers with statistics to reveal features of social life. Qualitative analysis does not draw on a large, well-established body of formal knowledge from mathematics and statistics. The data are relatively imprecise, diffuse, and context-based, and can have more than one meaning. This is not seen as a disadvantage.

2. How does the process of conceptualization differ in qualitative and quantitative research? (p. 460)
  • Quantitative researchers conceptualize variables and refine concepts as part of the process of measuring variables.
  • Qualitative researchers form new concepts or refine concepts that are grounded in the data. Concept formation is an integral part of data analysis and begins during data collection. Thus, conceptualization is one way that a qualitative researcher organizes and makes sense of data.

3. How does data coding differ in quantitative and qualitative research, and what are the three kinds of coding used by a qualitative researcher? (p. 460)
  • Coding Video on YouTube: I've got some interview data! What next?
  • When a quantitative researcher codes data, he or she arranges measures of variables into a machine-readable form for statistical analysis--a clerical data management task.
  • A qualitative researcher organizes the raw data into conceptual categories and creates themes or concepts. Coding is guided by the research question and leads to new questions. It frees a researcher from entanglement in the details of the raw data and encourages higher-level thinking about them. It also moves him or her toward theory and generalizations.
  1. Open coding A first coding of qualitative data in which a researcher examines the data to condense them into preliminary analytic categories or codes. (p. 461)
  2. Axial coding A second stage of coding of qualitative data in which a researcher organizes the codes, links them, and discovers key analytic categories. During axial coding, ask about causes and consequences, conditions and interactions, strategies and processes, and look for categories or concepts that cluster together. (p. 462)
      • You should ask questions such as:
        • Can I divide existing concepts into subdimensions or subcategories?
        • Can I combine several closely related concepts into one more general one?
        • Can I organize categories into a sequence (i.e., A, then B, then C), or by their physical location (i.e., where they occur), or their relationship to a major topic of interest?
  3. Selective coding A last stage in coding qualitative data in which a researcher examines previous codes to identify and select data that will support the conceptual coding categories that were developed. (p. 464) Selective coding is the process of choosing one category to be the core category and relating all other categories to it. The essential idea is to develop a single storyline around which everything else is draped. There is a belief that such a core concept always exists.
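As a toy sketch of open coding, raw field-note segments can be tagged with preliminary analytic codes and tallied to see which themes are emerging (segments and codes are invented):

```python
from collections import Counter

# Open-coding sketch: each raw segment is condensed into a preliminary
# analytic code, and code frequencies give a first view of themes.
segments = [
    ("The tutor answered my email within an hour", "tutor_support"),
    ("I felt alone working through the unit", "isolation"),
    ("The forum thread helped me finish the essay", "peer_support"),
    ("Nobody replied to my forum post", "isolation"),
]

themes = Counter(code for _, code in segments)
print(themes["isolation"])  # 2
```

Axial coding would then link related codes (e.g., grouping "isolation" and "peer_support" under an interaction category), and selective coding would pick one core category to organize the rest.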

4. What is the purpose of analytic memo writing in qualitative data analysis? (p. 464/5)
  • Analytic memos are notes that a qualitative researcher takes while developing more abstract ideas, themes, or hypotheses from an examination of details in the data.
  • Each coded theme or concept forms the basis of a separate memo, and the memo contains a discussion of the concept or theme. Rough theoretical ideas form the beginning of analytic memos.
  • The analytic memo forges a link between the concrete data or raw evidence and more abstract, theoretical thinking. It contains your reflections on and thinking about the data and coding. Add to the memo and use it as you pass through the data with each type of coding. The memos form the basis for analyzing data in the research report. In fact, rewritten sections from good-quality analytic memos can become sections of the final report.

5. Describe successive approximation. (p. 469)
  • This method involves repeated iterations, or cycling through steps, moving toward a final analysis. Over several iterations, a researcher moves from vague ideas and concrete details in the data toward a comprehensive analysis with generalizations. This is similar to the coding discussed earlier.
  • A researcher begins with research questions and a framework of assumptions and concepts. He or she then probes into the data, asking questions of the evidence to see how well the concepts fit the evidence and reveal features of the data. He or she also creates new concepts by abstracting from the evidence and adjusts concepts to fit the evidence better. The researcher then collects additional evidence to address unresolved issues that appeared in the first stage, and repeats the process. At each stage, the evidence and the theory shape each other. This is called successive approximation because the modified concepts and the model approximate the full evidence and are modified over and over to become successively more accurate.
  • Each pass through the evidence is provisional or incomplete. The concepts are abstract, but they are rooted in the concrete evidence and reflect the context. As the analysis moves toward generalizations that are subject to conditions and contingencies, the researcher refines generalizations and linkages to reflect the evidence better.
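The cycle described above can be caricatured as a refinement loop. This is only an analogy: the `fits` and `refine` callables stand in for the researcher's judgment, and the toy numbers have no substantive meaning; none of it comes from Neuman's text.

```python
def successive_approximation(evidence, concepts, fits, refine, max_passes=10):
    """Cycle through the evidence, adjusting concepts until they fit.

    Each pass is provisional: concepts that do not yet fit the evidence
    are refined, and the loop repeats, so the model approximates the
    full evidence more closely on every iteration.
    """
    for _ in range(max_passes):
        if all(fits(c, evidence) for c in concepts):
            break  # the concepts now approximate the full evidence
        concepts = [refine(c, evidence) for c in concepts]
    return concepts

# Toy usage: "refining" a concept here just nudges it toward the evidence.
result = successive_approximation(
    evidence=5,
    concepts=[1, 2],
    fits=lambda c, e: c >= e,
    refine=lambda c, e: c + 1,
)
print(result)
```

The useful part of the analogy is the stopping shape: no single pass is final, and each pass reshapes the concepts against the same body of evidence.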

6. What are the empty boxes in the illustrative method and how are they used? (p. 469)
  • The illustrative method is a method of qualitative data analysis in which a researcher takes theoretical concepts and treats them as empty boxes to be filled with specific empirical examples and descriptions.
  • With the illustrative method, a researcher applies theory to a concrete historical situation or social setting, or organizes data on the basis of prior theory. Preexisting theory provides the empty boxes. The researcher sees whether evidence can be gathered to fill them. The evidence in the boxes confirms or rejects the theory, which he or she treats as a useful device for interpreting the social world. The theory can be in the form of a general model, an analogy, or a sequence of steps.
  • A single case study with the illustrative method does not permit a strong test or verification of an explanation. This is because data from one case can illustrate the empty boxes from several competing explanations. In addition, finding evidence to illustrate an empty box using one case does not build a generalized explanation. A general explanation requires evidence from numerous cases.
    • Case Clarification: The theoretical model illuminates or clarifies a specific case or single situation. The case becomes understandable by applying the theory to it.
    • Parallel Demonstration: A researcher juxtaposes multiple cases to show that the theory operates in multiple cases. The researcher can illustrate theory with specific material from multiple cases.
    • Pattern Matching: A researcher matches the observations from one case with the pattern or concepts derived from theory or other studies. It allows for partial theory falsification: it narrows the range of possible explanations by eliminating some ideas, variables, or patterns from consideration.
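The "empty boxes" idea can be sketched literally as a mapping from theoretical concepts to the evidence that fills them. The concepts and evidence items below are made up for illustration:

```python
# Preexisting theory supplies the empty boxes (the keys).
empty_boxes = {"concept A": [], "concept B": [], "concept C": []}

# Hypothetical evidence gathered from a single case.
evidence = [
    ("concept A", "field note describing practice X"),
    ("concept B", "interview quote about Y"),
]

# Filling the boxes: each piece of evidence is filed under a concept.
for concept, item in evidence:
    empty_boxes[concept].append(item)

# Boxes that stay empty count against the theory for this case.
unfilled = [c for c, items in empty_boxes.items() if not items]
print(unfilled)  # -> ['concept C']
```

As the notes caution, filling the boxes from one case only illustrates the theory; it does not verify it, since a rival theory's boxes might be fillable from the same case.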

7. What is the difference between the method of agreement and the method of difference? Can a researcher use both together? Explain why or why not. (p. 471-473)
  • The method of agreement: If two cases of a phenomenon share only one feature, that feature is their cause or their effect. Example: Two persons in different places are asked to wear green spectacles all day. That night they are woken when they show REM sleep and asked what colour they are dreaming in. If both have green dreams, then the green glasses are the cause of the colour of their dreams. (link to ppt)
  • The method of difference: If a case in which a phenomenon occurs and one in which it does not occur differ by only one feature, that feature is the cause, a necessary part of the cause, or the effect of the phenomenon. This is the method used in most experiments, where an attempt is made to make two groups as identical as possible, then to give one of the groups an experimental treatment, and then to look to see whether the treatment has made the groups different. (link to same ppt)
  • You can use the method of difference alone or in conjunction with the method of agreement. The method of difference is usually stronger and is a "double application" of the method of agreement.
    • First, locate cases that are similar in many respects but differ in a few crucial ways.
    • Next, pinpoint features whereby one set of cases is similar with regard to an outcome and causal features, and another set whereby the cases differ on outcomes and causal features.
    • The method of difference reinforces information from positive cases (e.g., cases that have common causal features and outcomes) with negative cases (e.g., cases lacking the outcome and causal features). Thus, you look for cases that have many of the causal features of positive cases but lack a few key features and have a different outcome.
    • another good ppt
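The two methods reduce to simple set operations over case features. The cases and features below are invented (riffing on the green-glasses example); a real comparative analysis would use substantive case data:

```python
# Each case: (set of causal features present, whether the outcome occurred).
cases = {
    "case 1": ({"green glasses", "coffee", "late night"}, True),
    "case 2": ({"green glasses", "tea", "early riser"}, True),
    "case 3": ({"coffee", "late night"}, False),
}

def method_of_agreement(cases):
    """Features shared by every case in which the outcome occurred."""
    positives = [feats for feats, outcome in cases.values() if outcome]
    return set.intersection(*positives)

def method_of_difference(cases):
    """Features in every positive case but in no negative case --
    the 'double application' of the method of agreement."""
    agreed = method_of_agreement(cases)
    negatives = [feats for feats, outcome in cases.values() if not outcome]
    return agreed - set.union(*negatives)

print(method_of_agreement(cases))   # {'green glasses'}
print(method_of_difference(cases))  # {'green glasses'}
```

Note how the method of difference builds on the method of agreement: it first finds what the positive cases share, then removes anything that also appears in a negative case. This is why the two can be used together, with difference doing the stronger, eliminative work.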

8. What are the parts of a domain and how are they used in domain analysis? (p. 470)
  • Cultural domains have three parts: a cover term, included terms, and a semantic relationship.
    • The cover term is simply the domain's name.
    • Included terms are the subtypes or parts of the domain.
    • A semantic relationship tells how the included terms fit logically within the domain.
  • For example, consider the domain of a witness in a judicial setting. The cover term is "witness." Two subtypes or included terms are "defense witness" and "expert witness." The semantic relationship is "is a kind of." Thus, an expert witness and a defense witness are kinds of witnesses.
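The three parts of a domain map naturally onto a small record type. This is only a sketch; the class name and fields are my own, not Spradley's or Neuman's notation:

```python
from dataclasses import dataclass, field

@dataclass
class Domain:
    cover_term: str             # the domain's name
    semantic_relationship: str  # how included terms fit within the domain
    included_terms: list[str] = field(default_factory=list)  # subtypes/parts

    def sentences(self) -> list[str]:
        """Spell out each included term's logical place in the domain."""
        return [f"{term} {self.semantic_relationship} {self.cover_term}."
                for term in self.included_terms]

# The witness example from the notes, expressed as a Domain record.
witness = Domain("witness", "is a kind of",
                 included_terms=["defense witness", "expert witness"])
for s in witness.sentences():
    print(s)
```

Writing the domain out this way makes the analytic move explicit: the semantic relationship is the glue that ties every included term back to the cover term.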

9. What are the major features of a narrative? (p. 474)
  • Narrative analysis Both a type of historical writing that tells a story and a type of qualitative data analysis that presents a chronologically linked chain of events in which individual or collective social actors have an important role.
  • Despite the diversity of its uses, a narrative shares six core elements:
    1. telling a story or tale (i.e., presenting unfolding events from a point of view)
    2. a sense of movement or process (i.e., a before-and-after condition)
    3. interrelations or connections within a complex, detailed context
    4. an involved individual or collectivity that engages in action and makes choices
    5. coherence (i.e., the whole holds together), and
    6. the temporal sequencing of a chain of events

10. Why is it important to look for negative evidence, or things that do not appear in the data, for a full analysis? (p. 478)
  • Negative case method A method of qualitative data analysis in which a researcher focuses on a case that does not conform to theoretical expectations and uses details from that case to refine theory. (To study what is not explicit in the data or what did not happen.)
  • At first studying what is not there may appear counterintuitive, but an alert observer who is aware of all the clues notices what is missing as well as what is there. When what was expected does not occur, it is important information.
  • Negative evidence takes many forms:
    • Events that do not occur.
    • Events of which the population is unaware.
    • Events the population wants to hide.
    • Overlooked commonplace events.
    • Effects of a researcher's preconceived notions.
    • Unconscious nonreporting.
    • Conscious nonreporting.

Summary of Analytic Strategies Used in Qualitative Data Analysis
  • Ideal Type (p. 467)
  • Successive Approximation (p.469)
  • Illustrative Method (p. 469)
  • Domain Analysis (p. 470)
  • Analytic Comparison (p. 471)
  • Narrative Analysis (p. 474)
  • Negative Case Method (p. 478)

Neuman, page 481/2

==Module 4 - Engaging the research enterprise



Week 12 The knowledge base and ethics in research
  • Neuman, Chapter 5
  • Hayes, E. R. (1991). A brief guide to critiquing research. New Directions for Continuing Education, 51(Fall). (13 pages). (Module 4, Unit 1.)
  • Kuyper, B. J. (1991). Bringing up scientists in the art of critiquing research. BioScience, 41 (4). (3 pages). (Module 4, Unit 1.)

Week 13 Dissemination and politics of research findings
  • Neuman, Chapter 16
  • Easterby-Smith, M., Thorpe, R., & Lowe, A. (1991). Chapter 4: The politics of management research. In Management Research: An introduction. Thousand Oaks: Sage Publications. (27 pages). (Module 4, Unit 2.)

Neuman Chapter 5

Interesting ppt about ethics in Japan. Specifically page 20; see below. (RCR = Responsible Conduct of Research)
Slide 20 from the mentioned ppt
