MDDE 602 Module 2

=Module #2=


Textbook Readings
Week 4 Creating a research design > Neuman, Chapter 6
Week 5 Measurement and sampling > Neuman, Chapter 7 (pp. 169-189, 206-207) & Chapter 8
Week 6 Quantitative research design > Neuman, Chapters 9 & 10
Week 7 Qualitative research design > Neuman, Chapters 13 & 14

Articles to Read
Week 4 Creating a research design: > Jarvis, P. (1999). Action research. In //The practitioner-researcher: Developing theory from practice//. San Francisco: Jossey-Bass. (13 pages). (Module 1, Unit 1.)

Saba, F. (2000). Research in distance education: A status report. //International Review of Research in Open and Distance Learning, 1//(1). Retrieved February 21, 2002 (9 pages). (Module 2, Unit 1.)

Week 5 Measurement and sampling: > None

Week 6 Quantitative research design: > McLean, S. & Morrison, D. (2000). Sociodemographic characteristics of learners and participation in computer conferencing. //Journal of Distance Education, 15//(2), 17-36. (20 pages). (Module 2, Unit 3.)

Week 7 Qualitative research design: > McMillan, J. H. & Schumacher, S. (2001). Case study design. In //Research in education: A conceptual introduction//. Toronto: Addison Wesley Longman, Inc. (3 pages). (Module 2, Unit 4. )

Recommended Readings
Conrad, D. (2002). Deep in the hearts of learners: Insights into the nature of online community. //Journal of Distance Education, 17//(1), 1-19. (20 pages). (Module 2, Unit 4.)

Hoepfl, M. C. (1997). Choosing qualitative research: A primer for technology education researchers. //Journal of Technology Education, 9//(1). (Module 2, Unit 4.)


**WEEK 4**


**Objectives**
 * 1) Review research design opportunities that lead to quantitative data or qualitative data.
 * 2) Contrast the character of research designs that generate quantitative data versus qualitative data.
 * 3) Review the requirements of a causal relationship.
 * 4) Identify the requirements of a hypothesis, hypothesis testing, and a null hypothesis design.
 * 5) Explain units of analysis, ecological fallacy, reductionism, tautology, teleology and spuriousness in relation to causal hypotheses.
 * 6) Explain the place of ‘context’ in qualitative research.
 * 7) Consider the practitioner-researcher role and the research designs used.
 * 8) Outline research design issues in the field of distance education.

**Chapter 6**

1. What are the implications of saying that qualitative research uses more of a logic in practice than a reconstructed logic? (p. 151)

2. What does it mean to say that qualitative research follows a nonlinear path? In what ways is a nonlinear path valuable? (p. 152)

3. Describe the differences between independent, dependent, and intervening variables. (p. 161)

4. Why don't we prove results in social research? (p. 162/3) > Evidence supports or confirms, but does not prove, the hypothesis.

5. Take a topic of interest and develop two research questions for it. For each research question, specify the units of analysis and the universe.

6. What two hypotheses are used if a researcher uses the logic of disconfirming hypotheses? Why is negative evidence stronger? (p. 164)

7. Restate the following in terms of a hypothesis with independent and dependent variables: "The number of miles a person drives in a year affects the number of visits a person makes to filling stations, and there is a positive unidirectional relationship between the variables." > The number of miles a person drives in a year (independent) will increase the number of visits a person makes to filling stations (dependent).

8. Compare the ways quantitative and qualitative researchers deal with personal bias and the issue of trusting the researcher.

9. How do qualitative and quantitative researchers use theory? (bottom p.173) > Researchers use general theoretical issues as a source of topics. Theories provide concepts that researchers turn into variables as well as the reasoning or mechanism that helps researchers connect variables into a research question.

11. Explain how qualitative researchers approach the issue of interpreting data. Refer to first-, second-, and third-order interpretations. (p. 160)

**Week 5** **Measurement and sampling**

**Chapter 7**

1. What are the three basic parts of measurement, and how do they fit together?
 * 1) **Process**: Abstract Construct --> Conceptualization --> Conceptual Definition --> Operationalization --> Indicator/Measure
 * 2) **Reliability and Validity**:
 * 3) **Measurement**:

2. What is the difference between reliability and validity, and how do they complement each other?
 * Reliability: the numerical results produced by an indicator do not vary because of characteristics of the measurement process or measurement instrument itself. Reliability means dependability or consistency. (bathroom scale)
 * Stability reliability: reliability across time. It addresses the question: Does the measure deliver the same answer when applied in different time periods? (bathroom scale)
 * Representative reliability: reliability across subpopulations or groups of people. It addresses the question: Does the indicator deliver the same answer when applied to different groups? An indicator has high representative reliability if it yields the same result for a construct when applied to different subpopulations.
 * Equivalence reliability: applies when researchers use multiple indicators--that is, when a construct is measured with multiple specific measures. It addresses the question: Does the measure yield consistent results across different indicators?
 * Validity: sometimes used to mean "true" or "correct." When a researcher says that an indicator is valid, it is valid for a particular purpose and definition.
 * Validity and reliability are usually complementary concepts, but in some special situations they conflict with each other. Sometimes, as validity increases, reliability is more difficult to attain, and vice versa. This occurs when the construct has a highly abstract and not easily observable definition.
 * Face Validity: judgment by the scientific community that the indicator really measures the construct. It addresses the question: On the face of it, do people believe that the definition and method of measurement fit?
 * Content Validity: addresses the question: Is the full content of a definition represented in a measure? A conceptual definition holds ideas; it is a "space" containing ideas and concepts. Measures should sample or represent all ideas or areas in the conceptual space. Content validity involves three steps. First, specify the content in a construct's definition. Next, sample from all areas of the definition. Finally, develop one or more indicators that tap all of the parts of the definition.
 * Criterion Validity: uses some standard or criterion to indicate a construct accurately. The validity of an indicator is verified by comparing it with another measure of the same construct in which a researcher has confidence. There are two subtypes of this kind of validity:
 * Concurrent: agrees with a preexisting measure.
 * Predictive: agrees with future behavior. (Conservatism measure: test on conservative groups, who should score high, then on liberal groups, who should score low. If true, the measure is "validated" by the pilot testing.)
 * Construct Validity: for measures with multiple indicators. It addresses the question: If the measure is valid, do the various indicators operate in a consistent manner?
 * Convergent Validity: applies when multiple indicators converge or are associated with one another and multiple measures of the same construct hang together or operate in similar ways.
 * Discriminant Validity: the opposite of convergent validity; the indicators of one construct hang together or converge, but are also negatively associated with opposing constructs. (My measure of conservatism has discriminant validity if the 10 conservatism items both hang together and are negatively associated with the 5 liberalism items.)

3. What are ways to improve the reliability of a measure? (p. 190) > Four ways to increase the reliability of measures are
 * clearly conceptualize constructs,
 * use a precise level of measurement
 * use multiple indicators, and
 * use pilot tests.

4. How do the levels of measurement differ from each other?

5. What are the differences between convergent, content, and concurrent validity? Can you have all three at once? Explain your answer.
 * convergent validity means that multiple measures of the same construct operate in the same way
 * content validity means whether or not your instrument reflects the content you are trying to measure.
 * concurrent validity refers to a measurement's ability to correlate or vary directly with a currently accepted measure of the same construct.
 * Yes, one can have all three at once. The measure can correlate with previously accepted measures, and the measures of a construct can be convergent, and I would hope that the measure accurately reflects the construct such that it has content validity.

6. Why are multiple indicators usually better than one indicator? > Multiple indicators give confidence that the measure is reliable. (page 190)

7. What is the difference between the logic of a scale and that of an index?
 * Index: The summing or combining of many separate measures of a construct or variable to create a single score. (Consumer Price Index)
 * Scale: A class of quantitative data measures often used in survey research that captures the intensity, direction, level, or potency of a variable construct along a continuum. Scales are common in situations in which a researcher wants to measure how an individual feels or thinks about something.(Likert Scale) (page 207)
 * A researcher can combine several Likert scale questions into a composite index if they all measure a single construct. (p. 208)
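The summing logic of such a composite index can be sketched in a few lines of Python; the five answers and the reverse-coded item here are hypothetical, not from Neuman:

```python
# Sketch: combining five hypothetical 5-point Likert items (1 = strongly
# disagree ... 5 = strongly agree) into a single composite index score.
# One item is worded in the opposite direction, so it is reverse-coded first.

def composite_index(responses, reverse_coded=(), scale_max=5):
    """Sum Likert responses into one index score, reverse-coding as needed."""
    total = 0
    for i, score in enumerate(responses):
        if i in reverse_coded:
            score = scale_max + 1 - score  # flip 1<->5, 2<->4, etc.
        total += score
    return total

# One respondent's answers to five items measuring a single construct;
# the item at index 2 is reverse-worded:
answers = [4, 5, 2, 4, 3]
print(composite_index(answers, reverse_coded={2}))  # 4+5+(6-2)+4+3 = 20
```

Reverse-coding keeps every item pointing the same way along the construct, which is what lets the simple sum stand in for the underlying dimension.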

8. Why is unidimensionality an important characteristic of a scale? > Scales always place individuals on a single continuum, rating from one extreme to the other. (Likert scale)

9. What are advantages and disadvantages of weighting indexes?

10. How does standardization make comparisons easier?
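One common form of standardization is expressing raw counts on a common base, such as a rate per 1,000 people, so that units of different sizes become comparable. A minimal sketch with invented numbers:

```python
# Standardization sketch: raw counts of library users in two towns are not
# directly comparable, but rates per 1,000 residents (a common base) are.

def per_thousand(count, population):
    """Express a raw count as a rate per 1,000 people."""
    return count / population * 1000

towns = {"Town A": (4500, 90000), "Town B": (1200, 15000)}
for name, (users, pop) in towns.items():
    print(name, per_thousand(users, pop))
# Town A has more users in raw numbers (4,500 vs. 1,200), but the smaller
# town has the higher usage rate (80.0 vs. 50.0 per 1,000).
```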

The difference between a scale and an index:
 * **Scale**: A scale is a cluster of items (questions) that taps into a //unitary dimension// or single domain of behavior, attitudes, or feelings. They are sometimes called composites, subtests, schedules, or inventories. Aptitude, attitude, interest, performance, and personality tests are all measuring instruments based on scales. A scale is always //unidimensional//, which means it has construct and content validity. A scale is at the ordinal level, but it is conventional for researchers to treat scales as interval or higher. Scales are predictive of outcomes (like behavior, attitudes, or feelings) because they measure underlying traits (like introversion, patience, or verbal ability). It's probably an overstatement, but scales are primarily used to //predict effects//, as the following example shows:

An Example of a Scale Measuring Introversion:
 * I blush easily.
 * At parties, I tend to be a wallflower.
 * Staying home every night is all right with me.
 * I prefer small gatherings to large gatherings.
 * When the phone rings, I usually let it ring at least a couple of times.


 * **Index**: An index is a set of items (questions) that structures or focuses multiple yet distinctly related aspects of a dimension or domain of behavior, attitudes, or feelings into a single indicator or score. They are sometimes called composites, inventories, tests, or questionnaires. Like scales, they can measure aptitude, attitude, interest, performance, and personality, but the //only kinds of validity they have are convergent// (hanging together), content, and face validity. It is possible to use some statistical techniques (like factor analysis) to give them better construct validity (or factor weights), but it is a mistake to think of indexes as multidimensional, since even the most abstract constructs are assumed to have unidimensional characteristics. Indexes are usually at the ordinal or interval level. Indexes can be predictive of outcomes (again, using statistical techniques like regression), but they are //designed mainly for exploring the relevant causes or underlying symptoms of traits// (like criminality, psychopathy, or alcoholism). It's probably an overstatement, but indexes are used primarily to collect //causes or symptoms//, as the following example shows:

An Example of an Index Measuring Delinquency:
 * I have defied a teacher's authority to their face.
 * I have purposely damaged or destroyed public property.
 * I often skip school without a legitimate excuse.
 * I have stolen things worth less than $50.
 * I have stolen things worth more than $50.
 * I use tobacco.
 * I like to fight.
 * I like to gamble.
 * I drink beer, wine, or other alcohol.
 * I use illicit drugs.

**Neuman, Chapter 8**

1. When is purposive sampling used?
 * Purposive Sampling: a nonrandom sample in which the researcher uses a wide range of methods to locate all possible cases of a highly specific and difficult-to-reach population. (p. 222) Get all possible cases that fit particular criteria, using various methods. (p. 220)
 * It is used in exploratory research or in field research. (p. 222)
 * It uses the judgment of an expert in selecting cases, or it selects cases with a specific purpose in mind.
 * Purposive sampling is appropriate to select unique cases that are especially informative.
 * Another situation for purposive sampling occurs when a researcher wants to identify particular types of cases for in-depth investigation. The purpose is to gain a deeper understanding of types. (p. 223)
 * Intensive interviews are a device for generating insights, anomalies, and paradoxes, which later may be formalized into hypotheses that can be tested by quantitative social science methods. (p. 223)

2. When is the snowball sampling technique appropriate? (p. 222/3)
 * Social researchers are often interested in an interconnected network of people or organizations.
 * A nonrandom sample in which the researcher begins with one case, and then, based on information about interrelationships from that case, identifies other cases.
 * The crucial feature is that each person or unit is connected with another through a direct or indirect linkage.
 * Snowball sampling (also called network, chain referral, or reputational sampling) is a method for sampling (or selecting) the cases in a network. It is based on an analogy to a snowball, which begins small but becomes larger as it is rolled on wet snow and picks up additional snow. Snowball sampling is a multistage technique. It begins with one or a few people or cases and spreads out on the basis of links to the initial cases.

3. What is a sampling frame and why is it important? (p. 224/5/6)
 * A list of cases in a population, or the best approximation of it.
 * A researcher operationalizes by developing a specific list that closely approximates all the elements in the population--this list is a sampling frame.
 * A good sampling frame is crucial to good sampling. A mismatch between the sampling frame and the conceptually defined population can be a major source of error. Just as a mismatch between the theoretical and operational definitions of a variable creates invalid measurement, so a mismatch between the sampling frame and the population causes invalid sampling. Researchers try to minimize mismatches.
 * With a few exceptions (e.g., a list of all students enrolled at a university), sampling frames are almost always inaccurate.

4. Which sampling method is best when the population has several groups and a researcher wants to ensure that each group is in the sample?
 * Quota Sampling, for nonprobability samples (p. 220)
 * Stratified Sampling: a researcher first divides the population into subpopulations (strata) on the basis of supplementary information. After dividing the population into strata, the researcher draws a random sample from each subpopulation. In stratified sampling, the researcher controls the relative size of each stratum, rather than letting random processes control it. This guarantees representativeness or fixes the proportion of different strata within a sample. (p. 231)
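The stratified procedure just described can be sketched in Python; the strata names, sizes, and 10 percent sampling fraction are invented for illustration:

```python
import random

def stratified_sample(population_by_stratum, fraction, seed=0):
    """Draw the same sampling fraction at random from every stratum,
    so each group's share of the sample is fixed, not left to chance."""
    rng = random.Random(seed)
    sample = {}
    for stratum, members in population_by_stratum.items():
        n = round(len(members) * fraction)
        sample[stratum] = rng.sample(members, n)
    return sample

# Hypothetical population of 100 students split into two strata:
population = {
    "full-time": [f"ft{i}" for i in range(80)],
    "part-time": [f"pt{i}" for i in range(20)],
}
sample = stratified_sample(population, fraction=0.10)
print({k: len(v) for k, v in sample.items()})  # {'full-time': 8, 'part-time': 2}
```

Because the fraction is applied inside each stratum, the 80/20 split of the population is reproduced exactly in the 10-person sample, which simple random sampling would only approximate.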

5. How can you get a sampling interval from a sampling ratio?
 * Take the inverse of the sampling ratio: (sample size/population)^-1
 * ex. 300/900 = .333; (.333)^-1 = 3, so the sampling interval is 3
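As a cross-check on the arithmetic, a minimal Python sketch of the ratio-to-interval calculation and the systematic selection it drives (the 300-from-900 figures are the example's own; the frame is a stand-in):

```python
def sampling_interval(sample_size, population_size):
    """Sampling ratio = sample/population; the interval is its inverse,
    rounded to a whole number of cases to skip."""
    ratio = sample_size / population_size
    return round(1 / ratio)

def systematic_sample(population, interval, start=0):
    """Select every k-th case from the sampling frame, beginning at a
    start point (normally chosen at random)."""
    return population[start::interval]

k = sampling_interval(300, 900)
print(k)  # 3
frame = list(range(900))                 # stand-in sampling frame
print(len(systematic_sample(frame, k)))  # 300
```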

6. When should a researcher consider using probability proportionate to size?
 * When the cluster groups are of different sizes. For instance, when selecting 300 students from 3,000 universities. Some universities are large and some small. (A principle of random sampling is that each element has an equal chance to be selected into the sample.) (p. 237)
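A minimal sketch of a PPS draw in Python, using a weighted random choice; the university names and enrolments are invented, and for simplicity clusters are drawn with replacement, which real PPS designs usually avoid:

```python
import random

def pps_draw(clusters, k, seed=0):
    """Select k clusters with probability proportionate to their size,
    so a student at a large university has roughly the same overall
    chance of selection as one at a small university."""
    rng = random.Random(seed)
    names = list(clusters)
    sizes = [clusters[name] for name in names]
    return rng.choices(names, weights=sizes, k=k)  # draws with replacement

# Three hypothetical universities of very different sizes:
enrolments = {"Big U": 40000, "Mid U": 8000, "Small U": 2000}
print(pps_draw(enrolments, k=5))
```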

7. What is the population in random-digit dialing? Are sampling frame problems avoided? Explain. (p. 237)
 * technique used in research projects in which the general public is interviewed by telephone
 * A researcher using RDD randomly selects telephone numbers, thereby avoiding the problems of telephone directories. The population is telephone numbers, not people with telephones.
 * Because three kinds of people are missed when the sampling frame is a telephone directory: people without telephones, people who have recently moved, and people with unlisted numbers. Those without phones (e.g., the poor, the uneducated, and transients) are missed in any telephone interview study, but 95 percent of people have a telephone in advanced industrialized nations.

8. How do researchers decide how large a sample to use?

9. How are the logic of sampling and the logic of measurement related?

10. When is random-digit dialing used, and what are its advantages and disadvantages?


**Week 6** **Quantitative research design**

**Neuman, Chapter 9**

1. What are the seven elements or parts of an experiment?
 * 1) Treatment or independent variable
 * 2) Dependent variable
 * 3) Pretest
 * 4) Posttest
 * 5) Experimental group
 * 6) Control group
 * 7) Random assignment

2. What distinguishes preexperimental designs from the classical design?
 * Preexperimental designs: experimental designs that lack random assignment or use shortcuts and are much weaker than the classical experimental design. They are substituted in situations in which an experimenter cannot use all the features of a classical experimental design, but they have weaker internal validity.

3. Which design permits the testing of different sequences of several treatments? (p. 257)
 * Latin Square Designs: used by researchers interested in how several treatments given in different sequences or time orders affect a dependent variable.

4. A researcher says, "It was a three by two design, with the independent variables level of fear (low, medium, high) and ease of escape (easy/difficult) and the dependent variable anxiety." What does this mean? What is the design notation, assuming that random assignment with posttest only was used?
 * Three by two design: there are two numbers, 3 and 2, so there are two independent variables; one variable has three levels and the other variable has two levels. (p. 259)
 * With random assignment and posttest only, there are 3 x 2 = 6 groups; in standard design notation each group is written on its own line as R X O (random assignment, one treatment combination, posttest observation).

5. How do the interrupted and the equivalent time-series designs differ? (p. 256/7)
 * Interrupted Time Series Design: a researcher uses one group and makes multiple measures before and after the treatment.
 * Equivalent Time Series Design: another one-group design that extends over a time period. Instead of one treatment, it has a pretest, then a treatment and posttest, then treatment and posttest, then treatment and posttest, and so on.

6. What is the logic of internal validity and how does the use of a control group fit into that logic? (p. 259)
 * Researchers use control groups to eliminate potential alternative explanations for associations between the treatment and dependent variable.
 * Internal validity exists when the hypothesized independent variable alone affects the dependent variable.

7. How does the Solomon four-group design show the testing effect? (p. 257)
 * A Solomon four-group design is used if the subjects might learn from taking the test itself. The researcher randomly divides subjects into four groups. Two groups receive the pretest: one of them gets the new training method and the other gets the old method. Another two groups receive no pretest: one of them gets the new method and the other the old method. All four groups are given the same posttest, and the posttest results are compared. If the two treatment (new method) groups have similar results, and the two control (old method) groups have similar results, then the researcher knows pretest learning is not a problem. If the two groups with a pretest (one treatment, one control) differ from the two groups without a pretest, then the researcher concludes that the pretest itself may have an effect on the dependent variable.

8. What is the double-blind experiment and why is it used? (p. 263/4)
 * Designed to control researcher expectancy. In it, people who have direct contact with subjects do not know the details of the hypothesis or the treatment. It is double blind because both the subjects and those in contact with them are blind to details of the experiment.
 * The double-blind design is nearly mandatory in medical research because experimenter expectancy effects are well recognized.

9. Do field or laboratory experiments have greater internal validity? External validity? Explain.
 * Laboratory experiments tend to have greater internal validity but lower external validity. They are logically tighter and better controlled, but less generalizable.
 * Field experiments tend to have greater external validity but lower internal validity. They are more generalizable but less controlled.

10. What is the difference between experimental and mundane realism? (p. 265)
 * Experimental realism is the impact of an experimental treatment or setting on subjects; it occurs when experimental subjects are caught up in the experiment and are truly influenced by it. It is weak if subjects remain unaffected by the treatment, which is why researchers go to great lengths to create realistic conditions.
 * Mundane realism asks: Is the experiment like the real world? For example, a researcher studying learning has subjects memorize four-letter nonsense syllables. Mundane realism would be stronger if he or she had subjects learn factual information used in real life instead of something invented for an experiment alone.

**Neuman, Chapter 10**

1. What are the six types of things surveys often ask about? (p. 273)
 * 1) **Behavior**. How frequently do you brush your teeth? Did you vote in the last city election? When did you last visit a close relative?
 * 2) **Attitudes/beliefs/opinions**. What kind of job do you think the mayor is doing? Do you think other people say many negative things about you when you are not there? What is the biggest problem facing the nation these days?
 * 3) **Characteristics**. Are you married, never married, single, divorced, separated, or widowed? Do you belong to a union? What is your age?
 * 4) **Expectations**. Do you plan to buy a new car in the next 12 months? How much schooling do you think your child will get? Do you think the population in this town will grow, shrink, or stay the same?
 * 5) **Self**-**classification**. Do you consider yourself to be liberal, moderate, or conservative? Into which social class would you put your family? Would you say you are highly religious or not religious?
 * 6) **Knowledge**. Who was elected mayor in the last election? About what percentage of the people in this city are nonwhite? Is it legal to own a personal copy of Karl Marx's Communist Manifesto in this country?

2. Why are surveys called correlational, and how do they differ from experiments? (p. 276)
 * I think experiments are typically easier to interpret as demonstrating causal relationships, whereas the results of surveys do not usually demonstrate causal relationships. Data generated by surveys can only show correlation--I guess that's why they are called correlational.
 * In experiments, researchers place people in small groups, test one or two hypotheses with a few variables, control the timing of the treatment, note associations between the treatment and the dependent variable, and control for alternative explanations.
 * By contrast, survey researchers sample many respondents who answer the same questions, measure many variables, test multiple hypotheses, and infer temporal order from questions about past behavior, experiences, or characteristics.

3. What five changes occurred in the 1960s and 1970s that dramatically affected survey research? (p. 275)
 * 1) **Computers**. Computer technology that became available to social scientists by the 1960s made the sophisticated statistical analysis of large-scale survey data sets feasible for the first time. Today, the computer is an indispensable tool for analyzing data from most surveys.
 * 2) **Organizations**. New social research centers with an expertise and interest in quantitative research were established at universities. About 50 such centers were created in the years after 1960.
 * 3) **Data storage**. By the 1970s, data archives were created to store and permit the sharing of large-scale survey data for secondary analysis (discussed in Chapter 11). The collection, storage, and sharing of information on hundreds of variables for thousands of respondents expanded the use of surveys.
 * 4) **Funding**. For about a decade (late 1960s to late 1970s), the U.S. federal government expanded funds for social science research. Total federal spending for research and development in the social sciences increased nearly tenfold from 1960 to the mid-1970s before it declined in the 1980s.
 * 5) **Methodology**. By the 1970s, substantial research was being conducted on ways to improve the validity of surveys. The survey technique advanced as errors were identified and corrected. In addition, researchers created improved statistics for analyzing quantitative data and taught them to a new generation of researchers. Since the 1980s, new cognitive psychology theories have been applied to survey research.

4. Identify 5 of the 10 things to avoid in question writing.

5. What topics are threatening to respondents, and how can a researcher ask about them? (p. 283)
 * One technique is to establish a comfortable setting before asking the questions. Researchers state guarantees of anonymity and confidentiality explicitly and emphasize the need for honest answers. They ask sensitive questions following a "warm-up period" of other nonthreatening questions and after creating an atmosphere of trust and comfort.
 * A second technique is to use an "enhanced" phrasing of questions. For example, rather than asking, "Have you shoplifted?"--which has an accusatory tone and uses the word shoplift, which implies committing an illegal act--instead get at the same behavior by asking, "Have you ever taken anything from a store without paying for it?"
 * Studies show that survey formats that permit greater respondent anonymity, such as a self-administered questionnaire or web-based survey, increase the likelihood of honest responses over formats that involve interacting with another person, such as a face-to-face or telephone interview.
 * Randomized response technique (RRT): a specialized technique in survey research that is used for very sensitive topics. With it, a respondent randomly receives a question without the interviewer being aware of which question the respondent is answering.
 * Technological innovations such as computer-assisted self-administered interviews (CASAI) and computer-assisted personal interviewing (CAPI) also increase respondent comfort and honesty in answering questions on sensitive topics. In CASAI, respondents are "interviewed" with questions asked on a computer screen or over earphones. They answer by moving a computer mouse or typing on a keyboard.

6. What are advantages and disadvantages of open-ended versus closed-ended questions? (p. 287)
 * Open-ended question is a type of survey research question in which respondents are free to offer any answer they wish to the question.
 * Closed-ended question is a type of survey research question in which respondents must choose from a fixed set of answers.
**Advantages of closed-ended questions:**
 * It is easier and quicker for respondents to answer.
 * The answers of different respondents are easier to compare.
 * Answers are easier to code and statistically analyze.
 * The response choices can clarify question meaning for respondents.
 * Respondents are more likely to answer about sensitive topics.
 * There are fewer irrelevant or confused answers to questions.
 * Less articulate or less literate respondents are not at a disadvantage.
 * Replication is easier.

**Disadvantages of closed-ended questions:**
 * They can suggest ideas that the respondent would not otherwise have.
 * Respondents with no opinion or no knowledge can answer anyway.
 * Respondents can be frustrated because their desired answer is not a choice.
 * It is confusing if many (e.g., 20) response choices are offered.
 * Misinterpretation of a question can go unnoticed.
 * Distinctions between respondent answers may be blurred.
 * Clerical mistakes or marking the wrong response is possible.
 * They force respondents to give simplistic responses to complex issues.
 * They force people to make choices they would not make in the real world.


**Advantages of open-ended questions:**
 * They permit an unlimited number of possible answers.
 * Respondents can answer in detail and can qualify and clarify responses.
 * Unanticipated findings can be discovered.
 * They permit adequate answers to complex issues.
 * They permit creativity, self-expression, and richness of detail.
 * They reveal a respondent's logic, thinking process, and frame of reference.

**Disadvantages of open-ended questions:**
 * Different respondents give different degrees of detail in answers.
 * Responses may be irrelevant or buried in useless detail.
 * Comparisons and statistical analysis become very difficult.
 * Coding responses is difficult.
 * Articulate and highly literate respondents have an advantage.
 * Questions may be too general for respondents who lose direction.
 * Responses are written verbatim, which is difficult for interviewers.
 * A greater amount of respondent time, thought, and effort is necessary.
 * Respondents can be intimidated by questions.
 * Answers take up a lot of space in the questionnaire.

7. What are filtered, quasi-filtered, and standard-format questions? How do they relate to floaters? (p. 289)
 * **Satisficing**: when respondents pick "no response" to avoid expending the effort of answering.
 * **Standard-format question**: a type of survey research question in which the answer categories do not include a "no opinion" or "don't know" option. (By not offering a neutral choice, such questions put pressure on respondents to give a response.)
 * **Quasi-filter question**: a survey research question that includes the answer choice "no opinion," "unsure," or "don't know."
 * **Full-filter question**: a survey research question in which respondents are first asked whether they have an opinion or know about a topic; then only those with an opinion or knowledge are asked a specific question about the topic.
 * **Floaters**: survey research respondents without the knowledge or an opinion to answer a survey question but who answer it anyway, often giving inconsistent answers. They "float" from giving a response to not knowing.
 * Some believe it is best to offer the "no opinion" options, because when respondents are pressured for an answer they don't have, they will express opinions on fictitious issues, objects, and events.

8. How does ordinary conversation differ from a survey interview?

9. Under what conditions are mail questionnaires, telephone interviews, web surveys, or face-to-face interviews best?

10. What are CATI and IVR, and when might they be useful?
 * **Computer-assisted telephone interviewing** (CATI) Survey research telephone interviewing in which the interviewer sits before a computer screen and keyboard, reads questions from the screen, and enters answers directly into the computer.
 * **Interactive Voice Response** (IVR) A technique in telephone interviewing in which respondents hear computer-automated questions and indicate their responses by touch-tone phone entry or voice-activated software.


 * **Week 7**

 * **Neuman, Chapter 13**

1. What were the two major phases in the development of the Chicago school, and what are the journalistic and anthropological models? (p. 380/1)
 * In the first phase, from the 1910s to the 1930s, the school used a variety of methods based on the case study or life history approach, including direct observation, informal interviews, and reading documents or official records.
 * Journalistic and anthropological models of research were combined in the first phase.
 * The journalistic model has a researcher get behind fronts, use informants, look for conflict, and expose what is "really happening."
 * In the anthropological model, a researcher attaches himself or herself to a small group for an extended period of time and reports on the members' views of the world.
 * In the second phase, from the 1940s to the 1960s, the Chicago school developed participant observation as a distinct technique. It expanded the anthropological model to groups and settings in the researcher's society.
 * Three principles emerged:
 * 1) Study people in their natural settings, or in situ.
 * 2) Study people by directly interacting with them.
 * 3) Gain an understanding of the social world and make theoretical statements based on the members' perspective.
 * Over time, the method moved from strict description to theoretical analyses based on involvement by the researcher in the field.

2. List 5 of the 10 things the "methodological pragmatist" field researcher does. (p. 383)
 * A field researcher does the following:
 * 1) Observes ordinary events and everyday activities as they happen in natural settings, in addition to any unusual occurrences
 * 2) Becomes directly involved with the people being studied and personally experiences the process of daily social life in the field setting
 * 3) Acquires an insider's point of view while maintaining the analytic perspective or distance of an outsider
 * 4) Uses a variety of techniques and social skills in a flexible manner as the situation demands
 * 5) Produces data in the form of extensive written notes, as well as diagrams, maps, or pictures to provide very detailed descriptions
 * 6) Sees events holistically (e.g., as a whole unit, not in pieces) and individually in their social context
 * 7) Understands and develops empathy for members in a field setting, and does not only record "cold" objective facts
 * 8) Notices both explicit (recognized, conscious, spoken) and tacit (less recognized, implicit, unspoken) aspects of culture
 * 9) Observes ongoing social processes without imposing an outside point of view
 * 10) Copes with high levels of personal stress, uncertainty, ethical dilemmas, and ambiguity

3. Why is it important for a field researcher to read the literature before beginning fieldwork? How does this relate to defocusing? (p. 385)
 * As with all social research, reading the scholarly literature helps you learn concepts, potential pitfalls, data collection methods, and techniques for resolving conflicts. In addition, you may find diaries, novels, journalistic accounts, and autobiographies useful for gaining familiarity and preparing emotionally for the field.
 * You should not get locked into any initial misconceptions, but be open to discovering new ideas. Finding the right questions to ask about the field takes time.
 * **Defocusing** A technique early in field research when the researcher removes his or her past assumptions and preconceptions to become more open to events in a field site.
 * Reading the literature can assist in opening your mind to greater possibilities and help you defocus, so that you do not view the field exclusively through your prior assumptions as a researcher.

4. Identify the characteristics of a field site that make it a good one for a beginning field researcher. (p. 386)
 * Three factors are relevant when choosing a field research site: **richness of data**, **unfamiliarity**, and **suitability**.
 * Sites that present a web of social relations, a variety of activities, and diverse events over time provide richer, more interesting data.
 * Beginning field researchers should choose an unfamiliar setting because it is easier to see cultural events and social relations in a new site. Bogdan and Taylor (1975:28) noted, "//We would recommend that researchers choose settings in which the subjects are strangers and in which they have no particular professional knowledge or expertise//."
 * When "casing" possible field sites, you must consider such practical issues as your time and skills, serious conflicts among people in the site, personal characteristics and feelings, and access to parts of a site.

5. How does the "presentation of self" affect a field researcher's work? (p. 389/90)
 * People explicitly and implicitly present themselves to others. We display who we are, the type of person we are or would like to be, through our physical appearance, what we say, and how we act. The presentation of self sends a symbolic message.
 * A good field researcher is very conscious of the presentation of self in the field.
 * A researcher must be aware that self-presentation will influence field relations to some degree.

6. What is the attitude of strangeness, and why is it important? (p. 390/1)
 * **Attitude of strangeness** A field research technique in which researchers mentally adjust to "see" events in the field as if for the first time or as an outsider.
 * Researchers adopt the attitude of strangeness in familiar surroundings because it is easy to be blinded by the familiar. In fact, "intimate acquaintance with one's own culture can create as much blindness as insight".
 * This confrontation of cultures, or culture shock, has two benefits: it makes it easier to see cultural elements, and it facilitates self-discovery.

7. What are relevant considerations when choosing roles in the field, and how can the degree of researcher involvement vary? (p. 392/3)
 * The assigned role and how a researcher performs in that role influence not only the ease and degree of access but also the success in developing social trust and securing cooperation in the field. Some existing roles provide access to all areas of the site, the ability to observe and interact with all members, the freedom to move around, and a way to balance the requirements of researcher and member. At other times, a researcher creates a new role or modifies an existing one. A researcher may adopt several different field roles over time in the field.
 * The field roles open to you are affected by ascriptive factors and physical appearance. You can change some aspects of appearance, such as dress or hairstyle, but not ascriptive features such as age, race, gender, and attractiveness.
 * Because many roles are sex-typed, gender is an important consideration. Female researchers often have more difficulty when the setting is perceived as dangerous or seamy and where males are in control (e.g., police work, fire fighting, etc.). Male researchers have more problems in routine and administrative sites where females are in control (e.g., courts, large offices, etc.). They may not be accepted in female-dominated territory. In sites where both males and females are involved, both sexes may be able to enter and gain acceptance.
 * A role can help you gain acceptance into or be excluded from a clique, be treated as a person in authority or as an underling, and be a friend or an enemy of some members. You need to be aware that by adopting a role, you may be forming allies and enemies who can assist or limit research.
 * A field researcher should be aware of risks to his or her safety, assess the risks, and then decide what he or she is willing to do.

8. Identify three ways to ensure quality field research data.
 * watching and listening (p. 396)
 * taking notes (p. 398)
 * ensuring data quality (p. 402)
 * An interpretive approach suggests a different kind of data quality. Instead of assuming one single, objective truth, field researchers hold that members subjectively interpret experiences within a social context. What a member takes to be true results from social interaction and interpretation. Thus, high-quality field data capture such processes and provide an understanding of the member's viewpoint. A field researcher does not eliminate subjective views to get quality data; rather, quality data include his or her subjective responses and experiences. Quality field data are detailed descriptions from the researcher's immersion and authentic experiences in the social world of members.
 * **Reliability**: Are researcher observations about a member or field event internally and externally consistent? (p. 404)
 * //Internal consistency// refers to whether the data are plausible given all that is known about a person or event, eliminating common forms of human deception. In other words, do the pieces fit together into a coherent picture? For example, are a member's actions consistent over time and in different social contexts?
 * //External consistency// is achieved by verifying or cross-checking observations with other, divergent sources of data. In other words, does it all fit into the overall context? For example, can others verify what a researcher observed about a person? Does other evidence confirm the researcher's observations?
 * Reliability in field research also includes what is not said or done but is expected or anticipated. Such omissions or null data can be significant but are difficult to detect.

 * **Validity** in field research comes from a researcher's analysis and data as accurate representations of the social world in the field. (p. 405)
 * **Ecological validity** is the degree to which the social world described by a researcher matches the world of members. It asks: Is the natural setting described relatively undisturbed by the researcher's presence or procedures? A study has ecological validity if events would have occurred without a researcher's presence.
 * **Natural history** is a detailed description of how the project was conducted. It is a full and candid disclosure of a researcher's actions, assumptions, and procedures for others to evaluate. A study is valid in terms of natural history if outsiders see and accept the field site and the researcher's actions.
 * **Member validation** occurs when a researcher takes field results back to members, who judge their adequacy. A study is member valid if members recognize and understand the researcher's description as reflecting their intimate social world. Member validation has limitations because conflicting perspectives in a setting produce disagreement with the researcher's observations, and members may object when results do not portray their group in a favorable light. In addition, members may not recognize the description because it is not from their perspective or does not fit with their purposes.
 * **Competent insider performance** is the ability of a nonmember to interact effectively as a member or pass as one. This includes the ability to tell and understand insider jokes. A valid study gives enough of a flavor of the social life in the field and sufficient detail so that an outsider can act as a member. Its limitation is that it is not possible to know the social rules for every situation. Also, an outsider might be able to pass simply because members are being polite and do not want to point out social mistakes.

9. Compare differences between a field research interview and a survey research interview, and between a field interview and a friendly conversation. (p. 407)

**Typical survey interview:**
 * 1) It has a clear beginning and end.
 * 2) The same standard questions are asked of all respondents in the same sequence.
 * 3) The interviewer appears neutral at all times.
 * 4) The interviewer asks questions, and the respondent answers.
 * 5) It is almost always with one respondent alone.
 * 6) It has a professional tone and businesslike focus; diversions are ignored.
 * 7) Closed-ended questions are common, with infrequent probes.
 * 8) The interviewer alone controls the pace and direction of the interview.
 * 9) The social context in which the interview occurs is ignored and assumed to make little difference.
 * 10) The interviewer attempts to mold the communication pattern into a standard framework.

**Typical field interview:**
 * 1) The beginning and end are not clear. The interview can be picked up later.
 * 2) The questions and the order in which they are asked are tailored to specific people and situations.
 * 3) The interviewer shows interest in responses and encourages elaboration.
 * 4) It is like a friendly conversational exchange, but with more interviewer questions.
 * 5) It can occur in a group setting or with others in the area, but this varies.
 * 6) It is interspersed with jokes, asides, stories, diversions, and anecdotes, which are recorded.
 * 7) Open-ended questions are common, and probes are frequent.
 * 8) The interviewer and member jointly control the pace and direction of the interview.
 * 9) The social context of the interview is noted and seen as important for interpreting the meaning of responses.
 * 10) The interviewer adjusts to the member's norms and language usage.

You are familiar with a friendly conversation, which has its own informal rules and the following elements: (p. 407/8)
 * a greeting ("Hi, it's good to see you again");
 * the absence of an explicit goal or purpose (we don't say, "Let's now discuss what we did last weekend");
 * avoidance of explicit repetition (we don't say, "Could you clarify what you said about ... ");
 * question asking ("Did you see the race yesterday?");
 * expressions of interest ("Really? I wish I could have been there!");
 * expressions of ignorance ("No, I missed it. What happened?");
 * turn taking, so the encounter is balanced (one person does not always ask questions and the other only answer);
 * abbreviations ("I missed the Derby, but I'm going to the Indy," not "I missed the Kentucky Derby horse race but I will go to the Indianapolis 500 automotive race");
 * a pause or brief silence when neither person talks is acceptable;
 * a closing (we don't say, "Let's end this conversation"; instead, we give a verbal indicator before physically leaving: "I've got to get back to work now. See ya tomorrow.").

The field interview differs from a friendly conversation. (p. 408)
 * It has an **explicit purpose**--to learn about the informant and setting. A researcher includes explanations or requests that diverge from friendly conversation.
 * The field interview is **less balanced**. A higher proportion of questions come from the researcher, who expresses more ignorance and interest.
 * Also, it includes **repetition**, and a researcher asks the member to elaborate on unclear abbreviations.
 * Most importantly, the interviewer **listens**. He or she does not interrupt frequently, repeatedly finish the respondent's sentences, offer associations (e.g., "Oh, that is just like X"), insist on finishing asking a question that the respondent has begun to answer, fight for control over the interview process, or stay fixed with a line of thought and ignore new leads.

10. What are the different types or levels of field notes, and what purpose does each serve? (p. 401)
 * **Jotted notes** Field notes inconspicuously written while in the field site on whatever is convenient in order to "jog the memory" later.
 * **Direct observation notes** Field research notes that attempt to include all details and specifics of what the researcher heard or saw in a field site, and are written to permit multiple interpretations later.
 * **Separation of inference** A field researcher writes direct observation notes in a way that keeps what was observed separate from what was inferred or believed to have occurred.
 * **Analytic memos** Notes a qualitative researcher takes while developing more abstract ideas, themes, or hypotheses from an examination of details in the data.
 * **Personal journal** Personal feelings and emotional reactions become part of the data and color what a researcher sees or hears in the field. A researcher keeps a section of notes that is like a personal diary. Personal notes provide a way to cope with stress; they are a source of data about personal reactions; and they help to evaluate direct observation or inference notes when the notes are later reread. For example, if you were in a good mood during observations, it might color what you observed.

 * **Neuman, Chapter 14**

1. What are some of the unique features of historical-comparative research?
 * The evidence for H-C research is usually **limited and indirect**. Direct observation or involvement by a researcher is often impossible. An H-C researcher reconstructs what occurred from the evidence. The researcher is limited to what has not been destroyed and what leaves a trace, record, or other evidence behind.
 * Historical-comparative researchers **interpret the evidence**. The researcher becomes immersed in and absorbs details about a context.
 * A researcher's **reconstruction** of the past or another culture **is easily distorted**. Compared to the people being studied, a researcher is usually more aware of events occurring prior to the time studied, events occurring in places other than the location studied, and events that occurred after the period studied. This awareness gives the researcher a greater sense of coherence than was experienced by those living in the past or in an isolated social setting. A researcher's broader awareness can create the illusion that things happened because they had to, or that they fit together neatly.
 * A researcher cannot easily **see through the eyes of those being studied.** Knowledge of the present and changes over time can distort how events, people, laws, or even physical objects are perceived.
 * Historical-comparative researchers **recognize the capacity of people to learn, make decisions, and act** on what they learn to modify the course of events. People's capacity to learn introduces indeterminacy into historical-comparative explanations.
 * An H-C researcher wants to find out whether the people involved saw various courses of action as plausible. Thus, the **worldview and knowledge of those people is a conditioning factor**, shaping what the people being studied saw as possible or impossible. The researcher asks whether people were conscious of certain things.
 * Historical-comparative research takes a **contingent view of causality**. An H-C researcher often uses combinational explanations. They are analogous to a chemical reaction in which several ingredients (chemicals, oxygen) are added together under specified conditions (temperature, pressure) to produce an outcome (explosion).
 * Historical-comparative research focuses on **whole cases versus separate variables across cases**. A researcher approaches the whole as if it has multiple layers. He or she grasps surface appearances as well as reveals the general, hidden structures, unseen mechanisms, or causal processes.
 * A historical-comparative researcher integrates the **micro (small-scale, face-to-face interaction) and macro (large-scale social structures) levels.** Instead of describing micro-level or macro-level processes alone, the researcher describes both levels or layers of reality and links them to each other.
 * H-C research **shifts between a specific context and a general comparison**. A researcher examines specific contexts, notes similarities and differences, then generalizes. He or she then looks again at the specific contexts using the generalizations.

2. What are the similarities between field research and H-C research? (p. 424/5)
 * Both H-C research and field research recognize that the researcher's point of view is an unavoidable part of research. Both involve **interpretation**, which introduces the interpreter's location in time, place, and worldviews. Historical-comparative research does not try to produce a single, unequivocal set of objective facts. Rather, it is a confrontation of old with new or of different worldviews. It recognizes that a researcher's reading of historical or comparative evidence is influenced by an awareness of the past and by living in the present.
 * Both field and H-C research examine a great diversity of data. In both, the researcher becomes **immersed** in data to gain an empathic understanding of events and people. Both capture subjective feelings and note how everyday, ordinary activities signify important social meaning.
 * Both field and H-C researchers often use **grounded theory**. Theory usually emerges during the process of data collection. Both examine the data without beginning with fixed hypotheses. Instead, they develop and modify concepts and theory through a dialogue with the data, then apply theory to reorganize the evidence. Thus, data collection and theory building interact.
 * Both field and H-C research involve a type of **translation**. The researcher's meaning system usually differs from that of the people he or she studies, but he or she tries to penetrate and understand their point of view. Once the life, language, and perspective of the people being studied have been mastered, the researcher "translates" it for others who read his or her report.
 * Both field and H-C researchers **focus on action, process, and sequence** and **see time and process as essential**. Both are sensitive to an ever-present tension between agency, the fluid social action and changing social reality, and structure, the fixed regularities and patterns that shape social actions and perceptions. Both see social reality simultaneously as something created and changed by people and as imposing a restriction on human choice.
 * Generalization and theory are limited in field and H-C research. Historical and cross-cultural knowledge is incomplete and provisional, based on selective facts and limited questions. Neither deduces propositions or tests hypotheses in order to uncover fixed laws. Likewise, replication is unrealistic because each researcher has a unique perspective and assembles a unique body of evidence. Instead, researchers offer plausible accounts and limited generalizations.

3. What is the Annales school, and what are three characteristics or terms in its orientation toward studying the past? (p. 427)
 * The **Annales school** is a research method associated with a group of French historians and named after the scholarly journal //Annales: Economies, Societes, Civilisations//, founded in 1929. The school's orientation can be summarized by four interrelated characteristics.
 * 1) One characteristic is the school's **synthetic, totalizing, holistic, or interdisciplinary approach**. Annales researchers combine geography, ecology, economics, and demography with cultural factors to give a total picture of the past. They blend together the diverse conditions of material life and collective beliefs or culture into a comprehensive reconstruction of the past civilization.
 * 2) A second characteristic is illustrated by a French term of the school, the **//mentalities//** of an era. This term is not directly translatable into English. It means a **distinctive worldview, perspective, or set of assumptions about life**: the way that thinking was organized, or the overall pattern of conscious and unconscious cognition, belief, and values that prevailed in an era. Thus, researchers try to discover the overall arrangement of thought in a historical period that shaped subjective experience about fundamental aspects of reality: the nature of time, the relationship of humans to the physical environment, how truth is created, and the like.
 * 3) A third characteristic is that the Annales approach **mixes concrete historical specificity and abstract theory**. Theory takes the form of models or deep underlying structures, which are causal or organizing principles that account for everyday events. Annales historians look for both the deep-running currents that shape surface events and individual actions.
 * 4) A last characteristic is an **interest in long-term structures or patterns**. In contrast to traditional historians, who focus on particular individuals or events over short time spans, from several years to a few decades, Annales historians examine long-term changes, over periods of a century or more, in the fundamental way that social life is organized. They use the term //longue duree//. It means a long duration or a historical era in geographic space (e.g., feudalism in western Europe, or the fifteenth to eighteenth centuries in the Mediterranean region).

4. What is the difference between a critical indicator and supporting evidence? (p. 430)
 * **Critical indicator** A clear, unambiguous measure or indicator of a concept in a specific cultural or historical setting.
 * **Supporting evidence** Evidence for less central parts of a model that builds the overall background or context. Supporting evidence is less abundant or weaker, and lacks a clear and unambiguous theoretical interpretation.
 * In short: a critical indicator is unambiguous and clear; supporting evidence is weaker.

5. What questions are asked by a researcher using external criticism? (p. 436)

6. What are the limitations of using secondary sources? (p. 433/4)
 * The limitations of secondary historical evidence include problems of inaccurate historical accounts and a lack of studies in areas of interest. Such sources cannot be used to test hypotheses. Post facto (after-the-fact) explanations cannot meet positivist criteria of falsifiability, because few statistical controls can be used and replication is impossible. Yet, historical research by others plays an important role in developing general explanations, among its other uses. For example, such research substantiates the emergence and evolution of tendencies over time.
 * One problem is in reading the works of historians. Historians do not present theory-free, objective "facts." They implicitly frame raw data, categorize information, and shape evidence using concepts. The historian's concepts are often a mixture drawn from journalism, the language of historical actors, ideologies, philosophy, everyday language in the present, and social science. They may be vague, applied inconsistently, and neither mutually exclusive nor exhaustive.
 * A second problem is that the historian's selection procedure is not transparent. Historians select some information from all possible evidence. Yet, the H-C researcher does not know how this was done. Without knowing the selection process, a historical-comparative researcher must rely on the historian's judgments, which can contain biases.
 * A third issue is in the organization of the evidence. Historians organize evidence as they write narrative history. This compounds problems of undefined concepts and the selection of evidence.

7. What was Galton's problem and why is it important in comparative research? (p. 440)

8. What strengths or advantages are there to using a comparative method in social research?

9. In what ways is cross-national survey research different from survey research within one's own culture?

10. What is the importance of equivalence in H-C research, and what are the four types of equivalence?