Syllabi

University of Ljubljana

The summary or overview of a course includes the course content and structure, the prerequisites, a brief description of the learning objectives, mandatory and/or voluntary course literature, and important examination information.

Prerequisites (knowledge of topic)

Substantive Background: Students taking this course should have a general familiarity with the types of data that can be obtained through survey research. While not absolutely required, it would be useful if students bring survey datasets from their own fields. Even if students do not bring their own data, the instructor will provide several survey datasets for course use.

 

Statistical Methods: Students in this course should be familiar with multiple regression analysis and comfortable with the process of employing regression models to analyze empirical data.

 

Computing: Students in this course should have some prior exposure to, and basic experience with, the R statistical computing environment. Specific packages and functions will, however, be introduced and explained in detail throughout the course.

 

Hardware

Students in this course should bring their own laptop computers to class so they can access the software required to carry out the analyses in course examples and exercises.

 

Software

This course will rely on the R statistical computing environment. Students should install the latest version of R on their computers before the first class session. While not absolutely required, it is strongly recommended that students also install RStudio. Doing so will make it much easier to interact with the R system in productive ways.

 

The course material will use several R packages. Students should install the optiscale, psych, mokken, and smacof packages before the first class session. Additional R packages will be made available and used throughout the course.
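For example, the packages named above can be installed in a single step from within R (this assumes a working internet connection and access to a CRAN mirror; interactive sessions may prompt for a mirror choice):

```r
# One-time setup: install the packages named in this syllabus.
install.packages(c("optiscale", "psych", "mokken", "smacof"))

# Load them once to confirm the installation succeeded.
library(optiscale)
library(psych)
library(mokken)
library(smacof)
```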

 

Course content

This course is aimed at demonstrating to students how to complete three critical tasks with survey data: (1) combine several survey items into a more reliable and powerful scale; (2) assess the dimensionality of a set of attitudes; and (3) produce geometric maps of attitudes and preferences, so that the fundamental structure of people’s beliefs can be more readily interpreted. More generally, this course is aimed at helping researchers better measure the phenomena they are interested in. Though researchers of all sorts recognize measurement as a fundamental and crucial step of the scientific process, the topic is rarely given formal attention in core graduate courses beyond a cursory treatment of the concepts of reliability and validity.

The course will cover a variety of strategies for producing quantitative (usually interval-level) variables from qualitative survey responses (which are usually believed to be measured at the nominal or ordinal level). We will begin with a discussion of measurement theory, giving detailed consideration to such concepts as measurement level and measurement accuracy. This will lead us to optimal scaling strategies for assigning numbers to objects. Following that, we will cover a variety of methods for combining multiple survey responses in order to produce higher-quality summary measures. These include: summated rating (or “Likert”) scales and reliability of measurement; principal components analysis; item response theory; factor analysis; multidimensional scaling; the vector model for profile data; and correspondence analysis. Each of these methods applies a measurement model to empirical data in order to generate a quantitative representation of the observations and survey items. The results provide new variables that can be employed as input to subsequent statistical models. These methods are not just “mere” measurement tools; in addition to quantifying observations, they often provide useful new insights about the systematic structure that exists within those observations. And, from a practical perspective, consideration of measurement theory and scaling methods can guide researchers to construct more powerful batteries of survey questions.
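As a small illustration of the first of these tasks, the sketch below builds a summated rating ("Likert") scale from simulated five-point items and estimates its reliability with the psych package (the data and variable names here are invented purely for illustration):

```r
library(psych)

set.seed(42)
n <- 200

# Simulate five 5-point Likert-type items driven by one common attitude.
attitude <- rnorm(n)
items <- sapply(1:5, function(i) {
  raw <- attitude + rnorm(n, sd = 1)
  cut(raw, breaks = quantile(raw, probs = seq(0, 1, 0.2)),
      labels = FALSE, include.lowest = TRUE)
})
colnames(items) <- paste0("item", 1:5)

# A summated rating scale is simply the sum of the item responses ...
scale_score <- rowSums(items)

# ... and psych::alpha() reports Cronbach's alpha as a reliability estimate.
psych::alpha(as.data.frame(items))
```

With real survey data, the same two steps apply: sum the items believed to tap a common construct, then check whether the reliability estimate justifies treating the sum as a single measure.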

 

Structure

On each class day, the morning session will be used to introduce new concepts, models, and techniques. Some of this discussion may extend into the afternoon sessions, but most of the afternoon time will be devoted to class exercises that give students an opportunity to apply the material discussed during the morning session.

 

Day 1

General introduction and basic concepts
Measurement theory
Optimal scaling
Summated rating scales (or, additive indexes)


 

Day 2

Reliability
Cumulative scales (or, Mokken scaling, IRT)


 

Day 3

Biplots
Principal components analysis


 

Day 4

Factor analysis (exploratory and confirmatory)
Multidimensional scaling


 

Day 5

More multidimensional scaling
Correspondence analysis


 

Literature

 

Mandatory

Unfortunately, there is no single textbook that covers all of the topics in this course. In addition, many of the texts that are available have certain drawbacks that limit their usefulness for our purposes: They tend to be very expensive; they usually assume a high level of mathematical sophistication; they often contain sections that are out of date. Because of these considerations, the required readings can be taken from two alternative sources: (1) The Sage series on Quantitative Applications in the Social Sciences (i.e., the “little green books”); or (2) chapters from The Wiley Handbook of Psychometric Testing, edited by Paul Irwing, Tom Booth, and David J. Hughes.

 

Sage QASS monographs:

Dunteman, George H. (1989) Principal Components Analysis.

Jacoby, William G. (1991) Data Theory and Dimensional Analysis.

Kim, Jae-On and Charles W. Mueller. (1978a) Introduction to Factor Analysis.

Kim, Jae-On and Charles W. Mueller. (1978b) Factor Analysis: Statistical Methods and Practical Issues.

Kruskal, Joseph B. and Myron Wish. (1978) Multidimensional Scaling.

McIver, John and Edward G. Carmines. (1981) Unidimensional Scaling.

Van Schuur, Wijbrandt. (2011) Ordinal Item Response Theory: Mokken Scale Analysis.

Weller, Susan C. and A. Kimball Romney. (1990) Metric Scaling: Correspondence Analysis.

Chapters from The Wiley Handbook of Psychometric Testing:

DeMars, Christine. “Classical Test Theory and Item Response Theory.”  

Hughes, David J. “Psychometric Validity: Establishing the Accuracy and Appropriateness of Psychometric Measures.”

Jacoby, William G. and David J. Ciuk. “Multidimensional Scaling: An Introduction.”

Jennrich, Robert J. “Rotation.”

Meijer, Rob R. and Jorge N. Tendeiro. “Unidimensional Item Response Theory.”

Mulaik, Stanley A. “Fundamentals of Common Factor Analysis.”

Revelle, William and David M. Condon. “Reliability.”

Timmerman, Marieke E.; Urbano Lorenzo-Seva; Eva Ceulemans. “The Number of Factors Problem.”

 

Supplementary / voluntary

Armstrong II, David A.; Ryan Bakker; Royce Carroll; Christopher Hare; Keith T. Poole; Howard Rosenthal. (2014) Analyzing Spatial Models of Choice and Judgment with R.

Bartholomew, David J.; Fiona Steele; Irini Moustaki; Jane I. Galbraith. (2008) Analysis of Multivariate Social Science Data (Second Edition).

Borg, Ingwer and Patrick Groenen. (2005) Modern Multidimensional Scaling: Theory and Applications (Second Edition).

Cudeck, Robert and Robert C. MacCallum, Editors. (2007) Factor Analysis at 100.

Lattin, James; J. Douglas Carroll; Paul E. Green. (2003) Analyzing Multivariate Data.

Mulaik, Stanley A. (2010) Foundations of Factor Analysis (Second Edition).

Wickens, Thomas D. (1995) The Geometry of Multivariate Statistics.

 

Mandatory readings before course start

None.

 

Examination part

Course participants will be evaluated on the basis of oral participation (20%) and a major homework exercise (80%). In the homework exercise, course participants will apply one or more of the techniques covered in the class to actual survey data. Ideally, students will have their own survey data drawn from their respective substantive fields; if not, the course instructor can provide survey data drawn from political science and sociological applications.

Prerequisites (knowledge of topic)
Comfortable familiarity with univariate differential and integral calculus, basic probability theory, and linear algebra is required. Students should have completed Ph.D.-level courses in introductory statistics, and in linear and generalized linear regression models (including logistic regression, etc.), up to the level of Regression III. Familiarity with discrete and continuous univariate probability distributions will be helpful.

Hardware
Students will be required to provide their own laptop computers.

Software
All analyses will be conducted using the R statistical software. R is free, open-source, and runs on all contemporary operating systems. The instructor will also offer support for students wishing to use Stata.

Learning objectives
Students will learn how to visualize, analyze, and conduct diagnostics on models for observational data that have both cross-sectional and temporal variation.

Course content

Analysts increasingly find themselves presented with data that vary both over cross-sectional units and across time. Such panel data provide unique and valuable opportunities to address substantive questions in the economic, social, and behavioral sciences. This course will begin with a discussion of the relevant dimensions of variation in such data, along with some of the challenges and opportunities those data present. It will then progress to linear models for one-way unit effects (fixed, between, and random), models for complex panel error structures, dynamic panel models, nonlinear models for discrete dependent variables, and models that leverage panel data to make causal inferences in observational contexts. Students will learn the statistical theory behind the various models, details about estimation and inference, and techniques for the visualization and substantive interpretation of their statistical results. Students will also develop statistical software skills for fitting and interpreting the models in question, and will use the models in both simulated and real data applications. Students will leave the course with a thorough understanding of both the theoretical and practical aspects of conducting analyses of panel data.
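To give a flavor of the one-way unit-effects material, here is a hedged sketch in R using the plm package (the companion software to the Croissant and Millo text listed under Literature) and its built-in Grunfeld investment data; it fits fixed- and random-effects models and compares them with a Hausman test:

```r
library(plm)

# Grunfeld investment data: 10 firms observed annually, 1935-1954.
data("Grunfeld", package = "plm")

# One-way unit-effects models for firm investment.
fe <- plm(inv ~ value + capital, data = Grunfeld,
          index = c("firm", "year"), model = "within")   # fixed effects
re <- plm(inv ~ value + capital, data = Grunfeld,
          index = c("firm", "year"), model = "random")   # random effects

summary(fe)

# Hausman test: do the fixed- and random-effects estimates differ
# systematically (evidence against the random-effects assumption)?
phtest(fe, re)
```

The between estimator mentioned above is available the same way via model = "between".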

Structure
Day One:
Morning:
•     (Very) Brief Review of Linear Regression
•     Overview of Panel Data: Visualization, Pooling, and Variation
•     Regression with Panel Data
Afternoon:
•     Unit Effects Models: Fixed-, Between-, and Random-Effects

Day Two:
Morning:
•     Dynamic Panel Data Models: The Instrumental Variables / Generalized Method of Moments Framework
Afternoon:
•     More Dynamic Models: Orthogonalization-Based Methods

Day Three:
Morning:
•     Unit-Effects and Dynamic Models for Discrete Dependent Variables
Afternoon:
•     GLMs for Panel Data: Generalized Estimating Equations (GEEs)

Day Four:
Morning:
•     Introduction to Causal Inference with Panel Data (Including Unit Effects)
Afternoon:
•     Models for Causal Inference: Differences-In-Differences, Synthetic Controls, and Other Methods

Day Five:
Morning:
•     Practical Issues: Model Selection, Specification, and Interpretation
Afternoon:
•     Course Examination

Literature

Mandatory
Hsiao, Cheng. 2014. Analysis of Panel Data, 3rd Ed. New York: Cambridge University Press.
OR
Croissant, Yves, and Giovanni Millo. 2018. Panel Data Econometrics with R. New York: Wiley.

Supplementary / voluntary
Abadie, Alberto. 2005. “Semiparametric Difference-in-Differences Estimators.” Review of Economic Studies 72:1-19.

Anderson, T. W., and C. Hsiao. 1981. “Estimation Of Dynamic Models With Error Components.” Journal of the American Statistical Association 76:598-606.

Antonakis, John, Samuel Bendahan, Philippe Jacquart, and Rafael Lalive. 2010. “On Making Causal Claims: A Review and Recommendations.” The Leadership Quarterly 21(6):1086-1120.

Arellano, M. and S. Bond. 1991. “Some Tests Of Specification For Panel Data: Monte Carlo Evidence And An Application To Employment Equations.” Review of Economic Studies 58:277-297.

Beck, Nathaniel, and Jonathan N. Katz. 1995. “What To Do (And Not To Do) With Time-Series Cross-Section Data.” American Political Science Review 89(September): 634-647.

Bliese, P. D., D. J. Schepker, S. M. Essman, and R. E. Ployhart. 2020. “Bridging Methodological Divides Between Macro- and Microresearch: Endogeneity and Methods for Panel Data.” Journal of Management, 46(1):70-99.

Clark, Tom S. and Drew A. Linzer. 2015. “Should I Use Fixed Or Random Effects?” Political Science Research and Methods 3(2):399-408.

Doudchenko, Nikolay, and Guido Imbens. 2016. “Balancing, Regression, Difference-In-Differences and Synthetic Control Methods: A Synthesis.” Working paper: Graduate School of Business, Stanford University.

Gaibulloev, K., Todd Sandler, and D. Sul. 2014. “Of Nickell Bias, Cross-Sectional Dependence, and Their Cures: Reply.” Political Analysis 22: 279-280.

Hill, T. D., A. P. Davis, J. M. Roos, and M. T. French. 2020. “Limitations of Fixed-Effects Models for Panel Data.” Sociological Perspectives 63:357-369.

Hu, F. B., J. Goldberg, D. Hedeker, B. R. Flay, and M. A. Pentz. 1998. “Comparison of population-averaged and subject-specific approaches for analyzing repeated binary outcomes.” American Journal of Epidemiology 147(7):694-703.

Imai, Kosuke, and In Song Kim. 2019. “When Should We Use Unit Fixed Effects Regression Models for Causal Inference with Longitudinal Data?” American Journal of Political Science 62:467-490.

Keele, Luke, and Nathan J. Kelly. 2006. “Dynamic Models for Dynamic Theories: The Ins and Outs of Lagged Dependent Variables.” Political Analysis 14(2):186-205.

Lancaster, Tony. 2002. “Orthogonal Parameters and Panel Data.” Review of Economic Studies 69:647-666.

Liu, Licheng, Ye Wang, Yiqing Xu. 2019. “A Practical Guide to Counterfactual Estimators for Causal Inference with Time-Series Cross-Sectional Data.” Working paper: Stanford University.

Mummolo, Jonathan, and Erik Peterson. 2018. “Improving the Interpretation of Fixed Effects Regression Results.” Political Science Research and Methods 6:829-835.

Neuhaus, J. M., and J. D. Kalbfleisch. 1998. “Between- and Within-Cluster Covariate Effects in the Analysis of Clustered Data.” Biometrics 54(2): 638-645.

Pickup, Mark and Vincent Hopkins. 2020. “Transformed-Likelihood Estimators for Dynamic Panel Models with a Very Small T.” Political Science Research & Methods, forthcoming.

Xu, Yiqing. 2017. “Generalized Synthetic Control Method: Causal Inference with Interactive Fixed Effects Models.” Political Analysis 25:57-76.

Zorn, Christopher. 2001. “Generalized Estimating Equation Models for Correlated Data: A Review with Applications.” American Journal of Political Science 45(April):470-90.

Mandatory readings before course start

Hsiao, Cheng. 2007. “Panel Data Analysis — Advantages and Challenges.” Test 16:1-22.

Examination part
Students will be evaluated on two written homework assignments that will be completed during the course (20% each) and a final examination (60%). Homework assignments will typically involve a combination of simulation-based exercises and “real data” analyses, and will be completed during the evenings while the class is in session. For the final examination, students will have two alternatives:

•    “In-Class”: Complete the final examination in the afternoon of the last day of class (from roughly noon until 6:00 p.m. local time), or

•    “Take-Home”: Complete the final examination during the week following the end of the course (due date: TBA).

Additional details about the final examination will be discussed in the morning session on the first day of the course.

Supplementary aids

The exam will be a “practical examination” (see below for content). Students will be allowed access to (and encouraged to reference) all course materials, notes, help files, and other documentation in completing their exam.

Examination content

The examination will involve the application of the techniques taught in the class to one or more “live” data example(s). These will typically take the form of either (a) a replication and extension of an existing published work, or (b) an original analysis of observational data with a panel / time-series cross-sectional component. Students will be required to specify, estimate, and interpret various statistical models, to conduct and present diagnostics and robustness checks, and to give detailed justifications for their choices.

Examination relevant literature
See above. Details of the examination literature will be finalized prior to the start of class.

Prerequisites (knowledge of topic)

The course is designed for Master's students, PhD students, and practitioners in the social and policy sciences, including political science, sociology, public policy, public administration, business, and economics. It is especially suitable for MA students in these fields who have an interest in carrying out research. Previous courses in research methods and philosophy of science are helpful but not required. Materials not in the books assigned for purchase and not easily available through online library databases will be made available electronically. Bringing a laptop to class will be helpful but is not essential.

Hardware
Laptop helpful but not required

Software
None

Course content
The central goal of the seminar is to enable students to create and critique methodologically sophisticated case study research designs in the social sciences. To do so, the seminar will explore the techniques, uses, strengths, and limitations of case study methods, while emphasizing the relationships among these methods, alternative methods, and contemporary debates in the philosophy of science. The research examples used to illustrate methodological issues will be drawn primarily from international relations and comparative politics. The methodological content of the course is also applicable, however, to the study of history, sociology, education, business, economics, and other social and behavioral sciences.

Course structure
The seminar will begin with a focus on the philosophy of science, theory construction, theory testing, causality, and causal inference. With this epistemological grounding, the seminar will then explore the core issues in case study research design, including methods of structured and focused comparisons of cases, typological theory, case selection, process tracing, and the use of counterfactual analysis. Next, the seminar will look at the epistemological assumptions, comparative strengths and weaknesses, and proper domain of case study methods and alternative methods, particularly statistical methods and formal modeling, and address ways of combining these methods in a single research project. The seminar then examines field research techniques, including archival research and interviews.

Course Assignments and Assessment
In addition to doing the reading and participating in course discussions, students will be required to present an outline of a research design orally, either in written form or in PowerPoint, in the final sessions of the class for constructive critique by fellow students and Professor Bennett. Students will then develop this outline into a research design paper of about 3,000 words (12 pages, double-spaced).

Presumably, students will choose to present the research design for their PhD or MA thesis, though students could also present a research design for a separate project, article, or edited volume. Research designs should address all of the following tasks (elaborated upon in the assigned readings and course sessions): 1) specification of the research problem and research objectives, in relation to the current stage of development and research needs of the relevant research program, related literatures, and alternative explanations; 2) specification of the independent and dependent variables of the main hypothesis of interest and alternative hypotheses; 3) selection of a historical case or cases that are appropriate in light of the first two tasks, and justification of why these cases were selected and others were not; 4) consideration of how variance in the variables can best be described for testing and/or refining existing theories; 5) specification of the data requirements, including both process tracing data and measurements of the independent and dependent variables for the main hypotheses of interest, including alternative explanations.

Students will be assessed on how well their research design achieves these tasks, and on how useful their suggestions are on other students’ research designs. Students will also be assessed on the general quality of their contributions to class discussions.

Literature

Mandatory:
Assigned Readings for GSERM Case Study Methods Course

Andrew Bennett, Georgetown University

Students should obtain and read these books in advance of the course (see below for specific page assignments):
•Alexander L. George and Andrew Bennett, Case Studies and Theory Development in the Social Sciences (MIT Press 2005).
•Henry Brady and David Collier, Rethinking Social Inquiry (second edition, 2010)
•Gary Goertz, Social Science Concepts: A User’s Guide (Princeton, 2005).
•Andrew Bennett and Jeffrey Checkel, eds., Process Tracing: From Metaphor to Analytic Tool (Cambridge University Press, 2014).
•Gary King, Robert Keohane, and Sidney Verba, Designing Social Inquiry (Princeton University Press, 1994).

Lecture 1: Inferences About Causal Effects and Causal Mechanisms
This lecture addresses the philosophy of science issues relevant to case study research.
Readings:
•Alexander L. George and Andrew Bennett, Case Studies and Theory Development, preface and chapter 7, pages 127-150.
•King, Keohane, and Verba, Designing Social Inquiry pp. 3-33, 76-91, 99-114.

Lecture 2: Critiques and Justifications of Case Study Methods
Readings:
•Gary King, Robert Keohane, and Sidney Verba, Designing Social Inquiry, pp. 46-48, 118-121, 208-230.
•Brady and Collier, Rethinking Social Inquiry, 1-64, 123-201 (or if you have the first edition, pages 3-20, 36-50, 195-266)
•George and Bennett, Case Studies and Theory Development, Chapter 1, pages 3-36.

Lecture 3: Concept Formation and Measurement
Readings:
•Gary Goertz, Social Science Concepts, chapters 1, 2, 3, and 9, pages 1-94, 237-268.
•Gary Goertz, Exercises, available at

http://press.princeton.edu/releases/m8089.pdf

Please think through the following exercises: 7, 21, 48, 49, 52, 163, 252, 253, 256, 257.

Lecture 4: Designs for Single and Comparative Case Studies
Readings:
•George and Bennett, Case Studies and Theory Development, chapter 4, pages 73-88.
•Jason Seawright and John Gerring, "Case Selection Techniques in Case Study Research," Political Research Quarterly, June 2008. Available at: http://blogs.bu.edu/jgerring/files/2013/06/CaseSelection.pdf

Lecture 5: Typological Theory, Fuzzy Set Analysis
Readings:
•George and Bennett, Case Studies and Theory Development chapter 11, pages 233-262.
•Excerpt from Andrew Bennett, "Causal mechanisms and typological theories in the study of civil conflict," in Jeff Checkel, ed., Transnational Dynamics of Civil War, Columbia University Press, 2012.
•Charles Ragin, "From Fuzzy Sets to Crisp Truth Tables," available at:
http://www.compasss.org/files/WPfiles/Raginfztt_April05.pdf

Lecture 6: Process Tracing, Congruence Testing, and Counterfactual Analysis
Readings:
•Andrew Bennett and Jeff Checkel, Process Tracing, chapter 1, conclusions, and appendix on Bayesianism.
•David Collier, online process tracing exercises. Look at exercises 3, 4, 7, and 8 at:

http://polisci.berkeley.edu/sites/default/files/people/u3827/Teaching%20Process%20Tracing.pdf

Lecture 7: Multimethod Research: Combining Case Studies with Statistics and/or Formal Modeling
Readings:
•Andrew Bennett and Bear Braumoeller, "Where the Model Frequently Meets the Road: Combining Statistical, Formal, and Case Study Methods," draft paper.
•Evan Lieberman, "Nested Analysis as a Mixed-Method Strategy for Comparative Research," American Political Science Review August 2005, pp. 435-52.

Lecture 8: Field Research Techniques: Archives, Interviews, and Surveys
Readings:
•Cameron Thies, "A Pragmatic Guide to Qualitative Historical Analysis in the Study of International Relations," International Studies Perspectives 3 (4) (November 2002) pp. 351-72.

Lecture 9 & 10: Student research design presentations
Read and be ready to constructively critique your fellow students’ research designs.

Supplementary / voluntary:
The following readings are useful for students interested in exploring the topic further, but they are not required:

I) Philosophy of Science and Epistemological Issues

Henry Brady, "Causation and Explanation in Social Science," in Janet Box-Steffensmeier, Henry Brady, and David Collier, eds., Oxford Handbook of Political Methodology (Oxford, 2008) pp. 217-270.

II) Case Study Methods

George and Bennett, Case Studies and Theory Development, Chapter 1.

Gerardo Munck, "Canons of Research Design in Qualitative Analysis," Studies in Comparative International Development, Fall 1998.

Timothy McKeown, "Case Studies and the Statistical World View," International Organization Vol. 53, No. 1 (Winter, 1999) pp. 161-190.

III) Concept Formation and Measurement

John Gerring, "What Makes a Concept Good?," Polity Spring 1999: 357-93.

Robert Adcock and David Collier, "Measurement Validity: A Shared Standard for Qualitative and Quantitative Research," APSR Vol. 95, No. 3 (September, 2001) pp. 529-546.

Robert Adcock and David Collier, "Democracy and Dichotomies," Annual Review of Political Science, Vol. 2, 1999, pp. 537-565.

David Collier and Steven Levitsky, "Democracy with Adjectives: Conceptual Innovation in Comparative Research," World Politics, Vol. 49, No. 3 (April 1997) pp. 430-451.

David Collier, "Data, Field Work, and Extracting New Ideas at Close Range," APSA-CP Newsletter, Winter 1999, pp. 1-6.

Gerardo Munck and Jay Verkuilen, "Conceptualizing and Measuring Democracy: Evaluating Alternative Indices," Comparative Political Studies Feb. 2002, pp. 5-34.

IV) Designs for Single and Comparative Case Studies and Alternative Research Goals

Aaron Rapport, "Hard Thinking about Hard and Easy Cases in Security Studies," Security Studies 24:3 (2015), pp. 431-465.

Van Evera, Guide to Methodology, pp. 77-88.

Richard Nielsen, "Case Selection via Matching," Sociological Methods and Research (forthcoming).

V) Typological Theory and Case Selection

Colin Elman, "Explanatory Typologies and Property Space in Qualitative Studies of International Politics," International Organization, Spring 2005, pp. 293-326.

Gary Goertz and James Mahoney, "Negative Case Selection: The Possibility Principle," in Goertz, chapter 7.

David Collier, Jody LaPorte, and Jason Seawright, "Putting Typologies to Work: Concept Formation, Measurement, and Analytic Rigor," Political Research Quarterly, 2012.

VI) Process Tracing

Tasha Fairfield and Andrew Charman, 2015 APSA paper on Bayesian process tracing.

David Waldner, "Process Tracing and Causal Mechanisms." In Harold Kincaid, ed., The Oxford Handbook of Philosophy of Social Science (Oxford University Press, 2012), pp. 65‐84.

Gary Goertz and Jack Levy, "Causal Explanation, Necessary Conditions, and Case Studies: The Causes of World War I," manuscript, Dec. 2002.

VII) Counterfactual Analysis, Natural Experiments

Jack Levy, paper in Security Studies on counterfactual analysis.

Thad Dunning, "Design-Based Inference: Beyond the Pitfalls of Regression Analysis?" in Brady and Collier, pp. 273-312.

Thad Dunning, Natural Experiments in the Social Sciences: A Design‐Based Approach (Cambridge University Press, 2012), Chapters 1 and 7.

Philip Tetlock and Aaron Belkin, eds., Counterfactual Thought Experiments, chapters 1, 12.

VIII) Multimethod Research: Combining Case Studies with Statistics and/or Formal Modeling

David Dessler, "Beyond Correlations: Toward a Causal Theory of War," International Studies Quarterly vol. 35 no. 3 (September, 1991), pp. 337-355.

Alexander George and Andrew Bennett, Case Studies and Theory Development, Chapter 2.

James Mahoney, "Nominal, Ordinal, and Narrative Appraisal in Macro-Causal Analysis," American Journal of Sociology, Vol. 104, No. 3 (January 1999).

IX) Field Research Techniques: Archives, Interviews, and Surveys

Diana Kapiszewski, Lauren M. MacLean, and Benjamin L. Read, "Field Research in Political Science: Practices and Principles," chapter 1 in Field Research in Political Science: Practices and Principles (Cambridge University Press). Read pages 15-33.

Diana Kapiszewski, Lauren M. MacLean, and Benjamin L. Read, "Interviews, Oral Histories, and Focus Groups," in Field Research in Political Science: Practices and Principles (Cambridge University Press).

Elisabeth Jean Wood, "Field Research," in Carles Boix and Susan Stokes, eds., Oxford Handbook of Comparative Politics, Oxford University Press 2007, pp. 123-146.

Soledad Loaeza, Randy Stevenson, and Devra C. Moehler. 2005. "Symposium: Should Everyone Do Fieldwork?" APSA-CP 16(2): 8-18.

Layna Mosley, ed., Interview Research in Political Science, Cornell University Press, 2013.

Hope Harrison, "Inside the SED Archives," CWIHP Bulletin

Ian Lustick, "History, Historiography, and Political Science: Multiple Historical Records and the Problem of Selection Bias," APSR September 1996, pp. 605-618.

Symposium on interview methods in political science in PS: Political Science and Politics (December, 2002), articles by Beth Leech ("Asking Questions: Techniques for Semistructured Interviews"), Kenneth Goldstein ("Getting in the Door: Sampling and Completing Elite Interviews"), Joel Aberbach and Bert Rockman ("Conducting and Coding Elite Interviews"), Laura Woliver ("Ethical Dilemmas in Personal Interviewing"), and Jeffrey Berry ("Validity and Reliability Issues in Elite Interviewing"), pp. 665-682.

Diana Kapiszewski, Lauren M. MacLean, and Benjamin L. Read, "A Historical and Empirical Overview of Field Research in the Discipline," Chapter 2 in Field Research in Political Science: Practices and Principles (Cambridge University Press, forthcoming).

Mandatory readings before course start:
It is advisable to do as much of the mandatory reading as possible before the course starts.

Prerequisites (knowledge of topic)
Participants should have a basic working knowledge of the principles and practice of multiple regression and elementary statistical inference. No knowledge of matrix algebra is required or assumed, nor is matrix algebra ever used in the course.

Hardware
Participants are strongly encouraged to bring their own laptops (Mac or Windows).

Software
Computer applications will focus on the use of OLS regression and the PROCESS macro for SPSS and SAS developed by Andrew F. Hayes (processmacro.org), which makes the analyses described in this class much easier than they otherwise would be. Because this is a hands-on course, participants are strongly encouraged to bring their own laptops (Mac or Windows) with a recent version of SPSS Statistics (version 19 or later) or SAS (release 9.2 or later) installed. SPSS users should ensure their installed copy is patched to its latest release. SAS users should ensure that the IML product is part of the installation. R and Stata users can benefit from the course content, but PROCESS is not available for R or Stata.

Course content
Statistical mediation and moderation analyses are among the most widely used data analysis techniques in social science, health, and business fields. Mediation analysis is used to test hypotheses about various intervening mechanisms by which causal effects operate. Moderation analysis is used to examine and explore questions about the contingencies or conditions of an effect, also called “interaction”. Increasingly, moderation and mediation are being integrated analytically in the form of what has become known as “conditional process analysis,” used when the goal is to understand the contingencies or conditions under which mechanisms operate. An understanding of the fundamentals of mediation and moderation analysis is in the job description of almost any empirical scholar. In this course, you will learn about the underlying principles and the practical applications of these methods using ordinary least squares (OLS) regression analysis and the PROCESS macro for SPSS and SAS.
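Although the course itself works through the PROCESS macro in SPSS and SAS, the core logic of a simple mediation model can be sketched with plain OLS regression. The R code below (data, effect sizes, and variable names are invented here purely for illustration) estimates the indirect effect as the product of two regression coefficients and bootstraps a confidence interval for it, in the spirit of what PROCESS automates:

```r
set.seed(1)
n <- 300

# Simulated simple mediation model: x -> m -> y, plus a direct path x -> y.
x <- rnorm(n)
m <- 0.5 * x + rnorm(n)
y <- 0.4 * m + 0.2 * x + rnorm(n)

# Product-of-coefficients estimate of the indirect effect a*b:
# a = effect of x on m; b = effect of m on y, controlling for x.
a <- coef(lm(m ~ x))["x"]
b <- coef(lm(y ~ m + x))["m"]
indirect <- a * b

# Percentile bootstrap confidence interval for the indirect effect.
boot_ab <- replicate(2000, {
  i <- sample(n, replace = TRUE)
  coef(lm(m[i] ~ x[i]))[2] * coef(lm(y[i] ~ m[i] + x[i]))[2]
})
c(indirect = unname(indirect), quantile(boot_ab, c(0.025, 0.975)))
```

If the bootstrap interval excludes zero, the data are consistent with mediation of the x-to-y effect through m.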

Topics covered in this five-day course include:

  • Path analysis: Direct, indirect, and total effects in mediation models
  • Estimation and inference about indirect effects in single mediator models
  • Models with multiple mediators
  • Mediation analysis in the two-condition within-subject design
  • Estimation of moderation and conditional effects
  • Probing and visualizing interactions
  • Conditional process analysis (also known as “moderated mediation”)
  • Quantification of and inference about conditional indirect effects
  • Testing a moderated mediation hypothesis and comparing conditional indirect effects
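The idea behind probing an interaction can likewise be sketched in miniature (again in Python with simulated data, not the PROCESS output format): in the model Y = b0 + b1·X + b2·W + b3·X·W, the conditional effect of X at a chosen value of the moderator W is b1 + b3·W.

```python
# Illustrative sketch only: a moderation model Y = b0 + b1*X + b2*W + b3*X*W
# fit by OLS on simulated data. "Probing" the interaction means evaluating
# the conditional effect of X (b1 + b3*W) at chosen values of the moderator W.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                                    # focal predictor X
w = rng.normal(size=n)                                    # moderator W
y = 0.2 * x + 0.1 * w + 0.5 * x * w + rng.normal(size=n)  # true b1=0.2, b3=0.5

design = np.column_stack([np.ones(n), x, w, x * w])
b0, b1, b2, b3 = np.linalg.lstsq(design, y, rcond=None)[0]

# Conditional effect of X at the moderator one SD below, at, and above its mean
for w_val in (-1.0, 0.0, 1.0):
    print(f"W = {w_val:+.1f}: effect of X = {b1 + b3 * w_val:.2f}")
```

Because the true coefficients here are 0.2 and 0.5, the probed effects should come out near -0.3, 0.2, and 0.7; PROCESS reports the same quantities together with standard errors and significance tests.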

As an introductory-level course, we focus primarily on research designs that are experimental or cross-sectional in nature with continuous outcomes. We do not cover complex models involving dichotomous outcomes, latent variables, models with more than two repeated measures, nested data (i.e., multilevel models), or the use of structural equation modeling.

This course will be helpful for researchers in any field—including psychology, sociology, education, business, human development, political science, public health, communication—and others who want to learn how to apply the latest methods in moderation and mediation analysis using readily-available software packages such as SPSS and SAS.

Structure
The schedule for the course will be determined in part by students' previous experience and existing familiarity with mediation and moderation. The schedule below is a rough approximation.

Day 1

  • Path analysis: Direct, indirect, and total effects in mediation models
  • Estimation and inference about indirect effects in single mediator models


Day 2

  • Models with multiple mediators
  • Mediation analysis in the two-condition within-subject design


Day 3

  • Estimation of moderation and conditional effects
  • Probing and visualizing interactions
  • Moderation analysis in the two-condition within-subject design


Days 4 & 5

  • Estimation of conditional process models (also known as “moderated mediation”)
  • Quantification of and inference about conditional indirect effects
  • Testing a moderated mediation hypothesis and comparing conditional indirect effects


Literature
This course is a companion to Andrew Hayes’s book Introduction to Mediation, Moderation, and Conditional Process Analysis (IMMCPA), published by The Guilford Press. The content of the course overlaps with the book to some extent, but many of the examples are different, and the course includes material not in the first edition of the book. A copy of the book is not required to benefit from the course, but it can help reinforce understanding.

Mandatory:
No materials are mandatory, but students will benefit greatly from reading Andrew Hayes’s book Introduction to Mediation, Moderation, and Conditional Process Analysis (IMMCPA), published by The Guilford Press.

Supplementary / voluntary:
Hayes, A. F. Introduction to Mediation, Moderation, and Conditional Process Analysis (IMMCPA). The Guilford Press.

Montoya, A. K., & Hayes, A. F. (2017). Two-condition within-participant statistical mediation analysis: A path-analytic framework. Psychological Methods, 22(1), 6-27.

Hayes, A. F. (2015). An index and test of linear moderated mediation. Multivariate Behavioral Research, 50, 1-22.

Mandatory readings before course start:
N/A

Examination part
100% of the assessment will be based on a written final examination at the end of the course. The exam will combine multiple-choice questions, short-answer/fill-in-the-blank questions, and some interpretation of computer output. Students will take the examination home on the last day of class and return it to the instructor within one week.
During the examination, students will be allowed to use all course materials, such as PDFs of the PowerPoint slides, notes taken during class, and any other materials distributed in class or generated by students. Although the book mentioned under “Literature” is neither a requirement of the course nor necessary to complete the exam, students may use it during the exam if desired.


A computer is not required during the exam, though students may use one if desired, for example as a storage and display device for class notes.

Examination content
Topics on the exam may include how to quantify and interpret path analysis models, calculate direct, indirect, and total effects, and determine whether evidence of a mediation effect exists in a data set based on computer output or other information provided. Also covered will be testing moderation of an effect, interpreting evidence of interaction, and probing interactions. Students will be asked to generate or interpret conditional indirect effects from computer output given to them and/or determine whether an indirect effect is moderated. Students may also be asked to construct computer commands that will conduct certain analyses. All questions will come from the content listed under “Course content” above.


Literature
Although the book mentioned under “Literature” is not a requirement of the course, nor is it necessary to complete the assignments, students may use the book if desired.

Qualitative Research Methods and Data Analysis presents strategies for analyzing and making sense of qualitative data. Both descriptive and interpretive qualitative studies will be discussed, as will more defined qualitative approaches such as grounded theory, narrative analysis, and case studies. The course will briefly cover research design and data collection strategies but will largely focus on analysis. In particular, we will consider how researchers develop codes and integrate memo writing into a larger analytic process. The purpose of coding is to provide a focus to qualitative analysis; it is critical to have a handle on coding practices as you move deeper into analysis. The course will present coding and memo writing as concurrent tasks that occur during an active review of interviews, documents, focus groups, and/or multi‑media data. We will discuss deductive and inductive coding and how a codebook evolves, that is, how codes might “emerge” and shift during analysis. Managing codes includes developing code hierarchies, identifying code “constellations,” and building multidimensional themes.

The class will present memo writing as a strategy for capturing analytical thinking, inscribed meaning, and cumulative evidence for condensed meanings. Memos can also resemble early writing for reports, articles, chapters, and other forms of presentation. Researchers can also mine memos for codes and use memos to build evocative themes and theory. Coding and memo writing are discussed in the context of data-driven qualitative research beginning with design and moving toward presentation of findings. The course will also discuss using visual tools in analysis, such as diagramming core quotations from data to holistically present the participant’s key narratives. Visual tools can also assist in looking horizontally across many transcripts to identify connective themes and link the parts to the whole.

Software
We will spend one day learning a qualitative analysis software package:
  • GSERM St. Gallen: ATLAS.ti
  • GSERM Ljubljana: NVivo

If the course is held in a remote format, we will work with MAXQDA.

The methods discussed in the course will be applicable to qualitative studies in a range of fields, including the behavioral sciences, social sciences, health sciences, communications, and business.

Structure
Day 1

  • Core Principles and Practices in Qualitative Data Inquiry
  • Qualitative Research Design: An Overview
  • Data types
  • Comparative strategies
  • Qualitative sampling
  • Triangulation


Analysis Task 1: Memo Writing

  • Document summary memos
  • Key-quote memos
  • Methods memos


Day 2

Analysis Task 2: Using Visual Tools

  • Episode profiles
  • Making sense of data using diagrams
  • Working with core quotations

Analysis Task 3: Coding Qualitative Data

  • Descriptive coding
  • Interpretive coding
  • Strategies for coding
  • Line‑by‑line coding
  • Creating a codebook

 
Day 3

  • Introduction to Qualitative Software: MAXQDA (see information at “Software”)

a. Overview
b. Beginning a project
c. Writing comments and memos
d. Coding data

  • Hands‑on Exercises Using MAXQDA
  • Analysis in MAXQDA
  • Exploring codes and memos in queries
  • Matrices and diagrams
  • Blending quantitative and qualitative data


Day 4

  • Methodological Traditions

a. Grounded theory
b. Narrative analysis
c. Case study
d. Pragmatic qualitative analysis


Day 5

  • Qualitative Research Design: Revisiting Strategies
  • Data collection considerations
    • Interviews
    • Focus groups
    • Other types of data
  • Developing interviewing skills
  • Evaluating qualitative articles
  • Class discussion


Suggested Reading (Articles)
Electronic version of these articles will be provided to registered participants:

Ahlsen, Birgitte, et al. 2013. “(Un)doing Gender in a Rehabilitation Context: A Narrative Analysis of Gender and Self in Stories of Chronic Muscle Pain.” Disability and Rehabilitation 1‑8.

Charmaz, Kathy. 1999. “Stories of Suffering: Subjective Tales and Research Narratives.” Qualitative Health Research 9:362‑82.

Sandelowski, Margarete. 2000. “Whatever Happened to Qualitative Description?” Research in Nursing and Health 23:334‑40.

Rouch, Gareth, et al. 2010. “Public, Private and Personal: Qualitative Research on Policymakers’ Opinions on Smokefree Interventions to Protect Children in ‘Private’ Spaces.” BMC Public Health 10:797‑807.

Suggested Reading (Books)
Charmaz, Kathy. 2006. Constructing Grounded Theory: A Practical Guide through Qualitative Analysis. Sage.

Marshall, Catherine, and Gretchen B. Rossman. 2006. Designing Qualitative Research. 4th ed. Sage.

Yin, Robert. 2013. Case Study Research: Design and Methods. Sage.

Examination
Participants will be asked to read several interviews or journal entries and generate a preliminary analysis of the data using techniques discussed during the course. This examination will be due three weeks after the course ends.

Examination content
Students will have to demonstrate familiarity with the differences between grounded theory, narrative analysis, case study, and pragmatic analysis. The assignment will require them to choose one of these approaches to design a study and analyze several documents provided by the instructor. Their preliminary analysis will include memos, a codebook, diagrams, early findings, and reflection on next steps.

Prerequisites (knowledge of topic)
Basic knowledge of descriptive statistics, data analysis, and R is useful but not necessary. Participants need to bring their own laptop and follow the detailed installation instructions for R and RStudio (both open-source software) that will be shared prior to the course.

Learning objectives
The creation and communication of data visualizations is a critical step in any data analytic project. Modern open-source software packages offer ever more powerful data visualization tools. Applied with psychological and design principles in mind, these tools allow competent users to produce data visualizations that are worth more than a thousand words. In this course, participants learn how to employ state-of-the-art data visualization tools within the programming language R to create stunning, publication-ready data visualizations that communicate critical insights about data. Prior to, during, and after the course, participants work on their own data visualization project.

Course content
Each day will contain a series of short lectures and demonstrations that introduce and discuss new topics. The bulk of each day will be dedicated to hands-on, step-by-step exercises to help participants ‘learn by doing’. In these exercises, participants will learn how to read in and prepare data, how to create various types of static and interactive data visualizations, how to tweak them to exactly fit their needs, and how to embed them in digital reports. Alongside the course, each participant will work on their own data visualization project, turning an initial visualization sketch into a one-page academic paper featuring a polished, well-designed figure. To advance these projects, participants will be able to draw on support from the instructors in the afternoons of course days two to four.

Structure
Day 1
Morning: Cognitive and design principles of good data visualizations
Afternoon: Introduction to R

Day 2
Morning: Reading in, organizing, and transforming data
Afternoon: Project sketch pitches

Day 3
Morning: Creating plots using the grammar of graphics
Afternoon: Visualizing statistical uncertainty, facets, networks, and maps

Day 4
Morning: Styling and exporting plots
Afternoon: Making visualizations interactive

Day 5
Morning: Reporting visualizations using Markdown
Afternoon: Final presentation and competition

Literature
Voluntary readings:
Knaflic, C. N. (2015). Storytelling with data: A data visualization guide for business professionals. John Wiley & Sons.
Healy, K. (2018). Data visualization: a practical introduction. Princeton University Press.

Examination part
The course grade is determined based on the quality of the initial project sketch (20%), the data visualization produced during the course (40%), and the one-page paper submitted after the course (40%).