US Sociology and Evaluation: Issues in the Relationship between Methodology and Theory
Nicoletta Stame
Abstract
Theory-based evaluation (TBE) emphasizes theoretical thinking to understand the dynamics between a program's inputs and outcomes, offering a more nuanced alternative to method-driven evaluations. It advocates methodological pluralism, ensuring that the evaluation method fits the specific problem. TBE is notable for the depth with which it analyzes not only whether outcomes occur, but also their relevance to different groups and contexts, and the reasons behind them. This approach evolved markedly during the 1960s in the United States, particularly with the Great Society programs, which required evaluations of social interventions addressing problems such as urban decay and racial discrimination. Key figures such as Ann Oakley advocated integrating qualitative insights into experimental methods, marking a shift of focus. The Bureau of Applied Social Research (BASR) significantly influenced TBE through its emphasis on middle-range theories and integrated research methods, although its full potential in large-scale evaluations was never completely realized. This article, revisited for this publication at the request of its editor-in-chief, highlights TBE's capacity to deal with complex social phenomena and its focus on understanding both the expected and the unexpected results of programs, moving beyond mere methodological rigor toward a broader understanding of social change.
Keywords
References
Banerjee, Abhijit, & Duflo, Esther. (2011). Poor economics: A radical rethinking of the way to fight global poverty. New York: PublicAffairs.
Befani, Barbara, & Mayne, John. (2014). Process tracing and contribution analysis: A combined approach to generative causal inference for impact evaluation. IDS Bulletin, 45(6), 17-36. http://dx.doi.org/10.1111/1759-5436.12110
Bennett, Charles A., & Lumsdaine, Arthur A. (1975). Evaluation and experiment. Cambridge: Academic Press.
Campbell Collaboration. (2001). Campbell systematic reviews: Guidelines for the preparation of review protocols. Retrieved August 18, 2006.
Campbell, Donald T. (1969). Reforms as experiments. American Psychologist, 24(4), 409-429. http://dx.doi.org/10.1037/h0027982
Campbell, Donald T. (1979). Assessing the impact of planned social change. Evaluation and Program Planning, 2(1), 67-90. http://dx.doi.org/10.1016/0149-7189(79)90048-X
Caro, Francis G. (Ed.). (1971). Readings in evaluation research. New York: Russell Sage Foundation.
Cartwright, Nancy, & Munro, Eileen. (2010). The limitations of randomized controlled trials in predicting effectiveness. Journal of Evaluation in Clinical Practice, 16(2), 260-266. PMid:20367845. http://dx.doi.org/10.1111/j.1365-2753.2010.01382.x
Chen, Huey T., & Rossi, Peter H. (1983). Evaluating with sense: The theory-driven approach. Evaluation Review, 7(3), 283-302. http://dx.doi.org/10.1177/0193841X8300700301
Coleman, James S. (1990). Foundations of social theory. Cambridge: Harvard University Press.
Coleman, James S., Katz, Elihu, & Menzel, Herbert. (1966). Medical innovation. Indianapolis: Bobbs-Merrill.
Connell, James P., & Kubisch, Anne C. (1995). Applying a theory of change approach to the evaluation of comprehensive community initiatives: Progress, prospects, and problems. In J. P. Connell, A. C. Kubisch, L. B. Schorr, & C. H. Weiss (Eds.), New approaches to evaluating community initiatives: Concepts, methods, and contexts (pp. 1-36). Washington: Aspen Institute.
Cook, Thomas D. (1997). Lessons learned in evaluation over the past 25 years. In E. Chelimsky & W. R. Shadish (Eds.), Evaluation for the 21st century (pp. 30-52). Thousand Oaks: Sage Publications. http://dx.doi.org/10.4135/9781483348896.n2
Deaton, Angus. (2020). Randomization in the tropics revisited: A theme and eleven variations (NBER Working Paper, No. 27600). Cambridge: National Bureau of Economic Research. http://dx.doi.org/10.3386/w27600
Guba, Egon G., & Lincoln, Yvonna S. (1987). The countenances of fourth generation evaluation. In D. Palumbo (Ed.), The politics of program evaluation (pp. 202-234). Beverly Hills: Sage Publications.
Hirschman, Albert O. (1995). A propensity for self-subversion. Cambridge: Harvard University Press.
Horowitz, Irving L. (1993). The decomposition of sociology. Oxford: Oxford University Press. http://dx.doi.org/10.1093/oso/9780195073164.001.0001
Hyman, Herbert, & Wright, Charles R. (1967). Evaluating social action programs. In P. F. Lazarsfeld (Ed.), The uses of sociology (pp. 741-782). New York: Basic Books.
Hyman, Herbert, Wright, Charles R., & Hopkins, Thomas H. (1962). Applications of methods of evaluation: Four studies of the encampment for citizenship. Berkeley: University of California Press. http://dx.doi.org/10.1525/9780520321908
International Initiative for Impact Evaluation – 3ie. Retrieved August 18, 2006, from https://www.3ieimpact.org/taxonomy/term/771
Lazarsfeld, Paul F. (1967). Introduction. In P. F. Lazarsfeld (Ed.), The uses of sociology. New York: Basic Books.
Leeuw, Frans L. (2003). Reconstructing program theories: methods available and problems to be solved. The American Journal of Evaluation, 24(1), 5-20. http://dx.doi.org/10.1177/109821400302400102
Lipsey, Mark W., Crosse, Scott, Dunkle, Julie, Pollard, John, & Stobart, Gordon. (1985). Evaluation: The state of the art and the sorry state of the science. In D. S. Cordray (Ed.), Utilizing prior research in evaluation planning, (New Directions for Program Evaluation, Vol. 27, pp. 7-28). Hoboken: Jossey-Bass.
Manski, Charles F., & Garfinkel, Irwin. (1992). Evaluating welfare and training programs. Cambridge: Harvard University Press.
Mark, Melvin, Henry, Gary H., & Julnes, George. (2000). Evaluation: An integrated framework for understanding, guiding, and improving public and nonprofit policies and programs. Hoboken: Jossey-Bass.
Martire, Francesco. (2006). Come Nasce e Come Cresce una Scuola Sociologica. Merton, Lazarsfeld e il Bureau. Acireale: Bonanno Editore.
Mayne, John. (2017). Theory of change analysis: building robust theories of change. The Canadian Journal of Program Evaluation, 32(2), 155-173. http://dx.doi.org/10.3138/cjpe.31122
Merton, Robert K. (1936). The unanticipated consequences of purposive social action. American Sociological Review, 1(6), 894-904. http://dx.doi.org/10.2307/2084615
Merton, Robert K. (1949). Social theory and social structure. New York: Free Press.
Merton, Robert K. (1968). On sociological theories of the middle range. In R. K. Merton (Ed.), Social theory and social structure (3rd ed., pp. 39-72). New York: Free Press.
Mulgan, Geoff. (2003). Government, knowledge and the business of policy-making. Canberra Bulletin of Public Administration, 108, 1-5.
Oakley, Ann. (1999, July). An infrastructure for assessing social and educational interventions: Same or different? (Background paper for meeting at the School of Public Policy). London: University of London.
Oakley, Ann. (2000). Experiments in knowing. New York: The New Press.
OECD-DAC (2010). Quality Standards for Development Cooperation. DAC Guidelines and Reference Series. http://www.oecd.org/dataoecd/55/0/44798177.pdf
Pawson, Ray. (1989). A measure for measures. London: Routledge.
Pawson, Ray. (2002). Evidence-based policy: The promise of realist synthesis. Evaluation, 8(3), 340-358. http://dx.doi.org/10.1177/135638902401462448
Pawson, Ray. (2004). Would Campbell be a member of the Campbell collaboration? The Evaluator, Winter, 13-15.
Pawson, Ray. (2006). Evidence-based policy: a realist perspective. Thousand Oaks: Sage Publications. http://dx.doi.org/10.4135/9781849209120
Pawson, Ray. (2010). Middle range theory and program theory evaluation: From provenance to practice. In J. Vaessen & F. L. Leeuw (Eds.), Mind the gap: Perspectives on policy evaluation and the social sciences (pp. 171-202). New Brunswick: Transaction Publishers.
Pawson, Ray, & Tilley, Nick. (1997). Realistic evaluation. Thousand Oaks: Sage Publications.
Perrin, Burt. (2005). How evaluation can help make knowledge management real. In R. C. Rist & N. Stame (Eds.), From studies to streams (pp. 23-45). New Brunswick: Transaction Publishers.
Riecken, H. W., Boruch, R. F., Campbell, D. T., Glennan, T. K., Pratt, J., Rees, A., & Williams, W. (1974). Social experimentation: A method for planning and evaluating social interventions. New York: Academic Press.
Rist, Ray C., & Stame, Nicoletta. (Eds.). (2006). From studies to streams: Managing evaluative systems. Piscataway: Transaction Publishers.
Rivlin, Alice. (1971). Systematic thinking for social action. Washington: The Brookings Institution.
Rossi, Peter H. (1969). Practice, method and theory in evaluating social-action programs. In J. L. Sundquist (Ed.), On fighting poverty. New York: Basic Books.
Rossi, Peter H. (2004). My views of evaluation and their origins. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences (pp. 122-131). Thousand Oaks: Sage Publications. http://dx.doi.org/10.4135/9781412984157.n7
Rossi, Peter H., Freeman, Howard E., & Wright, Sonia R. (1979). Evaluation: A systematic approach (1st ed.). Thousand Oaks: Sage Publications.
Schultze, Charles L. (1977). The public use of private interest. Washington: The Brookings Institution.
Stame, Nicoletta. (2004). Theory based evaluations and types of complexity. Evaluation, 10(1), 58-76. http://dx.doi.org/10.1177/1356389004043135
Stame, Nicoletta. (2010). US sociology and evaluation: Issues in the relationship between methodology and theory. In J. Vaessen & F. L. Leeuw (Eds.), Mind the gap: Perspectives on policy evaluation and the social sciences (Comparative Policy Evaluation, Vol. 16, pp. 29-44). New Brunswick: Transaction Publishers.
Stame, Nicoletta. (2014). Positive thinking approaches to evaluation and program perspectives. The Canadian Journal of Program Evaluation, 29(2), 67-86. http://dx.doi.org/10.3138/cjpe.29.2.67
Stern, Elliot, Stame, Nicoletta, Mayne, John, Forss, Kim, Davies, Rick, & Befani, Barbara. (2012). Broadening the range of designs and methods for impact evaluation (DFID Working Paper, No. 38, pp. 1-92). London: DFID. http://dx.doi.org/10.22163/fteval.2012.100
Stern, Elliot. (2005). Introduction. In E. Stern (ed.), Evaluation Research Methods (Vol. 1, pp. XXI-XLIII). Thousand Oaks: Sage Publications. http://dx.doi.org/10.4135/9781446261606
Stouffer, Samuel A. (1949). The American soldier. Princeton: Princeton University Press.
Suchman, Edward A. (1967). Evaluative research. New York: Russell Sage Foundation.
Vaessen, Jos, & Leeuw, Frans L. (Eds.). (2010). Mind the gap: Perspectives on policy evaluation and the social sciences. New Brunswick: Transaction Publishers.
Valters, Craig. (2014). Theories of change in international development: Communication, learning, or accountability? London: JSRP, Asia Foundation.
Weiss, Carol H. (1972). Evaluation research. Englewood Cliffs: Prentice Hall.
Weiss, Carol H. (1995). Nothing as practical as good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families. In J. P. Connell, A. C. Kubisch, L. B. Schorr, & C. H. Weiss (Eds.), New approaches to evaluating community initiatives (Vol. 1, pp. 65-92). Washington: The Aspen Institute.
Weiss, Carol H. (1997). Theory-based evaluation: Past, present, and future. In D. J. Rog (Ed.), Progress and future directions in evaluation (New Directions for Evaluation, Vol. 76, pp. 41-55). San Francisco: Jossey-Bass. http://dx.doi.org/10.1002/ev.1086
Weiss, Carol H. (2004). Rooting for evaluation: A cliff notes version of my work. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists’ views and influence (pp. 153-168). Thousand Oaks: Sage Publications. http://dx.doi.org/10.4135/9781412984157.n9
White, Howard. (2010). A contribution to current debates in impact evaluation. Evaluation, 16(2), 153-164. http://dx.doi.org/10.1177/1356389010361562
Wildavsky, Aaron. (1972). The self-evaluating organization. Public Administration Review, 32(5), 509-520. https://doi.org/10.2307/975158
Woolcock, Michael. (2023). International development: Navigating humanity's greatest challenge. Cambridge: Polity Press.
Submitted:
23/11/2023
Accepted:
27/11/2023