
Prof. James E. Grunig: The Nature of Conceptualization in Public Relations



Conceptualizing Quantitative Research in Public Relations. Part 2

Conceptualization is the process of thinking logically and systematically about concepts, definitions, measures, and the relationships among them.

First, researchers begin to conceptualize when they isolate and describe problems—both theoretical and applied—that are worthy of study. Second, they think logically about how to solve the problem by identifying a concept, which usually is called a dependent variable in quantitative research, whose presence or absence defines the problem. Third, they identify independent variables that have a logical effect on the dependent variable and that can be changed to affect it—thus solving the problem. Most of this conceptualization takes place at an abstract, or theoretical, level. If researchers do not think theoretically before they measure something, their measurements usually turn out to have little or no value—other than measurement for its own sake.

Public relations people can apply the same kind of rigor to practice. They need to define problems, identify variables that can be changed to solve the problem, change these independent variables, and then measure to determine if the dependent variable has changed and the problem has been solved. Practitioners tend to behave historically rather than scientifically, however. They do what they have always done—or what others in their organization have done. As a result, they usually cannot explain why they do what they do or what effect it has when a skeptical top manager or client asks. Furthermore, they have difficulty working with professional researchers (who usually do not understand public relations) who need to have logically related variables to measure before they conduct either formative or evaluative research.

This is not to say that public relations practitioners do not have a theory. Nearly every human being can construct an explanation for his or her behavior if asked. The difference between a scientist (or other kind of scholar) and a layperson is that the scientist has systematically developed his or her conceptualization. In public relations, practitioner theories often include concepts such as image, reputation, brand, relationships, and issues. The word “management” is then attached to these concepts (such as reputation management) to suggest  that the dependent variable (reputation) can be changed (managed). However, dependent variables can seldom be changed directly because they are outcomes of behaviors or processes (independent variables) that can be changed. Thus, we can manage the behaviors and processes that result in a reputation, for example; but we cannot manage the reputation.

The Conceptual Process

Theorists commonly use the diagram in Figure 1 to explain the process of conceptualization. The two levels of this diagram describe two kinds of thinking. The top level describes theoretical or conceptual thinking. The bottom level describes operational thinking—applying measures to concepts.


Figure 1. The process of conceptualization.

At the conceptual level, a theory usually consists of at least two concepts, which are usually called “variables” in quantitative research (but not necessarily so in qualitative research). We simplify the theory here by including only two concepts, although there can be more than two related independent or dependent variables in a theory. The theorist links these two concepts by stating a logical expected relationship —a theoretical proposition. The relationship can be causal but it does not have to be. The theorist only needs to say that the dependent variable is affected by (depends on) the independent variable.

The independent variable is independent of the other variable in the theory. It can be affected from outside the theory, however, by intervention of a researcher (such as in an experiment) or by a practitioner (who can engage in an activity that eventually affects the dependent variable). For example, in public relations the dependent variable might be reputation. The independent variable might consist of messages sent by public relations practitioners   (a common assumption in public relations thinking) or it might consist of management behaviors (which I believe is more likely).

When one “manages” within this framework, he or she “directs, controls, or carries on” (New Webster’s Dictionary) the independent variable. If one could “direct, control, or carry on” the dependent variable directly, it would not be a dependent variable. To continue with the example of reputation, a practitioner cannot manage reputation; he or she can manage only the messages or the management behaviors that have an expected theoretical relationship with reputation.

We move to the operational level of the diagram when we do research to test the theoretical proposition we developed at the conceptual level. Theories cannot be tested directly because they are abstract. To measure the concepts that are included in a theory, we have to operationalize them—specify how we can observe or measure the concept. For example, we can observe messages by counting the number or types of messages sent out. We can observe management behavior by classifying, describing, or counting what management did. Reputation has been measured by analyzing media coverage, by asking people what they think about the organization, or by asking financial analysts and CEOs to complete a survey of their attitudes.
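To make the idea of operationalization concrete, here is a minimal sketch in Python. The data, function names, and scoring scheme are all hypothetical illustrations, not measures the author prescribes: one function operationalizes "communication activity" as a message count, and another operationalizes "reputation" as a mean survey rating.

```python
def message_volume(messages):
    """Operationalize 'communication activity' as a count of messages sent.

    This is only one possible operational definition; classifying message
    types would be another.
    """
    return len(messages)

def mean_reputation_score(survey_responses):
    """Operationalize 'reputation' as the mean of 1-7 survey ratings."""
    return sum(survey_responses) / len(survey_responses)

# Hypothetical observations.
messages = ["press release", "tweet", "newsletter"]
ratings = [5, 6, 4, 7, 5]  # hypothetical 1-7 favorability ratings

print(message_volume(messages))        # 3
print(mean_reputation_score(ratings))  # 5.4
```

The point of the sketch is that each abstract concept maps to a specific, repeatable observation procedure, and that several such procedures can "cover" the same concept.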

We “explicate” a concept when we derive operational definitions of the concept. The operational definitions must be carefully thought through and be logically related to the concept. There may be several related operational definitions of the same concept, however, because the concept is more abstract than the definition. The concept must “cover” the operational definition. Philosophers of science call these explications “covering laws.”

We call the relationship between operational definitions a “hypothesis.” It must parallel the theoretical proposition at the conceptual level so that we can test that proposition by measuring empirically whether the operational definition of the independent variable affects the operational definition of the dependent variable in the way predicted by the proposition. Because the proposition is always more abstract than the hypothesis, many hypotheses can be developed to test the proposition. That is why scientists say you can never prove a theory by confirming a single hypothesis. You can support a theory by confirming a single hypothesis, but you must confirm many hypotheses over time before the weight of the evidence allows researchers to conclude that the theory is a good one —”good” meaning that managing the independent variable more often than not solves the problem by changing the dependent variable.

For research to be done well, the thinking and measuring processes that occur in the boxes and the relationships between the boxes of Figure 1 must meet high logical and empirical standards. First, theoretical variables must be defined well so that they have a single clear meaning. Chaffee (1996) called this “the disciplined use of words” (p. 20). He emphasized that the name we choose for a concept is crucial. Other scholars and practitioners should be able to grasp the meaning of the concept when they hear the name. It should contain the “essential features of an idea” (p. 21). If the name conveys different meanings to different people, it is not a good concept. Unfortunately, that is the case with popular public relations terms such as image and reputation. We determine the validity of research by comparing the operational definitions with the concepts they measure. We ask, does the operational definition “represent the concept as we have defined it . . . does the hypothesis represent a test of the theory?” (p. 24).

In addition to the conceptual processes described in Figure 1, the unit of analysis of a theory has particular relevance to theorizing in public relations. As Chaffee put it:

A basic question that can be surprisingly tricky is, for what class of entities does this concept vary? Is it an attribute of individual persons, of aggregates such as communities or nations, of messages, of events, or of some other unit? Units of analysis should be the ones talked about in your theorizing and the ones that are observed and described in empirical work. Inconsistency in the unit of analysis is a common error in communication research.

The concept of image provides a good example of confusion over units of analysis. Practitioners often say that their organization has an image—therefore, defining the term as a property of an organization. Others talk about projecting, creating, polishing, or restoring images. Essentially, they are talking about communicating positive messages about their organization—the unit of analysis is the message. Others talk about images as residing in the minds of their publics—an individual, psychological unit of analysis. Still others define image as what the media say about an organization—so that a content analysis of media stories defines image operationally. Still others lump all these units of analysis together and define image as the “sum total” or “composite” of all of them —a certain problem of adding apples and oranges.
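The confusion over units of analysis can be made visible by typing the data explicitly. In this hypothetical sketch, each competing definition of "image" is a different type with its own measurement, so mixing them into a "sum total" would require adding unlike things.

```python
from dataclasses import dataclass

@dataclass
class Message:
    """Unit of analysis: the message."""
    positive: bool

@dataclass
class IndividualPerception:
    """Unit of analysis: the individual person."""
    favorability: int  # e.g., a 1-7 rating

@dataclass
class MediaStory:
    """Unit of analysis: the media story."""
    tone: int  # e.g., -1 negative, 0 neutral, +1 positive

def share_positive_messages(msgs):
    """'Image' as a property of messages sent."""
    return sum(m.positive for m in msgs) / len(msgs)

def mean_favorability(people):
    """'Image' as a property of individual minds."""
    return sum(p.favorability for p in people) / len(people)
```

Because each function accepts only its own unit, a claim such as "media content measures public opinion" would amount to passing `MediaStory` values where `IndividualPerception` values are required.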

An unclear unit of analysis becomes particularly problematic when the independent variable is defined as a different unit of analysis than the dependent variable, or the operational definitions specify a different unit than the theoretical concepts. This is evident in the assertions of many media monitoring services that maintain that their analyses of media content are measures of public opinion—media messages are defined as measures of collective opinions.

An Example of a Conceptualization of the Public Relations Process

Although other theorists might conceptualize the process differently, I believe that a logical conceptualization of the public relations process states that public relations people manage communication with top managers and with publics (concepts are italicized) to contribute to the strategic decision processes of organizations. They manage communication between management and publics to build relationships with the publics that are most likely to affect the behavior of the organization or who are most affected by the behavior of the organization. Communication processes can be managed (they are independent variables), and processes that facilitate dialogue among managers and publics can also contribute to managing organizational behaviors—although public relations people cannot manage organizational behaviors by themselves. Dialogue among managers and publics, in turn, can produce long-term relationships characterized as communal relationships that result in higher levels of the indicators of the quality of a relationship my students and I (e.g., J. Grunig & Huang, 2000; J. Grunig & Hung, 2002) have identified and defined—trust, mutuality of control, commitment, and satisfaction. Relationships also are affected much more by the behavior of management than by one-way messages sent out by public relations or advertising people.

The independent variables, therefore, are communication activities conducted by public relations departments and management behaviors that result from strategic decisions. The key dependent variable is relationships. Relationships do influence dependent variables farther down the causal chain, such as reputations, images, attitudes, and brands. But these variables also are affected by other variables outside the control of public relations, such as financial markets, the state of the economy, or corporate behaviors over which public relations has little influence.

With this basic understanding of conceptualization, we can now move to develop a conceptual framework for research conducted in the practice of public relations.

In the next chapter: Basic Concepts for Research in Public Relations
See Part 1


Grunig, J. E. (2008). Conceptualizing quantitative research in public relations. In B. Van Ruler, A. Tkalac Verčič, & D. Verčič (Eds.), Public relations metrics (pp. 88–119). New York and London: Routledge. Republished with the permission of the author.


Aldoory, L. (2001). Making health communications meaningful for women: Factors that influence involvement. Journal of Public Relations Research, 13, 163–185.

Berger, B. K. (2005). Power over, power with, and power to public relations: Critical reflections on public relations, the dominant coalition, and activism. Journal of Public Relations Research, 17, 5–28.

Bowen, S. A. (2000). A theory of ethical issues management: Contributions of Kantian deontology to public relations’ ethics and decision making. Unpublished doctoral dissertation, University of Maryland, College Park.

Bowen, S. A. (2004). Expansion of ethics as the tenth generic principle of public relations excellence: A Kantian theory and model for managing ethical issues. Journal of Public Relations Research, 16, 65–92.

Broom, G. M. (1977). Coorientational measurement of public issues. Public Relations Review, 3(4), 110–119.

Broom, G. M., & Dozier, D. M. (1990). Using research in public relations: Applications to program management. Englewood Cliffs, NJ: Prentice-Hall.

Chaffee, S. H. (1996). Thinking about theory. In M. B. Salwen & D. W. Stacks (Eds.), An integrated approach to communication theory & research (pp. 15–32). Mahwah, NJ: Lawrence Erlbaum Associates.

Chang, Y.-C. (2000). A normative exploration into environmental scanning in public relations. Unpublished M.A. thesis, University of Maryland, College Park.

Chen, Y.-R. (2005). Effective government affairs in an era of marketization: Strategic issues management, business lobbying, and relationship management by multinational corporations in China. Unpublished doctoral dissertation, University of Maryland, College Park.

Curtin, P. A., & Gaither, T. K. (2005). Privileging identity, difference, and power: The circuit of culture as a basis for public relations theory. Journal of Public Relations Research, 17, 91–116.

Durham, F. (2005). Public relations as structuration. Journal of Public Relations Research, 17, 29–48.

Ehling, W. P. (1992). Estimating the value of public relations and communication to an organization. In J. E. Grunig (Ed.), Excellence in public relations and communication management (pp. 617–638). Hillsdale, NJ: Lawrence Erlbaum Associates.

Fleisher, C. S. (1995). Public affairs benchmarking. Washington, DC: Public Affairs Council.

Fombrun, C. J. (1996). Reputation: Realizing value from the corporate image. Boston: Harvard Business School Press.

Fombrun, C. J., & Van Riel, C. B. M. (2004). Fame & fortune: How successful companies build winning reputations. Upper Saddle River, NJ: Financial Times/ Prentice-Hall.

Grunig, J. E. (1997). A situational theory of publics: Conceptual history, recent challenges and new research. In D. Moss, T. MacManus, & D. Verčič (Eds.), Public relations research: An international perspective (pp. 3–46). London: International Thomson Business Press.

Grunig, J. E. (2002). Qualitative methods for assessing relationships between organizations and publics. Gainesville, FL: The Institute for Public Relations, Commission on PR Measurement and Evaluation.

Grunig, J. E. (2005). Guia de pesquisa e medição para elaborar e avaliar uma função excelente de relações públicas (A roadmap for using research and measurement to design and evaluate an excellent public relations function). Organicom: Revista Brasileira de Comunicação Organizacional e Relações Públicas (Brazilian Journal of Organizational Communication and Public Relations), 2(2), 47–69.

Grunig, J. E. (2006). Furnishing the edifice: Ongoing research on public relations as a strategic management function. Journal of Public Relations Research, 18, 151–176.

Grunig, J. E., & Grunig, L. A. (1996). Implications of symmetry for a theory of ethics and social responsibility in public relations. Paper presented to the International Communication Association, Chicago (May).

Grunig, J. E., & Grunig, L. A. (2000a). Conceptualization: The missing ingredient of much PR practice and research. Jim and Lauri Grunig’s Research: A Supplement of PR Reporter, 10 (December), 1–4.

Grunig, J. E., & Grunig, L. A. (2000b). Research methods for environmental scanning. Jim and Lauri Grunig’s Research: A Supplement of PR Reporter, 7 (February), 1–4.

Grunig, J. E., & Grunig, L. A. (2001, March). Guidelines for formative and evaluative research in public affairs: A report for the Department of Energy Office of Science. Washington, DC: U.S. Department of Energy.

Grunig, J. E., & Huang, Y. H. (2000). From organizational effectiveness to relationship indicators: Antecedents of relationships, public relations strategies, and relationship outcomes. In J. A. Ledingham & S. D. Bruning (Eds.), Public relations as relationship management: A relational approach to the study and practice of public relations (pp. 23–53). Mahwah, NJ: Lawrence Erlbaum Associates.

Grunig, J. E., & Hung, C. J. (2002). The effect of relationships on reputation and reputation on relationships: A cognitive, behavioral study. Paper presented to the International, Interdisciplinary Public Relations Research Conference, Miami, Florida (March).

Grunig, J. E., & Hunt, T. (1984). Managing public relations. New York: Holt, Rinehart & Winston.

Grunig, L. A., Grunig, J. E., & Dozier, D. M. (2002). Excellent public relations and effective organizations: A study of communication management in three countries. Mahwah, NJ: Lawrence Erlbaum Associates.

Grunig, L. A., Grunig, J. E., & Verčič, D. (1998). Are the IABC’s excellence principles generic? Comparing Slovenia and the United States, the United Kingdom and Canada. Journal of Communication Management, 2, 335–356.

Holtzhausen, D. R., & Voto, R. (2002). Resistance from the margins: The postmodern public relations practitioner as organizational activist. Journal of Public Relations Research, 14, 57–84.

Hon, L. C., & Grunig, J. E. (1999). Guidelines for measuring relationships in public relations. Gainesville, FL: The Institute for Public Relations, Commission on PR Measurement and Evaluation.

Hung, C.-J. (2002). The interplays of relationship types, relationship cultivation, and relationship outcomes: How multinational and Taiwanese companies practice public relations and organization-public relationship management in China. Unpublished doctoral dissertation, University of Maryland, College Park.

Hung, C.-J. (2004). Cultural influence on relationship cultivation strategies: Multinational companies in China. Journal of Communication Management, 8, 264–281.

Jeffries-Fox Associates (2000a). Toward a shared understanding of corporate reputation and related concepts: Phase I: Content analysis. Basking Ridge, NJ: Report prepared for the Council of Public Relations Firms (March 3).

Jeffries-Fox Associates (2000b). Toward a shared understanding of corporate reputation and related concepts: Phase III: Interviews with client advisory committee members. Basking Ridge, NJ: Report prepared for the Council of Public Relations Firms (June 16).


