Embracing evaluation theory to overcome “stasis”: Informing standards, impact and methodology

This paper explores evaluation theory in a field closely related to corporate communication and public relations (PR) as well as in other disciplines and argues that embracing evaluation theory more broadly can break the “stasis” and “deadlock” identified in evaluation of corporate communication and PR. Specifically, this analysis seeks to show that a transdisciplinary approach can contribute to standards and demonstration of impact – two long-sought goals in evaluation of corporate communication and PR – as well as inform methodology.

Introduction – Three key challenges

Measurement and evaluation (M&E), referred to as evaluation for brevity in this discussion, has long been recognized as an important and even essential part of the practice of corporate communication and public relations (PR). In a historical review of the field, Likely and Watson (2013) noted that evaluation has received intensive focus over the past 40 years. In a review of 10 years of data collected for the European Communication Monitor, Tench et al. described evaluation as “the alpha and omega of strategy” (2017, p. 91).

However, despite four decades of research and industry debate, evaluation in the corporate communication and PR field has been described as being in a state of “stasis” (Gregory and Watson, 2008; Macnamara and Zerfass, 2017), or even in a “deadlock” (Macnamara, 2015). Industry journals consistently discuss evaluation as a challenge facing corporate communication and PR practitioners (e.g., Comcowich, 2018).

This critical analysis examines three key issues that are raised in discussion of evaluation and, drawing on a transdisciplinary literature review and a review of frameworks and models for evaluation of corporate communication and PR, presents conclusions and recommendations that offer a contribution to overcoming stasis in the field and advancing evaluation theory and practice.

The first issue examined is what was optimistically described at the 2012 Summit on Measurement hosted by the International Association for Measurement and Evaluation of Communication (AMEC) in Dublin as the “march to standards” (Marklein and Paine, 2012). Standards are applied in many professional fields and industries, as reflected in ISO 9000 standards for quality management and quality assurance (https://www.iso.org/iso-9001-quality-management.html) and the Global Reporting Initiative (GRI) standards for reporting on a range of economic, environmental and social impacts (https://www.globalreporting.org/standards). In corporate communication and PR, proponents of evaluation have advocated standards to help establish and maintain rigor, comparability and transparency. For example, Michaelson and Stacks (2011) reported that more than two-thirds of PR practitioners believe that standards for evaluation are necessary. Michaelson and Stacks noted that standards allow “comparative evaluations” over time and ensure appropriate methods are used (2011, p. 4). However, despite a number of attempts, the so-called ‘march to standards’ has foundered, as will be discussed. Analysis of evaluation literature in corporate communication and PR shows that, rather than moving towards standards, the field has gone down a path of ‘reinventing the wheel’, frequently introducing new terms and measures instead of adopting widely used evaluation frameworks, models and methods.

A second area of concern is that, when evaluation is conducted, studies have shown a short-term and narrow focus on measuring outputs such as media publicity (clips), advertising reach, social media posts, and website and video views, rather than outcomes or impact (Macnamara and Zerfass, 2017; Schriner et al., 2017; Zerfass et al., 2015, p. 71). In particular, corporate communication and PR frequently fail to demonstrate outcomes and impact aligned to organizational objectives (Holtzhausen and Zerfass, 2013; Volk and Zerfass, 2018; Zerfass et al., 2017). Because corporate communication and PR have been unable to reliably identify outcomes and impact in many instances, practitioners have struggled to demonstrate their value. As Buhmann and Likely say, “value assessment is based on research that shows impact” (2018, p. 2). Buhmann et al. reinforce this need, stating: “the common objective of these frameworks is to move far beyond output metrics alone” (2018, p. 115).

Third, the field has been found wanting in terms of methodological rigor, as evidenced by the continuing use of methods such as advertising value equivalents (AVEs) by up to one-third of practitioners (USC and The Holmes Report, 2016); claims of return on investment (ROI) that are “loose and fuzzy” (Watson and Zerfass, 2011, p. 11); and automated ‘black box’ measures that are not transparent (Paine, 2014).

PR theory and models relating to evaluation have also been criticized by some scholars. Critiques include an organization-centric focus on achieving functional and organizational objectives (Macnamara and Gregory, 2018) and a lack of focus on stakeholders’ and societal interests, as recommended by Grunig et al. (2002, 2006) and others. While Zerfass and colleagues have called for PR to align closely to organizational goals and objectives (Tench et al., 2017; Zerfass et al., 2017), this does not obviate the need for alignment to the organization’s stakeholders and operating environment, which corporate communication and PR theory identify as important (Cornelissen, 2011; Grunig et al., 2002).

This paper presents a critical analysis based on a transdisciplinary literature review that offers a contribution to overcoming stasis and deadlock in the field by identifying theoretical and practical pathways towards standards in terminology and approaches, a focus on outcomes and impact, and the use of appropriate and rigorous methodology.

For brevity, the term ‘corporate communication’ will be used in the following discussion to include cognate fields such as communication management, strategic communication and government communication[1]. Also, while acknowledging resistance to the abbreviation by some, public relations will be referred to from here on as PR without any pejorative inference.

The literature on corporate communication and PR evaluation

Given the extensive debate in academic and professional literature over four decades, a detailed literature review of corporate communication and PR evaluation will not be repeated here. Comprehensive references for a history and review of developments in PR evaluation include Gregory and Watson (2008), Likely and Watson (2013), Michaelson and Stacks (2011) and Watson and Noble (2014). A broader review of evaluation of public communication, including advertising and specialist fields such as health communication as well as PR, is provided in Macnamara (2018). Corporate communication evaluation is discussed in a chapter on “research and measurement” in Corporate Communication: A Guide to Theory and Practice (Cornelissen, 2011), although this focuses predominantly on “measuring corporate reputation” (pp. 129–35) and “measuring corporate identity” (pp. 135–38). A study of corporate communication in Italian companies by Invernizzi and Romenti (2009) concluded that “the propagation of summative evaluation and in particular the use of business methods in the professional field to evaluate the results of communication is still very limited” (p. 118). They also noted that corporate communication evaluation is focused largely on quantitative performance measurement methods such as balanced scorecards and return on investment (ROI), even though such methods are reported to be poorly implemented or not applicable in many instances (Watson and Zerfass, 2011). Such comments point to the need for further development of evaluation in corporate communication and PR.

In this analysis, a number of widely published and promoted frameworks and models of evaluation for corporate communication and PR are examined. These are discussed in the following section for the purposes of comparison with theory, frameworks and models in other disciplines, which can inform evaluation of corporate communication and PR. 

Evaluation theory and models – Multidisciplinary perspectives

Multidisciplinarity and transdisciplinarity[2] are increasingly encouraged in scholarship, particularly in the social sciences in which disciplines such as psychology, sociology, cultural studies, phenomenology, and others inform each other as they often address the same problems and issues, but from different perspectives. According to Craig (2018), communication studies is an exemplar of multidisciplinary and, increasingly, transdisciplinary scholarship, as the field has emerged out of studies of rhetoric, speech communication, mass communication, ‘Chicago School’ sociology, American cultural studies, and European literary, critical and cultural studies. Therefore, with corporate communication and PR grounded in the social sciences, and particularly in communication studies, transdisciplinary exploration is a logical and advisable strategy when encountering a problem for which a solution has not been found within a field.

To that end, it is important to recognize that evaluation theory and models originated in the late 1960s in public administration (Suchman, 1967) and in aid programs, such as through the early logical framework approach, also referred to as log frames, of the US Agency for International Development (USAID) (Practical Concepts, 1979). During the 1970s and 1980s a number of evaluation theories, models and methods were produced in international development, public administration, education, health services, and even in neighboring communication fields of practice such as health communication.

Program theory and theory of change

Evaluation theory is grounded in and derived from program theory and theory of change.

Program theory was pioneered by Joseph Wholey (1979, 1983, 1987), a professor of public administration at the University of Southern California for more than 30 years, followed by Peter Rossi and one of his students, Huey Chen, who championed the notion of theory-based evaluation (Chen and Rossi, 1983; Rossi et al., 2004). Other influential figures in developing program theory for evaluation include Leonard Bickman (1987) and Carol Weiss (1972, 1995, 1998).

Wholey (1987) summarizes program theory as that which “identifies program resources, program activities, and intended outcomes, and specifies a chain of causal assumptions linking program resources, activities, intermediate outcomes, and ultimate program goals” (p. 78). In a review, Davidson says that “program theory is simply a description of the mechanism by which a program achieves (or is expected to achieve) its effects” (2007, p. ix). Significantly for the discussion later in this analysis, Davidson also argues that program theory and program theory evaluation (PTE) should not be restricted to measuring outcomes related to goals and objectives but, rather, should openly evaluate all effects, including unintended side effects.

Theory of change emerged from program theory in the mid-1990s as a way of analyzing initiatives that seek to create change and explaining how change is achieved. Development was led by program evaluation researchers and initiatives such as the Aspen Institute Roundtable on Community Change (Anderson, 2005), and continues today through groups such as the Center for Theory of Change in New York (https://www.theoryofchange.org). In simple terms, theory of change focuses on identifying the short-term and mid-term changes that need to be achieved in order to produce longer-term outcomes and impacts or, as one knowledge base on the subject says, “the causal processes through which change comes about” (Shapiro, 2005, para. 3).

While program theory, which incorporates PTE, and theory of change initially were applied to evaluating the delivery of human and social services such as aid, health services and education, this knowledge has been progressively taken up in a number of other fields ranging from agricultural programs to large construction projects and contemporary management. Rossi et al. say that program evaluation based on program theory and theory of change is “useful in virtually all spheres of activity in which issues are raised about the effectiveness of organized social action” and specifically note its relevance for advertising, marketing, and other communication activities (2004, p. 6).

Program logic models

The stages and elements of program theory are commonly explicated in program logic models – graphic illustrations of the processes in a program from setting objectives and pre-program planning through to measurement of outcomes and impact (Wholey, 1979). Widely used examples of program logic models for planning and evaluation are those of the Kellogg Foundation (2004) and the University of Wisconsin Extension program (UWEX) (Taylor-Powell and Henert, 2008, p. 4). The Kellogg Foundation model breaks programs into five key stages referred to as inputs, activities, outputs, outcomes, and impact (see Figure 1).

Figure 1. A basic program logic model (based on Kellogg Foundation, 2004, p. 1).


Some models identify up to seven stages in programs by breaking outcomes into short-, medium- and long-term outcomes (see Figure 2). The UWEX Developing a Logic Model: Teaching and Training Guide notes that “many variations and types of logic models exist” (Taylor-Powell and Henert, 2008, p. 2). The Kellogg Foundation similarly says that “there is no one best logic model” (2004, p. 13). However, program logic models for evaluation share a number of common concepts and principles. As well as identifying the logic of programs (i.e., which inputs and activities lead to the desired outcomes and impact), all program evaluation models emphasize that planning and delivery of programs begin with identification of SMART (specific, measurable, achievable, relevant and time-bound) objectives and progress through stages. The most commonly used stages are described as inputs or resources, activities, outputs, outcomes and impact (see the sketch following Figure 2). Furthermore, program evaluation literature emphasizes that evaluation of outcomes and impact is most important – not simply evaluation of outputs.

Figure 2. Program logic model proposed by Knowlton and Phillips (2013, p. 37) in The Logic Model Guidebook.
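To make this shared staged structure concrete, the following is a minimal sketch in Python of a five-stage logic model with evaluation indicators attached to each stage. It is an illustration only: the class names, the objective and the indicators are hypothetical, not drawn from the Kellogg Foundation or UWEX materials.

```python
from dataclasses import dataclass, field

# The five common stages of a basic program logic model, in causal order.
STAGES = ["inputs", "activities", "outputs", "outcomes", "impact"]

@dataclass
class Stage:
    name: str
    indicators: list[str] = field(default_factory=list)  # how this stage will be measured

@dataclass
class LogicModel:
    objective: str              # a SMART objective the program serves
    stages: dict[str, Stage]

    def evaluation_plan(self) -> list[str]:
        # List indicators stage by stage, keeping output measures
        # distinct from outcome and impact measures.
        return [f"{stage.name}: {indicator}"
                for stage in self.stages.values()
                for indicator in stage.indicators]

# A hypothetical communication program expressed as a logic model.
model = LogicModel(
    objective="Increase stakeholder awareness of program X by 20% within 12 months",
    stages={name: Stage(name) for name in STAGES},
)
model.stages["outputs"].indicators.append("media reach and share of voice")
model.stages["outcomes"].indicators.append("pre/post survey of awareness and attitudes")
model.stages["impact"].indicators.append("sustained behavior change aligned to the objective")
print("\n".join(model.evaluation_plan()))
```

Keeping indicators attached to their stage makes explicit which measures report outputs and which report outcomes or impact – the distinction the literature cited above insists on.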

 


 

Formative, process and summative evaluation

As well as developing program logic models as a framework for evaluation, evaluation theory recognizes three types of evaluation – formative, process and summative (Rice and Atkin, 2013, p. 13). Evaluation theory has long emphasized the importance of formative evaluation, conducted before programs begin, to gain understanding of audience interests, concerns, needs and preferences, as well as to identify baselines for later comparison (e.g., existing awareness, attitudes and behavior). Process evaluation is proposed to inform fine-tuning and adjustment of strategies, as well as to provide progress reporting. Importantly, summative evaluation has been identified as a process for learning and program improvement (Bauman and Nutbeam, 2014), not simply for reporting results and justification. Detailed analysis of program evaluation theory, including program logic models and formative, process and summative evaluation strategies, is available in a number of publications, including Bauman and Nutbeam (2014), Macnamara (2018) and Rossi et al. (2004).

Realist evaluation

Another approach to evaluation, termed realist evaluation, adds two further key considerations relevant to this analysis. Sometimes referred to as ‘realistic’ evaluation – a term that is mostly rejected because it suggests a narrow and simplistic approach focused on practicality – realist evaluation specifically turns attention to stakeholders and social context. Realist evaluation places emphasis on the context of programs and the interests of all ‘actors’ and influencers, using context-mechanism-outcome (CMO) analysis as a methodology for evaluation (Better Evaluation, 2016; Salter and Kothari, 2014).
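To illustrate how CMO analysis structures evaluation evidence, the following minimal sketch represents context-mechanism-outcome configurations as simple records that can be compared across contexts. It assumes a hypothetical internal change-communication program and is an illustration only, not a procedure prescribed by the realist evaluation literature.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CMO:
    context: str    # for whom and in what circumstances the program operates
    mechanism: str  # what it is about the program that generates change
    outcome: str    # the change observed, intended or unintended

# Hypothetical configurations for a fictitious internal change-communication program.
configurations = [
    CMO(context="staff briefed face-to-face by line managers",
        mechanism="trusted two-way channel invites questions",
        outcome="understanding and acceptance of the change"),
    CMO(context="remote staff reached by email only",
        mechanism="one-way messaging leaves questions unanswered",
        outcome="uncertainty and rumor (an unintended outcome)"),
]

# Realist analysis compares which mechanisms 'fire' in which contexts.
for c in configurations:
    print(f"Context: {c.context}\n  Mechanism: {c.mechanism}\n  Outcome: {c.outcome}")
```

The point of the structure is that the same program can produce different outcomes through different mechanisms depending on context – which is why realist evaluation resists single, organization-wide success metrics.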

Health communication evaluation

A field of communication closely related to corporate communication and PR that has adopted advanced forms of evaluation, and from which lessons can be learned, is health communication, also often referred to as health promotion. While planning and evaluation in this field were initially grounded in expectancy value theory and the theory of reasoned action (Fishbein and Ajzen, 1975), which were based on individual rational thinking, contemporary approaches have adopted behavior change communication and, most recently, social and behavior change communication (SBCC), a social ecology model, and culture-centered approaches (Panter-Brick et al., 2006; Dutta et al., 2013). These have embraced techniques from behavioral science, including behavioral economics, and, like the CMO methods of realist evaluation, focus attention on the context of communication, including stakeholders and social and cultural influences.

In implementing evaluation, health communication extensively uses program logic models to plan and conduct formative, process and summative evaluation based on program theory and theory of change. Space does not permit a detailed review of health communication evaluation here, but this can be found in specialist texts such as Bauman and Nutbeam (2014).

Public relations evaluation

In comparison, over the past 40 years corporate communication and PR scholars and practitioners have developed a range of evaluation models that either ignored program evaluation in other disciplines or created their own variations. For example, the Cutlip, Center and Broom (1985) PII model, presented in one of the most widely used PR textbooks through multiple editions, proposed evaluation at three stages called preparation, implementation and impact. While it had merit in bringing attention to evaluation in PR, this model bore little relationship to program evaluation theory, which was well advanced at the time in other disciplines. Another widely cited US model of PR evaluation, developed by Walter Lindenmann (1993), also proposed three stages, but referred to these as outputs, outgrowths and outcomes. UK Institute of Public Relations[3] evaluation consultant Michael Fairchild created what he called his “three measures”, which were output, outtake and outcome (1997, p. 24). Subsequently, Lindenmann (2003) expanded his model to four stages in which he adopted Fairchild’s notion of outtakes, but changed the stages to propose “PR outputs”, “PR outtakes”, “PR outcomes” and “business/organization outcomes” (pp. 4–7). UK evaluation specialists Noble and Watson (1999) also proposed four stages, but called these input, output, impact and effect.

Meanwhile, in Germany the Deutsche Public Relations Gesellschaft (DPRG) – the German Public Relations Association – and the Gesellschaft Public Relations Agenturen (GPRA) – the German Association of Public Relations Agencies – began development of evaluation guidelines in the early 2000s. These developments drew on management theory to incorporate the use of scorecards (Hering, Schuppener and Sommerhalder, 2004), derived from the Balanced Scorecard introduced in business in the 1990s (Kaplan and Norton, 1992), and the business concept of value creation (Lange, 2005). Almost a decade of development involving practitioners and academics culminated in a four-stage model made up of input, output, outcome and outflow, which was published by the DPRG and the International Controller Verein (ICV), a European association of business process controllers (DPRG/ICV, 2009; Huhn et al., 2011, p. 13). This model has become closely associated with the concept of communication controlling that has been promoted, particularly in Germany.

The Integrated Evaluation Framework (IEF) developed by AMEC (2016) has been a significant step forward in evaluation of corporate communication and PR as it provides an interactive online tool, as well as an accompanying taxonomy of metrics and methods that apply at each stage (AMEC, 2019). However, it retains outtakes – the stage introduced by Fairchild (1997) and later adopted by Lindenmann (2003) – to create six stages of evaluation. Also, this model restricts impact to “organizational impact” rather than impact per se – that is, it is focused only on identifying the extent to which an organization achieves its objectives, rather than considering outcomes and impact more broadly, including from the perspective of stakeholders and society.

As recently as 2018, PR scholars published an evaluation logic model listing the stages as inputs, outputs, outtakes, outcomes, and outgrowths (Buhmann et al., 2018). This further deviates from established program evaluation literature by retaining two of the stages invented by PR practitioners (both Fairchild and Lindenmann were practitioners, not academic researchers). This model also reverses the order of outcomes and outgrowths from that proposed by Lindenmann (1993).

To be fair, it has to be noted that the Buhmann et al. (2018) model, despite its inconsistent arrangement of stages of evaluation, does identify that evaluation should be conducted at multiple levels, listing product, campaign, program, society, individual and function as “units of analysis”. This supports and expands on the recommendation of the third and final PR Excellence theory text, which advocated evaluation at (1) program level; (2) functional level (e.g., department or unit); (3) organizational level; and (4) societal level (Grunig et al., 2002, pp. 91–2).

An expanded focus in evaluation that more closely follows evaluation literature in other disciplines – proposing identification of outcomes and impact from the perspective of stakeholders, publics and society as well as the organization, including unintended as well as intended outcomes and impact – is presented in Evaluating Public Communication: Exploring New Models, Standards and Best Practice (Macnamara, 2018, p. 136). As well as being based on the classic program logic model stages of inputs, activities, outputs, outcomes and impact, this model recognizes the internal and external contexts of communication, which may change over time and may necessitate adjustments in strategy, as noted in program theory and theory of change. Furthermore, it illustrates the two-way flow of information from stakeholders, publics and society to the organization, as well as from the organization, and therefore the potential for organizational learning and change as well as learning and change among stakeholders and publics (see Figure 3).

Figure 3. Integrated model of evaluation (Macnamara, 2018).


Nevertheless, most evaluation frameworks and models for corporate communication and PR vary substantially from the evaluation theory, frameworks and models used in other disciplines. First, despite corporate communication and PR theories emphasizing two-way interaction, relationships, engagement, dialogue and mutuality (Grunig et al., 2002, 2006; Taylor and Kent, 2014), most theories and models of evaluation in the field are organization-centric (Macnamara and Gregory, 2018). That is to say, they focus narrowly on the goals and objectives of the organization rather than taking the more open approach recommended by Davidson (2007), Salter and Kothari (2014) and others. Second, few align to or use the program logic models applied in other disciplines, which focus on outcomes and impact beyond activities and outputs. Third, there is no consistent terminology, with a wide range of terms used for key concepts and processes. In short, there are no consistent or even emerging standards informing and guiding practice.

A further illustration of the dislocation between corporate communication and PR and evaluation theory and practice in other fields is that two of the main contemporary books on PR evaluation – Evaluating Public Relations: A Best Practice Guide to Public Relations Planning, Research and Evaluation (Watson and Noble, 2014) and Primer of Public Relations Research (Stacks, 2011) – do not mention program theory, theory of change, or the other relevant theories and concepts discussed in this analysis. Similarly, the chapter on ‘Research and Measurement’ in Corporate Communication: A Guide to Theory and Practice (Cornelissen, 2011) does not refer to evaluation theory developed and applied in other disciplines.

Towards standards, impact and rigorous methodology

Engagement with program theory, program evaluation theory, theory of change, tools such as program logic models, and methods of evaluation widely used in other disciplines can overcome the stasis and deadlock identified in relation to evaluation of corporate communication and PR, and advance this important field of practice in at least three ways. The following discussion seeks to identify how wider transdisciplinary engagement can facilitate standards, help show the value of corporate communication and PR by shifting focus to outcomes and impact, and ensure access to appropriate methodology for such evaluation. In doing so, it opens up new directions for research and theory building in corporate communication and PR, as well as for transformation of practice.

A path to standards for evaluation

Management and organization studies have identified the pathways and key steps in developing standards, or what some refer to as the process of standardization. In broad terms, standards fall into two categories: de jure standards that are enshrined in law or regulation with sanctions for breaches, and de facto standards that are voluntarily adopted, such as best practice principles and codes of practice. Given that corporate communication and PR are not regulated fields of practice under statute, other than in a few countries such as the UK, in which the Chartered Institute of Public Relations (CIPR) operates under a Royal Charter that gives it powers to sanction, voluntary standards are the most widely applicable in corporate communication and PR.

The body of literature on the process of standardization identifies three dimensions that contribute to the regulatory capacity of standards, whether they are de jure or de facto: (1) design, (2) legitimation and (3) monitoring (Slager et al., 2012, p. 765). These dimensions are enabled and supported by three types of what Slager et al. call “standardization work”, which they refer to as calculative framing, engaging, and valorizing (p. 764).

Calculative framing includes defining and denoting, that is, the establishment of common terminology. This is aided by what standards literature refers to as mimicking and analogical work. Slager et al. say that “mimicking of pre-existing templates in the organizational field renders the new practices and standards that are promoted more understandable” (2012, p. 776). It also adds to their credibility. Mimicking in measurement and evaluation also commonly includes adopting existing legitimized indices, which contributes to overall legitimacy as well as credibility. Analogical work refers to highlighting conformity to existing templates or models (i.e., drawing analogies), although customization and innovation can be added (Slager et al., 2012, p. 776).

The second type of standardization work identified by Slager et al. and advocated by others (e.g., Brunsson and Jacobsson, 2000), engaging, involves collaboration with relevant third parties, including experts and other organizations, to access specialist knowledge and gain momentum through partnerships and affiliations. Engagement with experts further contributes to legitimation. The third type of standardization work, valorizing, includes educating potential adopters, promoting standards, and symbolic initiatives such as awarding certificates or accreditation.

All of these types of standardization work are relevant to evaluation of corporate communication and PR. Standards literature clearly suggests that calculative framing through adoption of common terminology, mimicking of and conformity with related disciplines, and engagement with other relevant organizations are important and even necessary steps in establishing credible, workable standards – even though deviation and customization can occur in future iterations as part of innovation. This indicates that corporate communication and PR evaluation scholarship and practice should start with and build on program theory, theory of change, widely used program logic models, frameworks such as realist evaluation, and the theories of behavior change applied in fields such as health communication. Such an approach would bring standardization, including simplification of terminology, credibility, and specialist expertise, to evaluation of corporate communication and PR. Leveraging existing frameworks, models and knowledge would also contribute momentum in place of stasis and deadlock, as it would build on existing theory and models rather than ‘reinventing the wheel’.

Showing value through outcomes and impact

Engaging with transdisciplinary literature on evaluation also would bring with it a shift of focus from activities and outputs, which have been the main concern in evaluation of corporate communication and PR, to outcomes and impact. This, in turn, would inform the search to show the value of corporate communication and PR. As Buhmann and Likely stated: “value assessment is based on research that shows impact” (2018, p. 2).

In doing so, this approach leads to a number of questions about methodology. Evaluation of outcomes and impact requires appropriate methodology and methods – more specifically, methodology that has validity (i.e., it measures what it purports to measure) and reliability in the case of quantitative methods, or credibility, dependability and overall trustworthiness in the case of qualitative methods (Lincoln and Guba, 1985; Shenton, 2004).
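As an illustration of the reliability requirement for quantitative methods, the following minimal sketch computes Cronbach’s alpha, one standard internal-consistency check applied to multi-item survey scales (e.g., attitude items in an outcome survey). It is an example added here for illustration, not a method discussed in the evaluation literature cited; the function and the response data are hypothetical.

```python
from statistics import pvariance

def cronbach_alpha(responses: list[list[float]]) -> float:
    """Internal-consistency reliability of a multi-item scale.

    responses: one inner list per respondent, one value per scale item.
    """
    k = len(responses[0])                          # number of items in the scale
    item_columns = list(zip(*responses))           # transpose: one column per item
    sum_item_vars = sum(pvariance(col) for col in item_columns)
    total_var = pvariance([sum(row) for row in responses])  # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Hypothetical 5-point Likert responses: four respondents, three attitude items.
data = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]]
print(round(cronbach_alpha(data), 2))  # values above ~0.7 are conventionally acceptable
```

The point of such checks is that outcome claims rest on the quality of the instrument: a scale that does not measure consistently cannot demonstrate attitude or behavior change, however favorable the numbers look.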

Developing methodology and capability

Recent research shows that the most common method of evaluation in corporate communication and PR continues to be collecting data on “clippings and media response”, followed by internet use statistics (Zerfass et al., 2015, pp. 71–72). Often, as Grunig and others have noted with concern, these are presented as ‘results’ when in fact they are measures of outputs – part of the “mechanism by which a program achieves (or is expected to achieve) its effects” (Davidson, 2007, p. ix) rather than the effects themselves. Cutlip et al. warned as early as 1985 that in corporate communication and PR “the common error in program evaluation is substituting measures from one level for those at another level” (1985, p. 295). Grunig specifically warned that many practitioners use “a metric gathered at one level of analysis to [allegedly] show an outcome at a higher level of analysis” (2008, p. 89).

To some extent, this substitution error is the result of a lack of knowledge among practitioners (Cutler, 2004; Grunig, 2014). In addition, however, review of the structure and capability of the corporate communication and PR evaluation industry, including professional organizations and service providers, shows that it lacks the expertise necessary for outcome and impact evaluation and has not engaged with a number of third parties that could provide such expertise.

For example, the two main global bodies representing this industry are the Federation Internationale des Bureaux d’Extraits de Presse (FIBEP) and AMEC, both of which are comprised mainly of corporate members that supply evaluation services. While both have rebranded themselves as ‘intelligence’ and ‘insights’ suppliers in recent times, their membership is almost entirely made up of media monitoring and media analysis firms as well as corporate communication and PR agencies. FIBEP’s membership originated, as its name indicates, from press clipping agencies, and even today its website describes its services as “media monitoring, media analysis, PR distribution, journalist databases, as well as consulting services” (https://www.fibepcongress.com). Despite being the leading international body focused specifically on evaluation of ‘communication’, AMEC’s membership does not include any major market or social research companies such as Ipsos, Nielsen, GfK, or Kantar, which now includes the former TNS, Millward Brown, BMRB (the British Market Research Bureau) and IMRB (the Indian Market Research Bureau). Nor does it include major research institutes, of which there are many with an interest in communication and media.

Given that evaluation of outcomes and impact, such as awareness, attitude change and behavior change, requires quantitative and qualitative research such as audience surveys, interviews, focus groups and other social science research methods, the membership of the leading evaluation industry body is not equipped to valorize, educate, or provide services based on appropriate methodology in relation to outcome and impact evaluation.

Furthermore, the major industry bodies have not engaged to any significant extent with a number of expert third parties that could contribute knowledge and add credibility and legitimacy to their endeavors, such as academic researchers or business performance management (BPM) consultancy firms. AMEC appointed an Academic Advisory Group in 2016, ostensibly to advise the organization on evaluation policy and initiatives and expand its thinking about evaluation. However, while some members were involved in development of the AMEC Integrated Evaluation Framework and its accompanying resources, the group has been called on only occasionally for input, and final approval of all AMEC policies and initiatives rests with its board, comprised of representatives from media analysis firms and corporate communication and PR agencies (see https://amecorg.com/about-amec/amec-international-board-members).

Even though AMEC ‘Summits on Measurement’ have at times invited speakers from management consulting firms with expertise in BPM, AMEC’s membership and affiliations do not include any of the major consulting firms that increasingly advise organizations on performance management and performance measurement, such as PricewaterhouseCoopers, KPMG, Deloitte, or the Boston Consulting Group (BCG). Some may argue that these firms are outside the field of corporate communication and PR evaluation. Commercial trends show that such arguments are out of date. In 2018, the European Commission contracted a consortium led by Deloitte in partnership with international development firm Coffey International, supported by survey research firm Ipsos and a university, to conduct evaluation of pan-European corporate communication campaigns – one of the biggest contracts for evaluation awarded by the EC.

The chair of AMEC in 2018–2019, Richard Bagnall, is aware of the limitations of AMEC’s membership and its lack of engagement with the broader research industry, and he sees an expansion of membership and thinking as necessary in the industry’s peak bodies (R. Bagnall, personal communication, 27 August 2018). The retirement in 2018 of the long-standing and respected CEO of AMEC, Barry Leggetter, afforded an opportunity for change. However, despite being highly regarded within AMEC and the US PR evaluation field, the newly appointed managing director, Johna Burke, also comes from a traditional PR and media background, most recently serving as chief marketing officer of BurrellesLuce. BurrellesLuce describes itself on its website as “a provider of PR services to PR agencies, corporate communicators, and media relations managers” (BurrellesLuce, 2018), with most of those services related to media relations, media monitoring and media content analysis.


Conclusions

This critical analysis acknowledges the substantial efforts made by many academic researchers, practitioners, and industry bodies in a number of countries to develop evaluation for corporate communication and PR. However, it has pointed out major disparities between evaluation theory, models and approaches in corporate communication and PR, on the one hand, and, on the other, major fields of practice such as development communication and health communication, as well as public administration and management, in which a body of knowledge has been developed and applied more extensively and more effectively.

This leads to a conclusion that evaluation of corporate communication and PR could be advanced considerably by engaging with evaluation literature in other disciplines. Specifically, such engagement could contribute to standards through adoption of common terminology, legitimized indices, and conformity with existing templates and models. Standards would, in turn, bring simplification, legitimacy, and credibility to evaluation of corporate communication and PR. This is not to rule out a case for innovation and customization for specific corporate communication and PR activities. But theory building should begin from the base of existing knowledge. Furthermore, contemporary scholarship advocates interdisciplinarity and transdisciplinarity (Bernstein, 2015).

Second, it can be concluded that as long as evaluation of corporate communication and PR is focused on measuring activities and outputs such as media publicity and website clicks, the field will be unable to demonstrate value. In arranging activities such as events and producing outputs such as media releases and website content, corporate communication and PR function as a cost center in an organization. Only when they can provide evidence of outcomes and impact can they lay claim to being a value-adding function. As well as contributing to standards, engagement with the body of knowledge in other disciplines would bring an important shift in focus from activities and outputs to the outcomes and impact of corporate communication and PR.

Third, it can be concluded from analysis of the structure of the professional bodies ostensibly defining and valorizing evaluation of corporate communication and PR that they are mostly ill-equipped to promote and facilitate evaluation of outcomes and impact. This suggests that industry restructuring is required to access and apply the expertise necessary to evaluate outcomes and impact using valid and rigorous quantitative and qualitative methods. Restructuring could occur through expanded membership of industry bodies, partnerships, mergers with other organizations with complementary expertise, or advisory boards with active, meaningful roles. Expertise is available among academic researchers, who can advise on methodology, and among social and market research firms and business performance and management consultancies, which can conduct audience surveys, depth interviews, focus groups, ethnographic studies, and specialist research such as behavioral insights, cost-benefit analysis, cost-effectiveness analysis, and attribution modelling.

These conclusions open up new directions for research, which could include transdisciplinary studies, exploration of case studies in other fields, and collaborative studies with evaluation specialists in other disciplines. Such research will substantially expand theory building in relation to evaluation of corporate communication and PR. These contributions, if complemented by structural change in the industry to expand capabilities, have the potential to overcome stasis and transform practice.

References

AMEC (Association for Measurement and Evaluation of Communication). (2016), “Integrated evaluation framework”, London, available at: http://amecorg.com/amecframework (accessed 1 March 2019).

AMEC (Association for Measurement and Evaluation of Communication). (2019), “A taxonomy of evaluation: Towards standards”, available at: https://amecorg.com/amecframework/home/supporting-material/taxonomy (accessed 8 March 2019)

Anderson, A. (2005), The Community Builder’s Approach to Theory of Change: A Practical Guide to Theory and Development, The Aspen Institute Roundtable on Community Change, New York, NY.

Bauman, A. and Nutbeam, D. (2014), Evaluation in a Nutshell: A Practical Guide to the Evaluation of Health Promotion Programs, 2nd ed., McGraw-Hill, North Ryde, NSW.

Bernstein, J. (2015), “Transdisciplinarity: A review of its origins, development, and current issues”, Journal of Research Practice, Vol. 11 No. 1, Article R1, available at: http://jrp.icaap.org/index.php/jrp/article/view/510/412 (accessed 1 March 2019).

Better Evaluation. (2016), “Realist evaluation”, available at: http://betterevaluation.org/approach/realist_evaluation (accessed 1 March 2019).

Bickman, L. (1987), “The functions of program theory”, New Directions for Program Evaluation, No. 33, pp. 5–18.

Brunsson, N. and Jacobsson, B. (Eds) (2000), A World of Standards, Oxford University Press, Oxford, UK.

Bryant, J. and Oliver, M. (2009), Media Effects: Advances in Theory and Research, 3rd ed., Routledge, Abingdon, UK.

Buhmann, A. and Likely, F. (2018), “Evaluation and measurement”, in Heath, R. and Johansen, W. (Eds), The International Encyclopedia of Strategic Communication, John Wiley & Sons, Malden, MA. DOI: https://doi.org/10.1002/9781119010722.iesc0103

Buhmann, A., Likely, F. and Geddes, D. (2018), “Communication evaluation and measurement: Connecting research to practice”, Journal of Communication Management, Vol. 22 No. 1, pp. 113–19. DOI: https://doi.org/10.1108/JCOM-12-2017-0141

BurrellesLuce (2018), “Company overview”, available at: https://www.burrellesluce.com/company (accessed 1 February 2019).

Chen, H. and Rossi, P. (1983). “Evaluating with sense: The theory-driven approach”, Evaluation Review, Vol. 7 No. 3, pp. 283–302. DOI: https://doi.org/10.1177/0193841X8300700301

Comcowich, W. (2018), “PR metrics and analytics: 10 trends to watch in 2019”, PR Daily, 30 November, available at: https://www.prdaily.com/pr-metrics-and-analytics-10-trends-to-watch-in-2019

Cornelissen, J. (2011), Corporate Communication: A Guide to Theory and Practice, Sage, London, UK.

Craig, R. (2018), “For a practical discipline”, Journal of Communication, Vol. 68 No. 2, pp. 289–97. DOI: https://doi.org/10.1093/joc/jqx013

Cutler, A. (2004), “Methodical failure: The use of case study method by public relations researchers”, Public Relations Review, Vol. 30 No. 3, pp. 365–375. DOI: https://doi.org/10.1016/j.pubrev.2004.05.008

Cutlip, S., Center, A. and Broom, G. (1985), Effective Public Relations, 6th ed., Prentice-Hall, Englewood Cliffs, NJ.

Davidson, J. (2007), “The ‘baggaging’ of theory-based evaluation”, Journal of Multidisciplinary Evaluation, Vol. 3 No. 4, pp. iii–xiii, available at: http://journals.sfu.ca/jmde/index.php/jmde_1/issue/archive (accessed 9 April 2019).

DPRG/ICV (Deutsche Public Relations Gesellschaft and International Controller Verein). (2009). “DPRG/ICV framework for communication controlling”, available at: http://www.communicationcontrolling.de/index.php?id=280&type=98&tx_ttnews[tt_news]=&L=3 (accessed 1 September 2018).

Dutta, M., Anaele, A. and Jones, C. (2013), “Voices of hunger: Addressing health disparities through the culture-centered approach”, Journal of Communication, Vol. 63 No. 1, pp. 159–180. DOI: https://doi.org/10.1111/jcom.12009

Fairchild, M. (1997), How to Get Real Value from Public Relations, ICO, London, UK.

Fishbein, M. and Ajzen, I. (1975), Belief, Attitude, Intention, and Behaviour: An Introduction to Theory and Research, Addison-Wesley, Reading, MA.

Gregory, A. and Watson, T. (2008), “Defining the gap between research and practice in public relations programme evaluation – towards a new research agenda”, Journal of Marketing Communications, Vol. 14 No. 5, pp. 337–50.

Grunig, J. (2008), “Conceptualizing quantitative research in public relations”, in van Ruler, B., Tkalac Vercic, A. and Vercic, D. (Eds.), Public Relations Metrics: Research and Evaluation, Routledge, New York, NY, pp. 88–119.

Grunig, J. (2014), “Thought leaders in PR measurement”, Communication Controlling, available at: http://www.communicationcontrolling.de/index.php?id=302&L=3 (accessed 8 April 2019).

Grunig, J., Grunig, L. and Dozier, D. (2006), “The excellence theory”, in Botan, C. and Hazelton, V. (Eds), Public Relations Theory II, Lawrence Erlbaum, Mahwah, NJ, pp. 21–62.

Grunig, L., Grunig, J. and Dozier, D. (2002), Excellent Public Relations and Effective Organizations: A Study of Communication Management in Three Countries, Lawrence Erlbaum, Mahwah, NJ.

Hering, R., Schuppener, B. and Sommerhalder, M. (2004), Die Communication Scorecard, Haupt, Bern, Switzerland.

Holtzhausen, D. and Zerfass, A. (2013), “Strategic communication: Pillars and perspectives of an alternative paradigm”, in Zerfass, A., Rademacher, L. and Wehmeier, S. (Eds), Organisationskommunikation und Public Relations: Forschungsparadigmen und neue Perspektiven, Springer, Wiesbaden, Germany, pp. 73–94.

Huhn, J., Sass, J. and Storck, C. (2011), “Communication controlling: How to maximize and demonstrate the value creation through communication”, German Public Relations Association (DPRG), Berlin, Germany, available at: http://www.communicationcontrolling.de/fileadmin/communicationcontrolling/sonst_files/Position_paper_DPRG_ICV_2011_english.pdf (accessed 8 March 2019).

Invernizzi, E. and Romenti, S. (2009), “Institutionalization and evaluation of corporate communication in Italian companies”, International Journal of Strategic Communication, Vol. 3, No. 2, pp. 116–30. DOI: https://doi.org/10.1080/15531180902810197  

Kaplan, R. and Norton, D. (1992), “The Balanced Scorecard: Measures that drive performance”, Harvard Business Review, January-February, pp. 71–9.

Kellogg Foundation (2004), Logic Model Development Guide, Battle Creek, MI, available at: https://www.wkkf.org/resource-directory/resources/2004/01/logic-model-development-guide  (accessed 20 November 2019).

Knowlton, L. and Phillips, C. (2013), The Logic Model Guidebook: Better Strategies for Great Results, 2nd ed., Sage, Thousand Oaks, CA.

Lange, M. (2005), “Das Communications Value System der GPRA”, in Pfannenberg, J. and Zerfass, A. (Eds), Wertschöpfung durch Kommunikation: Wie Unternehmen den Erfolg ihrer Kommunikation steuern und bilanzieren, Allgemeine Buch, Frankfurt, Germany, pp. 199–211.

Likely, F. and Watson, T. (2013), “Measuring the edifice: Public relations measurement and evaluation practice over the course of 40 years”, in Sriramesh, K., Zerfass, A. and Kim, J. (Eds), Public Relations and Communication Management: Current Trends and Emerging Topics, Routledge, New York, NY, pp. 143–62.

Lincoln, Y. and Guba, E. (1985), Naturalistic Inquiry, Sage, Beverly Hills, CA.

Lindenmann, W. (1993), “An ‘effectiveness yardstick’ to measure public relations success”, Public Relations Quarterly, Vol. 38 No. 1, pp. 7–9.

Lindenmann, W. (2003), “Guidelines for measuring the effectiveness of PR programs and activities”, Institute for Public Relations, Gainesville, FL, available at: http://www.instituteforpr.org/wp-content/uploads/2002_MeasuringPrograms.pdf (accessed 1 March 2019).

Macnamara, J. (2015), “Overcoming the measurement and evaluation deadlock: A new approach and model”, Journal of Communication Management, Vol. 19 No. 4, pp. 371–387. DOI: https://doi.org/10.1108/JCOM-04-2014-0020

Macnamara, J. (2018), Evaluating Public Communication: Exploring New Models, Standards, and Best Practice. Routledge, Abingdon, UK.

Macnamara, J. and Gregory, A. (2018), “Expanding evaluation to enable true strategic communication: Beyond message tracking to open listening”, International Journal of Strategic Communication, Vol. 12 No. 4, pp.  469–486. DOI: https://doi.org/10.1080/1553118X.2018.1450255

Macnamara J. and Zerfass, A. (2017), “Evaluation stasis continues in PR and corporate communication: Asia Pacific insights into causes”, Communication Research and Practice, Vol. 3 No. 4, pp. 319–334. DOI: http://dx.doi.org/10.1080/22041451.2017.1275258

Marklein, T. and Paine, K. (2012), “The march to standards”, presentation to the 4th European Summit on Measurement, Dublin, Ireland, June, available at: http://amecorg.com/downloads/dublin2012/The-March-to-Social-Standards-Tim-Marklein-and-Katie-Paine.pdf (accessed 1 March 2019).

Michaelson, D. and Stacks, D. (2011), “Standardization in public relations measurement and evaluation”, Public Relations Journal, Vol. 5 No. 2, pp. 1–22, available at: http://apps.prsa.org/Intelligence/PRJournal/past-editions/Vol5 No2 (accessed 1 February 2019).

Noble, P. and Watson, T. (1999), “Applying a unified public relations evaluation model in a European context”, paper presented to the Transnational Communication in Europe: Practice and Research Congress, Berlin, Germany.

Paine, K. (2014), “The worst new thing in measurement: Ogilvy Australia’s black box of 70 measurement tools”, The Measurement Adviser, 9 July, available at: http://painepublishing.com/measurementadvisor/the-worst-new-thing-in-measurement-ogilvy-australias-black-box-of-70-measurement-tools (accessed 1 February 2019).

Panter-Brick, C., Clarke, S., Lomas, H., Pinder, M. and Lindsay, S. (2006), “Culturally compelling strategies for behaviour change: A social ecology model and case study in malaria prevention”, Social Science & Medicine, Vol. 62, pp. 2810–2825. DOI: https://doi.org/10.1016/j.socscimed.2005.10.009

Practical Concepts. (1979), The Logical Framework. A Manager’s Guide to a Scientific Approach to Design and Evaluation, Washington, DC, available at: http://pdf.usaid.gov/pdf_docs/pnabn963.pdf (accessed 20 October 2018).

Rice, R. and Atkin, C. (Eds) (2013), Public Communication Campaigns, 4th ed., Sage, Thousand Oaks, CA.

Rossi, P., Lipsey, M. and Freeman, H. (2004), Evaluation: A Systematic Approach, 7th ed., Sage, Thousand Oaks, CA.

Salter, K. and Kothari, A. (2014), “Using realist evaluation to open the black box of knowledge translation: A state-of-the-art review”, Implementation Science, Vol. 9 No. 115, pp. 1–14.

Schriner, M. Swenson, R. and Gilkerson, N. (2017), “Outputs or outcomes? Assessing public relations evaluation practices in award-winning PR campaigns”, Public Relations Journal, Vol. 11, No. 1, pp. 1–15, available at: https://prjournal.instituteforpr.org/past-issues (accessed 8 April 2019).

Shapiro, I. (2005), “Theories of change”, in Burgess, G. and Burgess, H. (Eds), Beyond Intractability Knowledge Base, Conflict Information Consortium, University of Colorado, Boulder, CO, available at: http://www.beyondintractability.org/essay/theories-of-change (accessed 9 April 2019).

Shenton, A. (2004), “Strategies for ensuring trustworthiness in qualitative research projects”, Education for Information, Vol. 22 No. 2, pp. 63–75. DOI: https://doi.org/10.3233/EFI-2004-22201

Slager, R., Gond, J. and Moon, J. (2012), “Standardization as institutional work: The regulatory power of a responsible investment standard”, Organization Studies, Vol. 33 No. 5/6, pp. 763–90. DOI: https://doi.org/10.1177/0170840612443628

Stacks, D. (2011), Primer of Public Relations Research, 2nd ed., Guilford Press, New York, NY.

Suchman, E. (1967), Evaluative Research: Principles and Practice in Public Service and Social Action Programs, Russell Sage Foundation, New York, NY.

Taylor, M. and Kent, M. (2014), “Dialogic engagement: Clarifying foundational concepts”, Journal of Public Relations Research, Vol. 26 No. 5, pp. 384–98. DOI: https://doi.org/10.1080/1062726X.2014.956106

Taylor-Powell, E. and Henert, E. (2008), Developing a Logic Model: Teaching and Training Guide, University of Wisconsin Extension, available at: https://fyi.uwex.edu/programdevelopment/files/2016/03/lmguidecomplete.pdf (accessed 1 September 2018).

Tench, R., Verčič, D., Zerfass, A., Moreno, A. and Verhoeven, P. (2017), Communication Excellence. How to Develop, Manage and Lead Exceptional Communications, Palgrave Macmillan, London, UK.

USC (University of Southern California) Annenberg Center for Public Relations and The Holmes Report (2016), Global Communications Report 2016, available at: https://annenberg.usc.edu/research/center-public-relations/reports/global-communications-report-2016 (accessed 1 February 2019).

Volk, S. and Zerfass, A. (2018), “Alignment: Explicating a key concept in strategic communication”, International Journal of Strategic Communication, Vol. 12 No. 3, pp. 433–51. DOI: https://doi.org/10.1080/1553118X.2018.1452742

Watson, T. (2012), “The evolution of public relations measurement and evaluation”, Public Relations Review, Vol 38 No. 3, pp. 390–98. DOI: https://doi.org/10.1016/j.pubrev.2011.12.018

Watson, T. and Noble, P. (2014), Evaluating Public Relations: A Best Practice Guide to Public Relations Planning, Research and Evaluation, 3rd edn, Kogan Page, London and Philadelphia, PA.

Watson, T. and Zerfass, A. (2011), “Return on investment in public relations: A critique of concepts used by practitioners from communication and management sciences perspectives”, Prism, Vol. 8 No. 1, pp. 1–14, available at http://www.prismjournal.org/vol8_1.html (accessed 1 March 2019).

Weiss, C. (1972), Evaluative Research: Methods of Assessing Program Effectiveness, Prentice Hall, Englewood Cliffs, NJ.

Wholey, J. (1979), Evaluation: Promise and Performance, Urban Institute Press, Washington, DC.

Wholey, J. (1983), Evaluation and Effective Public Management, Little Brown & Co, Boston, MA.

Wholey, J. (1987), “Evaluability assessment: Developing program theory”, New Directions for Program Evaluation, No. 33, pp. 77–92.

Zerfass, A., Verčič, D., Verhoeven, P., Moreno, A., & Tench, R. (2015), European Communication Monitor 2015, European Association of Communication Directors (EACD), European Public Relations Education and Research Association (EUPRERA), and Helios Media, Brussels, Belgium, available at: http://www.communicationmonitor.eu (accessed 8 April 2019).

Zerfass, A., Moreno, A., Tench, R., Verčič, D. and Verhoeven, P. (2017), European Communication Monitor 2017, European Public Relations Education and Research Association (EUPRERA), and Quadriga Media, Berlin, Germany, available at: http://www.communicationmonitor.eu (accessed 8 April 2019).

Citation:

Macnamara, J. (2020), “Embracing evaluation theory to overcome ‘stasis’: Informing standards, impact and methodology”, Corporate Communications: An International Journal, Vol. 25 No. 2, pp. 339–354. https://doi.org/10.1108/CCIJ-04-2019-0044. Republished with the permission of the author.

[1]     While some define corporate communication as applying only to corporations, many use the term more broadly to refer to all types of organized bodies (corpora). For example, the European Commission refers to its major campaigns to promote the EU as ‘corporate communication’.

[2]     Multidisciplinarity refers to drawing on and incorporating the perspectives of two or more disciplines and discussing these in parallel or in contrast. Transdisciplinarity refers to scholarship that draws on multiple disciplines and, rather than presenting their respective views, synthesizes them to form perspectives that are transformative and emergent, leading to knowledge beyond that of any participating discipline (Bernstein, 2015).

[3]     Now the Chartered Institute of Public Relations (CIPR).

***

Jim Macnamara is a Distinguished Professor in the School of Communication within the Faculty of Arts & Social Sciences at the University of Technology Sydney (UTS). He is currently serving as Deputy Dean of the UTS Faculty of Science.

He is also a Visiting Professor at The London School of Economics and Political Science (LSE), Media and Communications Department, and a Visiting Professor at the London College of Communication (LCC) in the University of the Arts London (UAL).

Jim is internationally recognised for his research into evaluation of public communication including advertising, public relations, and marketing, corporate, and government communication to identify effectiveness and inform strategy, and his research into organisational listening has been described as "of major international significance". He also has conducted extensive research into health communication, including for the Cancer Institute NSW, the NSW Department of Health, the NSW Multicultural Health Communication Service, and he has led global evaluation research for the World Health Organization (WHO) during the COVID-19 pandemic and for World Health Days in 2020 and 2021.

His research has had substantial industry, professional, and social impact, including adoption of his evaluation framework for communication by the NSW Government, the International Association for Measurement and Evaluation of Communication (AMEC), and the World Health Organization (WHO), and his recommendations on evaluation and organisational listening have been adopted by the UK Government Communication Service (GCS) in the Cabinet Office, Whitehall, and the European Commission Directorate-General for Communication (DG COMM).

In 2017 he was presented with The Pathfinder Award, "the highest academic honor" awarded by the Institute for Public Relations (IPR) in the USA for his scholarly research, and The Don Bartholomew Award for "outstanding service to the communications industry" by the London-based International Association for Measurement and Evaluation of Communication (AMEC).

Jim is the author of more than 80 academic journal articles and book chapters, a number of influential research reports, and 16 books including, most recently, 'Beyond Post-Communication: Challenging Disinformation, Deception, and Manipulation' (Peter Lang, New York, 2020); 'Evaluating Public Communication: New Models, Standards, and Best Practice' (Routledge, UK, 2018); 'Organizational Listening: The Missing Link in Public Communication' (Peter Lang, New York, 2016); and 'The 21st Century Media (R)evolution: Emergent Communication Practices' (Peter Lang, New York, 2014).